model_id | model_card | model_labels
---|---|---|
Zagarsuren/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1290
- Accuracy: 0.5523
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.2513 | 1.0 | 35 | 1.2502 | 0.4862 |
| 1.1966 | 2.0 | 70 | 1.2707 | 0.4936 |
| 1.2163 | 3.0 | 105 | 1.1919 | 0.5468 |
| 1.098 | 4.0 | 140 | 1.1604 | 0.5284 |
| 1.1705 | 5.0 | 175 | 1.1649 | 0.5193 |
| 1.0887 | 6.0 | 210 | 1.1538 | 0.5450 |
| 1.1253 | 7.0 | 245 | 1.1397 | 0.5486 |
| 1.0148 | 8.0 | 280 | 1.1312 | 0.5505 |
| 1.1047 | 9.0 | 315 | 1.1376 | 0.5486 |
| 1.1108 | 9.7299 | 340 | 1.1290 | 0.5523 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
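### Inference example (sketch)
The card does not include a usage example. A minimal sketch with the 🤗 `pipeline` API, assuming the checkpoint loads as a standard image-classification model (`example.png` is a placeholder path):
```python
from transformers import pipeline

# Hypothetical usage sketch: load the fine-tuned checkpoint and
# classify a single image; label names come from the model config.
classifier = pipeline(
    "image-classification",
    model="Zagarsuren/swin-tiny-patch4-window7-224-finetuned-eurosat",
)
print(classifier("example.png"))  # placeholder path
```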
|
[
"atelectasis",
"cardiomegaly",
"no finding",
"nodule",
"pneumothorax"
] |
Granitagushi/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1893
- Accuracy: 0.9405
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3976 | 1.0 | 370 | 0.2921 | 0.9364 |
| 0.2273 | 2.0 | 740 | 0.2257 | 0.9445 |
| 0.1742 | 3.0 | 1110 | 0.2102 | 0.9445 |
| 0.1352 | 4.0 | 1480 | 0.2023 | 0.9459 |
| 0.1326 | 5.0 | 1850 | 0.2006 | 0.9459 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
## Zero-shot classification model
This section compares the performance of a zero-shot model (`openai/clip-vit-large-patch14`) on the Oxford Pets dataset (`pcuenq/oxford-pets`).
- **Model used**: `openai/clip-vit-large-patch14`
- **Dataset**: `pcuenq/oxford-pets` (train split)
- **Evaluation Task**: Zero-Shot Image Classification
- **Candidate Labels**: 37 pet breeds from the dataset
### Results:
Zero-shot evaluation with CLIP:
- **Accuracy**: 0.8800
- **Precision**: 0.8768
- **Recall**: 0.8800
Evaluated using Hugging Face `transformers` pipeline and `sklearn.metrics` on the full training set.
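A sketch of how this zero-shot evaluation could be reproduced; the dataset column names (`image`, `label`) are assumptions, not taken from this card:
```python
from datasets import load_dataset
from sklearn.metrics import accuracy_score, precision_score, recall_score
from transformers import pipeline

dataset = load_dataset("pcuenq/oxford-pets", split="train")
labels = sorted(set(dataset["label"]))  # 37 breed names (assumed column name)

clf = pipeline("zero-shot-image-classification", model="openai/clip-vit-large-patch14")

y_true, y_pred = [], []
for example in dataset:  # slow on the full split; subsample to test
    result = clf(example["image"], candidate_labels=labels)
    y_true.append(example["label"])
    y_pred.append(result[0]["label"])  # top-scoring candidate label

print("Accuracy:", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred, average="weighted", zero_division=0))
print("Recall:", recall_score(y_true, y_pred, average="weighted", zero_division=0))
```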
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
ricardoSLabs/mozilla_dataset_processed_mel_spec_vit_1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mozilla_dataset_processed_mel_spec_vit_1
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4348
- Accuracy: 0.94
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7532 | 1.0 | 11 | 0.6034 | 0.6367 |
| 0.4888 | 2.0 | 22 | 0.2861 | 0.9033 |
| 0.2919 | 3.0 | 33 | 0.2482 | 0.92 |
| 0.1771 | 4.0 | 44 | 0.2018 | 0.9233 |
| 0.1011 | 5.0 | 55 | 0.2074 | 0.9233 |
| 0.0563 | 6.0 | 66 | 0.2219 | 0.9367 |
| 0.0251 | 7.0 | 77 | 0.2835 | 0.9333 |
| 0.0041 | 8.0 | 88 | 0.3132 | 0.9367 |
| 0.001 | 9.0 | 99 | 0.4014 | 0.94 |
| 0.0 | 10.0 | 110 | 0.4260 | 0.9433 |
| 0.0 | 11.0 | 121 | 0.4316 | 0.94 |
| 0.0 | 12.0 | 132 | 0.4329 | 0.94 |
| 0.0 | 13.0 | 143 | 0.4327 | 0.9433 |
| 0.0 | 14.0 | 154 | 0.4334 | 0.94 |
| 0.0 | 15.0 | 165 | 0.4339 | 0.94 |
| 0.0 | 16.0 | 176 | 0.4340 | 0.94 |
| 0.0 | 17.0 | 187 | 0.4344 | 0.94 |
| 0.0 | 18.0 | 198 | 0.4346 | 0.94 |
| 0.0 | 19.0 | 209 | 0.4347 | 0.94 |
| 0.0 | 20.0 | 220 | 0.4348 | 0.94 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
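The card does not document how the audio was converted to mel-spectrogram images, so any inference pipeline has to guess at the preprocessing. A rough sketch using `librosa` (the rendering details are assumptions and may not match training; `clip.wav` is a placeholder path):
```python
import io

import librosa
import librosa.display
import matplotlib.pyplot as plt
from PIL import Image
from transformers import pipeline

# Hypothetical preprocessing: render a mel spectrogram to a PNG image.
y, sr = librosa.load("clip.wav", sr=16000)  # placeholder path, assumed rate
mel_db = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr))

fig, ax = plt.subplots()
librosa.display.specshow(mel_db, sr=sr, ax=ax)
ax.set_axis_off()
buf = io.BytesIO()
fig.savefig(buf, format="png", bbox_inches="tight", pad_inches=0)
plt.close(fig)
buf.seek(0)

clf = pipeline("image-classification", model="ricardoSLabs/mozilla_dataset_processed_mel_spec_vit_1")
print(clf(Image.open(buf).convert("RGB")))
```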
|
[
"female",
"male"
] |
emmahuan28/iemocap
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7"
] |
prithivMLmods/Emoji-Scope
|

# **Emoji-Scope**
> **Emoji-Scope** is an image classification vision-language encoder model fine-tuned from **google/siglip2-base-patch16-224** for a single-label classification task. It is designed to classify emoji images into different style categories using the **SiglipForImageClassification** architecture.
```py
Classification Report:
precision recall f1-score support
Apple Style 0.9336 0.8538 0.8919 725
DoCoMo Style 0.9130 0.8400 0.8750 100
Facebook Style 0.8713 0.8915 0.8813 691
Gmail Style 0.8289 0.8750 0.8514 288
Google Style 0.8725 0.9505 0.9098 727
JoyPixels Style 0.8960 0.9614 0.9276 726
KDDI Style 0.9444 0.9333 0.9389 255
Samsung Style 0.9584 0.9681 0.9632 690
SoftBank Style 0.8407 0.8053 0.8226 190
Twitter Style 0.9939 0.8900 0.9390 727
Windows Style 0.9949 0.9949 0.9949 583
accuracy 0.9200 5702
macro avg 0.9134 0.9058 0.9087 5702
weighted avg 0.9222 0.9200 0.9201 5702
```

The model categorizes images into eleven emoji styles:
- **Class 0:** "Apple Style"
- **Class 1:** "DoCoMo Style"
- **Class 2:** "Facebook Style"
- **Class 3:** "Gmail Style"
- **Class 4:** "Google Style"
- **Class 5:** "JoyPixels Style"
- **Class 6:** "KDDI Style"
- **Class 7:** "Samsung Style"
- **Class 8:** "SoftBank Style"
- **Class 9:** "Twitter Style"
- **Class 10:** "Windows Style"
# **Run with Transformers🤗**
```python
!pip install -q transformers torch pillow gradio
```
```python
import gradio as gr
from transformers import AutoImageProcessor
from transformers import SiglipForImageClassification
from transformers.image_utils import load_image
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/Emoji-Scope"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
def emoji_classification(image):
    """Predicts the style category of an emoji image."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
    probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
    labels = {
        "0": "Apple Style",
        "1": "DoCoMo Style",
        "2": "Facebook Style",
        "3": "Gmail Style",
        "4": "Google Style",
        "5": "JoyPixels Style",
        "6": "KDDI Style",
        "7": "Samsung Style",
        "8": "SoftBank Style",
        "9": "Twitter Style",
        "10": "Windows Style"
    }
    predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}
    return predictions

# Create Gradio interface
iface = gr.Interface(
    fn=emoji_classification,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(label="Prediction Scores"),
    title="Emoji Style Classification",
    description="Upload an emoji image to classify its style."
)

# Launch the app
if __name__ == "__main__":
    iface.launch()
```
# **Intended Use:**
The **Emoji-Scope** model is designed to classify emoji images based on different style categories. Potential use cases include:
- **Emoji Standardization:** Identifying different emoji styles across platforms.
- **User Experience Design:** Helping developers ensure consistency in emoji usage.
- **Digital Art & Design:** Assisting artists in selecting preferred emoji styles.
- **Educational Purposes:** Teaching differences in emoji representation.
|
[
"apple style",
"docomo style",
"facebook style",
"gmail style",
"google style",
"joypixels style",
"kddi style",
"samsung style",
"softbank style",
"twitter style",
"windows style"
] |
ricardoSLabs/RAVDESS_Speaker_Id_spec_dataset_vit_1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RAVDESS_Speaker_Id_spec_dataset_vit_1
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8945
- Accuracy: 0.6898
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 8 | 3.2704 | 0.0648 |
| 3.565 | 2.0 | 16 | 2.9777 | 0.1204 |
| 2.9674 | 3.0 | 24 | 2.4613 | 0.2963 |
| 2.2864 | 4.0 | 32 | 1.9951 | 0.4028 |
| 1.5685 | 5.0 | 40 | 1.6858 | 0.5 |
| 1.5685 | 6.0 | 48 | 1.4689 | 0.5509 |
| 1.0277 | 7.0 | 56 | 1.3074 | 0.6389 |
| 0.6389 | 8.0 | 64 | 1.2081 | 0.6713 |
| 0.3924 | 9.0 | 72 | 1.1233 | 0.6944 |
| 0.2291 | 10.0 | 80 | 1.0603 | 0.6991 |
| 0.2291 | 11.0 | 88 | 0.9899 | 0.7222 |
| 0.129 | 12.0 | 96 | 0.9880 | 0.6713 |
| 0.0812 | 13.0 | 104 | 0.9722 | 0.7037 |
| 0.0481 | 14.0 | 112 | 0.9296 | 0.7176 |
| 0.0325 | 15.0 | 120 | 0.9181 | 0.6852 |
| 0.0325 | 16.0 | 128 | 0.9013 | 0.7176 |
| 0.0212 | 17.0 | 136 | 0.9077 | 0.6944 |
| 0.0151 | 18.0 | 144 | 0.8954 | 0.7037 |
| 0.0121 | 19.0 | 152 | 0.8941 | 0.7037 |
| 0.0103 | 20.0 | 160 | 0.8945 | 0.6898 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
[
"actor_01",
"actor_02",
"actor_11",
"actor_12",
"actor_13",
"actor_14",
"actor_15",
"actor_16",
"actor_17",
"actor_18",
"actor_19",
"actor_20",
"actor_03",
"actor_21",
"actor_22",
"actor_23",
"actor_24",
"actor_04",
"actor_05",
"actor_06",
"actor_07",
"actor_08",
"actor_09",
"actor_10"
] |
prithivMLmods/Trash-Net
|

# **Trash-Net**
> **Trash-Net** is an image classification vision-language encoder model fine-tuned from **google/siglip2-base-patch16-224** for a single-label classification task. It is designed to classify images of waste materials into different categories using the **SiglipForImageClassification** architecture.
The model categorizes images into six classes:
- **Class 0:** "cardboard"
- **Class 1:** "glass"
- **Class 2:** "metal"
- **Class 3:** "paper"
- **Class 4:** "plastic"
- **Class 5:** "trash"
```py
Classification Report:
precision recall f1-score support
cardboard 0.9912 0.9739 0.9825 806
glass 0.9564 0.9641 0.9602 1002
metal 0.9523 0.9744 0.9632 820
paper 0.9520 0.9848 0.9681 1188
plastic 0.9835 0.9274 0.9546 964
trash 0.9127 0.9161 0.9144 274
accuracy 0.9626 5054
macro avg 0.9580 0.9568 0.9572 5054
weighted avg 0.9631 0.9626 0.9626 5054
```

# **Run with Transformers🤗**
```python
!pip install -q transformers torch pillow gradio
```
```python
import gradio as gr
from transformers import AutoImageProcessor
from transformers import SiglipForImageClassification
from transformers.image_utils import load_image
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/Trash-Net"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
def trash_classification(image):
    """Predicts the category of waste material in the image."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
    probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
    labels = {
        "0": "cardboard",
        "1": "glass",
        "2": "metal",
        "3": "paper",
        "4": "plastic",
        "5": "trash"
    }
    predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}
    return predictions

# Create Gradio interface
iface = gr.Interface(
    fn=trash_classification,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(label="Prediction Scores"),
    title="Trash Classification",
    description="Upload an image to classify the type of waste material."
)

# Launch the app
if __name__ == "__main__":
    iface.launch()
```
# **Intended Use:**
The **Trash-Net** model is designed to classify waste materials into different categories. Potential use cases include:
- **Waste Management:** Assisting in automated waste sorting and recycling.
- **Environmental Monitoring:** Identifying and categorizing waste in public spaces.
- **Educational Purposes:** Teaching waste classification and sustainability.
- **Smart Cities:** Enhancing waste disposal systems through AI-driven classification.
|
[
"cardboard",
"glass",
"metal",
"paper",
"plastic",
"trash"
] |
Gloria56/m-plant-classification-model
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9",
"label_10",
"label_11",
"label_12",
"label_13"
] |
nishawarschonvergeben/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2226
- Accuracy: 0.9229
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3799 | 1.0 | 370 | 0.3052 | 0.9188 |
| 0.2185 | 2.0 | 740 | 0.2473 | 0.9242 |
| 0.1544 | 3.0 | 1110 | 0.2151 | 0.9405 |
| 0.135 | 4.0 | 1480 | 0.2067 | 0.9405 |
| 0.1401 | 5.0 | 1850 | 0.2052 | 0.9378 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
### Zero-Shot Evaluation
- **Model used:** openai/clip-vit-large-patch14
- **Dataset:** Oxford-IIIT-Pets (pcuenq/oxford-pets)
- **Accuracy:** 0.8800
- **Precision:** 0.8768
- **Recall:** 0.8800

The zero-shot evaluation was done using Hugging Face Transformers and the CLIP model on the Oxford-IIIT Pet dataset.
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
prithivMLmods/Road-Subsigns-Classification
|

# **Road-Subsigns-Classification**
> **Road-Subsigns-Classification** is an image classification vision-language encoder model fine-tuned from **google/siglip2-base-patch16-224** for a single-label classification task. It is designed to classify images of road subsigns using the **SiglipForImageClassification** architecture.
```py
Classification Report:
precision recall f1-score support
M1 0.9907 0.9815 0.9860 324
M11c1-E 1.0000 0.9787 0.9892 47
M2 0.9950 0.9853 0.9901 204
M3a-droite 0.9699 0.9680 0.9690 500
M3a-gauche 0.9431 0.9375 0.9403 336
M3b-gauche 1.0000 1.0000 1.0000 14
M4a 0.9914 0.9664 0.9787 119
M4b 0.8929 1.0000 0.9434 25
M4c 0.8947 1.0000 0.9444 17
M4d1 0.9887 1.0000 0.9943 175
M4d2 0.9844 0.9844 0.9844 64
M4f 0.9826 1.0000 0.9912 452
M4g 0.9940 1.0000 0.9970 329
M4h 0.0000 0.0000 0.0000 1
M4u 0.8571 0.9231 0.8889 13
M4v 1.0000 1.0000 1.0000 100
M4z1 1.0000 1.0000 1.0000 45
M4z2 0.0000 0.0000 0.0000 1
M5-STOP 1.0000 0.9872 0.9935 234
M6a 0.9940 0.9920 0.9930 500
M6h 1.0000 0.9943 0.9972 353
M6i 0.9885 1.0000 0.9942 86
M6j 0.9855 1.0000 0.9927 68
M8a 0.9619 0.9528 0.9573 106
M8b 0.7407 0.9091 0.8163 22
M8c 0.8485 0.9825 0.9106 57
M8d 0.9739 0.9739 0.9739 115
M8e 0.9754 0.9835 0.9794 121
M8f 0.9972 0.9756 0.9863 369
M9Z-INTERDIT-HORS-CASES 0.9787 0.9919 0.9852 370
M9Z-SAUF-BUS 0.9650 0.9452 0.9550 146
M9Z-SAUF-BUS-SCOLAIRE 0.9688 0.9394 0.9538 66
M9c 0.9843 1.0000 0.9921 500
M9d 0.9945 0.9759 0.9851 373
M9v 0.9952 1.0000 0.9976 418
M9z 0.7760 0.7132 0.7433 136
M9z-DES-DEUX-COTES 0.9741 0.9496 0.9617 119
M9z-ECOLE 1.0000 0.9474 0.9730 38
M9z-PARKING-PRIVE 1.0000 1.0000 1.0000 9
M9z-PASSAGE-SURELEVE 0.9808 0.9808 0.9808 104
M9z-PROPRIETE-PRIVEE 0.9091 0.8333 0.8696 12
M9z-RAPPEL 0.9933 0.9978 0.9955 447
M9z-SAUF-CHANTIER 1.0000 0.7273 0.8421 11
M9z-SAUF-CONVOIS-EXCEPT 0.0000 0.0000 0.0000 2
M9z-SAUF-CYCLISTES 0.9626 0.9836 0.9730 183
M9z-SAUF-DESSERTE 0.9307 0.9792 0.9543 96
M9z-SAUF-LIVRAISONS 0.8478 0.9286 0.8864 42
M9z-SAUF-POLICE 1.0000 0.8667 0.9286 15
M9z-SAUF-RIVERAINS 0.9677 0.9615 0.9646 312
M9z-SAUF-SERVICE 0.9160 0.9375 0.9266 128
M9z-SAUF-TAXIS 0.7778 0.8235 0.8000 17
M9z-SAUF-VEHICULES-AGRICOLES 0.9712 0.9018 0.9352 112
M9z-SAUF-VEHICULES-AUTORISES 0.9253 0.9817 0.9527 164
M9z-SECOURS 1.0000 0.6667 0.8000 9
M9z-SIGNAL-AUTO 0.9892 0.9892 0.9892 93
M9z-SORTIE-POMPIERS 0.9062 0.9355 0.9206 31
M9z-SORTIE-VEHICULES 1.0000 0.7857 0.8800 14
M9z-SUR-LE-TROTTOIR 0.9444 0.9444 0.9444 18
M9z-VERGLAS 1.0000 0.6875 0.8148 16
zz 0.9486 0.9600 0.9543 500
accuracy 0.9732 9298
macro avg 0.9093 0.8968 0.9009 9298
weighted avg 0.9731 0.9732 0.9729 9298
```
The model categorizes road subsigns into 60 classes:
- **Class 0:** "M1"
- **Class 1:** "M11c1-E"
- **Class 2:** "M2"
- **Class 3:** "M3a-droite"
- **Class 4:** "M3a-gauche"
- **Class 5:** "M3b-gauche"
- **Class 6:** "M4a"
- **Class 7:** "M4b"
- **Class 8:** "M4c"
- **Class 9:** "M4d1"
- **Class 10:** "M4d2"
- **Class 11:** "M4f"
- **Class 12:** "M4g"
- **Class 13:** "M4h"
- **Class 14:** "M4u"
- **Class 15:** "M4v"
- **Class 16:** "M4z1"
- **Class 17:** "M4z2"
- **Class 18:** "M5-STOP"
- **Class 19:** "M6a"
- **Class 20:** "M6h"
- **Class 21:** "M6i"
- **Class 22:** "M6j"
- **Class 23:** "M8a"
- **Class 24:** "M8b"
- **Class 25:** "M8c"
- **Class 26:** "M8d"
- **Class 27:** "M8e"
- **Class 28:** "M8f"
- **Class 29:** "M9Z-INTERDIT-HORS-CASES"
- **Class 30:** "M9Z-SAUF-BUS"
- **Class 31:** "M9Z-SAUF-BUS-SCOLAIRE"
- **Class 32:** "M9c"
- **Class 33:** "M9d"
- **Class 34:** "M9v"
- **Class 35:** "M9z"
- **Class 36:** "M9z-DES-DEUX-COTES"
- **Class 37:** "M9z-ECOLE"
- **Class 38:** "M9z-PARKING-PRIVE"
- **Class 39:** "M9z-PASSAGE-SURELEVE"
- **Class 40:** "M9z-PROPRIETE-PRIVEE"
- **Class 41:** "M9z-RAPPEL"
- **Class 42:** "M9z-SAUF-CHANTIER"
- **Class 43:** "M9z-SAUF-CONVOIS-EXCEPT"
- **Class 44:** "M9z-SAUF-CYCLISTES"
- **Class 45:** "M9z-SAUF-DESSERTE"
- **Class 46:** "M9z-SAUF-LIVRAISONS"
- **Class 47:** "M9z-SAUF-POLICE"
- **Class 48:** "M9z-SAUF-RIVERAINS"
- **Class 49:** "M9z-SAUF-SERVICE"
- **Class 50:** "M9z-SAUF-TAXIS"
- **Class 51:** "M9z-SAUF-VEHICULES-AGRICOLES"
- **Class 52:** "M9z-SAUF-VEHICULES-AUTORISES"
- **Class 53:** "M9z-SECOURS"
- **Class 54:** "M9z-SIGNAL-AUTO"
- **Class 55:** "M9z-SORTIE-POMPIERS"
- **Class 56:** "M9z-SORTIE-VEHICULES"
- **Class 57:** "M9z-SUR-LE-TROTTOIR"
- **Class 58:** "M9z-VERGLAS"
- **Class 59:** "zz"
# **Run with Transformers🤗**
```python
!pip install -q transformers torch pillow gradio
```
```py
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/Road-Subsigns-Classification"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
labels = {
    "0": "M1", "1": "M11c1-E", "2": "M2", "3": "M3a-droite", "4": "M3a-gauche",
    "5": "M3b-gauche", "6": "M4a", "7": "M4b", "8": "M4c", "9": "M4d1",
    "10": "M4d2", "11": "M4f", "12": "M4g", "13": "M4h", "14": "M4u",
    "15": "M4v", "16": "M4z1", "17": "M4z2", "18": "M5-STOP", "19": "M6a",
    "20": "M6h", "21": "M6i", "22": "M6j", "23": "M8a", "24": "M8b",
    "25": "M8c", "26": "M8d", "27": "M8e", "28": "M8f", "29": "M9Z-INTERDIT-HORS-CASES",
    "30": "M9Z-SAUF-BUS", "31": "M9Z-SAUF-BUS-SCOLAIRE", "32": "M9c", "33": "M9d", "34": "M9v",
    "35": "M9z", "36": "M9z-DES-DEUX-COTES", "37": "M9z-ECOLE", "38": "M9z-PARKING-PRIVE",
    "39": "M9z-PASSAGE-SURELEVE", "40": "M9z-PROPRIETE-PRIVEE", "41": "M9z-RAPPEL",
    "42": "M9z-SAUF-CHANTIER", "43": "M9z-SAUF-CONVOIS-EXCEPT", "44": "M9z-SAUF-CYCLISTES",
    "45": "M9z-SAUF-DESSERTE", "46": "M9z-SAUF-LIVRAISONS", "47": "M9z-SAUF-POLICE",
    "48": "M9z-SAUF-RIVERAINS", "49": "M9z-SAUF-SERVICE", "50": "M9z-SAUF-TAXIS",
    "51": "M9z-SAUF-VEHICULES-AGRICOLES", "52": "M9z-SAUF-VEHICULES-AUTORISES", "53": "M9z-SECOURS",
    "54": "M9z-SIGNAL-AUTO", "55": "M9z-SORTIE-POMPIERS", "56": "M9z-SORTIE-VEHICULES",
    "57": "M9z-SUR-LE-TROTTOIR", "58": "M9z-VERGLAS", "59": "zz"
}

def classify_subsign(image):
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
    return {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}

# Create Gradio interface
iface = gr.Interface(
    fn=classify_subsign,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(label="Prediction Scores"),
    title="Road Subsigns Classification",
    description="Upload an image to predict the road subsign category."
)

if __name__ == "__main__":
    iface.launch()
```
---
# **Intended Use:**
The **Road-Subsigns-Classification** model is designed to classify images of road subsigns into 60 categories. Potential use cases include:
- **Traffic Management:** Assisting in automated monitoring and analysis of road signs.
- **Autonomous Vehicles:** Helping vehicles understand road sign information.
- **Smart Cities:** Enhancing traffic regulation systems.
- **Driver Assistance Systems:** Providing visual cues for safer driving.
- **Urban Planning:** Analyzing road sign data for infrastructure improvements.
|
[
"m1",
"m11c1-e",
"m2",
"m3a-droite",
"m3a-gauche",
"m3b-gauche",
"m4a",
"m4b",
"m4c",
"m4d1",
"m4d2",
"m4f",
"m4g",
"m4h",
"m4u",
"m4v",
"m4z1",
"m4z2",
"m5-stop",
"m6a",
"m6h",
"m6i",
"m6j",
"m8a",
"m8b",
"m8c",
"m8d",
"m8e",
"m8f",
"m9z-interdit-hors-cases",
"m9z-sauf-bus",
"m9z-sauf-bus-scolaire",
"m9c",
"m9d",
"m9v",
"m9z",
"m9z-des-deux-cotes",
"m9z-ecole",
"m9z-parking-prive",
"m9z-passage-sureleve",
"m9z-propriete-privee",
"m9z-rappel",
"m9z-sauf-chantier",
"m9z-sauf-convois-except",
"m9z-sauf-cyclistes",
"m9z-sauf-desserte",
"m9z-sauf-livraisons",
"m9z-sauf-police",
"m9z-sauf-riverains",
"m9z-sauf-service",
"m9z-sauf-taxis",
"m9z-sauf-vehicules-agricoles",
"m9z-sauf-vehicules-autorises",
"m9z-secours",
"m9z-signal-auto",
"m9z-sortie-pompiers",
"m9z-sortie-vehicules",
"m9z-sur-le-trottoir",
"m9z-verglas",
"zz"
] |
ISxOdin/vit-base-oxford-iiit-pets
|
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1924
- Accuracy: 0.9445
## Model description
This model is a fine-tuned version of a pre-trained Vision Transformer (`google/vit-base-patch16-224`) for image classification on the Oxford-IIIT Pet Dataset.
It uses transfer learning to adapt a generic vision model to identify 37 different cat and dog breeds.
The model head is adjusted to output the number of classes in the dataset, and it is trained end-to-end using standard classification loss.
---
## Intended uses & limitations
**Intended Uses:**
- Educational demos on transfer learning and fine-tuning vision models.
- Pet breed classification in structured datasets similar to Oxford Pets.
- Comparative analysis with zero-shot models like CLIP.
**Limitations:**
- May not generalize well to breeds outside of the Oxford-IIIT dataset.
- Not suitable for real-world medical or safety-critical applications.
- Input images should be clear, centered, and close in style to the training data (cropped pet portraits).
## Training and evaluation data
The model is trained and evaluated on the [Oxford-IIIT Pet Dataset](https://huggingface.co/datasets/pcuenq/oxford-pets), which contains 7,349 images of cats and dogs spanning 37 different breeds. The dataset includes a roughly equal number of images per breed and was split into training, validation, and test sets. Evaluation metrics used include accuracy, precision, and recall.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 5
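These settings correspond to a standard 🤗 `Trainer` run. A minimal sketch of the setup; the dataset wiring and image preprocessing are assumptions, not taken from this card:
```python
from transformers import (
    AutoModelForImageClassification,
    Trainer,
    TrainingArguments,
)

# Sketch only: replace the 1000-class ImageNet head with a 37-class head.
model = AutoModelForImageClassification.from_pretrained(
    "google/vit-base-patch16-224",
    num_labels=37,
    ignore_mismatched_sizes=True,
)

args = TrainingArguments(
    output_dir="vit-base-oxford-iiit-pets",
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    seed=42,
)

# train_ds / eval_ds would come from pcuenq/oxford-pets after
# processing images with the checkpoint's AutoImageProcessor.
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```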
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3716 | 1.0 | 370 | 0.3013 | 0.9242 |
| 0.2048 | 2.0 | 740 | 0.2342 | 0.9310 |
| 0.1764 | 3.0 | 1110 | 0.2124 | 0.9350 |
| 0.1617 | 4.0 | 1480 | 0.2050 | 0.9350 |
| 0.1235 | 5.0 | 1850 | 0.2032 | 0.9350 |
## Zero-Shot Classification Evaluation (CLIP)
The Oxford-IIIT Pet dataset was also evaluated using a **zero-shot image classification model**: [`openai/clip-vit-base-patch32`](https://huggingface.co/openai/clip-vit-base-patch32).
Instead of training, the CLIP model was evaluated using a list of breed names (e.g., "Siamese", "Persian", "Chihuahua") as candidate labels for zero-shot classification.
### Evaluation Results:
- Accuracy: 0.8800
- Precision: 0.8768
- Recall: 0.8800

### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
belpin/vitmodel_skincheck
|
---
tags:
- vision
- vit
license: mit
---
# Skin Check with Vision Transformer
A model for classifying facial skin types into five classes.
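No usage example is provided. A minimal inference sketch, assuming the repository ships a standard image processor and ViT classification head (`face.jpg` is a placeholder path):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_name = "belpin/vitmodel_skincheck"
processor = AutoImageProcessor.from_pretrained(model_name)
model = AutoModelForImageClassification.from_pretrained(model_name)

image = Image.open("face.jpg").convert("RGB")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```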
|
[
"bekas jerawat",
"hiperpigmentasi",
"jerawat",
"kerutan",
"komedo"
] |
prithivMLmods/PussyCat-vs-Doggie-SigLIP2
|

# **PussyCat-vs-Doggie-SigLIP2**
> **PussyCat-vs-Doggie-SigLIP2** is an image classification vision-language encoder model fine-tuned from **google/siglip2-base-patch16-224** for a single-label classification task. It is designed to classify images as either a cat or a dog using the **SiglipForImageClassification** architecture.
The model categorizes images into two classes:
- **Class 0:** "Pussy Cat"
- **Class 1:** "Doggie"
```py
Classification Report:
precision recall f1-score support
Pussy Cat 0.9194 0.8745 0.8964 12500
Doggie 0.8803 0.9234 0.9013 12500
accuracy 0.8989 25000
macro avg 0.8999 0.8989 0.8989 25000
weighted avg 0.8999 0.8989 0.8989 25000
```

# **Run with Transformers🤗**
```python
!pip install -q transformers torch pillow gradio
```
```python
import gradio as gr
from transformers import AutoImageProcessor
from transformers import SiglipForImageClassification
from transformers.image_utils import load_image
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/PussyCat-vs-Doggie-SigLIP2"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
def animal_classification(image):
    """Predicts whether the image contains a cat or a dog."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
    probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
    labels = {
        "0": "Pussy Cat",
        "1": "Doggie"
    }
    predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}
    return predictions

# Create Gradio interface
iface = gr.Interface(
    fn=animal_classification,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(label="Prediction Scores"),
    title="Cat vs Dog Classification",
    description="Upload an image to classify whether it contains a cat or a dog."
)

# Launch the app
if __name__ == "__main__":
    iface.launch()
```
# **Intended Use:**
The **PussyCat-vs-Doggie-SigLIP2** model is designed to classify images as either a cat or a dog. Potential use cases include:
- **Pet Identification:** Helping users distinguish between cats and dogs.
- **Automated Pet Sorting:** Useful for shelters and pet adoption platforms.
- **Educational Purposes:** Assisting in teaching image classification concepts.
- **Surveillance & Security:** Identifying animals in security footage.
|
[
"pussy cat",
"doggie"
] |
crocutacrocuto/dinov2-large-MEG7-20
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"aardvark",
"baboon",
"badger",
"bird",
"black-and-white colobus",
"blue duiker",
"blue monkey",
"buffalo",
"bushbuck",
"bushpig",
"chimpanzee",
"civet_genet",
"elephant",
"galago_potto",
"golden cat",
"gorilla",
"guineafowl",
"hyrax",
"jackal",
"leopard",
"lhoests monkey",
"mandrill",
"mongoose",
"monkey",
"pangolin",
"porcupine",
"red colobus_red-capped mangabey",
"red duiker",
"rodent",
"serval",
"spotted hyena",
"squirrel",
"water chevrotain",
"yellow-backed duiker"
] |
prithivMLmods/Human-vs-NonHuman-Detection
|

# **Human-vs-NonHuman-Detection**
> **Human-vs-NonHuman-Detection** is an image classification vision-language encoder model fine-tuned from **google/siglip2-base-patch16-224** for a single-label classification task. It is designed to classify images as either human or non-human using the **SiglipForImageClassification** architecture.
```py
Classification Report:
precision recall f1-score support
Human 𖨆 0.9939 0.9735 0.9836 6646
Non Human メ 0.9807 0.9956 0.9881 8989
accuracy 0.9862 15635
macro avg 0.9873 0.9845 0.9858 15635
weighted avg 0.9863 0.9862 0.9862 15635
```

The model categorizes images into two classes:
- **Class 0:** "Human 𖨆"
- **Class 1:** "Non Human メ"
# **Run with Transformers🤗**
```python
!pip install -q transformers torch pillow gradio
```
```python
import gradio as gr
from transformers import AutoImageProcessor
from transformers import SiglipForImageClassification
from transformers.image_utils import load_image
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/Human-vs-NonHuman-Detection"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
def human_detection(image):
    """Predicts whether the image contains a human or non-human entity."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
    probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
    labels = {
        "0": "Human 𖨆",
        "1": "Non Human メ"
    }
    predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}
    return predictions

# Create Gradio interface
iface = gr.Interface(
    fn=human_detection,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(label="Prediction Scores"),
    title="Human vs Non-Human Detection",
    description="Upload an image to classify whether it contains a human or non-human entity."
)

# Launch the app
if __name__ == "__main__":
    iface.launch()
```
# **Intended Use:**
The **Human-vs-NonHuman-Detection** model is designed to distinguish between human and non-human entities. Potential use cases include:
- **Surveillance & Security:** Enhancing monitoring systems to detect human presence.
- **Autonomous Systems:** Helping robots and AI systems identify humans.
- **Image Filtering:** Automatically categorizing human vs. non-human images.
- **Smart Access Control:** Identifying human presence for secure authentication.
|
[
"human 𖨆",
"non human メ"
] |
mbiarreta/vit-ena24
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-ena24
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3273
- Accuracy: 0.7539
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.5119 | 0.1259 | 100 | 1.8353 | 0.5625 |
| 0.809 | 0.2519 | 200 | 1.4106 | 0.6396 |
| 0.6754 | 0.3778 | 300 | 1.5657 | 0.5771 |
| 0.5017 | 0.5038 | 400 | 1.3136 | 0.6865 |
| 0.2595 | 0.6297 | 500 | 1.2942 | 0.6865 |
| 0.243 | 0.7557 | 600 | 1.3563 | 0.6914 |
| 0.3432 | 0.8816 | 700 | 1.4268 | 0.6689 |
| 0.1115 | 1.0076 | 800 | 1.4286 | 0.6973 |
| 0.1615 | 1.1335 | 900 | 1.4697 | 0.6963 |
| 0.115 | 1.2594 | 1000 | 1.4701 | 0.7109 |
| 0.0656 | 1.3854 | 1100 | 1.4417 | 0.7217 |
| 0.1229 | 1.5113 | 1200 | 1.3150 | 0.7451 |
| 0.1064 | 1.6373 | 1300 | 1.3941 | 0.7432 |
| 0.0345 | 1.7632 | 1400 | 1.2879 | 0.7607 |
| 0.0587 | 1.8892 | 1500 | 1.3273 | 0.7539 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
[
"american black bear",
"american crow",
"eastern fox squirrel",
"eastern gray squirrel",
"grey fox",
"horse",
"northern raccoon",
"red fox",
"striped skunk",
"virginia opossum",
"white_tailed_deer",
"wild turkey",
"bird",
"woodchuck",
"bobcat",
"chicken",
"coyote",
"dog",
"domestic cat",
"eastern chipmunk",
"eastern cottontail"
] |
prithivMLmods/Hand-Gesture-19
|

# **Hand-Gesture-19**
> **Hand-Gesture-19** is an image classification vision-language encoder model fine-tuned from **google/siglip2-base-patch16-224** for a single-label classification task. It is designed to classify hand gesture images into different categories using the **SiglipForImageClassification** architecture.
```py
Classification Report:
precision recall f1-score support
call 0.9889 0.9739 0.9813 6939
dislike 0.9892 0.9863 0.9877 7028
fist 0.9956 0.9923 0.9940 6882
four 0.9632 0.9653 0.9643 7183
like 0.9668 0.9855 0.9760 6823
mute 0.9848 0.9976 0.9912 7139
no_gesture 0.9960 0.9957 0.9958 27823
ok 0.9872 0.9831 0.9852 6924
one 0.9817 0.9854 0.9835 7062
palm 0.9793 0.9848 0.9820 7050
peace 0.9723 0.9635 0.9679 6965
peace_inverted 0.9806 0.9836 0.9821 6876
rock 0.9853 0.9865 0.9859 6883
stop 0.9614 0.9901 0.9756 6893
stop_inverted 0.9933 0.9712 0.9821 7142
three 0.9712 0.9478 0.9594 6940
three2 0.9785 0.9799 0.9792 6870
two_up 0.9848 0.9863 0.9855 7346
two_up_inverted 0.9855 0.9871 0.9863 6967
accuracy 0.9833 153735
macro avg 0.9813 0.9814 0.9813 153735
weighted avg 0.9833 0.9833 0.9833 153735
```

The model categorizes images into nineteen hand gestures:
- **Class 0:** "call"
- **Class 1:** "dislike"
- **Class 2:** "fist"
- **Class 3:** "four"
- **Class 4:** "like"
- **Class 5:** "mute"
- **Class 6:** "no_gesture"
- **Class 7:** "ok"
- **Class 8:** "one"
- **Class 9:** "palm"
- **Class 10:** "peace"
- **Class 11:** "peace_inverted"
- **Class 12:** "rock"
- **Class 13:** "stop"
- **Class 14:** "stop_inverted"
- **Class 15:** "three"
- **Class 16:** "three2"
- **Class 17:** "two_up"
- **Class 18:** "two_up_inverted"
# **Run with Transformers🤗**
```python
!pip install -q transformers torch pillow gradio
```
```python
import gradio as gr
from transformers import AutoImageProcessor
from transformers import SiglipForImageClassification
from transformers.image_utils import load_image
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/Hand-Gesture-19"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
def hand_gesture_classification(image):
    """Predicts the hand gesture category from an image."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
    probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
    labels = {
        "0": "call",
        "1": "dislike",
        "2": "fist",
        "3": "four",
        "4": "like",
        "5": "mute",
        "6": "no_gesture",
        "7": "ok",
        "8": "one",
        "9": "palm",
        "10": "peace",
        "11": "peace_inverted",
        "12": "rock",
        "13": "stop",
        "14": "stop_inverted",
        "15": "three",
        "16": "three2",
        "17": "two_up",
        "18": "two_up_inverted"
    }
    predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}
    return predictions

# Create Gradio interface
iface = gr.Interface(
    fn=hand_gesture_classification,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(label="Prediction Scores"),
    title="Hand Gesture Classification",
    description="Upload an image to classify the hand gesture."
)

# Launch the app
if __name__ == "__main__":
    iface.launch()
```
# **Intended Use:**
The **Hand-Gesture-19** model is designed to classify hand gesture images into different categories. Potential use cases include:
- **Human-Computer Interaction:** Enabling gesture-based controls for devices.
- **Sign Language Interpretation:** Assisting in recognizing sign language gestures.
- **Gaming & VR:** Enhancing immersive experiences with hand gesture recognition.
- **Robotics:** Facilitating gesture-based robotic control.
- **Security & Surveillance:** Identifying gestures for access control and safety monitoring.
|
[
"call",
"dislike",
"fist",
"four",
"like",
"mute",
"no_gesture",
"ok",
"one",
"palm",
"peace",
"peace_inverted",
"rock",
"stop",
"stop_inverted",
"three",
"three2",
"two_up",
"two_up_inverted"
] |
Sara5115/swin-tiny-patch4-window7-224-BlurClassification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-BlurClassification
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2873
- Accuracy: 0.9405
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
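These values map directly onto 🤗 `TrainingArguments`; a minimal sketch of the equivalent configuration (`output_dir` is a placeholder), where the effective batch size is `train_batch_size × gradient_accumulation_steps = 32 × 4 = 128`:
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="swin-tiny-blur-classification",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,  # 32 * 4 = 128 effective train batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
    seed=42,
)
```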
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 3 | 0.4959 | 0.7024 |
| No log | 2.0 | 6 | 0.2873 | 0.9405 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.5.1
- Tokenizers 0.21.1
|
[
"blur",
"not blur"
] |
mksachs/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1887
- Accuracy: 0.9364
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3791 | 1.0 | 370 | 0.2872 | 0.9364 |
| 0.2261 | 2.0 | 740 | 0.2153 | 0.9499 |
| 0.1785 | 3.0 | 1110 | 0.1984 | 0.9486 |
| 0.1454 | 4.0 | 1480 | 0.1927 | 0.9472 |
| 0.1402 | 5.0 | 1850 | 0.1906 | 0.9486 |
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
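Inference with a fine-tuned classifier like this one is straightforward with the Auto classes; a minimal sketch, assuming the checkpoint is available under this repo id and using a placeholder image path:
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "mksachs/vit-base-oxford-iiit-pets"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("pet.jpg").convert("RGB")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# id2label maps class indices back to the breed names listed below.
print(model.config.id2label[logits.argmax(-1).item()])
```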
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
Monyrak/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2031
- Accuracy: 0.9459
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3727 | 1.0 | 370 | 0.2756 | 0.9337 |
| 0.2145 | 2.0 | 740 | 0.2168 | 0.9378 |
| 0.1835 | 3.0 | 1110 | 0.1918 | 0.9459 |
| 0.147 | 4.0 | 1480 | 0.1857 | 0.9472 |
| 0.1315 | 5.0 | 1850 | 0.1818 | 0.9472 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
yudenn-s5/bhutanese-textile-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese-textile-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3783
- Accuracy: 0.9792
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3814 | 1.0 | 105 | 0.3783 | 0.9792 |
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
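The "imagefolder" dataset referenced above is the 🤗 Datasets image-folder loader, which infers class labels from directory names; a minimal sketch with a placeholder `data_dir`:
```python
from datasets import load_dataset

# Expected layout: data/train/<class_name>/*.jpg (placeholder directory)
ds = load_dataset("imagefolder", data_dir="data")
print(ds["train"].features["label"].names)  # e.g. ['hor', 'jadrima', ...]
```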
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
Grey3000/bhutanese-textile-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese-textile-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3694
- Accuracy: 0.9857
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3798 | 1.0 | 105 | 0.3694 | 0.9857 |
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
GryffindorSTY/bhutanese-textile-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese-textile-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3864
- Accuracy: 0.9798
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3886 | 1.0 | 105 | 0.3864 | 0.9798 |
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
Sonamyangzom/bhutanese-textile-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese-textile-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3840
- Accuracy: 0.9851
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.399 | 1.0 | 105 | 0.3840 | 0.9851 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
Tshering12/bhutanese-textile-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese-textile-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3685
- Accuracy: 0.9833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3853 | 1.0 | 105 | 0.3685 | 0.9833 |
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
Tndd/bhutanese-textile-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese-textile-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
Tshering12/Age-detection
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
SY750/swin-tiny-patch4-window7-224-finetuned-ViT
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-ViT
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2048
- Accuracy: 0.9371
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1549 | 1.0 | 41 | 0.8420 | 0.7654 |
| 0.5664 | 2.0 | 82 | 0.4272 | 0.8525 |
| 0.4053 | 3.0 | 123 | 0.3613 | 0.8767 |
| 0.3093 | 4.0 | 164 | 0.3808 | 0.8658 |
| 0.3 | 5.0 | 205 | 0.2971 | 0.8863 |
| 0.2445 | 6.0 | 246 | 0.3009 | 0.8851 |
| 0.2508 | 7.0 | 287 | 0.2582 | 0.9105 |
| 0.1998 | 8.0 | 328 | 0.2356 | 0.9202 |
| 0.1722 | 9.0 | 369 | 0.2187 | 0.9287 |
| 0.1176 | 10.0 | 410 | 0.2048 | 0.9371 |
| 0.1073 | 11.0 | 451 | 0.2066 | 0.9274 |
| 0.1075 | 12.0 | 492 | 0.2239 | 0.9262 |
| 0.1109 | 13.0 | 533 | 0.2336 | 0.9202 |
| 0.095 | 14.0 | 574 | 0.2187 | 0.9347 |
| 0.0824 | 15.0 | 615 | 0.2178 | 0.9287 |
| 0.0757 | 16.0 | 656 | 0.1958 | 0.9359 |
| 0.0643 | 17.0 | 697 | 0.2183 | 0.9335 |
### Framework versions
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
[
"adenocarcinoma",
"high-grade in",
"low-grade in",
"normal",
"polyp"
] |
YesheyDema/bhutanese-textile-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese-textile-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3348
- Accuracy: 0.9845
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3468 | 1.0 | 105 | 0.3348 | 0.9845 |
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
Tshenduw/Image_Classification
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
Som3zzz/Waste_classifier
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"battery",
"biological",
"cardboard",
"clothes",
"glass",
"metal",
"paper",
"plastic",
"shoes",
"trash"
] |
Tsheltrim/bhutanese-textile-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese-textile-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3800
- Accuracy: 0.9827
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3944 | 1.0 | 105 | 0.3800 | 0.9827 |
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
pecziflo/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2336
- Accuracy: 0.9337
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3759 | 1.0 | 370 | 0.2724 | 0.9242 |
| 0.2225 | 2.0 | 740 | 0.2050 | 0.9364 |
| 0.179 | 3.0 | 1110 | 0.1858 | 0.9391 |
| 0.1415 | 4.0 | 1480 | 0.1781 | 0.9405 |
| 0.1268 | 5.0 | 1850 | 0.1759 | 0.9391 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
### Zero-Shot Classification Results
- model: openai/clip-vit-large-patch14
- Accuracy: 0.8800
- Precision: 0.8768
- Recall: 0.8800
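Zero-shot scores like these are typically obtained by matching each image against one text prompt per class with CLIP; a minimal sketch, assuming prompts of the form "a photo of a {breed}" and a placeholder image:
```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

ckpt = "openai/clip-vit-large-patch14"
model = CLIPModel.from_pretrained(ckpt)
processor = CLIPProcessor.from_pretrained(ckpt)

breeds = ["siamese", "birman", "shiba inu"]  # abbreviated; full list below
texts = [f"a photo of a {b}" for b in breeds]
image = Image.open("pet.jpg")  # placeholder path

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # shape (1, num_classes)
print(breeds[logits.argmax(-1).item()])
```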
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
yudenn-s5/Image_Classification
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
Tsheltrim/Car_Classifier
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
babsii/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1903
- Accuracy: 0.9553
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3831 | 1.0 | 370 | 0.3375 | 0.9066 |
| 0.2 | 2.0 | 740 | 0.2736 | 0.9202 |
| 0.1622 | 3.0 | 1110 | 0.2580 | 0.9229 |
| 0.1309 | 4.0 | 1480 | 0.2469 | 0.9215 |
| 0.1253 | 5.0 | 1850 | 0.2435 | 0.9229 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
### Zero Shot Evaluation
- model: openai/clip-vit-large-patch14
- Accuracy: 0.8800
- Precision: 0.8768
- Recall: 0.8800
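The precision and recall above are presumably weighted averages over the 37 classes; a minimal sketch of how such aggregates are computed with scikit-learn (placeholder predictions):
```python
from sklearn.metrics import precision_score, recall_score

y_true = [0, 1, 2, 2, 3]  # placeholder class ids
y_pred = [0, 1, 2, 1, 3]

print(precision_score(y_true, y_pred, average="weighted", zero_division=0))
print(recall_score(y_true, y_pred, average="weighted", zero_division=0))
```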
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
Wangzin20/bhutanese_textile_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese_textile_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.6442
- eval_accuracy: 0.7857
- eval_runtime: 96.1655
- eval_samples_per_second: 1.456
- eval_steps_per_second: 0.094
- epoch: 0.9143
- step: 8
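Fields prefixed `eval_` in this shape are what `Trainer.evaluate()` returns; the accuracy itself usually comes from a `compute_metrics` hook like this minimal sketch:
```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```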
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
MeowKun/bhutanese-textile-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese-textile-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 7 | 1.7735 | 0.6696 |
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
BeckerAnas/convnext-tiny-finetuned-cifar10
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck"
] |
TsheringChojay/bhutanese-textile-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese-textile-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
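For reference, a hedged sketch of how this list maps onto 🤗 `TrainingArguments`; only `output_dir` is invented, every other value is copied from the list above (note that 16 × 4 gradient-accumulation steps yields the reported total train batch size of 64):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bhutanese-textile-model",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=4,   # 16 x 4 = total train batch size of 64
    optim="adamw_torch",             # AdamW with betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=1,
)
```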
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.9143 | 8 | 1.8348 | 0.4857 |
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
szangmo/bhutanese-textile-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese-textile-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8660
- Accuracy: 0.6881
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 2.0161 | 0.9730 | 18 | 1.8660 | 0.6881 |
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"1_pet",
"2_hdpe",
"3_pvs",
"4_ldpe",
"5_pp",
"6_ps",
"7_o",
"non-waste"
] |
dupthotshering/bhutanese-textile-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese-textile-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.9143 | 8 | 1.4820 | 0.7714 |
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
UgyenR/bhutanese-textile-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese-textile-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.9143 | 8 | 1.7863 | 0.5929 |
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
szangmo/age-detection
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
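Until the authors fill this in, a minimal sketch without the `pipeline` abstraction; the repository id comes from this card, but the image path is a placeholder and the checkpoint is assumed to ship a standard image-processor config:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "szangmo/age-detection"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)

image = Image.open("face.jpg").convert("RGB")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class index back to its label string
print(model.config.id2label[logits.argmax(-1).item()])
```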
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"child 0-12",
"teenager 13-20",
"adult 21-44",
"middle age 45-64",
"aged 65+"
] |
YosuNamgay/bhutanese-textile-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese-textile-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.9143 | 8 | 1.8026 | 0.5286 |
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
Tndd/Food_Classifier
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
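No official snippet is provided; a minimal sketch, assuming the repository contains a working image-classification checkpoint (the repo id is taken from this card, the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Tndd/Food_Classifier")

# top_k limits the output to the three highest-scoring labels
for pred in classifier("path/to/photo.jpg", top_k=3):
    print(f"{pred['label']}: {pred['score']:.3f}")
```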
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
Crackeo/bhutanese-textile-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese-textile-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.9143 | 8 | 1.7771 | 0.6429 |
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
kuhs/pokemon-vit
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pokemon-vit
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pokemon dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2805
- Accuracy: 0.6842
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 10 | 1.8412 | 0.2105 |
| No log | 2.0 | 20 | 1.6505 | 0.3684 |
| No log | 3.0 | 30 | 1.5253 | 0.6316 |
| No log | 4.0 | 40 | 1.4592 | 0.6316 |
| No log | 5.0 | 50 | 1.4373 | 0.6316 |
### Framework versions
- Transformers 4.50.0.dev0
- Pytorch 2.6.0
- Datasets 3.2.0
- Tokenizers 0.21.0
|
[
"0",
"1",
"2",
"3",
"4",
"5"
] |
YosuNamgay/car_classifier
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
dupthotshering/Car_Classifier
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
ugyendendup/bhutanese-textile-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese-textile-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3952
- Accuracy: 0.9798
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3835 | 1.0 | 105 | 0.3952 | 0.9798 |
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
ugyendendup/Car_Classifier
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
chimegd/bhutanese-textile-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese-textile-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3361
- Accuracy: 0.9798
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.327 | 1.0 | 105 | 0.3361 | 0.9798 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
prithivMLmods/Geometric-Shapes-Classification
|

# **Geometric-Shapes-Classification**
> **Geometric-Shapes-Classification** is an image classification model (a vision-language encoder) fine-tuned from **google/siglip2-base-patch16-224** for multi-class shape recognition. It classifies geometric shapes using the **SiglipForImageClassification** architecture.
```text
Classification Report:
                 precision    recall  f1-score   support

Circle ◯            0.9921    0.9987    0.9953      1500
Kite ⬰              0.9927    0.9927    0.9927      1500
Parallelogram ▰     0.9926    0.9840    0.9883      1500
Rectangle ▭         0.9993    0.9913    0.9953      1500
Rhombus ◆           0.9846    0.9820    0.9833      1500
Square ◼            0.9914    0.9987    0.9950      1500
Trapezoid ⏢         0.9966    0.9793    0.9879      1500
Triangle ▲          0.9772    0.9993    0.9881      1500

       accuracy                         0.9908     12000
      macro avg     0.9908    0.9908    0.9907     12000
   weighted avg     0.9908    0.9908    0.9907     12000
```

The model categorizes images into the following classes:
- **Class 0:** Circle ◯
- **Class 1:** Kite ⬰
- **Class 2:** Parallelogram ▰
- **Class 3:** Rectangle ▭
- **Class 4:** Rhombus ◆
- **Class 5:** Square ◼
- **Class 6:** Trapezoid ⏢
- **Class 7:** Triangle ▲
---
# **Run with Transformers 🤗**
```python
!pip install -q transformers torch pillow gradio
```
```python
import gradio as gr
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

# Load model and processor
model_name = "prithivMLmods/Geometric-Shapes-Classification"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

# Label mapping with symbols
labels = {
    "0": "Circle ◯",
    "1": "Kite ⬰",
    "2": "Parallelogram ▰",
    "3": "Rectangle ▭",
    "4": "Rhombus ◆",
    "5": "Square ◼",
    "6": "Trapezoid ⏢",
    "7": "Triangle ▲"
}

def classify_shape(image):
    """Classifies the geometric shape in the input image."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
    # Attach each label to its softmax score, rounded for display
    predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}
    return predictions

# Gradio interface
iface = gr.Interface(
    fn=classify_shape,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(label="Prediction Scores"),
    title="Geometric Shapes Classification",
    description="Upload an image to classify geometric shapes such as circle, triangle, square, and more."
)

# Launch the app
if __name__ == "__main__":
    iface.launch()
```
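Running the script serves the app locally; passing `share=True` to `iface.launch()` additionally creates a temporary public link, which is convenient for quick demos.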
---
# **Intended Use**
The **Geometric-Shapes-Classification** model is designed to recognize basic geometric shapes in images. Example use cases:
- **Educational Tools:** For learning and teaching geometry visually.
- **Computer Vision Projects:** As a shape detector in robotics or automation.
- **Image Analysis:** Recognizing symbols in diagrams or engineering drafts.
- **Assistive Technology:** Supporting shape identification for visually impaired applications.
|
[
"circle ◯",
"kite ⬰",
"parallelogram ▰",
"rectangle ▭",
"rhombus ◆",
"square ◼",
"trapezoid ⏢",
"triangle ▲"
] |
acho2003/bhutanese-textile-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese-textile-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0996
- Accuracy: 0.9070
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1353 | 1.0 | 43 | 1.0996 | 0.9070 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
kcheki/bhutanese-textile-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese-textile-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3796
- Accuracy: 0.9851
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3905 | 1.0 | 105 | 0.3796 | 0.9851 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
sherab65/bhutanese-textile-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese-textile-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6295
- Accuracy: 0.9786
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.6879 | 0.9963 | 67 | 0.6295 | 0.9786 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
SangayWangmo/bhutanese-textile-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese-textile-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3634
- Accuracy: 0.9839
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3613 | 1.0 | 105 | 0.3634 | 0.9839 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
decipherme/bhutanese_currency_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese_currency_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3152
- Accuracy: 0.975
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9781 | 1.0 | 20 | 1.8946 | 0.5844 |
| 1.5451 | 2.0 | 40 | 1.4346 | 0.8781 |
| 1.0404 | 3.0 | 60 | 0.9740 | 0.9281 |
| 0.7007 | 4.0 | 80 | 0.6779 | 0.9656 |
| 0.5091 | 5.0 | 100 | 0.4903 | 0.9781 |
| 0.3778 | 6.0 | 120 | 0.4151 | 0.9656 |
| 0.3274 | 7.0 | 140 | 0.3602 | 0.9812 |
| 0.2779 | 8.0 | 160 | 0.3155 | 0.9875 |
| 0.2644 | 9.0 | 180 | 0.2932 | 0.9812 |
| 0.2576 | 10.0 | 200 | 0.3152 | 0.975 |
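The Accuracy column above is presumably produced by a standard `compute_metrics` hook passed to the `Trainer`; a minimal sketch, assuming the usual 🤗 `evaluate` setup (not taken from the authors' code):

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    """Return accuracy as reported in the table above."""
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```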
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7"
] |
12220038K/bhutanese-textile-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese-textile-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3977
- Accuracy: 0.9798
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3969 | 1.0 | 105 | 0.3977 | 0.9798 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
Kezang/bhutanese-textile-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese-textile-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3597
- Accuracy: 0.9851
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3657 | 1.0 | 105 | 0.3597 | 0.9851 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
ddeyy/bhutanese-textile-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese-textile-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3823
- Accuracy: 0.9833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3909 | 1.0 | 105 | 0.3823 | 0.9833 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
Kawang/bhutanese-textile-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese-textile-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3769
- Accuracy: 0.9857
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3954 | 1.0 | 105 | 0.3769 | 0.9857 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
sonam505/bhutanese-textile-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese-textile-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3255
- Accuracy: 0.9863
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3259 | 1.0 | 105 | 0.3255 | 0.9863 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
LunaAria/bhutanese-textile-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese-textile-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3842
- Accuracy: 0.9768
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3965 | 1.0 | 105 | 0.3842 | 0.9768 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
PhurbaDT/bhutanese-textile-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese-textile-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3225
- Accuracy: 0.9851
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3087 | 1.0 | 105 | 0.3225 | 0.9851 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
Phurpa/bhutanese-textile-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese-textile-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5131
- Accuracy: 0.9754
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5197 | 1.0 | 84 | 0.5131 | 0.9754 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
Sangay123/bhutanese-textile-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese-textile-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3741
- Accuracy: 0.9863
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3871 | 1.0 | 105 | 0.3741 | 0.9863 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
yba222/bhutanese-textile-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese-textile-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3814
- Accuracy: 0.9792
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3745 | 1.0 | 105 | 0.3814 | 0.9792 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
SangayWangmo/image_classification
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
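Pending the authors' own snippet, a generic 🤗 image-classification pipeline call is a reasonable starting point (a sketch, assuming the checkpoint id matches the repository name; the image path is a placeholder):
```python
from transformers import pipeline

# Hypothetical usage sketch; replace the image path with a real file.
classifier = pipeline("image-classification", model="SangayWangmo/image_classification")
print(classifier("sample.jpg"))  # returns a list of {"label": ..., "score": ...} dicts
```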
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
sonamdendup/bhutanese-textile-model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhutanese-textile-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3939
- Accuracy: 0.9762
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3978 | 1.0 | 105 | 0.3939 | 0.9762 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"hor",
"jadrima",
"kishuthara",
"marthra",
"pangtse",
"serthra",
"shinglo"
] |
yba222/Bhutanese_currency_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bhutanese_currency_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1863
- Accuracy: 0.9375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2025 | 1.0 | 70 | 1.1863 | 0.9375 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"nu. 1",
"nu. 10",
"nu. 100",
"nu. 1000",
"nu. 20",
"nu. 5",
"nu. 50",
"nu. 500"
] |
sherab65/age-classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# age-classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6466
- Accuracy: 0.7708
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.614 | 0.9968 | 237 | 0.6466 | 0.7708 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"0-12",
"13-20",
"21-44",
"45-64",
"65+"
] |
prithivMLmods/Mirage-Photo-Classifier
|

# **Mirage-Photo-Classifier**
> **Mirage-Photo-Classifier** is an image-classification model fine-tuned from the **google/siglip2-base-patch16-224** vision-language encoder for binary image-authenticity classification. Built on the **SiglipForImageClassification** architecture, it predicts whether an image is real or AI-generated (fake).
```py
Classification Report:
              precision    recall  f1-score   support

        Real     0.9781    0.9132    0.9446      5000
        Fake     0.9186    0.9796    0.9481      5000

    accuracy                         0.9464     10000
   macro avg     0.9484    0.9464    0.9463     10000
weighted avg     0.9484    0.9464    0.9463     10000
```
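A report in this format can be reproduced with scikit-learn (a sketch, assuming `y_true` and `y_pred` hold the reference and predicted class indices for the evaluation set):
```python
from sklearn.metrics import classification_report

y_true = [0, 0, 1, 1]  # hypothetical ground-truth labels (0 = Real, 1 = Fake)
y_pred = [0, 1, 1, 1]  # hypothetical model predictions
print(classification_report(y_true, y_pred, target_names=["Real", "Fake"], digits=4))
```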

The model categorizes images into two classes:
- **Class 0:** Real
- **Class 1:** Fake
---
# **Run with Transformers 🤗**
```python
!pip install -q transformers torch pillow gradio
```
```python
import gradio as gr
from transformers import AutoImageProcessor
from transformers import SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/Mirage-Photo-Classifier"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
# Label mapping
labels = {
    "0": "Real",
    "1": "Fake"
}

def classify_image_authenticity(image):
    """Predicts whether the image is real or AI-generated (fake)."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
    probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
    predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}
    return predictions

# Gradio interface
iface = gr.Interface(
    fn=classify_image_authenticity,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(label="Prediction Scores"),
    title="Mirage Photo Classifier",
    description="Upload an image to determine if it's Real or AI-generated (Fake)."
)

# Launch the app
if __name__ == "__main__":
    iface.launch()
```
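For batch or offline use, the same model can be called without the Gradio front end. A minimal sketch, reusing `model`, `processor`, and `labels` from the block above (the image path is a placeholder):
```python
from PIL import Image
import torch

# Hypothetical offline usage; assumes model, processor, and labels are already loaded.
image = Image.open("photo.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=1).item()
print(labels[str(pred)], torch.softmax(logits, dim=1)[0, pred].item())
```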
---
# **Intended Use**
The **Mirage-Photo-Classifier** model is designed to detect whether an image is a genuine photograph or synthetically generated. Use cases include:
- **AI Image Detection:** Identifying AI-generated images in social media, news, or datasets.
- **Digital Forensics:** Helping professionals detect image authenticity in investigations.
- **Platform Moderation:** Assisting content platforms in labeling generated content.
- **Dataset Validation:** Cleaning and verifying training data for other AI models.
|
[
"real",
"fake"
] |
kcheki/image_classification
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"background",
"tench, tinca tinca",
"goldfish, carassius auratus",
"great white shark, white shark, man-eater, man-eating shark, carcharodon carcharias",
"tiger shark, galeocerdo cuvieri",
"hammerhead, hammerhead shark",
"electric ray, crampfish, numbfish, torpedo",
"stingray",
"cock",
"hen",
"ostrich, struthio camelus",
"brambling, fringilla montifringilla",
"goldfinch, carduelis carduelis",
"house finch, linnet, carpodacus mexicanus",
"junco, snowbird",
"indigo bunting, indigo finch, indigo bird, passerina cyanea",
"robin, american robin, turdus migratorius",
"bulbul",
"jay",
"magpie",
"chickadee",
"water ouzel, dipper",
"kite",
"bald eagle, american eagle, haliaeetus leucocephalus",
"vulture",
"great grey owl, great gray owl, strix nebulosa",
"european fire salamander, salamandra salamandra",
"common newt, triturus vulgaris",
"eft",
"spotted salamander, ambystoma maculatum",
"axolotl, mud puppy, ambystoma mexicanum",
"bullfrog, rana catesbeiana",
"tree frog, tree-frog",
"tailed frog, bell toad, ribbed toad, tailed toad, ascaphus trui",
"loggerhead, loggerhead turtle, caretta caretta",
"leatherback turtle, leatherback, leathery turtle, dermochelys coriacea",
"mud turtle",
"terrapin",
"box turtle, box tortoise",
"banded gecko",
"common iguana, iguana, iguana iguana",
"american chameleon, anole, anolis carolinensis",
"whiptail, whiptail lizard",
"agama",
"frilled lizard, chlamydosaurus kingi",
"alligator lizard",
"gila monster, heloderma suspectum",
"green lizard, lacerta viridis",
"african chameleon, chamaeleo chamaeleon",
"komodo dragon, komodo lizard, dragon lizard, giant lizard, varanus komodoensis",
"african crocodile, nile crocodile, crocodylus niloticus",
"american alligator, alligator mississipiensis",
"triceratops",
"thunder snake, worm snake, carphophis amoenus",
"ringneck snake, ring-necked snake, ring snake",
"hognose snake, puff adder, sand viper",
"green snake, grass snake",
"king snake, kingsnake",
"garter snake, grass snake",
"water snake",
"vine snake",
"night snake, hypsiglena torquata",
"boa constrictor, constrictor constrictor",
"rock python, rock snake, python sebae",
"indian cobra, naja naja",
"green mamba",
"sea snake",
"horned viper, cerastes, sand viper, horned asp, cerastes cornutus",
"diamondback, diamondback rattlesnake, crotalus adamanteus",
"sidewinder, horned rattlesnake, crotalus cerastes",
"trilobite",
"harvestman, daddy longlegs, phalangium opilio",
"scorpion",
"black and gold garden spider, argiope aurantia",
"barn spider, araneus cavaticus",
"garden spider, aranea diademata",
"black widow, latrodectus mactans",
"tarantula",
"wolf spider, hunting spider",
"tick",
"centipede",
"black grouse",
"ptarmigan",
"ruffed grouse, partridge, bonasa umbellus",
"prairie chicken, prairie grouse, prairie fowl",
"peacock",
"quail",
"partridge",
"african grey, african gray, psittacus erithacus",
"macaw",
"sulphur-crested cockatoo, kakatoe galerita, cacatua galerita",
"lorikeet",
"coucal",
"bee eater",
"hornbill",
"hummingbird",
"jacamar",
"toucan",
"drake",
"red-breasted merganser, mergus serrator",
"goose",
"black swan, cygnus atratus",
"tusker",
"echidna, spiny anteater, anteater",
"platypus, duckbill, duckbilled platypus, duck-billed platypus, ornithorhynchus anatinus",
"wallaby, brush kangaroo",
"koala, koala bear, kangaroo bear, native bear, phascolarctos cinereus",
"wombat",
"jellyfish",
"sea anemone, anemone",
"brain coral",
"flatworm, platyhelminth",
"nematode, nematode worm, roundworm",
"conch",
"snail",
"slug",
"sea slug, nudibranch",
"chiton, coat-of-mail shell, sea cradle, polyplacophore",
"chambered nautilus, pearly nautilus, nautilus",
"dungeness crab, cancer magister",
"rock crab, cancer irroratus",
"fiddler crab",
"king crab, alaska crab, alaskan king crab, alaska king crab, paralithodes camtschatica",
"american lobster, northern lobster, maine lobster, homarus americanus",
"spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish",
"crayfish, crawfish, crawdad, crawdaddy",
"hermit crab",
"isopod",
"white stork, ciconia ciconia",
"black stork, ciconia nigra",
"spoonbill",
"flamingo",
"little blue heron, egretta caerulea",
"american egret, great white heron, egretta albus",
"bittern",
"crane",
"limpkin, aramus pictus",
"european gallinule, porphyrio porphyrio",
"american coot, marsh hen, mud hen, water hen, fulica americana",
"bustard",
"ruddy turnstone, arenaria interpres",
"red-backed sandpiper, dunlin, erolia alpina",
"redshank, tringa totanus",
"dowitcher",
"oystercatcher, oyster catcher",
"pelican",
"king penguin, aptenodytes patagonica",
"albatross, mollymawk",
"grey whale, gray whale, devilfish, eschrichtius gibbosus, eschrichtius robustus",
"killer whale, killer, orca, grampus, sea wolf, orcinus orca",
"dugong, dugong dugon",
"sea lion",
"chihuahua",
"japanese spaniel",
"maltese dog, maltese terrier, maltese",
"pekinese, pekingese, peke",
"shih-tzu",
"blenheim spaniel",
"papillon",
"toy terrier",
"rhodesian ridgeback",
"afghan hound, afghan",
"basset, basset hound",
"beagle",
"bloodhound, sleuthhound",
"bluetick",
"black-and-tan coonhound",
"walker hound, walker foxhound",
"english foxhound",
"redbone",
"borzoi, russian wolfhound",
"irish wolfhound",
"italian greyhound",
"whippet",
"ibizan hound, ibizan podenco",
"norwegian elkhound, elkhound",
"otterhound, otter hound",
"saluki, gazelle hound",
"scottish deerhound, deerhound",
"weimaraner",
"staffordshire bullterrier, staffordshire bull terrier",
"american staffordshire terrier, staffordshire terrier, american pit bull terrier, pit bull terrier",
"bedlington terrier",
"border terrier",
"kerry blue terrier",
"irish terrier",
"norfolk terrier",
"norwich terrier",
"yorkshire terrier",
"wire-haired fox terrier",
"lakeland terrier",
"sealyham terrier, sealyham",
"airedale, airedale terrier",
"cairn, cairn terrier",
"australian terrier",
"dandie dinmont, dandie dinmont terrier",
"boston bull, boston terrier",
"miniature schnauzer",
"giant schnauzer",
"standard schnauzer",
"scotch terrier, scottish terrier, scottie",
"tibetan terrier, chrysanthemum dog",
"silky terrier, sydney silky",
"soft-coated wheaten terrier",
"west highland white terrier",
"lhasa, lhasa apso",
"flat-coated retriever",
"curly-coated retriever",
"golden retriever",
"labrador retriever",
"chesapeake bay retriever",
"german short-haired pointer",
"vizsla, hungarian pointer",
"english setter",
"irish setter, red setter",
"gordon setter",
"brittany spaniel",
"clumber, clumber spaniel",
"english springer, english springer spaniel",
"welsh springer spaniel",
"cocker spaniel, english cocker spaniel, cocker",
"sussex spaniel",
"irish water spaniel",
"kuvasz",
"schipperke",
"groenendael",
"malinois",
"briard",
"kelpie",
"komondor",
"old english sheepdog, bobtail",
"shetland sheepdog, shetland sheep dog, shetland",
"collie",
"border collie",
"bouvier des flandres, bouviers des flandres",
"rottweiler",
"german shepherd, german shepherd dog, german police dog, alsatian",
"doberman, doberman pinscher",
"miniature pinscher",
"greater swiss mountain dog",
"bernese mountain dog",
"appenzeller",
"entlebucher",
"boxer",
"bull mastiff",
"tibetan mastiff",
"french bulldog",
"great dane",
"saint bernard, st bernard",
"eskimo dog, husky",
"malamute, malemute, alaskan malamute",
"siberian husky",
"dalmatian, coach dog, carriage dog",
"affenpinscher, monkey pinscher, monkey dog",
"basenji",
"pug, pug-dog",
"leonberg",
"newfoundland, newfoundland dog",
"great pyrenees",
"samoyed, samoyede",
"pomeranian",
"chow, chow chow",
"keeshond",
"brabancon griffon",
"pembroke, pembroke welsh corgi",
"cardigan, cardigan welsh corgi",
"toy poodle",
"miniature poodle",
"standard poodle",
"mexican hairless",
"timber wolf, grey wolf, gray wolf, canis lupus",
"white wolf, arctic wolf, canis lupus tundrarum",
"red wolf, maned wolf, canis rufus, canis niger",
"coyote, prairie wolf, brush wolf, canis latrans",
"dingo, warrigal, warragal, canis dingo",
"dhole, cuon alpinus",
"african hunting dog, hyena dog, cape hunting dog, lycaon pictus",
"hyena, hyaena",
"red fox, vulpes vulpes",
"kit fox, vulpes macrotis",
"arctic fox, white fox, alopex lagopus",
"grey fox, gray fox, urocyon cinereoargenteus",
"tabby, tabby cat",
"tiger cat",
"persian cat",
"siamese cat, siamese",
"egyptian cat",
"cougar, puma, catamount, mountain lion, painter, panther, felis concolor",
"lynx, catamount",
"leopard, panthera pardus",
"snow leopard, ounce, panthera uncia",
"jaguar, panther, panthera onca, felis onca",
"lion, king of beasts, panthera leo",
"tiger, panthera tigris",
"cheetah, chetah, acinonyx jubatus",
"brown bear, bruin, ursus arctos",
"american black bear, black bear, ursus americanus, euarctos americanus",
"ice bear, polar bear, ursus maritimus, thalarctos maritimus",
"sloth bear, melursus ursinus, ursus ursinus",
"mongoose",
"meerkat, mierkat",
"tiger beetle",
"ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle",
"ground beetle, carabid beetle",
"long-horned beetle, longicorn, longicorn beetle",
"leaf beetle, chrysomelid",
"dung beetle",
"rhinoceros beetle",
"weevil",
"fly",
"bee",
"ant, emmet, pismire",
"grasshopper, hopper",
"cricket",
"walking stick, walkingstick, stick insect",
"cockroach, roach",
"mantis, mantid",
"cicada, cicala",
"leafhopper",
"lacewing, lacewing fly",
"dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk",
"damselfly",
"admiral",
"ringlet, ringlet butterfly",
"monarch, monarch butterfly, milkweed butterfly, danaus plexippus",
"cabbage butterfly",
"sulphur butterfly, sulfur butterfly",
"lycaenid, lycaenid butterfly",
"starfish, sea star",
"sea urchin",
"sea cucumber, holothurian",
"wood rabbit, cottontail, cottontail rabbit",
"hare",
"angora, angora rabbit",
"hamster",
"porcupine, hedgehog",
"fox squirrel, eastern fox squirrel, sciurus niger",
"marmot",
"beaver",
"guinea pig, cavia cobaya",
"sorrel",
"zebra",
"hog, pig, grunter, squealer, sus scrofa",
"wild boar, boar, sus scrofa",
"warthog",
"hippopotamus, hippo, river horse, hippopotamus amphibius",
"ox",
"water buffalo, water ox, asiatic buffalo, bubalus bubalis",
"bison",
"ram, tup",
"bighorn, bighorn sheep, cimarron, rocky mountain bighorn, rocky mountain sheep, ovis canadensis",
"ibex, capra ibex",
"hartebeest",
"impala, aepyceros melampus",
"gazelle",
"arabian camel, dromedary, camelus dromedarius",
"llama",
"weasel",
"mink",
"polecat, fitch, foulmart, foumart, mustela putorius",
"black-footed ferret, ferret, mustela nigripes",
"otter",
"skunk, polecat, wood pussy",
"badger",
"armadillo",
"three-toed sloth, ai, bradypus tridactylus",
"orangutan, orang, orangutang, pongo pygmaeus",
"gorilla, gorilla gorilla",
"chimpanzee, chimp, pan troglodytes",
"gibbon, hylobates lar",
"siamang, hylobates syndactylus, symphalangus syndactylus",
"guenon, guenon monkey",
"patas, hussar monkey, erythrocebus patas",
"baboon",
"macaque",
"langur",
"colobus, colobus monkey",
"proboscis monkey, nasalis larvatus",
"marmoset",
"capuchin, ringtail, cebus capucinus",
"howler monkey, howler",
"titi, titi monkey",
"spider monkey, ateles geoffroyi",
"squirrel monkey, saimiri sciureus",
"madagascar cat, ring-tailed lemur, lemur catta",
"indri, indris, indri indri, indri brevicaudatus",
"indian elephant, elephas maximus",
"african elephant, loxodonta africana",
"lesser panda, red panda, panda, bear cat, cat bear, ailurus fulgens",
"giant panda, panda, panda bear, coon bear, ailuropoda melanoleuca",
"barracouta, snoek",
"eel",
"coho, cohoe, coho salmon, blue jack, silver salmon, oncorhynchus kisutch",
"rock beauty, holocanthus tricolor",
"anemone fish",
"sturgeon",
"gar, garfish, garpike, billfish, lepisosteus osseus",
"lionfish",
"puffer, pufferfish, blowfish, globefish",
"abacus",
"abaya",
"academic gown, academic robe, judge's robe",
"accordion, piano accordion, squeeze box",
"acoustic guitar",
"aircraft carrier, carrier, flattop, attack aircraft carrier",
"airliner",
"airship, dirigible",
"altar",
"ambulance",
"amphibian, amphibious vehicle",
"analog clock",
"apiary, bee house",
"apron",
"ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin",
"assault rifle, assault gun",
"backpack, back pack, knapsack, packsack, rucksack, haversack",
"bakery, bakeshop, bakehouse",
"balance beam, beam",
"balloon",
"ballpoint, ballpoint pen, ballpen, biro",
"band aid",
"banjo",
"bannister, banister, balustrade, balusters, handrail",
"barbell",
"barber chair",
"barbershop",
"barn",
"barometer",
"barrel, cask",
"barrow, garden cart, lawn cart, wheelbarrow",
"baseball",
"basketball",
"bassinet",
"bassoon",
"bathing cap, swimming cap",
"bath towel",
"bathtub, bathing tub, bath, tub",
"beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon",
"beacon, lighthouse, beacon light, pharos",
"beaker",
"bearskin, busby, shako",
"beer bottle",
"beer glass",
"bell cote, bell cot",
"bib",
"bicycle-built-for-two, tandem bicycle, tandem",
"bikini, two-piece",
"binder, ring-binder",
"binoculars, field glasses, opera glasses",
"birdhouse",
"boathouse",
"bobsled, bobsleigh, bob",
"bolo tie, bolo, bola tie, bola",
"bonnet, poke bonnet",
"bookcase",
"bookshop, bookstore, bookstall",
"bottlecap",
"bow",
"bow tie, bow-tie, bowtie",
"brass, memorial tablet, plaque",
"brassiere, bra, bandeau",
"breakwater, groin, groyne, mole, bulwark, seawall, jetty",
"breastplate, aegis, egis",
"broom",
"bucket, pail",
"buckle",
"bulletproof vest",
"bullet train, bullet",
"butcher shop, meat market",
"cab, hack, taxi, taxicab",
"caldron, cauldron",
"candle, taper, wax light",
"cannon",
"canoe",
"can opener, tin opener",
"cardigan",
"car mirror",
"carousel, carrousel, merry-go-round, roundabout, whirligig",
"carpenter's kit, tool kit",
"carton",
"car wheel",
"cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, atm",
"cassette",
"cassette player",
"castle",
"catamaran",
"cd player",
"cello, violoncello",
"cellular telephone, cellular phone, cellphone, cell, mobile phone",
"chain",
"chainlink fence",
"chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour",
"chain saw, chainsaw",
"chest",
"chiffonier, commode",
"chime, bell, gong",
"china cabinet, china closet",
"christmas stocking",
"church, church building",
"cinema, movie theater, movie theatre, movie house, picture palace",
"cleaver, meat cleaver, chopper",
"cliff dwelling",
"cloak",
"clog, geta, patten, sabot",
"cocktail shaker",
"coffee mug",
"coffeepot",
"coil, spiral, volute, whorl, helix",
"combination lock",
"computer keyboard, keypad",
"confectionery, confectionary, candy store",
"container ship, containership, container vessel",
"convertible",
"corkscrew, bottle screw",
"cornet, horn, trumpet, trump",
"cowboy boot",
"cowboy hat, ten-gallon hat",
"cradle",
"crane",
"crash helmet",
"crate",
"crib, cot",
"crock pot",
"croquet ball",
"crutch",
"cuirass",
"dam, dike, dyke",
"desk",
"desktop computer",
"dial telephone, dial phone",
"diaper, nappy, napkin",
"digital clock",
"digital watch",
"dining table, board",
"dishrag, dishcloth",
"dishwasher, dish washer, dishwashing machine",
"disk brake, disc brake",
"dock, dockage, docking facility",
"dogsled, dog sled, dog sleigh",
"dome",
"doormat, welcome mat",
"drilling platform, offshore rig",
"drum, membranophone, tympan",
"drumstick",
"dumbbell",
"dutch oven",
"electric fan, blower",
"electric guitar",
"electric locomotive",
"entertainment center",
"envelope",
"espresso maker",
"face powder",
"feather boa, boa",
"file, file cabinet, filing cabinet",
"fireboat",
"fire engine, fire truck",
"fire screen, fireguard",
"flagpole, flagstaff",
"flute, transverse flute",
"folding chair",
"football helmet",
"forklift",
"fountain",
"fountain pen",
"four-poster",
"freight car",
"french horn, horn",
"frying pan, frypan, skillet",
"fur coat",
"garbage truck, dustcart",
"gasmask, respirator, gas helmet",
"gas pump, gasoline pump, petrol pump, island dispenser",
"goblet",
"go-kart",
"golf ball",
"golfcart, golf cart",
"gondola",
"gong, tam-tam",
"gown",
"grand piano, grand",
"greenhouse, nursery, glasshouse",
"grille, radiator grille",
"grocery store, grocery, food market, market",
"guillotine",
"hair slide",
"hair spray",
"half track",
"hammer",
"hamper",
"hand blower, blow dryer, blow drier, hair dryer, hair drier",
"hand-held computer, hand-held microcomputer",
"handkerchief, hankie, hanky, hankey",
"hard disc, hard disk, fixed disk",
"harmonica, mouth organ, harp, mouth harp",
"harp",
"harvester, reaper",
"hatchet",
"holster",
"home theater, home theatre",
"honeycomb",
"hook, claw",
"hoopskirt, crinoline",
"horizontal bar, high bar",
"horse cart, horse-cart",
"hourglass",
"ipod",
"iron, smoothing iron",
"jack-o'-lantern",
"jean, blue jean, denim",
"jeep, landrover",
"jersey, t-shirt, tee shirt",
"jigsaw puzzle",
"jinrikisha, ricksha, rickshaw",
"joystick",
"kimono",
"knee pad",
"knot",
"lab coat, laboratory coat",
"ladle",
"lampshade, lamp shade",
"laptop, laptop computer",
"lawn mower, mower",
"lens cap, lens cover",
"letter opener, paper knife, paperknife",
"library",
"lifeboat",
"lighter, light, igniter, ignitor",
"limousine, limo",
"liner, ocean liner",
"lipstick, lip rouge",
"loafer",
"lotion",
"loudspeaker, speaker, speaker unit, loudspeaker system, speaker system",
"loupe, jeweler's loupe",
"lumbermill, sawmill",
"magnetic compass",
"mailbag, postbag",
"mailbox, letter box",
"maillot",
"maillot, tank suit",
"manhole cover",
"maraca",
"marimba, xylophone",
"mask",
"matchstick",
"maypole",
"maze, labyrinth",
"measuring cup",
"medicine chest, medicine cabinet",
"megalith, megalithic structure",
"microphone, mike",
"microwave, microwave oven",
"military uniform",
"milk can",
"minibus",
"miniskirt, mini",
"minivan",
"missile",
"mitten",
"mixing bowl",
"mobile home, manufactured home",
"model t",
"modem",
"monastery",
"monitor",
"moped",
"mortar",
"mortarboard",
"mosque",
"mosquito net",
"motor scooter, scooter",
"mountain bike, all-terrain bike, off-roader",
"mountain tent",
"mouse, computer mouse",
"mousetrap",
"moving van",
"muzzle",
"nail",
"neck brace",
"necklace",
"nipple",
"notebook, notebook computer",
"obelisk",
"oboe, hautboy, hautbois",
"ocarina, sweet potato",
"odometer, hodometer, mileometer, milometer",
"oil filter",
"organ, pipe organ",
"oscilloscope, scope, cathode-ray oscilloscope, cro",
"overskirt",
"oxcart",
"oxygen mask",
"packet",
"paddle, boat paddle",
"paddlewheel, paddle wheel",
"padlock",
"paintbrush",
"pajama, pyjama, pj's, jammies",
"palace",
"panpipe, pandean pipe, syrinx",
"paper towel",
"parachute, chute",
"parallel bars, bars",
"park bench",
"parking meter",
"passenger car, coach, carriage",
"patio, terrace",
"pay-phone, pay-station",
"pedestal, plinth, footstall",
"pencil box, pencil case",
"pencil sharpener",
"perfume, essence",
"petri dish",
"photocopier",
"pick, plectrum, plectron",
"pickelhaube",
"picket fence, paling",
"pickup, pickup truck",
"pier",
"piggy bank, penny bank",
"pill bottle",
"pillow",
"ping-pong ball",
"pinwheel",
"pirate, pirate ship",
"pitcher, ewer",
"plane, carpenter's plane, woodworking plane",
"planetarium",
"plastic bag",
"plate rack",
"plow, plough",
"plunger, plumber's helper",
"polaroid camera, polaroid land camera",
"pole",
"police van, police wagon, paddy wagon, patrol wagon, wagon, black maria",
"poncho",
"pool table, billiard table, snooker table",
"pop bottle, soda bottle",
"pot, flowerpot",
"potter's wheel",
"power drill",
"prayer rug, prayer mat",
"printer",
"prison, prison house",
"projectile, missile",
"projector",
"puck, hockey puck",
"punching bag, punch bag, punching ball, punchball",
"purse",
"quill, quill pen",
"quilt, comforter, comfort, puff",
"racer, race car, racing car",
"racket, racquet",
"radiator",
"radio, wireless",
"radio telescope, radio reflector",
"rain barrel",
"recreational vehicle, rv, r.v.",
"reel",
"reflex camera",
"refrigerator, icebox",
"remote control, remote",
"restaurant, eating house, eating place, eatery",
"revolver, six-gun, six-shooter",
"rifle",
"rocking chair, rocker",
"rotisserie",
"rubber eraser, rubber, pencil eraser",
"rugby ball",
"rule, ruler",
"running shoe",
"safe",
"safety pin",
"saltshaker, salt shaker",
"sandal",
"sarong",
"sax, saxophone",
"scabbard",
"scale, weighing machine",
"school bus",
"schooner",
"scoreboard",
"screen, crt screen",
"screw",
"screwdriver",
"seat belt, seatbelt",
"sewing machine",
"shield, buckler",
"shoe shop, shoe-shop, shoe store",
"shoji",
"shopping basket",
"shopping cart",
"shovel",
"shower cap",
"shower curtain",
"ski",
"ski mask",
"sleeping bag",
"slide rule, slipstick",
"sliding door",
"slot, one-armed bandit",
"snorkel",
"snowmobile",
"snowplow, snowplough",
"soap dispenser",
"soccer ball",
"sock",
"solar dish, solar collector, solar furnace",
"sombrero",
"soup bowl",
"space bar",
"space heater",
"space shuttle",
"spatula",
"speedboat",
"spider web, spider's web",
"spindle",
"sports car, sport car",
"spotlight, spot",
"stage",
"steam locomotive",
"steel arch bridge",
"steel drum",
"stethoscope",
"stole",
"stone wall",
"stopwatch, stop watch",
"stove",
"strainer",
"streetcar, tram, tramcar, trolley, trolley car",
"stretcher",
"studio couch, day bed",
"stupa, tope",
"submarine, pigboat, sub, u-boat",
"suit, suit of clothes",
"sundial",
"sunglass",
"sunglasses, dark glasses, shades",
"sunscreen, sunblock, sun blocker",
"suspension bridge",
"swab, swob, mop",
"sweatshirt",
"swimming trunks, bathing trunks",
"swing",
"switch, electric switch, electrical switch",
"syringe",
"table lamp",
"tank, army tank, armored combat vehicle, armoured combat vehicle",
"tape player",
"teapot",
"teddy, teddy bear",
"television, television system",
"tennis ball",
"thatch, thatched roof",
"theater curtain, theatre curtain",
"thimble",
"thresher, thrasher, threshing machine",
"throne",
"tile roof",
"toaster",
"tobacco shop, tobacconist shop, tobacconist",
"toilet seat",
"torch",
"totem pole",
"tow truck, tow car, wrecker",
"toyshop",
"tractor",
"trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi",
"tray",
"trench coat",
"tricycle, trike, velocipede",
"trimaran",
"tripod",
"triumphal arch",
"trolleybus, trolley coach, trackless trolley",
"trombone",
"tub, vat",
"turnstile",
"typewriter keyboard",
"umbrella",
"unicycle, monocycle",
"upright, upright piano",
"vacuum, vacuum cleaner",
"vase",
"vault",
"velvet",
"vending machine",
"vestment",
"viaduct",
"violin, fiddle",
"volleyball",
"waffle iron",
"wall clock",
"wallet, billfold, notecase, pocketbook",
"wardrobe, closet, press",
"warplane, military plane",
"washbasin, handbasin, washbowl, lavabo, wash-hand basin",
"washer, automatic washer, washing machine",
"water bottle",
"water jug",
"water tower",
"whiskey jug",
"whistle",
"wig",
"window screen",
"window shade",
"windsor tie",
"wine bottle",
"wing",
"wok",
"wooden spoon",
"wool, woolen, woollen",
"worm fence, snake fence, snake-rail fence, virginia fence",
"wreck",
"yawl",
"yurt",
"web site, website, internet site, site",
"comic book",
"crossword puzzle, crossword",
"street sign",
"traffic light, traffic signal, stoplight",
"book jacket, dust cover, dust jacket, dust wrapper",
"menu",
"plate",
"guacamole",
"consomme",
"hot pot, hotpot",
"trifle",
"ice cream, icecream",
"ice lolly, lolly, lollipop, popsicle",
"french loaf",
"bagel, beigel",
"pretzel",
"cheeseburger",
"hotdog, hot dog, red hot",
"mashed potato",
"head cabbage",
"broccoli",
"cauliflower",
"zucchini, courgette",
"spaghetti squash",
"acorn squash",
"butternut squash",
"cucumber, cuke",
"artichoke, globe artichoke",
"bell pepper",
"cardoon",
"mushroom",
"granny smith",
"strawberry",
"orange",
"lemon",
"fig",
"pineapple, ananas",
"banana",
"jackfruit, jak, jack",
"custard apple",
"pomegranate",
"hay",
"carbonara",
"chocolate sauce, chocolate syrup",
"dough",
"meat loaf, meatloaf",
"pizza, pizza pie",
"potpie",
"burrito",
"red wine",
"espresso",
"cup",
"eggnog",
"alp",
"bubble",
"cliff, drop, drop-off",
"coral reef",
"geyser",
"lakeside, lakeshore",
"promontory, headland, head, foreland",
"sandbar, sand bar",
"seashore, coast, seacoast, sea-coast",
"valley, vale",
"volcano",
"ballplayer, baseball player",
"groom, bridegroom",
"scuba diver",
"rapeseed",
"daisy",
"yellow lady's slipper, yellow lady-slipper, cypripedium calceolus, cypripedium parviflorum",
"corn",
"acorn",
"hip, rose hip, rosehip",
"buckeye, horse chestnut, conker",
"coral fungus",
"agaric",
"gyromitra",
"stinkhorn, carrion fungus",
"earthstar",
"hen-of-the-woods, hen of the woods, polyporus frondosus, grifola frondosa",
"bolete",
"ear, spike, capitulum",
"toilet tissue, toilet paper, bathroom tissue"
] |
ajay-drew/plant-disease-vit
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"label_0",
"label_1",
"label_2",
"label_3",
"label_4",
"label_5",
"label_6",
"label_7",
"label_8",
"label_9",
"label_10",
"label_11",
"label_12",
"label_13",
"label_14",
"label_15",
"label_16",
"label_17",
"label_18",
"label_19",
"label_20",
"label_21",
"label_22",
"label_23",
"label_24",
"label_25",
"label_26",
"label_27",
"label_28",
"label_29",
"label_30",
"label_31",
"label_32",
"label_33",
"label_34",
"label_35",
"label_36",
"label_37"
] |
prithivMLmods/Coral-Health
|

# **Coral-Health**
> **Coral-Health** is an image-classification model fine-tuned from the **google/siglip2-base-patch16-224** vision-language encoder for a single-label task: classifying coral reef images as bleached or healthy using the **SiglipForImageClassification** architecture.
```py
Classification Report:
                 precision    recall  f1-score   support

Bleached Corals     0.8677    0.7561    0.8081      4850
 Healthy Corals     0.7665    0.8742    0.8168      4442

       accuracy                         0.8125      9292
      macro avg     0.8171    0.8151    0.8124      9292
   weighted avg     0.8193    0.8125    0.8122      9292
```

The model categorizes images into two classes:
- **Class 0:** Bleached Corals
- **Class 1:** Healthy Corals
---
# **Run with Transformers 🤗**
```python
!pip install -q transformers torch pillow gradio
```
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch

# Load model and processor
model_name = "prithivMLmods/Coral-Health"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

# Updated labels
labels = {
    "0": "Bleached Corals",
    "1": "Healthy Corals"
}

def coral_health_detection(image):
    """Predicts the health condition of coral reefs in the image."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}
    return predictions

# Create Gradio interface
iface = gr.Interface(
    fn=coral_health_detection,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(label="Prediction Scores"),
    title="Coral Health Detection",
    description="Upload an image of coral reefs to classify their condition as Bleached or Healthy."
)

# Launch the app
if __name__ == "__main__":
    iface.launch()
```
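If you only need single-image predictions without the Gradio UI, the 🤗 `pipeline` API offers a shorter path. This is a minimal sketch, assuming the checkpoint ships a standard `id2label` mapping; `reef.jpg` is a hypothetical local file:

```python
from transformers import pipeline

# The image-classification pipeline resolves the processor and labels automatically
pipe = pipeline("image-classification", model="prithivMLmods/Coral-Health")

# "reef.jpg" is a placeholder path to a local coral image
print(pipe("reef.jpg"))  # e.g. [{'label': 'Healthy Corals', 'score': ...}, ...]
```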
---
# **Intended Use:**
The **Coral-Health** model is designed to support marine conservation and environmental monitoring. Potential use cases include:
- **Coral Reef Monitoring:** Helping scientists and conservationists track coral bleaching events.
- **Environmental Impact Assessment:** Analyzing reef health in response to climate change and pollution.
- **Educational Tools:** Raising awareness about coral reef health in classrooms and outreach programs.
- **Automated Drone/ROV Analysis:** Enhancing automated underwater monitoring workflows.
|
[
"bleached corals",
"healthy corals"
] |
prithivMLmods/Food-101-93M
|

# **Food-101-93M**
> **Food-101-93M** is a fine-tuned image classification model built on top of **google/siglip2-base-patch16-224** using the **SiglipForImageClassification** architecture. It is trained to classify food images into one of 101 popular dishes, derived from the [Food-101 dataset](https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/).
```py
Classification Report:
precision recall f1-score support
apple_pie 0.8399 0.8253 0.8325 750
baby_back_ribs 0.9445 0.8853 0.9140 750
baklava 0.9736 0.9347 0.9537 750
beef_carpaccio 0.9079 0.9200 0.9139 750
beef_tartare 0.8486 0.8293 0.8388 750
beet_salad 0.8649 0.8707 0.8678 750
beignets 0.8961 0.9080 0.9020 750
bibimbap 0.9361 0.9373 0.9367 750
bread_pudding 0.7979 0.8000 0.7989 750
breakfast_burrito 0.8784 0.9053 0.8917 750
bruschetta 0.8672 0.8533 0.8602 750
caesar_salad 0.9444 0.9293 0.9368 750
cannoli 0.9263 0.9547 0.9402 750
caprese_salad 0.9110 0.9280 0.9194 750
carrot_cake 0.9068 0.8040 0.8523 750
ceviche 0.8375 0.8453 0.8414 750
cheesecake 0.8225 0.8093 0.8159 750
cheese_plate 0.9627 0.9627 0.9627 750
chicken_curry 0.8970 0.8827 0.8898 750
chicken_quesadilla 0.9254 0.9093 0.9173 750
chicken_wings 0.9512 0.9360 0.9435 750
chocolate_cake 0.7958 0.8107 0.8032 750
chocolate_mousse 0.6947 0.7827 0.7361 750
churros 0.9440 0.9440 0.9440 750
clam_chowder 0.8883 0.9120 0.9000 750
club_sandwich 0.9396 0.9133 0.9263 750
crab_cakes 0.9185 0.8720 0.8947 750
creme_brulee 0.9141 0.9227 0.9184 750
croque_madame 0.9106 0.8960 0.9032 750
cup_cakes 0.8986 0.9333 0.9156 750
deviled_eggs 0.9787 0.9813 0.9800 750
donuts 0.8893 0.8787 0.8840 750
dumplings 0.9212 0.8880 0.9043 750
edamame 0.9960 0.9920 0.9940 750
eggs_benedict 0.9207 0.9440 0.9322 750
escargots 0.8709 0.8907 0.8807 750
falafel 0.8945 0.8933 0.8939 750
filet_mignon 0.7598 0.7467 0.7532 750
fish_and_chips 0.9454 0.9467 0.9460 750
foie_gras 0.6659 0.8027 0.7279 750
french_fries 0.9447 0.9333 0.9390 750
french_onion_soup 0.8667 0.9187 0.8919 750
french_toast 0.8890 0.8760 0.8825 750
fried_calamari 0.9448 0.9133 0.9288 750
fried_rice 0.9325 0.9213 0.9269 750
frozen_yogurt 0.8716 0.9507 0.9094 750
garlic_bread 0.9103 0.8800 0.8949 750
gnocchi 0.8554 0.8280 0.8415 750
greek_salad 0.9203 0.9240 0.9222 750
grilled_cheese_sandwich 0.8523 0.8773 0.8647 750
grilled_salmon 0.8463 0.8960 0.8705 750
guacamole 0.9537 0.9347 0.9441 750
gyoza 0.8970 0.9173 0.9071 750
hamburger 0.8899 0.8947 0.8923 750
hot_and_sour_soup 0.9439 0.9413 0.9426 750
hot_dog 0.8859 0.9320 0.9084 750
huevos_rancheros 0.8465 0.8827 0.8642 750
hummus 0.9394 0.9093 0.9241 750
ice_cream 0.8633 0.8507 0.8570 750
lasagna 0.8780 0.8733 0.8757 750
lobster_bisque 0.8952 0.9107 0.9028 750
lobster_roll_sandwich 0.9664 0.9573 0.9618 750
macaroni_and_cheese 0.9273 0.9013 0.9141 750
macarons 0.9892 0.9747 0.9819 750
miso_soup 0.9565 0.9667 0.9615 750
mussels 0.9602 0.9640 0.9621 750
nachos 0.9337 0.9387 0.9362 750
omelette 0.8889 0.8960 0.8924 750
onion_rings 0.9493 0.9493 0.9493 750
oysters 0.9808 0.9533 0.9669 750
pad_thai 0.9188 0.9507 0.9345 750
paella 0.9352 0.9240 0.9296 750
pancakes 0.9277 0.9067 0.9171 750
panna_cotta 0.8056 0.8507 0.8275 750
peking_duck 0.8529 0.9120 0.8814 750
pho 0.9746 0.9227 0.9479 750
pizza 0.9512 0.9360 0.9435 750
pork_chop 0.8085 0.7373 0.7713 750
poutine 0.9424 0.9387 0.9405 750
prime_rib 0.9106 0.8147 0.8600 750
pulled_pork_sandwich 0.8887 0.9053 0.8970 750
ramen 0.8986 0.9213 0.9098 750
ravioli 0.8532 0.8293 0.8411 750
red_velvet_cake 0.9330 0.8907 0.9113 750
risotto 0.8809 0.8680 0.8744 750
samosa 0.9153 0.9227 0.9190 750
sashimi 0.9248 0.9187 0.9217 750
scallops 0.8564 0.8507 0.8535 750
seaweed_salad 0.9597 0.9533 0.9565 750
shrimp_and_grits 0.8995 0.8947 0.8971 750
spaghetti_bolognese 0.9667 0.9667 0.9667 750
spaghetti_carbonara 0.9601 0.9627 0.9614 750
spring_rolls 0.9045 0.9467 0.9251 750
steak 0.6311 0.7027 0.6650 750
strawberry_shortcake 0.8832 0.8467 0.8645 750
sushi 0.9204 0.8947 0.9074 750
tacos 0.9225 0.8893 0.9056 750
takoyaki 0.9419 0.9507 0.9463 750
tiramisu 0.9074 0.8627 0.8845 750
tuna_tartare 0.7691 0.7773 0.7732 750
waffles 0.9629 0.9347 0.9486 750
accuracy 0.8973 75750
macro avg 0.8987 0.8973 0.8977 75750
weighted avg 0.8987 0.8973 0.8977 75750
```
The model categorizes images into 101 food classes such as `sushi`, `hamburger`, `waffles`, `pad_thai`, and more.
---
# **Run with Transformers 🤗**
```python
!pip install -q transformers torch pillow gradio
```
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch

# Load model and processor
model_name = "prithivMLmods/Food-101-93M"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

# Food-101 labels
labels = {
    "0": "apple_pie", "1": "baby_back_ribs", "2": "baklava", "3": "beef_carpaccio", "4": "beef_tartare",
    "5": "beet_salad", "6": "beignets", "7": "bibimbap", "8": "bread_pudding", "9": "breakfast_burrito",
    "10": "bruschetta", "11": "caesar_salad", "12": "cannoli", "13": "caprese_salad", "14": "carrot_cake",
    "15": "ceviche", "16": "cheesecake", "17": "cheese_plate", "18": "chicken_curry", "19": "chicken_quesadilla",
    "20": "chicken_wings", "21": "chocolate_cake", "22": "chocolate_mousse", "23": "churros", "24": "clam_chowder",
    "25": "club_sandwich", "26": "crab_cakes", "27": "creme_brulee", "28": "croque_madame", "29": "cup_cakes",
    "30": "deviled_eggs", "31": "donuts", "32": "dumplings", "33": "edamame", "34": "eggs_benedict",
    "35": "escargots", "36": "falafel", "37": "filet_mignon", "38": "fish_and_chips", "39": "foie_gras",
    "40": "french_fries", "41": "french_onion_soup", "42": "french_toast", "43": "fried_calamari", "44": "fried_rice",
    "45": "frozen_yogurt", "46": "garlic_bread", "47": "gnocchi", "48": "greek_salad", "49": "grilled_cheese_sandwich",
    "50": "grilled_salmon", "51": "guacamole", "52": "gyoza", "53": "hamburger", "54": "hot_and_sour_soup",
    "55": "hot_dog", "56": "huevos_rancheros", "57": "hummus", "58": "ice_cream", "59": "lasagna",
    "60": "lobster_bisque", "61": "lobster_roll_sandwich", "62": "macaroni_and_cheese", "63": "macarons", "64": "miso_soup",
    "65": "mussels", "66": "nachos", "67": "omelette", "68": "onion_rings", "69": "oysters",
    "70": "pad_thai", "71": "paella", "72": "pancakes", "73": "panna_cotta", "74": "peking_duck",
    "75": "pho", "76": "pizza", "77": "pork_chop", "78": "poutine", "79": "prime_rib",
    "80": "pulled_pork_sandwich", "81": "ramen", "82": "ravioli", "83": "red_velvet_cake", "84": "risotto",
    "85": "samosa", "86": "sashimi", "87": "scallops", "88": "seaweed_salad", "89": "shrimp_and_grits",
    "90": "spaghetti_bolognese", "91": "spaghetti_carbonara", "92": "spring_rolls", "93": "steak", "94": "strawberry_shortcake",
    "95": "sushi", "96": "tacos", "97": "takoyaki", "98": "tiramisu", "99": "tuna_tartare", "100": "waffles"
}

def classify_food(image):
    """Predicts the type of food in the image."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}

    # Sort by descending probability and keep the top 5
    predictions = dict(sorted(predictions.items(), key=lambda item: item[1], reverse=True)[:5])
    return predictions

# Gradio Interface
iface = gr.Interface(
    fn=classify_food,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(num_top_classes=5, label="Top 5 Prediction Scores"),
    title="Food-101-93M 🍽️",
    description="Upload an image of food to classify it into one of 101 dish categories based on the Food-101 dataset."
)

# Launch app
if __name__ == "__main__":
    iface.launch()
```
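As a lighter-weight alternative to the Gradio app above, the same top-5 behaviour can be sketched with the `pipeline` API (assuming the checkpoint's label mapping is populated; `meal.jpg` is a hypothetical file):

```python
from transformers import pipeline

pipe = pipeline("image-classification", model="prithivMLmods/Food-101-93M")

# top_k=5 mirrors the Gradio app's "Top 5 Prediction Scores" output
for pred in pipe("meal.jpg", top_k=5):
    print(f"{pred['label']}: {pred['score']:.3f}")
```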
---
# **Intended Use:**
The **Food-101-93M** model is intended for:
- **Recipe Recommendation Engines:** Automatically tagging food images to suggest recipes.
- **Food Logging & Calorie Tracking Apps:** Categorizing meals based on photos.
- **Smart Kitchens:** Assisting food recognition in smart appliances.
- **Restaurant Menu Digitization:** Auto-classifying dishes for visual menus or ordering systems.
- **Dataset Labeling:** Enabling automatic annotation of food datasets for training other ML models (see the sketch below).
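For the dataset-labeling use case, one possible batch-annotation loop looks as follows. This is a sketch, not an official utility; the `images/` folder and the `labels.csv` output file are assumptions:

```python
import csv
from pathlib import Path

from transformers import pipeline

pipe = pipeline("image-classification", model="prithivMLmods/Food-101-93M")

# Hypothetical layout: unlabeled images live in ./images, labels are written to labels.csv
with open("labels.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["file", "label", "score"])
    for path in sorted(Path("images").glob("*.jpg")):
        top = pipe(str(path), top_k=1)[0]
        writer.writerow([path.name, top["label"], round(top["score"], 3)])
```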
|
[
"apple_pie",
"baby_back_ribs",
"baklava",
"beef_carpaccio",
"beef_tartare",
"beet_salad",
"beignets",
"bibimbap",
"bread_pudding",
"breakfast_burrito",
"bruschetta",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare",
"waffles"
] |
prithivMLmods/Mature-Content-Detection
|

# **Mature-Content-Detection**
> **Mature-Content-Detection** is an image classification vision-language encoder model fine-tuned from **google/siglip2-base-patch16-224** for a single-label classification task. It is designed to classify images into various mature or neutral content categories using the **SiglipForImageClassification** architecture.
> [!Note]
> Use this model to support positive, safe, and respectful digital spaces. Misuse is strongly discouraged and may violate platform or regional policies. This model does not generate any unsafe content: it is a classifier that only labels images, so it does not fall under the category of models unsuitable for all audiences.
> [!Important]
> Neutral = Safe / Normal
```py
Classification Report:
                     precision    recall  f1-score   support

      Anime Picture     0.8130    0.8066    0.8098      5600
             Hentai     0.8317    0.8134    0.8224      4180
            Neutral     0.8344    0.7785    0.8055      5503
        Pornography     0.9161    0.8464    0.8799      5600
Enticing or Sensual     0.7699    0.8979    0.8290      5600

           accuracy                         0.8296     26483
          macro avg     0.8330    0.8286    0.8293     26483
       weighted avg     0.8331    0.8296    0.8298     26483
```
---

---
The model categorizes images into five classes:
- **Class 0:** Anime Picture
- **Class 1:** Hentai
- **Class 2:** Neutral
- **Class 3:** Pornography
- **Class 4:** Enticing or Sensual
# **Run with Transformers 🤗**
```python
!pip install -q transformers torch pillow gradio
```
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch

# Load model and processor
model_name = "prithivMLmods/Mature-Content-Detection"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

# Updated labels
labels = {
    "0": "Anime Picture",
    "1": "Hentai",
    "2": "Neutral",
    "3": "Pornography",
    "4": "Enticing or Sensual"
}

def mature_content_detection(image):
    """Predicts the type of content in the image."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}
    return predictions

# Create Gradio interface
iface = gr.Interface(
    fn=mature_content_detection,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(label="Prediction Scores"),
    title="Mature Content Detection",
    description="Upload an image to classify whether it contains anime, hentai, neutral, pornographic, or enticing/sensual content."
)

# Launch the app
if __name__ == "__main__":
    iface.launch()
```
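Because the model's output is probabilistic (see the notes further below), moderation pipelines typically apply a screening threshold rather than trusting the argmax outright. A hedged sketch of that pattern, with the 0.85 threshold chosen purely for illustration:

```python
FLAGGED = {"Hentai", "Pornography", "Enticing or Sensual"}

def needs_human_review(predictions, threshold=0.85):
    """Route an image to manual review unless a safe label clearly dominates.

    `predictions` is the {label: probability} dict returned by
    mature_content_detection above; the threshold is illustrative only.
    """
    label, prob = max(predictions.items(), key=lambda kv: kv[1])
    if label in FLAGGED:
        return True                  # any flagged top label goes to review
    return prob < threshold          # low-confidence "safe" calls go too
```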
---
# **Guidelines for Using Mature-Content-Detection**
The **Mature-Content-Detection** model is a computer vision classifier designed to detect and categorize adult-themed and anime-based content. It supports responsible content moderation and filtering across digital platforms. To ensure the ethical and intended use of the model, please follow the guidelines below:
# **Recommended Use Cases**
- **Content Moderation:** Automatically filter explicit or suggestive content in online communities, forums, or media-sharing platforms.
- **Parental Controls:** Enable content-safe environments for children by flagging inappropriate images.
- **Dataset Curation:** Clean and label image datasets for safe and compliant ML training.
- **Digital Wellbeing:** Assist in building safer AI and web experiences by identifying sensitive media content.
- **Search Engine Filtering:** Improve content relevance and safety in image-based search results.
# **Prohibited / Discouraged Use**
- **Malicious Intent:** Do not use the model to harass, shame, expose, or target individuals or communities.
- **Invasion of Privacy:** Avoid deploying the model on private or sensitive user data without proper consent.
- **Illegal Activities:** Never use the model for generating, distributing, or flagging illegal content.
- **Bias Amplification:** Do not rely solely on this model to make sensitive moderation decisions. Always include human oversight, especially where reputational or legal consequences are involved.
- **Manipulation or Misrepresentation:** Avoid using this model to manipulate or misrepresent content classification in unethical ways.
# **Important Notes**
- This model works best on **anime and adult content** images. It is **not designed for general images** or unrelated categories (e.g., child, violence, hate symbols, drugs).
- The output of the model is **probabilistic**, not definitive. Consider it a **screening tool**, not a sole decision-maker.
- The labels reflect the model's best interpretation of visual signals — not moral or legal judgments.
- Always **review flagged content manually** in high-stakes applications.
## **Ethical Reminder**
This model was built to **help** create safer digital ecosystems. **Do not misuse it** for exploitation, surveillance without consent, or personal gain at the expense of others. By using this model, you agree to act responsibly and ethically, keeping safety and privacy a top priority.
# **Sample Inference**
| Screenshot 1 | Screenshot 2 | Screenshot 3 |
|--------------|--------------|--------------|
|  |  |  |
| Screenshot 4 | Screenshot 5 | Screenshot 6 |
|--------------|--------------|--------------|
|  |  |  |
| Screenshot 7 |
|--------------|
|  |
# **Intended Use:**
The **Mature-Content-Detection** model is designed to classify visual content for moderation and filtering purposes. Potential use cases include:
- **Content Moderation:** Automatically flagging explicit or sensitive content on platforms.
- **Parental Control Systems:** Filtering inappropriate material for child-safe environments.
- **Search Engine Filtering:** Improving search results by categorizing unsafe content.
- **Dataset Cleaning:** Assisting in curation of safe training datasets for other AI models.
|
[
"anime picture",
"hentai",
"neutral",
"pornography",
"enticing or sensual"
] |
dewiri/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1848
- Accuracy: 0.9445
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3692 | 1.0 | 370 | 0.2796 | 0.9269 |
| 0.2224 | 2.0 | 740 | 0.2251 | 0.9283 |
| 0.1585 | 3.0 | 1110 | 0.2074 | 0.9350 |
| 0.1421 | 4.0 | 1480 | 0.2000 | 0.9350 |
| 0.1391 | 5.0 | 1850 | 0.1989 | 0.9337 |
### Zero-shot evaluation results
- Accuracy: 0.8800
- Precision: 0.8768
- Recall: 0.8800
Model used for zero-shot classification: openai/clip-vit-large-patch14
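The zero-shot numbers above can be reproduced in spirit with the `zero-shot-image-classification` pipeline. This is a sketch only: the candidate label list is abbreviated here, and prompt templates may differ from the original evaluation; `pet.jpg` is a placeholder path.

```python
from transformers import pipeline

clip = pipeline("zero-shot-image-classification", model="openai/clip-vit-large-patch14")

# Abbreviated candidate set; the full evaluation uses all 37 breed names
breeds = ["siamese", "birman", "shiba inu", "pug", "beagle"]
print(clip("pet.jpg", candidate_labels=breeds))
```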
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
werent4/ViTas-f1
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
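Pending official instructions, a generic starting point for any Hub image-classification checkpoint should apply here as well. A sketch assuming this repo ships a standard processor config and `id2label` mapping; `face.jpg` is a placeholder file:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("werent4/ViTas-f1")
model = AutoModelForImageClassification.from_pretrained("werent4/ViTas-f1")

image = Image.open("face.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Map the winning class index back to its emotion label
print(model.config.id2label[logits.argmax(-1).item()])
```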
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"anger",
"surprise",
"contempt",
"happy",
"neutral",
"fear",
"sad",
"disgust"
] |
dame-ningen/mushroom-classifier
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
Look up the id-to-label dictionary in `config.json`.
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
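In line with the note above that the id-to-label dictionary lives in `config.json`, a hedged sketch for loading the classifier and inspecting that mapping (assuming a standard 🤗 image-classification layout):

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("dame-ningen/mushroom-classifier")
model = AutoModelForImageClassification.from_pretrained("dame-ningen/mushroom-classifier")

# The id2label mapping stored in config.json; print the first few entries
for idx, name in list(model.config.id2label.items())[:5]:
    print(idx, name)
```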
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"(poisonous) rubroboletus_pulcherrimus",
"(poisonous) stropharia_aeruginosa",
"(poisonous) clitocybe_fragrans",
"(poisonous) jelly_tooth",
"(poisonous) coprinopsis_romagnesiana",
"(poisonous) tricholoma_pardinum",
"(poisonous) amanita_frostiana",
"(poisonous) blackening_waxcap",
"(poisonous) cudonia_circinans",
"(poisonous) amanita_pseudoregalis",
"(poisonous) inocybe_lilacina",
"(poisonous) hypholoma_lateritium",
"(poisonous) entoloma_sinuatum",
"(poisonous) amanita_petalinivolva",
"(poisonous) imperator_torosus",
"(poisonous) king_alfreds_cakes",
"(poisonous) cortinarius_cinnabarinus",
"(poisonous) helvella_lactea",
"(poisonous) white_saddle",
"(poisonous) cortinarius_limonius",
"(poisonous) amanita_gemmata",
"(poisonous) amanita_xanthocephala",
"(poisonous) warted_amanita",
"(poisonous) coprinopsis_alopecia",
"(poisonous) amanita_porphyria",
"(poisonous) cortinarius_palustris",
"(poisonous) spotted_toughshank",
"(poisonous) entoloma_albidum",
"(poisonous) grey_spotted_amanita",
"(poisonous) rubroboletus_satanas",
"(poisonous) amanita_albocreata",
"(poisonous) panthercap",
"(poisonous) saffron_milkcap",
"(poisonous) inocybe_hystrix",
"(poisonous) deadly_webcap",
"(poisonous) peppery_bolete",
"(poisonous) coprinellus_micaceus",
"(poisonous) psilocybe_semilanceata",
"(poisonous) bovine_bolete",
"(poisonous) panaeolus_cinctulus",
"(poisonous) amanita_parvipantherina",
"(poisonous) deadly_fibrecap",
"(poisonous) cortinarius_phoeniceus",
"(poisonous) amanita_farinosa",
"(poisonous) hypholoma_marginatum",
"(poisonous) orange_peel_fungus",
"(poisonous) omphalotus_olearius",
"(poisonous) ochre_brittlegill",
"(poisonous) amanita_wellsii",
"(poisonous) old_man_of_the_woods",
"(poisonous) tricholoma_equestre",
"(poisonous) bruising_webcap",
"(poisonous) sarcosphaera_coronaria",
"(poisonous) agrocybe_arenicola",
"(poisonous) white_false_death_cap",
"(poisonous) rubroboletus_lupinus",
"(poisonous) amanita_nehuta",
"(poisonous) ramaria_pallida",
"(poisonous) hebeloma_crustuliniforme",
"(poisonous) agaricus_menieri",
"(poisonous) amanita_rubrovolvata",
"(poisonous) echinoderma_asperum",
"(poisonous) gyromitra_perlata",
"(poisonous) hairy_curtain_crust",
"(poisonous) hebeloma_sinapizans",
"(poisonous) russula_olivacea",
"(poisonous) amanita_boudieri",
"(poisonous) blushing_rosette",
"(poisonous) blackening_polypore",
"(poisonous) bitter_beech_bolete",
"(poisonous) gyromitra esculenta",
"(poisonous) hypholoma_fasciculare",
"(poisonous) amanita_roseotincta",
"(poisonous) jelly_ears",
"(poisonous) the_sickener",
"(poisonous) amanita_smithiana",
"(poisonous) cortinarius_smithii",
"(poisonous) black_morel",
"(poisonous) tricholoma_muscarium",
"(poisonous) inky_mushroom",
"(poisonous) mycena_rosea",
"(poisonous) penny_bun",
"(poisonous) suede_bolete",
"(poisonous) ramaria_neoformosa",
"(poisonous) calocera_viscosa",
"(poisonous) mycena_diosma",
"(poisonous) chlorophyllum_brunneum",
"(poisonous) cortinarius_cruentus",
"(poisonous) amanita_parcivolvata",
"(poisonous) cortinarius_cinnamomeoluteus",
"(poisonous) agaricus_hondensis",
"(poisonous) gyromitra gigas",
"(poisonous) agaricus_moelleri",
"(poisonous) amanita_velatipes",
"(poisonous) helvella_lacunosa",
"(poisonous) imperator_rhodopurpureus",
"(poisonous) scarlet_waxcap",
"(poisonous) echinoderma_calcicola",
"(poisonous) scarlet_elfcup",
"(poisonous) ramaria_formosa",
"(poisonous) agaricus_phaeolepidotus",
"(poisonous) pholiotina_rugosa",
"(poisonous) agaricus_placomyces",
"(poisonous) coprinopsis atramentaria",
"(poisonous) amanita",
"(poisonous) helvella_vespertina",
"(poisonous) amanita_ceciliae",
"(poisonous) omphalotus_illudens",
"(poisonous) russula_viscida",
"(poisonous) amanita_neoovoidea",
"(poisonous) conocybe_subovalis",
"(poisonous) helvella_dryophila",
"(poisonous) stinking_dapperling",
"(poisonous) calycina citrina",
"(poisonous) hedgehog_fungus",
"(poisonous) paralepistopsis_amoenolens",
"(poisonous) amanita muscaria",
"(poisonous) choiromyces_venosus",
"(poisonous) amanita_ibotengutake",
"(poisonous) lactarius_torminosus",
"(poisonous) bronze_bolete",
"(poisonous) inocybe_rimosa",
"(poisonous) amanita_citrina",
"(poisonous) neonothopanus_nambi",
"(poisonous) trooping_funnel",
"(poisonous) yellow_false_truffle",
"(poisonous) amanita_heterochroma",
"(poisonous) cortinarius_callisteus",
"(poisonous) blushing_bracket",
"(poisonous) lurid_bolete",
"(poisonous) bjerkandera adusta",
"(poisonous) tricholoma_filamentosum",
"(poisonous) entoloma_rhodopolium",
"(poisonous) oak_polypore",
"(poisonous) mycena_pura",
"(poisonous) larch_bolete",
"(poisonous) cortinarius_malicorius",
"(poisonous) xanthoria parietina",
"(poisonous) amanita_eliae",
"(poisonous) amanita citrina",
"(poisonous) amanita_gioiosa",
"(poisonous) yellow_stagshorn",
"(poisonous) yellow_stainer",
"(poisonous) cortinarius_bolaris",
"(poisonous) cortinarius_rubicundulus",
"(poisonous) giant_puffball",
"(poisonous) ampulloclitocybe_clavipes",
"(poisonous) amanita_regalis",
"(poisonous) inocybe_sublilacina",
"(poisonous) parasol",
"(poisonous) helvella_crispa",
"(poisonous) agaricus_xanthodermus",
"(poisonous) omphalotus_nidiformis",
"(poisonous) clitocybe_cerussata",
"(poisonous) inocybe_lacera",
"(poisonous) agaricus_californicus",
"(poisonous) rubroboletus_legaliae",
"(poisonous) turbinellus_kauffmanii",
"(poisonous) clitocybe_dealbata",
"(poisonous) turkey_tail",
"(poisonous) amanita_subfrostiana",
"(poisonous) inocybe_geophylla",
"(poisonous) false_deathcap",
"(poisonous) greencracked_brittlegill",
"(poisonous) armillaria_mellea",
"(poisonous) marasmius_collinus",
"(poisonous) lactarius_chrysorrheus",
"(poisonous) omphalotus_olivascens",
"(poisonous) lactarius_helvus",
"(poisonous) yellow_foot_waxcap",
"(poisonous) blushing_wood_mushroom",
"(poisonous) blackening_brittlegill",
"(poisonous) deathcap",
"(poisonous) amanita_pseudoporphyria",
"(poisonous) hypholoma_radicosum",
"(poisonous) amanita rubescens",
"(poisonous) amanita_echinocephala",
"(poisonous) amanita_aprica",
"(poisonous) omphalotus_japonicus",
"(poisonous) coprinopsis_atramentaria",
"(poisonous) amanita_hongoi",
"(poisonous) hapalopilus_nidulans",
"(poisonous) cortinarius_cinnamomeus",
"(poisonous) suillus_granulatus",
"(poisonous) russula_subnigricans",
"(poisonous) amanita_cokeri",
"(poisonous) amanita_pantherina",
"(poisonous) amanita_abrupta",
"(poisonous) lepiota_cristata",
"(poisonous) tricholoma_sulphureum",
"(poisonous) inocybe_sambucina",
"(poisonous) amanita_flavoconia",
"(poisonous) brown_birch_bolete",
"(poisonous) inocybe_fibrosa",
"(poisonous) oak_mazegill",
"(poisonous) turbinellus_floccosus",
"(poisonous) amanita_cothurnata",
"(poisonous) scleroderma_citrinum",
"(poisonous) scarletina_bolete",
"(poisonous) destroying_angel",
"(poisonous) amanita_pseudorubescens",
"(poisonous) amanita_flavorubescens",
"(poisonous) cortinarius_mirandus",
"(poisonous) amanita_gracilior",
"(poisonous) chlorophyllum_molybdites",
"(poisonous) amanita_viscidolutea",
"(poisonous) gyromitra infula",
"(poisonous) russula_emetica",
"(poisonous) amanita_breckonii",
"(poisonous) amanita pantherina",
"(poisonous) stinkhorn",
"(poisonous) clitocybe_nebularis",
"(poisonous) trogia_venenata",
"(poisonous) liberty_cap",
"(conditionally_edible) shaggy_scalycap",
"(conditionally_edible) imleria badia",
"(conditionally_edible) rhytisma acerinum",
"(conditionally_edible) clitocybe nebularis",
"(conditionally_edible) amanita_rubescens",
"(conditionally_edible) suillus luteus",
"(conditionally_edible) nectria cinnabarina",
"(conditionally_edible) orange_bolete",
"(conditionally_edible) egghead_mottlegill",
"(conditionally_edible) crimson_waxcap",
"(conditionally_edible) leccinum versipelle",
"(conditionally_edible) weeping_widow",
"(conditionally_edible) pavement_mushroom",
"(conditionally_edible) common_inkcap",
"(conditionally_edible) false_chanterelle",
"(conditionally_edible) woolly_milkcap",
"(conditionally_edible) freckled_dapperling",
"(conditionally_edible) earthballs",
"(conditionally_edible) phellinus igniarius",
"(conditionally_edible) stereum hirsutum",
"(conditionally_edible) white_domecap",
"(conditionally_edible) lactarius",
"(conditionally_edible) trametes ochracea",
"(conditionally_edible) cantharellus cibarius",
"(conditionally_edible) calocera viscosa",
"(conditionally_edible) pseudevernia furfuracea",
"(conditionally_edible) giant_funnel",
"(conditionally_edible) lepista_saeva",
"(conditionally_edible) tripe_fungus",
"(conditionally_edible) shaggy_parasol",
"(conditionally_edible) leccinum albostipitatum",
"(conditionally_edible) brown_rollrim",
"(conditionally_edible) red_belted_bracket",
"(conditionally_edible) fomitopsis betulina",
"(conditionally_edible) trichaptum biforme",
"(conditionally_edible) poison_pie",
"(conditionally_edible) fleecy_milkcap",
"(conditionally_edible) suillus grevillei",
"(conditionally_edible) evernia prunastri",
"(conditionally_edible) pholiota squarrosa",
"(conditionally_edible) vermillion_waxcap",
"(conditionally_edible) cinnamon_bracket",
"(conditionally_edible) white_fibrecap",
"(conditionally_edible) trametes versicolor",
"(conditionally_edible) slimy_waxcap",
"(conditionally_edible) leccinum_albostipitatum",
"(conditionally_edible) dusky_puffball",
"(conditionally_edible) paxillus involutus",
"(conditionally_edible) gyromitra_esculenta",
"(conditionally_edible) hoof_fungus",
"(conditionally_edible) sepia_bolete",
"(conditionally_edible) laetiporus sulphureus",
"(conditionally_edible) sarcoscypha austriaca",
"(conditionally_edible) scarlet_caterpillarclub",
"(conditionally_edible) suillus",
"(conditionally_edible) sarcosoma globosum",
"(conditionally_edible) phlebia radiata",
"(conditionally_edible) amethyst_deceiver",
"(conditionally_edible) scaly_wood_mushroom",
"(conditionally_edible) chondrostereum purpureum",
"(conditionally_edible) tuberous_polypore",
"(conditionally_edible) orange_grisette",
"(conditionally_edible) phaeophyscia orbicularis",
"(conditionally_edible) rooting_shank",
"(conditionally_edible) splitgill",
"(conditionally_edible) geranium_brittlegill",
"(conditionally_edible) fomitopsis pinicola",
"(conditionally_edible) coprinellus disseminatus",
"(conditionally_edible) orange_birch_bolete",
"(conditionally_edible) cladonia fimbriata",
"(conditionally_edible) funeral_bell",
"(conditionally_edible) mosaic_puffball",
"(conditionally_edible) dog_stinkhorn",
"(conditionally_edible) verpa bohemica",
"(conditionally_edible) butter_cap",
"(conditionally_edible) hygrophoropsis aurantiaca",
"(conditionally_edible) lilac_fibrecap",
"(conditionally_edible) pink_waxcap",
"(conditionally_edible) pholiota aurivella",
"(conditionally_edible) cladonia stellaris",
"(conditionally_edible) trametes hirsuta",
"(conditionally_edible) terracotta_hedgehog",
"(conditionally_edible) cladonia rangiferina",
"(conditionally_edible) winter_chanterelle",
"(conditionally_edible) shaggy_inkcap",
"(conditionally_edible) slender_parasol",
"(conditionally_edible) wood_mushroom",
"(conditionally_edible) inonotus obliquus",
"(conditionally_edible) rosy_bonnet",
"(conditionally_edible) splendid_waxcap",
"(conditionally_edible) cetraria islandica",
"(mushrooms) lactarius",
"(mushrooms) russula",
"(mushrooms) suillus",
"(mushrooms) amanita",
"(mushrooms) cortinarius",
"(mushrooms) boletus",
"(mushrooms) hygrocybe",
"(mushrooms) entoloma",
"(mushrooms) agaricus",
"(conditionally_edible) jubilee_waxcap",
"(conditionally_edible) suillus granulatus",
"(conditionally_edible) pale_oyster",
"(conditionally_edible) macro_mushroom",
"(conditionally_edible) root_rot",
"(conditionally_edible) thimble_morel",
"(conditionally_edible) grisettes",
"(conditionally_edible) tremella mesenterica",
"(conditionally_edible) cortinarius",
"(conditionally_edible) verpa_bohemica",
"(conditionally_edible) shaggy_bracket",
"(conditionally_edible) snakeskin_grisette",
"(conditionally_edible) boletus",
"(conditionally_edible) semifree_morel",
"(conditionally_edible) chlorociboria aeruginascens",
"(conditionally_edible) elfin_saddle",
"(conditionally_edible) yellow_swamp_brittlegill",
"(conditionally_edible) lycoperdon perlatum",
"(conditionally_edible) ruby_bolete",
"(conditionally_edible) frosted_chanterelle",
"(conditionally_edible) silverleaf_fungus",
"(conditionally_edible) lobaria pulmonaria",
"(conditionally_edible) medusa_mushroom",
"(conditionally_edible) armillaria borealis",
"(conditionally_edible) graphis scripta",
"(conditionally_edible) physcia adscendens",
"(conditionally_edible) tricholomopsis rutilans",
"(conditionally_edible) lepista nuda",
"(conditionally_edible) artomyces pyxidatus",
"(conditionally_edible) fomes fomentarius",
"(conditionally_edible) urnula craterium",
"(conditionally_edible) spring_fieldcap",
"(conditionally_edible) ascot_hat",
"(conditionally_edible) the_miller",
"(conditionally_edible) false_morel",
"(conditionally_edible) parmelia sulcata",
"(conditionally_edible) charcoal_burner",
"(conditionally_edible) morchella_esculenta",
"(conditionally_edible) fragrant_funnel",
"(conditionally_edible) fenugreek_milkcap",
"(conditionally_edible) leccinum aurantiacum",
"(conditionally_edible) magpie_inkcap",
"(conditionally_edible) macrolepiota procera",
"(conditionally_edible) lactarius turpis",
"(conditionally_edible) peltigera aphthosa",
"(conditionally_edible) bitter_bolete",
"(conditionally_edible) stubble_rosegill",
"(conditionally_edible) stump_puffball",
"(conditionally_edible) phellinus tremulae",
"(conditionally_edible) amethyst_chanterelle",
"(conditionally_edible) vulpicida pinastri",
"(conditionally_edible) glistening_inkcap",
"(conditionally_edible) horn_of_plenty",
"(conditionally_edible) coprinopsis_atramentaria",
"(conditionally_edible) sheathed_woodtuft",
"(conditionally_edible) truffles",
"(conditionally_edible) wood_blewit",
"(conditionally_edible) evernia mesomorpha",
"(conditionally_edible) rooting_bolete",
"(conditionally_edible) dyers_mazegill",
"(conditionally_edible) hypogymnia physodes",
"(conditionally_edible) false_saffron_milkcap",
"(conditionally_edible) amanita_fulva",
"(conditionally_edible) boletus reticulatus",
"(conditionally_edible) powdery_brittlegill",
"(conditionally_edible) peltigera praetextata",
"(conditionally_edible) boletus edulis",
"(conditionally_edible) sarcomyxa serotina",
"(conditionally_edible) ganoderma applanatum",
"(conditionally_edible) pine_bolete",
"(conditionally_edible) amanita_muscaria",
"(conditionally_edible) white_dapperling",
"(conditionally_edible) leccinum scabrum",
"(conditionally_edible) grey_knight",
"(conditionally_edible) devils_bolete",
"(conditionally_edible) the_goblet",
"(conditionally_edible) curry_milkcap",
"(conditionally_edible) fools_funnel",
"(conditionally_edible) deer_shield",
"(deadly) paxillus_involutus",
"(deadly) entoloma_sinuatum",
"(deadly) lepiota_brunneoincarnata",
"(deadly) lepiota_castanea",
"(deadly) amanita_magnivelaris",
"(deadly) cortinarius_orellanus",
"(deadly) tricholoma_equestre",
"(deadly) amanita_verna",
"(deadly) cortinarius_rubellus",
"(deadly) amanita_ocreata",
"(deadly) cortinarius_splendens",
"(deadly) hypholoma_fasciculare",
"(deadly) amanita_smithiana",
"(deadly) amanita_subpallidorosea",
"(deadly) amanita_fuliginea",
"(deadly) amanita_arocheae",
"(deadly) lepiota_subincarnata",
"(deadly) pholiotina_rugosa",
"(deadly) lepiota_helveola",
"(deadly) omphalotus_illudens",
"(deadly) lepiota_brunneolilacea",
"(deadly) lactarius_torminosus",
"(deadly) amanita_virosa",
"(deadly) cortinarius_eartoxicus",
"(deadly) amanita_sphaerobulbosa",
"(deadly) clitocybe_rivulosa",
"(deadly) pleurocybella_porrigens",
"(deadly) podostroma_cornu-damae",
"(deadly) boletus_pulcherrimus",
"(deadly) clitocybe_dealbata",
"(deadly) galerina_sulciceps",
"(deadly) galerina_marginata",
"(deadly) omphalotus_japonicus",
"(deadly) inosperma_erubescens",
"(deadly) amanita_phalloides",
"(deadly) russula_subnigricans",
"(deadly) amanita_exitialis",
"(deadly) amanita_bisporigera",
"(deadly) amanita_subjunquillea",
"(deadly) trogia_venenata",
"(edible) aleuria_aurantia",
"(edible) fairy_ring_champignons",
"(edible) cyclocybe_aegerita",
"(edible) snowy_waxcap",
"(edible) crucibulum laeve",
"(edible) flammulina_velutipes",
"(edible) sulphur_tuft",
"(edible) meadow_waxcap",
"(edible) common_bonnet",
"(edible) grifola_frondosa",
"(edible) hygrophorus_chrysodon",
"(edible) suillus_tomentosus",
"(edible) lactarius_subdulcis",
"(edible) tuber_brumale",
"(edible) tuber_indicum",
"(edible) golden_waxcap",
"(edible) field_mushroom",
"(edible) clavulinaceae",
"(edible) honey_fungus",
"(edible) lions_mane",
"(edible) kalaharituber_pfeilii",
"(edible) flammulina velutipes",
"(edible) common_morel",
"(edible) suillus_bovinus",
"(edible) lentinula_edodes",
"(edible) coprinellus micaceus",
"(edible) woodland_inkcap",
"(edible) cortinarius_caperatus",
"(edible) rhizopogon_luteolus",
"(edible) bearded_milkcap",
"(edible) craterellus_tubaeformis",
"(edible) marasmius_oreades",
"(edible) macrolepiota_procera",
"(edible) cyttaria_espinosae",
"(edible) hypholoma fasciculare",
"(edible) field_blewit",
"(edible) calbovista_subsculpta",
"(edible) calvatia_utriformis",
"(edible) almond_mushroom",
"(edible) tuber_borchii",
"(edible) suillus_luteus",
"(edible) poplar_fieldcap",
"(edible) auricularia_auricula-judae",
"(edible) lactarius_deliciosus",
"(edible) lactarius torminosus",
"(edible) daedaleopsis confragosa",
"(edible) schizophyllum commune",
"(edible) russula",
"(edible) agaricus_arvensis",
"(edible) daedaleopsis tricolor",
"(edible) beefsteak_fungus",
"(edible) blue_roundhead",
"(edible) panellus stipticus",
"(edible) bay_bolete",
"(edible) tuber_mesentericum",
"(edible) tawny_funnel",
"(edible) phallus_indusiatus",
"(edible) polyporus_squamosus",
"(edible) stropharia_rugosoannulata",
"(edible) clitocybe_nuda",
"(edible) plums_and_custard",
"(edible) lactarius deliciosus",
"(edible) morel",
"(edible) lactarius_salmonicolor",
"(edible) the_deceiver",
"(edible) cortinarius_variicolor",
"(edible) purple_brittlegill",
"(edible) hericium_erinaceus",
"(edible) coltricia perennis",
"(edible) leccinum_scabrum",
"(edible) cauliflower_fungus",
"(edible) tricholoma_matsutake",
"(edible) tawny_grisette",
"(edible) agaricus_silvaticus",
"(edible) clustered_domecap",
"(edible) coprinus comatus",
"(edible) common_puffball",
"(edible) cerioporus squamosus",
"(edible) cedarwood_waxcap",
"(edible) boletus_edulis",
"(edible) parrot_waxcap",
"(edible) agaricus_bisporus",
"(edible) golden_scalycap",
"(edible) clavariaceae",
"(edible) polyporus_mylittae",
"(edible) hypholoma lateritium",
"(edible) calvatia_gigantea",
"(edible) volvariella_volvacea",
"(edible) chroogomphus",
"(edible) craterellus_cornucopioides",
"(edible) the_blusher",
"(edible) chicken_of_the_woods",
"(edible) fly_agaric",
"(edible) birch_woodwart",
"(edible) clouded_agaric",
"(edible) hen_of_the_woods",
"(edible) common_rustgill",
"(edible) lactarius_deterrimus",
"(edible) pleurotus",
"(edible) chanterelle",
"(edible) hygrocybe",
"(edible) laetiporus_sulphureus",
"(edible) fistulina_hepatica",
"(edible) golden_bootleg",
"(edible) tuber_aestivum",
"(edible) summer_bolete",
"(edible) lilac_bonnet",
"(edible) oyster_mushroom",
"(edible) aniseed_funnel",
"(edible) heath_waxcap",
"(edible) oak_bolete",
"(edible) tuber_macrosporum",
"(edible) cucumber_cap",
"(edible) red_cracking_bolete",
"(edible) lactarius_volemus",
"(edible) sparassis_crispa",
"(edible) pseudohydnum_gelatinosum",
"(edible) st_georges_mushroom",
"(edible) the_prince",
"(edible) apioperdon pyriforme",
"(edible) entoloma",
"(edible) coprinus_comatus",
"(edible) silky_rosegill",
"(edible) armillaria_mellea",
"(edible) platismatia glauca",
"(edible) pleurotus ostreatus",
"(edible) slippery_jack",
"(edible) smoky_bracket",
"(edible) calocybe_gambosa",
"(edible) pleurotus pulmonarius",
"(edible) amanita_caesarea",
"(edible) agaricus",
"(edible) tremella_fuciformis",
"(edible) chestnut_bolete",
"(edible) suillus_granulatus",
"(edible) black_bulgar",
"(edible) stropharia aeruginosa",
"(edible) hydnum_repandum",
"(edible) boletus_badius",
"(edible) leccinum_aurantiacum",
"(edible) pestle_puffball",
"(edible) crimped_gill",
"(edible) porcelain_fungus",
"(edible) phallus impudicus",
"(edible) kuehneromyces mutabilis",
"(edible) wrinkled_peach",
"(edible) hericium coralloides",
"(edible) birch_polypore",
"(edible) velvet_shank",
"(edible) horse_mushroom",
"(edible) leccinum_versipelle",
"(edible) beechwood_sickener",
"(edible) merulius tremellosus",
"(edible) tricholoma_terreum",
"(edible) cantharellus_cibarius",
"(edible) poplar_bell",
"(edible) hypsizygus_tessellatus",
"(edible) mutinus ravenelii",
"(edible) dryads_saddle",
"(edible) spectacular_rustgill"
] |
MaxPowerUnlimited/vit-base-oxford-iiit-pets
|
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the [Oxford-IIIT Pets dataset](https://huggingface.co/datasets/pcuenq/oxford-pets). It has been trained to classify 37 breeds of cats and dogs.
It achieves the following results on the validation set:
- **Loss**: 0.2648
- **Accuracy**: 0.9459
## Model description
This model is based on ViT (Vision Transformer), a transformer-based architecture for image classification that treats image patches as input tokens, enabling the use of pure transformer architectures on vision tasks.
Fine-tuning was done using the `transformers` Trainer API from Hugging Face.
## Intended uses & limitations
**You can use this model for:**
- Classifying breeds of cats and dogs from the Oxford-IIIT Pets dataset.
- Fine-tuning on other animal classification datasets.
- Serving as a strong vision transformer baseline for academic or benchmarking purposes.
**Limitations:**
- Performance may degrade on images outside of the pet domain.
- Not optimized for mobile or edge devices.
## Training and evaluation data
Dataset used: [`pcuenq/oxford-pets`](https://huggingface.co/datasets/pcuenq/oxford-pets)
- 7,390 training images
- 739 validation images
- 37 breed classes
## Training procedure
### Hyperparameters
- **Learning rate**: 5e-05
- **Train batch size**: 32
- **Eval batch size**: 8
- **Seed**: 42
- **Optimizer**: AdamW
- **Scheduler**: Linear
- **Epochs**: 10
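For reference, these hyperparameters translate into `TrainingArguments` roughly as follows. This is a sketch, not the exact script used: `train_ds` and `eval_ds` are assumed to be preprocessed splits of `pcuenq/oxford-pets`.

```python
from transformers import (AutoModelForImageClassification, Trainer,
                          TrainingArguments)

# Replace the 1000-class ImageNet head with a 37-class pet-breed head
model = AutoModelForImageClassification.from_pretrained(
    "google/vit-base-patch16-224",
    num_labels=37,
    ignore_mismatched_sizes=True,
)

args = TrainingArguments(
    output_dir="vit-base-oxford-iiit-pets",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
    eval_strategy="epoch",
)

# train_ds / eval_ds are assumed preprocessed dataset splits
trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
trainer.train()
```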
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3673 | 1.0 | 185 | 0.3645 | 0.9080 |
| 0.3068 | 2.0 | 370 | 0.3329 | 0.9161 |
| 0.2836 | 3.0 | 555 | 0.3129 | 0.9175 |
| 0.2556 | 4.0 | 740 | 0.2980 | 0.9202 |
| 0.2411 | 5.0 | 925 | 0.2872 | 0.9215 |
| 0.2256 | 6.0 | 1110 | 0.2805 | 0.9215 |
| 0.2378 | 7.0 | 1295 | 0.2751 | 0.9215 |
| 0.2176 | 8.0 | 1480 | 0.2717 | 0.9215 |
| 0.2206 | 9.0 | 1665 | 0.2696 | 0.9215 |
| 0.2173 | 10.0 | 1850 | 0.2690 | 0.9215 |
## Zero-shot evaluation results
Using a CLIP-based benchmark on the same dataset, the following zero-shot performance was observed:
- **Accuracy**: 0.8800
- **Precision**: 0.8768
- **Recall**: 0.8800
## Framework versions
- **Transformers**: 4.50.3
- **PyTorch**: 2.5.1+cu121
- **Datasets**: 3.5.0
- **Tokenizers**: 0.21.1
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
acd20000/real-estate-categories
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"bad/misc",
"bathroom",
"front of house",
"kitchen",
"no image",
"room"
] |
werent4/FiTTas-14-f1
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"anger",
"surprise",
"contempt",
"happy",
"neutral",
"fear",
"sad",
"disgust"
] |
ricardoSLabs/gender_mozilla_mel_spec_Vit_swin-tiny-patch4-window7-224
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gender_mozilla_mel_spec_Vit_swin-tiny-patch4-window7-224
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2508
- Accuracy: 0.9133
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative code sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
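A minimal sketch of how these settings might be expressed with `TrainingArguments`; this is an illustrative reconstruction, not the published training script, and `output_dir` is a placeholder:
```python
from transformers import TrainingArguments

# Illustrative reconstruction of the hyperparameters listed above;
# the actual training script for this model is not published.
training_args = TrainingArguments(
    output_dir="swin-gender-mel-spec",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,      # effective batch size: 32 * 4 = 128
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=2,
)
```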
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5945 | 1.0 | 11 | 0.3302 | 0.8733 |
| 0.3064 | 2.0 | 22 | 0.2508 | 0.9133 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
[
"female",
"male"
] |
ricardoSLabs/gender_mozilla_mel_spec_Vit_vit-tiny-patch16-224
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gender_mozilla_mel_spec_Vit_vit-tiny-patch16-224
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2610
- Accuracy: 0.9167
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6588 | 1.0 | 11 | 0.3814 | 0.8733 |
| 0.3131 | 2.0 | 22 | 0.2610 | 0.9167 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
[
"female",
"male"
] |
prithivMLmods/Human-Action-Recognition
|

# **Human-Action-Recognition**
> **Human-Action-Recognition** is an image-classification model fine-tuned from the **google/siglip2-base-patch16-224** vision-language encoder for multi-class human action recognition. It uses the **SiglipForImageClassification** architecture to predict human activities from still images.
```text
Classification Report:
                    precision    recall  f1-score   support

           calling     0.8525    0.7571    0.8020       840
          clapping     0.8679    0.7119    0.7822       840
           cycling     0.9662    0.9857    0.9758       840
           dancing     0.8302    0.8381    0.8341       840
          drinking     0.9093    0.8714    0.8900       840
            eating     0.9377    0.9131    0.9252       840
          fighting     0.9034    0.7905    0.8432       840
           hugging     0.9065    0.9000    0.9032       840
          laughing     0.7854    0.8583    0.8203       840
listening_to_music     0.8494    0.7988    0.8233       840
           running     0.8888    0.9321    0.9099       840
           sitting     0.5945    0.7226    0.6523       840
          sleeping     0.8593    0.8214    0.8399       840
           texting     0.8195    0.6702    0.7374       840
      using_laptop     0.6610    0.9190    0.7689       840

          accuracy                         0.8327     12600
         macro avg     0.8421    0.8327    0.8339     12600
      weighted avg     0.8421    0.8327    0.8339     12600
```

The model categorizes images into 15 action classes:
- **0:** calling
- **1:** clapping
- **2:** cycling
- **3:** dancing
- **4:** drinking
- **5:** eating
- **6:** fighting
- **7:** hugging
- **8:** laughing
- **9:** listening_to_music
- **10:** running
- **11:** sitting
- **12:** sleeping
- **13:** texting
- **14:** using_laptop
---
# **Run with Transformers 🤗**
```python
!pip install -q transformers torch pillow gradio
```
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "prithivMLmods/Human-Action-Recognition" # Change to your updated model path
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
# ID to Label mapping
id2label = {
0: "calling",
1: "clapping",
2: "cycling",
3: "dancing",
4: "drinking",
5: "eating",
6: "fighting",
7: "hugging",
8: "laughing",
9: "listening_to_music",
10: "running",
11: "sitting",
12: "sleeping",
13: "texting",
14: "using_laptop"
}
def classify_action(image):
"""Predicts the human action in the image."""
image = Image.fromarray(image).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
predictions = {id2label[i]: round(probs[i], 3) for i in range(len(probs))}
return predictions
# Gradio interface
iface = gr.Interface(
fn=classify_action,
inputs=gr.Image(type="numpy"),
outputs=gr.Label(label="Action Prediction Scores"),
title="Human Action Recognition",
description="Upload an image to recognize the human action (e.g., dancing, calling, sitting, etc.)."
)
# Launch the app
if __name__ == "__main__":
iface.launch()
```
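For scripted top-1 inference without the Gradio UI, a minimal sketch reusing `model`, `processor`, and `id2label` from the block above (`example.jpg` is a placeholder path):
```python
from PIL import Image
import torch

# Load an image and run a single forward pass (no gradients needed).
image = Image.open("example.jpg").convert("RGB")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Report the highest-scoring action class.
pred_id = logits.argmax(dim=-1).item()
print(id2label[pred_id])
```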
---
# **Intended Use**
The **Human-Action-Recognition** model is designed to detect and classify human actions from images. Example applications:
- **Surveillance & Monitoring:** Recognizing suspicious or specific activities in public spaces.
- **Sports Analytics:** Identifying player activities or movements.
- **Social Media Insights:** Understanding trends in user-posted visuals.
- **Healthcare:** Monitoring elderly or patients for activity patterns.
- **Robotics & Automation:** Enabling context-aware AI systems with visual understanding.
|
[
"calling",
"clapping",
"cycling",
"dancing",
"drinking",
"eating",
"fighting",
"hugging",
"laughing",
"listening_to_music",
"running",
"sitting",
"sleeping",
"texting",
"using_laptop"
] |
dafa-w/emotion_classification
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5254
- Accuracy: 0.4313
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.9225 | 0.25 |
| No log | 2.0 | 80 | 1.5751 | 0.4188 |
| No log | 3.0 | 120 | 1.5159 | 0.3875 |
| No log | 4.0 | 160 | 1.5254 | 0.4313 |
| 1.2986 | 5.0 | 200 | 1.6747 | 0.4125 |
| 1.2986 | 6.0 | 240 | 1.8746 | 0.3563 |
| 1.2986 | 7.0 | 280 | 2.1805 | 0.35 |
| 1.2986 | 8.0 | 320 | 2.3406 | 0.3125 |
| 1.2986 | 9.0 | 360 | 2.3001 | 0.3438 |
| 0.7028 | 10.0 | 400 | 2.2428 | 0.2625 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
keyran/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1996
- Accuracy: 0.9472
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3741 | 1.0 | 370 | 0.2714 | 0.9459 |
| 0.2138 | 2.0 | 740 | 0.2146 | 0.9499 |
| 0.1833 | 3.0 | 1110 | 0.1899 | 0.9472 |
| 0.1472 | 4.0 | 1480 | 0.1852 | 0.9526 |
| 0.1326 | 5.0 | 1850 | 0.1814 | 0.9540 |
### Framework versions
- Transformers 4.51.0
- Pytorch 2.6.0
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
werent4/FiTTas-14-full-f1
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"anger",
"surprise",
"contempt",
"happy",
"neutral",
"fear",
"sad",
"disgust"
] |
werent4/FiTTas-14-full-chp-4200
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"anger",
"surprise",
"contempt",
"happy",
"neutral",
"fear",
"sad",
"disgust"
] |
chrisis2/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2039
- Accuracy: 0.9269
## Model description
This model was trained via transfer learning from the ViT model `google/vit-base-patch16-224`.
### 🔧 Training Setup
- Dataset: Oxford-IIIT Pets
- Epochs: 7
- Batch Size: 8
- Learning Rate: 2e-4
### Performance on Test Set
- **Accuracy:** 0.9269
- **Precision:** 0.9273
- **Recall:** 0.9269
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3448 | 1.0 | 739 | 0.2690 | 0.9418 |
| 0.2359 | 2.0 | 1478 | 0.2013 | 0.9378 |
| 0.1621 | 3.0 | 2217 | 0.1807 | 0.9391 |
| 0.1436 | 4.0 | 2956 | 0.1738 | 0.9378 |
| 0.1106 | 5.0 | 3695 | 0.1679 | 0.9445 |
| 0.1319 | 6.0 | 4434 | 0.1616 | 0.9405 |
| 0.1413 | 7.0 | 5173 | 0.1609 | 0.9391 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
## Comparison with the zero-shot classification model
### Zero-Shot Evaluation on the Oxford-IIIT Pet Dataset
As part of the evaluation, I compared this transfer-learned model to a zero-shot classification model. The zero-shot model used is:
**Model:** `openai/clip-vit-large-patch14`
**Task:** zero-shot-image-classification
#### Zero-Shot Model Results
On the Oxford-IIIT Pet Dataset, the zero-shot model achieved the following performance:
- **Accuracy:** 0.8800
- **Precision:** 0.8768
- **Recall:** 0.8800
These results were obtained using the test set and evaluated with `sklearn.metrics`.
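A minimal sketch of this kind of zero-shot evaluation, assuming `test_images` (PIL images) and `test_labels` (breed-name strings) hold the test split; the exact evaluation script is not published:
```python
from transformers import pipeline
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Zero-shot classifier; the candidate labels are the 37 breed names.
clf = pipeline("zero-shot-image-classification",
               model="openai/clip-vit-large-patch14")
candidate_labels = sorted(set(test_labels))  # placeholder test-split labels

# Take the top-scoring candidate label as the prediction for each image.
preds = [clf(img, candidate_labels=candidate_labels)[0]["label"]
         for img in test_images]

print("Accuracy: ", accuracy_score(test_labels, preds))
print("Precision:", precision_score(test_labels, preds, average="weighted"))
print("Recall:   ", recall_score(test_labels, preds, average="weighted"))
```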
---
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
sergioGGG/my_awesome_food_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
reuben256/nsfw-content-moderation
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"normal",
"nsfw"
] |
lukmanulhakeem/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1744
- Accuracy: 0.9526
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3847 | 1.0 | 370 | 0.3016 | 0.9229 |
| 0.2205 | 2.0 | 740 | 0.2314 | 0.9378 |
| 0.184 | 3.0 | 1110 | 0.2043 | 0.9378 |
| 0.1303 | 4.0 | 1480 | 0.1968 | 0.9364 |
| 0.1387 | 5.0 | 1850 | 0.1936 | 0.9350 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
darthraider/vit-4-veggies-2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-4-veggies-2
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set (a metric-computation sketch follows the list):
- Loss: 0.0636
- Accuracy: 0.9870
- Precision: 0.9875
- Recall: 0.9870
- F1: 0.9870
- Confusion Matrix: [[62, 0, 4, 0, 0, 0, 0, 0, 0, 0], [0, 279, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 410, 0, 0, 0, 0, 0, 0, 0], [0, 0, 3, 356, 0, 0, 0, 0, 0, 0], [0, 0, 0, 1, 123, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 9, 0, 1, 0, 0], [0, 0, 0, 0, 0, 0, 13, 1, 0, 1], [0, 0, 0, 0, 0, 1, 1, 11, 0, 1], [0, 0, 0, 0, 0, 2, 0, 0, 7, 0], [0, 0, 0, 0, 0, 0, 0, 1, 0, 19]]
- Cohen Kappa: 0.9830
- Matthews Corrcoef: 0.9831
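A minimal sketch of how the agreement metrics above can be computed, assuming `y_true` and `y_pred` are placeholders for the evaluation labels and model predictions:
```python
from sklearn.metrics import (cohen_kappa_score, confusion_matrix,
                             matthews_corrcoef)

# y_true / y_pred are placeholders for evaluation labels and predictions.
print("Cohen Kappa:      ", cohen_kappa_score(y_true, y_pred))
print("Matthews Corrcoef:", matthews_corrcoef(y_true, y_pred))
print(confusion_matrix(y_true, y_pred))  # 10x10 matrix, as reported above
```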
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Confusion Matrix | Cohen Kappa | Matthews Corrcoef |
|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------:|:-----------------:|
| 0.2225 | 0.6211 | 100 | 0.2418 | 0.9510 | 0.9485 | 0.9510 | 0.9472 | [[59, 0, 7, 0, 0, 0, 0, 0, 0, 0], [0, 276, 3, 0, 0, 0, 0, 0, 0, 0], [1, 0, 399, 8, 2, 0, 0, 0, 0, 0], [0, 0, 8, 345, 6, 0, 0, 0, 0, 0], [0, 0, 0, 0, 124, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 1, 6, 0, 3], [0, 0, 0, 0, 0, 0, 13, 0, 0, 2], [0, 0, 0, 0, 0, 0, 0, 3, 0, 11], [2, 0, 0, 0, 0, 0, 0, 2, 3, 2], [0, 0, 0, 0, 0, 0, 0, 0, 0, 20]] | 0.9362 | 0.9364 |
| 0.106 | 1.2422 | 200 | 0.1557 | 0.9694 | 0.9684 | 0.9694 | 0.9675 | [[60, 0, 6, 0, 0, 0, 0, 0, 0, 0], [0, 279, 0, 0, 0, 0, 0, 0, 0, 0], [2, 0, 406, 2, 0, 0, 0, 0, 0, 0], [0, 0, 7, 351, 1, 0, 0, 0, 0, 0], [0, 0, 1, 1, 122, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1, 0, 6, 1, 2], [0, 0, 0, 0, 0, 0, 14, 0, 0, 1], [0, 0, 0, 0, 0, 0, 0, 8, 2, 4], [1, 0, 0, 0, 0, 1, 0, 1, 6, 0], [0, 0, 0, 0, 0, 0, 0, 1, 0, 19]] | 0.9601 | 0.9602 |
| 0.1198 | 1.8634 | 300 | 0.1496 | 0.9640 | 0.9679 | 0.9640 | 0.9624 | [[60, 0, 6, 0, 0, 0, 0, 0, 0, 0], [0, 266, 13, 0, 0, 0, 0, 0, 0, 0], [0, 0, 406, 4, 0, 0, 0, 0, 0, 0], [0, 0, 7, 352, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 124, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1, 0, 8, 0, 1], [0, 0, 0, 0, 0, 0, 14, 0, 0, 1], [0, 0, 0, 0, 0, 0, 1, 9, 0, 4], [0, 0, 0, 0, 0, 0, 0, 1, 8, 0], [0, 0, 0, 0, 0, 0, 0, 1, 0, 19]] | 0.9530 | 0.9533 |
| 0.0901 | 2.4845 | 400 | 0.1208 | 0.9686 | 0.9709 | 0.9686 | 0.9665 | [[63, 0, 3, 0, 0, 0, 0, 0, 0, 0], [0, 279, 0, 0, 0, 0, 0, 0, 0, 0], [8, 1, 401, 0, 0, 0, 0, 0, 0, 0], [0, 0, 11, 348, 0, 0, 0, 0, 0, 0], [0, 0, 0, 3, 121, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1, 1, 6, 0, 2], [0, 0, 0, 0, 0, 0, 14, 0, 0, 1], [0, 0, 0, 0, 0, 0, 1, 9, 0, 4], [0, 0, 0, 0, 0, 0, 0, 0, 9, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 20]] | 0.9591 | 0.9592 |
| 0.0133 | 3.1056 | 500 | 0.0862 | 0.9801 | 0.9820 | 0.9801 | 0.9797 | [[61, 0, 5, 0, 0, 0, 0, 0, 0, 0], [0, 279, 0, 0, 0, 0, 0, 0, 0, 0], [0, 1, 408, 1, 0, 0, 0, 0, 0, 0], [0, 0, 4, 355, 0, 0, 0, 0, 0, 0], [0, 0, 0, 3, 121, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 4, 0, 6, 0, 0], [0, 0, 0, 0, 0, 0, 14, 0, 0, 1], [0, 0, 0, 0, 0, 0, 1, 11, 0, 2], [0, 0, 0, 0, 0, 0, 0, 1, 8, 0], [0, 0, 0, 0, 0, 0, 0, 1, 0, 19]] | 0.9740 | 0.9741 |
| 0.0078 | 3.7267 | 600 | 0.0498 | 0.9908 | 0.9915 | 0.9908 | 0.9906 | [[63, 0, 3, 0, 0, 0, 0, 0, 0, 0], [0, 279, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 410, 0, 0, 0, 0, 0, 0, 0], [0, 0, 3, 356, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 124, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 6, 0, 4, 0, 0], [0, 0, 0, 0, 0, 0, 14, 0, 0, 1], [0, 0, 0, 0, 0, 0, 1, 13, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 9, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 20]] | 0.9880 | 0.9881 |
| 0.0045 | 4.3478 | 700 | 0.0532 | 0.9877 | 0.9879 | 0.9877 | 0.9877 | [[62, 0, 4, 0, 0, 0, 0, 0, 0, 0], [0, 279, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 409, 1, 0, 0, 0, 0, 0, 0], [0, 0, 2, 357, 0, 0, 0, 0, 0, 0], [0, 0, 0, 2, 122, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 9, 0, 1, 0, 0], [0, 0, 0, 0, 0, 0, 14, 0, 0, 1], [0, 0, 0, 0, 0, 1, 1, 11, 0, 1], [0, 0, 0, 0, 0, 1, 0, 0, 8, 0], [0, 0, 0, 0, 0, 0, 0, 1, 0, 19]] | 0.9840 | 0.9840 |
| 0.0039 | 4.9689 | 800 | 0.0547 | 0.9877 | 0.9885 | 0.9877 | 0.9878 | [[63, 0, 3, 0, 0, 0, 0, 0, 0, 0], [0, 279, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 410, 0, 0, 0, 0, 0, 0, 0], [0, 0, 3, 356, 0, 0, 0, 0, 0, 0], [0, 0, 0, 1, 123, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 9, 0, 1, 0, 0], [0, 0, 0, 0, 0, 0, 13, 0, 0, 2], [0, 0, 0, 0, 0, 1, 0, 11, 0, 2], [0, 0, 0, 0, 0, 2, 0, 0, 7, 0], [0, 0, 0, 0, 0, 0, 0, 1, 0, 19]] | 0.9840 | 0.9841 |
| 0.0032 | 5.5901 | 900 | 0.0767 | 0.9862 | 0.9868 | 0.9862 | 0.9862 | [[61, 0, 5, 0, 0, 0, 0, 0, 0, 0], [0, 279, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 408, 2, 0, 0, 0, 0, 0, 0], [0, 0, 2, 357, 0, 0, 0, 0, 0, 0], [0, 0, 0, 1, 123, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 9, 0, 1, 0, 0], [0, 0, 0, 0, 0, 0, 13, 1, 0, 1], [0, 0, 0, 0, 0, 1, 1, 12, 0, 0], [0, 0, 0, 0, 0, 2, 0, 0, 7, 0], [0, 0, 0, 0, 0, 0, 0, 1, 0, 19]] | 0.9820 | 0.9821 |
| 0.0021 | 6.2112 | 1000 | 0.0678 | 0.9862 | 0.9867 | 0.9862 | 0.9862 | [[62, 0, 4, 0, 0, 0, 0, 0, 0, 0], [0, 279, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 409, 1, 0, 0, 0, 0, 0, 0], [0, 0, 3, 356, 0, 0, 0, 0, 0, 0], [0, 0, 0, 1, 123, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 9, 0, 1, 0, 0], [0, 0, 0, 0, 0, 0, 13, 1, 0, 1], [0, 0, 0, 0, 0, 1, 1, 11, 0, 1], [0, 0, 0, 0, 0, 2, 0, 0, 7, 0], [0, 0, 0, 0, 0, 0, 0, 1, 0, 19]] | 0.9820 | 0.9821 |
| 0.0017 | 6.8323 | 1100 | 0.0617 | 0.9877 | 0.9882 | 0.9877 | 0.9877 | [[62, 0, 4, 0, 0, 0, 0, 0, 0, 0], [0, 279, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 410, 0, 0, 0, 0, 0, 0, 0], [0, 0, 2, 357, 0, 0, 0, 0, 0, 0], [0, 0, 0, 1, 123, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 9, 0, 1, 0, 0], [0, 0, 0, 0, 0, 0, 13, 1, 0, 1], [0, 0, 0, 0, 0, 1, 1, 11, 0, 1], [0, 0, 0, 0, 0, 2, 0, 0, 7, 0], [0, 0, 0, 0, 0, 0, 0, 1, 0, 19]] | 0.9840 | 0.9841 |
| 0.0012 | 7.4534 | 1200 | 0.0636 | 0.9870 | 0.9875 | 0.9870 | 0.9870 | [[62, 0, 4, 0, 0, 0, 0, 0, 0, 0], [0, 279, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 410, 0, 0, 0, 0, 0, 0, 0], [0, 0, 3, 356, 0, 0, 0, 0, 0, 0], [0, 0, 0, 1, 123, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 9, 0, 1, 0, 0], [0, 0, 0, 0, 0, 0, 13, 1, 0, 1], [0, 0, 0, 0, 0, 1, 1, 11, 0, 1], [0, 0, 0, 0, 0, 2, 0, 0, 7, 0], [0, 0, 0, 0, 0, 0, 0, 1, 0, 19]] | 0.9830 | 0.9831 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
[
"damaged",
"dried",
"old",
"ripe",
"unripe",
"diseased",
"flower",
"ripe",
"rotten",
"unripe"
] |
Rausda6/autotrain-yh172-uui7d
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 1.7830637693405151
f1_macro: 0.041666666666666664
f1_micro: 0.14285714285714285
f1_weighted: 0.03571428571428571
precision_macro: 0.023809523809523808
precision_micro: 0.14285714285714285
precision_weighted: 0.02040816326530612
recall_macro: 0.16666666666666666
recall_micro: 0.14285714285714285
recall_weighted: 0.14285714285714285
accuracy: 0.14285714285714285
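A minimal sketch for trying this checkpoint, assuming it is compatible with the standard `image-classification` pipeline (typical for AutoTrain image classifiers; `example.jpg` is a placeholder):
```python
from transformers import pipeline

# Load the AutoTrain checkpoint and classify a single image.
clf = pipeline("image-classification", model="Rausda6/autotrain-yh172-uui7d")
print(clf("example.jpg"))  # placeholder path; returns label/score pairs
```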
|
[
"blur",
"dots",
"good",
"illum",
"noise",
"radblur"
] |
Rausda6/autotrain-uo2t1-gvgzu
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 1.0500575304031372
f1_macro: 0.611111111111111
f1_micro: 0.7142857142857143
f1_weighted: 0.619047619047619
precision_macro: 0.5833333333333334
precision_micro: 0.7142857142857143
precision_weighted: 0.5714285714285714
recall_macro: 0.6666666666666666
recall_micro: 0.7142857142857143
recall_weighted: 0.7142857142857143
accuracy: 0.7142857142857143
|
[
"blur",
"dots",
"good",
"illum",
"noise",
"radblur"
] |
sabari15/ViT-base16-fine-tuned-crop-disease-model
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"cashew anthracnose",
"cashew gumosis",
"cashew healthy",
"cashew leaf miner",
"cashew red rust",
"cassava bacterial blight",
"cassava brown spot",
"cassava green mite",
"cassava healthy",
"cassava mosaic",
"maize fall armyworm",
"maize grasshoper",
"maize healthy",
"maize leaf beetle",
"maize leaf blight",
"maize leaf spot",
"maize streak virus",
"tomato healthy",
"tomato leaf blight",
"tomato leaf curl",
"tomato septoria leaf spot",
"tomato verticulium wilt"
] |