Columns: model_id (string, length 7–105) · model_card (string, length 1–130k) · model_labels (list, length 2–80k)
mbiarreta/deit-ena24
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deit-ena24 This model is a fine-tuned version of [facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224) on the ena24 dataset. It achieves the following results on the evaluation set: - Loss: 0.1233 - Accuracy: 0.9809 - F1: 0.9799 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 7 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:------:|:----:|:---------------:|:--------:|:------:| | 1.396 | 0.1302 | 100 | 1.0114 | 0.7107 | 0.6602 | | 1.0428 | 0.2604 | 200 | 0.7400 | 0.7939 | 0.7694 | | 0.6952 | 0.3906 | 300 | 0.6129 | 0.8160 | 0.7981 | | 0.4429 | 0.5208 | 400 | 0.4991 | 0.8618 | 0.8171 | | 0.5441 | 0.6510 | 500 | 0.4392 | 0.8840 | 0.8631 | | 0.4533 | 0.7812 | 600 | 0.4120 | 0.8985 | 0.8765 | | 0.1213 | 0.9115 | 700 | 0.3953 | 0.8916 | 0.8738 | | 0.1151 | 1.0417 | 800 | 0.3146 | 0.9237 | 0.9141 | | 0.0953 | 1.1719 | 900 | 0.4656 | 0.9015 | 0.8786 | | 0.1876 | 1.3021 | 1000 | 0.3164 | 0.9168 | 0.9023 | | 0.2368 | 1.4323 | 1100 | 0.2997 | 0.9305 | 0.9219 | | 0.0658 | 1.5625 | 1200 | 0.2324 | 0.9534 | 0.9473 | | 0.0566 | 1.6927 | 1300 | 0.3444 | 0.9176 | 0.9077 | | 0.2437 | 1.8229 | 1400 | 0.3033 | 0.9435 | 0.9363 | | 0.1011 | 1.9531 | 1500 | 0.2740 | 0.9450 | 0.9330 | | 0.2987 | 2.0833 | 1600 | 0.2715 | 0.9489 | 0.9419 | | 0.0227 | 2.2135 | 1700 | 0.2050 | 0.9603 | 0.9562 | | 0.1891 | 2.3438 | 1800 | 0.2055 | 0.9542 | 0.9494 | | 0.0325 | 2.4740 | 1900 | 0.2070 | 0.9626 | 0.9604 | | 0.0407 | 2.6042 | 2000 | 0.1876 | 0.9611 | 0.9550 | | 0.0112 | 2.7344 | 2100 | 0.1702 | 0.9748 | 0.9719 | | 0.112 | 2.8646 | 2200 | 0.1695 | 0.9656 | 0.9624 | | 0.184 | 2.9948 | 2300 | 0.2088 | 0.9626 | 0.9590 | | 0.0464 | 3.125 | 2400 | 0.1805 | 0.9656 | 0.9613 | | 0.0794 | 3.2552 | 2500 | 0.2089 | 0.9634 | 0.9608 | | 0.0033 | 3.3854 | 2600 | 0.2128 | 0.9603 | 0.9623 | | 0.0422 | 3.5156 | 2700 | 0.1378 | 0.9702 | 0.9701 | | 0.2038 | 3.6458 | 2800 | 0.1674 | 0.9687 | 0.9685 | | 0.0156 | 3.7760 | 2900 | 0.1383 | 0.9756 | 0.9758 | | 0.0004 | 3.9062 | 3000 | 0.1544 | 0.9733 | 0.9715 | | 0.002 | 4.0365 | 3100 | 0.1552 | 0.9710 | 0.9690 | | 0.0405 | 4.1667 | 3200 | 0.1326 | 0.9763 | 0.9751 | | 0.0031 | 4.2969 | 3300 | 0.1437 | 0.9756 | 0.9759 | | 0.0022 | 4.4271 | 3400 | 0.1316 | 0.9794 | 0.9790 | | 0.0019 | 4.5573 | 3500 | 0.1233 | 0.9809 | 0.9799 | | 0.0005 | 4.6875 | 3600 | 0.1400 | 0.9771 | 0.9763 | | 0.0002 | 4.8177 | 3700 | 0.1339 | 0.9794 | 0.9797 | | 0.0304 | 4.9479 | 3800 | 0.1469 | 0.9794 | 0.9795 | | 0.0 | 5.0781 | 3900 | 0.1532 | 0.9763 | 0.9760 | | 0.0 | 5.2083 | 4000 | 0.1530 | 0.9779 | 0.9771 | | 0.0006 | 5.3385 | 4100 | 0.1434 | 0.9771 | 0.9765 | | 0.0218 | 5.4688 | 4200 | 0.1468 | 0.9763 | 0.9751 | | 0.0043 | 5.5990 | 4300 | 0.1568 | 0.9779 | 0.9763 | | 0.0246 | 5.7292 | 4400 | 0.1582 | 0.9771 | 0.9748 | | 0.0052 | 5.8594 | 4500 | 0.1489 | 0.9786 | 0.9774 | | 0.0003 | 5.9896 | 4600 | 0.1499 | 0.9779 | 0.9775 | | 0.0 | 6.1198 | 4700 | 0.1457 | 0.9794 | 0.9786 | | 0.0 | 6.25 | 4800 | 0.1437 | 0.9802 | 0.9794 | | 0.0048 | 6.3802 | 4900 | 0.1440 | 0.9794 | 0.9782 | | 0.0002 | 6.5104 | 5000 | 0.1417 | 0.9802 | 0.9793 | | 0.0001 | 6.6406 | 5100 | 0.1427 | 0.9802 | 0.9794 | | 0.0 | 6.7708 | 5200 | 0.1422 | 0.9802 | 0.9787 | | 0.0002 | 6.9010 | 5300 | 0.1425 | 0.9802 | 0.9787 | ### Framework versions - Transformers 4.52.3 - Pytorch 2.7.0+cu128 - Datasets 3.6.0 - Tokenizers 0.21.1
[ "american black bear", "american crow", "eastern fox squirrel", "eastern gray squirrel", "grey fox", "horse", "northern raccoon", "red fox", "striped skunk", "vehicle", "virginia opossum", "white_tailed_deer", "bird", "wild turkey", "woodchuck", "bobcat", "chicken", "coyote", "dog", "domestic cat", "eastern chipmunk", "eastern cottontail" ]
sergioGGG/clear_cloudy_classifier
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # clear_cloudy_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0839 - Accuracy: 0.9787 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1683 | 1.0 | 118 | 0.1417 | 0.9553 | | 0.0557 | 2.0 | 236 | 0.0952 | 0.9702 | | 0.0806 | 3.0 | 354 | 0.0937 | 0.9702 | | 0.0625 | 4.0 | 472 | 0.0980 | 0.9691 | | 0.123 | 5.0 | 590 | 0.0839 | 0.9787 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "clear", "cloudy" ]
ricardoSLabs/id_ravdess_mel_spec_Vit_swin-tiny-patch4-window7-224
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # id_ravdess_mel_spec_Vit_swin-tiny-patch4-window7-224 This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 2.9423 - Accuracy: 0.2593 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 8 | 3.0745 | 0.1667 | | 3.1441 | 2.0 | 16 | 2.9423 | 0.2593 | ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.3.1 - Tokenizers 0.21.0
[ "actor_01", "actor_02", "actor_11", "actor_12", "actor_13", "actor_14", "actor_15", "actor_16", "actor_17", "actor_18", "actor_19", "actor_20", "actor_03", "actor_21", "actor_22", "actor_23", "actor_24", "actor_04", "actor_05", "actor_06", "actor_07", "actor_08", "actor_09", "actor_10" ]
ricardoSLabs/id_ravdess_mel_spec_Vit_vit-tiny-patch16-224
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # id_ravdess_mel_spec_Vit_vit-tiny-patch16-224 This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 2.9627 - Accuracy: 0.1389 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 8 | 3.1010 | 0.0787 | | 3.3744 | 2.0 | 16 | 2.9627 | 0.1389 | ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.3.1 - Tokenizers 0.21.0
[ "actor_01", "actor_02", "actor_11", "actor_12", "actor_13", "actor_14", "actor_15", "actor_16", "actor_17", "actor_18", "actor_19", "actor_20", "actor_03", "actor_21", "actor_22", "actor_23", "actor_24", "actor_04", "actor_05", "actor_06", "actor_07", "actor_08", "actor_09", "actor_10" ]
prithivMLmods/Recycling-Net-11
![zdcfbxdfgb.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/KB38CZBN1qfcAAfUoN7py.png) # **Recycling-Net-11** > **Recycling-Net-11** is an image classification model fine-tuned from **google/siglip2-base-patch16-224** using the **SiglipForImageClassification** architecture. The model classifies images into 11 categories related to recyclable materials, helping to automate and enhance waste sorting systems. ```py Classification Report: precision recall f1-score support aluminium 0.9213 0.9145 0.9179 269 batteries 0.9833 0.9933 0.9883 297 cardboard 0.9660 0.9343 0.9499 274 disposable plates 0.9078 0.9744 0.9399 273 glass 0.9621 0.9490 0.9555 294 hard plastic 0.8675 0.7250 0.7899 280 paper 0.8702 0.8941 0.8820 255 paper towel 0.9333 0.9622 0.9475 291 polystyrene 0.8188 0.8385 0.8285 291 soft plastics 0.8425 0.8693 0.8557 283 takeaway cups 0.9575 0.9767 0.9670 300 accuracy 0.9128 3107 macro avg 0.9119 0.9119 0.9111 3107 weighted avg 0.9127 0.9128 0.9119 3107 ``` ![download.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/XW6fZXkQ2-Z5KhSjnxuQs.png) The model categorizes images into the following classes: - **0:** aluminium - **1:** batteries - **2:** cardboard - **3:** disposable plates - **4:** glass - **5:** hard plastic - **6:** paper - **7:** paper towel - **8:** polystyrene - **9:** soft plastics - **10:** takeaway cups --- # **Run with Transformers 🤗** ```python !pip install -q transformers torch pillow gradio ``` ```python import gradio as gr from transformers import AutoImageProcessor, SiglipForImageClassification from PIL import Image import torch # Load model and processor model_name = "prithivMLmods/Recycling-Net-11" # Update with your actual Hugging Face model path model = SiglipForImageClassification.from_pretrained(model_name) processor = AutoImageProcessor.from_pretrained(model_name) # Label mapping id2label = { 0: "aluminium", 1: "batteries", 2: "cardboard", 3: "disposable plates", 4: "glass", 5: "hard plastic", 6: "paper", 7: "paper towel", 8: "polystyrene", 9: "soft plastics", 10: "takeaway cups" } def classify_recyclable_material(image): """Predicts the type of recyclable material in the image.""" image = Image.fromarray(image).convert("RGB") inputs = processor(images=image, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist() predictions = {id2label[i]: round(probs[i], 3) for i in range(len(probs))} return predictions # Gradio interface iface = gr.Interface( fn=classify_recyclable_material, inputs=gr.Image(type="numpy"), outputs=gr.Label(label="Recyclable Material Prediction Scores"), title="Recycling-Net-11", description="Upload an image of a waste item to identify its recyclable material type." ) # Launch the app if __name__ == "__main__": iface.launch() ``` --- # **Intended Use** **Recycling-Net-11** is ideal for: - **Smart Waste Sorting:** Automating recycling processes in smart bins or factories. - **Environmental Awareness Tools:** Helping people learn how to sort waste correctly. - **Municipal Waste Management:** Classifying and analyzing urban waste data. - **Robotics:** Assisting robots in identifying and sorting materials. - **Education:** Teaching children and communities about recyclable materials.
[ "aluminium", "batteries", "cardboard", "disposable plates", "glass", "hard plastic", "paper", "paper towel", "polystyrene", "soft plastics", "takeaway cups" ]
prithivMLmods/Minc-Materials-23
![14.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/xVgE0XmLAzc-7BPVXXDNZ.png) # **Minc-Materials-23** > **Minc-Materials-23** is a visual material classification model fine-tuned from **google/siglip2-base-patch16-224** using the **SiglipForImageClassification** architecture. It classifies images into 23 common material types based on visual features, ideal for applications in material recognition, construction, retail, robotics, and beyond. ```py Classification Report: precision recall f1-score support brick 0.8325 0.8278 0.8301 2125 carpet 0.7318 0.7539 0.7427 2125 ceramic 0.6484 0.6579 0.6531 2125 fabric 0.6248 0.5666 0.5943 2125 foliage 0.9102 0.9205 0.9153 2125 food 0.8588 0.8899 0.8740 2125 glass 0.7799 0.6753 0.7238 2125 hair 0.9267 0.9520 0.9392 2125 leather 0.7464 0.7826 0.7641 2125 metal 0.6491 0.6626 0.6558 2125 mirror 0.7668 0.6127 0.6811 2125 other 0.8637 0.8198 0.8411 2125 painted 0.6813 0.8391 0.7520 2125 paper 0.7393 0.7261 0.7327 2125 plastic 0.6142 0.5304 0.5692 2125 polishedstone 0.7435 0.7449 0.7442 2125 skin 0.8995 0.9228 0.9110 2125 sky 0.9584 0.9751 0.9666 2125 stone 0.7567 0.7289 0.7426 2125 tile 0.7108 0.6847 0.6975 2125 wallpaper 0.7825 0.8193 0.8005 2125 water 0.8993 0.8781 0.8886 2125 wood 0.6281 0.7685 0.6912 2125 accuracy 0.7713 48875 macro avg 0.7719 0.7713 0.7700 48875 weighted avg 0.7719 0.7713 0.7700 48875 ``` ![download (2).png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/QNv3UQpQg6RkPLXaUFE2g.png) The model categorizes images into the following 23 classes: - **0:** brick - **1:** carpet - **2:** ceramic - **3:** fabric - **4:** foliage - **5:** food - **6:** glass - **7:** hair - **8:** leather - **9:** metal - **10:** mirror - **11:** other - **12:** painted - **13:** paper - **14:** plastic - **15:** polishedstone - **16:** skin - **17:** sky - **18:** stone - **19:** tile - **20:** wallpaper - **21:** water - **22:** wood --- # **Run with Transformers 🤗** ```python !pip install -q transformers torch pillow gradio ``` ```python import gradio as gr from transformers import AutoImageProcessor, SiglipForImageClassification from PIL import Image import torch # Load model and processor model_name = "prithivMLmods/Minc-Materials-23" # Replace with your actual model path model = SiglipForImageClassification.from_pretrained(model_name) processor = AutoImageProcessor.from_pretrained(model_name) # Label mapping id2label = { 0: "brick", 1: "carpet", 2: "ceramic", 3: "fabric", 4: "foliage", 5: "food", 6: "glass", 7: "hair", 8: "leather", 9: "metal", 10: "mirror", 11: "other", 12: "painted", 13: "paper", 14: "plastic", 15: "polishedstone", 16: "skin", 17: "sky", 18: "stone", 19: "tile", 20: "wallpaper", 21: "water", 22: "wood" } def classify_material(image): """Predicts the material type present in the uploaded image.""" image = Image.fromarray(image).convert("RGB") inputs = processor(images=image, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist() predictions = {id2label[i]: round(probs[i], 3) for i in range(len(probs))} return predictions # Gradio interface iface = gr.Interface( fn=classify_material, inputs=gr.Image(type="numpy"), outputs=gr.Label(label="Material Prediction Scores"), title="Minc-Materials-23", description="Upload an image to identify the material type (e.g., brick, wood, plastic, metal, etc.)." ) # Launch the app if __name__ == "__main__": iface.launch() ``` --- # **Intended Use** **Minc-Materials-23** is tailored for: - **Architecture & Construction:** Material identification from site photos or plans. - **Retail & Inventory:** Recognizing product materials in e-commerce. - **Robotics & AI Vision:** Enabling object material perception. - **Environmental Monitoring:** Detecting materials in natural vs. urban environments. - **Education & Research:** Teaching material properties and classification techniques.
[ "brick", "carpet", "ceramic", "fabric", "foliage", "food", "glass", "hair", "leather", "metal", "mirror", "other", "painted", "paper", "plastic", "polishedstone", "skin", "sky", "stone", "tile", "wallpaper", "water", "wood" ]
holendar/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.2044 - Accuracy: 0.9337 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3817 | 1.0 | 370 | 0.2979 | 0.9229 | | 0.2143 | 2.0 | 740 | 0.2289 | 0.9378 | | 0.1525 | 3.0 | 1110 | 0.2046 | 0.9405 | | 0.1322 | 4.0 | 1480 | 0.1996 | 0.9391 | | 0.1256 | 5.0 | 1850 | 0.1968 | 0.9391 | ### Framework versions - Transformers 4.50.0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.1 ### Zero-Shot Classification Results - model: openai/clip-vit-large-patch14 - Accuracy: 0.8800 - Precision: 0.8768 - Recall: 0.8800
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
007Marlon2000/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2127 - Accuracy: 0.9405 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.51.1 - Pytorch 2.6.0+cpu - Datasets 3.5.0 - Tokenizers 0.21.1
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
mlg556/my_awesome_food_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_food_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6279 - Accuracy: 0.901 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.6728 | 1.0 | 63 | 2.4845 | 0.834 | | 1.8248 | 2.0 | 126 | 1.7660 | 0.903 | | 1.5803 | 2.96 | 186 | 1.6279 | 0.901 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
ccordovafi/platzi-beans-finetuned-cesar-cordova
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # platzi-beans-finetuned-cesar-cordova This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1099 - Accuracy: 0.9699 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | No log | 1.5385 | 200 | 0.5416 | 0.8647 | | No log | 3.0769 | 400 | 0.1099 | 0.9699 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
armgB/swin-finetuned-best
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "no finding", "enlarged cardiomediastinum", "cardiomegaly", "lung opacity", "lung lesion", "edema", "consolidation", "pneumonia", "atelectasis", "pneumothorax", "pleural effusion", "pleural other", "fracture" ]
Weberm/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.2111 - Accuracy: 0.9499 ## Model description Based on the ViT model google/vit-base-patch16-224. ## Performance on Test Set - eval_loss: 0.21107521653175354 - eval_accuracy: 0.9519323410013532 - eval_runtime: 12.1289 - eval_samples_per_second: 73.032 - eval_steps_per_second: 9.191 - epoch: 6.0 ## Compared to Zero Shot - Accuracy: 0.8800 - Precision: 0.8768 - Recall: 0.8800 - checkpoint = "openai/clip-vit-large-patch14" ## Training and evaluation data - per_device_train_batch_size=16, - evaluation_strategy="epoch", - save_strategy="epoch", - logging_steps=100, - num_train_epochs=6, - learning_rate=3e-4, - save_total_limit=2, - remove_unused_columns=False, - push_to_hub=True, - report_to='tensorboard', - load_best_model_at_end=True, ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3785 | 1.0 | 370 | 0.3015 | 0.9229 | | 0.1868 | 2.0 | 740 | 0.2318 | 0.9323 | | 0.1572 | 3.0 | 1110 | 0.2077 | 0.9432 | | 0.1402 | 4.0 | 1480 | 0.2030 | 0.9405 | | 0.1278 | 5.0 | 1850 | 0.2031 | 0.9418 | ### Framework versions - Transformers 4.50.0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.1
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
sergioGGG/clear_cloudy_classifier_Pr2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # clear_cloudy_classifier_Pr2 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0738 - Accuracy: 0.9732 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2003 | 1.0 | 74 | 0.1635 | 0.9459 | | 0.1059 | 2.0 | 148 | 0.1415 | 0.9497 | | 0.1044 | 3.0 | 222 | 0.1098 | 0.9621 | | 0.0608 | 4.0 | 296 | 0.1001 | 0.9621 | | 0.0897 | 5.0 | 370 | 0.0844 | 0.9719 | | 0.0627 | 6.0 | 444 | 0.0952 | 0.9681 | | 0.0376 | 7.0 | 518 | 0.0784 | 0.9723 | | 0.033 | 8.0 | 592 | 0.0738 | 0.9732 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "clear", "cloudy" ]
Dugerij/vit-base-newspaper_for_segmetation_classifier
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-newspaper_for_segmetation_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Dugerij/newspaper_parser_dataset dataset. It achieves the following results on the evaluation set: - Loss: 0.0104 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 1337 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0429 | 1.0 | 100 | 0.0409 | 0.9929 | | 0.0159 | 2.0 | 200 | 0.0181 | 1.0 | | 0.0113 | 3.0 | 300 | 0.0139 | 1.0 | | 0.0119 | 4.0 | 400 | 0.0112 | 1.0 | | 0.0093 | 5.0 | 500 | 0.0104 | 1.0 | ### Framework versions - Transformers 4.52.0.dev0 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.0
[ "0", "1" ]
Sychol/ViT_beans
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT_beans This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.7870 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 17 | 0.7870 | | No log | 2.0 | 34 | 0.6131 | | No log | 3.0 | 51 | 0.5534 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "label_0", "label_1", "label_2" ]
zeromin-03/ViT_beans
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT_beans This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.7839 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 17 | 0.7839 | | No log | 2.0 | 34 | 0.6245 | | No log | 3.0 | 51 | 0.5703 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "label_0", "label_1", "label_2" ]
j200chi/ViT_beans
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT_beans This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.7828 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 17 | 0.7828 | | No log | 2.0 | 34 | 0.6187 | | No log | 3.0 | 51 | 0.5645 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "label_0", "label_1", "label_2" ]
Skrrrrrrrr/ViT_beans
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT_beans This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.8174 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 17 | 0.8174 | | No log | 2.0 | 34 | 0.6596 | | No log | 3.0 | 51 | 0.6012 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "label_0", "label_1", "label_2" ]
Meoharago/ViT_beans
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT_beans This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.7975 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 17 | 0.7975 | | No log | 2.0 | 34 | 0.6232 | | No log | 3.0 | 51 | 0.5676 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "label_0", "label_1", "label_2" ]
jih123/ViT_beans
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT_beans This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.7555 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 17 | 0.7555 | | No log | 2.0 | 34 | 0.5963 | | No log | 3.0 | 51 | 0.5424 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "label_0", "label_1", "label_2" ]
hbjoo/ViT_beans
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT_beans This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.8052 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 17 | 0.8052 | | No log | 2.0 | 34 | 0.6313 | | No log | 3.0 | 51 | 0.5755 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "label_0", "label_1", "label_2" ]
minhyuckkkkk/ViT_beans
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT_beans This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.8113 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 17 | 0.8113 | | No log | 2.0 | 34 | 0.7223 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "label_0", "label_1", "label_2" ]
Snjie/ViT_beans
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT_beans This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.5462 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 17 | 0.5462 | | No log | 2.0 | 34 | 0.4191 | | No log | 3.0 | 51 | 0.3789 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "label_0", "label_1", "label_2" ]
valla2345/ViT_beans
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT_beans This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.8045 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 17 | 0.8045 | | No log | 2.0 | 34 | 0.6510 | | No log | 3.0 | 51 | 0.5973 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "label_0", "label_1", "label_2" ]
Uniteworker/ViT_beans
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT_beans This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.7525 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 17 | 0.7525 | | No log | 2.0 | 34 | 0.6061 | | No log | 3.0 | 51 | 0.5516 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "label_0", "label_1", "label_2" ]
myonghyun/ViT_beans
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT_beans This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.4327 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 36 | 0.4327 | | No log | 2.0 | 72 | 0.2069 | | No log | 3.0 | 108 | 0.1673 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "label_0", "label_1", "label_2" ]
z1515/ViT_beans
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT_beans This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.8021 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 17 | 0.8021 | | No log | 2.0 | 34 | 0.7059 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "label_0", "label_1", "label_2" ]
yunseyoung94/ViT_beans
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT_beans This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.8044 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 17 | 0.8044 | | No log | 2.0 | 34 | 0.7073 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "label_0", "label_1", "label_2" ]
halfmoonbear/ViT_beans
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT_beans This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.8145 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 17 | 0.8145 | | No log | 2.0 | 34 | 0.6428 | | No log | 3.0 | 51 | 0.5840 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "label_0", "label_1", "label_2" ]
SangjeHwang/ViT_beans
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT_beans This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.7702 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 17 | 0.7702 | | No log | 2.0 | 34 | 0.5934 | | No log | 3.0 | 51 | 0.5358 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "label_0", "label_1", "label_2" ]
gjseh115/ViT_dog_food
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT_dog_food This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the dog_food dataset. It achieves the following results on the evaluation set: - Loss: 0.6250 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 36 | 0.6250 | | No log | 2.0 | 72 | 0.2477 | | No log | 3.0 | 108 | 0.1574 | | No log | 4.0 | 144 | 0.1282 | | No log | 5.0 | 180 | 0.1140 | | No log | 6.0 | 216 | 0.1067 | | No log | 7.0 | 252 | 0.1027 | | No log | 8.0 | 288 | 0.1014 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "label_0", "label_1", "label_2", "label_3", "label_4" ]
Uniteworker/ViT_dog_food
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT_dog_food This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the dog_food dataset. It achieves the following results on the evaluation set: - Loss: 0.4279 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 36 | 0.4279 | | No log | 2.0 | 72 | 0.2084 | | No log | 3.0 | 108 | 0.1726 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "label_0", "label_1", "label_2" ]
Meoharago/dog_food
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dog_food This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the dog_food dataset. It achieves the following results on the evaluation set: - Loss: 0.4230 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 36 | 0.4230 | | No log | 2.0 | 72 | 0.1626 | | No log | 3.0 | 108 | 0.1086 | | No log | 4.0 | 144 | 0.0915 | | No log | 5.0 | 180 | 0.0869 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "label_0", "label_1", "label_2" ]
zeromin-03/dog_food
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dog_food This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the dog_food dataset. It achieves the following results on the evaluation set: - Loss: 0.3821 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 36 | 0.3821 | | No log | 2.0 | 72 | 0.1479 | | No log | 3.0 | 108 | 0.0985 | | No log | 4.0 | 144 | 0.0833 | | No log | 5.0 | 180 | 0.0786 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "label_0", "label_1", "label_2" ]
cjhan5696/ViT_dog_food
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT_dog_food This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the dog_food dataset. It achieves the following results on the evaluation set: - Loss: 0.1243 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 36 | 0.1243 | | No log | 2.0 | 72 | 0.0759 | | No log | 3.0 | 108 | 0.0591 | | No log | 4.0 | 144 | 0.0532 | | No log | 5.0 | 180 | 0.0512 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "label_0", "label_1", "label_2" ]
cjhan5696/ViT_beans
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT_beans This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.7571 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 17 | 0.7571 | | No log | 2.0 | 34 | 0.5783 | | No log | 3.0 | 51 | 0.5211 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "label_0", "label_1", "label_2" ]
prithivMLmods/Fashion-Product-Gender
![16.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/1rf5M6UtlzkYJOeFx0yTQ.png) # **Fashion-Product-Gender** > **Fashion-Product-Gender** is a vision model fine-tuned from **google/siglip2-base-patch16-224** using the **SiglipForImageClassification** architecture. It classifies fashion product images into one of five gender categories. ```py Classification Report: precision recall f1-score support Boys 0.4127 0.0940 0.1531 830 Girls 0.5000 0.0061 0.0121 655 Men 0.7506 0.8393 0.7925 22104 Unisex 0.5714 0.0188 0.0364 2126 Women 0.7317 0.7609 0.7460 18357 accuracy 0.7407 44072 macro avg 0.5933 0.3438 0.3480 44072 weighted avg 0.7240 0.7407 0.7130 44072 ``` The model predicts one of the following gender categories for fashion products: - **0:** Boys - **1:** Girls - **2:** Men - **3:** Unisex - **4:** Women --- # **Run with Transformers 🤗** ```python !pip install -q transformers torch pillow gradio ``` ```python import gradio as gr from transformers import AutoImageProcessor, SiglipForImageClassification from PIL import Image import torch # Load model and processor model_name = "prithivMLmods/Fashion-Product-Gender" # Replace with your actual model path model = SiglipForImageClassification.from_pretrained(model_name) processor = AutoImageProcessor.from_pretrained(model_name) # Label mapping id2label = { 0: "Boys", 1: "Girls", 2: "Men", 3: "Unisex", 4: "Women" } def classify_gender(image): """Predicts the gender category for a fashion product.""" image = Image.fromarray(image).convert("RGB") inputs = processor(images=image, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist() predictions = {id2label[i]: round(probs[i], 3) for i in range(len(probs))} return predictions # Gradio interface iface = gr.Interface( fn=classify_gender, inputs=gr.Image(type="numpy"), outputs=gr.Label(label="Gender Prediction Scores"), title="Fashion-Product-Gender", description="Upload a fashion product image to predict the target gender category (Boys, Girls, Men, Unisex, Women)." ) # Launch the app if __name__ == "__main__": iface.launch() ``` --- # **Intended Use** This model is best suited for: - **Fashion E-commerce tagging and search** - **Personalized recommendations based on gender** - **Catalog organization and gender-based filters** - **Retail analytics and demographic insights**
[ "boys", "girls", "men", "unisex", "women" ]
prithivMLmods/Fashion-Product-masterCategory
![13.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/ME_3kB4xkOUBuKpnHj5CX.png) # **Fashion-Product-masterCategory** > **Fashion-Product-masterCategory** is a vision model fine-tuned from **google/siglip2-base-patch16-224** using the **SiglipForImageClassification** architecture. It classifies fashion product images into high-level master categories. ```py Classification Report: precision recall f1-score support Accessories 0.9611 0.9698 0.9654 11244 Apparel 0.9855 0.9919 0.9887 21361 Footwear 0.9952 0.9936 0.9944 9197 Free Items 0.0000 0.0000 0.0000 105 Home 0.0000 0.0000 0.0000 1 Personal Care 0.9638 0.9219 0.9424 2139 Sporting Goods 1.0000 0.0400 0.0769 25 accuracy 0.9803 44072 macro avg 0.7008 0.5596 0.5668 44072 weighted avg 0.9779 0.9803 0.9788 44072 ``` The model predicts one of the following master categories: - **0:** Accessories - **1:** Apparel - **2:** Footwear - **3:** Free Items - **4:** Home - **5:** Personal Care - **6:** Sporting Goods --- # **Run with Transformers 🤗** ```python !pip install -q transformers torch pillow gradio ``` ```python import gradio as gr from transformers import AutoImageProcessor, SiglipForImageClassification from PIL import Image import torch # Load model and processor model_name = "prithivMLmods/Fashion-Product-masterCategory" # Replace with your actual model path model = SiglipForImageClassification.from_pretrained(model_name) processor = AutoImageProcessor.from_pretrained(model_name) # Label mapping id2label = { 0: "Accessories", 1: "Apparel", 2: "Footwear", 3: "Free Items", 4: "Home", 5: "Personal Care", 6: "Sporting Goods" } def classify_master_category(image): """Predicts the master category of a fashion product.""" image = Image.fromarray(image).convert("RGB") inputs = processor(images=image, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist() predictions = {id2label[i]: round(probs[i], 3) for i in range(len(probs))} return predictions # Gradio interface iface = gr.Interface( fn=classify_master_category, inputs=gr.Image(type="numpy"), outputs=gr.Label(label="Master Category Prediction Scores"), title="Fashion-Product-masterCategory", description="Upload a fashion product image to predict its master category (Accessories, Apparel, Footwear, etc.)." ) # Launch the app if __name__ == "__main__": iface.launch() ``` --- # **Intended Use** This model can be applied to: - **E-commerce product categorization** - **Automated tagging of product catalogs** - **Enhancing search and filtering options** - **Data annotation pipelines for fashion datasets**
[ "accessories", "apparel", "footwear", "free items", "home", "personal care", "sporting goods" ]
wuwo7057/finetuned-indian-food
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-indian-food This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the indian_food_images dataset. It achieves the following results on the evaluation set: - Loss: 0.0871 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
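Since the card leaves usage details open, here is a hedged top-1 inference sketch using the processor/model pair directly; the repo id is taken from the heading above, and `dish.jpg` is a placeholder path.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Assumption: the fine-tuned checkpoint is hosted under this repo id.
model_name = "wuwo7057/finetuned-indian-food"
processor = AutoImageProcessor.from_pretrained(model_name)
model = AutoModelForImageClassification.from_pretrained(model_name)

image = Image.open("dish.jpg").convert("RGB")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_idx])
```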
[ "刈包", "小籠包" ]
prithivMLmods/Fashion-Product-subCategory
![19.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/FO-WZ6tA2N-HrU_SXX7Ls.png) # **Fashion-Product-subCategory** > **Fashion-Product-subCategory** is a vision model fine-tuned from **google/siglip2-base-patch16-224** using the **SiglipForImageClassification** architecture. It classifies fashion product images into 45 fine-grained subcategories for retail and e-commerce applications. ```py Classification Report: precision recall f1-score support Accessories 0.9700 0.7519 0.8472 129 Apparel Set 0.9011 0.7736 0.8325 106 Bags 0.9275 0.9767 0.9515 3053 Bath and Body 1.0000 0.1111 0.2000 9 Beauty Accessories 0.0000 0.0000 0.0000 3 Belts 0.9684 0.9840 0.9761 811 Bottomwear 0.9445 0.9754 0.9597 2685 Cufflinks 0.8870 0.9444 0.9148 108 Dress 0.7857 0.7364 0.7603 478 Eyes 0.7500 0.0882 0.1579 34 Eyewear 0.9898 0.9991 0.9944 1073 Flip Flops 0.8558 0.9102 0.8822 913 Fragrance 0.9280 0.9530 0.9404 1001 Free Gifts 0.0000 0.0000 0.0000 104 Gloves 0.7000 0.3500 0.4667 20 Hair 0.8824 0.7895 0.8333 19 Headwear 0.9403 0.8601 0.8984 293 Home Furnishing 0.0000 0.0000 0.0000 1 Innerwear 0.9763 0.9347 0.9550 1806 Jewellery 0.9689 0.9527 0.9607 1079 Lips 0.9292 0.9271 0.9282 425 Loungewear and Nightwear 0.7604 0.6703 0.7125 464 Makeup 0.7904 0.8745 0.8303 263 Mufflers 1.0000 0.0526 0.1000 38 Nails 0.9450 0.9892 0.9666 278 Perfumes 0.0000 0.0000 0.0000 6 Sandal 0.8720 0.7940 0.8312 961 Saree 0.9320 0.9953 0.9626 427 Scarves 0.6316 0.7119 0.6693 118 Shoe Accessories 0.0000 0.0000 0.0000 4 Shoes 0.9759 0.9799 0.9779 7323 Skin 0.5455 0.4528 0.4948 53 Skin Care 0.7333 0.4490 0.5570 49 Socks 0.9417 0.9728 0.9570 698 Sports Accessories 0.0000 0.0000 0.0000 3 Sports Equipment 0.7083 0.8095 0.7556 21 Stoles 0.8871 0.6111 0.7237 90 Ties 0.9808 0.9884 0.9846 258 Topwear 0.9822 0.9914 0.9867 15383 Umbrellas 1.0000 1.0000 1.0000 6 Vouchers 0.0000 0.0000 0.0000 1 Wallets 0.9376 0.8605 0.8974 925 Watches 0.9790 0.9921 0.9855 2542 Water Bottle 0.0000 0.0000 0.0000 7 Wristbands 0.0000 0.0000 0.0000 4 accuracy 0.9568 44072 macro avg 0.7091 0.6270 0.6412 44072 weighted avg 0.9535 0.9568 0.9540 44072 ``` The model predicts one of the following product subcategories: ```json "id2label": { "0": "Accessories", "1": "Apparel Set", "2": "Bags", "3": "Bath and Body", "4": "Beauty Accessories", "5": "Belts", "6": "Bottomwear", "7": "Cufflinks", "8": "Dress", "9": "Eyes", "10": "Eyewear", "11": "Flip Flops", "12": "Fragrance", "13": "Free Gifts", "14": "Gloves", "15": "Hair", "16": "Headwear", "17": "Home Furnishing", "18": "Innerwear", "19": "Jewellery", "20": "Lips", "21": "Loungewear and Nightwear", "22": "Makeup", "23": "Mufflers", "24": "Nails", "25": "Perfumes", "26": "Sandal", "27": "Saree", "28": "Scarves", "29": "Shoe Accessories", "30": "Shoes", "31": "Skin", "32": "Skin Care", "33": "Socks", "34": "Sports Accessories", "35": "Sports Equipment", "36": "Stoles", "37": "Ties", "38": "Topwear", "39": "Umbrellas", "40": "Vouchers", "41": "Wallets", "42": "Watches", "43": "Water Bottle", "44": "Wristbands" } ``` --- # **Run with Transformers 🤗** ```python !pip install -q transformers torch pillow gradio ``` ```python import gradio as gr from transformers import AutoImageProcessor, SiglipForImageClassification from PIL import Image import torch # Load model and processor model_name = "prithivMLmods/Fashion-Product-subCategory" # Replace with your actual model path model = SiglipForImageClassification.from_pretrained(model_name) processor = AutoImageProcessor.from_pretrained(model_name) # Label mapping 
id2label = { 0: "Accessories", 1: "Apparel Set", 2: "Bags", 3: "Bath and Body", 4: "Beauty Accessories", 5: "Belts", 6: "Bottomwear", 7: "Cufflinks", 8: "Dress", 9: "Eyes", 10: "Eyewear", 11: "Flip Flops", 12: "Fragrance", 13: "Free Gifts", 14: "Gloves", 15: "Hair", 16: "Headwear", 17: "Home Furnishing", 18: "Innerwear", 19: "Jewellery", 20: "Lips", 21: "Loungewear and Nightwear", 22: "Makeup", 23: "Mufflers", 24: "Nails", 25: "Perfumes", 26: "Sandal", 27: "Saree", 28: "Scarves", 29: "Shoe Accessories", 30: "Shoes", 31: "Skin", 32: "Skin Care", 33: "Socks", 34: "Sports Accessories", 35: "Sports Equipment", 36: "Stoles", 37: "Ties", 38: "Topwear", 39: "Umbrellas", 40: "Vouchers", 41: "Wallets", 42: "Watches", 43: "Water Bottle", 44: "Wristbands" } def classify_subcategory(image): """Predicts the subcategory of a fashion product.""" image = Image.fromarray(image).convert("RGB") inputs = processor(images=image, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist() predictions = {id2label[i]: round(probs[i], 3) for i in range(len(probs))} return predictions # Gradio interface iface = gr.Interface( fn=classify_subcategory, inputs=gr.Image(type="numpy"), outputs=gr.Label(label="Subcategory Prediction Scores"), title="Fashion-Product-subCategory", description="Upload a fashion product image to predict its subcategory (e.g., Dress, Shoes, Accessories, etc.)." ) # Launch the app if __name__ == "__main__": iface.launch() ``` --- # **Intended Use** This model is best suited for: - **Product Subcategory Tagging**: Automatically assign fine-grained subcategories to fashion product listings. - **Improved Search & Filters**: Enhance customer experience by enabling better filtering and browsing. - **Catalog Structuring**: Streamline fashion catalog organization at scale for large e-commerce platforms. - **Automated Inventory Insights**: Identify trends in product categories for sales, inventory, and marketing analysis.
[ "accessories", "apparel set", "bags", "bath and body", "beauty accessories", "belts", "bottomwear", "cufflinks", "dress", "eyes", "eyewear", "flip flops", "fragrance", "free gifts", "gloves", "hair", "headwear", "home furnishing", "innerwear", "jewellery", "lips", "loungewear and nightwear", "makeup", "mufflers", "nails", "perfumes", "sandal", "saree", "scarves", "shoe accessories", "shoes", "skin", "skin care", "socks", "sports accessories", "sports equipment", "stoles", "ties", "topwear", "umbrellas", "vouchers", "wallets", "watches", "water bottle", "wristbands" ]
prithivMLmods/Fashion-Product-articleType
![17.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/dHy-iubqlC4_rsb6xWAl-.png) # **Fashion-Product-articleType** > **Fashion-Product-articleType** is a vision model fine-tuned from **google/siglip2-base-patch16-224** using the **SiglipForImageClassification** architecture. It classifies fashion product images into one of 141 article types. ```py Classification Report: precision recall f1-score support Accessory Gift Set 0.9898 1.0000 0.9949 97 Baby Dolls 0.6667 0.1429 0.2353 14 Backpacks 0.9582 0.9503 0.9542 724 Bangle 0.8421 0.7529 0.7950 85 Basketballs 0.7500 0.9231 0.8276 13 Bath Robe 0.8571 0.7059 0.7742 17 Beauty Accessory 0.0000 0.0000 0.0000 3 Belts 0.9842 0.9938 0.9890 813 Blazers 0.8333 0.6250 0.7143 8 Body Lotion 1.0000 0.3333 0.5000 3 Body Wash and Scrub 0.0000 0.0000 0.0000 1 Booties 0.6875 0.9167 0.7857 12 Boxers 0.8679 0.8846 0.8762 52 Bra 0.9614 0.9916 0.9763 477 Bracelet 0.7656 0.7424 0.7538 66 Briefs 0.9731 0.9811 0.9771 847 Camisoles 0.7500 0.5385 0.6269 39 Capris 0.6558 0.8057 0.7231 175 Caps 0.9317 0.9647 0.9479 283 Casual Shoes 0.8338 0.8643 0.8488 2845 Churidar 0.7500 0.5000 0.6000 30 Clothing Set 0.7500 0.3750 0.5000 8 Clutches 0.8015 0.7431 0.7712 288 Compact 0.8864 1.0000 0.9398 39 Concealer 0.7143 0.9091 0.8000 11 Cufflinks 0.9811 0.9811 0.9811 106 Cushion Covers 0.0000 0.0000 0.0000 1 Deodorant 0.8946 0.9539 0.9233 347 Dresses 0.7956 0.8642 0.8285 464 Duffel Bag 0.8947 0.5795 0.7034 88 Dupatta 0.9008 0.9397 0.9198 116 Earrings 0.9952 0.9880 0.9916 416 Eye Cream 1.0000 0.2500 0.4000 4 Eyeshadow 0.9062 0.9062 0.9062 32 Face Moisturisers 0.5846 0.8085 0.6786 47 Face Scrub and Exfoliator 0.0000 0.0000 0.0000 4 Face Serum and Gel 0.0000 0.0000 0.0000 2 Face Wash and Cleanser 0.6667 0.6250 0.6452 16 Flats 0.5764 0.2640 0.3621 500 Flip Flops 0.8573 0.9464 0.8996 914 Footballs 1.0000 0.3750 0.5455 8 Formal Shoes 0.8246 0.8932 0.8576 637 Foundation and Primer 0.9524 0.8696 0.9091 69 Fragrance Gift Set 0.6842 0.9123 0.7820 57 Free Gifts 0.9000 0.0989 0.1782 91 Gloves 0.9375 0.7500 0.8333 20 Hair Accessory 0.0000 0.0000 0.0000 1 Hair Colour 0.8636 1.0000 0.9268 19 Handbags 0.8840 0.9744 0.9270 1759 Hat 0.0000 0.0000 0.0000 3 Headband 1.0000 0.5714 0.7273 7 Heels 0.7622 0.9206 0.8340 1323 Highlighter and Blush 0.9697 0.8421 0.9014 38 Innerwear Vests 0.9056 0.8719 0.8884 242 Ipad 0.0000 0.0000 0.0000 1 Jackets 0.7950 0.6163 0.6943 258 Jeans 0.8118 0.9385 0.8706 602 Jeggings 1.0000 0.0882 0.1622 34 Jewellery Set 0.9333 0.9655 0.9492 58 Jumpsuit 0.0000 0.0000 0.0000 16 Kajal and Eyeliner 0.7241 0.8936 0.8000 94 Key chain 0.0000 0.0000 0.0000 2 Kurta Sets 0.8774 0.9894 0.9300 94 Kurtas 0.9348 0.9414 0.9381 1844 Kurtis 0.5000 0.5427 0.5205 234 Laptop Bag 0.6338 0.5488 0.5882 82 Leggings 0.7590 0.8362 0.7957 177 Lehenga Choli 0.0000 0.0000 0.0000 4 Lip Care 0.8000 0.5714 0.6667 7 Lip Gloss 0.8718 0.9358 0.9027 109 Lip Liner 0.8846 0.5111 0.6479 45 Lip Plumper 1.0000 0.5000 0.6667 4 Lipstick 0.9660 0.9846 0.9752 260 Lounge Pants 0.7727 0.2787 0.4096 61 Lounge Shorts 1.0000 0.1176 0.2105 34 Lounge Tshirts 0.5000 0.6667 0.5714 3 Makeup Remover 0.0000 0.0000 0.0000 2 Mascara 0.6000 0.5000 0.5455 12 Mask and Peel 0.7778 0.7000 0.7368 10 Mens Grooming Kit 0.0000 0.0000 0.0000 1 Messenger Bag 0.6818 0.3409 0.4545 44 Mobile Pouch 0.5714 0.5106 0.5393 47 Mufflers 0.8056 0.7632 0.7838 38 Nail Essentials 1.0000 0.5000 0.6667 6 Nail Polish 0.9928 0.9964 0.9946 278 Necklace and Chains 0.9375 0.9375 0.9375 160 Nehru Jackets 0.0000 0.0000 0.0000 5 Night suits 
0.8792 0.9291 0.9034 141 Nightdress 0.7730 0.7606 0.7668 188 Patiala 1.0000 0.7368 0.8485 38 Pendant 0.9181 0.8920 0.9049 176 Perfume and Body Mist 0.9463 0.9055 0.9254 603 Rain Jacket 0.0000 0.0000 0.0000 7 Ring 0.8952 0.9407 0.9174 118 Robe 0.0000 0.0000 0.0000 4 Rompers 1.0000 1.0000 1.0000 12 Rucksacks 0.7143 0.4545 0.5556 11 Salwar 0.6122 0.9375 0.7407 32 Salwar and Dupatta 1.0000 0.8571 0.9231 7 Sandals 0.8618 0.8291 0.8451 895 Sarees 0.9660 0.9977 0.9816 427 Scarves 0.8333 0.7983 0.8155 119 Shapewear 0.2500 0.1111 0.1538 9 Shirts 0.9360 0.9614 0.9485 3212 Shoe Accessories 0.0000 0.0000 0.0000 3 Shoe Laces 0.0000 0.0000 0.0000 1 Shorts 0.8986 0.9232 0.9107 547 Shrug 0.0000 0.0000 0.0000 6 Skirts 0.8293 0.7969 0.8127 128 Socks 0.9869 0.9883 0.9876 686 Sports Sandals 0.6111 0.1642 0.2588 67 Sports Shoes 0.8880 0.8100 0.8472 2016 Stockings 0.8824 0.9375 0.9091 32 Stoles 0.8690 0.8111 0.8391 90 Sunglasses 0.9898 0.9991 0.9944 1073 Sunscreen 1.0000 0.7333 0.8462 15 Suspenders 1.0000 1.0000 1.0000 40 Sweaters 0.7488 0.5812 0.6545 277 Sweatshirts 0.6348 0.7930 0.7051 285 Swimwear 0.9000 0.5294 0.6667 17 Tablet Sleeve 0.0000 0.0000 0.0000 3 Ties 1.0000 0.9886 0.9943 263 Ties and Cufflinks 0.0000 0.0000 0.0000 2 Tights 1.0000 0.3333 0.5000 9 Toner 0.0000 0.0000 0.0000 2 Tops 0.7591 0.7208 0.7394 1762 Track Pants 0.8537 0.8257 0.8395 304 Tracksuits 0.8750 0.9655 0.9180 29 Travel Accessory 1.0000 0.1875 0.3158 16 Trolley Bag 0.0000 0.0000 0.0000 3 Trousers 0.9428 0.8396 0.8882 530 Trunk 0.8819 0.9071 0.8944 140 Tshirts 0.9273 0.9580 0.9424 7065 Tunics 0.6129 0.1659 0.2612 229 Umbrellas 1.0000 1.0000 1.0000 6 Waist Pouch 1.0000 0.1176 0.2105 17 Waistcoat 1.0000 0.2667 0.4211 15 Wallets 0.9491 0.9235 0.9361 928 Watches 0.9817 0.9929 0.9873 2542 Water Bottle 1.0000 0.8182 0.9000 11 Wristbands 0.8571 0.8571 0.8571 7 accuracy 0.8911 44072 macro avg 0.7131 0.6174 0.6361 44072 weighted avg 0.8877 0.8911 0.8846 44072 ``` The model predicts one of the following **article types** for fashion products, such as: - **0:** Accessory Gift Set - **1:** Baby Dolls - **2:** Backpacks - **3:** Bangle - **...** - **140:** Wristbands --- # **Run with Transformers 🤗** ```bash pip install -q transformers torch pillow gradio ``` ```python import gradio as gr from transformers import AutoImageProcessor, SiglipForImageClassification from PIL import Image import torch # Load model and processor model_name = "prithivMLmods/Fashion-Product-articleType" # Replace with your actual model path model = SiglipForImageClassification.from_pretrained(model_name) processor = AutoImageProcessor.from_pretrained(model_name) # Label mapping id2label = { 0: "Accessory Gift Set", 1: "Baby Dolls", 2: "Backpacks", 3: "Bangle", 4: "Basketballs", 5: "Bath Robe", 6: "Beauty Accessory", 7: "Belts", 8: "Blazers", 9: "Body Lotion", 10: "Body Wash and Scrub", 11: "Booties", 12: "Boxers", 13: "Bra", 14: "Bracelet", 15: "Briefs", 16: "Camisoles", 17: "Capris", 18: "Caps", 19: "Casual Shoes", 20: "Churidar", 21: "Clothing Set", 22: "Clutches", 23: "Compact", 24: "Concealer", 25: "Cufflinks", 26: "Cushion Covers", 27: "Deodorant", 28: "Dresses", 29: "Duffel Bag", 30: "Dupatta", 31: "Earrings", 32: "Eye Cream", 33: "Eyeshadow", 34: "Face Moisturisers", 35: "Face Scrub and Exfoliator", 36: "Face Serum and Gel", 37: "Face Wash and Cleanser", 38: "Flats", 39: "Flip Flops", 40: "Footballs", 41: "Formal Shoes", 42: "Foundation and Primer", 43: "Fragrance Gift Set", 44: "Free Gifts", 45: "Gloves", 46: "Hair Accessory", 47: "Hair Colour", 48: "Handbags", 49: 
"Hat", 50: "Headband", 51: "Heels", 52: "Highlighter and Blush", 53: "Innerwear Vests", 54: "Ipad", 55: "Jackets", 56: "Jeans", 57: "Jeggings", 58: "Jewellery Set", 59: "Jumpsuit", 60: "Kajal and Eyeliner", 61: "Key chain", 62: "Kurta Sets", 63: "Kurtas", 64: "Kurtis", 65: "Laptop Bag", 66: "Leggings", 67: "Lehenga Choli", 68: "Lip Care", 69: "Lip Gloss", 70: "Lip Liner", 71: "Lip Plumper", 72: "Lipstick", 73: "Lounge Pants", 74: "Lounge Shorts", 75: "Lounge Tshirts", 76: "Makeup Remover", 77: "Mascara", 78: "Mask and Peel", 79: "Mens Grooming Kit", 80: "Messenger Bag", 81: "Mobile Pouch", 82: "Mufflers", 83: "Nail Essentials", 84: "Nail Polish", 85: "Necklace and Chains", 86: "Nehru Jackets", 87: "Night suits", 88: "Nightdress", 89: "Patiala", 90: "Pendant", 91: "Perfume and Body Mist", 92: "Rain Jacket", 93: "Ring", 94: "Robe", 95: "Rompers", 96: "Rucksacks", 97: "Salwar", 98: "Salwar and Dupatta", 99: "Sandals", 100: "Sarees", 101: "Scarves", 102: "Shapewear", 103: "Shirts", 104: "Shoe Accessories", 105: "Shoe Laces", 106: "Shorts", 107: "Shrug", 108: "Skirts", 109: "Socks", 110: "Sports Sandals", 111: "Sports Shoes", 112: "Stockings", 113: "Stoles", 114: "Sunglasses", 115: "Sunscreen", 116: "Suspenders", 117: "Sweaters", 118: "Sweatshirts", 119: "Swimwear", 120: "Tablet Sleeve", 121: "Ties", 122: "Ties and Cufflinks", 123: "Tights", 124: "Toner", 125: "Tops", 126: "Track Pants", 127: "Tracksuits", 128: "Travel Accessory", 129: "Trolley Bag", 130: "Trousers", 131: "Trunk", 132: "Tshirts", 133: "Tunics", 134: "Umbrellas", 135: "Waist Pouch", 136: "Waistcoat", 137: "Wallets", 138: "Watches", 139: "Water Bottle", 140: "Wristbands" } def classify_article_type(image): """Predicts the article type for a fashion product.""" image = Image.fromarray(image).convert("RGB") inputs = processor(images=image, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist() predictions = {id2label[i]: round(probs[i], 3) for i in range(len(probs))} return predictions # Gradio interface iface = gr.Interface( fn=classify_article_type, inputs=gr.Image(type="numpy"), outputs=gr.Label(label="Article Type Prediction Scores"), title="Fashion-Product-articleType", description="Upload a fashion product image to predict its article type (e.g., T-shirt, Jeans, Handbag, etc)." ) # Launch the app if __name__ == "__main__": iface.launch() ``` --- # **Intended Use** This model is best suited for: - **Fashion E-commerce Tagging & Categorization** - **Automated Product Labeling for Catalogs** - **Enhanced Product Search & Filtering** - **Retail Analytics and Product Type Breakdown**
[ "accessory gift set", "baby dolls", "backpacks", "bangle", "basketballs", "bath robe", "beauty accessory", "belts", "blazers", "body lotion", "body wash and scrub", "booties", "boxers", "bra", "bracelet", "briefs", "camisoles", "capris", "caps", "casual shoes", "churidar", "clothing set", "clutches", "compact", "concealer", "cufflinks", "cushion covers", "deodorant", "dresses", "duffel bag", "dupatta", "earrings", "eye cream", "eyeshadow", "face moisturisers", "face scrub and exfoliator", "face serum and gel", "face wash and cleanser", "flats", "flip flops", "footballs", "formal shoes", "foundation and primer", "fragrance gift set", "free gifts", "gloves", "hair accessory", "hair colour", "handbags", "hat", "headband", "heels", "highlighter and blush", "innerwear vests", "ipad", "jackets", "jeans", "jeggings", "jewellery set", "jumpsuit", "kajal and eyeliner", "key chain", "kurta sets", "kurtas", "kurtis", "laptop bag", "leggings", "lehenga choli", "lip care", "lip gloss", "lip liner", "lip plumper", "lipstick", "lounge pants", "lounge shorts", "lounge tshirts", "makeup remover", "mascara", "mask and peel", "mens grooming kit", "messenger bag", "mobile pouch", "mufflers", "nail essentials", "nail polish", "necklace and chains", "nehru jackets", "night suits", "nightdress", "patiala", "pendant", "perfume and body mist", "rain jacket", "ring", "robe", "rompers", "rucksacks", "salwar", "salwar and dupatta", "sandals", "sarees", "scarves", "shapewear", "shirts", "shoe accessories", "shoe laces", "shorts", "shrug", "skirts", "socks", "sports sandals", "sports shoes", "stockings", "stoles", "sunglasses", "sunscreen", "suspenders", "sweaters", "sweatshirts", "swimwear", "tablet sleeve", "ties", "ties and cufflinks", "tights", "toner", "tops", "track pants", "tracksuits", "travel accessory", "trolley bag", "trousers", "trunk", "tshirts", "tunics", "umbrellas", "waist pouch", "waistcoat", "wallets", "watches", "water bottle", "wristbands" ]
prithivMLmods/Fashion-Product-baseColour
![18.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/B0YuDqYBhhE310ZHAFDFE.png) # **Fashion-Product-baseColour** > **Fashion-Product-baseColour** is a visual classification model fine-tuned from **google/siglip2-base-patch16-224** using the **SiglipForImageClassification** architecture. It predicts the **base color** of fashion products from images — enabling accurate tagging, search, and recommendation in fashion-related applications. ```py Classification Report: precision recall f1-score support Beige 0.4338 0.5409 0.4815 745 Black 0.8051 0.8656 0.8342 9699 Blue 0.7513 0.7858 0.7682 4906 Bronze 0.0000 0.0000 0.0000 89 Brown 0.6812 0.7596 0.7183 3440 Burgundy 0.0000 0.0000 0.0000 44 Charcoal 0.4941 0.1842 0.2684 228 Coffee Brown 0.0000 0.0000 0.0000 29 Copper 0.5000 0.0120 0.0235 83 Cream 0.3940 0.3446 0.3677 383 Fluorescent Green 0.0000 0.0000 0.0000 5 Gold 0.4935 0.6747 0.5701 621 Green 0.7286 0.7760 0.7516 2103 Grey 0.6313 0.5002 0.5581 2735 Grey Melange 0.5728 0.4041 0.4739 146 Khaki 0.3540 0.2878 0.3175 139 Lavender 0.5049 0.3250 0.3954 160 Lime Green 0.0000 0.0000 0.0000 5 Magenta 0.5909 0.1016 0.1733 128 Maroon 0.5121 0.2929 0.3727 577 Mauve 0.0000 0.0000 0.0000 28 Metallic 0.0000 0.0000 0.0000 41 Multi 0.4005 0.3832 0.3917 394 Mushroom Brown 0.0000 0.0000 0.0000 16 Mustard 0.4912 0.2887 0.3636 97 Navy Blue 0.6290 0.4905 0.5512 1784 Nude 0.0000 0.0000 0.0000 21 Off White 0.5789 0.2418 0.3411 182 Olive 0.5259 0.5208 0.5233 409 Orange 0.6838 0.6119 0.6458 523 Peach 0.4727 0.4216 0.4457 185 Pink 0.6912 0.7423 0.7158 1824 Purple 0.6846 0.7568 0.7189 1612 Red 0.6916 0.8273 0.7534 2432 Rose 0.0000 0.0000 0.0000 21 Rust 0.5000 0.1692 0.2529 65 Sea Green 0.0000 0.0000 0.0000 22 Silver 0.6088 0.4830 0.5387 1089 Skin 0.5479 0.6319 0.5869 163 Steel 0.2857 0.0381 0.0672 315 Tan 0.6667 0.0357 0.0678 112 Taupe 0.0000 0.0000 0.0000 11 Teal 0.4857 0.2857 0.3598 119 Turquoise Blue 0.0000 0.0000 0.0000 69 White 0.7518 0.7950 0.7728 5497 Yellow 0.7714 0.8003 0.7856 776 accuracy 0.7072 44072 macro avg 0.4112 0.3343 0.3469 44072 weighted avg 0.6919 0.7072 0.6935 44072 ``` The model categorizes fashion product images into the following **46 base color classes**: - Beige, Black, Blue, Bronze, Brown, Burgundy, Charcoal, Coffee Brown, Copper, Cream - Fluorescent Green, Gold, Green, Grey, Grey Melange, Khaki, Lavender, Lime Green - Magenta, Maroon, Mauve, Metallic, Multi, Mushroom Brown, Mustard, Navy Blue - Nude, Off White, Olive, Orange, Peach, Pink, Purple, Red, Rose, Rust - Sea Green, Silver, Skin, Steel, Tan, Taupe, Teal, Turquoise Blue, White, Yellow --- # **Run with Transformers 🤗** ```python !pip install -q transformers torch pillow gradio ``` ```python import gradio as gr from transformers import AutoImageProcessor, SiglipForImageClassification from PIL import Image import torch # Load model and processor model_name = "prithivMLmods/Fashion-Product-baseColour" # Replace with actual model path model = SiglipForImageClassification.from_pretrained(model_name) processor = AutoImageProcessor.from_pretrained(model_name) # Label mapping id2label = { 0: "Beige", 1: "Black", 2: "Blue", 3: "Bronze", 4: "Brown", 5: "Burgundy", 6: "Charcoal", 7: "Coffee Brown", 8: "Copper", 9: "Cream", 10: "Fluorescent Green", 11: "Gold", 12: "Green", 13: "Grey", 14: "Grey Melange", 15: "Khaki", 16: "Lavender", 17: "Lime Green", 18: "Magenta", 19: "Maroon", 20: "Mauve", 21: "Metallic", 22: "Multi", 23: "Mushroom Brown", 24: "Mustard", 25: "Navy Blue", 26: "Nude", 27: "Off White", 28: 
"Olive", 29: "Orange", 30: "Peach", 31: "Pink", 32: "Purple", 33: "Red", 34: "Rose", 35: "Rust", 36: "Sea Green", 37: "Silver", 38: "Skin", 39: "Steel", 40: "Tan", 41: "Taupe", 42: "Teal", 43: "Turquoise Blue", 44: "White", 45: "Yellow" } def classify_base_color(image): """Predicts the base color of a fashion product from an image.""" image = Image.fromarray(image).convert("RGB") inputs = processor(images=image, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist() predictions = {id2label[i]: round(probs[i], 3) for i in range(len(probs))} return predictions # Gradio interface iface = gr.Interface( fn=classify_base_color, inputs=gr.Image(type="numpy"), outputs=gr.Label(label="Base Colour Prediction Scores"), title="Fashion-Product-baseColour", description="Upload a fashion product image to detect its primary color (e.g., Red, Black, Cream, Navy Blue, etc.)." ) # Launch the app if __name__ == "__main__": iface.launch() ``` --- # **Intended Use** This model is ideal for: - **E-commerce platforms** for accurate product color labeling - **Fashion search engines** and recommendation systems - **Inventory and catalog automation** - **Fashion analytics and trends tracking** - **Design tools** for color-based sorting and filters
[ "beige", "black", "blue", "bronze", "brown", "burgundy", "charcoal", "coffee brown", "copper", "cream", "fluorescent green", "gold", "green", "grey", "grey melange", "khaki", "lavender", "lime green", "magenta", "maroon", "mauve", "metallic", "multi", "mushroom brown", "mustard", "navy blue", "nude", "off white", "olive", "orange", "peach", "pink", "purple", "red", "rose", "rust", "sea green", "silver", "skin", "steel", "tan", "taupe", "teal", "turquoise blue", "white", "yellow" ]
anusha2002/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0528 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | No log | 1.0 | 9 | 0.1874 | 0.9837 | | 0.4952 | 2.0 | 18 | 0.0528 | 1.0 | | 0.1205 | 2.6857 | 24 | 0.0416 | 1.0 | ### Framework versions - Transformers 4.51.0 - Pytorch 2.6.0+cpu - Datasets 3.3.2 - Tokenizers 0.21.0
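The hyperparameters above imply the listed total train batch size of 128 (32 per device × 4 gradient-accumulation steps). A sketch of how those settings map onto `TrainingArguments` follows; model and dataset wiring is omitted, and `output_dir` is illustrative.

```python
from transformers import TrainingArguments

# Illustrative mapping of the card's hyperparameters; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="swin-tiny-finetuned-eurosat",
    learning_rate=5e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,  # 32 * 4 = total train batch size of 128
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
    seed=42,
)
```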
[ "cat", "dog" ]
aashituli/promblemo
# Model Card for Smart Farming Disease Detection Transformer This model is a Vision Transformer (ViT) designed to identify plant diseases in crops as part of a smart agricultural farming system. It has been trained on a diverse dataset of plant images, including different disease categories affecting crops such as corn, potato, rice, and wheat. The model aims to provide farmers and agronomists with real-time disease detection for better crop management. ## Model Details ### Model Description This Vision Transformer model has been fine-tuned to classify various plant diseases commonly found in agricultural settings. The model can classify diseases in crops such as corn, potato, rice, and wheat, identifying diseases like rust, blight, leaf spots, and others. The goal is to enable precision farming by helping farmers detect diseases early and take appropriate actions. - **Developed by:** Wambugu Kinyua - **Model type:** Vision Transformer (ViT) - **Languages (NLP):** N/A (Computer Vision Model) - **License:** Apache 2.0 - **Finetuned from model:** [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) - **Input:** Images of crops (RGB format) - **Output:** Disease classification labels (healthy or diseased categories) ## Diseases covered by the model | Crop | Diseases Identified | |--------|------------------------------| | Corn | Common Rust | | Corn | Gray Leaf Spot | | Corn | Healthy | | Corn | Leaf Blight | | - | Invalid | | Potato | Early Blight | | Potato | Healthy | | Potato | Late Blight | | Rice | Brown Spot | | Rice | Healthy | | Rice | Leaf Blast | | Wheat | Brown Rust | | Wheat | Healthy | | Wheat | Yellow Rust | ## Uses ### Direct Use This model can be used directly to classify images of crops to detect plant diseases. It is especially useful for precision farming, enabling users to monitor crop health and take early interventions based on the detected disease. ### Downstream Use This model can be fine-tuned on other agricultural datasets for specific crops or regions to improve its performance, or be integrated into larger precision farming systems that include other features like weather predictions and irrigation control. It can be quantized or deployed in full precision on edge devices, thanks to its small parameter count, without compromising precision or accuracy. ### Out-of-Scope Use This model is not designed for non-agricultural image classification tasks or for environments with insufficient or very noisy data. Misuse includes using the model in areas with vastly different agricultural conditions from those it was trained on. ## Bias, Risks, and Limitations - The model may exhibit bias toward the crops and diseases present in the training dataset, leading to lower performance on unrepresented diseases or crop varieties. - False negatives (failing to detect a disease) may result in untreated crop damage, while false positives could lead to unnecessary interventions. ### Recommendations Users should evaluate the model on their specific crops and farming conditions. Regular updates and retraining with local data are recommended for optimal performance.
## How to Get Started with the Model ```python from PIL import Image from transformers import ViTFeatureExtractor, ViTForImageClassification feature_extractor = ViTFeatureExtractor.from_pretrained('wambugu71/crop_leaf_diseases_vit') model = ViTForImageClassification.from_pretrained( 'wambugu1738/crop_leaf_diseases_vit', ignore_mismatched_sizes=True ) image = Image.open('<image_path>') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` ## Training Details ### Training Data The model was trained on a dataset containing images of various crops with labeled diseases, including the following categories: - **Corn**: Common Rust, Gray Leaf Spot, Leaf Blight, Healthy - **Potato**: Early Blight, Late Blight, Healthy - **Rice**: Brown Spot, Hispa, Leaf Blast, Healthy - **Wheat**: Brown Rust, Yellow Rust, Healthy The dataset also includes images captured under various lighting conditions, from both controlled and uncontrolled environments and angles, to simulate real-world farming scenarios. We made use of publicly available datasets as well as our own private data. ### Training Procedure The model was fine-tuned using a vision transformer architecture pre-trained on the ImageNet dataset. The dataset was preprocessed by resizing the images and normalizing the pixel values. #### Training Hyperparameters - **Batch size:** 32 - **Learning rate:** 2e-5 - **Epochs:** 4 - **Optimizer:** AdamW - **Precision:** fp16 ### Evaluation ![Confusion matrix](disease_classification_metrics.png) #### Testing Data, Factors & Metrics The model was evaluated using a validation set consisting of 20% of the original dataset, with the following metrics: - **Accuracy:** 98% - **Precision:** 97% - **Recall:** 97% - **F1 Score:** 96% ## Environmental Impact Carbon emissions during model training can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute). - **Hardware Type:** NVIDIA L40S - **Hours used:** 1 hour - **Cloud Provider:** Lightning AI ## Technical Specifications ### Model Architecture and Objective The model uses a Vision Transformer architecture to learn image representations and classify them into disease categories. Its self-attention mechanism enables it to capture global contextual information in the images, making it suitable for agricultural disease detection. ### Compute Infrastructure #### Hardware - NVIDIA L40S GPUs - 48 GB RAM - SSD storage for fast I/O #### Software - Python 3.9 - PyTorch 2.4.1+cu121 - pytorch_lightning - Transformers library by Hugging Face ## Citation If you use this model in your research or applications, please cite it as: **BibTeX:** ``` @misc{kinyua2024smartfarming, title={Smart Farming Disease Detection Transformer}, author={Wambugu Kinyua}, year={2024}, publisher={Hugging Face}, } ``` **APA:** Kinyua, W. (2024). Smart Farming Disease Detection Transformer. Hugging Face. ## Model Card Contact For further inquiries, contact: [email protected]
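The card claims the model can be quantized for edge deployment without compromising accuracy; one way to try that is post-training dynamic quantization of the linear layers, sketched below under the assumption that the checkpoint loads exactly as in the usage snippet above. Accuracy after quantization should be re-validated on your own data.

```python
import torch
from transformers import ViTForImageClassification

# Assumption: same checkpoint as in the usage snippet above.
model = ViTForImageClassification.from_pretrained(
    'wambugu1738/crop_leaf_diseases_vit',
    ignore_mismatched_sizes=True
)
model.eval()

# Post-training dynamic quantization: weights of nn.Linear layers become int8,
# shrinking the model for CPU/edge inference.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
```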
[ "corn___common_rust", "corn___gray_leaf_spot", "wheat___brown_rust", "wheat___healthy", "wheat___yellow_rust", "corn___healthy", "invalid", "potato___early_blight", "potato___healthy", "potato___late_blight", "rice___brown_spot", "rice___healthy", "rice___leaf_blast" ]
swritchie/my_awesome_food_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_food_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.8359 - Accuracy: 0.745 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.2015 | 1.0 | 13 | 3.2327 | 0.745 | | 2.9684 | 1.88 | 24 | 2.8359 | 0.745 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "apple_pie", "baby_back_ribs", "baklava", "beef_carpaccio", "beef_tartare", "beet_salad", "beignets", "bibimbap", "bread_pudding", "breakfast_burrito", "bruschetta", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare", "waffles" ]
mbiarreta/swin-ena24
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-ena24 This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the ena24 dataset. It achieves the following results on the evaluation set: - Loss: 0.1460 - Accuracy: 0.9695 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 1.679 | 0.1302 | 100 | 1.2075 | 0.6573 | | 0.9674 | 0.2604 | 200 | 0.7625 | 0.7809 | | 0.8194 | 0.3906 | 300 | 0.7240 | 0.7931 | | 0.5756 | 0.5208 | 400 | 0.6613 | 0.8023 | | 0.5796 | 0.6510 | 500 | 0.3928 | 0.8947 | | 0.5275 | 0.7812 | 600 | 0.4274 | 0.8863 | | 0.1931 | 0.9115 | 700 | 0.4006 | 0.8908 | | 0.254 | 1.0417 | 800 | 0.2949 | 0.9237 | | 0.1321 | 1.1719 | 900 | 0.2565 | 0.9420 | | 0.1888 | 1.3021 | 1000 | 0.2155 | 0.9466 | | 0.0558 | 1.4323 | 1100 | 0.2289 | 0.9420 | | 0.0824 | 1.5625 | 1200 | 0.1732 | 0.9634 | | 0.1455 | 1.6927 | 1300 | 0.1689 | 0.9649 | | 0.1453 | 1.8229 | 1400 | 0.1596 | 0.9672 | | 0.0403 | 1.9531 | 1500 | 0.1460 | 0.9695 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.5.0 - Tokenizers 0.21.0
[ "american black bear", "american crow", "eastern fox squirrel", "eastern gray squirrel", "grey fox", "horse", "northern raccoon", "red fox", "striped skunk", "vehicle", "virginia opossum", "white_tailed_deer", "bird", "wild turkey", "woodchuck", "bobcat", "chicken", "coyote", "dog", "domestic cat", "eastern chipmunk", "eastern cottontail" ]
mariamoracrossitcr/vit-base-beans-demo-v8
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-beans-demo-v8 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - eval_loss: 1.0999 - eval_model_preparation_time: 0.0035 - eval_accuracy: 0.3383 - eval_runtime: 1.9431 - eval_samples_per_second: 68.448 - eval_steps_per_second: 8.749 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
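This card reports only `eval_*` metrics and no training log, which matches an evaluation-only run. A sketch of reproducing such numbers with `Trainer.evaluate()` on the beans validation split follows; the repo id is assumed to resolve on the Hub, and the collation follows the usual ViT image-classification pattern.

```python
import torch
from datasets import load_dataset
from transformers import (AutoImageProcessor, AutoModelForImageClassification,
                          Trainer, TrainingArguments)

# Assumption: the checkpoint is published under the repo id above.
model_name = "mariamoracrossitcr/vit-base-beans-demo-v8"
processor = AutoImageProcessor.from_pretrained(model_name)
model = AutoModelForImageClassification.from_pretrained(model_name)

def transform(batch):
    # Turn PIL images into pixel values; keep the integer labels.
    inputs = processor([img.convert("RGB") for img in batch["image"]], return_tensors="pt")
    inputs["labels"] = batch["labels"]
    return inputs

eval_ds = load_dataset("beans", split="validation").with_transform(transform)

def collate_fn(examples):
    return {
        "pixel_values": torch.stack([ex["pixel_values"] for ex in examples]),
        "labels": torch.tensor([ex["labels"] for ex in examples]),
    }

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="eval-only",
        per_device_eval_batch_size=8,
        remove_unused_columns=False,  # keep the raw "image" column for the transform
    ),
    eval_dataset=eval_ds,
    data_collator=collate_fn,
)
print(trainer.evaluate())  # eval_loss, eval_runtime, eval_samples_per_second, ...
```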
[ "angular_leaf_spot", "bean_rust", "healthy" ]
mariamoracrossitcr/vit-base-beans-demo-v9
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-beans-demo-v9 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0146 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3015 | 1.0 | 65 | 0.0969 | 0.9699 | | 0.1156 | 2.0 | 130 | 0.0270 | 1.0 | | 0.0358 | 3.0 | 195 | 0.0271 | 0.9925 | | 0.0144 | 4.0 | 260 | 0.0146 | 1.0 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
Thomaslam2/food_classifier
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Thomaslam2/food_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: nan - Validation Loss: nan - Train Accuracy: 0.0 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': np.float32(0.9), 'beta_2': np.float32(0.999), 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | nan | nan | 0.0 | 0 | | nan | nan | 0.0 | 1 | | nan | nan | 0.0 | 2 | | nan | nan | 0.0 | 3 | | nan | nan | 0.0 | 4 | ### Framework versions - Transformers 4.51.3 - TensorFlow 2.18.0 - Datasets 3.5.0 - Tokenizers 0.21.1
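The optimizer dictionary above describes `AdamWeightDecay` with a linear `PolynomialDecay` schedule; in the TF/Keras workflow this is typically built with `transformers.create_optimizer`, roughly as sketched below. The step count is taken from the config, and warmup is assumed to be zero since none is listed.

```python
from transformers import create_optimizer

# Rebuilds the optimizer from the card's config: AdamWeightDecay with a
# polynomial decay (power=1.0, i.e. linear) from 3e-05 to 0 over 20000 steps.
optimizer, lr_schedule = create_optimizer(
    init_lr=3e-05,
    num_train_steps=20000,
    num_warmup_steps=0,  # assumption: no warmup appears in the config
    weight_decay_rate=0.01,
)
```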
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
thenewsupercell/Nose_image_parts_df_VIT
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Nose_image_parts_df_VIT This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0145 - Accuracy: 0.9969 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.016 | 1.0 | 5146 | 0.0245 | 0.9933 | | 0.0029 | 2.0 | 10292 | 0.0190 | 0.9929 | | 0.0023 | 3.0 | 15438 | 0.0156 | 0.9953 | | 0.0176 | 4.0 | 20584 | 0.0145 | 0.9969 | ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "fake", "real" ]
thenewsupercell/Eyes_image_parts_df_VIT
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Eyes_image_parts_df_VIT This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0339 - Accuracy: 0.9916 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.0481 | 1.0 | 5146 | 0.0429 | 0.9864 | | 0.1098 | 2.0 | 10292 | 0.0336 | 0.9905 | | 0.0048 | 3.0 | 15438 | 0.0307 | 0.9916 | | 0.0269 | 4.0 | 20584 | 0.0339 | 0.9916 | ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "fake", "real" ]
thenewsupercell/Mouth_image_parts_df_VIT
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Mouth_image_parts_df_VIT This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0252 - Accuracy: 0.9934 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.0273 | 1.0 | 5146 | 0.0341 | 0.9884 | | 0.0114 | 2.0 | 10292 | 0.0342 | 0.9873 | | 0.0116 | 3.0 | 15438 | 0.0277 | 0.9935 | | 0.0178 | 4.0 | 20584 | 0.0252 | 0.9934 | ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "fake", "real" ]
thenewsupercell/Forehead_image_parts_df_VIT
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Forehead_image_parts_df_VIT This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0342 - Accuracy: 0.9908 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.0551 | 1.0 | 5146 | 0.0531 | 0.9863 | | 0.0337 | 2.0 | 10292 | 0.0398 | 0.9839 | | 0.0395 | 3.0 | 15438 | 0.0366 | 0.9892 | | 0.0082 | 4.0 | 20584 | 0.0342 | 0.9908 | ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "fake", "real" ]
thenewsupercell/Jaw_image_parts_df_VIT
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Jaw_image_parts_df_VIT This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0110 - Accuracy: 0.9983 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.0197 | 1.0 | 5146 | 0.0102 | 0.9964 | | 0.0396 | 2.0 | 10292 | 0.0121 | 0.9959 | | 0.0004 | 3.0 | 15438 | 0.0139 | 0.9977 | | 0.0048 | 4.0 | 20584 | 0.0110 | 0.9983 | ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "fake", "real" ]
mariamoracrossitcr/vit-base-beans-demo-v10
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-beans-demo-v10 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0823 - Accuracy: 0.9699 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0807 | 1.0 | 65 | 0.0977 | 0.9774 | | 0.06 | 2.0 | 130 | 0.1550 | 0.9624 | | 0.0402 | 3.0 | 195 | 0.0899 | 0.9774 | | 0.0033 | 4.0 | 260 | 0.0565 | 0.9774 | | 0.0016 | 5.0 | 325 | 0.0837 | 0.9699 | | 0.0011 | 6.0 | 390 | 0.0823 | 0.9699 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
z1515/ViT_dog_food
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT_dog_food This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the dog_food dataset. It achieves the following results on the evaluation set: - Loss: 0.3992 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 36 | 0.3992 | | No log | 2.0 | 72 | 0.1874 | | No log | 3.0 | 108 | 0.1539 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "label_0", "label_1", "label_2" ]
heado/ViT_dog_food
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT_dog_food This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the dog_food dataset. It achieves the following results on the evaluation set: - Loss: 0.4453 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 36 | 0.4453 | | No log | 2.0 | 72 | 0.2022 | | No log | 3.0 | 108 | 0.1640 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "label_0", "label_1", "label_2" ]
j200chi/dog_food
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dog_food This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the dogfood dataset. It achieves the following results on the evaluation set: - Loss: 0.4415 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 36 | 0.4415 | | No log | 2.0 | 72 | 0.2033 | | No log | 3.0 | 108 | 0.1653 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "label_0", "label_1", "label_2" ]
SangjeHwang/ViT_dog_food
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT_dog_food This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the dog_food dataset. It achieves the following results on the evaluation set: - Loss: 0.4352 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 36 | 0.4352 | | No log | 2.0 | 72 | 0.1996 | | No log | 3.0 | 108 | 0.1620 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "label_0", "label_1", "label_2" ]
minhyuckkkkk/ViT_dog_food
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT_dog_food This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the dog_food dataset. It achieves the following results on the evaluation set: - Loss: 0.4407 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 36 | 0.4407 | | No log | 2.0 | 72 | 0.2098 | | No log | 3.0 | 108 | 0.1703 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "label_0", "label_1", "label_2" ]
halfmoonbear/ViT_dog_food
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT_dog_food This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the dog_food dataset. It achieves the following results on the evaluation set: - Loss: 0.0064 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 36 | 0.0064 | | No log | 2.0 | 72 | 0.0025 | | No log | 3.0 | 108 | 0.0024 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "label_0", "label_1", "label_2" ]
Sychol/ViT_dog_food
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT_dog_food This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the lewtun/dog_food dataset. It achieves the following results on the evaluation set: - Loss: 0.5047 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 36 | 0.5047 | | No log | 2.0 | 72 | 0.2475 | | No log | 3.0 | 108 | 0.2007 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "label_0", "label_1", "label_2" ]
Skrrrrrrrr/ViT_dog_food
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT_dog_food This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the dog_food dataset. It achieves the following results on the evaluation set: - Loss: 0.4322 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 36 | 0.4322 | | No log | 2.0 | 72 | 0.1999 | | No log | 3.0 | 108 | 0.1631 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "label_0", "label_1", "label_2" ]
yunseyoung94/ViT_dog_food
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT_dog_food This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the dog_food dataset. It achieves the following results on the evaluation set: - Loss: 0.1441 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 36 | 0.1441 | | No log | 2.0 | 72 | 0.0947 | | No log | 3.0 | 108 | 0.0867 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "label_0", "label_1", "label_2" ]
Snjie/ViT_dog_food
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT_dog_food This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the dog_food dataset. It achieves the following results on the evaluation set: - Loss: 0.4305 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 36 | 0.4305 | | No log | 2.0 | 72 | 0.1990 | | No log | 3.0 | 108 | 0.1620 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "label_0", "label_1", "label_2" ]
microwaveablemax/train_checkpoints2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # train_checkpoints2 This model is a fine-tuned version of [dennisjooo/Birds-Classifier-EfficientNetB2](https://huggingface.co/dennisjooo/Birds-Classifier-EfficientNetB2) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.4826 - F1: 0.8686 - Precision: 0.8782 - Recall: 0.8635 - Accuracy: 0.8686 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Precision | Recall | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:---------:|:------:|:--------:| | 0.1145 | 1.0 | 15 | 0.5836 | 0.8608 | 0.8776 | 0.8520 | 0.8613 | | 0.129 | 2.0 | 30 | 0.8019 | 0.8322 | 0.8634 | 0.8192 | 0.8358 | | 0.2085 | 3.0 | 45 | 0.7550 | 0.8083 | 0.8355 | 0.8042 | 0.8212 | | 0.1722 | 4.0 | 60 | 0.7524 | 0.8298 | 0.8422 | 0.8357 | 0.8394 | | 0.19 | 5.0 | 75 | 0.5542 | 0.8743 | 0.8910 | 0.8679 | 0.8723 | | 0.1612 | 6.0 | 90 | 0.8325 | 0.8114 | 0.8410 | 0.8063 | 0.8066 | | 0.2009 | 7.0 | 105 | 0.4425 | 0.8900 | 0.8904 | 0.8911 | 0.8942 | | 0.209 | 8.0 | 120 | 0.6705 | 0.8126 | 0.8482 | 0.8074 | 0.8358 | | 0.2188 | 9.0 | 135 | 0.5906 | 0.8387 | 0.8551 | 0.8350 | 0.8467 | | 0.1962 | 10.0 | 150 | 0.4826 | 0.8686 | 0.8782 | 0.8635 | 0.8686 | ### Framework versions - Transformers 4.49.0 - Pytorch 2.6.0+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
[ "american_wigeon", "bufflehead", "canvasback", "hooded_merganser", "mallard", "northern_shoveler", "redhead", "wood_duck" ]
VK26/ViT-finetuned-emotion
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "sad", "anger", "neutral", "fear", "content", "happy", "disgust", "surprise" ]
prithivMLmods/Fashion-Product-Usage
![14.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/LNIOy8V_w0loMrVMtTyIK.png) # **Fashion-Product-Usage** > **Fashion-Product-Usage** is a vision-language model fine-tuned from **google/siglip2-base-patch16-224** using the **SiglipForImageClassification** architecture. It classifies fashion product images based on their intended usage context. ```py Classification Report: precision recall f1-score support Casual 0.8529 0.9716 0.9084 34392 Ethnic 0.8365 0.7528 0.7925 3208 Formal 0.7246 0.3006 0.4250 2345 Home 0.0000 0.0000 0.0000 1 Party 0.0000 0.0000 0.0000 29 Smart Casual 0.0000 0.0000 0.0000 67 Sports 0.7157 0.1848 0.2938 4004 Travel 0.0000 0.0000 0.0000 26 accuracy 0.8458 44072 macro avg 0.3912 0.2762 0.3024 44072 weighted avg 0.8300 0.8458 0.8159 44072 ``` The model predicts one of the following usage categories: - **0:** Casual - **1:** Ethnic - **2:** Formal - **3:** Home - **4:** Party - **5:** Smart Casual - **6:** Sports - **7:** Travel --- # **Run with Transformers 🤗** ```python !pip install -q transformers torch pillow gradio ``` ```python import gradio as gr from transformers import AutoImageProcessor, SiglipForImageClassification from PIL import Image import torch # Load model and processor model_name = "prithivMLmods/Fashion-Product-Usage" # Replace with your actual model path model = SiglipForImageClassification.from_pretrained(model_name) processor = AutoImageProcessor.from_pretrained(model_name) # Label mapping id2label = { 0: "Casual", 1: "Ethnic", 2: "Formal", 3: "Home", 4: "Party", 5: "Smart Casual", 6: "Sports", 7: "Travel" } def classify_usage(image): """Predicts the usage type of a fashion product.""" image = Image.fromarray(image).convert("RGB") inputs = processor(images=image, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist() predictions = {id2label[i]: round(probs[i], 3) for i in range(len(probs))} return predictions # Gradio interface iface = gr.Interface( fn=classify_usage, inputs=gr.Image(type="numpy"), outputs=gr.Label(label="Usage Prediction Scores"), title="Fashion-Product-Usage", description="Upload a fashion product image to predict its intended usage (Casual, Formal, Party, etc.)." ) # Launch the app if __name__ == "__main__": iface.launch() ``` --- # **Intended Use** This model can be used for: - **Product tagging in e-commerce catalogs** - **Context-aware product recommendations** - **Fashion search optimization** - **Data annotation for training recommendation engines**
[ "casual", "ethnic", "formal", "home", "party", "smart casual", "sports", "travel" ]
steffchi/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.2023 - Accuracy: 0.9459 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3878 | 1.0 | 370 | 0.2921 | 0.9215 | | 0.2188 | 2.0 | 740 | 0.2260 | 0.9269 | | 0.1832 | 3.0 | 1110 | 0.2136 | 0.9283 | | 0.14 | 4.0 | 1480 | 0.2050 | 0.9323 | | 0.1322 | 5.0 | 1850 | 0.2030 | 0.9323 | ### Framework versions - Transformers 4.50.0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.1 ### Zero-shot baseline on the Oxford-Pets dataset Evaluated with the zero-shot classification model `checkpoint = "openai/clip-vit-large-patch14"` via `detector = pipeline(model=checkpoint, task="zero-shot-image-classification")`: - Accuracy: 0.8800 - Precision: 0.8768 - Recall: 0.8800
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
rolloraq/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.2101 - Accuracy: 0.9405 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3601 | 1.0 | 370 | 0.3013 | 0.9269 | | 0.2025 | 2.0 | 740 | 0.2369 | 0.9391 | | 0.1637 | 3.0 | 1110 | 0.2178 | 0.9472 | | 0.1484 | 4.0 | 1480 | 0.2115 | 0.9418 | | 0.1172 | 5.0 | 1850 | 0.2087 | 0.9432 | ### Zero-shot baseline Model: openai/clip-vit-large-patch14 - Accuracy: 0.8800 - Precision: 0.8768 - Recall: 0.8800 ### Framework versions - Transformers 4.50.0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.1
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
prithivMLmods/Vit-Mature-Content-Detection
![uijytyyt.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/IDbH_a4KQpydEVQ4VtKiA.png) # **Vit-Mature-Content-Detection** > **Vit-Mature-Content-Detection** is an image classification model fine-tuned from **vit-base-patch16-224-in21k** for a single-label classification task. It classifies images into various mature or neutral content categories using the **ViTForImageClassification** architecture. > [!Note] > Use this model to support positive, safe, and respectful digital spaces. Misuse is strongly discouraged and may violate platform or regional policies. As a pure classifier, this model does not generate any unsafe content; it only assigns labels to images, so it does not fall under the category of models not suitable for all audiences. > [!Important] > Neutral = Safe / Normal ```py Classification Report: precision recall f1-score support Anime Picture 0.9311 0.9455 0.9382 5600 Hentai 0.9520 0.9244 0.9380 4180 Neutral 0.9681 0.9529 0.9604 5503 Pornography 0.9896 0.9832 0.9864 5600 Enticing or Sensual 0.9602 0.9870 0.9734 5600 accuracy 0.9605 26483 macro avg 0.9602 0.9586 0.9593 26483 weighted avg 0.9606 0.9605 0.9604 26483 ``` ![download.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/FvFTPm_JKwFIffb_LF4ft.png) ```py from datasets import load_dataset # Load the dataset dataset = load_dataset("YOUR-DATASET-HERE") # Extract unique labels labels = dataset["train"].features["label"].names # Create id2label mapping id2label = {str(i): label for i, label in enumerate(labels)} # Print the mapping print(id2label) ``` --- The model categorizes images into five classes: - **Class 0:** Anime Picture - **Class 1:** Hentai - **Class 2:** Neutral - **Class 3:** Pornography - **Class 4:** Enticing or Sensual # **Run with Transformers 🤗** ```python !pip install -q transformers torch pillow gradio ``` ```python import gradio as gr from transformers import ViTImageProcessor, ViTForImageClassification from PIL import Image import torch # Load model and processor model_name = "prithivMLmods/Vit-Mature-Content-Detection" # Replace with your actual model path model = ViTForImageClassification.from_pretrained(model_name) processor = ViTImageProcessor.from_pretrained(model_name) # Label mapping labels = { "0": "Anime Picture", "1": "Hentai", "2": "Neutral", "3": "Pornography", "4": "Enticing or Sensual" } def mature_content_detection(image): """Predicts the type of content in the image.""" image = Image.fromarray(image).convert("RGB") inputs = processor(images=image, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist() predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))} return predictions # Create Gradio interface iface = gr.Interface( fn=mature_content_detection, inputs=gr.Image(type="numpy"), outputs=gr.Label(label="Prediction Scores"), title="Vit-Mature-Content-Detection", description="Upload an image to classify whether it contains anime, hentai, neutral, pornographic, or enticing/sensual content."
) # Launch the app if __name__ == "__main__": iface.launch() ``` --- # **Recommended Use Cases** - Content moderation systems - Parental control filters - Dataset preprocessing and filtering - Digital well-being and user safety tools - Search engine safe filter enhancements # **Discouraged / Prohibited Use** - Harassment or shaming - Unethical surveillance - Illegal or deceptive applications - Sole-dependency without human oversight - Misuse to mislead moderation decisions
[ "anime picture", "hentai", "neutral", "pornography", "enticing or sensual" ]
ismdal/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.2038 - Accuracy: 0.9445 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.373 | 1.0 | 370 | 0.2732 | 0.9337 | | 0.2127 | 2.0 | 740 | 0.2148 | 0.9405 | | 0.1801 | 3.0 | 1110 | 0.1918 | 0.9445 | | 0.1448 | 4.0 | 1480 | 0.1857 | 0.9472 | | 0.1308 | 5.0 | 1850 | 0.1814 | 0.9445 | ### Framework versions - Transformers 4.50.0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.1 ### Zero-Shot Evaluation Model used: openai/clip-vit-large-patch14. Dataset: Oxford-IIIT-Pets (pcuenq/oxford-pets). - Accuracy: 0.8800 - Precision: 0.8768 - Recall: 0.8800 The zero-shot evaluation was done with Hugging Face Transformers and the CLIP model on the Oxford-Pets dataset.
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
gitnub/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.2031 - Accuracy: 0.9459 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3727 | 1.0 | 370 | 0.2756 | 0.9337 | | 0.2145 | 2.0 | 740 | 0.2168 | 0.9378 | | 0.1835 | 3.0 | 1110 | 0.1918 | 0.9459 | | 0.147 | 4.0 | 1480 | 0.1857 | 0.9472 | | 0.1315 | 5.0 | 1850 | 0.1818 | 0.9472 | ### Framework versions - Transformers 4.50.0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.1 ## Zero-Shot Classification with CLIP We evaluated the Oxford-IIIT Pets dataset using the zero-shot model [`openai/clip-vit-large-patch14`](https://huggingface.co/openai/clip-vit-large-patch14) for comparison. The goal was to assess how well a powerful pre-trained model performs without fine-tuning, compared to our fine-tuned ViT model. **Results on the full dataset (7,390 samples):** - **Accuracy:** 0.8800 - **Precision:** 0.8768 - **Recall:** 0.8800 These results show that CLIP performs surprisingly well even without task-specific training, but still falls slightly behind our fine-tuned ViT model (accuracy 0.9459).
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
TheoK98/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.1819 - Accuracy: 0.9337 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3743 | 1.0 | 370 | 0.2753 | 0.9364 | | 0.2188 | 2.0 | 740 | 0.2023 | 0.9459 | | 0.1678 | 3.0 | 1110 | 0.1838 | 0.9459 | | 0.1565 | 4.0 | 1480 | 0.1791 | 0.9486 | | 0.1164 | 5.0 | 1850 | 0.1767 | 0.9472 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
JernejRozman/zdravJEM_CV_BERT
# 🥦 zdravJEM - A model for classifying healthy eating This is a *Vision Transformer* (ViT) model trained to classify food photographs along four dimensions: - **zdravo** (healthy) - **raznoliko** (varied) - **domače** (homemade) - **je hrana** (is food) The model is part of [zdravJEM](https://github.com/JernejRozman/zdravJEM), an open-source tool for raising awareness of eating habits through visual analysis of food. --- ## 📊 Dataset The model was trained on a manually annotated dataset published on Zenodo: 📦 [https://zenodo.org/records/15203529](https://zenodo.org/records/15203529) The dataset contains several hundred food images rated for nutritional value and cultural characteristics (e.g. "domače" / homemade). --- ## 🧠 Training The model is based on the pre-trained `google/vit-base-patch16-224`, which was *fine-tuned* on the dataset above. Training followed the recipe in ["Fine-tuning a Vision Transformer Model With a Custom Biomedical Dataset"](https://huggingface.co/learn/cookbook/fine_tuning_vit_custom_dataset#fine-tuning-the-model) and was carried out in the Jupyter notebook [`TrainModel.ipynb`](https://github.com/JernejRozman/zdravJEM/blob/main/notebooks/TrainModel.ipynb), which covers: - data preparation (resizing, normalization), - a stratified train/test split, - training with `torch` + `transformers`, - saving the model as `safetensors`. `BCEWithLogitsLoss` was used for multi-label classification, over 50 epochs. --- ## 🚀 How to use ```python from transformers import ViTImageProcessor, ViTForImageClassification from PIL import Image import torch # Load model and processor model = ViTForImageClassification.from_pretrained("JernejRozman/zdravjem-vit") processor = ViTImageProcessor.from_pretrained("JernejRozman/zdravjem-vit") # Load image image = Image.open("test_hrana.jpg") # Prepare inputs inputs = processor(images=image, return_tensors="pt") outputs = model(**inputs) # Get sigmoid scores scores = torch.sigmoid(outputs.logits).squeeze().tolist() print("Scores (zdravo, raznoliko, domače, je hrana):", scores) ```
[ "zdravo", "raznoliko", "domace", "jehrana" ]
vimal-humantics/dinov2-base-xray-224-finetuned-tb
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dinov2-base-xray-224-finetuned-tb This model is a fine-tuned version of [StanfordAIMI/dinov2-base-xray-224](https://huggingface.co/StanfordAIMI/dinov2-base-xray-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0000 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.3027 | 1.0 | 151 | 0.0634 | 0.9834 | | 0.0524 | 2.0 | 302 | 0.0000 | 1.0 | | 0.0013 | 2.9834 | 450 | 0.0000 | 1.0 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "abnormal", "normal" ]
prithivMLmods/Gameplay-Classcode-10
![zdzdf.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/KJnfq1zAn56dabaX4nuei.png) # **Gameplay-Classcode-10** > **Gameplay-Classcode-10** is a vision-language model fine-tuned from **google/siglip2-base-patch16-224** using the **SiglipForImageClassification** architecture. It classifies gameplay screenshots or thumbnails into one of ten popular video game titles. ```py Classification Report: precision recall f1-score support Among Us 0.9990 0.9920 0.9955 1000 Apex Legends 0.9737 0.9990 0.9862 1000 Fortnite 0.9960 0.9910 0.9935 1000 Forza Horizon 0.9990 0.9820 0.9904 1000 Free Fire 0.9930 0.9860 0.9895 1000 Genshin Impact 0.9831 0.9890 0.9860 1000 God of War 0.9930 0.9930 0.9930 1000 Minecraft 0.9990 0.9990 0.9990 1000 Roblox 0.9832 0.9960 0.9896 1000 Terraria 1.0000 0.9910 0.9955 1000 accuracy 0.9918 10000 macro avg 0.9919 0.9918 0.9918 10000 weighted avg 0.9919 0.9918 0.9918 10000 ``` ![download.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/mI7DFpu3kJ6V3EOiJII39.png) The model predicts one of the following **game categories**: - **0:** Among Us - **1:** Apex Legends - **2:** Fortnite - **3:** Forza Horizon - **4:** Free Fire - **5:** Genshin Impact - **6:** God of War - **7:** Minecraft - **8:** Roblox - **9:** Terraria --- # **Run with Transformers 🤗** ```python !pip install -q transformers torch pillow gradio ``` ```python import gradio as gr from transformers import AutoImageProcessor, SiglipForImageClassification from PIL import Image import torch # Load model and processor model_name = "prithivMLmods/Gameplay-Classcode-10" # Replace with your actual model path model = SiglipForImageClassification.from_pretrained(model_name) processor = AutoImageProcessor.from_pretrained(model_name) # Label mapping id2label = { 0: "Among Us", 1: "Apex Legends", 2: "Fortnite", 3: "Forza Horizon", 4: "Free Fire", 5: "Genshin Impact", 6: "God of War", 7: "Minecraft", 8: "Roblox", 9: "Terraria" } def classify_game(image): """Predicts the game title based on the gameplay image.""" image = Image.fromarray(image).convert("RGB") inputs = processor(images=image, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist() predictions = {id2label[i]: round(probs[i], 3) for i in range(len(probs))} predictions = dict(sorted(predictions.items(), key=lambda item: item[1], reverse=True)) return predictions # Gradio interface iface = gr.Interface( fn=classify_game, inputs=gr.Image(type="numpy"), outputs=gr.Label(label="Game Prediction Scores"), title="Gameplay-Classcode-10", description="Upload a gameplay screenshot or thumbnail to identify the game title (Among Us, Fortnite, Minecraft, etc.)." ) # Launch the app if __name__ == "__main__": iface.launch() ``` --- # **Intended Use** This model can be used for: - **Automatic tagging of gameplay content for streamers and creators** - **Organizing gaming datasets** - **Enhancing searchability in gameplay video repositories** - **Training AI systems for game-related content moderation or recommendations**
[ "among us", "apex legends", "fortnite", "forza horizon", "free fire", "genshin impact", "god of war", "minecraft", "roblox", "terraria" ]
dima806/orange_fruit_disease_detection
Classifies an orange fruit image as healthy or as showing a common disease (melanose or citrus canker) with about 98% accuracy. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6449300e3adf50d864095b90/Ijkn6PR0_iJC0NRKD5NAP.png) See https://www.kaggle.com/code/dima806/orange-fruit-disease-detection-vit for details. ``` Classification report: precision recall f1-score support citrus canker 0.9806 0.9700 0.9753 1200 healthy 0.9795 0.9933 0.9863 1200 melanose 0.9783 0.9750 0.9766 1200 accuracy 0.9794 3600 macro avg 0.9794 0.9794 0.9794 3600 weighted avg 0.9794 0.9794 0.9794 3600 ```
[ "citrus canker", "healthy", "melanose" ]
kimjungin1770/my_awesome_food_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_food_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6247 - Accuracy: 0.904 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7363 | 1.0 | 63 | 2.5326 | 0.838 | | 1.8545 | 2.0 | 126 | 1.7747 | 0.892 | | 1.6536 | 2.96 | 186 | 1.6247 | 0.904 | ### Framework versions - Transformers 4.51.1 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
Kaeyze/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# swin-tiny-patch4-window7-224-finetuned-eurosat

This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0863
- Accuracy: 0.9706

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.4985 | 1.0 | 352 | 0.1360 | 0.953 |
| 0.3537 | 2.0 | 704 | 0.1017 | 0.9656 |
| 0.3377 | 2.9922 | 1053 | 0.0863 | 0.9706 |

### Framework versions

- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
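A short inference sketch under stated assumptions: `scene.jpg` is a placeholder path, and note that the label list below is CIFAR-10-style despite the "eurosat" name, so predictions follow those classes.

```python
from transformers import pipeline

clf = pipeline(
    "image-classification",
    model="Kaeyze/swin-tiny-patch4-window7-224-finetuned-eurosat",
)
# top_k limits the output to the three best-scoring classes.
for result in clf("scene.jpg", top_k=3):
    print(f"{result['label']}: {result['score']:.3f}")
```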
[ "airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck" ]
SodaXII/mobilevit-small_rice-leaf-disease-augmented-v4_v5_fft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# mobilevit-small_rice-leaf-disease-augmented-v4_v5_fft

This model is a fine-tuned version of [apple/mobilevit-small](https://huggingface.co/apple/mobilevit-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3882
- Accuracy: 0.9362

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 256
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0678 | 0.5 | 64 | 2.0341 | 0.2617 |
| 1.9896 | 1.0 | 128 | 1.8927 | 0.5067 |
| 1.768 | 1.5 | 192 | 1.5508 | 0.5336 |
| 1.3736 | 2.0 | 256 | 1.0992 | 0.6779 |
| 0.9732 | 2.5 | 320 | 0.7521 | 0.7685 |
| 0.7316 | 3.0 | 384 | 0.6023 | 0.8121 |
| 0.5769 | 3.5 | 448 | 0.5281 | 0.8121 |
| 0.5013 | 4.0 | 512 | 0.4605 | 0.8423 |
| 0.4329 | 4.5 | 576 | 0.4268 | 0.8691 |
| 0.3821 | 5.0 | 640 | 0.3944 | 0.8859 |
| 0.3602 | 5.5 | 704 | 0.3895 | 0.8859 |
| 0.3496 | 6.0 | 768 | 0.3827 | 0.8893 |
| 0.3507 | 6.5 | 832 | 0.3723 | 0.8859 |
| 0.3225 | 7.0 | 896 | 0.3741 | 0.8893 |
| 0.2924 | 7.5 | 960 | 0.3271 | 0.9027 |
| 0.2298 | 8.0 | 1024 | 0.3185 | 0.8993 |
| 0.1888 | 8.5 | 1088 | 0.3093 | 0.9094 |
| 0.1771 | 9.0 | 1152 | 0.2994 | 0.9094 |
| 0.1461 | 9.5 | 1216 | 0.2907 | 0.9128 |
| 0.1496 | 10.0 | 1280 | 0.3046 | 0.9027 |
| 0.1284 | 10.5 | 1344 | 0.2999 | 0.9027 |
| 0.1323 | 11.0 | 1408 | 0.2904 | 0.9060 |
| 0.1291 | 11.5 | 1472 | 0.2939 | 0.9128 |
| 0.1227 | 12.0 | 1536 | 0.2869 | 0.9027 |
| 0.1033 | 12.5 | 1600 | 0.2886 | 0.9128 |
| 0.0856 | 13.0 | 1664 | 0.3137 | 0.9195 |
| 0.077 | 13.5 | 1728 | 0.3066 | 0.9161 |
| 0.0672 | 14.0 | 1792 | 0.3010 | 0.9094 |
| 0.0601 | 14.5 | 1856 | 0.3260 | 0.9128 |
| 0.0469 | 15.0 | 1920 | 0.2773 | 0.9161 |
| 0.0501 | 15.5 | 1984 | 0.2908 | 0.9161 |
| 0.0518 | 16.0 | 2048 | 0.3022 | 0.9128 |
| 0.0515 | 16.5 | 2112 | 0.3325 | 0.9228 |
| 0.0537 | 17.0 | 2176 | 0.3087 | 0.9195 |
| 0.0462 | 17.5 | 2240 | 0.2908 | 0.9295 |
| 0.0406 | 18.0 | 2304 | 0.3139 | 0.9262 |
| 0.0283 | 18.5 | 2368 | 0.3038 | 0.9329 |
| 0.0196 | 19.0 | 2432 | 0.2968 | 0.9329 |
| 0.0207 | 19.5 | 2496 | 0.3090 | 0.9295 |
| 0.0248 | 20.0 | 2560 | 0.3097 | 0.9262 |
| 0.0223 | 20.5 | 2624 | 0.2872 | 0.9262 |
| 0.0205 | 21.0 | 2688 | 0.3517 | 0.9262 |
| 0.0192 | 21.5 | 2752 | 0.3580 | 0.9295 |
| 0.0238 | 22.0 | 2816 | 0.3922 | 0.9262 |
| 0.0173 | 22.5 | 2880 | 0.3709 | 0.9228 |
| 0.019 | 23.0 | 2944 | 0.3679 | 0.9295 |
| 0.0132 | 23.5 | 3008 | 0.3949 | 0.9295 |
| 0.0112 | 24.0 | 3072 | 0.3609 | 0.9329 |
| 0.0122 | 24.5 | 3136 | 0.3732 | 0.9262 |
| 0.0107 | 25.0 | 3200 | 0.3667 | 0.9295 |
| 0.0116 | 25.5 | 3264 | 0.3775 | 0.9262 |
| 0.0111 | 26.0 | 3328 | 0.3289 | 0.9262 |
| 0.0151 | 26.5 | 3392 | 0.3770 | 0.9228 |
| 0.0139 | 27.0 | 3456 | 0.3755 | 0.9228 |
| 0.0117 | 27.5 | 3520 | 0.4131 | 0.9195 |
| 0.0086 | 28.0 | 3584 | 0.3856 | 0.9262 |
| 0.0101 | 28.5 | 3648 | 0.3674 | 0.9362 |
| 0.0097 | 29.0 | 3712 | 0.3762 | 0.9396 |
| 0.0086 | 29.5 | 3776 | 0.4119 | 0.9295 |
| 0.0091 | 30.0 | 3840 | 0.3882 | 0.9362 |

### Framework versions

- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.1
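A minimal sketch of single-image inference, assuming the checkpoint is public on the Hub; `leaf.jpg` stands in for a rice-leaf photo.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "SodaXII/mobilevit-small_rice-leaf-disease-augmented-v4_v5_fft"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id).eval()

inputs = processor(images=Image.open("leaf.jpg").convert("RGB"), return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1).squeeze()

# Report the three most likely of the eight disease classes.
for p, i in zip(*probs.topk(3)):
    print(f"{model.config.id2label[i.item()]}: {p.item():.3f}")
```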
[ "bacterial leaf blight", "brown spot", "healthy rice leaf", "leaf blast", "leaf scald", "narrow brown leaf spot", "rice hispa", "sheath blight" ]
SodaXII/efficientnet-b2_rice-leaf-disease-augmented-v4_v5_fft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# efficientnet-b2_rice-leaf-disease-augmented-v4_v5_fft

This model is a fine-tuned version of [google/efficientnet-b2](https://huggingface.co/google/efficientnet-b2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2236
- Accuracy: 0.9362

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 256
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0775 | 0.5 | 64 | 2.0200 | 0.2282 |
| 1.9215 | 1.0 | 128 | 1.7530 | 0.5336 |
| 1.5941 | 1.5 | 192 | 1.3576 | 0.6309 |
| 1.1312 | 2.0 | 256 | 0.8490 | 0.7517 |
| 0.6799 | 2.5 | 320 | 0.5743 | 0.8221 |
| 0.4743 | 3.0 | 384 | 0.4281 | 0.8624 |
| 0.2937 | 3.5 | 448 | 0.3946 | 0.8893 |
| 0.2342 | 4.0 | 512 | 0.3713 | 0.8758 |
| 0.1563 | 4.5 | 576 | 0.3339 | 0.8893 |
| 0.1296 | 5.0 | 640 | 0.2886 | 0.9128 |
| 0.1026 | 5.5 | 704 | 0.3032 | 0.8926 |
| 0.1009 | 6.0 | 768 | 0.2951 | 0.8893 |
| 0.0956 | 6.5 | 832 | 0.2795 | 0.9128 |
| 0.0817 | 7.0 | 896 | 0.3031 | 0.9094 |
| 0.0591 | 7.5 | 960 | 0.2778 | 0.9195 |
| 0.0444 | 8.0 | 1024 | 0.2435 | 0.9060 |
| 0.0268 | 8.5 | 1088 | 0.2506 | 0.9228 |
| 0.0198 | 9.0 | 1152 | 0.2692 | 0.8993 |
| 0.0139 | 9.5 | 1216 | 0.2384 | 0.9195 |
| 0.0164 | 10.0 | 1280 | 0.2712 | 0.9195 |
| 0.0118 | 10.5 | 1344 | 0.2868 | 0.9128 |
| 0.0139 | 11.0 | 1408 | 0.2262 | 0.9295 |
| 0.0122 | 11.5 | 1472 | 0.2492 | 0.9128 |
| 0.0132 | 12.0 | 1536 | 0.2751 | 0.9128 |
| 0.0084 | 12.5 | 1600 | 0.3184 | 0.8993 |
| 0.0082 | 13.0 | 1664 | 0.2596 | 0.9228 |
| 0.0093 | 13.5 | 1728 | 0.2636 | 0.9228 |
| 0.0059 | 14.0 | 1792 | 0.2501 | 0.9262 |
| 0.0060 | 14.5 | 1856 | 0.3249 | 0.9027 |
| 0.0036 | 15.0 | 1920 | 0.2584 | 0.9228 |
| 0.0051 | 15.5 | 1984 | 0.2501 | 0.9161 |
| 0.0049 | 16.0 | 2048 | 0.2698 | 0.9228 |
| 0.0042 | 16.5 | 2112 | 0.2403 | 0.9262 |
| 0.0054 | 17.0 | 2176 | 0.2536 | 0.9262 |
| 0.0056 | 17.5 | 2240 | 0.2506 | 0.9228 |
| 0.0031 | 18.0 | 2304 | 0.3199 | 0.9027 |
| 0.0038 | 18.5 | 2368 | 0.3303 | 0.9228 |
| 0.0029 | 19.0 | 2432 | 0.2250 | 0.9295 |
| 0.0024 | 19.5 | 2496 | 0.2577 | 0.9161 |
| 0.0023 | 20.0 | 2560 | 0.2365 | 0.9396 |
| 0.0026 | 20.5 | 2624 | 0.2501 | 0.9295 |
| 0.0034 | 21.0 | 2688 | 0.2283 | 0.9262 |
| 0.0034 | 21.5 | 2752 | 0.2608 | 0.9195 |
| 0.0045 | 22.0 | 2816 | 0.3040 | 0.9094 |
| 0.0023 | 22.5 | 2880 | 0.2782 | 0.9228 |
| 0.0026 | 23.0 | 2944 | 0.2520 | 0.9161 |
| 0.0015 | 23.5 | 3008 | 0.2440 | 0.9228 |
| 0.0015 | 24.0 | 3072 | 0.2341 | 0.9362 |
| 0.0019 | 24.5 | 3136 | 0.2779 | 0.9161 |
| 0.0018 | 25.0 | 3200 | 0.2662 | 0.9362 |
| 0.0016 | 25.5 | 3264 | 0.2244 | 0.9362 |
| 0.0014 | 26.0 | 3328 | 0.3105 | 0.9161 |
| 0.0016 | 26.5 | 3392 | 0.2696 | 0.9195 |
| 0.0022 | 27.0 | 3456 | 0.2696 | 0.9262 |
| 0.0012 | 27.5 | 3520 | 0.2382 | 0.9362 |
| 0.0019 | 28.0 | 3584 | 0.2430 | 0.9329 |
| 0.0010 | 28.5 | 3648 | 0.2321 | 0.9396 |
| 0.0010 | 29.0 | 3712 | 0.2566 | 0.9161 |
| 0.0017 | 29.5 | 3776 | 0.2965 | 0.9295 |
| 0.0013 | 30.0 | 3840 | 0.2236 | 0.9362 |

### Framework versions

- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.1
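Since this card shares its task and eval setup with the MobileViT variant above, here is a different angle: a hedged sketch of batched inference over a folder of images, where `val_images/` is an assumed layout.

```python
from pathlib import Path

import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "SodaXII/efficientnet-b2_rice-leaf-disease-augmented-v4_v5_fft"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id).eval()

paths = sorted(Path("val_images").glob("*.jpg"))  # hypothetical folder
images = [Image.open(p).convert("RGB") for p in paths]

# The processor resizes all images and stacks them into one batch tensor.
inputs = processor(images=images, return_tensors="pt")
with torch.no_grad():
    preds = model(**inputs).logits.argmax(-1)

for path, pred in zip(paths, preds.tolist()):
    print(path.name, "->", model.config.id2label[pred])
```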
[ "bacterial leaf blight", "brown spot", "healthy rice leaf", "leaf blast", "leaf scald", "narrow brown leaf spot", "rice hispa", "sheath blight" ]
fdrmic/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-oxford-iiit-pets

This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2031
- Accuracy: 0.9459

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3727 | 1.0 | 370 | 0.2756 | 0.9337 |
| 0.2145 | 2.0 | 740 | 0.2168 | 0.9378 |
| 0.1835 | 3.0 | 1110 | 0.1918 | 0.9459 |
| 0.147 | 4.0 | 1480 | 0.1857 | 0.9472 |
| 0.1315 | 5.0 | 1850 | 0.1818 | 0.9472 |

### Framework versions

- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
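For illustration only, the hyperparameter list above maps onto `TrainingArguments` roughly as follows; this is a reconstruction, not the author's script, and `output_dir` plus everything omitted (dataset, metrics, the `Trainer` itself) are assumptions.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="vit-base-oxford-iiit-pets",  # placeholder
    learning_rate=3e-4,                      # 0.0003 from the card
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",                     # AdamW with default betas/epsilon
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```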
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
Piyushpandey10104/vit-face-project-piyush
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-face-project-piyush

This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9800
- Accuracy: 0.48

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 200
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
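A sketch of how the reported accuracy could be re-checked with the `evaluate` library; the `faces/` image-folder directory, its split, and the label ordering are all assumptions.

```python
import evaluate
import torch
from datasets import load_dataset
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "Piyushpandey10104/vit-face-project-piyush"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id).eval()

# "faces/" is a hypothetical directory with one subfolder per class.
ds = load_dataset("imagefolder", data_dir="faces", split="train")

preds, refs = [], []
for ex in ds:
    inputs = processor(images=ex["image"].convert("RGB"), return_tensors="pt")
    with torch.no_grad():
        preds.append(model(**inputs).logits.argmax(-1).item())
    refs.append(ex["label"])

# The card reports ~0.48 on its own evaluation split.
print(evaluate.load("accuracy").compute(predictions=preds, references=refs))
```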
[ "akshay kumar", "alexandra daddario", "alia bhatt", "amitabh bachchan", "andy samberg", "anushka sharma", "billie eilish", "brad pitt", "camila cabello", "charlize theron", "claire holt", "courtney cox", "dwayne johnson", "elizabeth olsen", "ellen degeneres", "henry cavill", "hrithik roshan", "hugh jackman", "jessica alba", "kashyap", "lisa kudrow", "margot robbie", "marmik", "natalie portman", "priyanka chopra", "robert downey jr", "roger federer", "tom cruise", "vijay deverakonda", "virat kohli", "zac efron" ]
faridkarimli/SWIN_Gaudi_v1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# SWIN_Gaudi_v1

This model is a fine-tuned version of [microsoft/swinv2-large-patch4-window12-192-22k](https://huggingface.co/microsoft/swinv2-large-patch4-window12-192-22k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8090
- Accuracy: 0.2293
- Memory Allocated (gb): 1.84
- Max Memory Allocated (gb): 58.8
- Total Memory Available (gb): 94.62

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 1024
- total_eval_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 30.0

### Training results

| Training Loss | Epoch | Step | Accuracy | Validation Loss | Memory Allocated (gb) | Max Memory Allocated (gb) | Total Memory Available (gb) |
|:-------------:|:-----:|:-----:|:--------:|:---------------:|:---------------------:|:-------------------------:|:---------------------------:|
| 5.3071 | 1.0 | 657 | 0.0801 | 6.2406 | 60.21 | 3.2 | 94.62 |
| 3.1366 | 2.0 | 1314 | 0.1047 | 5.8481 | 60.21 | 3.2 | 94.62 |
| 2.6048 | 3.0 | 1971 | 0.1238 | 5.5522 | 60.21 | 3.2 | 94.62 |
| 1.9918 | 4.0 | 2628 | 0.1301 | 5.5551 | 60.21 | 3.2 | 94.62 |
| 1.8353 | 5.0 | 3285 | 0.1415 | 5.4142 | 60.21 | 3.2 | 94.62 |
| 1.7262 | 6.0 | 3942 | 0.1495 | 5.4061 | 60.21 | 3.2 | 94.62 |
| 1.5135 | 7.0 | 4599 | 0.1468 | 5.4261 | 60.21 | 3.2 | 94.62 |
| 1.4225 | 8.0 | 5256 | 0.1573 | 5.3333 | 60.21 | 3.2 | 94.62 |
| 1.354 | 9.0 | 5913 | 0.1638 | 5.2205 | 60.21 | 3.2 | 94.62 |
| 1.2511 | 10.0 | 6570 | 0.1708 | 5.2129 | 60.21 | 3.2 | 94.62 |
| 1.1742 | 11.0 | 7227 | 0.1724 | 5.2002 | 60.21 | 3.2 | 94.62 |
| 1.1342 | 12.0 | 7884 | 0.1782 | 5.1635 | 60.21 | 3.2 | 94.62 |
| 1.0711 | 13.0 | 8541 | 0.1779 | 5.1436 | 60.21 | 3.2 | 94.62 |
| 0.9971 | 14.0 | 9198 | 0.1817 | 5.1076 | 60.21 | 3.2 | 94.62 |
| 0.9774 | 15.0 | 9855 | 0.1935 | 4.9076 | 60.21 | 3.2 | 94.62 |
| 0.9174 | 16.0 | 10512 | 0.1890 | 5.0318 | 60.21 | 3.2 | 94.62 |
| 0.8675 | 17.0 | 11169 | 0.1951 | 5.0392 | 60.21 | 3.2 | 94.62 |
| 0.8499 | 18.0 | 11826 | 0.1978 | 5.0243 | 60.21 | 3.2 | 94.62 |
| 0.8262 | 19.0 | 12483 | 0.1972 | 5.0843 | 60.21 | 3.2 | 94.62 |
| 0.7623 | 20.0 | 13140 | 0.2048 | 5.0004 | 60.21 | 3.2 | 94.62 |
| 0.7481 | 21.0 | 13797 | 0.2132 | 4.8428 | 60.24 | 3.2 | 94.62 |
| 0.7284 | 22.0 | 14454 | 0.2149 | 4.8461 | 60.24 | 3.2 | 94.62 |
| 0.6834 | 23.0 | 15111 | 0.2159 | 4.8741 | 60.24 | 3.2 | 94.62 |
| 0.6591 | 24.0 | 15768 | 0.2187 | 4.8993 | 60.24 | 3.2 | 94.62 |
| 0.6447 | 25.0 | 16425 | 0.2196 | 4.8415 | 60.24 | 3.2 | 94.62 |
| 0.6107 | 26.0 | 17082 | 0.2216 | 4.8600 | 60.24 | 3.2 | 94.62 |
| 0.5958 | 27.0 | 17739 | 0.2245 | 4.8391 | 60.24 | 3.2 | 94.62 |
| 0.5836 | 28.0 | 18396 | 0.2265 | 4.8561 | 60.24 | 3.2 | 94.62 |
| 0.5547 | 29.0 | 19053 | 0.2295 | 4.7933 | 60.24 | 3.2 | 94.62 |
| 0.547 | 30.0 | 19710 | 0.2293 | 4.8090 | 60.24 | 3.2 | 94.62 |

### Framework versions

- Transformers 4.45.2
- Pytorch 2.6.0+hpu_1.20.0-543.git4952fce
- Datasets 3.5.0
- Tokenizers 0.20.3
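Although the model was trained on Gaudi hardware, inference does not require it; a hedged sketch follows (`sample.jpg` is a placeholder, and the checkpoint is assumed to ship its processor config). Note that this checkpoint's class labels are numeric ids (see the list below), so `id2label` returns strings like `"3150"`.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "faridkarimli/SWIN_Gaudi_v1"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id).eval()

inputs = processor(images=Image.open("sample.jpg").convert("RGB"), return_tensors="pt")
with torch.no_grad():
    top = model(**inputs).logits.softmax(-1).topk(5)  # five best classes

for score, idx in zip(top.values.squeeze().tolist(), top.indices.squeeze().tolist()):
    print(model.config.id2label[idx], f"{score:.4f}")
```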
[ "3150", "7279", "12267", "15267", "8181", "12263", "6469", "15322", "755", "184", "3501", "8749", "5411", "11090", "1176", "15154", "7198", "13439", "1432", "2219", "7338", "9986", "5439", "12921", "4688", "8686", "9356", "12530", "6406", "10983", "3705", "13494", "2008", "3537", "2485", "9773", "15450", "13679", "13236", "13793", "11717", "8123", "11098", "6493", "10737", "3270", "10288", "6881", "1657", "1460", "13482", "6820", "11248", "2080", "3247", "15029", "7639", "10799", "711", "13596", "9392", "13049", "12499", "10341", "11730", "3049", "2089", "4350", "1203", "13403", "911", "14681", "1612", "1275", "10697", "9871", "9072", "7353", "10624", "113", "2554", "920", "3201", "11067", "7312", "9178", "8348", "7905", "9549", "1679", "13655", "2647", "11344", "11748", "11809", "2757", "3925", "12044", "11473", "1750", "3267", "3037", "304", "991", "217", "4724", "3454", "15335", "5825", "8084", "4578", "8303", "12034", "5806", "10313", "12244", "9297", "2465", "9266", "12956", "320", "14986", "4926", "7960", "14747", "15400", "3055", "9956", "11559", "9918", "14619", "5290", "14312", "14238", "8357", "2521", "7929", "3134", "5947", "2351", "14258", "3728", "7445", "8726", "799", "10015", "1055", "11872", "9054", "10441", "13763", "13553", "14942", "12084", "13734", "10029", "5057", "2460", "7950", "5950", "272", "13784", "1403", "15494", "7972", "8917", "14000", "11507", "3006", "1351", "6180", "2951", "10109", "1480", "442", "5591", "12525", "977", "13014", "5630", "9079", "3333", "11846", "8349", "11491", "6695", "11335", "14542", "14277", "9015", "6201", "12181", "5422", "6518", "12641", "12427", "3428", "9622", "11366", "9983", "1879", "15156", "2515", "14678", "5162", "3650", "3948", "10411", "1997", "5344", "12848", "14653", "14759", "12646", "2961", "2920", "8380", "12454", "10385", "8757", "6034", "5017", "1196", "8074", "1433", "3121", "7035", "7638", "2793", "10598", "5066", "11089", "9046", "6334", "729", "2664", "2444", "6738", "2567", "14058", "132", "10924", "6464", "13740", "5999", "11154", "2221", "233", "7736", "5983", "2919", "13839", "11427", "7728", "127", "9926", "10951", "12221", "12858", "1784", "9377", "8428", "12376", "13416", "13148", "5128", "4333", "9736", "5423", "5526", "14570", "10560", "9847", "7050", "11770", "11204", "10956", "11287", "12460", "8370", "3597", "1867", "6485", "10485", "3383", "3298", "6840", "9105", "9791", "13467", "4094", "2751", "7902", "10891", "14473", "13379", "15375", "10114", "4828", "12828", "12413", "142", "7434", "782", "12959", "8848", "94", "12350", "11180", "5102", "13705", "14586", "4984", "6420", "6854", "963", "9452", "2643", "5774", "13949", "11305", "4904", "3305", "10841", "11382", "14140", "7677", "11964", "811", "9588", "4961", "15300", "1815", "14150", "2752", "5887", "6867", "11426", "13222", "13909", "3197", "5256", "13862", "3720", "580", "10421", "15002", "996", "12540", "9261", "6241", "13307", "9515", "5904", "9006", "12099", "11073", "12914", "8678", "4455", "12369", "4793", "4894", "15346", "7830", "6711", "10704", "6117", "5694", "2363", "11547", "6305", "5069", "12718", "11966", "13719", "1261", "9335", "7220", "400", "7322", "11639", "99", "12269", "9002", "3012", "9754", "1762", "5462", "1553", "7689", "10532", "5165", "13962", "4519", "10628", "6977", "8932", "11821", "7760", "8508", "168", "10043", "11718", "9776", "12747", "5388", "252", "12381", "5674", "5729", "11358", "2782", "6239", "5174", "415", "10245", "14122", "11122", "12065", "12424", "10575", "10738", "7037", "6690", "10103", "13005", 
"7705", "6633", "15100", "5115", "10211", "13022", "5310", "11725", "525", "4851", "2512", "14512", "8831", "176", "3593", "9300", "5222", "4877", "8462", "11258", "2346", "12944", "13249", "8902", "939", "8947", "4026", "15480", "10261", "2558", "679", "1822", "11166", "529", "7041", "12882", "15050", "9087", "11959", "443", "4276", "4489", "171", "3097", "9706", "6073", "11145", "1473", "9289", "8336", "13308", "13733", "5146", "7974", "15057", "11464", "9651", "13070", "4035", "13779", "6198", "12096", "11209", "2955", "14608", "6479", "1183", "8037", "5813", "10674", "626", "10811", "7882", "12563", "11841", "5082", "3005", "9735", "5485", "13598", "3992", "6052", "10746", "7280", "12930", "4537", "4179", "12183", "12522", "10234", "15030", "4539", "7650", "4974", "11208", "3017", "3686", "10006", "738", "11998", "2786", "4381", "12496", "9406", "13756", "10948", "14699", "1529", "1585", "10992", "1322", "5311", "10452", "2502", "2594", "15199", "5993", "9569", "13020", "11343", "1735", "8632", "288", "3714", "5822", "3425", "9282", "8452", "3335", "261", "4127", "2406", "530", "14904", "13394", "8522", "7021", "7942", "7318", "9383", "2116", "10338", "8528", "1848", "14524", "1753", "4814", "15055", "12491", "13846", "12115", "6579", "826", "8323", "1240", "15187", "5808", "14587", "1563", "15482", "7957", "10829", "9089", "14208", "10477", "2240", "5623", "1713", "9078", "15064", "861", "2656", "3575", "8629", "7399", "564", "7450", "2105", "14328", "11660", "14350", "7208", "1371", "91", "8627", "8669", "15485", "12560", "3209", "11311", "6829", "6330", "3251", "12933", "5874", "14669", "9553", "4605", "1487", "12436", "2344", "469", "605", "8377", "13171", "623", "10080", "4595", "13654", "1721", "11002", "11526", "2463", "10096", "13951", "7803", "4439", "3131", "13545", "9045", "9915", "12363", "12298", "14719", "11402", "4031", "6592", "14059", "14069", "1986", "8104", "7301", "2115", "13936", "8587", "11549", "12520", "4597", "2702", "13077", "13233", "11047", "12021", "5937", "15299", "240", "2962", "6273", "13552", "11612", "7764", "12218", "11756", "6077", "3060", "11411", "12762", "554", "421", "12865", "1124", "12430", "7106", "4341", "6162", "1730", "10208", "13551", "14726", "14977", "11948", "3208", "11716", "8207", "4951", "7324", "7186", "7071", "728", "1353", "2451", "3910", "15257", "10856", "4854", "13567", "1708", "8611", "4580", "13239", "11056", "13818", "332", "2922", "13752", "27", "381", "2073", "7305", "10290", "5882", "1992", "13850", "301", "14709", "7498", "9941", "15306", "3750", "4354", "7507", "4625", "14869", "7337", "785", "5015", "13690", "87", "2508", "3183", "10483", "8569", "13028", "6993", "10842", "10786", "11534", "3667", "8532", "11671", "14453", "12132", "7397", "8884", "448", "9658", "7930", "7418", "5759", "1287", "1803", "12784", "12", "1984", "1017", "6900", "13069", "10358", "6582", "1126", "12999", "1380", "13530", "4157", "12709", "5701", "11854", "13689", "8670", "5989", "5727", "12561", "8341", "2595", "13448", "7105", "14812", "150", "12050", "6013", "8383", "11400", "11409", "8222", "12905", "13333", "5678", "5684", "15458", "428", "8471", "13538", "7923", "7828", "2792", "1377", "6963", "13519", "8188", "11691", "10932", "7031", "2194", "14244", "5055", "8126", "9886", "10128", "8354", "3002", "2681", "11617", "6866", "7421", "13381", "3855", "11535", "4566", "3863", "12654", "11757", "2345", "7247", "6495", "12038", "7017", "10067", "2533", "7162", "7509", "15428", "3417", "12095", "4582", "10988", "10604", "11825", "7459", 
"3103", "10383", "1733", "5342", "1145", "3565", "2551", "2249", "5584", "6431", "13922", "2079", "11683", "8612", "1575", "13787", "3475", "1701", "11388", "3550", "8197", "2112", "7785", "15436", "38", "1286", "2452", "8142", "7147", "7733", "6487", "14357", "9891", "9878", "11012", "8595", "10892", "5781", "4404", "1166", "3780", "11135", "12911", "14118", "14014", "3759", "5252", "8294", "14199", "496", "8653", "2848", "14979", "14036", "2611", "9456", "6538", "1039", "2291", "8696", "9445", "7268", "4955", "13035", "9630", "8147", "8955", "3429", "13360", "15444", "11557", "5305", "10391", "4859", "59", "9957", "1041", "8999", "5001", "5894", "4193", "12439", "12100", "2422", "5695", "6946", "9150", "12521", "7194", "2239", "8264", "11431", "14141", "10050", "7715", "7372", "4836", "13046", "14384", "14811", "10343", "1431", "13043", "6328", "11486", "7007", "12645", "3881", "13582", "13804", "4731", "5394", "4242", "6172", "13229", "5657", "8904", "14367", "1650", "10942", "12001", "3504", "13182", "734", "7749", "5537", "5409", "3657", "67", "7472", "4588", "4235", "12860", "230", "7816", "9875", "10695", "11950", "696", "4728", "13943", "6228", "1619", "4430", "1333", "14519", "14701", "5663", "7027", "9673", "6534", "4111", "5163", "9968", "7210", "12377", "11217", "4748", "6477", "3093", "3499", "14764", "14730", "6316", "235", "566", "8688", "1880", "3285", "2513", "14566", "239", "5912", "11121", "1933", "209", "11392", "6552", "4199", "2975", "7020", "10909", "3505", "10978", "10677", "5108", "6069", "10166", "15376", "13728", "10553", "13137", "538", "8069", "5598", "15070", "11817", "12227", "5303", "6108", "14703", "2047", "8241", "561", "7093", "6716", "12952", "1976", "8288", "1142", "831", "8603", "791", "6713", "3632", "14163", "12079", "3472", "13399", "7948", "1361", "3248", "190", "1157", "12967", "13745", "5920", "2811", "6271", "4459", "7124", "8987", "9713", "15373", "5099", "14066", "5641", "5858", "9614", "9523", "3168", "1170", "3184", "8785", "6161", "1696", "6620", "5453", "1381", "7068", "12579", "11386", "3967", "3367", "12727", "8841", "9628", "3987", "9746", "6759", "4116", "11659", "11799", "5768", "7651", "5446", "3542", "1329", "14683", "5786", "9779", "3554", "6951", "11478", "7067", "12593", "2534", "14065", "86", "1883", "14649", "5672", "12720", "15254", "13910", "1526", "3850", "409", "7119", "6630", "8639", "2317", "52", "897", "7658", "4010", "2727", "2748", "10013", "4576", "3951", "925", "1593", "1872", "10158", "15075", "14816", "11292", "353", "4289", "8772", "4222", "14944", "14792", "10730", "1570", "10076", "7896", "7062", "8255", "10675", "6036", "1468", "10662", "3943", "4648", "6038", "3498", "591", "8685", "2769", "12919", "6761", "10788", "562", "2048", "14685", "9244", "11533", "5261", "1500", "13368", "579", "8592", "3630", "698", "8512", "11810", "788", "9313", "11940", "13911", "9759", "420", "15403", "2581", "2228", "14472", "5333", "10556", "14528", "2334", "923", "2869", "12514", "12367", "5143", "3623", "5512", "13487", "4250", "3027", "6674", "12097", "8557", "14313", "5608", "10756", "7431", "4266", "2429", "11632", "65", "8486", "8537", "9608", "6675", "13659", "8020", "478", "14419", "8100", "682", "10716", "12500", "7706", "12058", "1198", "4208", "2490", "1394", "537", "13185", "1858", "14967", "3380", "857", "3158", "1205", "5416", "10616", "6619", "9305", "646", "3171", "15155", "11800", "9564", "2268", "12760", "4191", "4065", "3015", "4799", "10278", "10092", "14888", "12139", "4998", "5289", "6742", "14628", 
"7819", "7065", "4299", "15025", "5300", "5211", "13188", "13749", "11778", "3543", "1521", "2576", "8301", "8353", "3068", "9710", "10308", "1188", "6096", "10980", "8781", "12976", "4747", "14063", "14045", "8619", "14329", "11845", "8490", "1340", "1225", "3110", "2171", "11921", "3360", "3676", "7096", "8225", "11981", "1646", "3320", "8748", "11176", "3958", "9743", "9330", "2136", "5975", "6202", "10779", "14481", "2473", "1863", "124", "4572", "2881", "5523", "9841", "950", "2093", "5711", "11866", "902", "2418", "2348", "15192", "6383", "14025", "8062", "11781", "14980", "10962", "3381", "9765", "8116", "358", "1354", "574", "433", "8165", "14599", "3046", "704", "13600", "13516", "5149", "5103", "5906", "7617", "13026", "12600", "9165", "9388", "3673", "12576", "9529", "56", "8372", "5870", "460", "10922", "10113", "5503", "10996", "13486", "4607", "220", "15406", "10586", "10413", "11861", "13344", "9106", "12745", "8626", "6890", "10136", "9798", "12327", "2617", "2497", "13539", "6253", "6501", "6333", "5397", "1192", "11975", "3202", "14842", "14945", "5451", "11860", "6564", "3419", "15117", "5552", "14731", "5977", "10407", "13120", "14918", "11790", "9543", "763", "3090", "9960", "7799", "10660", "13564", "5667", "15353", "49", "2671", "9674", "15172", "12336", "11492", "11924", "4115", "7185", "6685", "3474", "1718", "599", "10814", "1922", "10436", "7538", "1884", "10807", "11646", "14235", "15481", "3316", "2250", "8656", "14005", "13980", "12231", "5376", "1789", "12307", "11827", "13715", "2907", "12036", "10311", "8521", "1774", "12888", "9889", "2003", "12426", "10213", "4233", "2790", "2585", "2488", "14672", "12778", "3574", "12193", "14982", "13454", "11113", "7170", "3261", "14173", "2147", "1804", "7139", "1388", "751", "11847", "2243", "8784", "15221", "14226", "12293", "8624", "2992", "9887", "10876", "7937", "10622", "1874", "2133", "8075", "4055", "11609", "3113", "12504", "6673", "5396", "8004", "12769", "6164", "10541", "7861", "8727", "1541", "5037", "15204", "7719", "10453", "6907", "15109", "5769", "3178", "5093", "5984", "5387", "7488", "8693", "8018", "9980", "7375", "10691", "5779", "11769", "7287", "7572", "9116", "5048", "1005", "216", "5104", "8615", "8", "7712", "13817", "634", "10578", "9894", "368", "2010", "1078", "5318", "4543", "11504", "1909", "3693", "5918", "10820", "13549", "3653", "1276", "3806", "9938", "11115", "13218", "12412", "13270", "577", "681", "680", "10759", "10438", "3106", "10233", "1391", "7801", "14955", "2359", "11714", "6294", "11543", "12373", "800", "15012", "1095", "1532", "9531", "2049", "1940", "11453", "735", "10408", "7688", "15293", "3185", "13183", "10053", "215", "8687", "3559", "14435", "3579", "10178", "12623", "9896", "3230", "12741", "4490", "15175", "15486", "10063", "6879", "15032", "11200", "13680", "1562", "4526", "9415", "10454", "1796", "5316", "15418", "13501", "10634", "2214", "7594", "6446", "9828", "4914", "5117", "12571", "2603", "11265", "2321", "4393", "5323", "9925", "12242", "4357", "8505", "2860", "1037", "7518", "9719", "6071", "6331", "8388", "2272", "2388", "4114", "9835", "11162", "4419", "1048", "6267", "9807", "2672", "13431", "14092", "7938", "14650", "4751", "6224", "12824", "2377", "5520", "7308", "11840", "2233", "12199", "2634", "9824", "9995", "313", "6933", "2650", "7165", "15467", "7516", "10292", "768", "3039", "3794", "5391", "507", "1251", "6588", "9667", "7043", "13857", "15251", "8503", "10085", "7052", "1438", "2856", "628", "9461", "14974", "12098", "8024", "8249", 
"10078", "11193", "5022", "13498", "14475", "9916", "4834", "13541", "8747", "2717", "13918", "2590", "2379", "1053", "8208", "14291", "14919", "3955", "14310", "12032", "11838", "10955", "13000", "4473", "15297", "8033", "13651", "10155", "11238", "8944", "12925", "1452", "8314", "7373", "6551", "7631", "5100", "11600", "10615", "14758", "5296", "2364", "4542", "6404", "11110", "2810", "15466", "7767", "7898", "12321", "3216", "2538", "7209", "2343", "9168", "486", "2845", "11338", "6635", "6750", "2435", "9510", "4819", "2327", "13156", "9636", "5722", "48", "9833", "1964", "12005", "12117", "14354", "11128", "1647", "4550", "6123", "13037", "12744", "6929", "8259", "943", "6671", "5484", "7852", "8059", "10550", "7752", "5011", "7369", "7849", "4411", "2271", "6155", "6430", "11240", "15124", "8283", "14287", "7966", "10205", "13475", "8175", "4743", "11455", "12863", "12495", "3636", "2885", "13507", "8302", "8799", "15118", "583", "7115", "14640", "10940", "6549", "13718", "2496", "3974", "6704", "12270", "15498", "10777", "5710", "5871", "13446", "8394", "5649", "6544", "13528", "11650", "5216", "6680", "15453", "1435", "9932", "2938", "15079", "7606", "7278", "10664", "13653", "11842", "4144", "870", "13683", "4994", "11945", "7860", "7864", "7804", "5903", "4468", "10933", "12105", "13359", "4056", "3816", "13133", "13738", "7621", "5945", "2566", "12386", "9679", "302", "11018", "10822", "222", "8445", "15000", "2764", "1030", "5107", "12488", "2801", "2357", "12834", "7351", "389", "10350", "6566", "8911", "714", "5136", "8309", "15479", "9194", "7940", "5365", "13799", "13898", "12947", "6199", "9250", "12493", "6499", "12066", "2186", "6133", "8409", "11082", "11561", "1038", "706", "7454", "780", "1507", "12302", "4584", "9368", "5050", "4483", "5465", "12706", "9412", "4764", "1119", "8000", "14789", "15240", "3357", "10221", "15001", "14529", "14110", "2957", "11264", "8129", "13664", "347", "4121", "14715", "5454", "5915", "10328", "10524", "10378", "13411", "425", "7967", "6821", "7371", "10872", "11107", "1248", "5490", "13213", "6318", "13389", "9561", "13926", "7655", "6214", "3095", "6296", "7754", "5974", "13681", "1045", "3898", "14193", "5688", "6264", "13426", "383", "3571", "13287", "2831", "5190", "11406", "922", "8795", "5130", "6659", "12642", "501", "10124", "6169", "12480", "14060", "2378", "14185", "8975", "3265", "11349", "4429", "2063", "2880", "13622", "8280", "9067", "3451", "1232", "10900", "10192", "7597", "6844", "1490", "6370", "2754", "12483", "8455", "13725", "5708", "3711", "4013", "5739", "7853", "13144", "5274", "6751", "767", "121", "14784", "10760", "63", "2424", "13138", "5169", "6585", "6067", "12182", "8003", "793", "4456", "3726", "10305", "1693", "6147", "10743", "3032", "8897", "6346", "14782", "8378", "14470", "15394", "11990", "26", "4424", "262", "563", "7773", "6233", "5671", "13645", "4390", "7660", "5145", "906", "2472", "4603", "15238", "13202", "14802", "802", "12445", "13294", "24", "5940", "2227", "11396", "9757", "959", "12063", "1567", "312", "1837", "14488", "9427", "13785", "2870", "8281", "6615", "11876", "8966", "13842", "1223", "3979", "2531", "9058", "4005", "6244", "9730", "14034", "14214", "8943", "2212", "4916", "5354", "14894", "9222", "2352", "9237", "8212", "6232", "3791", "1271", "7363", "309", "2960", "12776", "12082", "12184", "4432", "4684", "12075", "13025", "13612", "15218", "3558", "10859", "2198", "8039", "5788", "14883", "12211", "1975", "13601", "4206", "1345", "6085", "10733", "11497", "5725", "6046", 
"9761", "1749", "4598", "3703", "3334", "286", "9624", "7221", "5193", "6597", "14552", "2013", "8305", "13165", "5952", "10417", "5326", "4907", "10275", "1635", "2738", "6601", "382", "11828", "1023", "6842", "9198", "5291", "12208", "9684", "11754", "4379", "6304", "1074", "14983", "44", "14294", "9317", "14527", "7831", "10014", "612", "10243", "5088", "2287", "9917", "731", "2181", "7897", "11596", "1252", "13888", "15365", "321", "3115", "11425", "8491", "1383", "2847", "2631", "10064", "7276", "6242", "7559", "1116", "8547", "1648", "10088", "1807", "13647", "7059", "14705", "7679", "3539", "13998", "14728", "14578", "8365", "8493", "10629", "10281", "10312", "4848", "1294", "14526", "13238", "589", "2246", "6222", "6941", "9349", "3287", "12537", "6856", "4442", "10722", "14831", "7477", "9647", "8542", "6427", "4802", "13768", "8125", "10415", "1297", "4156", "14851", "508", "11882", "11694", "10727", "539", "9169", "4621", "7359", "13466", "10617", "1404", "12415", "10339", "8130", "5390", "11323", "6884", "4130", "6072", "9043", "12144", "4278", "1482", "401", "7730", "3434", "4394", "12940", "6508", "11663", "11723", "1132", "8597", "10257", "3271", "1632", "945", "5403", "8561", "14756", "1099", "7401", "1746", "8081", "10701", "3776", "12972", "10699", "2704", "14455", "13714", "13837", "4129", "311", "7133", "7916", "9789", "8734", "14239", "8379", "9129", "13543", "9583", "12868", "269", "11689", "12719", "6559", "4997", "14917", "3929", "8946", "13815", "1704", "13056", "8407", "1699", "12608", "1651", "5933", "71", "15454", "578", "720", "7347", "13324", "14755", "2546", "8203", "3236", "14887", "1466", "8573", "6276", "12407", "1409", "3978", "4888", "11328", "4723", "14857", "1869", "9653", "3885", "7256", "185", "5258", "14666", "4586", "14711", "1724", "2983", "14298", "10960", "12333", "5922", "291", "14161", "12017", "14352", "8658", "6186", "12177", "1263", "13006", "4041", "13172", "11034", "8872", "9147", "5083", "4267", "2470", "254", "6754", "3151", "10143", "7593", "1026", "12985", "14625", "7793", "9698", "11156", "4119", "3846", "3479", "14569", "1485", "7883", "2989", "8810", "9665", "2313", "109", "839", "14271", "13084", "1798", "9545", "11518", "8782", "6082", "532", "13184", "1548", "15201", "3556", "7416", "8006", "6917", "3327", "2651", "11638", "6757", "7058", "11298", "5755", "10458", "4635", "10169", "2005", "13106", "4180", "10721", "10860", "521", "4205", "4837", "8031", "1596", "4678", "5081", "14064", "2453", "4003", "12161", "10386", "13565", "5355", "3232", "6893", "14109", "9353", "11023", "14188", "14881", "6859", "10009", "6697", "9030", "1625", "6131", "1063", "11529", "2981", "9642", "3361", "350", "3874", "8675", "1633", "10557", "8832", "9068", "8218", "12261", "12693", "3952", "1740", "2691", "1875", "15217", "13455", "15041", "2988", "12164", "10518", "13092", "14197", "9318", "6", "4359", "1642", "6987", "1087", "9869", "4882", "4175", "3904", "11463", "8476", "3010", "15231", "6035", "9467", "8928", "6994", "15472", "7030", "6168", "13867", "2208", "13261", "1164", "4060", "10506", "14502", "3892", "1947", "2028", "11401", "7619", "1917", "2715", "3192", "5818", "5720", "2495", "2557", "13615", "12892", "4332", "11206", "9463", "6121", "14201", "3867", "9853", "12917", "6058", "10889", "6026", "9033", "440", "739", "4898", "4202", "11077", "13109", "3860", "13437", "14866", "9974", "15125", "2131", "13514", "9586", "6919", "2925", "3461", "6696", "7961", "11257", "12174", "6221", "10935", "12354", "10574", "12748", "10937", "3897", 
"192", "1741", "5321", "9538", "7985", "5986", "849", "6519", "858", "12508", "6764", "13042", "2119", "582", "9742", "14413", "7956", "3165", "2382", "2503", "13972", "7928", "3351", "3812", "12980", "7004", "6976", "6899", "13628", "12277", "14440", "11602", "14853", "14933", "13973", "7813", "5854", "12887", "5262", "11620", "6760", "6980", "2732", "3675", "7622", "2260", "6109", "11569", "1268", "14155", "8816", "12823", "3696", "10440", "6027", "4307", "14738", "5250", "2304", "13436", "12595", "14691", "1805", "5049", "654", "8109", "892", "3446", "7611", "2336", "10789", "12669", "12353", "2000", "4847", "2280", "584", "6016", "10640", "3336", "7344", "11466", "904", "10684", "11509", "5351", "6513", "13055", "7008", "10511", "11814", "10545", "12818", "635", "7637", "3717", "2946", "10203", "9496", "6311", "7721", "6146", "4775", "2117", "7330", "14990", "479", "14469", "12684", "9769", "11368", "14424", "14809", "6583", "2825", "10381", "14929", "6165", "10653", "2441", "2038", "14240", "6873", "4988", "8120", "14433", "1865", "14644", "15159", "12605", "13395", "2608", "12119", "712", "13591", "9542", "2998", "390", "5062", "10964", "408", "13387", "4365", "6986", "4963", "2624", "12705", "12871", "14160", "12983", "12666", "6702", "11045", "12971", "1788", "9533", "88", "9322", "15022", "10741", "4797", "9380", "8994", "2529", "2867", "9689", "5032", "15476", "8531", "7273", "11905", "11750", "1949", "3912", "7400", "4164", "3534", "1823", "14144", "7979", "10011", "13105", "4438", "1262", "761", "13710", "5179", "13531", "3970", "8328", "10994", "13370", "10170", "7843", "3365", "5911", "4948", "10974", "8415", "7991", "9382", "6374", "998", "14051", "14106", "13561", "11233", "3493", "31", "14800", "2659", "2927", "2309", "11871", "2813", "7153", "3207", "14269", "15301", "13102", "5762", "8720", "3020", "417", "2719", "14132", "3679", "7361", "1763", "5662", "14116", "11236", "832", "4857", "14176", "6465", "4146", "5255", "10539", "6995", "7237", "11086", "13721", "12226", "997", "11397", "10373", "11514", "4630", "6730", "14891", "12018", "12303", "11360", "15137", "3930", "6610", "5514", "12507", "8376", "5868", "13990", "15266", "4826", "9819", "3546", "8982", "2213", "6599", "3707", "15024", "2024", "1561", "6861", "5412", "14186", "8066", "14390", "9160", "12910", "60", "9882", "11146", "2783", "2875", "12147", "3159", "2110", "73", "4725", "5031", "11900", "5595", "5533", "6661", "14327", "15449", "14505", "13782", "4714", "12864", "1977", "229", "8589", "14490", "12257", "12670", "6498", "912", "10140", "4128", "8634", "14145", "1864", "13255", "2238", "990", "8187", "7099", "13557", "806", "1667", "13444", "4104", "13814", "8566", "7111", "4744", "11149", "4183", "4389", "4691", "8990", "4923", "11273", "13974", "2325", "12451", "905", "13826", "10083", "6676", "7772", "13220", "13089", "9284", "5293", "3231", "9274", "13524", "14618", "7193", "14906", "8560", "14202", "15129", "11498", "3155", "1942", "13513", "3594", "9241", "8034", "1397", "4342", "2559", "12884", "8215", "14211", "13286", "4538", "5313", "8667", "2011", "5309", "12689", "5276", "790", "4565", "7227", "4380", "11037", "8905", "12893", "7643", "9816", "7299", "10086", "13457", "5080", "11955", "7640", "9855", "1905", "2312", "7311", "7781", "7654", "8368", "2204", "699", "4686", "5195", "11185", "14323", "1213", "7676", "5716", "9992", "10569", "5798", "3306", "5472", "6889", "15220", "3082", "7743", "2324", "2121", "5187", "6399", "10223", "5834", "1464", "13510", "11063", "2550", "6648", "5938", 
"4929", "11505", "2489", "6470", "12114", "4516", "108", "8984", "6870", "1932", "7980", "10244", "8308", "12389", "1871", "5877", "3339", "12958", "12253", "7243", "149", "5670", "957", "4497", "8402", "14021", "5424", "8319", "12548", "13794", "10007", "1479", "14538", "12275", "7766", "3911", "15197", "6076", "14793", "156", "14723", "1497", "4400", "6996", "1001", "5352", "6798", "11467", "8200", "2004", "11976", "588", "4700", "14643", "12153", "11589", "7357", "6998", "6001", "7616", "6541", "246", "13863", "7696", "7023", "3678", "6115", "15432", "14302", "5389", "15457", "2766", "5251", "13746", "10878", "2202", "14517", "7214", "9440", "5070", "8454", "10915", "9138", "1109", "4318", "5173", "2776", "10108", "1127", "10706", "10564", "1764", "11326", "13822", "5009", "118", "8099", "9838", "13350", "1877", "4794", "2745", "6789", "13181", "7319", "5540", "3359", "465", "13544", "10531", "12134", "13900", "9350", "11142", "7714", "547", "13231", "7811", "1207", "8581", "7340", "431", "10831", "4730", "283", "179", "10945", "3072", "5201", "3512", "12816", "13078", "2187", "11578", "4036", "10309", "11304", "2222", "3927", "15442", "3388", "7012", "5929", "6653", "14557", "13950", "4737", "514", "4417", "2740", "11896", "5828", "2421", "13008", "7525", "15371", "1631", "13892", "4500", "7710", "6451", "5461", "15188", "753", "12431", "3902", "11019", "4053", "11510", "2434", "11341", "14181", "11031", "1802", "13191", "14224", "14658", "8802", "5502", "13419", "14004", "2711", "12387", "8960", "11615", "10883", "9690", "1066", "8332", "11749", "10588", "10685", "6054", "8046", "11731", "11550", "4063", "14396", "6384", "4975", "7229", "792", "5205", "13490", "4838", "1130", "13631", "8101", "5426", "14174", "8467", "14183", "3181", "4626", "889", "12658", "5161", "14200", "3709", "14158", "14042", "11150", "13580", "14580", "8732", "11260", "14735", "1283", "11272", "11908", "2061", "11404", "4617", "13550", "13809", "8359", "6158", "7575", "9478", "5432", "8827", "11061", "4881", "4176", "11262", "13769", "15042", "4544", "1000", "12886", "2937", "3786", "5976", "1706", "9815", "4032", "13969", "8705", "1628", "8007", "14584", "9554", "8638", "2043", "8488", "11773", "15011", "14958", "5812", "3721", "13991", "4941", "9065", "1441", "363", "6189", "11995", "4759", "7190", "174", "12908", "11424", "90", "3770", "535", "4100", "1510", "4280", "6381", "11014", "5545", "3959", "4238", "3599", "647", "10984", "855", "11025", "12059", "5564", "12158", "9984", "11052", "2283", "2808", "326", "14985", "8321", "1467", "4201", "4309", "1195", "13095", "14595", "4511", "1169", "4153", "3482", "7874", "14624", "4858", "5803", "362", "14804", "5733", "13119", "285", "4476", "11281", "276", "13450", "9340", "11168", "5028", "5350", "7018", "13896", "12040", "1850", "6457", "14593", "14243", "14206", "255", "9559", "8236", "2689", "7263", "14815", "3509", "14125", "7261", "4443", "7807", "4301", "14954", "10747", "13577", "6094", "3830", "4879", "2986", "6701", "14266", "13619", "13989", "5086", "13047", "10035", "2505", "13913", "4212", "13365", "10152", "851", "10913", "5302", "14189", "627", "7076", "12814", "1317", "11999", "12397", "14164", "2232", "7895", "4646", "12594", "9338", "11261", "8993", "3480", "9146", "9128", "1103", "942", "14016", "10467", "14522", "1310", "14763", "12655", "2556", "5003", "9231", "14315", "13694", "4169", "13916", "15087", "3526", "3098", "10732", "8430", "4703", "13241", "10873", "10370", "8954", "7155", "3235", "11295", "13706", "6928", "7780", "13886", 
"6312", "5315", "15214", "12662", "9454", "13982", "752", "10561", "3582", "10576", "14198", "201", "6562", "8945", "13860", "2395", "567", "3127", "5677", "11325", "12402", "3661", "542", "6219", "12479", "1590", "10402", "8983", "4068", "10353", "10708", "7697", "9589", "8922", "12607", "4512", "5338", "5435", "3790", "10696", "14361", "7253", "4674", "3872", "10694", "4742", "8233", "5574", "820", "5508", "15085", "4922", "868", "11068", "2834", "688", "15239", "15331", "3921", "14697", "13620", "13278", "9122", "4675", "14740", "14075", "9711", "6897", "3971", "4644", "5570", "9825", "14718", "8513", "3989", "8714", "13383", "13017", "7245", "9826", "1059", "5547", "4672", "12020", "4095", "8552", "12835", "2623", "765", "3835", "13284", "1453", "2935", "2493", "9788", "6075", "10602", "3070", "12687", "7080", "4911", "10348", "15430", "14912", "1292", "8145", "1120", "7042", "3048", "1332", "12986", "9577", "5978", "5772", "6920", "14478", "10122", "11693", "5600", "11631", "894", "8649", "11891", "3163", "1720", "7810", "2207", "814", "11879", "3442", "13638", "6087", "5260", "10563", "1184", "9611", "1398", "7536", "8754", "12019", "15274", "953", "8729", "5441", "958", "45", "709", "636", "12243", "6030", "11008", "8124", "686", "9472", "9110", "8770", "11608", "12047", "4204", "15308", "9032", "3066", "15206", "14388", "3555", "11727", "11457", "11626", "7486", "13919", "284", "3200", "5681", "1348", "7567", "8610", "993", "10971", "2151", "4482", "6731", "1449", "6127", "5925", "14712", "10409", "4810", "2354", "4391", "11321", "3862", "12216", "7405", "7724", "15357", "13732", "6197", "8588", "1712", "3708", "12408", "4279", "7342", "4749", "15462", "5715", "1331", "236", "10230", "5257", "14516", "2683", "14129", "3996", "8298", "11496", "6261", "11066", "11997", "10512", "1044", "4720", "4958", "7876", "6684", "11729", "7732", "7510", "8894", "7465", "5437", "11118", "12111", "9263", "1311", "8196", "6339", "3722", "7317", "373", "11895", "12966", "2864", "3368", "657", "1945", "856", "3123", "10630", "6801", "5881", "10320", "13422", "3281", "6111", "2939", "8789", "14604", "1946", "5091", "15149", "10525", "4167", "6088", "6365", "2492", "13899", "6587", "2637", "2682", "7403", "2022", "2987", "5669", "11636", "5971", "3215", "10072", "15226", "2555", "1890", "8475", "10868", "6797", "14168", "4778", "4323", "13404", "528", "10844", "2863", "10392", "3637", "13644", "4616", "5923", "12235", "13816", "416", "3330", "12313", "9784", "4428", "1014", "13453", "14629", "193", "9635", "5054", "10190", "9726", "13155", "2627", "1885", "11007", "13848", "8220", "7996", "10022", "10481", "2122", "2916", "8936", "11522", "12370", "11701", "7943", "9629", "14657", "7341", "12471", "14146", "9844", "5751", "12821", "10711", "13216", "13460", "10443", "8148", "12704", "5002", "14689", "5235", "13927", "10049", "13775", "2224", "12448", "2258", "10546", "4642", "14546", "14307", "1165", "9764", "11102", "1036", "6525", "5930", "3307", "405", "6868", "674", "12781", "4813", "5691", "13849", "8274", "1745", "6063", "2619", "14316", "8633", "15307", "8910", "12535", "10354", "9802", "12279", "9497", "6245", "9339", "1551", "4938", "58", "4486", "10276", "3520", "4304", "5851", "13529", "13883", "12636", "9417", "13744", "7003", "781", "4768", "1725", "12960", "9175", "8286", "534", "11877", "3538", "9141", "979", "13476", "5639", "5367", "10835", "12946", "5491", "12443", "3560", "11715", "6523", "8654", "1307", "7506", "13783", "6581", "13570", "9682", "3406", "10433", "10153", "10364", 
"725", "2996", "4979", "2600", "2108", "6205", "8679", "12624", "81", "8179", "6022", "3913", "14950", "2145", "12505", "6048", "3517", "7564", "6565", "13629", "13709", "9191", "4583", "10771", "1477", "14446", "11591", "1948", "7053", "7746", "2281", "14938", "13626", "13859", "779", "11746", "10161", "3013", "14232", "8373", "14477", "2461", "11859", "11385", "12202", "14487", "3052", "10384", "14212", "11637", "4441", "4784", "9464", "8221", "5224", "9395", "3407", "274", "12169", "14808", "6136", "13327", "14182", "6148", "11956", "1517", "14724", "640", "7073", "13217", "7235", "1396", "6940", "4422", "11420", "7725", "9343", "10005", "6256", "12015", "8040", "3004", "8912", "6515", "14761", "5700", "9139", "15223", "12403", "10377", "12582", "14838", "380", "14598", "9795", "10647", "695", "12212", "2387", "9860", "8874", "15052", "344", "6229", "3757", "7206", "14597", "4268", "10805", "8052", "5460", "4772", "4426", "8651", "13481", "1064", "6560", "642", "10359", "14771", "1352", "15091", "6462", "4125", "2886", "2735", "10301", "7731", "1566", "9271", "1995", "777", "12547", "11270", "9927", "1105", "8400", "8585", "11324", "2082", "10135", "4645", "4131", "3768", "10510", "12418", "14418", "8941", "15364", "1549", "10393", "388", "7872", "14839", "12633", "14695", "4649", "3255", "10079", "4493", "4259", "766", "11575", "3870", "13194", "15391", "7709", "9680", "9442", "4294", "7222", "555", "7946", "7140", "7475", "6954", "12249", "349", "15499", "5078", "3864", "7973", "719", "8858", "2373", "10118", "2883", "2245", "10388", "11868", "12977", "13318", "4862", "7132", "13272", "15313", "3916", "1609", "4353", "11786", "2274", "3838", "9278", "6631", "1663", "8216", "1463", "3433", "9897", "4002", "3455", "7561", "3260", "10357", "5919", "12840", "6364", "1832", "11797", "4447", "14465", "12913", "9142", "12853", "6056", "5089", "11970", "7381", "3784", "7701", "1634", "5418", "10665", "14196", "15465", "2568", "10335", "9453", "3662", "11855", "6904", "6855", "13410", "754", "13852", "9656", "10317", "649", "9792", "10544", "5063", "3213", "15255", "1067", "10074", "4102", "11640", "1069", "1112", "1695", "290", "8457", "7847", "2216", "5371", "5816", "4162", "13702", "5518", "10487", "9211", "9782", "4427", "295", "11642", "7844", "8646", "7687", "1008", "5014", "9579", "6887", "11835", "13758", "2527", "3659", "12492", "2408", "6254", "10232", "8252", "11105", "4708", "11869", "12875", "13739", "13905", "5679", "2090", "2858", "12556", "4765", "1426", "13076", "14283", "1400", "7049", "2755", "1280", "14788", "6120", "8210", "1057", "13127", "9088", "12970", "14951", "8487", "568", "9885", "13322", "18", "1420", "8825", "7988", "10748", "5844", "6958", "10181", "12080", "9474", "6717", "3363", "5656", "1710", "14882", "6605", "3057", "12589", "11013", "3634", "12619", "5633", "3573", "5902", "10216", "5610", "4682", "2449", "9188", "4039", "14750", "9154", "5625", "13038", "3275", "397", "122", "9631", "14389", "6156", "3817", "4657", "4270", "1020", "13844", "10051", "8539", "13830", "10864", "10047", "8971", "13143", "7252", "10854", "12337", "15095", "2678", "2330", "15015", "3153", "14485", "2165", "476", "3277", "3889", "6721", "9117", "4288", "10380", "10643", "7275", "7016", "114", "4092", "4824", "4227", "4592", "5994", "7175", "5860", "8957", "13618", "10637", "10981", "1050", "7505", "10059", "12751", "8642", "14989", "7207", "1328", "701", "4983", "7700", "3845", "6787", "6767", "9803", "10403", "10369", "11893", "13941", "14113", "2023", "13855", "3908", "986", 
"453", "11096", "3878", "10258", "5341", "17", "5278", "13402", "6938", "2252", "4258", "10520", "4325", "14355", "11221", "15417", "13642", "7045", "7761", "1792", "438", "11005", "9214", "2486", "2169", "3730", "11776", "15464", "587", "10590", "3129", "7478", "7791", "13711", "1413", "14452", "9953", "15080", "8155", "6539", "6667", "14120", "3249", "13633", "9424", "4079", "5569", "12926", "6132", "11954", "2807", "4806", "11741", "12414", "2734", "1684", "4161", "12640", "8814", "14767", "7456", "3087", "11178", "9508", "2367", "4604", "6799", "666", "9208", "5869", "2017", "2721", "4507", "165", "9558", "10073", "2417", "9233", "8044", "13348", "5797", "186", "15500", "2411", "11531", "11279", "3358", "11685", "7196", "8219", "913", "8453", "4629", "9449", "9650", "13503", "12462", "509", "7691", "13158", "9738", "6524", "2573", "14139", "10218", "8728", "10827", "1158", "7055", "12599", "8287", "9718", "2392", "12513", "10490", "14425", "8144", "10", "4368", "9166", "15108", "2420", "41", "15027", "54", "14399", "13554", "3895", "7601", "2630", "12961", "9325", "13968", "15404", "6851", "13731", "661", "8284", "4075", "2065", "8719", "13753", "15455", "3102", "8408", "12362", "5112", "4792", "15420", "7302", "5402", "1697", "9796", "7633", "8446", "13079", "5121", "7315", "9298", "11038", "10592", "12228", "4910", "5488", "9458", "2710", "9625", "10887", "11227", "11923", "5617", "46", "13767", "7796", "12309", "12232", "6913", "10866", "7502", "12475", "7283", "213", "3995", "2779", "12210", "1323", "6710", "6744", "9512", "3193", "9770", "8828", "8695", "969", "12295", "6832", "10020", "9346", "4546", "9259", "5345", "5879", "5951", "5245", "3008", "6182", "573", "12166", "6059", "1743", "6628", "12382", "11", "9562", "4021", "12324", "4782", "944", "6644", "1851", "8250", "7778", "2984", "8198", "5325", "14114", "14131", "13321", "9066", "13939", "13275", "7108", "10834", "5631", "11820", "102", "7580", "1337", "8668", "15285", "12009", "14105", "8976", "4736", "5775", "6024", "7539", "2041", "5964", "3240", "1180", "3487", "7542", "12667", "10778", "12318", "1509", "12123", "12014", "9441", "5824", "11991", "11499", "1006", "7927", "6258", "5596", "11033", "13812", "9229", "2355", "9500", "8243", "8431", "611", "30", "10766", "1146", "1901", "9149", "1414", "13429", "14395", "8533", "9685", "110", "14751", "7659", "2511", "14843", "337", "9176", "13623", "12587", "3999", "11785", "5515", "9431", "7618", "11020", "5899", "1222", "8746", "14227", "14371", "5784", "8660", "5900", "14909", "13723", "9348", "9293", "12700", "8097", "893", "2977", "2229", "10555", "11967", "13257", "10382", "6936", "2081", "1664", "2096", "15072", "6810", "2703", "11593", "9115", "5735", "8189", "12375", "15463", "6478", "14544", "7922", "9634", "6746", "11926", "3563", "3485", "2902", "6554", "8149", "11051", "2026", "13630", "13362", "95", "7952", "11124", "14055", "2944", "4338", "2575", "9506", "13259", "3541", "14072", "12259", "12192", "11286", "862", "2707", "5061", "1028", "11182", "4045", "2936", "13361", "14162", "1265", "12661", "10990", "11058", "5445", "4707", "2244", "6020", "8570", "11241", "4650", "4822", "14011", "4733", "268", "7082", "11160", "6830", "3146", "9905", "14646", "4337", "8002", "1035", "7094", "12125", "4773", "9527", "10997", "12138", "1027", "8345", "3154", "6915", "196", "8568", "4853", "5299", "12296", "12533", "7543", "8895", "6177", "3803", "1270", "1365", "13209", "15038", "1833", "1338", "11667", "14824", "9948", "14264", "14963", "1301", "1304", "6371", 
"11449", "13955", "7553", "1429", "4803", "12188", "9522", "1021", "4628", "11108", "2726", "1522", "1150", "9468", "7690", "13797", "12160", "5859", "11698", "9143", "1707", "13021", "10332", "10387", "3390", "4283", "2027", "9466", "14693", "14096", "3014", "378", "1070", "9236", "7858", "429", "1266", "82", "7605", "4939", "4917", "15250", "10037", "6536", "7057", "2390", "6454", "275", "3164", "8562", "5579", "1408", "9494", "8514", "3937", "359", "3731", "5263", "2731", "8863", "896", "374", "11189", "1873", "6748", "2865", "11753", "8956", "4562", "2464", "214", "14679", "15114", "12603", "1134", "7129", "9507", "13282", "4329", "11886", "13649", "5587", "10837", "5966", "6914", "15234", "3677", "9571", "5124", "8364", "7776", "5170", "678", "12644", "4334", "1920", "1296", "11095", "12869", "9990", "9593", "5789", "8011", "8710", "10605", "10372", "2196", "4596", "11980", "1952", "5875", "3981", "2688", "7150", "13065", "726", "10197", "9419", "3243", "7066", "11479", "7223", "3939", "1881", "13409", "10010", "4763", "1914", "11483", "8176", "4254", "926", "11363", "1031", "9892", "13819", "1002", "12601", "5555", "9525", "3544", "7034", "11858", "2890", "10131", "692", "5955", "9286", "3025", "8450", "12538", "10315", "12314", "12399", "7171", "13223", "2648", "4485", "8485", "5771", "2338", "7388", "8786", "12598", "9837", "10422", "10645", "10499", "4750", "6060", "5519", "14822", "10466", "4284", "14157", "2561", "10144", "10264", "10620", "6356", "1436", "5386", "5075", "8273", "2199", "11712", "14591", "253", "6720", "4791", "14828", "12085", "14154", "10012", "7560", "967", "11837", "756", "11532", "4286", "12810", "8395", "10880", "5908", "1765", "4964", "164", "4492", "3092", "2950", "9321", "393", "12609", "2615", "11519", "11988", "1278", "13970", "4263", "3829", "13417", "1376", "11371", "11506", "414", "8964", "4369", "117", "11500", "12771", "7173", "317", "3778", "4423", "7100", "12774", "12989", "14292", "1636", "3161", "7413", "4272", "8578", "5486", "14147", "12007", "404", "11538", "1661", "12106", "14879", "1062", "8867", "12803", "10781", "7634", "4047", "3141", "4458", "10468", "9455", "11143", "10171", "13608", "7350", "14151", "10848", "4264", "2583", "5281", "10318", "7570", "10670", "13737", "2948", "6953", "9073", "8644", "14137", "12787", "12384", "13999", "11657", "13933", "11229", "13117", "2416", "14760", "11901", "6972", "7546", "9702", "7325", "4344", "7770", "4704", "9299", "2394", "11653", "7537", "987", "707", "1091", "6617", "4277", "14237", "4947", "4891", "9768", "14119", "3143", "1965", "4220", "5791", "4690", "745", "11916", "7061", "9172", "13685", "15090", "7596", "4767", "3034", "11414", "15277", "4633", "8721", "8813", "2893", "4406", "11517", "8292", "227", "36", "384", "1374", "3740", "12553", "2123", "7225", "12380", "5963", "12906", "4563", "5516", "5153", "3091", "3182", "3567", "3827", "15227", "1306", "5777", "2746", "1015", "12688", "4038", "13428", "407", "3572", "5114", "12698", "2674", "6607", "11444", "3432", "2491", "11850", "6483", "6795", "4833", "11795", "1937", "5322", "14220", "7707", "13346", "667", "8889", "14445", "8036", "1569", "12681", "4293", "6823", "11475", "13542", "2750", "10528", "2319", "559", "5138", "6226", "8934", "2789", "4367", "14284", "12006", "14401", "4067", "7240", "10488", "7533", "9152", "2601", "6864", "10260", "5509", "2665", "11131", "2308", "10498", "9136", "280", "11556", "10967", "14247", "11708", "3410", "11678", "4726", "7074", "11129", "10589", "1110", "526", "5521", "1910", "10492", 
"2923", "5275", "1136", "9064", "1424", "14817", "3223", "3321", "4071", "9144", "9023", "2092", "7300", "9541", "11103", "11011", "2818", "3998", "9901", "519", "2749", "7484", "2020", "5497", "10966", "12637", "8978", "10065", "10803", "7352", "4656", "4697", "9766", "11560", "11515", "9556", "9314", "8950", "14442", "2591", "9666", "8859", "2415", "6656", "11782", "11254", "544", "7083", "12927", "15190", "12838", "3928", "4488", "4296", "2292", "10149", "8073", "2675", "1061", "11383", "15265", "9524", "10105", "5151", "14925", "9535", "14848", "11641", "8545", "2509", "2539", "11658", "6727", "15478", "7718", "119", "3793", "9285", "3600", "9643", "6531", "1722", "8211", "12780", "4509", "10509", "676", "14525", "4472", "1406", "9557", "11951", "305", "3918", "5550", "2862", "10225", "12572", "4830", "1422", "14741", "2952", "3269", "5621", "5487", "9540", "4798", "11684", "10920", "15350", "5653", "1171", "14127", "4689", "3735", "15171", "9985", "8253", "3886", "9481", "12929", "8507", "5752", "219", "536", "2947", "2374", "4027", "14972", "10650", "70", "1682", "4150", "5020", "9662", "1029", "13602", "13682", "7138", "1161", "13774", "11824", "15164", "10429", "1603", "11275", "6382", "10363", "3036", "4756", "297", "7391", "684", "14821", "11965", "9606", "10184", "1125", "9961", "12736", "4012", "13856", "8741", "9843", "11079", "12701", "12214", "8555", "3900", "14928", "11571", "2625", "4457", "6125", "1958", "462", "7685", "13313", "1163", "15427", "2462", "12657", "13027", "1982", "12485", "3777", "5841", "9376", "11314", "5761", "7909", "1542", "908", "14348", "9170", "3237", "7191", "10507", "6729", "14558", "9181", "7158", "5541", "11767", "8885", "4058", "11521", "9183", "717", "11320", "266", "14111", "4174", "7321", "3986", "10389", "12463", "8551", "4809", "12315", "1990", "9877", "2180", "13235", "4", "4096", "3473", "325", "2690", "15016", "1316", "3001", "3901", "3310", "1970", "1327", "7551", "2166", "15132", "3966", "7589", "8761", "13571", "3564", "10639", "1605", "318", "512", "14417", "10212", "10019", "1944", "14862", "1516", "10032", "10475", "5294", "4807", "15460", "1795", "10690", "4680", "4534", "14531", "3100", "11852", "982", "12359", "5494", "9670", "5757", "11961", "12090", "12482", "11686", "5405", "2982", "14596", "10093", "1235", "1122", "2910", "9536", "11423", "360", "2985", "6248", "10296", "1072", "13098", "8008", "1405", "2887", "4020", "2795", "13288", "1767", "9599", "8201", "895", "11973", "3346", "7647", "8913", "11784", "10390", "9280", "14288", "12262", "9428", "3799", "4155", "4312", "750", "6031", "9834", "11310", "1291", "12340", "1508", "12731", "7091", "4622", "2838", "8861", "3019", "11920", "6342", "2742", "2574", "14770", "3813", "9041", "4571", "3553", "4702", "11986", "15126", "1330", "10214", "13011", "8925", "12544", "2237", "4717", "12916", "7886", "4820", "13200", "1315", "1312", "7366", "8494", "5972", "9303", "12449", "15473", "622", "6705", "11805", "12434", "11766", "8330", "3924", "1427", "3412", "1102", "5492", "5905", "12154", "15235", "1367", "3172", "1256", "5105", "2353", "10262", "1216", "6135", "548", "2993", "12091", "4608", "10137", "9620", "4790", "13141", "7260", "4712", "1854", "2442", "12140", "598", "12962", "4328", "9410", "11702", "3094", "8529", "1140", "4339", "4721", "11867", "2934", "1719", "633", "6973", "15076", "10663", "15141", "15056", "4606", "2071", "3023", "1727", "9617", "4433", "12588", "2536", "10632", "15461", "12245", "7441", "1674", "11134", "3408", "7063", "4631", "10351", "10001", 
"11085", "3450", "134", "12429", "9329", "978", "9187", "10367", "5364", "1114", "256", "5180", "983", "3056", "669", "5369", "15474", "14101", "4991", "14865", "2120", "4554", "5077", "1246", "3691", "3592", "1151", "5071", "3073", "13107", "15167", "581", "13071", "10496", "5699", "10912", "15337", "9476", "9179", "14426", "2263", "11670", "6124", "15283", "10395", "4343", "4841", "5131", "3030", "1971", "2265", "1269", "10649", "13652", "11299", "7250", "8403", "3514", "4676", "11445", "11458", "2852", "15451", "7203", "888", "7880", "8579", "590", "5292", "600", "7310", "13452", "5026", "324", "14807", "7112", "11126", "10904", "10627", "11590", "9235", "4893", "12456", "6433", "2448", "4118", "12045", "2068", "4076", "703", "2127", "1960", "1694", "8381", "13944", "7406", "3180", "13087", "489", "15039", "11516", "8765", "5765", "4515", "8382", "11100", "14872", "4514", "10337", "6957", "1903", "13145", "11127", "6533", "2723", "10621", "12152", "1589", "386", "1626", "3890", "4698", "14376", "14152", "3511", "1060", "12730", "3548", "4207", "14776", "12425", "399", "4780", "11645", "2184", "12794", "5506", "14245", "1895", "1457", "11284", "3695", "3985", "4679", "617", "2019", "1476", "7267", "2724", "15160", "3244", "14603", "3771", "9316", "3652", "11761", "2160", "10583", "12322", "12845", "12288", "14783", "7264", "2205", "10970", "8525", "11004", "12664", "15178", "6089", "4273", "15492", "13869", "687", "9495", "2882", "8926", "12626", "7288", "8489", "10437", "14937", "5125", "2069", "14387", "5528", "9849", "9063", "7141", "7964", "10757", "12973", "12466", "9763", "9173", "5738", "1113", "8502", "975", "12316", "10500", "3656", "3877", "9601", "2034", "8248", "11006", "3488", "14818", "13086", "2652", "9694", "5495", "6138", "12648", "2775", "8385", "5242", "10838", "14702", "853", "1597", "3088", "3195", "4727", "4159", "4170", "11390", "4972", "13010", "12742", "14503", "3638", "7104", "5750", "12712", "5524", "3586", "3449", "3540", "11247", "665", "2289", "7663", "14448", "1016", "14102", "4421", "2518", "348", "10400", "4685", "4080", "5865", "4553", "14749", "14605", "9502", "3973", "5568", "3189", "370", "1298", "6010", "9104", "10863", "2018", "14736", "3612", "1375", "11604", "1373", "15270", "1167", "8183", "5374", "907", "7224", "12070", "13678", "5998", "2270", "778", "5501", "12765", "14142", "6122", "5855", "1100", "3403", "5872", "11059", "8641", "14880", "4711", "3767", "2569", "9279", "10368", "5076", "4796", "5368", "4479", "13337", "202", "4310", "1175", "13009", "14325", "1101", "8963", "10503", "12374", "1253", "6476", "12898", "335", "14623", "9594", "10325", "2658", "7056", "2487", "1308", "8599", "1033", "885", "1838", "4817", "188", "8788", "8013", "9398", "15326", "13838", "14061", "6037", "15177", "9055", "4710", "11280", "850", "13023", "3404", "406", "8819", "9582", "12671", "9049", "2185", "15475", "1295", "75", "14100", "7469", "10084", "12284", "2285", "12585", "14769", "7142", "10635", "8881", "10762", "4306", "12230", "5499", "12453", "9436", "12674", "6011", "10394", "14073", "14023", "10444", "1242", "4502", "992", "11690", "6642", "3796", "4549", "11055", "5850", "5556", "6937", "9707", "387", "917", "2423", "15031", "4996", "11758", "15189", "13707", "1159", "195", "3210", "11043", "12739", "9492", "11732", "1172", "7215", "981", "9632", "327", "13221", "10071", "645", "9217", "12767", "4331", "14923", "1279", "11354", "13418", "2206", "13894", "10754", "6657", "9668", "9145", "7149", "4109", "15484", "12043", "14543", "4412", "15471", 
"10999", "4190", "166", "1019", "10707", "13688", "11340", "4732", "9644", "244", "5544", "710", "13760", "3857", "7717", "14757", "12464", "11709", "2544", "9163", "10026", "9471", "4475", "4564", "1098", "10998", "4269", "2660", "8699", "10159", "4705", "9488", "6550", "6896", "3839", "9192", "2592", "9604", "450", "976", "4808", "4069", "11042", "4461", "9193", "1363", "8968", "900", "13396", "4221", "47", "12053", "1757", "4787", "4487", "4662", "14300", "8375", "5144", "14171", "1638", "7429", "12621", "5288", "11831", "1956", "11071", "1104", "12928", "14861", "1968", "9027", "4781", "14936", "693", "10126", "2842", "2111", "1344", "14790", "3156", "13825", "586", "7648", "9778", "11439", "10571", "51", "3764", "3227", "12975", "5141", "12622", "5835", "1654", "4440", "11035", "8576", "4203", "4494", "8012", "4590", "10631", "29", "8351", "8875", "423", "6831", "9123", "13520", "10763", "9895", "6570", "3903", "7293", "8436", "10606", "10742", "7777", "14642", "1592", "1826", "11618", "1211", "3458", "4322", "5785", "4083", "2076", "13903", "10608", "8161", "718", "11898", "15098", "1492", "12126", "157", "8779", "2045", "5954", "11379", "9722", "3647", "112", "15", "278", "807", "12332", "9114", "3683", "4527", "3760", "7748", "14564", "8456", "6686", "11114", "4524", "7384", "128", "96", "9574", "11525", "13966", "492", "3364", "4745", "2124", "11480", "12523", "8621", "2220", "13425", "8110", "9993", "4940", "4867", "7481", "9367", "14301", "14562", "6733", "2722", "1608", "3300", "7527", "1773", "12151", "6979", "2640", "631", "11536", "7646", "882", "9799", "3688", "1717", "3766", "11985", "6321", "828", "2498", "10521", "8523", "7829", "10855", "2901", "6425", "1209", "552", "8257", "11614", "7246", "7103", "5535", "7765", "11927", "2254", "2370", "8709", "7216", "5500", "5558", "8170", "3840", "616", "2737", "7873", "3810", "4256", "10865", "7786", "2481", "14819", "6142", "3932", "4275", "10720", "12057", "4474", "11621", "2040", "6883", "13488", "7867", "5660", "1913", "7986", "14834", "15157", "10087", "4209", "11880", "10987", "11212", "13094", "14272", "14952", "2730", "11885", "3772", "9137", "3729", "8618", "689", "11395", "6411", "8716", "3464", "11210", "4609", "13384", "14690", "8766", "13201", "6492", "14686", "9971", "3355", "2696", "9930", "9265", "14460", "14510", "4106", "3622", "13878", "3668", "7148", "11628", "6514", "936", "4498", "4244", "106", "9252", "8526", "1624", "14708", "10284", "9699", "12847", "12130", "12987", "3273", "15120", "8458", "8444", "10775", "2262", "4189", "4931", "13285", "6546", "9387", "10024", "8793", "4843", "32", "3395", "1878", "1582", "180", "4138", "10427", "12793", "11293", "12790", "10025", "6166", "457", "14408", "7870", "10416", "10802", "15315", "3946", "10638", "345", "12016", "13605", "1545", "15445", "11541", "10127", "9904", "1544", "4194", "11728", "829", "357", "2007", "7840", "15071", "8712", "14268", "1586", "9649", "11648", "12452", "8900", "6660", "14634", "5469", "6459", "4219", "2577", "9919", "2638", "11267", "13713", "339", "11252", "3628", "93", "7166", "3256", "7630", "11912", "10666", "10449", "3834", "1793", "6557", "6175", "3629", "372", "4168", "1980", "13134", "7664", "8449", "10611", "4821", "14406", "4185", "9097", "11097", "8405", "8238", "11450", "4978", "11302", "10293", "12625", "694", "5029", "10534", "1860", "7901", "4832", "15370", "3478", "11764", "13085", "10132", "6458", "808", "10474", "5221", "5476", "10954", "6160", "11319", "2855", "3906", "10989", "11476", "1742", "3398", "1458", 
"4265", "3173", "13413", "15258", "8042", "11050", "21", "10346", "4977", "8544", "2523", "11044", "3484", "5414", "9548", "5588", "12879", "9551", "13803", "3045", "10907", "15209", "7038", "13276", "3148", "6349", "5046", "2058", "12713", "6930", "13634", "6668", "3779", "7107", "376", "12328", "9687", "14717", "2918", "11566", "13173", "12254", "14836", "3347", "5178", "12498", "14285", "15269", "4528", "221", "14801", "11288", "7900", "845", "7176", "15363", "1931", "6636", "3268", "10522", "3749", "1639", "5559", "7881", "672", "3618", "10270", "9856", "11144", "11021", "13104", "7412", "9846", "12209", "6028", "6102", "9036", "4557", "6527", "3280", "367", "3323", "9565", "2109", "6400", "7877", "2032", "8068", "11936", "1981", "5237", "2889", "3936", "7756", "1846", "13908", "6367", "77", "6435", "12922", "9232", "2771", "7036", "12694", "6083", "7501", "7888", "10439", "2296", "3196", "7355", "1711", "11222", "2107", "546", "13398", "10194", "8358", "6576", "13594", "11177", "13853", "3610", "15409", "2976", "1554", "12478", "3503", "330", "9085", "6975", "13613", "105", "10925", "10033", "1437", "5240", "9014", "14194", "10603", "5134", "13880", "8311", "4465", "11928", "1686", "14043", "4599", "8335", "5142", "7368", "7742", "11141", "5482", "3758", "14722", "7644", "9616", "342", "13832", "8969", "15232", "8339", "6416", "9309", "6563", "3727", "10714", "4236", "11681", "10016", "8038", "6101", "12072", "13946", "12077", "8290", "1478", "10862", "57", "1780", "12527", "14948", "673", "6882", "10081", "15380", "3385", "3378", "12287", "6448", "1859", "1314", "10769", "13080", "13051", "3917", "11919", "13957", "14930", "5747", "10626", "6724", "1081", "2785", "1645", "14554", "12470", "6931", "13045", "9503", "8028", "11296", "10599", "3229", "13273", "15035", "6119", "14960", "6099", "9640", "14474", "9101", "14020", "8134", "8246", "1523", "4769", "7125", "13643", "10946", "8325", "14368", "13754", "12786", "3467", "1662", "2714", "7970", "13227", "5449", "4255", "5990", "11705", "7599", "11356", "5817", "6223", "11153", "4953", "11308", "6288", "2758", "822", "3670", "9203", "12755", "2803", "13123", "2514", "50", "4413", "14267", "4575", "11125", "12918", "4962", "5101", "10133", "10489", "14959", "12990", "4945", "4591", "13067", "4124", "12422", "4347", "15088", "5192", "8418", "2552", "875", "1835", "5212", "10139", "2039", "2759", "9267", "9228", "2687", "5696", "303", "15106", "5589", "5241", "4594", "10895", "4870", "28", "1177", "13166", "12468", "515", "8054", "10405", "2668", "1025", "9697", "11857", "1983", "12145", "8887", "3783", "2269", "6348", "10591", "13976", "6608", "946", "5736", "13802", "9034", "1601", "2876", "10513", "8461", "10430", "8337", "3288", "11849", "7490", "7713", "8838", "11137", "11874", "732", "2500", "4892", "7180", "5573", "9095", "1558", "9225", "12051", "2174", "2256", "3991", "9654", "4587", "15259", "8571", "522", "2358", "2001", "14468", "933", "4829", "12299", "5181", "10229", "11544", "12306", "322", "879", "13129", "11888", "9546", "13805", "4869", "473", "8598", "15319", "6903", "7417", "11696", "11111", "9609", "4355", "9459", "6614", "2626", "6387", "13470", "13442", "8419", "15044", "1518", "8553", "13833", "10658", "10846", "14104", "11798", "6324", "8849", "7156", "13748", "4469", "8989", "2520", "5644", "7962", "11875", "11443", "8139", "130", "15224", "9220", "1978", "5534", "8482", "6956", "10919", "11374", "12653", "14437", "13610", "11803", "11747", "8032", "7884", "282", "3602", "12768", "12339", "9739", "11347", 
"7992", "7404", "12178", "7443", "14567", "3611", "7271", "2456", "2526", "14393", "4317", "8938", "2913", "6664", "12773", "8347", "7528", "5987", "14727", "2085", "9823", "9384", "5425", "6841", "4403", "7784", "2894", "12957", "333", "11234", "1302", "1652", "8666", "5629", "964", "12416", "5468", "10195", "6057", "11962", "13393", "5072", "14978", "9908", "7769", "1621", "2177", "6734", "11164", "8387", "5286", "15286", "3761", "746", "7463", "7674", "8064", "3457", "9276", "6737", "4062", "2153", "3319", "14534", "11572", "9537", "5440", "10036", "1687", "2995", "8070", "11946", "6475", "6545", "1093", "8763", "8363", "9737", "11870", "6091", "9484", "12822", "7625", "8796", "5489", "7613", "2542", "3328", "12162", "7040", "4667", "668", "4827", "1505", "11737", "7468", "12083", "13206", "9202", "8691", "5913", "6231", "8977", "6112", "9273", "8501", "8479", "5096", "6140", "1047", "4313", "6827", "9709", "14241", "115", "3205", "9874", "648", "15338", "3453", "10045", "2972", "3028", "11616", "4513", "13907", "62", "2183", "12049", "4303", "7588", "7255", "13636", "12590", "6966", "12804", "12725", "10609", "13103", "10141", "270", "10175", "5191", "13851", "804", "9351", "10939", "13447", "13315", "2524", "14746", "1660", "4950", "8317", "3627", "11720", "6262", "15134", "3438", "8929", "14886", "10236", "8809", "6015", "9025", "10040", "14047", "11269", "8794", "10793", "2211", "470", "1677", "8173", "12656", "10584", "8672", "13873", "13515", "2765", "2784", "10683", "5946", "2438", "4677", "3666", "2474", "15333", "1318", "4627", "7358", "5800", "13121", "10921", "8053", "11372", "1928", "14192", "13511", "6895", "129", "15359", "12102", "11375", "2328", "10717", "8951", "7878", "379", "8711", "10875", "12696", "251", "11677", "3980", "10542", "9433", "4085", "10731", "5019", "14895", "14304", "162", "2756", "1728", "14968", "6405", "8091", "7727", "7623", "7327", "1894", "11413", "3640", "9342", "14404", "14331", "8751", "2072", "8067", "11802", "39", "11081", "11545", "1504", "11394", "12849", "805", "1776", "5732", "2763", "9020", "3343", "6561", "3486", "13884", "14103", "14766", "2259", "14098", "5611", "511", "3949", "9929", "12663", "2661", "7394", "2820", "11934", "14489", "9526", "12850", "6497", "2528", "5383", "15081", "7422", "4285", "2980", "6815", "3795", "9612", "2036", "2633", "786", "11459", "4865", "4777", "2548", "7699", "3147", "7192", "4928", "9451", "7398", "2701", "8432", "4921", "9201", "6259", "13836", "3820", "7672", "1233", "854", "15143", "4405", "13292", "10062", "12143", "4229", "15112", "12534", "11738", "14835", "8138", "2480", "10678", "15212", "11910", "1054", "13986", "13762", "12564", "2439", "8942", "3962", "10772", "8662", "3581", "6700", "14083", "1257", "6252", "411", "7833", "15316", "9180", "194", "7695", "5651", "6390", "12191", "9086", "13616", "5087", "153", "3217", "13522", "15020", "12438", "12171", "6144", "9295", "971", "10038", "14572", "8094", "14458", "13382", "6662", "6335", "9978", "8424", "9369", "1231", "4995", "675", "15144", "7851", "6833", "4510", "1230", "9613", "8095", "12938", "2516", "7554", "13256", "6104", "10121", "4425", "11170", "15096", "9475", "10472", "8722", "6230", "1726", "2323", "15369", "10780", "8731", "12934", "14364", "1189", "3125", "4223", "3400", "14551", "13599", "10911", "7265", "1051", "11416", "5597", "1783", "175", "13228", "801", "12440", "553", "14126", "14094", "11188", "15089", "14975", "2905", "11577", "7179", "11418", "10914", "1527", "8088", "1238", "2912", "3111", "4992", "12923", 
"12777", "5339", "4927", "12196", "6092", "5689", "2", "11669", "13977", "7226", "14385", "4989", "12118", "7137", "6858", "12087", "6185", "7626", "8356", "12410", "1998", "12078", "1769", "4445", "5166", "11957", "7127", "11819", "4300", "4349", "13325", "12349", "11216", "241", "5857", "6814", "12068", "3067", "1855", "4883", "9491", "13932", "14431", "11271", "1267", "14813", "5953", "2264", "461", "9004", "14594", "7476", "3035", "11339", "10255", "7323", "5483", "4402", "15332", "14010", "3801", "9578", "13677", "5340", "9219", "6983", "2425", "7420", "1106", "4993", "11902", "10700", "10463", "1370", "3167", "6212", "13593", "3617", "10689", "13397", "13097", "2201", "7022", "10527", "12248", "5384", "2739", "15294", "10577", "11470", "11048", "13044", "10120", "10572", "12620", "5717", "11389", "3062", "13407", "3894", "3681", "9103", "9421", "2475", "6452", "5995", "700", "3084", "7513", "14981", "8465", "12737", "434", "16", "14339", "12912", "6850", "14088", "13536", "12391", "5466", "8556", "3245", "4434", "234", "12432", "1615", "14924", "2685", "15013", "12093", "13997", "8275", "5247", "7163", "5566", "4136", "5109", "6682", "5285", "12405", "1828", "14428", "9933", "7452", "2168", "6852", "2966", "5632", "1817", "7976", "14635", "12826", "13329", "12054", "14077", "14279", "3332", "4471", "6467", "9857", "5845", "13646", "8724", "2767", "13994", "5481", "3566", "11656", "8985", "6351", "3969", "9839", "13140", "3427", "14108", "4099", "9277", "7201", "12294", "796", "10823", "1085", "11762", "776", "12614", "9357", "1349", "10265", "13349", "8803", "2405", "15327", "5665", "3136", "12574", "1688", "11530", "1866", "4632", "8061", "4107", "8852", "6638", "12659", "11830", "11198", "187", "13441", "6872", "2315", "15165", "4295", "784", "1012", "136", "4437", "12889", "5047", "1282", "5650", "10852", "12317", "5737", "9162", "13132", "4122", "2605", "5981", "8883", "11511", "9256", "14378", "5931", "1576", "11679", "3781", "14038", "3241", "11447", "1455", "7609", "6888", "11929", "1770", "6843", "12055", "8300", "9077", "624", "9026", "8744", "570", "3899", "12770", "5319", "937", "8331", "6779", "663", "15176", "5513", "6725", "7865", "6926", "12240", "7557", "12222", "9783", "2788", "7757", "155", "466", "506", "2430", "14579", "3690", "14402", "8278", "4082", "12575", "6007", "13204", "9288", "5027", "13485", "8194", "12880", "11434", "5815", "13096", "10899", "7230", "8242", "13781", "2970", "7069", "13716", "4695", "14611", "3993", "11711", "422", "2781", "12717", "2050", "10812", "3716", "1974", "8025", "1579", "12788", "7426", "3377", "12855", "11812", "5728", "11937", "15390", "9648", "13443", "4811", "815", "3833", "5956", "11984", "2300", "11856", "9443", "12963", "9090", "4135", "12435", "1419", "6942", "3142", "3580", "10676", "2437", "1305", "1640", "1714", "4377", "6679", "14253", "9262", "13477", "8876", "10434", "7146", "15008", "139", "7693", "2736", "14915", "13954", "2560", "14630", "2712", "3422", "334", "12707", "13526", "12647", "1573", "8524", "13480", "11783", "3242", "4420", "10461", "5450", "12238", "1255", "7978", "2278", "6772", "3118", "4876", "2718", "15058", "10412", "2563", "3312", "6535", "12334", "12643", "11186", "708", "4816", "9226", "5593", "1082", "7571", "6547", "5021", "13742", "3392", "14412", "9361", "9401", "995", "167", "11551", "15105", "8251", "13576", "6715", "10688", "10646", "8092", "2248", "3751", "5118", "13788", "9870", "7758", "15426", "9391", "14166", "14520", "15229", "5184", "9485", "7918", "9935", "9199", "1215", 
"1678", "12013", "15383", "9204", "771", "9600", "13975", "9071", "12056", "7205", "2042", "10504", "8230", "5407", "8918", "3891", "1734", "4930", "1622", "4658", "12877", "14295", "5982", "12825", "14263", "9955", "12672", "13210", "14372", "3508", "12861", "9745", "2942", "13840", "11040", "1129", "1987", "2879", "10145", "1334", "15272", "12401", "10322", "8953", "4287", "11539", "6051", "3742", "4120", "14582", "8771", "8535", "14897", "13828", "1212", "4638", "15312", "9780", "13573", "7739", "952", "14363", "1135", "12122", "3510", "11435", "8783", "7740", "8901", "9890", "11682", "9246", "1814", "1073", "5622", "14079", "15010", "208", "4856", "12081", "12710", "1227", "12634", "13160", "12831", "7862", "7495", "2139", "3152", "12802", "10397", "7131", "5836", "7982", "9677", "2851", "1493", "2446", "5036", "8846", "9084", "12732", "7573", "7958", "7101", "5890", "13806", "9414", "9505", "2046", "881", "11249", "15345", "2060", "10426", "15276", "12433", "13663", "15219", "12904", "1775", "8700", "10017", "11430", "7995", "7971", "4360", "9336", "6003", "14084", "8442", "6988", "13611", "1889", "2728", "4319", "2055", "3723", "1577", "8713", "6964", "12758", "13568", "3416", "3843", "2234", "9099", "11917", "13283", "14081", "6414", "1926", "6440", "6419", "8234", "5415", "4330", "8268", "15202", "10094", "1870", "7526", "3506", "3893", "13299", "11174", "445", "10182", "11482", "6494", "5301", "3085", "2571", "8346", "733", "8213", "248", "1236", "9021", "8293", "8855", "940", "2340", "8920", "8056", "1191", "7297", "9659", "12839", "6654", "15102", "7438", "6151", "7474", "11647", "12348", "7172", "9127", "15037", "2667", "5312", "3149", "6176", "9721", "1399", "13034", "6876", "10336", "1787", "11226", "12360", "13915", "2445", "14667", "14261", "2051", "787", "13491", "4232", "7144", "13668", "6167", "3841", "14660", "10263", "11474", "797", "3598", "9037", "13163", "1754", "15162", "2144", "6773", "7871", "1244", "15425", "4668", "15261", "4959", "3344", "11049", "7763", "2686", "3577", "1277", "11167", "9074", "7556", "9818", "12721", "7540", "4019", "14324", "2530", "6835", "9092", "5620", "1557", "10425", "4864", "2676", "8601", "2094", "8078", "12789", "12364", "1892", "4503", "4655", "5932", "7835", "5546", "13641", "11309", "5375", "15288", "13937", "135", "3933", "6216", "14250", "2029", "43", "13708", "15248", "7933", "3175", "2499", "9920", "11332", "9362", "13751", "14539", "11771", "10536", "11191", "6410", "7095", "12404", "10304", "6812", "816", "4228", "8481", "2189", "10130", "8366", "11582", "1115", "6651", "1514", "12726", "13018", "12137", "13574", "424", "2699", "10046", "8027", "19", "10751", "2832", "840", "3847", "4762", "11777", "10782", "5225", "4875", "3734", "2564", "9260", "6408", "7218", "1954", "656", "6438", "4057", "13451", "12205", "5636", "12754", "12678", "4478", "8717", "13810", "8750", "13265", "1193", "6049", "10280", "13110", "6429", "1144", "468", "1818", "13004", "1668", "13597", "955", "10516", "3409", "6300", "12326", "14217", "3352", "366", "10277", "4382", "3524", "1847", "7259", "5196", "5988", "6358", "10530", "5271", "8683", "11245", "13364", "15382", "11307", "10959", "13877", "8026", "13914", "7671", "7666", "9664", "9804", "11205", "4758", "5269", "4878", "556", "5164", "13897", "12685", "11619", "2114", "2176", "13765", "10623", "10300", "8577", "14342", "12948", "716", "12807", "4536", "7641", "3752", "9831", "14993", "484", "14028", "8774", "14935", "10826", "138", "12447", "7410", "3694", "13316", "520", "5647", "8689", 
"12237", "11378", "7136", "5985", "6726", "10753", "11813", "2606", "10356", "8614", "10600", "137", "1571", "1118", "543", "13674", "2293", "4915", "15069", "12997", "7435", "9001", "13928", "2588", "11978", "13971", "1533", "12187", "10077", "1186", "1418", "14780", "12515", "6265", "6516", "8702", "3295", "4541", "691", "15262", "9486", "11700", "11823", "5718", "10740", "2895", "7846", "14153", "12489", "7331", "10816", "8680", "6952", "14230", "5575", "7720", "7335", "11297", "4008", "3953", "14648", "2218", "11939", "11586", "5008", "9479", "14961", "3651", "5059", "6622", "10613", "11000", "14337", "9394", "12247", "11472", "8558", "11468", "1738", "8823", "11117", "9595", "1953", "3401", "13632", "5116", "10160", "200", "12591", "6511", "2799", "6935", "14565", "2295", "11462", "8924", "10982", "15014", "1900", "454", "7211", "15349", "12974", "14032", "259", "15495", "527", "8800", "4004", "11969", "7827", "721", "13506", "1876", "1247", "13088", "10938", "11246", "8949", "352", "12331", "11010", "8623", "934", "13963", "5604", "12379", "11072", "7121", "8604", "7684", "8787", "3393", "3225", "2440", "8536", "14497", "394", "1790", "10659", "6395", "12357", "2066", "14613", "495", "9490", "12856", "10881", "14803", "6070", "7072", "6512", "8404", "9676", "5154", "10451", "12896", "9411", "14353", "2535", "8438", "2141", "1812", "11811", "13434", "8797", "12836", "15314", "10060", "6991", "6283", "15222", "2095", "4464", "7054", "10107", "1369", "10824", "12761", "1618", "6532", "5168", "14003", "9845", "3331", "1320", "1425", "7854", "10549", "15491", "877", "1658", "6453", "3431", "9371", "1934", "15168", "12501", "7576", "4855", "9148", "3997", "9057", "6220", "1702", "14548", "7077", "8023", "6183", "4028", "6188", "8206", "12996", "4450", "5253", "14858", "12113", "11370", "9866", "8229", "3699", "7462", "14636", "8607", "12487", "5627", "13518", "6865", "3340", "15330", "7751", "11818", "1916", "14159", "8217", "14664", "6625", "13887", "2006", "6621", "12610", "13197", "9407", "11839", "15169", "8801", "5232", "4919", "9385", "13831", "7649", "15152", "7894", "9061", "9893", "9691", "4699", "10031", "1336", "7716", "9581", "5939", "8282", "8655", "14555", "1386", "6793", "6437", "2670", "6297", "6314", "8166", "4073", "15244", "3805", "5208", "3007", "2805", "5236", "4081", "7670", "9598", "8962", "7981", "10055", "7098", "3741", "8369", "5837", "9964", "6847", "3345", "5741", "6739", "12469", "722", "1617", "11751", "11087", "2100", "12679", "8620", "4396", "10905", "5707", "12891", "5343", "1483", "6282", "2884", "13297", "6004", "12680", "2175", "6806", "7258", "13965", "14655", "11106", "4340", "9331", "2809", "3748", "14748", "3114", "15393", "341", "8135", "3065", "11724", "15362", "7681", "6658", "10344", "6822", "798", "14062", "23", "4899", "7009", "11676", "1888", "8256", "12757", "2641", "2906", "4504", "14560", "7233", "4387", "15438", "10414", "11941", "13938", "4659", "14008", "10713", "13906", "7134", "8997", "3665", "13252", "8948", "10446", "9310", "14414", "9460", "14965", "14190", "7821", "3975", "13676", "5712", "336", "9848", "2700", "15396", "773", "2033", "846", "8830", "14436", "146", "11230", "11232", "1779", "456", "1440", "2386", "5615", "13801", "5867", "6098", "7809", "14491", "2476", "7504", "12342", "4025", "12312", "2547", "14486", "6776", "6375", "12029", "14934", "14498", "8029", "15147", "8080", "13205", "11196", "6783", "2596", "4064", "3322", "12110", "7392", "15341", "10289", "13700", "5132", "15170", "4970", "9403", "841", "6191", 
"7087", "14231", "671", "5198", "5238", "6756", "6128", "14607", "10241", "2158", "7891", "994", "4774", "9135", "8090", "7010", "12352", "13459", "13170", "9646", "12329", "2817", "15018", "7998", "5814", "6718", "6643", "569", "7822", "14622", "8909", "551", "6526", "8473", "9426", "1918", "1852", "12297", "1143", "1536", "464", "2087", "1655", "8546", "1675", "3942", "2436", "557", "10231", "12592", "6012", "3329", "3011", "8425", "12103", "11091", "2037", "15053", "11290", "15006", "14019", "11601", "6507", "6885", "9972", "3107", "3354", "6703", "5709", "7379", "9946", "7500", "1540", "3704", "2217", "8285", "3254", "7512", "8682", "4517", "6480", "873", "11763", "11454", "8575", "13007", "1428", "14687", "15115", "1825", "2413", "14540", "7856", "13489", "6456", "9928", "2772", "5753", "1808", "4374", "14626", "3875", "5766", "6251", "294", "10340", "7497", "6118", "10374", "10910", "4074", "2064", "4452", "14739", "11796", "14615", "6193", "4663", "10840", "5706", "3926", "3818", "3869", "764", "6394", "1899", "10587", "662", "12255", "12612", "1430", "760", "1578", "10543", "8258", "14332", "3096", "3389", "13811", "664", "1250", "13058", "1190", "524", "1309", "14825", "15003", "1416", "13177", "3311", "11664", "1022", "11438", "7683", "14745", "7151", "7440", "3968", "1791", "3238", "6126", "9530", "7624", "4623", "14674", "0", "1259", "12201", "2118", "3465", "11244", "12629", "7547", "8261", "1032", "837", "609", "5358", "2401", "7145", "14507", "12170", "5833", "13847", "10082", "4694", "9728", "9544", "14430", "2861", "1470", "14768", "13151", "8030", "4392", "4049", "2075", "10874", "210", "11548", "8834", "876", "12937", "1326", "12979", "9720", "14278", "12954", "12156", "11595", "7213", "1439", "5135", "4126", "13800", "12190", "1930", "7612", "11202", "7274", "6990", "10607", "5171", "12372", "14394", "1564", "15161", "2666", "13340", "9663", "9247", "6698", "9372", "6669", "11789", "8223", "6803", "11163", "5901", "2360", "3074", "5634", "5698", "2953", "7393", "9483", "7545", "12550", "4800", "4863", "12356", "2990", "5000", "6190", "5846", "2284", "9292", "11203", "3715", "8980", "8398", "4023", "10200", "8270", "3570", "14499", "9888", "10494", "2173", "485", "7333", "7304", "10295", "11192", "10165", "4072", "11398", "10004", "1111", "10410", "7520", "7473", "4037", "451", "441", "3608", "4327", "12536", "10125", "2257", "2341", "4066", "12697", "3596", "11172", "11036", "6397", "14870", "5218", "7657", "4925", "10375", "11046", "7425", "8864", "2350", "1816", "14022", "12092", "14138", "9688", "8899", "602", "15242", "12524", "6153", "1137", "13770", "15107", "1003", "6589", "12107", "14222", "8737", "7602", "11652", "9771", "14228", "8534", "7270", "7244", "4139", "9251", "13854", "1454", "245", "14274", "1751", "4414", "6319", "2203", "7471", "12260", "12444", "4968", "13430", "8509", "14336", "14900", "5967", "4386", "10476", "7523", "11829", "13309", "14898", "9024", "11316", "12749", "2458", "8447", "6422", "560", "1739", "12690", "8209", "7837", "13319", "3371", "3832", "5248", "10191", "13351", "10681", "3854", "5935", "6968", "4771", "133", "355", "6392", "4913", "6353", "9365", "10902", "3887", "3391", "2994", "8843", "14400", "11429", "12815", "12673", "7549", "14365", "620", "13627", "11523", "6627", "9563", "5366", "4448", "12543", "9898", "6337", "11001", "924", "4651", "5213", "6849", "14416", "12539", "5690", "3500", "6578", "8730", "14039", "3413", "651", "775", "3038", "14826", "13882", "4552", "85", "8146", "8049", "11393", "12251", "510", "4178", 
"163", "1581", "8227", "2578", "11351", "7219", "9308", "6291", "13400", "13033", "8045", "5329", "14677", "8152", "14270", "11119", "5148", "11704", "8854", "9660", "10923", "7592", "10540", "2478", "12568", "1355", "5373", "14492", "5", "9113", "9911", "5680", "12490", "6641", "4531", "6080", "12542", "1486", "14953", "10478", "7692", "1368", "12785", "1659", "9257", "3536", "11362", "8931", "6813", "10398", "824", "8047", "8563", "11942", "11661", "4316", "10963", "10075", "6569", "1819", "3732", "3423", "14716", "1969", "11687", "3931", "14785", "6558", "11583", "6606", "12067", "2662", "8847", "1627", "7", "10554", "15414", "4987", "2616", "629", "2593", "12149", "13868", "2306", "447", "7182", "7487", "10830", "14914", "2125", "6278", "11029", "292", "9324", "9708", "4016", "2015", "14773", "4825", "7168", "4957", "11471", "8600", "8798", "2999", "5949", "498", "1152", "9432", "9732", "1501", "4158", "14943", "14890", "6732", "13942", "2663", "8559", "12088", "2914", "6692", "13566", "7919", "3882", "12141", "13100", "8871", "15077", "1849", "10961", "10928", "5682", "12541", "7183", "10687", "9131", "5177", "314", "5199", "14009", "14259", "6170", "9083", "10056", "12965", "5336", "6502", "11915", "618", "8582", "7917", "13142", "449", "14420", "1555", "14476", "12101", "15305", "3738", "13432", "11833", "13497", "1598", "6426", "11772", "11194", "8882", "257", "743", "6008", "6090", "10849", "5277", "4262", "2242", "549", "5880", "9830", "8586", "4601", "4388", "14617", "7236", "576", "10869", "11662", "11190", "12799", "13195", "11104", "5053", "1786", "7848", "6967", "4484", "9805", "5380", "3972", "6084", "6824", "12419", "9048", "7482", "8163", "9872", "10804", "9167", "4218", "6270", "8520", "9573", "9444", "6639", "1925", "12833", "15207", "6154", "4152", "12753", "3954", "13657", "14281", "4274", "795", "1539", "14846", "4134", "5778", "5723", "14321", "6874", "1915", "10718", "5025", "11218", "458", "15287", "1896", "13192", "8111", "12994", "5110", "2078", "9345", "11721", "4943", "8821", "11996", "6908", "6774", "14480", "8121", "9981", "5664", "8517", "1520", "7503", "11353", "8459", "9967", "11906", "10734", "12176", "2584", "2705", "3387", "5420", "11935", "5958", "13169", "10839", "8580", "2908", "1393", "9240", "2543", "391", "13215", "6818", "1672", "4687", "13326", "249", "8758", "6210", "12076", "3765", "11605", "608", "7374", "13871", "12358", "13083", "4198", "10361", "13267", "9320", "329", "8780", "15372", "14087", "3069", "10517", "11078", "9332", "13199", "10117", "7349", "5012", "178", "13540", "13924", "6274", "331", "7424", "13277", "1844", "13504", "4090", "9775", "7232", "13720", "6100", "3139", "659", "6489", "6338", "10810", "14037", "14638", "1811", "13798", "13736", "2035", "2733", "4835", "8334", "5392", "10204", "3495", "1049", "4336", "3445", "7788", "8483", "4030", "7797", "4224", "8439", "14044", "6791", "10462", "9360", "4706", "6066", "13016", "12207", "2322", "883", "1531", "12734", "10548", "12185", "8058", "7635", "5452", "237", "11655", "4740", "4290", "9762", "3547", "9062", "1993", "4937", "12378", "12189", "2191", "3976", "6978", "2331", "14676", "14602", "14969", "5157", "2447", "11161", "5782", "9253", "1572", "37", "8735", "6645", "5873", "1831", "8817", "7248", "12716", "4886", "4620", "13729", "5013", "11844", "2744", "13823", "13343", "11699", "15007", "6275", "8406", "9402", "8392", "9931", "10000", "14112", "13264", "5962", "4372", "2868", "2679", "3289", "3802", "13250", "10365", "7794", "9817", "10784", "5819", "7521", 
"5758", "7480", "6839", "3021", "14383", "3907", "11184", "4009", "3851", "4952", "10515", "5979", "12866", "3595", "12310", "13376", "3674", "1260", "13773", "12932", "9637", "2399", "7493", "1935", "3940", "2991", "12472", "9096", "9951", "3848", "14864", "10039", "12876", "13139", "14", "3112", "13013", "9399", "354", "7834", "2684", "8306", "8016", "2613", "10761", "7365", "10735", "11274", "9910", "1891", "14179", "5479", "13300", "12752", "14940", "12323", "9701", "42", "8232", "12635", "14317", "1018", "5147", "8106", "836", "2057", "1243", "5244", "5507", "965", "9227", "6816", "5675", "7380", "7574", "9003", "1128", "984", "8617", "2030", "6782", "2859", "13001", "5377", "1824", "10095", "3130", "4302", "4969", "15047", "3314", "2275", "11624", "13841", "14215", "11503", "5200", "860", "9487", "9215", "1284", "10104", "8226", "12304", "11739", "1690", "5106", "11878", "5239", "4398", "1951", "14242", "6490", "7395", "14026", "15215", "4361", "1801", "2279", "8304", "14459", "10099", "15397", "4639", "3370", "7449", "632", "14733", "1506", "247", "89", "5560", "9268", "6272", "15292", "1321", "4918", "3162", "2455", "10482", "10894", "145", "14175", "9151", "11489", "13303", "12197", "11495", "5852", "9312", "11364", "7987", "13534", "14675", "4532", "7290", "14099", "12474", "12409", "14992", "1052", "11412", "8518", "7354", "12129", "15083", "10808", "5590", "15388", "7747", "4451", "12695", "9618", "4454", "10798", "14297", "11863", "14903", "13090", "12180", "4014", "9429", "8933", "11734", "2796", "11508", "12311", "14326", "6266", "161", "1465", "11215", "8102", "658", "79", "8315", "916", "8541", "2113", "8327", "1943", "7047", "13093", "3003", "13414", "9301", "7109", "6005", "12041", "13157", "7157", "11213", "2891", "10559", "14091", "6537", "9504", "3798", "11574", "12031", "97", "3177", "12484", "10222", "9230", "12738", "14799", "2230", "4093", "517", "7820", "14696", "10185", "2361", "643", "1364", "6892", "5493", "8371", "9242", "2427", "6181", "14814", "15367", "1637", "7642", "724", "5970", "6521", "9248", "9555", "10307", "3945", "890", "13556", "10787", "11171", "6345", "2692", "5279", "12859", "6368", "3137", "13669", "10729", "502", "13535", "5643", "6655", "4982", "10975", "2054", "748", "2697", "12441", "8996", "6444", "7583", "11971", "11881", "14257", "6428", "9378", "3459", "5896", "2255", "2680", "3957", "2163", "7645", "6407", "12516", "4770", "6826", "1502", "1362", "3905", "4197", "2974", "5811", "1451", "10949", "14661", "6709", "11336", "1272", "5084", "9610", "10680", "14863", "5223", "15377", "1962", "13929", "6468", "7187", "12766", "9323", "3658", "4795", "10404", "1685", "11224", "8739", "10898", "13559", "10897", "7345", "15392", "2969", "1071", "4140", "4243", "6315", "432", "3525", "4818", "11779", "8401", "5609", "516", "3601", "7983", "12649", "10401", "12628", "12723", "2903", "11629", "1228", "8637", "8318", "10219", "6074", "14135", "1766", "2729", "14752", "14134", "13153", "13039", "143", "6694", "5094", "2302", "5554", "8362", "3276", "13808", "14340", "13879", "9111", "1179", "14568", "6809", "4669", "9109", "6650", "13558", "6325", "12715", "2572", "4589", "10447", "15348", "685", "3648", "8939", "653", "10723", "11407", "12900", "887", "1747", "15194", "15284", "9017", "13874", "6179", "4117", "7197", "4683", "7499", "10362", "1288", "5726", "6786", "4463", "2431", "5475", "14521", "13621", "2853", "3170", "6910", "100", "13271", "9513", "4496", "3221", "7195", "5433", "8564", "3309", "9633", "14286", "14382", "3643", "4444", 
"5767", "7944", "10090", "11419", "14859", "11598", "5434", "11032", "2301", "6107", "5705", "11713", "11283", "8427", "8835", "15448", "455", "3169", "14902", "821", "1594", "9000", "2197", "12991", "14349", "8777", "6945", "2106", "6103", "14187", "8048", "5320", "12383", "14219", "12618", "8093", "487", "7289", "9873", "11744", "9423", "1290", "6875", "13304", "2099", "14956", "11680", "3466", "493", "8762", "15477", "2830", "11546", "10806", "10187", "12341", "14706", "9973", "5648", "5936", "14007", "1821", "4163", "11387", "1239", "6766", "2621", "5202", "14806", "4901", "14995", "9405", "491", "13125", "9867", "4671", "11573", "1089", "5831", "7167", "12420", "3399", "15092", "11960", "6017", "9315", "8320", "2368", "5052", "2768", "10116", "4077", "9900", "6796", "3436", "7266", "1893", "10111", "2157", "1524", "11623", "11250", "14443", "1755", "6891", "10168", "7610", "8608", "14988", "3712", "1096", "6640", "15340", "4356", "14786", "9418", "14076", "10867", "1245", "14671", "4610", "8279", "10818", "4973", "12829", "10610", "2747", "2273", "11493", "11851", "3053", "2978", "11911", "3884", "5790", "2540", "13876", "8648", "8840", "7307", "5840", "9184", "864", "8390", "4089", "7269", "7669", "5895", "12801", "11139", "531", "4912", "2791", "7313", "9912", "7652", "3528", "1299", "9134", "25", "843", "3947", "15407", "13435", "10021", "4535", "1410", "11065", "12330", "11175", "2375", "4184", "14632", "9196", "2921", "11253", "14523", "3463", "1387", "5914", "2305", "5687", "7005", "3198", "12805", "968", "7362", "2282", "6649", "3291", "10167", "15249", "10918", "9120", "10896", "8420", "14439", "1574", "11485", "8921", "540", "3922", "972", "10209", "2653", "10702", "2161", "13583", "356", "8429", "13953", "13569", "13149", "6837", "8386", "5594", "6665", "7239", "7120", "10226", "6332", "4624", "13901", "5360", "11342", "5427", "1761", "13356", "14616", "6985", "10558", "9438", "6134", "14415", "11726", "8167", "6415", "10298", "13993", "8374", "7467", "11369", "12606", "5527", "11972", "11381", "3754", "13355", "14411", "13743", "14984", "13178", "15344", "9264", "13726", "1511", "1206", "12577", "1325", "7453", "3842", "467", "7907", "4577", "12450", "10089", "7955", "7128", "12175", "11436", "14246", "13232", "14341", "4556", "5431", "3660", "8096", "12565", "10352", "6116", "13533", "5892", "8915", "14018", "3811", "12942", "9750", "3421", "1324", "11553", "9989", "14013", "5359", "11873", "7825", "14050", "12651", "14962", "1967", "7563", "15179", "8397", "7189", "5268", "6771", "14713", "228", "4315", "10725", "13717", "12935", "9626", "3145", "835", "6413", "2138", "12459", "4530", "8435", "7675", "13338", "9213", "7558", "10943", "12396", "1472", "14333", "6000", "13648", "13113", "11613", "11807", "13473", "5234", "8628", "9683", "15290", "13588", "14583", "9963", "8583", "6040", "6441", "12167", "3174", "818", "172", "999", "9962", "12998", "14370", "4261", "9255", "13866", "11944", "2924", "12446", "9212", "10253", "5561", "10238", "13390", "3490", "8118", "3219", "3047", "1691", "1799", "742", "15123", "8468", "9714", "3938", "1752", "8661", "9797", "6869", "10619", "2410", "8986", "13684", "8715", "8815", "4246", "6055", "3605", "13445", "419", "11742", "2644", "6794", "11138", "9102", "10101", "15422", "11649", "9747", "5034", "2335", "13889", "4812", "11064", "3804", "5843", "7254", "2941", "12682", "2794", "15264", "3144", "1123", "12817", "10595", "7850", "762", "10098", "915", "12011", "1939", "4641", "865", "10247", "9275", "7306", "7892", "11780", "5357", 
"299", "12338", "6503", "8708", "6310", "10323", "541", "6184", "6450", "7231", "5742", "7494", "9725", "8674", "6093", "10235", "12506", "4043", "9396", "3375", "503", "9155", "2973", "1537", "14729", "15268", "11563", "12872", "3533", "14871", "3452", "6110", "6047", "15193", "1616", "14911", "12319", "11465", "6504", "898", "15113", "7566", "10456", "7711", "3769", "11373", "8543", "385", "7378", "2900", "4840", "12052", "2253", "14609", "1813", "10973", "4401", "4866", "14932", "8591", "4196", "1446", "12756", "8645", "3866", "12631", "6287", "12300", "6388", "2877", "6962", "4741", "8326", "8159", "3308", "2074", "13665", "12993", "14495", "872", "15328", "6819", "8289", "7006", "10976", "11922", "9863", "15180", "6623", "2297", "12519", "3787", "8549", "1936", "9302", "13890", "8692", "6362", "10406", "12165", "2657", "315", "10526", "5668", "14688", "4376", "15019", "11710", "9952", "9859", "3266", "7470", "13363", "4713", "14089", "12616", "3160", "7199", "13301", "13759", "5098", "1357", "13789", "13057", "11133", "947", "11009", "4666", "8998", "6466", "13263", "3203", "9568", "9437", "10189", "4889", "8269", "5848", "10776", "8492", "2129", "15412", "14873", "1988", "7800", "9947", "3545", "2741", "1550", "5743", "9584", "12035", "7479", "10941", "12371", "9177", "9809", "9355", "2366", "4722", "8086", "9821", "5264", "10770", "7584", "1495", "8959", "9723", "7383", "13735", "8154", "4407", "12220", "6770", "3435", "6934", "9381", "13813", "14921", "12273", "15324", "7390", "6023", "13059", "3515", "13484", "5734", "3462", "7945", "9012", "15068", "2170", "6442", "2909", "11953", "14086", "8844", "8878", "5229", "12569", "2598", "238", "15459", "5613", "9661", "11918", "1985", "9028", "14673", "2091", "488", "11024", "14515", "14262", "4453", "13240", "12033", "11675", "4234", "11333", "15352", "9239", "8550", "5719", "4415", "14422", "9518", "9473", "10199", "11592", "14306", "2172", "9615", "3233", "2146", "12857", "2298", "12573", "9457", "9602", "14447", "9959", "11815", "13578", "3350", "14694", "6396", "14375", "12173", "11843", "1412", "10193", "9390", "5638", "2389", "4070", "9186", "13617", "8470", "13741", "9409", "6594", "1907", "207", "287", "8818", "2709", "2762", "14901", "5408", "13934", "13935", "2316", "6290", "7953", "8421", "12729", "4177", "12290", "3557", "15063", "4324", "12112", "15439", "4311", "15230", "8640", "8860", "9080", "12272", "13262", "12291", "10719", "6041", "1731", "1456", "7582", "11268", "7939", "12746", "9876", "11410", "8140", "13474", "8192", "12366", "6784", "11039", "5045", "6802", "1906", "4936", "7423", "6763", "14006", "1237", "14276", "4779", "6105", "13757", "4033", "15236", "690", "3964", "10379", "12699", "15099", "14479", "1156", "15343", "7887", "10202", "8510", "9576", "2294", "9811", "5085", "871", "2798", "2967", "14805", "8313", "8299", "7446", "6997", "1845", "1401", "10459", "6217", "11242", "4149", "3923", "774", "10150", "3176", "6805", "8657", "212", "7002", "10861", "6340", "12281", "15241", "7430", "7814", "9942", "11484", "3747", "12104", "7694", "263", "11692", "5158", "7586", "3239", "5744", "14614", "13305", "10227", "9966", "2506", "9977", "10319", "14973", "7517", "4946", "13581", "3009", "9717", "6590", "9944", "7682", "13342", "7529", "7968", "6747", "12795", "4670", "13985", "12581", "10573", "11554", "14275", "1671", "3439", "14409", "10906", "12351", "8908", "6473", "8991", "13824", "10800", "9801", "7667", "10752", "2266", "7925", "14833", "5471", "14496", "4766", "3430", "14743", "14225", "69", "13672", 
"6611", "12639", "6683", "11933", "3753", "14698", "8764", "9528", "3654", "6401", "4729", "1178", "5571", "4018", "13372", "833", "10551", "2484", "6363", "7798", "6593", "8893", "1698", "4048", "15181", "10342", "6137", "10097", "13670", "2964", "4908", "8790", "4637", "4636", "5645", "7620", "10726", "4078", "3272", "625", "2932", "11357", "928", "5525", "6145", "8498", "550", "4050", "621", "12224", "12271", "11542", "9124", "14085", "11415", "10256", "9420", "9075", "5346", "9982", "13930", "8350", "3396", "3755", "9842", "8484", "8184", "6878", "15246", "2247", "14041", "10508", "2332", "13012", "11334", "14946", "12461", "3529", "7665", "8316", "8190", "11775", "6863", "8043", "5120", "7188", "4383", "8015", "13658", "14407", "14068", "11315", "12086", "4924", "8050", "12941", "15336", "6543", "919", "13331", "13162", "14319", "11567", "2021", "9016", "9363", "533", "2636", "10917", "7029", "5866", "7999", "5330", "604", "6019", "9827", "4173", "11207", "5122", "7569", "10112", "9171", "9519", "8584", "9161", "12779", "10929", "9425", "4738", "4098", "2673", "11930", "2580", "7408", "351", "3079", "4895", "7085", "9829", "12274", "279", "1217", "3990", "2482", "7842", "3521", "5395", "7376", "6530", "6281", "11666", "8775", "5172", "14547", "10758", "1443", "14576", "929", "80", "6777", "2706", "2629", "14429", "8297", "12873", "3104", "14299", "2843", "182", "3349", "10347", "242", "4481", "4884", "15423", "12074", "14775", "11017", "2841", "7656", "8974", "13002", "15302", "15136", "11157", "343", "2210", "9056", "7738", "11451", "307", "1200", "844", "2267", "10491", "4363", "12978", "9019", "9254", "2878", "6484", "5536", "6263", "11015", "2622", "4906", "3477", "6308", "7088", "11610", "459", "12955", "1011", "3441", "7817", "11469", "2850", "7524", "9592", "11804", "151", "13776", "2231", "14195", "723", "12843", "11285", "7181", "5581", "7309", "6391", "3719", "223", "14421", "11576", "10501", "9734", "5804", "10106", "5576", "4569", "10041", "9715", "9703", "11448", "11433", "10418", "3228", "3645", "2833", "14563", "1961", "11199", "10366", "1220", "11564", "13995", "7508", "5079", "3471", "14905", "7802", "13861", "14509", "9294", "12950", "13967", "7377", "5448", "13579", "8413", "13345", "3250", "7386", "14994", "4217", "1174", "8919", "3988", "12602", "10048", "11759", "13040", "12512", "11989", "4985", "14506", "7975", "10785", "10671", "3606", "9550", "10173", "4499", "4568", "9566", "7044", "9130", "8338", "4647", "5347", "15151", "4574", "9132", "14107", "3671", "10445", "8857", "10636", "6159", "5182", "10471", "12555", "8565", "11736", "4783", "92", "4470", "14377", "5378", "619", "8474", "9752", "12529", "10715", "3089", "9998", "6927", "11151", "4358", "12256", "6234", "11355", "1068", "11405", "5628", "9022", "7123", "2911", "12320", "13254", "14798", "5097", "8554", "4868", "8652", "10123", "3083", "4007", "747", "10791", "8914", "2314", "13081", "2277", "1772", "2562", "6902", "14753", "15139", "6572", "9996", "6095", "6906", "8393", "8851", "7154", "15046", "9741", "8113", "9333", "9712", "7460", "15399", "13843", "6556", "338", "11889", "7826", "2471", "14627", "9868", "15173", "3669", "4529", "10958", "670", "10450", "10668", "1963", "12724", "11806", "13724", "8519", "13", "13186", "6922", "5129", "10302", "9861", "571", "9206", "6285", "12344", "5259", "12881", "10423", "13463", "1474", "3994", "10044", "5421", "9205", "3919", "7320", "2695", "3411", "2829", "10250", "15158", "4195", "147", "13320", "5980", "346", "7698", "13176", "6357", "825", "1861", 
"4932", "14500", "12215", "1004", "13357", "7118", "15433", "5530", "6765", "10656", "13747", "5957", "7702", "5111", "12842", "11109", "14589", "962", "3109", "6960", "15334", "12668", "11755", "10174", "10952", "396", "7492", "7339", "8822", "6313", "10287", "1347", "4559", "2797", "11537", "5838", "3736", "4786", "12121", "11300", "5406", "980", "2067", "3614", "3831", "5580", "6043", "10310", "8235", "7060", "5572", "15045", "13378", "14791", "14647", "948", "9945", "3078", "104", "1434", "11350", "494", "2134", "125", "6344", "6240", "13440", "15138", "9943", "6768", "6033", "14794", "3292", "13505", "9785", "10151", "5203", "10703", "3325", "9358", "10420", "6616", "13152", "6152", "7161", "6044", "5776", "6714", "4137", "3552", "4015", "10570", "7915", "11993", "13639", "5884", "14330", "9306", "8361", "8609", "8005", "5233", "4154", "2016", "6925", "884", "6461", "4760", "12061", "6389", "12846", "13650", "783", "2303", "8322", "10851", "14966", "14641", "1519", "9076", "4017", "5810", "14847", "9906", "7579", "3044", "8041", "15103", "6139", "11668", "14090", "10535", "5119", "10801", "4634", "13483", "5219", "14777", "12037", "3384", "10267", "12617", "14454", "12283", "5799", "3246", "9207", "1360", "5505", "9118", "11501", "6150", "13458", "15086", "11223", "8085", "931", "4409", "9281", "11120", "13424", "13827", "12131", "1820", "8083", "14121", "7385", "14123", "2517", "12632", "6002", "3644", "8548", "8254", "505", "3588", "7768", "8742", "3616", "4935", "8001", "4815", "9520", "7444", "1611", "3702", "1341", "1389", "7026", "2933", "14680", "2195", "2083", "8466", "4011", "7249", "9786", "13314", "3824", "7169", "12953", "5064", "15278", "9621", "6306", "4873", "14600", "9393", "5356", "10790", "6326", "9836", "5673", "6586", "10177", "2400", "140", "8079", "3950", "938", "6327", "9638", "12252", "3447", "8035", "6187", "7084", "8707", "12062", "8169", "4326", "2874", "5210", "3587", "6845", "13334", "11384", "1496", "7000", "9511", "5510", "14322", "12246", "9940", "1147", "9619", "2866", "7448", "14057", "13983", "14991", "11913", "5463", "6086", "15410", "13190", "7396", "10672", "9686", "13492", "14203", "9509", "3941", "4653", "12939", "5215", "3642", "4240", "4172", "6780", "5618", "7845", "5065", "15360", "7603", "2888", "1042", "13687", "10502", "14832", "11520", "13225", "15321", "3105", "3194", "6141", "15320", "9234", "3279", "9311", "5928", "5349", "7152", "10269", "9627", "1644", "12476", "1602", "9337", "13563", "8778", "9159", "12982", "5543", "12652", "7632", "2620", "13336", "5661", "565", "15253", "12554", "5730", "5007", "11289", "7019", "12046", "8089", "7294", "1758", "13917", "4052", "15447", "9934", "9098", "4660", "12198", "6029", "2084", "2193", "9327", "7629", "5965", "3299", "11606", "2371", "12566", "3584", "2477", "3317", "3071", "444", "2365", "10018", "11565", "3825", "13921", "13208", "12258", "2821", "9216", "4850", "3476", "1390", "72", "7212", "4399", "14620", "12335", "812", "12135", "11155", "10147", "13352", "7491", "3687", "6293", "7014", "11688", "13245", "10345", "3029", "12142", "5030", "11622", "9539", "8636", "8776", "8132", "2618", "8177", "4046", "365", "10744", "3762", "3282", "2143", "4143", "10479", "3326", "3132", "12390", "970", "3460", "8697", "2012", "2398", "3448", "13061", "5362", "921", "4905", "12808", "13196", "6257", "4006", "3944", "111", "9858", "9772", "5018", "6860", "12915", "909", "14850", "9140", "8478", "11801", "5616", "15005", "14737", "954", "9924", "5655", "141", "10282", "11225", "8808", "9108", 
"6163", "14877", "6247", "4614", "8769", "15260", "5498", "7745", "9133", "2148", "13958", "6857", "5040", "2226", "1670", "10217", "13449", "12060", "2857", "203", "13289", "14128", "6755", "427", "2694", "13471", "6736", "12423", "11123", "11740", "3655", "10667", "12532", "7934", "6652", "985", "956", "14095", "4061", "8540", "1462", "7048", "2849", "2077", "6950", "10052", "3077", "7485", "10154", "6880", "1185", "8880", "13691", "8806", "2397", "869", "9224", "1683", "7661", "2846", "1552", "1", "13462", "9794", "3304", "13146", "3374", "10249", "1778", "3117", "14451", "7678", "10329", "9447", "3836", "652", "14133", "3689", "11695", "9258", "1927", "11243", "5652", "13940", "9374", "8907", "8312", "9902", "11235", "2419", "5601", "1107", "1024", "7879", "5194", "6949", "11179", "1607", "5829", "5856", "15203", "10396", "932", "264", "2525", "13237", "12627", "10661", "8260", "8186", "6836", "2971", "281", "2053", "12010", "13858", "15205", "3443", "5074", "12368", "15329", "1313", "4446", "5780", "10797", "886", "12902", "14854", "10186", "1840", "10884", "644", "1680", "14868", "7143", "13385", "15040", "1219", "4351", "13243", "8631", "11907", "8896", "12567", "4709", "10947", "1610", "6629", "10953", "3128", "3222", "4435", "6992", "10795", "7914", "5467", "11903", "15200", "2132", "14074", "13468", "1996", "3", "5823", "1666", "7686", "12743", "3220", "9448", "7926", "6203", "10138", "6663", "794", "4320", "9570", "10448", "12000", "2459", "10268", "9082", "6984", "8127", "5702", "15060", "4171", "15443", "4665", "3849", "6687", "12792", "2915", "10764", "4718", "1303", "5140", "6369", "13931", "13790", "3865", "1595", "5265", "7762", "8698", "15021", "3909", "6417", "8065", "12347", "7959", "6269", "4896", "13339", "4253", "10813", "13671", "3935", "11848", "11707", "7838", "5332", "13607", "6723", "8768", "12883", "6378", "13401", "15163", "12650", "10882", "3206", "15415", "3963", "12458", "1841", "2959", "7598", "10836", "6062", "5204", "4216", "11165", "10399", "2545", "3620", "4226", "12874", "13586", "8755", "2839", "12213", "2761", "3426", "3119", "2819", "8853", "7316", "10547", "8499", "9596", "6039", "9044", "2290", "10783", "7789", "15133", "8128", "2522", "4308", "4249", "8965", "8866", "3744", "2372", "15296", "11130", "4042", "12949", "12885", "14027", "14742", "15101", "2604", "9081", "13923", "4215", "12878", "9965", "159", "5480", "2507", "9059", "10210", "13268", "1911", "10061", "7997", "7332", "14663", "639", "3879", "6211", "930", "11428", "2137", "2103", "197", "10266", "11391", "6192", "6350", "8837", "13224", "2483", "10110", "3187", "9291", "15228", "1513", "8076", "14639", "9498", "10220", "13699", "14391", "11277", "11555", "2457", "4560", "10644", "15033", "10705", "10431", "15094", "15317", "13960", "12120", "10034", "13054", "4281", "7013", "13032", "1538", "14456", "6081", "6045", "3603", "5821", "226", "2713", "6255", "1868", "5183", "15368", "10815", "9482", "3516", "9605", "15493", "12206", "2655", "9883", "9820", "6207", "7033", "12428", "2409", "8506", "8500", "8981", "1079", "7595", "2031", "10985", "15145", "3405", "3518", "12200", "6510", "3294", "5209", "13614", "3492", "15271", "2380", "6790", "10246", "13060", "9958", "7604", "3440", "4245", "4719", "13293", "12285", "5658", "9994", "5842", "9091", "7954", "7984", "15310", "2383", "3578", "5417", "11794", "14957", "7722", "918", "11377", "7382", "7704", "1358", "14289", "8265", "1809", "2479", "7064", "7089", "13258", "5228", "11894", "601", "10567", "4611", "8892", "15043", "5231", 
"6268", "11327", "7204", "5637", "14031", "4321", "1756", "2126", "8725", "5578", "7451", "2412", "12909", "749", "10042", "10228", "3264", "11958", "10327", "1065", "1076", "4145", "9814", "7977", "11558", "5401", "10593", "11016", "2381", "8829", "12683", "12897", "12562", "6173", "8940", "8441", "810", "13697", "13230", "10419", "398", "10297", "4654", "3356", "9806", "3646", "4123", "12128", "9681", "8740", "5033", "3188", "5801", "2835", "5363", "5039", "11987", "4160", "15078", "4305", "15496", "7296", "6785", "9499", "13624", "4054", "5713", "14360", "5944", "10529", "13154", "10903", "8979", "7969", "12481", "6540", "14223", "11367", "10003", "12012", "12584", "11231", "9756", "7436", "2154", "15148", "403", "7097", "12278", "3576", "3883", "3513", "15358", "13024", "7432", "9572", "5692", "14344", "9822", "11030", "9470", "10819", "9753", "9375", "7653", "11494", "11644", "4681", "10768", "14889", "4148", "6811", "4132", "5331", "12570", "14305", "11437", "12820", "14293", "1902", "13030", "4954", "12703", "6817", "10294", "9884", "14308", "10334", "14795", "7818", "9189", "2318", "8164", "1836", "10224", "4247", "3394", "15342", "11528", "3081", "1781", "9209", "9700", "6965", "13029", "3562", "15216", "4581", "14916", "14359", "5283", "5361", "6834", "8752", "8804", "8193", "8930", "12806", "4788", "13371", "6336", "9469", "14362", "8958", "2362", "204", "9521", "3965", "8480", "10986", "2940", "11359", "4186", "8574", "12722", "3382", "5004", "12782", "8625", "6434", "1759", "13525", "3502", "15325", "10745", "14483", "14645", "14002", "2599", "10291", "6113", "1587", "2589", "1957", "12894", "10469", "10179", "8935", "10686", "3080", "13111", "13587", "6853", "1133", "8247", "5907", "13603", "1897", "9185", "5511", "12136", "3782", "1591", "11540", "6670", "7708", "15366", "14345", "7885", "12437", "4097", "6624", "7855", "13332", "12069", "13771", "11440", "14762", "1448", "6528", "12558", "6752", "5562", "4844", "3214", "6944", "4966", "1264", "14216", "11088", "9514", "9692", "5400", "14845", "10272", "13147", "14941", "1141", "10242", "14029", "11094", "2190", "12392", "11568", "5760", "1785", "12028", "6753", "323", "15408", "5126", "8497", "12586", "8115", "8437", "98", "3809", "9740", "14530", "10271", "11159", "316", "9029", "5973", "5961", "8202", "8836", "15355", "2062", "7457", "4861", "7578", "2044", "13420", "13984", "10944", "14070", "15386", "5206", "14463", "4752", "14829", "9790", "9125", "5921", "12895", "13870", "1546", "6740", "12613", "2140", "15150", "8448", "6678", "6449", "15213", "7511", "22", "10825", "4739", "1034", "3876", "3418", "2466", "15389", "2347", "14535", "8151", "6634", "4999", "6800", "15073", "14351", "10698", "4735", "1547", "4480", "1335", "9999", "13893", "10514", "198", "9705", "8496", "2223", "14311", "6971", "9290", "14410", "2149", "10853", "4612", "14056", "2443", "1703", "5599", "1499", "3522", "14143", "3122", "6722", "15497", "2200", "9587", "8516", "11635", "12763", "14338", "5150", "13041", "11974", "3135", "9223", "7590", "6792", "630", "9547", "12797", "13207", "7282", "5582", "13956", "1392", "8117", "14884", "14380", "7262", "11080", "13872", "14434", "11148", "13464", "12964", "12890", "6542", "15263", "6025", "7217", "3126", "4466", "9751", "6804", "5934", "15111", "8329", "1097", "4225", "13131", "169", "11147", "12615", "13392", "5548", "7024", "14585", "243", "7836", "4292", "10252", "12109", "3022", "9182", "10885", "11237", "6372", "12030", "15298", "3133", "2931", "10215", "15421", "1202", "8071", "13167", 
"6423", "2892", "8276", "15059", "15303", "11136", "6065", "13366", "8205", "14545", "7389", "1481", "1366", "12398", "12042", "13502", "11765", "1154", "1378", "14335", "260", "15110", "8743", "5474", "15295", "9307", "12195", "6989", "5539", "2086", "2965", "5399", "10930", "3982", "11152", "1777", "4384", "8826", "14423", "11899", "13764", "14115", "1447", "14467", "4239", "4734", "11099", "1007", "12551", "5642", "4346", "558", "7990", "3362", "12851", "9787", "5635", "7442", "4600", "206", "5666", "5282", "11069", "15225", "12064", "11625", "13964", "13755", "7904", "4231", "5991", "10995", "8773", "6970", "10809", "14637", "5878", "13701", "10239", "6909", "13180", "9010", "8464", "5123", "8927", "5916", "8820", "7548", "6769", "265", "1359", "11490", "7530", "10618", "4561", "4960", "10198", "5849", "14441", "4241", "9597", "11195", "8756", "10058", "14379", "3815", "267", "14358", "5792", "14926", "6032", "5794", "3234", "8659", "10285", "15280", "7839", "14856", "13704", "6208", "8807", "7532", "4112", "499", "7535", "3871", "8767", "11197", "737", "4375", "3101", "3414", "14920", "1491", "2872", "1339", "10890", "328", "11301", "3061", "14573", "1543", "4852", "9249", "3257", "2549", "1966", "9591", "8131", "6905", "13696", "9018", "5948", "3915", "12467", "14204", "1559", "1249", "7550", "8291", "13766", "5960", "11075", "9645", "9575", "13122", "11183", "9334", "436", "5051", "13295", "11949", "513", "8733", "7407", "9093", "11140", "13891", "4101", "3527", "12236", "10828", "40", "2494", "14670", "8906", "8923", "7242", "14820", "9195", "6360", "14318", "15104", "7755", "9052", "5612", "14536", "9899", "6886", "7367", "11643", "8630", "15282", "4976", "9607", "5060", "3262", "6981", "6728", "5095", "13242", "4986", "1843", "9532", "13555", "6447", "10330", "6781", "4754", "12023", "6299", "10871", "4491", "377", "3437", "1121", "1512", "11330", "1853", "10360", "4547", "11768", "12406", "3724", "3402", "8051", "11361", "1498", "11255", "7823", "9813", "7346", "12388", "4934", "13405", "14700", "1229", "7427", "6848", "6249", "13778", "13193", "9923", "6959", "4348", "10950", "13353", "5703", "8245", "4696", "10424", "3211", "14537", "1999", "2720", "15384", "6681", "15048", "11994", "13073", "1197", "11862", "12395", "231", "14369", "11947", "11674", "14282", "11345", "9008", "12394", "9210", "5969", "2454", "2632", "10614", "1385", "13175", "11884", "9810", "8295", "2433", "2310", "13281", "12365", "13792", "6500", "842", "677", "13750", "12992", "3914", "5529", "12830", "8706", "3099", "7562", "1525", "13118", "2806", "7890", "7680", "116", "3984", "9997", "15490", "3301", "11887", "10028", "11084", "14501", "11513", "14466", "4967", "10566", "3685", "7113", "12280", "13386", "15431", "11931", "6218", "6195", "10991", "6517", "7081", "11365", "960", "9939", "1254", "11074", "68", "770", "1273", "3284", "5586", "9243", "14209", "4230", "3532", "15062", "3649", "437", "12002", "189", "6298", "10505", "439", "11282", "2369", "7866", "2235", "14508", "1732", "7360", "10957", "3788", "3789", "10349", "3258", "7514", "14462", "14207", "2396", "5832", "9909", "5197", "4248", "9245", "10581", "14450", "5888", "15441", "9462", "3733", "6225", "8021", "13660", "10523", "10597", "10432", "7585", "5626", "2645", "6573", "61", "8972", "1886", "14575", "1689", "9731", "2391", "13427", "10470", "13408", "4949", "12772", "1168", "8239", "13015", "4410", "463", "15004", "4373", "1274", "713", "6432", "941", "5133", "12924", "12266", "10129", "8136", "8112", "11062", "13064", "13673", "3041", 
"5473", "7607", "1160", "3315", "13495", "1898", "1973", "13323", "660", "901", "14514", "8204", "375", "13253", "12557", "5090", "9446", "6068", "12417", "15195", "8343", "2602", "1356", "13274", "10371", "6021", "5385", "13537", "10552", "1994", "12951", "10850", "13019", "7122", "9050", "13066", "14049", "12400", "14030", "15034", "6567", "10157", "13048", "10901", "2088", "4860", "15437", "15347", "5583", "8862", "3607", "6646", "12217", "5327", "15311", "3664", "12728", "7812", "8988", "7841", "5456", "1614", "13532", "9950", "7402", "35", "12150", "4845", "11488", "4585", "11774", "8824", "9781", "7522", "3494", "5348", "6014", "5805", "4980", "13369", "3706", "1077", "11952", "9416", "9164", "8856", "3353", "14922", "9344", "8684", "8063", "7251", "6871", "15186", "9370", "10068", "7741", "5044", "14427", "14464", "8019", "3303", "1649", "5335", "4252", "12233", "9221", "2814", "5886", "7326", "3956", "148", "6618", "13493", "4956", "13920", "1199", "1382", "7079", "5324", "13509", "13456", "10162", "12133", "14346", "8970", "9585", "8014", "2597", "14236", "6788", "1058", "14178", "702", "13590", "15385", "1620", "4593", "13291", "935", "12265", "6361", "6626", "2646", "2402", "8451", "9379", "6443", "2407", "12094", "1588", "4804", "2376", "12048", "9430", "6307", "12308", "8877", "15065", "4902", "15247", "14662", "13311", "13703", "3626", "12026", "7759", "3763", "14040", "2816", "8812", "8992", "6808", "7913", "9793", "2009", "866", "11053", "9450", "13807", "8886", "11925", "14381", "471", "5185", "2968", "12421", "1842", "10201", "3199", "3379", "9359", "10879", "8718", "11460", "13415", "14078", "8022", "66", "13112", "8224", "4366", "5585", "12735", "1395", "13189", "14997", "4909", "15026", "5605", "6309", "9341", "9516", "2097", "6320", "10669", "8590", "4165", "5517", "5748", "1086", "14191", "9879", "15483", "1630", "1088", "15281", "2469", "13496", "3076", "8122", "1224", "11220", "1580", "14665", "306", "5334", "308", "9400", "10206", "13388", "8352", "4133", "12596", "14876", "120", "9970", "8833", "10652", "15381", "5924", "11318", "9112", "5602", "13834", "7920", "5458", "2896", "12832", "11832", "11026", "4166", "5910", "5167", "2822", "15351", "15470", "5862", "10119", "14403", "610", "9413", "7001", "10774", "5393", "2958", "11791", "14830", "951", "13637", "10183", "4282", "8961", "4942", "13695", "1459", "5683", "7965", "11337", "7771", "3823", "14024", "8417", "14668", "5429", "14265", "5217", "1606", "1950", "13063", "8792", "11932", "1797", "4990", "11587", "1692", "2823", "14167", "15142", "15140", "9767", "14601", "8572", "1676", "15354", "8150", "15130", "4872", "9", "1810", "3859", "412", "9031", "3018", "5565", "11581", "14574", "683", "3837", "7086", "6386", "475", "10648", "5006", "15211", "7989", "7114", "10102", "7951", "7075", "12711", "11580", "2871", "1484", "5927", "6602", "14444", "8172", "11787", "13136", "5809", "4449", "2802", "11487", "9921", "5410", "14996", "7544", "7908", "10773", "2414", "7433", "13864", "13465", "4291", "2949", "3713", "7164", "914", "181", "13546", "3868", "14156", "7447", "13298", "11979", "10100", "637", "12393", "13875", "14012", "371", "5749", "6932", "5693", "211", "3624", "5459", "650", "3290", "482", "4885", "14987", "2963", "6807", "15153", "12988", "3633", "2251", "592", "5230", "705", "7782", "1204", "2261", "5160", "8898", "7092", "14927", "727", "8606", "289", "9808", "3050", "593", "8530", "14588", "13174", "8157", "3318", "4523", "1979", "13091", "5607", "10115", "13373", "12465", "2349", "13666", 
"10355", "14787", "9040", "10188", "9465", "3348", "6009", "7238", "12385", "988", "8077", "12708", "8704", "15245", "3591", "6130", "7941", "9404", "9639", "11512", "4692", "392", "12071", "5861", "5307", "3613", "472", "4755", "1187", "14797", "15429", "12611", "5243", "144", "6295", "13606", "7931", "9580", "14779", "813", "6923", "1471", "3204", "4661", "9567", "5614", "15084", "13050", "395", "1991", "13712", "8214", "8087", "13796", "2102", "9352", "8266", "14772", "5297", "4548", "14255", "12597", "14682", "2311", "410", "12442", "2904", "15208", "8676", "10651", "9270", "789", "11633", "9501", "6708", "14432", "5370", "14511", "2854", "9840", "3826", "2276", "12981", "12345", "1214", "8333", "14612", "2468", "5455", "11421", "7673", "5266", "14251", "4579", "33", "5997", "14885", "3960", "15135", "6577", "13421", "2612", "3059", "1423", "10303", "13523", "9949", "9693", "7281", "3341", "10654", "15036", "6376", "271", "10533", "11422", "12004", "13625", "5724", "8622", "3373", "13527", "12116", "9969", "13791", "5214", "4555", "8903", "13686", "15182", "4640", "11112", "6555", "8185", "4673", "12264", "14177", "14550", "8527", "8231", "11057", "12163", "8495", "4408", "3821", "8673", "3252", "9975", "9386", "10710", "14221", "730", "7387", "9347", "10794", "13987", "1857", "3774", "3828", "6280", "14457", "11502", "14765", "14561", "8650", "5624", "3680", "13150", "11211", "6209", "430", "12146", "9408", "6778", "5295", "2167", "3302", "9777", "12509", "12155", "11890", "6491", "12969", "6612", "4521", "11456", "10054", "545", "2307", "3342", "9744", "6481", "9991", "8191", "9678", "2777", "15304", "10196", "11239", "1556", "1912", "12325", "2326", "9319", "11743", "5419", "13198", "7464", "1210", "14651", "13052", "1613", "4110", "5442", "3157", "13068", "7790", "13130", "6947", "10565", "3814", "10240", "14035", "7924", "2537", "5477", "12239", "585", "9988", "12811", "6114", "3366", "10765", "11864", "11585", "880", "9157", "12899", "8868", "2716", "4213", "8995", "9903", "5826", "11719", "6106", "3800", "11228", "15487", "5553", "3226", "76", "12511", "1955", "14067", "15017", "9287", "5968", "6603", "13992", "5139", "2155", "9389", "9009", "10594", "2698", "8389", "11313", "1138", "10176", "4024", "12008", "11588", "7963", "6838", "874", "15398", "3397", "9042", "2824", "7051", "5041", "1444", "8760", "11611", "1108", "2192", "10693", "6196", "7893", "10679", "13478", "14260", "6505", "2101", "10612", "1379", "10817", "14754", "7277", "2450", "12798", "6509", "2432", "13115", "7729", "6439", "9913", "11733", "3253", "11481", "402", "14810", "13575", "1528", "6277", "8133", "10601", "14347", "9832", "1218", "14320", "2654", "5898", "8103", "5137", "14080", "14373", "13187", "903", "14093", "11634", "15185", "2130", "12292", "14493", "11403", "12282", "2519", "9851", "10927", "7135", "5246", "7292", "10274", "4618", "2804", "152", "14471", "2669", "6522", "3697", "7889", "8158", "3043", "6939", "10057", "2635", "8440", "14606", "10254", "6735", "6707", "15416", "7228", "2827", "10562", "14149", "518", "7531", "7415", "5567", "6598", "4897", "7200", "5577", "12494", "4084", "9669", "10286", "7857", "4540", "3977", "7241", "10164", "3551", "12862", "10473", "15387", "14356", "10641", "2342", "5721", "8842", "4981", "2426", "10916", "6666", "13996", "12241", "13829", "12604", "1488", "13981", "3120", "15051", "4397", "14184", "131", "10333", "5796", "13075", "11329", "3725", "3568", "13099", "4823", "7628", "6157", "4831", "15456", "2384", "817", "5740", "6575", "12867", "715", 
"9675", "3604", "12800", "5942", "1149", "10172", "4965", "4716", "2510", "15256", "4385", "14621", "1084", "7668", "1989", "1959", "7411", "64", "250", "2337", "10283", "12517", "12809", "8472", "3609", "426", "3983", "7284", "5893", "5787", "5267", "14893", "5756", "13251", "11003", "12268", "1929", "966", "14148", "10739", "4506", "12759", "12355", "2607", "11883", "13469", "10480", "1234", "13845", "10857", "13072", "10821", "7437", "8602", "4091", "9987", "5379", "12783", "3888", "4192", "413", "9716", "5996", "7130", "13865", "14976", "14913", "6379", "12234", "1131", "3278", "14218", "1665", "10993", "11173", "1461", "7912", "3496", "8010", "4034", "14518", "6053", "7568", "12870", "15378", "8271", "8805", "13062", "2164", "446", "3051", "1421", "12194", "6580", "1600", "9069", "2052", "3372", "6178", "11054", "6982", "78", "5917", "1013", "5073", "13438", "6079", "9623", "1281", "10273", "8237", "10580", "4789", "3444", "12968", "5438", "14860", "1641", "15067", "10331", "8414", "5457", "12225", "2828", "13667", "1806", "14553", "974", "6548", "300", "13786", "8412", "3369", "11201", "7177", "8391", "6366", "9854", "6911", "8967", "14504", "14892", "11303", "13640", "9936", "10376", "13317", "14366", "14397", "4345", "5159", "13260", "10931", "10465", "523", "12775", "3040", "4113", "15198", "7994", "10858", "2501", "11158", "5830", "5287", "7541", "4147", "3033", "14970", "13168", "14398", "12943", "3641", "9397", "3858", "15318", "5478", "3852", "11562", "11380", "3042", "7046", "12027", "14852", "13835", "5646", "3589", "7090", "2837", "10484", "2800", "6174", "1862", "11527", "15210", "7775", "6206", "13375", "5532", "13290", "13820", "11266", "9724", "6706", "6246", "14710", "12630", "759", "11259", "13074", "11735", "13698", "9434", "5372", "15243", "13952", "11943", "2997", "13391", "3797", "10460", "14303", "8055", "14778", "13354", "5764", "8643", "183", "13226", "3386", "12203", "3138", "5398", "11417", "11093", "12583", "6064", "3880", "14052", "1744", "3190", "10002", "9197", "9907", "1342", "11904", "5864", "13604", "11477", "8240", "10582", "13374", "8324", "12995", "12276", "5023", "5016", "6677", "5092", "15119", "8272", "6948", "11697", "6647", "5839", "9758", "14827", "11963", "4087", "9603", "6688", "13212", "14532", "13234", "4477", "9517", "2956", "481", "6061", "9100", "12204", "10493", "12819", "3590", "3682", "11552", "6018", "6609", "10724", "14571", "4418", "101", "4757", "1092", "8916", "2815", "4933", "6596", "7703", "4880", "2898", "8228", "2179", "4431", "10321", "2917", "14296", "6171", "3140", "927", "9480", "15279", "11276", "8267", "3856", "12903", "7608", "4505", "13947", "8663", "10279", "1921", "7032", "10936", "14213", "9477", "11627", "7419", "13584", "10625", "15435", "12526", "13248", "4237", "7565", "10030", "11834", "4182", "6317", "14165", "1285", "11938", "1450", "5189", "12578", "14170", "1565", "1155", "12531", "6322", "9979", "11461", "9038", "5428", "7178", "2778", "13500", "2241", "8973", "10877", "11346", "12675", "13164", "891", "12289", "736", "15049", "10749", "1194", "13656", "4257", "497", "8423", "6916", "8119", "8952", "6279", "5304", "8463", "7921", "7286", "11251", "3286", "12691", "14169", "15419", "14633", "6352", "84", "1904", "758", "10314", "14405", "2639", "5314", "7750", "1768", "1090", "3663", "6236", "14254", "11651", "615", "4558", "5538", "7515", "4314", "11256", "7824", "6149", "2428", "2162", "4211", "3861", "3058", "3296", "12945", "12039", "13562", "14309", "11865", "1760", "1827", "14704", "6355", "13247", 
"12796", "8891", "5704", "9652", "4944", "12740", "6424", "8690", "15446", "8396", "5056", "5470", "7587", "6969", "11909", "2403", "7409", "6303", "369", "1300", "3583", "11826", "4362", "11570", "12159", "14015", "10180", "7291", "1221", "14971", "6604", "15023", "3179", "14494", "11579", "14878", "10977", "6637", "1736", "9158", "4187", "15054", "14559", "15424", "6712", "15093", "8360", "14720", "10259", "15196", "14774", "8367", "8098", "13296", "899", "827", "4416", "1568", "7458", "8082", "5686", "910", "12502", "6194", "4900", "5876", "3108", "14205", "4887", "8811", "2979", "13312", "8850", "5298", "3274", "1938", "13727", "14210", "218", "7370", "12686", "2188", "12022", "838", "6359", "4142", "5226", "12024", "15289", "15411", "6632", "10237", "1972", "11263", "13662", "293", "14899", "13031", "7102", "13433", "13108", "3619", "1700", "1442", "2236", "11181", "5284", "10497", "8017", "8162", "14556", "14577", "8681", "1494", "9053", "13948", "4753", "3313", "1583", "1258", "14136", "13219", "14734", "5035", "10537", "418", "6689", "11441", "830", "9865", "3785", "823", "5186", "8307", "4613", "1887", "7744", "6974", "5188", "13412", "15082", "5156", "8340", "83", "232", "8873", "3684", "1839", "12305", "10655", "6693", "9850", "12791", "6343", "614", "4151", "15309", "13517", "2677", "10316", "1056", "1148", "10888", "8616", "10091", "10495", "12692", "13902", "4842", "3489", "7552", "273", "11597", "6921", "7489", "3468", "13003", "7783", "11076", "6421", "12301", "834", "8664", "2582", "6200", "11672", "2182", "14273", "14334", "5847", "8888", "15452", "103", "3873", "2329", "2780", "310", "5909", "8759", "7906", "7787", "158", "11312", "1584", "7070", "8701", "7779", "13885", "2899", "5926", "6472", "5676", "7414", "10682", "9200", "8277", "3523", "11977", "10207", "12455", "12827", "809", "205", "13472", "11092", "2014", "1882", "13589", "15174", "2159", "1350", "2836", "277", "5685", "14590", "3530", "8605", "5731", "2209", "6347", "1737", "5308", "6877", "597", "8399", "7726", "595", "4059", "15066", "340", "14654", "6250", "8199", "3064", "2320", "819", "8538", "5941", "2565", "10585", "607", "14841", "11132", "10596", "224", "2059", "11792", "199", "14252", "1830", "9800", "6591", "12477", "2299", "3297", "9039", "10579", "11408", "15395", "8426", "3961", "10642", "12486", "14046", "5447", "7461", "9881", "7028", "4871", "11306", "10796", "8182", "5943", "4210", "15252", "606", "12497", "4761", "8263", "2708", "9954", "4460", "7356", "10070", "14781", "13244", "1681", "12343", "6529", "8137", "13979", "15402", "6377", "14097", "2844", "8839", "11599", "9749", "4298", "11070", "4701", "9560", "6301", "5795", "9218", "863", "3259", "5155", "5714", "10728", "1117", "14461", "6097", "15356", "12545", "173", "14117", "4352", "14314", "9760", "14840", "6574", "1729", "7753", "6213", "10066", "9107", "4567", "2225", "12003", "3718", "11083", "1919", "1834", "3792", "13959", "8870", "6758", "12361", "4141", "9439", "6143", "1623", "7899", "3376", "13302", "1475", "12676", "13269", "3621", "6520", "10299", "8477", "5005", "1782", "12157", "9812", "11399", "1829", "452", "12286", "5522", "2156", "7364", "9283", "4000", "12854", "225", "6898", "847", "11278", "7832", "3263", "9864", "15122", "1530", "10893", "14124", "4785", "6227", "5443", "4849", "13895", "10134", "14386", "12750", "126", "14656", "10972", "5306", "5697", "5891", "11041", "1515", "12411", "14659", "2098", "5010", "4086", "4467", "6204", "160", "4364", "7466", "2579", "3497", "6862", "3491", "14939", "7285", 
"6050", "11376", "8344", "1673", "5959", "11442", "361", "8703", "3026", "15413", "13330", "3535", "258", "14229", "3024", "4040", "5328", "3756", "8143", "9880", "14875", "6741", "11788", "4693", "6474", "12124", "15323", "3569", "15166", "12127", "4051", "10755", "9657", "13961", "3224", "6691", "4436", "5746", "4805", "9304", "9174", "5270", "5606", "5992", "11630", "6393", "11722", "11836", "4522", "14392", "10027", "5640", "13126", "14844", "1723", "11853", "6584", "13761", "11808", "3585", "1800", "7126", "603", "1653", "15379", "5885", "2774", "5557", "1941", "12852", "7734", "11654", "15116", "13777", "5773", "14837", "9937", "10538", "14908", "3775", "14896", "13266", "11822", "3293", "6238", "744", "4839", "2928", "14234", "7859", "177", "8195", "10969", "5542", "13547", "3934", "1153", "6237", "10326", "2002", "13423", "14374", "5563", "14438", "14910", "6302", "14053", "4573", "9013", "9727", "3086", "3191", "13082", "12148", "4525", "4260", "7334", "11673", "14823", "1319", "13179", "6373", "6341", "15339", "9328", "8105", "2393", "12733", "14707", "13692", "2504", "3549", "10148", "6445", "10156", "296", "2787", "7328", "15361", "1080", "13358", "500", "13310", "14180", "7910", "1924", "803", "8160", "12837", "7903", "7078", "6042", "5430", "8114", "11897", "10146", "6999", "6613", "3338", "14017", "6719", "5068", "14513", "4462", "11101", "6329", "7483", "848", "15468", "8384", "1908", "5220", "769", "6286", "13461", "13795", "14280", "2288", "7303", "14482", "2770", "12936", "11665", "12528", "14867", "9729", "1293", "13499", "2128", "1923", "6825", "11752", "2135", "4297", "13521", "12346", "8937", "8515", "2826", "7627", "4971", "741", "2943", "6235", "7636", "14931", "13367", "8443", "3420", "8736", "5592", "15405", "9695", "1748", "13508", "9156", "961", "10248", "12901", "10568", "12250", "12457", "3807", "5273", "9641", "7116", "7011", "2628", "13772", "9733", "12764", "2142", "10486", "10709", "1181", "4890", "6380", "5152", "7295", "8694", "1040", "9862", "12920", "14054", "14581", "5227", "613", "14631", "14964", "3698", "1771", "4200", "11982", "10142", "9121", "6403", "8635", "11022", "2150", "2642", "6894", "12559", "10455", "12841", "7581", "4664", "12549", "480", "9914", "11594", "1489", "14610", "8567", "7257", "9655", "7343", "3808", "3692", "4776", "483", "6463", "7039", "5043", "6418", "6571", "8171", "7117", "2339", "12813", "13347", "989", "7993", "10464", "1445", "4801", "11983", "8107", "9094", "6243", "4746", "6943", "6260", "2104", "13341", "10435", "10519", "10845", "13912", "5024", "14048", "11968", "504", "14744", "14721", "4508", "14874", "2215", "7496", "9774", "15131", "11219", "15440", "1226", "11524", "3710", "13114", "5464", "8671", "2333", "13512", "10023", "6471", "12510", "7869", "1469", "8355", "1402", "13675", "5436", "8593", "6486", "8753", "7314", "2610", "1599", "7723", "5793", "1139", "14725", "123", "14343", "4903", "14947", "14796", "2743", "11291", "11607", "6743", "1046", "15127", "3739", "4501", "11446", "14855", "9671", "7614", "13560", "2753", "6749", "7202", "12552", "7806", "2926", "7947", "6918", "949", "8060", "3701", "13978", "8153", "11452", "4652", "6846", "1560", "14130", "9051", "14592", "9035", "2930", "5038", "8647", "6506", "12073", "8596", "5175", "10908", "757", "1604", "8469", "6323", "13406", "8665", "5883", "8180", "7662", "10968", "10712", "12931", "11432", "7868", "6284", "594", "15121", "3531", "5413", "7159", "5353", "4029", "5783", "6398", "6215", "13128", "11892", "12229", "10306", "12702", "12665", 
"14256", "7935", "10833", "2945", "9489", "490", "8178", "3483", "11317", "4088", "5802", "13053", "14533", "3166", "5807", "14732", "6600", "14692", "14849", "6482", "8613", "4370", "2614", "6775", "2897", "10457", "10324", "11116", "4551", "3737", "6455", "13380", "5337", "14999", "9005", "7792", "5381", "5404", "6568", "13159", "11028", "3218", "9704", "8416", "5127", "13693", "14907", "15191", "10965", "9435", "5551", "3896", "13821", "15233", "7932", "655", "5280", "11703", "9696", "1535", "10792", "15488", "13101", "1411", "3519", "5820", "10008", "10692", "13116", "7455", "8009", "14541", "9070", "9153", "9534", "1384", "5853", "13595", "15183", "7174", "1716", "10673", "7577", "1656", "14290", "7272", "12984", "3456", "11760", "74", "867", "10750", "5249", "3054", "6699", "15237", "14071", "11706", "4518", "12179", "11745", "13328", "4022", "13306", "1241", "9269", "6402", "6912", "5042", "2587", "697", "8504", "15273", "13722", "4545", "15291", "11027", "10163", "15275", "8677", "13279", "9364", "7519", "10767", "12714", "6129", "8296", "575", "3844", "8845", "9493", "2570", "10926", "7805", "6412", "4715", "1669", "14233", "1043", "13635", "15128", "364", "15374", "9190", "12186", "7534", "1208", "859", "6553", "6488", "13548", "6901", "9326", "4395", "13780", "8342", "11914", "7774", "3700", "14449", "7600", "9007", "4251", "638", "3822", "1201", "973", "3283", "10832", "14652", "5176", "3469", "55", "2812", "4643", "8511", "13203", "14172", "1705", "9047", "4615", "154", "6745", "1372", "9119", "12907", "12219", "12580", "596", "4271", "6385", "191", "7815", "8072", "15184", "12473", "3639", "1709", "6924", "5531", "7025", "11187", "3773", "5504", "2404", "2840", "13280", "1173", "14248", "9976", "8244", "7160", "4378", "3561", "10442", "12812", "15009", "4602", "2070", "9748", "15401", "11584", "3481", "14249", "1856", "3615", "13572", "8460", "5770", "7808", "1415", "8791", "12518", "4495", "6828", "13904", "10657", "4371", "8745", "5272", "14484", "15146", "6672", "3324", "8879", "3415", "435", "2178", "2929", "6354", "2609", "9272", "3745", "2773", "15061", "8174", "5549", "1346", "6078", "2152", "9672", "1289", "1075", "5827", "7428", "4619", "2385", "3424", "14549", "4001", "878", "12660", "12223", "2725", "5113", "1162", "9296", "2649", "319", "13036", "13988", "11060", "3212", "13214", "5254", "53", "3000", "474", "3063", "1715", "1503", "8262", "10069", "12638", "8108", "13881", "13609", "3337", "10843", "9354", "2693", "1407", "11793", "2553", "572", "4874", "5067", "8410", "7329", "4214", "3819", "740", "4108", "12503", "4044", "5382", "13945", "34", "8865", "6595", "1794", "2586", "8869", "11331", "4920", "11348", "10886", "7949", "6762", "7348", "7336", "12108", "2954", "7863", "6006", "3625", "641", "1417", "6292", "6460", "14033", "5745", "8411", "10934", "10428", "13730", "9373", "15097", "10633", "7184", "5659", "3186", "14998", "9852", "4570", "7735", "7234", "9060", "2467", "13161", "5603", "3075", "7110", "11322", "8141", "2286", "11169", "20", "7936", "7795", "14949", "9755", "11294", "3743", "298", "12168", "14082", "14684", "4103", "1010", "3920", "3031", "9422", "5654", "5317", "6436", "3746", "15028", "3853", "2532", "852", "11992", "13661", "12089", "13211", "2356", "772", "170", "7555", "10870", "12546", "4846", "8594", "9366", "4181", "9126", "7015", "5496", "9922", "6409", "14714", "7737", "1643", "1343", "11603", "3470", "7615", "13592", "8168", "5897", "8723", "7875", "9238", "15489", "1094", "4335", "4533", "2025", "5207", "5619", "7298", 
"15469", "8890", "3124", "9011", "10979", "13377", "5863", "477", "8433", "13135", "7439", "6496", "5754", "3635", "13585", "14001", "3672", "13925", "6289", "3507", "6955", "5763", "5444", "13124", "1534", "9552", "12844", "11352", "2541", "11816", "10736", "8738", "8434", "2056", "13246", "1629", "7911", "6961", "1083", "8156", "8310", "4188", "7591", "11214", "10251", "4520", "2873", "8057", "3116", "12677", "12172", "13335", "10847", "3631", "2760", "4105", "1182", "13479", "5889", "8422", "15074", "12025", "107", "1009", "3016", "5058", "15434", "9590" ]
SodaXII/convnextv2-tiny-1k-224_rice-leaf-disease-augmented-v4_v5_fft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # convnextv2-tiny-1k-224_rice-leaf-disease-augmented-v4_v5_fft This model is a fine-tuned version of [facebook/convnextv2-tiny-1k-224](https://huggingface.co/facebook/convnextv2-tiny-1k-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2563 - Accuracy: 0.9530 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 256 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.9901 | 0.5 | 64 | 1.7784 | 0.4664 | | 1.4592 | 1.0 | 128 | 1.0787 | 0.7248 | | 0.7698 | 1.5 | 192 | 0.5368 | 0.8389 | | 0.3994 | 2.0 | 256 | 0.3007 | 0.9094 | | 0.1834 | 2.5 | 320 | 0.2267 | 0.9262 | | 0.1011 | 3.0 | 384 | 0.2156 | 0.9195 | | 0.0291 | 3.5 | 448 | 0.1964 | 0.9396 | | 0.017 | 4.0 | 512 | 0.2273 | 0.9396 | | 0.0036 | 4.5 | 576 | 0.2356 | 0.9329 | | 0.0029 | 5.0 | 640 | 0.1751 | 0.9530 | | 0.0013 | 5.5 | 704 | 0.1765 | 0.9530 | | 0.0011 | 6.0 | 768 | 0.1850 | 0.9530 | | 0.001 | 6.5 | 832 | 0.1829 | 0.9530 | | 0.0009 | 7.0 | 896 | 0.1773 | 0.9564 | | 0.0006 | 7.5 | 960 | 0.1920 | 0.9564 | | 0.0004 | 8.0 | 1024 | 0.1813 | 0.9564 | | 0.0003 | 8.5 | 1088 | 0.1982 | 0.9530 | | 0.0003 | 9.0 | 1152 | 0.2061 | 0.9530 | | 0.0003 | 9.5 | 1216 | 0.2035 | 0.9530 | | 0.0002 | 10.0 | 1280 | 0.2063 | 0.9530 | | 0.0002 | 10.5 | 1344 | 0.2074 | 0.9530 | | 0.0002 | 11.0 | 1408 | 0.2072 | 0.9530 | | 0.0002 | 11.5 | 1472 | 0.2093 | 0.9530 | | 0.0002 | 12.0 | 1536 | 0.2017 | 0.9564 | | 0.0002 | 12.5 | 1600 | 0.2109 | 0.9530 | | 0.0001 | 13.0 | 1664 | 0.2203 | 0.9530 | | 0.0001 | 13.5 | 1728 | 0.2152 | 0.9530 | | 0.0001 | 14.0 | 1792 | 0.2220 | 0.9530 | | 0.0001 | 14.5 | 1856 | 0.2251 | 0.9530 | | 0.0001 | 15.0 | 1920 | 0.2214 | 0.9530 | | 0.0001 | 15.5 | 1984 | 0.2219 | 0.9530 | | 0.0001 | 16.0 | 2048 | 0.2219 | 0.9530 | | 0.0001 | 16.5 | 2112 | 0.2200 | 0.9564 | | 0.0001 | 17.0 | 2176 | 0.2324 | 0.9530 | | 0.0001 | 17.5 | 2240 | 0.2330 | 0.9530 | | 0.0001 | 18.0 | 2304 | 0.2349 | 0.9530 | | 0.0001 | 18.5 | 2368 | 0.2334 | 0.9530 | | 0.0001 | 19.0 | 2432 | 0.2365 | 0.9530 | | 0.0001 | 19.5 | 2496 | 0.2364 | 0.9530 | | 0.0001 | 20.0 | 2560 | 0.2358 | 0.9530 | | 0.0001 | 20.5 | 2624 | 0.2362 | 0.9530 | | 0.0001 | 21.0 | 2688 | 0.2388 | 0.9530 | | 0.0001 | 21.5 | 2752 | 0.2420 | 0.9530 | | 0.0001 | 22.0 | 2816 | 0.2401 | 0.9530 | | 0.0001 | 22.5 | 2880 | 0.2433 | 0.9530 | | 0.0 | 23.0 | 2944 | 0.2398 | 0.9564 | | 0.0 | 23.5 | 3008 | 0.2445 | 0.9530 | | 0.0 | 24.0 | 3072 | 0.2462 | 0.9530 | | 0.0 | 24.5 | 3136 | 0.2460 | 0.9530 | | 0.0 | 25.0 | 3200 | 0.2460 | 0.9530 | | 0.0 | 25.5 | 3264 | 0.2494 | 0.9530 | | 0.0 | 26.0 | 3328 | 0.2527 | 0.9497 | | 0.0 | 26.5 | 3392 | 0.2507 | 0.9530 | | 0.0 | 27.0 | 3456 | 0.2503 | 0.9564 | | 0.0 | 27.5 | 3520 | 0.2570 | 0.9530 | | 0.0 | 
28.0 | 3584 | 0.2545 | 0.9530 | | 0.0 | 28.5 | 3648 | 0.2562 | 0.9530 | | 0.0 | 29.0 | 3712 | 0.2565 | 0.9530 | | 0.0 | 29.5 | 3776 | 0.2562 | 0.9530 | | 0.0 | 30.0 | 3840 | 0.2563 | 0.9530 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.1
[ "bacterial leaf blight", "brown spot", "healthy rice leaf", "leaf blast", "leaf scald", "narrow brown leaf spot", "rice hispa", "sheath blight" ]
sameh4/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0513 - Accuracy: 0.9807 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2206 | 1.0 | 190 | 0.0974 | 0.9674 | | 0.163 | 2.0 | 380 | 0.0613 | 0.9807 | | 0.1436 | 3.0 | 570 | 0.0513 | 0.9807 | ### Framework versions - Transformers 4.51.2 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "annualcrop", "forest", "herbaceousvegetation", "highway", "industrial", "pasture", "permanentcrop", "residential", "river", "sealake" ]
thenewsupercell/MaskedEyes_image_parts_df_VIT
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MaskedEyes_image_parts_df_VIT This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0203 - Accuracy: 0.9956 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.0197 | 1.0 | 5252 | 0.0402 | 0.9907 | | 0.0008 | 2.0 | 10504 | 0.0253 | 0.9940 | | 0.0004 | 3.0 | 15756 | 0.0221 | 0.9944 | | 0.0095 | 4.0 | 21008 | 0.0203 | 0.9956 | ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
[ "fake", "real" ]
Docty/Mangovariety
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mango_output12 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Docty/Mangovariety dataset. It achieves the following results on the evaluation set: - Loss: 0.3909 - Accuracy: 0.9917 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 1337 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 85 | 1.2826 | 0.9375 | | No log | 2.0 | 170 | 0.7519 | 0.975 | | No log | 3.0 | 255 | 0.5236 | 0.9792 | | No log | 4.0 | 340 | 0.4190 | 0.9875 | | No log | 5.0 | 425 | 0.3909 | 0.9917 | ### Framework versions - Transformers 4.51.1 - Pytorch 2.5.1+cu124 - Datasets 3.5.0 - Tokenizers 0.21.0
[ "dosehri", "sindhri", "fajri", "anwar ratool", "chaunsa (white)", "langra", "chaunsa (black)", "chaunsa (summer bahisht)" ]
SodaXII/deit-small-patch16-224_rice-leaf-disease-augmented-v4_v5_fft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deit-small-patch16-224_rice-leaf-disease-augmented-v4_v5_fft This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3237 - Accuracy: 0.9430 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 256 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.9877 | 0.5 | 64 | 1.6839 | 0.4530 | | 1.1728 | 1.0 | 128 | 0.7806 | 0.7517 | | 0.4995 | 1.5 | 192 | 0.4855 | 0.8389 | | 0.2433 | 2.0 | 256 | 0.2821 | 0.9161 | | 0.0745 | 2.5 | 320 | 0.3466 | 0.9060 | | 0.0506 | 3.0 | 384 | 0.3206 | 0.9195 | | 0.0163 | 3.5 | 448 | 0.2656 | 0.9195 | | 0.0061 | 4.0 | 512 | 0.2853 | 0.9295 | | 0.002 | 4.5 | 576 | 0.2004 | 0.9430 | | 0.0009 | 5.0 | 640 | 0.2256 | 0.9396 | | 0.0006 | 5.5 | 704 | 0.2412 | 0.9362 | | 0.0006 | 6.0 | 768 | 0.2381 | 0.9362 | | 0.0005 | 6.5 | 832 | 0.2384 | 0.9362 | | 0.0005 | 7.0 | 896 | 0.2366 | 0.9430 | | 0.0004 | 7.5 | 960 | 0.2608 | 0.9329 | | 0.0003 | 8.0 | 1024 | 0.2530 | 0.9430 | | 0.0002 | 8.5 | 1088 | 0.2587 | 0.9396 | | 0.0002 | 9.0 | 1152 | 0.2602 | 0.9430 | | 0.0002 | 9.5 | 1216 | 0.2671 | 0.9396 | | 0.0002 | 10.0 | 1280 | 0.2640 | 0.9396 | | 0.0002 | 10.5 | 1344 | 0.2637 | 0.9396 | | 0.0002 | 11.0 | 1408 | 0.2644 | 0.9396 | | 0.0002 | 11.5 | 1472 | 0.2677 | 0.9396 | | 0.0001 | 12.0 | 1536 | 0.2767 | 0.9362 | | 0.0001 | 12.5 | 1600 | 0.2761 | 0.9396 | | 0.0001 | 13.0 | 1664 | 0.2833 | 0.9396 | | 0.0001 | 13.5 | 1728 | 0.2810 | 0.9396 | | 0.0001 | 14.0 | 1792 | 0.2839 | 0.9396 | | 0.0001 | 14.5 | 1856 | 0.2860 | 0.9396 | | 0.0001 | 15.0 | 1920 | 0.2846 | 0.9396 | | 0.0001 | 15.5 | 1984 | 0.2850 | 0.9396 | | 0.0001 | 16.0 | 2048 | 0.2850 | 0.9396 | | 0.0001 | 16.5 | 2112 | 0.2854 | 0.9396 | | 0.0001 | 17.0 | 2176 | 0.2950 | 0.9396 | | 0.0001 | 17.5 | 2240 | 0.2976 | 0.9396 | | 0.0001 | 18.0 | 2304 | 0.2972 | 0.9396 | | 0.0001 | 18.5 | 2368 | 0.2984 | 0.9396 | | 0.0001 | 19.0 | 2432 | 0.2980 | 0.9396 | | 0.0001 | 19.5 | 2496 | 0.3003 | 0.9396 | | 0.0001 | 20.0 | 2560 | 0.3002 | 0.9396 | | 0.0001 | 20.5 | 2624 | 0.3001 | 0.9396 | | 0.0001 | 21.0 | 2688 | 0.3030 | 0.9396 | | 0.0 | 21.5 | 2752 | 0.3009 | 0.9396 | | 0.0 | 22.0 | 2816 | 0.3065 | 0.9396 | | 0.0 | 22.5 | 2880 | 0.3093 | 0.9396 | | 0.0 | 23.0 | 2944 | 0.3082 | 0.9396 | | 0.0 | 23.5 | 3008 | 0.3105 | 0.9396 | | 0.0 | 24.0 | 3072 | 0.3112 | 0.9396 | | 0.0 | 24.5 | 3136 | 0.3115 | 0.9396 | | 0.0 | 25.0 | 3200 | 0.3117 | 0.9396 | | 0.0 | 25.5 | 3264 | 0.3091 | 0.9396 | | 0.0 | 26.0 | 3328 | 0.3159 | 0.9430 | | 0.0 | 26.5 | 3392 | 0.3166 | 0.9430 | | 0.0 | 27.0 | 3456 | 0.3231 | 0.9396 | | 0.0 | 27.5 | 3520 | 0.3229 | 0.9396 | | 0.0 | 28.0 | 
3584 | 0.3223 | 0.9430 | | 0.0 | 28.5 | 3648 | 0.3246 | 0.9396 | | 0.0 | 29.0 | 3712 | 0.3229 | 0.9430 | | 0.0 | 29.5 | 3776 | 0.3234 | 0.9430 | | 0.0 | 30.0 | 3840 | 0.3237 | 0.9430 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.1
[ "bacterial leaf blight", "brown spot", "healthy rice leaf", "leaf blast", "leaf scald", "narrow brown leaf spot", "rice hispa", "sheath blight" ]
maceythm/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.2378 - Accuracy: 0.9296 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3813 | 1.0 | 370 | 0.3180 | 0.9242 | | 0.1966 | 2.0 | 740 | 0.2371 | 0.9378 | | 0.1661 | 3.0 | 1110 | 0.2204 | 0.9378 | | 0.1356 | 4.0 | 1480 | 0.2035 | 0.9391 | | 0.1079 | 5.0 | 1850 | 0.2025 | 0.9405 | ### Framework versions - Transformers 4.50.0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.1 # CLIP As a baseline, zero-shot classification on the Oxford-IIIT Pets dataset with OpenAI's CLIP model achieves the following results: - Accuracy: 0.8800 - Precision: 0.8768 - Recall: 0.8800
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
loretyan/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.1833 - Accuracy: 0.9418 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3917 | 1.0 | 370 | 0.2881 | 0.9229 | | 0.2136 | 2.0 | 740 | 0.2137 | 0.9391 | | 0.1801 | 3.0 | 1110 | 0.1909 | 0.9472 | | 0.1315 | 4.0 | 1480 | 0.1859 | 0.9432 | | 0.1473 | 5.0 | 1850 | 0.1826 | 0.9445 | ### Framework versions - Transformers 4.50.0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.1
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
Betim24/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.2035 - Model Preparation Time: 0.0031 - Accuracy: 0.9405 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:----------------------:|:--------:| | 0.3865 | 1.0 | 370 | 0.3079 | 0.0031 | 0.9350 | | 0.2151 | 2.0 | 740 | 0.2385 | 0.0031 | 0.9296 | | 0.151 | 3.0 | 1110 | 0.2147 | 0.0031 | 0.9337 | | 0.151 | 4.0 | 1480 | 0.2069 | 0.0031 | 0.9378 | | 0.1258 | 5.0 | 1850 | 0.2038 | 0.0031 | 0.9405 | ### Framework versions - Transformers 4.50.0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.1 ### Reported results - Accuracy: 0.8800 - Precision: 0.8768 - Recall: 0.8800
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
reyraa/sn72-roadwork
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
selintyrs/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-oxford-iiit-pets This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set: - Loss: 0.2313 - Accuracy: 0.9269 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3776 | 1.0 | 370 | 0.3322 | 0.9161 | | 0.19 | 2.0 | 740 | 0.2578 | 0.9188 | | 0.1655 | 3.0 | 1110 | 0.2404 | 0.9229 | | 0.1461 | 4.0 | 1480 | 0.2318 | 0.9256 | | 0.1214 | 5.0 | 1850 | 0.2319 | 0.9269 | ### Framework versions - Transformers 4.50.0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.1 ### Zero Shot Evaluation - Accuracy: 0.8800 - Precision: 0.8768 - Recall: 0.8800
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
gwx20211433/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0671 - Accuracy: 0.9778 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1995 | 1.0 | 190 | 0.1169 | 0.9644 | | 0.1323 | 2.0 | 380 | 0.0691 | 0.9789 | | 0.1259 | 3.0 | 570 | 0.0671 | 0.9778 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.0
[ "annualcrop", "forest", "herbaceousvegetation", "highway", "industrial", "pasture", "permanentcrop", "residential", "river", "sealake" ]
SodaXII/swin-tiny-patch4-window7-224_rice-leaf-disease-augmented-v4_v5_fft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# swin-tiny-patch4-window7-224_rice-leaf-disease-augmented-v4_v5_fft

This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3980
- Accuracy: 0.9362

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 256
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9379        | 0.5   | 64   | 1.5473          | 0.5336   |
| 1.0179        | 1.0   | 128  | 0.5529          | 0.8255   |
| 0.4389        | 1.5   | 192  | 0.3367          | 0.8792   |
| 0.2443        | 2.0   | 256  | 0.2354          | 0.9228   |
| 0.1151        | 2.5   | 320  | 0.2422          | 0.9329   |
| 0.0648        | 3.0   | 384  | 0.2351          | 0.9228   |
| 0.0325        | 3.5   | 448  | 0.3316          | 0.9094   |
| 0.0235        | 4.0   | 512  | 0.2389          | 0.9329   |
| 0.0148        | 4.5   | 576  | 0.2317          | 0.9362   |
| 0.0081        | 5.0   | 640  | 0.2005          | 0.9362   |
| 0.0054        | 5.5   | 704  | 0.2198          | 0.9396   |
| 0.0043        | 6.0   | 768  | 0.2171          | 0.9430   |
| 0.0038        | 6.5   | 832  | 0.2200          | 0.9396   |
| 0.003         | 7.0   | 896  | 0.2383          | 0.9362   |
| 0.0183        | 7.5   | 960  | 0.2142          | 0.9262   |
| 0.0126        | 8.0   | 1024 | 0.2931          | 0.9295   |
| 0.0093        | 8.5   | 1088 | 0.2718          | 0.9396   |
| 0.0035        | 9.0   | 1152 | 0.3975          | 0.9195   |
| 0.0019        | 9.5   | 1216 | 0.2839          | 0.9295   |
| 0.0015        | 10.0  | 1280 | 0.2764          | 0.9329   |
| 0.001         | 10.5  | 1344 | 0.3293          | 0.9195   |
| 0.0013        | 11.0  | 1408 | 0.3007          | 0.9295   |
| 0.0011        | 11.5  | 1472 | 0.3193          | 0.9362   |
| 0.0122        | 12.0  | 1536 | 0.2385          | 0.9463   |
| 0.0269        | 12.5  | 1600 | 0.3694          | 0.9295   |
| 0.0092        | 13.0  | 1664 | 0.3212          | 0.9295   |
| 0.0032        | 13.5  | 1728 | 0.2687          | 0.9362   |
| 0.0023        | 14.0  | 1792 | 0.4258          | 0.9128   |
| 0.0015        | 14.5  | 1856 | 0.3478          | 0.9329   |
| 0.0007        | 15.0  | 1920 | 0.3445          | 0.9329   |
| 0.0005        | 15.5  | 1984 | 0.3371          | 0.9329   |
| 0.001         | 16.0  | 2048 | 0.3407          | 0.9329   |
| 0.0079        | 16.5  | 2112 | 0.4630          | 0.9195   |
| 0.0098        | 17.0  | 2176 | 0.3260          | 0.9362   |
| 0.016         | 17.5  | 2240 | 0.5011          | 0.9195   |
| 0.0085        | 18.0  | 2304 | 0.3927          | 0.9362   |
| 0.0057        | 18.5  | 2368 | 0.3910          | 0.9295   |
| 0.0016        | 19.0  | 2432 | 0.3433          | 0.9262   |
| 0.001         | 19.5  | 2496 | 0.4334          | 0.9228   |
| 0.0007        | 20.0  | 2560 | 0.4074          | 0.9228   |
| 0.0005        | 20.5  | 2624 | 0.4064          | 0.9228   |
| 0.0018        | 21.0  | 2688 | 0.5353          | 0.9228   |
| 0.0097        | 21.5  | 2752 | 0.4666          | 0.9128   |
| 0.0127        | 22.0  | 2816 | 0.3886          | 0.9228   |
| 0.0036        | 22.5  | 2880 | 0.2585          | 0.9430   |
| 0.0036        | 23.0  | 2944 | 0.4433          | 0.9329   |
| 0.001         | 23.5  | 3008 | 0.4032          | 0.9329   |
| 0.0006        | 24.0  | 3072 | 0.4428          | 0.9295   |
| 0.0012        | 24.5  | 3136 | 0.4001          | 0.9228   |
| 0.0007        | 25.0  | 3200 | 0.3978          | 0.9262   |
| 0.0006        | 25.5  | 3264 | 0.5251          | 0.9161   |
| 0.0054        | 26.0  | 3328 | 0.5131          | 0.9228   |
| 0.005         | 26.5  | 3392 | 0.4757          | 0.9027   |
| 0.004         | 27.0  | 3456 | 0.4730          | 0.9195   |
| 0.0025        | 27.5  | 3520 | 0.4133          | 0.9228   |
| 0.0012        | 28.0  | 3584 | 0.4215          | 0.9228   |
| 0.0007        | 28.5  | 3648 | 0.4058          | 0.9329   |
| 0.0004        | 29.0  | 3712 | 0.4024          | 0.9262   |
| 0.0003        | 29.5  | 3776 | 0.3974          | 0.9362   |
| 0.0002        | 30.0  | 3840 | 0.3980          | 0.9362   |

### Framework versions

- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.1
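For the `cosine_with_restarts` schedule listed above, the `Trainer` builds the optimizer and LR scheduler internally; a hand-rolled equivalent might look like the sketch below. The 3840 total steps come from the table's final row, and `num_labels=8` matches the label list that follows; both are read off the card rather than confirmed from the training script.

```python
import torch
from transformers import (
    AutoModelForImageClassification,
    get_cosine_with_hard_restarts_schedule_with_warmup,
)

# Replace the 1000-class ImageNet head with an 8-class rice-disease head.
model = AutoModelForImageClassification.from_pretrained(
    "microsoft/swin-tiny-patch4-window7-224",
    num_labels=8,
    ignore_mismatched_sizes=True,
)

# AdamW with the betas/epsilon listed in the card.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, betas=(0.9, 0.999), eps=1e-8)

# 256 linear warmup steps, then cosine decay (with restarts) over the rest.
scheduler = get_cosine_with_hard_restarts_schedule_with_warmup(
    optimizer, num_warmup_steps=256, num_training_steps=3840
)
```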
[ "bacterial leaf blight", "brown spot", "healthy rice leaf", "leaf blast", "leaf scald", "narrow brown leaf spot", "rice hispa", "sheath blight" ]
Marc-Hagenbusch/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-oxford-iiit-pets

This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2208
- Accuracy: 0.9310

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3968        | 1.0   | 370  | 0.2800          | 0.9364   |
| 0.2184        | 2.0   | 740  | 0.2124          | 0.9378   |
| 0.1725        | 3.0   | 1110 | 0.1944          | 0.9418   |
| 0.1481        | 4.0   | 1480 | 0.1815          | 0.9445   |
| 0.1286        | 5.0   | 1850 | 0.1782          | 0.9445   |

### Framework versions

- Transformers 4.51.3
- Pytorch 2.6.0
- Datasets 3.5.0
- Tokenizers 0.21.1
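A minimal inference sketch for this checkpoint, using the repo id from the record above (the image filename is a placeholder):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Load the processor and fine-tuned classifier from the Hub.
processor = AutoImageProcessor.from_pretrained("Marc-Hagenbusch/vit-base-oxford-iiit-pets")
model = AutoModelForImageClassification.from_pretrained("Marc-Hagenbusch/vit-base-oxford-iiit-pets")

# Preprocess one image and pick the highest-scoring breed.
image = Image.open("pet.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```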
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
lautenad/vit-base-oxford-iiit-pets
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-oxford-iiit-pets

This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2250
- Accuracy: 0.9296

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3534        | 1.0   | 370  | 0.2940          | 0.9229   |
| 0.1939        | 2.0   | 740  | 0.2241          | 0.9350   |
| 0.1827        | 3.0   | 1110 | 0.2055          | 0.9378   |
| 0.1317        | 4.0   | 1480 | 0.2013          | 0.9418   |
| 0.1364        | 5.0   | 1850 | 0.1979          | 0.9350   |

### Framework versions

- Transformers 4.51.3
- Pytorch 2.6.0
- Datasets 3.5.0
- Tokenizers 0.21.1
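To sanity-check the reported accuracy, one could score the checkpoint against a slice of the dataset. The sketch below assumes `pcuenq/oxford-pets` has a `train` split exposing an `image` column (PIL images) and a string `label` column; verify the actual schema before relying on it.

```python
from datasets import load_dataset
from transformers import pipeline

# Load the fine-tuned classifier and a small evaluation slice.
classifier = pipeline("image-classification", model="lautenad/vit-base-oxford-iiit-pets")
dataset = load_dataset("pcuenq/oxford-pets", split="train").select(range(100))

# Compare top-1 predictions against the reference labels
# (case-insensitive, in case label casing differs between the two).
correct = 0
for example in dataset:
    pred = classifier(example["image"])[0]["label"]
    correct += int(pred.lower() == str(example["label"]).lower())
print(f"accuracy on 100 samples: {correct / len(dataset):.3f}")
```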
[ "siamese", "birman", "shiba inu", "staffordshire bull terrier", "basset hound", "bombay", "japanese chin", "chihuahua", "german shorthaired", "pomeranian", "beagle", "english cocker spaniel", "american pit bull terrier", "ragdoll", "persian", "egyptian mau", "miniature pinscher", "sphynx", "maine coon", "keeshond", "yorkshire terrier", "havanese", "leonberger", "wheaten terrier", "american bulldog", "english setter", "boxer", "newfoundland", "bengal", "samoyed", "british shorthair", "great pyrenees", "abyssinian", "pug", "saint bernard", "russian blue", "scottish terrier" ]
galeio-research/OceanSAR-1-tengeop
# Model Card for OceanSAR-1-TenGeoP

## Model Details

<img src="OceanSAR-1-logo.png" width=400>

### Model Description

OceanSAR-1-TenGeoP is a linear probing head for classifying ocean geophysical phenomena, built on top of the OceanSAR-1 foundation model. It leverages the powerful features extracted by OceanSAR-1 to accurately identify 10 different geophysical phenomena in Synthetic Aperture Radar (SAR) imagery.

- **Developed by:** Thomas Kerdreux, Alexandre Tuel @ [Galeio](http://galeio.fr)
- **Deployed by:** Antoine Audras @ [Galeio](http://galeio.fr)
- **Model type:** Linear Classification Head on Vision Foundation Model
- **License:** Apache License 2.0
- **Base model:** OceanSAR-1 (ResNet50/ViT variants)
- **Training data:** Sentinel-1 Wave Mode (WV) SAR images with labeled geophysical phenomena

## Uses

### Direct Use

This model is designed for automated classification of geophysical phenomena in SAR imagery over ocean surfaces. It can be used for:

- Rapid identification of ocean features in SAR data
- Monitoring of maritime environments
- Automated analysis of large SAR datasets
- Ocean science and research applications

### Performance Results

The model achieves state-of-the-art performance on TenGeoP classification, with performance varying by backbone architecture:

| Backbone | TenGeoP Accuracy (%) |
|----------|---------------------|
| ResNet50 | 75.5                |
| ViT-S/16 | 78.6                |
| ViT-S/8  | 82.1                |
| ViT-B/8  | 83.6                |

## How to Use

```python
import torch
from transformers import AutoModelForImageClassification

# Load the foundation model and classification head
oceansar = AutoModelForImageClassification.from_pretrained("galeio-research/OceanSAR-1-tengeop")

# Prepare your SAR image (should be single-channel VV polarization)
dummy_image = torch.randn(1, 1, 256, 256)  # (B, C, H, W)

# Extract features and classify geophysical phenomena
with torch.no_grad():
    outputs = oceansar(dummy_image)
    predicted_class = torch.argmax(outputs.logits, dim=1).item()
```

## Training Details

### Training Data

- **Dataset:** Sentinel-1 Wave Mode (WV) SAR images with labeled geophysical phenomena
- **Labels:** 10 classes of ocean geophysical phenomena
- **Size:** Balanced dataset across all classes
- **Preprocessing:** Same as base OceanSAR-1 model

## Evaluation

### Metrics

TenGeoP classification performance is evaluated using accuracy (%), achieving:

- 75.5% accuracy with ResNet50 backbone
- 78.6% accuracy with ViT-S/16 backbone
- 82.1% accuracy with ViT-S/8 backbone
- 83.6% accuracy with ViT-B/8 backbone

### Comparison to Other Backbones

The model outperforms existing approaches:

- CROMA (ViT-B/8): 65.4% accuracy
- MoCo (ResNet50): 60.9% accuracy
- DeCUR (ResNet50): 58.3% accuracy
- DOFA (ViT-B/16): 58.4% accuracy
- DOFA (ViT-L/16): 63.4% accuracy
- SoftCon (ViT-S/14): 73.2% accuracy
- SoftCon (ViT-B/14): 74.8% accuracy

## Technical Specifications

### Hardware Requirements

- Same as base model
- Minimal additional computational cost for inference

### Dependencies

- PyTorch >= 1.8.0
- Transformers >= 4.30.0
- Base OceanSAR-1 model

### Input Specifications

- Same as base OceanSAR-1 model
- Single channel (VV polarization) SAR images
- 256x256 pixel resolution

## Citation

**BibTeX:**

```bibtex
@article{kerdreux2025efficientselfsupervisedlearningearth,
  title={Efficient Self-Supervised Learning for Earth Observation via Dynamic Dataset Curation},
  author={Kerdreux, Thomas and Tuel, Alexandre and Febvre, Quentin and Mouche, Alexis and Chapron, Bertrand},
  journal={arXiv preprint arXiv:2504.06962},
  year={2025},
  eprint={2504.06962},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2504.06962},
}
```

## Acknowledgements

This work was granted access to the HPC resources of IDRIS and TGCC under the allocation 2025-[A0171015666] made by GENCI.
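The predicted index can be mapped back to a label name through the model config; a short continuation of the "How to Use" snippet above (note the checkpoint ships generic `label_0` … `label_9` names, as the list below shows, rather than the phenomenon names themselves):

```python
# Map the predicted index to the checkpoint's label name. The id2label
# mapping ships with the config, though its entries are generic here.
label = oceansar.config.id2label[predicted_class]
print(f"Predicted phenomenon class: {label}")
```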
[ "label_0", "label_1", "label_2", "label_3", "label_4", "label_5", "label_6", "label_7", "label_8", "label_9" ]