Dataset columns:
- model_id — string (7 to 105 characters)
- model_card — string (1 to 130k characters)
- model_labels — list (2 to 80k items)
moreover18/hf_images_model1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hf_images_model1 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the image_folder dataset. It achieves the following results on the evaluation set: - Loss: 0.2058 - Accuracy: 0.9178 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7057 | 0.04 | 10 | 0.7027 | 0.4644 | | 0.6808 | 0.09 | 20 | 0.6615 | 0.6590 | | 0.6278 | 0.13 | 30 | 0.5969 | 0.7441 | | 0.5674 | 0.17 | 40 | 0.5134 | 0.8183 | | 0.4761 | 0.21 | 50 | 0.4146 | 0.875 | | 0.3777 | 0.26 | 60 | 0.3362 | 0.8796 | | 0.303 | 0.3 | 70 | 0.2906 | 0.8854 | | 0.2385 | 0.34 | 80 | 0.2694 | 0.8937 | | 0.2452 | 0.39 | 90 | 0.2515 | 0.9012 | | 0.2771 | 0.43 | 100 | 0.2441 | 0.9050 | | 0.2332 | 0.47 | 110 | 0.2510 | 0.8975 | | 0.2495 | 0.51 | 120 | 0.2398 | 0.9052 | | 0.2611 | 0.56 | 130 | 0.2384 | 0.9063 | | 0.2292 | 0.6 | 140 | 0.2931 | 0.8865 | | 0.2518 | 0.64 | 150 | 0.2537 | 0.8994 | | 0.211 | 0.69 | 160 | 0.2619 | 0.8953 | | 0.2514 | 0.73 | 170 | 0.2236 | 0.9090 | | 0.2272 | 0.77 | 180 | 0.2254 | 0.9085 | | 0.2263 | 0.81 | 190 | 0.2141 | 0.9181 | | 0.2524 | 0.86 | 200 | 0.2038 | 0.9194 | | 0.2024 | 0.9 | 210 | 0.2038 
| 0.9165 | | 0.2355 | 0.94 | 220 | 0.2215 | 0.9103 | | 0.2431 | 0.99 | 230 | 0.2116 | 0.9178 | | 0.1921 | 1.03 | 240 | 0.2105 | 0.9111 | | 0.1845 | 1.07 | 250 | 0.2107 | 0.9117 | | 0.1838 | 1.11 | 260 | 0.2070 | 0.9119 | | 0.1824 | 1.16 | 270 | 0.2110 | 0.9130 | | 0.1706 | 1.2 | 280 | 0.2177 | 0.9154 | | 0.1826 | 1.24 | 290 | 0.2058 | 0.9160 | | 0.1816 | 1.28 | 300 | 0.2081 | 0.9176 | | 0.1901 | 1.33 | 310 | 0.2187 | 0.9149 | | 0.2112 | 1.37 | 320 | 0.2107 | 0.9181 | | 0.22 | 1.41 | 330 | 0.2065 | 0.9173 | | 0.2105 | 1.46 | 340 | 0.2090 | 0.9170 | | 0.2016 | 1.5 | 350 | 0.2044 | 0.9141 | | 0.2055 | 1.54 | 360 | 0.2029 | 0.9173 | | 0.1507 | 1.58 | 370 | 0.2103 | 0.9192 | | 0.1705 | 1.63 | 380 | 0.1960 | 0.9184 | | 0.1605 | 1.67 | 390 | 0.2070 | 0.9154 | | 0.2011 | 1.71 | 400 | 0.2096 | 0.9160 | | 0.1832 | 1.76 | 410 | 0.2023 | 0.9176 | | 0.1756 | 1.8 | 420 | 0.2005 | 0.9189 | | 0.1874 | 1.84 | 430 | 0.2050 | 0.9135 | | 0.1497 | 1.88 | 440 | 0.1936 | 0.9240 | | 0.1891 | 1.93 | 450 | 0.1991 | 0.9208 | | 0.1595 | 1.97 | 460 | 0.2014 | 0.9194 | | 0.2028 | 2.01 | 470 | 0.1994 | 0.9184 | | 0.1794 | 2.06 | 480 | 0.2068 | 0.9146 | | 0.1404 | 2.1 | 490 | 0.2046 | 0.9181 | | 0.1615 | 2.14 | 500 | 0.1955 | 0.9243 | | 0.1555 | 2.18 | 510 | 0.2027 | 0.9202 | | 0.151 | 2.23 | 520 | 0.1893 | 0.9261 | | 0.1676 | 2.27 | 530 | 0.2046 | 0.9192 | | 0.1744 | 2.31 | 540 | 0.1967 | 0.9218 | | 0.1644 | 2.36 | 550 | 0.1970 | 0.9226 | | 0.2048 | 2.4 | 560 | 0.1930 | 0.9243 | | 0.1649 | 2.44 | 570 | 0.1986 | 0.9218 | | 0.1435 | 2.48 | 580 | 0.1956 | 0.9213 | | 0.1598 | 2.53 | 590 | 0.1986 | 0.9197 | | 0.1513 | 2.57 | 600 | 0.2020 | 0.9173 | | 0.1769 | 2.61 | 610 | 0.2005 | 0.9170 | | 0.1488 | 2.66 | 620 | 0.2033 | 0.9197 | | 0.1636 | 2.7 | 630 | 0.1964 | 0.9216 | | 0.1583 | 2.74 | 640 | 0.1985 | 0.9189 | | 0.1294 | 2.78 | 650 | 0.2109 | 0.9151 | | 0.1585 | 2.83 | 660 | 0.2000 | 0.9186 | | 0.1531 | 2.87 | 670 | 0.2078 | 0.9178 | | 0.1294 | 2.91 | 680 | 0.1891 | 0.9272 | | 0.1612 | 2.96 | 690 | 
0.2058 | 0.9178 | ### Framework versions - Transformers 4.33.0 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
[ "not_people", "people" ]
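The card above trains with a linear scheduler and `lr_scheduler_warmup_ratio: 0.1`, which ramps the learning rate from 0 to the peak over the first 10% of optimizer steps and then decays it linearly to 0; the effective batch size of 64 comes from `train_batch_size * gradient_accumulation_steps`. A minimal sketch of both (the `total_steps` value here is illustrative, not taken from this run):

```python
def linear_warmup_lr(step, peak_lr=5e-5, total_steps=700, warmup_ratio=0.1):
    """Learning rate under linear warmup followed by linear decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * step / warmup_steps  # linear ramp up to the peak
    # linear decay from the peak back to 0 over the remaining steps
    return peak_lr * (total_steps - step) / (total_steps - warmup_steps)

# Effective batch size from the listed hyperparameters:
effective_batch = 16 * 4  # train_batch_size * gradient_accumulation_steps -> 64
```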
PatcharapornPS/food_classifier
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # PatcharapornPS/food_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4062 - Validation Loss: 0.3379 - Train Accuracy: 0.922 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 2.7669 | 1.6626 | 0.799 | 0 | | 1.2218 | 0.8541 | 0.872 | 1 | | 0.7264 | 0.5341 | 0.903 | 2 | | 0.4953 | 0.4510 | 0.894 | 3 | | 0.4062 | 0.3379 | 0.922 | 4 | ### Framework versions - Transformers 4.35.0 - TensorFlow 2.14.0 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
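The optimizer config in the card above uses a Keras `PolynomialDecay` schedule with `power: 1.0`, which reduces to a straight linear decay from `3e-05` to `0.0` over 20,000 steps. A small sketch of the decayed learning rate under that config:

```python
def polynomial_decay_lr(step, initial_lr=3e-5, decay_steps=20_000,
                        end_lr=0.0, power=1.0):
    """LR under Keras-style PolynomialDecay with the config listed above."""
    step = min(step, decay_steps)  # LR is held at end_lr beyond decay_steps
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * (frac ** power) + end_lr
```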
dima806/wildfire_types_image_detection
Predicts the wildfire type for a given image with about 90% accuracy. See https://www.kaggle.com/code/dima806/wildfire-image-detection-vit for more details.

```
Classification report:
                                             precision    recall  f1-score   support

                        Both_smoke_and_fire     0.9623    0.9091    0.9350       253
                  Fire_confounding_elements     0.9306    0.8976    0.9138       254
Forested_areas_without_confounding_elements     0.9215    0.8780    0.8992       254
                 Smoke_confounding_elements     0.8370    0.8898    0.8626       254
                           Smoke_from_fires     0.8755    0.9409    0.9070       254

                                   accuracy                         0.9031      1269
                                  macro avg     0.9054    0.9031    0.9035      1269
                               weighted avg     0.9053    0.9031    0.9035      1269
```
[ "both_smoke_and_fire", "fire_confounding_elements", "forested_areas_without_confounding_elements", "smoke_confounding_elements", "smoke_from_fires" ]
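The macro and weighted averages in the classification report above can be re-derived from the per-class F1 scores and supports; a quick sanity check:

```python
f1 = {
    "Both_smoke_and_fire": 0.9350,
    "Fire_confounding_elements": 0.9138,
    "Forested_areas_without_confounding_elements": 0.8992,
    "Smoke_confounding_elements": 0.8626,
    "Smoke_from_fires": 0.9070,
}
support = {
    "Both_smoke_and_fire": 253,
    "Fire_confounding_elements": 254,
    "Forested_areas_without_confounding_elements": 254,
    "Smoke_confounding_elements": 254,
    "Smoke_from_fires": 254,
}

# Macro average: unweighted mean of the per-class F1 scores.
macro_f1 = sum(f1.values()) / len(f1)

# Weighted average: per-class F1 weighted by class support.
total_support = sum(support.values())
weighted_f1 = sum(f1[c] * support[c] for c in f1) / total_support
```

Both round to 0.9035, matching the report.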
bdpc/resnet101_rvl-cdip-cnn_rvl_cdip-NK1000_kd
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # resnet101_rvl-cdip-cnn_rvl_cdip-NK1000_kd This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6048 - Accuracy: 0.7867 - Brier Loss: 0.3046 - Nll: 2.0167 - F1 Micro: 0.7868 - F1 Macro: 0.7867 - Ece: 0.0468 - Aurc: 0.0597 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:| | No log | 1.0 | 250 | 4.1589 | 0.1305 | 0.9320 | 7.8922 | 0.1305 | 0.0928 | 0.0637 | 0.8337 | | 4.1546 | 2.0 | 500 | 3.6898 | 0.3515 | 0.8840 | 4.7696 | 0.3515 | 0.3150 | 0.2354 | 0.5486 | | 4.1546 | 3.0 | 750 | 2.3450 | 0.4863 | 0.6606 | 3.2068 | 0.4863 | 0.4495 | 0.0978 | 0.2927 | | 2.419 | 4.0 | 1000 | 1.5206 | 0.6125 | 0.5126 | 2.7884 | 0.6125 | 0.5996 | 0.0512 | 0.1677 | | 2.419 | 5.0 | 1250 | 1.2545 | 0.6593 | 0.4574 | 2.6041 | 0.6593 | 0.6524 | 0.0483 | 0.1337 | | 1.1615 | 6.0 | 1500 | 0.9718 | 0.704 | 0.4062 | 2.4047 | 0.704 | 0.7017 | 0.0506 | 0.1043 | | 1.1615 | 7.0 | 1750 | 0.8636 | 0.73 | 0.3760 | 2.1975 | 0.7300 | 0.7304 | 0.0522 | 0.0902 | | 0.7217 | 8.0 | 2000 | 0.7892 | 0.737 | 
0.3632 | 2.1583 | 0.737 | 0.7377 | 0.0551 | 0.0835 | | 0.7217 | 9.0 | 2250 | 0.7438 | 0.754 | 0.3470 | 2.0559 | 0.754 | 0.7531 | 0.0534 | 0.0766 | | 0.5268 | 10.0 | 2500 | 0.7322 | 0.758 | 0.3443 | 2.1043 | 0.7580 | 0.7584 | 0.0510 | 0.0742 | | 0.5268 | 11.0 | 2750 | 0.7003 | 0.7632 | 0.3335 | 2.0510 | 0.7632 | 0.7639 | 0.0472 | 0.0697 | | 0.4197 | 12.0 | 3000 | 0.6921 | 0.7665 | 0.3325 | 2.0569 | 0.7665 | 0.7668 | 0.0568 | 0.0694 | | 0.4197 | 13.0 | 3250 | 0.7003 | 0.7618 | 0.3330 | 2.0293 | 0.7618 | 0.7618 | 0.0465 | 0.0721 | | 0.3575 | 14.0 | 3500 | 0.6681 | 0.7728 | 0.3244 | 2.0037 | 0.7728 | 0.7739 | 0.0505 | 0.0664 | | 0.3575 | 15.0 | 3750 | 0.6862 | 0.7718 | 0.3279 | 2.0294 | 0.7717 | 0.7727 | 0.0442 | 0.0693 | | 0.3181 | 16.0 | 4000 | 0.6681 | 0.7738 | 0.3246 | 2.0559 | 0.7738 | 0.7739 | 0.0509 | 0.0671 | | 0.3181 | 17.0 | 4250 | 0.6473 | 0.7775 | 0.3177 | 1.9978 | 0.7775 | 0.7784 | 0.0494 | 0.0644 | | 0.2874 | 18.0 | 4500 | 0.6448 | 0.78 | 0.3172 | 2.0396 | 0.78 | 0.7805 | 0.0495 | 0.0651 | | 0.2874 | 19.0 | 4750 | 0.6484 | 0.779 | 0.3153 | 2.0251 | 0.779 | 0.7790 | 0.0519 | 0.0636 | | 0.2691 | 20.0 | 5000 | 0.6430 | 0.7768 | 0.3164 | 2.0897 | 0.7768 | 0.7771 | 0.0489 | 0.0635 | | 0.2691 | 21.0 | 5250 | 0.6363 | 0.78 | 0.3145 | 2.0663 | 0.78 | 0.7802 | 0.0476 | 0.0640 | | 0.2509 | 22.0 | 5500 | 0.6327 | 0.782 | 0.3127 | 2.0358 | 0.782 | 0.7820 | 0.0440 | 0.0634 | | 0.2509 | 23.0 | 5750 | 0.6287 | 0.7863 | 0.3113 | 2.0157 | 0.7863 | 0.7865 | 0.0463 | 0.0630 | | 0.2393 | 24.0 | 6000 | 0.6315 | 0.7778 | 0.3137 | 2.0623 | 0.7778 | 0.7773 | 0.0492 | 0.0633 | | 0.2393 | 25.0 | 6250 | 0.6345 | 0.7775 | 0.3149 | 2.0397 | 0.7775 | 0.7773 | 0.0514 | 0.0635 | | 0.2291 | 26.0 | 6500 | 0.6233 | 0.7815 | 0.3102 | 1.9988 | 0.7815 | 0.7816 | 0.0444 | 0.0626 | | 0.2291 | 27.0 | 6750 | 0.6224 | 0.783 | 0.3095 | 2.0085 | 0.7830 | 0.7830 | 0.0502 | 0.0615 | | 0.2191 | 28.0 | 7000 | 0.6159 | 0.7835 | 0.3089 | 2.0340 | 0.7835 | 0.7834 | 0.0499 | 0.0614 | | 0.2191 | 29.0 | 7250 
| 0.6203 | 0.7825 | 0.3096 | 2.0280 | 0.7825 | 0.7825 | 0.0480 | 0.0617 | | 0.2139 | 30.0 | 7500 | 0.6233 | 0.7802 | 0.3093 | 2.0660 | 0.7802 | 0.7805 | 0.0518 | 0.0609 | | 0.2139 | 31.0 | 7750 | 0.6128 | 0.785 | 0.3049 | 2.0148 | 0.785 | 0.7851 | 0.0471 | 0.0604 | | 0.2068 | 32.0 | 8000 | 0.6124 | 0.7855 | 0.3064 | 2.0336 | 0.7855 | 0.7855 | 0.0433 | 0.0604 | | 0.2068 | 33.0 | 8250 | 0.6117 | 0.7835 | 0.3068 | 2.0208 | 0.7835 | 0.7833 | 0.0469 | 0.0604 | | 0.202 | 34.0 | 8500 | 0.6105 | 0.7857 | 0.3063 | 1.9918 | 0.7857 | 0.7854 | 0.0454 | 0.0611 | | 0.202 | 35.0 | 8750 | 0.6136 | 0.7877 | 0.3088 | 2.0272 | 0.7877 | 0.7884 | 0.0444 | 0.0607 | | 0.1974 | 36.0 | 9000 | 0.6095 | 0.786 | 0.3052 | 2.0275 | 0.786 | 0.7862 | 0.0423 | 0.0600 | | 0.1974 | 37.0 | 9250 | 0.6108 | 0.786 | 0.3077 | 2.0035 | 0.786 | 0.7860 | 0.0477 | 0.0606 | | 0.1945 | 38.0 | 9500 | 0.6107 | 0.7817 | 0.3078 | 2.0042 | 0.7817 | 0.7820 | 0.0482 | 0.0611 | | 0.1945 | 39.0 | 9750 | 0.6077 | 0.7875 | 0.3051 | 1.9959 | 0.7875 | 0.7878 | 0.0510 | 0.0599 | | 0.1919 | 40.0 | 10000 | 0.6099 | 0.7863 | 0.3072 | 2.0323 | 0.7863 | 0.7866 | 0.0468 | 0.0603 | | 0.1919 | 41.0 | 10250 | 0.6046 | 0.7847 | 0.3046 | 2.0113 | 0.7847 | 0.7850 | 0.0442 | 0.0600 | | 0.1874 | 42.0 | 10500 | 0.6062 | 0.7865 | 0.3059 | 2.0055 | 0.7865 | 0.7865 | 0.0486 | 0.0598 | | 0.1874 | 43.0 | 10750 | 0.6051 | 0.787 | 0.3042 | 2.0151 | 0.787 | 0.7870 | 0.0451 | 0.0596 | | 0.1859 | 44.0 | 11000 | 0.6082 | 0.7855 | 0.3063 | 2.0123 | 0.7855 | 0.7860 | 0.0470 | 0.0600 | | 0.1859 | 45.0 | 11250 | 0.6066 | 0.7867 | 0.3047 | 2.0000 | 0.7868 | 0.7865 | 0.0479 | 0.0599 | | 0.1856 | 46.0 | 11500 | 0.6049 | 0.7863 | 0.3054 | 2.0058 | 0.7863 | 0.7861 | 0.0475 | 0.0598 | | 0.1856 | 47.0 | 11750 | 0.6041 | 0.7887 | 0.3047 | 1.9992 | 0.7887 | 0.7891 | 0.0482 | 0.0595 | | 0.1842 | 48.0 | 12000 | 0.6063 | 0.7843 | 0.3055 | 2.0346 | 0.7843 | 0.7843 | 0.0480 | 0.0601 | | 0.1842 | 49.0 | 12250 | 0.6058 | 0.786 | 0.3051 | 2.0319 | 0.786 | 0.7861 | 
0.0481 | 0.0598 | | 0.1829 | 50.0 | 12500 | 0.6048 | 0.7867 | 0.3046 | 2.0167 | 0.7868 | 0.7867 | 0.0468 | 0.0597 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.2.0.dev20231002 - Datasets 2.7.1 - Tokenizers 0.13.3
[ "letter", "form", "email", "handwritten", "advertisement", "scientific_report", "scientific_publication", "specification", "file_folder", "news_article", "budget", "invoice", "presentation", "questionnaire", "resume", "memo" ]
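Besides accuracy, the card above reports a multi-class Brier loss. One common definition is the mean, over samples, of the squared distance between the predicted probability vector and the one-hot encoded label; a toy sketch under that assumption (the probabilities below are made up, not from this model):

```python
def brier_score(probs, labels, num_classes):
    """Mean squared distance between predicted probabilities and one-hot labels."""
    total = 0.0
    for p, y in zip(probs, labels):
        one_hot = [1.0 if k == y else 0.0 for k in range(num_classes)]
        total += sum((pi - oi) ** 2 for pi, oi in zip(p, one_hot))
    return total / len(probs)

# Toy example with 3 classes and 2 samples:
# sample 1 is confidently correct, sample 2 only weakly correct.
score = brier_score([[0.8, 0.1, 0.1], [0.3, 0.4, 0.3]], [0, 1], 3)
```

A confident correct prediction contributes little (0.06 here), while a hesitant one contributes much more (0.54), which is why the Brier loss in the table keeps improving even after accuracy plateaus.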
jordyvl/vit_rand_rvl-cdip_N1K
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit_rand_rvl-cdip_N1K This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.9745 - Accuracy: 0.551 - Brier Loss: 0.8083 - Nll: 3.9609 - F1 Micro: 0.551 - F1 Macro: 0.5474 - Ece: 0.3805 - Aurc: 0.2338 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:| | No log | 1.0 | 250 | 2.6207 | 0.171 | 0.9078 | 5.8097 | 0.171 | 0.1129 | 0.0606 | 0.7132 | | 2.6241 | 2.0 | 500 | 2.4608 | 0.1727 | 0.8843 | 4.0297 | 0.1727 | 0.1156 | 0.0641 | 0.6991 | | 2.6241 | 3.0 | 750 | 2.4182 | 0.2177 | 0.8659 | 4.1324 | 0.2177 | 0.1603 | 0.0802 | 0.6191 | | 2.3655 | 4.0 | 1000 | 2.2066 | 0.2828 | 0.8237 | 3.3597 | 0.2828 | 0.2456 | 0.0597 | 0.5384 | | 2.3655 | 5.0 | 1250 | 2.0873 | 0.3322 | 0.7923 | 3.2747 | 0.3322 | 0.2940 | 0.0613 | 0.4790 | | 2.0557 | 6.0 | 1500 | 1.9178 | 0.398 | 0.7392 | 3.1146 | 0.398 | 0.3639 | 0.0589 | 0.3937 | | 2.0557 | 7.0 | 1750 | 1.7861 | 0.458 | 0.7025 | 2.9045 | 0.458 | 0.4450 | 0.0778 | 0.3497 | | 1.7262 | 8.0 | 2000 | 1.7288 | 0.4535 | 
0.6821 | 2.9955 | 0.4535 | 0.4322 | 0.0528 | 0.3262 | | 1.7262 | 9.0 | 2250 | 1.6881 | 0.472 | 0.6673 | 2.8844 | 0.472 | 0.4561 | 0.0563 | 0.3120 | | 1.4846 | 10.0 | 2500 | 1.6912 | 0.4688 | 0.6633 | 2.8541 | 0.4688 | 0.4540 | 0.0718 | 0.3006 | | 1.4846 | 11.0 | 2750 | 1.6094 | 0.5022 | 0.6353 | 2.8239 | 0.5022 | 0.4859 | 0.0759 | 0.2724 | | 1.1972 | 12.0 | 3000 | 1.5364 | 0.535 | 0.6084 | 2.7911 | 0.535 | 0.5162 | 0.0905 | 0.2413 | | 1.1972 | 13.0 | 3250 | 1.5683 | 0.521 | 0.6228 | 2.7486 | 0.521 | 0.5268 | 0.1003 | 0.2559 | | 0.8678 | 14.0 | 3500 | 1.6246 | 0.5325 | 0.6246 | 2.8388 | 0.5325 | 0.5295 | 0.1304 | 0.2486 | | 0.8678 | 15.0 | 3750 | 1.7502 | 0.5138 | 0.6555 | 2.9705 | 0.5138 | 0.5093 | 0.1750 | 0.2547 | | 0.5268 | 16.0 | 4000 | 1.8375 | 0.5215 | 0.6677 | 2.9906 | 0.5215 | 0.5186 | 0.2099 | 0.2535 | | 0.5268 | 17.0 | 4250 | 1.9606 | 0.524 | 0.6895 | 3.2415 | 0.524 | 0.5174 | 0.2425 | 0.2488 | | 0.2667 | 18.0 | 4500 | 2.0553 | 0.5305 | 0.6953 | 3.2430 | 0.5305 | 0.5223 | 0.2554 | 0.2434 | | 0.2667 | 19.0 | 4750 | 2.3400 | 0.5228 | 0.7369 | 3.5472 | 0.5228 | 0.5101 | 0.2871 | 0.2605 | | 0.1513 | 20.0 | 5000 | 2.3720 | 0.5192 | 0.7472 | 3.4681 | 0.5192 | 0.5178 | 0.2982 | 0.2674 | | 0.1513 | 21.0 | 5250 | 2.4935 | 0.52 | 0.7588 | 3.4578 | 0.52 | 0.5104 | 0.3101 | 0.2586 | | 0.1164 | 22.0 | 5500 | 2.4916 | 0.5155 | 0.7625 | 3.3908 | 0.5155 | 0.5090 | 0.3129 | 0.2634 | | 0.1164 | 23.0 | 5750 | 2.5740 | 0.523 | 0.7647 | 3.4298 | 0.523 | 0.5235 | 0.3220 | 0.2601 | | 0.0883 | 24.0 | 6000 | 2.5887 | 0.5305 | 0.7598 | 3.4432 | 0.5305 | 0.5307 | 0.3194 | 0.2571 | | 0.0883 | 25.0 | 6250 | 2.7429 | 0.52 | 0.7747 | 3.7692 | 0.52 | 0.5132 | 0.3291 | 0.2696 | | 0.0739 | 26.0 | 6500 | 2.7728 | 0.5235 | 0.7828 | 3.4718 | 0.5235 | 0.5271 | 0.3399 | 0.2679 | | 0.0739 | 27.0 | 6750 | 2.7862 | 0.5335 | 0.7680 | 3.5774 | 0.5335 | 0.5352 | 0.3256 | 0.2651 | | 0.0619 | 28.0 | 7000 | 2.9449 | 0.5222 | 0.7964 | 3.6659 | 0.5222 | 0.5165 | 0.3503 | 0.2697 | | 0.0619 | 29.0 | 7250 | 
2.8872 | 0.5345 | 0.7714 | 3.5298 | 0.5345 | 0.5310 | 0.3376 | 0.2545 | | 0.0531 | 30.0 | 7500 | 2.9649 | 0.5232 | 0.7994 | 3.6119 | 0.5232 | 0.5191 | 0.3527 | 0.2714 | | 0.0531 | 31.0 | 7750 | 3.1024 | 0.5182 | 0.8112 | 3.6716 | 0.5182 | 0.5206 | 0.3639 | 0.2748 | | 0.0446 | 32.0 | 8000 | 3.0895 | 0.5218 | 0.8036 | 3.6731 | 0.5218 | 0.5226 | 0.3609 | 0.2669 | | 0.0446 | 33.0 | 8250 | 3.1813 | 0.5202 | 0.8130 | 3.6839 | 0.5202 | 0.5236 | 0.3675 | 0.2637 | | 0.0368 | 34.0 | 8500 | 3.2535 | 0.5335 | 0.8011 | 3.6982 | 0.5335 | 0.5302 | 0.3653 | 0.2572 | | 0.0368 | 35.0 | 8750 | 3.1969 | 0.5265 | 0.8021 | 3.7238 | 0.5265 | 0.5239 | 0.3649 | 0.2558 | | 0.0364 | 36.0 | 9000 | 3.3875 | 0.5165 | 0.8174 | 4.0335 | 0.5165 | 0.5051 | 0.3675 | 0.2645 | | 0.0364 | 37.0 | 9250 | 3.3883 | 0.5248 | 0.8168 | 3.8867 | 0.5248 | 0.5152 | 0.3768 | 0.2529 | | 0.0338 | 38.0 | 9500 | 3.3876 | 0.5255 | 0.8198 | 3.6397 | 0.5255 | 0.5278 | 0.3791 | 0.2679 | | 0.0338 | 39.0 | 9750 | 3.3675 | 0.5282 | 0.8201 | 3.7412 | 0.5282 | 0.5317 | 0.3774 | 0.2561 | | 0.0277 | 40.0 | 10000 | 3.6788 | 0.5005 | 0.8597 | 4.1427 | 0.5005 | 0.4880 | 0.3966 | 0.2757 | | 0.0277 | 41.0 | 10250 | 3.5608 | 0.522 | 0.8299 | 3.7769 | 0.522 | 0.5230 | 0.3828 | 0.2749 | | 0.0177 | 42.0 | 10500 | 3.6388 | 0.5275 | 0.8242 | 4.0808 | 0.5275 | 0.5134 | 0.3817 | 0.2508 | | 0.0177 | 43.0 | 10750 | 3.7068 | 0.532 | 0.8199 | 4.1084 | 0.532 | 0.5198 | 0.3809 | 0.2480 | | 0.018 | 44.0 | 11000 | 3.7589 | 0.5258 | 0.8315 | 3.9264 | 0.5258 | 0.5172 | 0.3877 | 0.2624 | | 0.018 | 45.0 | 11250 | 3.7492 | 0.518 | 0.8437 | 3.9257 | 0.518 | 0.5180 | 0.3951 | 0.2684 | | 0.0186 | 46.0 | 11500 | 3.7641 | 0.5275 | 0.8306 | 3.9749 | 0.5275 | 0.5277 | 0.3877 | 0.2595 | | 0.0186 | 47.0 | 11750 | 3.8842 | 0.52 | 0.8491 | 4.1807 | 0.52 | 0.5182 | 0.3949 | 0.2658 | | 0.0159 | 48.0 | 12000 | 3.8731 | 0.5292 | 0.8318 | 3.9345 | 0.5292 | 0.5250 | 0.3902 | 0.2618 | | 0.0159 | 49.0 | 12250 | 4.0101 | 0.519 | 0.8552 | 4.0796 | 0.519 | 0.5198 | 0.4025 | 
0.2713 | | 0.0118 | 50.0 | 12500 | 3.8631 | 0.5255 | 0.8288 | 4.0855 | 0.5255 | 0.5245 | 0.3891 | 0.2600 | | 0.0118 | 51.0 | 12750 | 3.7895 | 0.5415 | 0.8143 | 3.9602 | 0.5415 | 0.5441 | 0.3809 | 0.2506 | | 0.0125 | 52.0 | 13000 | 3.9434 | 0.523 | 0.8385 | 4.2268 | 0.523 | 0.5136 | 0.3951 | 0.2623 | | 0.0125 | 53.0 | 13250 | 3.9239 | 0.5275 | 0.8391 | 4.0398 | 0.5275 | 0.5255 | 0.3952 | 0.2632 | | 0.0087 | 54.0 | 13500 | 3.9463 | 0.5323 | 0.8307 | 4.1080 | 0.5323 | 0.5275 | 0.3905 | 0.2580 | | 0.0087 | 55.0 | 13750 | 3.8462 | 0.5367 | 0.8210 | 3.9693 | 0.5367 | 0.5375 | 0.3825 | 0.2595 | | 0.0093 | 56.0 | 14000 | 4.0603 | 0.5208 | 0.8449 | 4.2501 | 0.5208 | 0.5181 | 0.4019 | 0.2683 | | 0.0093 | 57.0 | 14250 | 3.9614 | 0.5323 | 0.8240 | 4.1335 | 0.5323 | 0.5265 | 0.3863 | 0.2517 | | 0.0082 | 58.0 | 14500 | 3.9553 | 0.548 | 0.8125 | 4.0319 | 0.548 | 0.5412 | 0.3822 | 0.2414 | | 0.0082 | 59.0 | 14750 | 3.9586 | 0.5335 | 0.8325 | 4.0338 | 0.5335 | 0.5314 | 0.3902 | 0.2582 | | 0.0069 | 60.0 | 15000 | 4.1072 | 0.531 | 0.8422 | 4.0678 | 0.531 | 0.5250 | 0.3997 | 0.2574 | | 0.0069 | 61.0 | 15250 | 4.0455 | 0.5425 | 0.8173 | 4.0318 | 0.5425 | 0.5415 | 0.3881 | 0.2480 | | 0.0054 | 62.0 | 15500 | 4.0208 | 0.531 | 0.8325 | 4.1704 | 0.531 | 0.5261 | 0.3912 | 0.2517 | | 0.0054 | 63.0 | 15750 | 4.1167 | 0.5345 | 0.8325 | 4.2352 | 0.5345 | 0.5292 | 0.3926 | 0.2537 | | 0.0054 | 64.0 | 16000 | 4.0246 | 0.5323 | 0.8339 | 4.0084 | 0.5323 | 0.5319 | 0.3940 | 0.2536 | | 0.0054 | 65.0 | 16250 | 4.0535 | 0.5417 | 0.8203 | 4.1167 | 0.5417 | 0.5340 | 0.3875 | 0.2464 | | 0.0048 | 66.0 | 16500 | 4.1987 | 0.5325 | 0.8371 | 4.2901 | 0.5325 | 0.5215 | 0.3979 | 0.2529 | | 0.0048 | 67.0 | 16750 | 4.0956 | 0.5355 | 0.8264 | 4.3477 | 0.5355 | 0.5239 | 0.3889 | 0.2449 | | 0.004 | 68.0 | 17000 | 3.9999 | 0.5423 | 0.8186 | 4.0645 | 0.5423 | 0.5453 | 0.3877 | 0.2487 | | 0.004 | 69.0 | 17250 | 4.0824 | 0.538 | 0.8229 | 4.1670 | 0.538 | 0.5350 | 0.3887 | 0.2461 | | 0.0053 | 70.0 | 17500 | 4.2158 | 0.5305 
| 0.8479 | 4.2136 | 0.5305 | 0.5287 | 0.4002 | 0.2572 | | 0.0053 | 71.0 | 17750 | 4.1586 | 0.533 | 0.8355 | 4.1576 | 0.533 | 0.5261 | 0.3942 | 0.2512 | | 0.0041 | 72.0 | 18000 | 4.0781 | 0.5375 | 0.8296 | 4.1218 | 0.5375 | 0.5341 | 0.3930 | 0.2427 | | 0.0041 | 73.0 | 18250 | 4.1389 | 0.5413 | 0.8229 | 4.0890 | 0.5413 | 0.5347 | 0.3918 | 0.2437 | | 0.0028 | 74.0 | 18500 | 4.0675 | 0.5415 | 0.8212 | 4.0429 | 0.5415 | 0.5404 | 0.3920 | 0.2415 | | 0.0028 | 75.0 | 18750 | 4.1044 | 0.5377 | 0.8294 | 4.1268 | 0.5377 | 0.5335 | 0.3955 | 0.2439 | | 0.0027 | 76.0 | 19000 | 4.0731 | 0.5435 | 0.8193 | 4.0913 | 0.5435 | 0.5396 | 0.3892 | 0.2411 | | 0.0027 | 77.0 | 19250 | 4.0768 | 0.5455 | 0.8158 | 4.0784 | 0.5455 | 0.5398 | 0.3885 | 0.2389 | | 0.0028 | 78.0 | 19500 | 4.0665 | 0.5447 | 0.8187 | 4.0719 | 0.5447 | 0.5390 | 0.3876 | 0.2392 | | 0.0028 | 79.0 | 19750 | 4.0475 | 0.5413 | 0.8204 | 4.0408 | 0.5413 | 0.5361 | 0.3927 | 0.2376 | | 0.0026 | 80.0 | 20000 | 4.0176 | 0.5457 | 0.8101 | 4.0504 | 0.5457 | 0.5424 | 0.3844 | 0.2376 | | 0.0026 | 81.0 | 20250 | 4.0408 | 0.5427 | 0.8181 | 4.0458 | 0.5427 | 0.5385 | 0.3888 | 0.2385 | | 0.0027 | 82.0 | 20500 | 4.0392 | 0.5427 | 0.8207 | 4.0317 | 0.5427 | 0.5387 | 0.3897 | 0.2392 | | 0.0027 | 83.0 | 20750 | 4.0163 | 0.545 | 0.8145 | 4.0292 | 0.545 | 0.5403 | 0.3868 | 0.2375 | | 0.0026 | 84.0 | 21000 | 4.0057 | 0.5437 | 0.8165 | 4.0096 | 0.5437 | 0.5404 | 0.3867 | 0.2380 | | 0.0026 | 85.0 | 21250 | 4.0096 | 0.544 | 0.8140 | 4.0733 | 0.544 | 0.5404 | 0.3861 | 0.2368 | | 0.0026 | 86.0 | 21500 | 3.9696 | 0.5487 | 0.8087 | 4.0527 | 0.5487 | 0.5435 | 0.3824 | 0.2352 | | 0.0026 | 87.0 | 21750 | 3.9826 | 0.5495 | 0.8103 | 4.0353 | 0.5495 | 0.5460 | 0.3820 | 0.2362 | | 0.0025 | 88.0 | 22000 | 4.0171 | 0.5455 | 0.8147 | 4.0540 | 0.5455 | 0.5402 | 0.3865 | 0.2359 | | 0.0025 | 89.0 | 22250 | 3.9745 | 0.5455 | 0.8138 | 3.9683 | 0.5455 | 0.5439 | 0.3867 | 0.2357 | | 0.0025 | 90.0 | 22500 | 3.9811 | 0.5473 | 0.8098 | 3.9749 | 0.5473 | 0.5437 | 0.3842 
| 0.2346 | | 0.0025 | 91.0 | 22750 | 3.9800 | 0.5475 | 0.8122 | 3.9502 | 0.5475 | 0.5450 | 0.3839 | 0.2353 | | 0.0025 | 92.0 | 23000 | 3.9844 | 0.5473 | 0.8103 | 3.9825 | 0.5473 | 0.5425 | 0.3840 | 0.2347 | | 0.0025 | 93.0 | 23250 | 3.9876 | 0.5485 | 0.8107 | 3.9624 | 0.5485 | 0.5441 | 0.3826 | 0.2343 | | 0.0025 | 94.0 | 23500 | 3.9751 | 0.5485 | 0.8086 | 3.9791 | 0.5485 | 0.5450 | 0.3831 | 0.2337 | | 0.0025 | 95.0 | 23750 | 3.9765 | 0.548 | 0.8087 | 3.9863 | 0.548 | 0.5440 | 0.3839 | 0.2336 | | 0.0024 | 96.0 | 24000 | 3.9764 | 0.5507 | 0.8077 | 3.9676 | 0.5507 | 0.5473 | 0.3807 | 0.2339 | | 0.0024 | 97.0 | 24250 | 3.9695 | 0.549 | 0.8082 | 3.9494 | 0.549 | 0.5456 | 0.3819 | 0.2346 | | 0.0023 | 98.0 | 24500 | 3.9733 | 0.5497 | 0.8080 | 3.9599 | 0.5497 | 0.5462 | 0.3815 | 0.2338 | | 0.0023 | 99.0 | 24750 | 3.9727 | 0.5505 | 0.8081 | 3.9563 | 0.5505 | 0.5469 | 0.3807 | 0.2339 | | 0.0023 | 100.0 | 25000 | 3.9745 | 0.551 | 0.8083 | 3.9609 | 0.551 | 0.5474 | 0.3805 | 0.2338 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
[ "letter", "form", "email", "handwritten", "advertisement", "scientific_report", "scientific_publication", "specification", "file_folder", "news_article", "budget", "invoice", "presentation", "questionnaire", "resume", "memo" ]
PedroSampaio/swin-base-patch4-window7-224-food101-24-12
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-base-patch4-window7-224-food101-24-12 This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on the food101 dataset. It achieves the following results on the evaluation set: - Loss: 0.2529 - Accuracy: 0.9312 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 96 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 12 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.9481 | 1.0 | 789 | 0.4713 | 0.8665 | | 0.7584 | 2.0 | 1578 | 0.3561 | 0.8985 | | 0.7081 | 3.0 | 2367 | 0.3190 | 0.9058 | | 0.5639 | 4.0 | 3157 | 0.2951 | 0.9127 | | 0.5106 | 5.0 | 3946 | 0.2863 | 0.9190 | | 0.4633 | 6.0 | 4735 | 0.2785 | 0.9211 | | 0.4188 | 7.0 | 5524 | 0.2704 | 0.9240 | | 0.3308 | 8.0 | 6314 | 0.2739 | 0.9226 | | 0.3853 | 9.0 | 7103 | 0.2634 | 0.9263 | | 0.2281 | 10.0 | 7892 | 0.2578 | 0.9283 | | 0.2648 | 11.0 | 8681 | 0.2586 | 0.9288 | | 0.2303 | 12.0 | 9468 | 0.2529 | 0.9312 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "apple_pie", "baby_back_ribs", "baklava", "beef_carpaccio", "beef_tartare", "beet_salad", "beignets", "bibimbap", "bread_pudding", "breakfast_burrito", "bruschetta", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare", "waffles" ]
PedroSampaio/vit-base-patch16-224-food101-24-12
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-food101-24-12 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the food101 dataset. It achieves the following results on the evaluation set: - Loss: 0.3328 - Accuracy: 0.9088 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 96 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 12 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.1313 | 1.0 | 789 | 0.7486 | 0.8388 | | 0.735 | 2.0 | 1578 | 0.4546 | 0.8795 | | 0.7166 | 3.0 | 2367 | 0.3896 | 0.8942 | | 0.5318 | 4.0 | 3157 | 0.3739 | 0.8961 | | 0.5326 | 5.0 | 3946 | 0.3576 | 0.9013 | | 0.4753 | 6.0 | 4735 | 0.3557 | 0.9006 | | 0.3764 | 7.0 | 5524 | 0.3486 | 0.904 | | 0.3399 | 8.0 | 6314 | 0.3457 | 0.9046 | | 0.3987 | 9.0 | 7103 | 0.3378 | 0.9065 | | 0.2592 | 10.0 | 7892 | 0.3393 | 0.9070 | | 0.2661 | 11.0 | 8681 | 0.3366 | 0.9080 | | 0.2632 | 12.0 | 9468 | 0.3328 | 0.9088 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "apple_pie", "baby_back_ribs", "baklava", "beef_carpaccio", "beef_tartare", "beet_salad", "beignets", "bibimbap", "bread_pudding", "breakfast_burrito", "bruschetta", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare", "waffles" ]
PedroSampaio/vit-base-patch16-224-in21k-food101-24-12
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-in21k-food101-24-12 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset. It achieves the following results on the evaluation set: - Loss: 0.3533 - Accuracy: 0.9069 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 96 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 12 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7927 | 1.0 | 789 | 2.5629 | 0.7693 | | 1.256 | 2.0 | 1578 | 0.9637 | 0.8583 | | 0.94 | 3.0 | 2367 | 0.5866 | 0.8816 | | 0.6693 | 4.0 | 3157 | 0.4752 | 0.8888 | | 0.6337 | 5.0 | 3946 | 0.4282 | 0.8941 | | 0.5811 | 6.0 | 4735 | 0.4110 | 0.8949 | | 0.4661 | 7.0 | 5524 | 0.3875 | 0.8990 | | 0.4188 | 8.0 | 6314 | 0.3776 | 0.9010 | | 0.5045 | 9.0 | 7103 | 0.3633 | 0.9049 | | 0.3437 | 10.0 | 7892 | 0.3611 | 0.9058 | | 0.3494 | 11.0 | 8681 | 0.3568 | 0.9060 | | 0.3381 | 12.0 | 9468 | 0.3533 | 0.9069 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "apple_pie", "baby_back_ribs", "baklava", "beef_carpaccio", "beef_tartare", "beet_salad", "beignets", "bibimbap", "bread_pudding", "breakfast_burrito", "bruschetta", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare", "waffles" ]
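The `total_train_batch_size: 96` in these cards is simply `train_batch_size` multiplied by `gradient_accumulation_steps`: one optimizer update happens every 4 forward passes of 24 images. A sketch of how the listed values combine — the keyword names mirror `transformers.TrainingArguments`, but the actual training script is not published:

```python
# Hyperparameters as listed in the card above; names follow
# transformers.TrainingArguments, but this is a reconstruction.
training_kwargs = {
    "learning_rate": 5e-5,
    "per_device_train_batch_size": 24,
    "per_device_eval_batch_size": 24,
    "gradient_accumulation_steps": 4,
    "lr_scheduler_type": "linear",
    "warmup_ratio": 0.1,
    "num_train_epochs": 12,
    "seed": 42,
}

# Gradients are summed over `gradient_accumulation_steps` mini-batches
# before each optimizer step, so the effective train batch size is:
effective_batch = (training_kwargs["per_device_train_batch_size"]
                   * training_kwargs["gradient_accumulation_steps"])
```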
PedroSampaio/swin-base-patch4-window7-224-in22k-food101-24-12
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-base-patch4-window7-224-in22k-food101-24-12 This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-base-patch4-window7-224-in22k) on the food101 dataset. It achieves the following results on the evaluation set: - Loss: 0.2524 - Accuracy: 0.9312 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 96 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 12 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.8657 | 1.0 | 789 | 0.4698 | 0.8663 | | 0.7506 | 2.0 | 1578 | 0.3419 | 0.9006 | | 0.6379 | 3.0 | 2367 | 0.3061 | 0.9116 | | 0.5223 | 4.0 | 3157 | 0.2906 | 0.9149 | | 0.4989 | 5.0 | 3946 | 0.2783 | 0.9205 | | 0.4163 | 6.0 | 4735 | 0.2732 | 0.9225 | | 0.3954 | 7.0 | 5524 | 0.2675 | 0.9255 | | 0.3466 | 8.0 | 6314 | 0.2710 | 0.9240 | | 0.3666 | 9.0 | 7103 | 0.2625 | 0.9275 | | 0.2085 | 10.0 | 7892 | 0.2578 | 0.9295 | | 0.263 | 11.0 | 8681 | 0.2563 | 0.9302 | | 0.2171 | 12.0 | 9468 | 0.2524 | 0.9312 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "apple_pie", "baby_back_ribs", "baklava", "beef_carpaccio", "beef_tartare", "beet_salad", "beignets", "bibimbap", "bread_pudding", "breakfast_burrito", "bruschetta", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare", "waffles" ]
Mahendra42/swin-tiny-patch4-window7-224_RCC_Classifierv4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224_RCC_Classifierv4 This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 2.7347 - F1: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:---:| | 0.0296 | 1.0 | 105 | 3.2075 | 0.0 | | 0.0377 | 2.0 | 210 | 2.6132 | 0.0 | | 0.0104 | 3.0 | 315 | 2.2246 | 0.0 | | 0.0177 | 4.0 | 420 | 2.6363 | 0.0 | | 0.007 | 5.0 | 525 | 2.6364 | 0.0 | | 0.0082 | 6.0 | 630 | 2.6554 | 0.0 | | 0.0078 | 7.0 | 735 | 2.6351 | 0.0 | | 0.0015 | 8.0 | 840 | 2.6925 | 0.0 | | 0.0073 | 9.0 | 945 | 2.7134 | 0.0 | | 0.0018 | 10.0 | 1050 | 2.7347 | 0.0 | ### Framework versions - Transformers 4.34.1 - Pytorch 1.12.1 - Datasets 2.14.5 - Tokenizers 0.14.1
[ "clear cell rcc", "non clear cell" ]
dima806/fruit_100_types_image_detection
Returns fruit type given an image with about 85% accuracy. See https://www.kaggle.com/code/dima806/fruit-100-types-image-detection-vit for more details. ``` Classification report: precision recall f1-score support abiu 0.7799 0.9056 0.8380 180 acai 0.8118 0.8389 0.8251 180 acerola 0.8701 0.8556 0.8627 180 ackee 0.9451 0.9556 0.9503 180 ambarella 0.5696 0.7278 0.6390 180 apple 0.9027 0.9278 0.9151 180 apricot 0.7046 0.9278 0.8010 180 avocado 0.9297 0.9556 0.9425 180 banana 0.9781 0.9944 0.9862 180 barbadine 0.9074 0.5444 0.6806 180 barberry 0.8122 0.8889 0.8488 180 betel_nut 0.9420 0.7222 0.8176 180 bitter_gourd 0.9888 0.9833 0.9861 180 black_berry 0.5260 0.9000 0.6639 180 black_mullberry 0.9641 0.8944 0.9280 180 brazil_nut 0.9298 0.8833 0.9060 180 camu_camu 0.8325 0.9111 0.8700 180 cashew 0.9889 0.9889 0.9889 180 cempedak 0.9706 0.5500 0.7021 180 chenet 0.7422 0.9278 0.8247 180 cherimoya 0.5869 0.6944 0.6361 180 chico 0.5940 0.4389 0.5048 180 chokeberry 0.8444 0.8444 0.8444 180 cluster_fig 0.9236 0.8056 0.8605 180 coconut 0.9167 0.9778 0.9462 180 corn_kernel 0.9781 0.9944 0.9862 180 cranberry 0.9067 0.7556 0.8242 180 cupuacu 0.8846 0.8944 0.8895 180 custard_apple 0.5000 0.0056 0.0110 180 damson 0.8687 0.9556 0.9101 180 dewberry 0.7869 0.2667 0.3983 180 dragonfruit 0.9890 0.9944 0.9917 180 durian 0.9730 1.0000 0.9863 180 eggplant 0.9833 0.9833 0.9833 180 elderberry 0.9553 0.9500 0.9526 180 emblic 0.8927 0.8778 0.8852 180 feijoa 0.9111 0.9111 0.9111 180 fig 0.8696 1.0000 0.9302 180 finger_lime 0.9647 0.9111 0.9371 180 gooseberry 0.8966 0.8667 0.8814 180 goumi 0.8020 0.9000 0.8482 180 grape 0.9661 0.9500 0.9580 180 grapefruit 0.8696 0.7778 0.8211 180 greengage 0.8434 0.7778 0.8092 180 grenadilla 0.6457 0.8000 0.7146 180 guava 0.8122 0.8889 0.8488 180 hard_kiwi 0.8367 0.9111 0.8723 180 hawthorn 0.8246 0.7833 0.8034 180 hog_plum 0.8667 0.0722 0.1333 180 horned_melon 0.9943 0.9722 0.9831 180 indian_strawberry 0.5427 0.4944 0.5174 180 jaboticaba 0.9480 0.9111 0.9292 180 
jackfruit 0.6917 0.9722 0.8083 180 jalapeno 0.9728 0.9944 0.9835 180 jamaica_cherry 0.9136 0.8222 0.8655 180 jambul 0.8750 0.8556 0.8652 180 jocote 0.7365 0.6056 0.6646 180 jujube 0.8554 0.7889 0.8208 180 kaffir_lime 0.9672 0.9833 0.9752 180 kumquat 0.8000 0.9333 0.8615 180 lablab 0.9835 0.9944 0.9890 180 langsat 0.8656 0.8944 0.8798 180 longan 0.9016 0.9667 0.9330 180 mabolo 0.9405 0.8778 0.9080 180 malay_apple 0.6173 0.5556 0.5848 180 mandarine 0.7811 0.8722 0.8241 180 mango 0.8071 0.8833 0.8435 180 mangosteen 0.9609 0.9556 0.9582 180 medlar 0.9503 0.9556 0.9529 180 mock_strawberry 0.5568 0.5722 0.5644 180 morinda 0.9727 0.9889 0.9807 180 mountain_soursop 0.9496 0.7333 0.8276 180 oil_palm 0.9053 0.9556 0.9297 180 olive 0.9704 0.9111 0.9398 180 otaheite_apple 0.5736 0.6278 0.5995 180 papaya 0.7882 0.8889 0.8355 180 passion_fruit 0.7720 0.8278 0.7989 180 pawpaw 0.8428 0.7444 0.7906 180 pea 0.9375 1.0000 0.9677 180 pineapple 1.0000 1.0000 1.0000 180 plumcot 0.8525 0.5778 0.6887 180 pomegranate 0.9418 0.9889 0.9648 180 prikly_pear 0.9834 0.9889 0.9861 180 quince 0.9399 0.9556 0.9477 180 rambutan 1.0000 1.0000 1.0000 180 raspberry 0.9206 0.9667 0.9431 180 redcurrant 0.9040 0.9944 0.9471 180 rose_hip 0.8595 0.8833 0.8712 180 rose_leaf_bramble 0.9050 0.9000 0.9025 180 salak 0.8947 0.9444 0.9189 180 santol 0.8870 0.8722 0.8796 180 sapodilla 0.5727 0.7222 0.6388 180 sea_buckthorn 0.9780 0.9889 0.9834 180 strawberry_guava 0.8407 0.8500 0.8453 180 sugar_apple 0.4711 0.9500 0.6298 180 taxus_baccata 0.9676 0.9944 0.9808 180 ugli_fruit 0.9202 0.8333 0.8746 180 white_currant 1.0000 1.0000 1.0000 180 yali_pear 0.9448 0.9500 0.9474 180 yellow_plum 0.7552 0.8056 0.7796 180 accuracy 0.8498 18000 macro avg 0.8570 0.8498 0.8417 18000 weighted avg 0.8570 0.8498 0.8417 18000 ```
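A quick way to sanity-check the report above: the F1 column is the harmonic mean of precision and recall, and since every class has the same support (180), the macro and weighted averages coincide, exactly as the bottom rows show. A small sketch using a few rows from the report:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall (0 if both are 0)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# A few per-class (precision, recall) pairs taken from the report above:
sample = {
    "banana": (0.9781, 0.9944),
    "pineapple": (1.0000, 1.0000),
    "custard_apple": (0.5000, 0.0056),
}
scores = {name: f1(p, r) for name, (p, r) in sample.items()}
```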
[ "abiu", "acai", "acerola", "ackee", "ambarella", "apple", "apricot", "avocado", "banana", "barbadine", "barberry", "betel_nut", "bitter_gourd", "black_berry", "black_mullberry", "brazil_nut", "camu_camu", "cashew", "cempedak", "chenet", "cherimoya", "chico", "chokeberry", "cluster_fig", "coconut", "corn_kernel", "cranberry", "cupuacu", "custard_apple", "damson", "dewberry", "dragonfruit", "durian", "eggplant", "elderberry", "emblic", "feijoa", "fig", "finger_lime", "gooseberry", "goumi", "grape", "grapefruit", "greengage", "grenadilla", "guava", "hard_kiwi", "hawthorn", "hog_plum", "horned_melon", "indian_strawberry", "jaboticaba", "jackfruit", "jalapeno", "jamaica_cherry", "jambul", "jocote", "jujube", "kaffir_lime", "kumquat", "lablab", "langsat", "longan", "mabolo", "malay_apple", "mandarine", "mango", "mangosteen", "medlar", "mock_strawberry", "morinda", "mountain_soursop", "oil_palm", "olive", "otaheite_apple", "papaya", "passion_fruit", "pawpaw", "pea", "pineapple", "plumcot", "pomegranate", "prikly_pear", "quince", "rambutan", "raspberry", "redcurrant", "rose_hip", "rose_leaf_bramble", "salak", "santol", "sapodilla", "sea_buckthorn", "strawberry_guava", "sugar_apple", "taxus_baccata", "ugli_fruit", "white_currant", "yali_pear", "yellow_plum" ]
jerryteps/resnet-18
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # resnet-18 This model was trained from scratch on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.9476 - Accuracy: 0.6473 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.472 | 1.0 | 252 | 1.3291 | 0.4887 | | 1.2941 | 2.0 | 505 | 1.1145 | 0.5793 | | 1.2117 | 3.0 | 757 | 1.0483 | 0.6043 | | 1.1616 | 4.0 | 1010 | 1.0137 | 0.6233 | | 1.1654 | 5.0 | 1262 | 0.9975 | 0.6291 | | 1.1297 | 6.0 | 1515 | 0.9766 | 0.6414 | | 1.0645 | 7.0 | 1767 | 0.9668 | 0.6372 | | 1.0692 | 8.0 | 2020 | 0.9603 | 0.6450 | | 1.0711 | 9.0 | 2272 | 0.9521 | 0.6425 | | 1.0344 | 9.98 | 2520 | 0.9476 | 0.6473 | ### Framework versions - Transformers 4.30.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.13.3
[ "angry", "disgusted", "fearful", "happy", "neutral", "sad", "surprised" ]
dwiedarioo/vit-base-patch16-224-in21k-euroSat
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # dwiedarioo/vit-base-patch16-224-in21k-euroSat This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0088 - Train Accuracy: 0.9996 - Train Top-3-accuracy: 1.0 - Validation Loss: 0.0258 - Validation Accuracy: 0.9948 - Validation Top-3-accuracy: 1.0 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'module': 'transformers.optimization_tf', 'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 2880, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.8999999761581421, 'beta_2': 0.9990000128746033, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}, 'registered_name': 'AdamWeightDecay'}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16 ### Training results | Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch | |:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:| | 0.3131 | 0.9169 | 0.9908 | 0.0886 | 0.9849 | 1.0 | 0 | | 0.0503 | 0.9920 | 0.9999 | 0.0427 | 0.9920 | 0.9997 | 1 | | 0.0219 | 0.9972 | 1.0 | 0.0299 | 
0.9935 | 1.0 | 2 | | 0.0112 | 0.9992 | 1.0 | 0.0261 | 0.9954 | 1.0 | 3 | | 0.0088 | 0.9996 | 1.0 | 0.0258 | 0.9948 | 1.0 | 4 | ### Framework versions - Transformers 4.35.0 - TensorFlow 2.14.0 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "meningioma_tumor", "normal", "glioma_tumor", "pituitary_tumor" ]
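The card above reports top-3 accuracy alongside plain accuracy: a prediction counts as correct when the true class is anywhere among the model's three highest-scoring classes. A plain-Python sketch of the metric:

```python
def topk_correct(scores, true_idx, k=3):
    """True if `true_idx` is among the k highest-scoring class indices."""
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    return true_idx in top

def topk_accuracy(all_scores, labels, k=3):
    """Fraction of examples whose true class lands in the top k."""
    hits = sum(topk_correct(s, y, k) for s, y in zip(all_scores, labels))
    return hits / len(labels)
```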
kjlkjl/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1912 - Accuracy: 0.9312 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5343 | 1.0 | 422 | 0.2732 | 0.902 | | 0.4702 | 2.0 | 844 | 0.2152 | 0.9238 | | 0.391 | 3.0 | 1266 | 0.1912 | 0.9312 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.0 - Tokenizers 0.15.0
[ "t - shirt / top", "trouser", "pullover", "dress", "coat", "sandal", "shirt", "sneaker", "bag", "ankle boot" ]
JLB-JLB/seizure_vit_jlb_231108_iir_adjusted
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # seizure_vit_jlb_231108_iir_adjusted This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the JLB-JLB/seizure_eeg_iirFilter_greyscale_224x224_6secWindow_adjusted dataset. It achieves the following results on the evaluation set: - Loss: 0.4198 - Roc Auc: 0.7773 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Roc Auc | |:-------------:|:-----:|:-----:|:---------------:|:-------:| | 0.3803 | 0.34 | 1000 | 0.4734 | 0.7746 | | 0.3456 | 0.68 | 2000 | 0.4863 | 0.7782 | | 0.2831 | 1.02 | 3000 | 0.4817 | 0.7897 | | 0.2781 | 1.36 | 4000 | 0.5418 | 0.7656 | | 0.2355 | 1.7 | 5000 | 0.5398 | 0.7786 | | 0.1978 | 2.04 | 6000 | 0.6121 | 0.7649 | | 0.149 | 2.38 | 7000 | 0.6402 | 0.7706 | | 0.1766 | 2.72 | 8000 | 0.6768 | 0.7610 | | 0.1496 | 3.06 | 9000 | 0.6239 | 0.7733 | | 0.155 | 3.4 | 10000 | 0.7333 | 0.7602 | | 0.1238 | 3.75 | 11000 | 0.6513 | 0.7726 | | 0.1054 | 4.09 | 12000 | 0.7551 | 0.7667 | | 0.1076 | 4.43 | 13000 | 0.8132 | 0.7627 | | 0.1321 | 4.77 | 14000 | 0.8152 | 0.7587 | ![image/png](https://cdn-uploads.huggingface.co/production/uploads/640b79908a08d0ca79456a04/K8ORF3q_Eyp_q2VjHSg_F.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/640b79908a08d0ca79456a04/Hi6zx6Abb_Y4AbpEveBHX.png) ### Framework versions - Transformers 
4.35.0 - Pytorch 2.1.0 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "bckg", "seiz" ]
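The card above evaluates with ROC AUC rather than accuracy, which suits this two-class (`bckg` vs `seiz`) setting where the classes are likely imbalanced. ROC AUC equals the probability that a randomly chosen positive example is scored above a randomly chosen negative one (ties count half); a small O(n²) plain-Python stand-in for `sklearn.metrics.roc_auc_score`:

```python
def roc_auc(scores, labels):
    """ROC AUC via pairwise ranking: P(score_pos > score_neg), ties = 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```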
dzhao114/vit-base-patch16-224-finetuned-turquoise
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-finetuned-turquoise This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0223 - Accuracy: 0.995 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5564 | 0.98 | 14 | 0.1073 | 0.975 | | 0.1181 | 1.96 | 28 | 0.0223 | 0.995 | | 0.0275 | 2.95 | 42 | 0.0127 | 0.995 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.14.6 - Tokenizers 0.13.3
[ "fake_turquoise", "turquoise" ]
tonyassi/camera-lens-focal-length
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Camera Lens Focal Length This model predicts the focal-length category of the camera lens used to capture an image. It takes in an image and returns one of the following labels: - ULTRA-WIDE - WIDE - MEDIUM - LONG-LENS - TELEPHOTO ### How to use ```python from transformers import pipeline pipe = pipeline("image-classification", model="tonyassi/camera-lens-focal-length") result = pipe('image.png') print(result) ``` ## Dataset Trained on a total of 5000 images, 1000 from each label. Images were taken from popular Hollywood movies. ### ULTRA-WIDE ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/648a824a8ca6cf9857d1349c/x9bE-CkXdKSXNhJ60yG9Q.jpeg) ### WIDE ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/648a824a8ca6cf9857d1349c/1AG65hOknZ6Tr2o-55urM.jpeg) ### MEDIUM ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/648a824a8ca6cf9857d1349c/3-JNBG3vZ5KdgM46Sq683.jpeg) ### LONG-LENS ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/648a824a8ca6cf9857d1349c/p7KXjTX5D6hydnS4K1U01.jpeg) ### TELEPHOTO ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/648a824a8ca6cf9857d1349c/u-EF60BTNcUqfFRfcz2zM.jpeg) ## Model description This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k). 
### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "long-lens", "medium", "telephoto", "ultra-wide", "wide" ]
hkivancoral/hushem_40x_deit_small_f1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_small_f1 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7923 - Accuracy: 0.8 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1638 | 0.99 | 53 | 0.4948 | 0.8222 | | 0.018 | 1.99 | 107 | 0.8208 | 0.7556 | | 0.0086 | 3.0 | 161 | 0.6473 | 0.8667 | | 0.0011 | 4.0 | 215 | 0.7960 | 0.7556 | | 0.0003 | 4.99 | 268 | 0.8013 | 0.7556 | | 0.0001 | 5.99 | 322 | 0.8035 | 0.8 | | 0.0001 | 7.0 | 376 | 0.7952 | 0.8 | | 0.0001 | 8.0 | 430 | 0.7939 | 0.8 | | 0.0001 | 8.99 | 483 | 0.7931 | 0.8 | | 0.0001 | 9.86 | 530 | 0.7923 | 0.8 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
gaborcselle/font-identifier
# font-identifier This model is a fine-tuned version of [microsoft/resnet-18](https://huggingface.co/microsoft/resnet-18) on the imagefolder dataset. Result: Loss: 0.1172; Accuracy: 0.9633 Try with any screenshot of a font, or any of the examples in [the 'samples' subfolder of this repo](https://huggingface.co/gaborcselle/font-identifier/tree/main/hf_samples). ## Model description Identify the font used in an image. Visual classifier based on ResNet18. I built this project in 1 day, with a minute-by-minute journal [on Twitter/X](https://twitter.com/gabor/status/1722300841691103467), [on Pebble.social](https://pebble.social/@gabor/111376050835874755), and [on Threads.net](https://www.threads.net/@gaborcselle/post/CzZJpJCpxTz). The code used to build this model is in this GitHub repo. ## Intended uses & limitations Identify any of 48 standard fonts from the training data. ## Training and evaluation data Trained and evaluated on the [gaborcselle/font-examples](https://huggingface.co/datasets/gaborcselle/font-examples) dataset (80/20 split). ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.0243 | 0.98 | 30 | 3.9884 | 0.0204 | | 0.8309 | 10.99 | 338 | 0.5536 | 0.8551 | | 0.3917 | 20.0 | 615 | 0.2353 | 0.9388 | | 0.2298 | 30.99 | 953 | 0.1326 | 0.9633 | | 0.1804 | 40.0 | 1230 | 0.1421 | 0.9571 | | 0.1987 | 46.99 | 1445 | 0.1250 | 0.9673 | | 0.1728 | 48.0 | 1476 | 0.1293 | 0.9633 | | 0.1337 | 48.78 | 1500 | 0.1172 | 0.9633 | ### Confusion Matrix Confusion matrix on test data. 
![image](font-identifier_confusion-matrix.png) ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.0.0 - Datasets 2.12.0 - Tokenizers 0.14.1
[ "agbalumo-regular", "alfaslabone-regular", "courier", "georgia", "helvetica", "ibmplexsans-regular", "inter-regular", "kaushanscript-regular", "lato-regular", "lobster-regular", "lora-regular", "merriweather-regular", "architectsdaughter-regular", "niconne-regular", "opensans-bold", "opensans-italic", "opensans-light", "pacifico-regular", "pixelifysans-regular", "playfairdisplay-regular", "poppins-regular", "rakkas-regular", "roboto-regular", "arial", "robotomono-regular", "robotoslab-regular", "rubik-regular", "spacemono-regular", "tahoma", "tahoma bold", "times new roman", "times new roman bold", "times new roman bold italic", "times new roman italic", "arial black", "titilliumweb-regular", "trebuchet ms", "trebuchet ms bold", "trebuchet ms bold italic", "trebuchet ms italic", "verdana", "verdana bold", "verdana bold italic", "verdana italic", "arial bold", "arial bold italic", "avenir", "bangers-regular", "blackopsone-regular" ]
1aurent/phikon-distil-mobilenet_v2-kather2016
# Model card for phikon-distil-mobilenet_v2-kather2016 This model is a distilled version of [owkin/phikon](https://huggingface.co/owkin/phikon) to a MobileNet-v2 on the [1aurent/Kather-texture-2016](https://huggingface.co/datasets/1aurent/Kather-texture-2016) dataset. ## Model Usage ### Image Classification ```python from transformers import AutoModelForImageClassification, AutoImageProcessor from urllib.request import urlopen from PIL import Image # get example histology image img = Image.open( urlopen( "https://datasets-server.huggingface.co/assets/1aurent/Kather-texture-2016/--/default/train/0/image/image.jpg" ) ) # load image_processor and model from the hub model_name = "1aurent/phikon-distil-mobilenet_v2-kather2016" image_processor = AutoImageProcessor.from_pretrained(model_name) model = AutoModelForImageClassification.from_pretrained(model_name) inputs = image_processor(img, return_tensors="pt") outputs = model(**inputs) ``` ## Citation ```bibtex @article{Filiot2023.07.21.23292757, author = {Alexandre Filiot and Ridouane Ghermi and Antoine Olivier and Paul Jacob and Lucas Fidon and Alice Mac Kain and Charlie Saillard and Jean-Baptiste Schiratti}, title = {Scaling Self-Supervised Learning for Histopathology with Masked Image Modeling}, elocation-id = {2023.07.21.23292757}, year = {2023}, doi = {10.1101/2023.07.21.23292757}, publisher = {Cold Spring Harbor Laboratory Press}, url = {https://www.medrxiv.org/content/early/2023/09/14/2023.07.21.23292757}, eprint = {https://www.medrxiv.org/content/early/2023/09/14/2023.07.21.23292757.full.pdf}, journal = {medRxiv} } ```
[ "label_0", "label_1", "label_2", "label_3", "label_4", "label_5", "label_6", "label_7" ]
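The usage snippet above stops at `outputs = model(**inputs)`. Turning the raw logits into a predicted label takes a softmax plus argmax; with the real model you would read `outputs.logits` and `model.config.id2label`, but the arithmetic can be sketched in plain Python:

```python
import math

def predict(logits, id2label):
    """Return (label, confidence) for a list of raw classification logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return id2label[best], probs[best]

# The 8 Kather-2016 classes as named in this repo's config.
id2label = {i: f"label_{i}" for i in range(8)}
label, prob = predict([0.1, 2.0, -1.0, 0.3, 0.0, 0.5, 1.2, -0.2], id2label)
```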
1aurent/phikon-distil-vit-tiny-patch16-224-kather2016
# Model card for phikon-distil-vit-tiny-patch16-224-kather2016 This model is a distilled version of [owkin/phikon](https://huggingface.co/owkin/phikon) to a TinyViT on the [1aurent/Kather-texture-2016](https://huggingface.co/datasets/1aurent/Kather-texture-2016) dataset. ## Model Usage ### Image Classification ```python from transformers import AutoModelForImageClassification, AutoImageProcessor from urllib.request import urlopen from PIL import Image # get example histology image img = Image.open( urlopen( "https://datasets-server.huggingface.co/assets/1aurent/Kather-texture-2016/--/default/train/0/image/image.jpg" ) ) # load image_processor and model from the hub model_name = "1aurent/phikon-distil-vit-tiny-patch16-224-kather2016" image_processor = AutoImageProcessor.from_pretrained(model_name) model = AutoModelForImageClassification.from_pretrained(model_name) inputs = image_processor(img, return_tensors="pt") outputs = model(**inputs) ``` ## Citation ```bibtex @article{Filiot2023.07.21.23292757, author = {Alexandre Filiot and Ridouane Ghermi and Antoine Olivier and Paul Jacob and Lucas Fidon and Alice Mac Kain and Charlie Saillard and Jean-Baptiste Schiratti}, title = {Scaling Self-Supervised Learning for Histopathology with Masked Image Modeling}, elocation-id = {2023.07.21.23292757}, year = {2023}, doi = {10.1101/2023.07.21.23292757}, publisher = {Cold Spring Harbor Laboratory Press}, url = {https://www.medrxiv.org/content/early/2023/09/14/2023.07.21.23292757}, eprint = {https://www.medrxiv.org/content/early/2023/09/14/2023.07.21.23292757.full.pdf}, journal = {medRxiv} } ```
[ "label_0", "label_1", "label_2", "label_3", "label_4", "label_5", "label_6", "label_7" ]
martyyz/vit-base-patch16-224-finetuned-flower
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-finetuned-flower This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.24.0 - Pytorch 2.1.0+cu118 - Datasets 2.7.1 - Tokenizers 0.13.3
[ "daisy", "dandelion", "roses", "sunflowers", "tulips" ]
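The `linear` scheduler listed in the hyperparameters above decays the learning rate from its initial value to zero over the course of training. A rough sketch of that behaviour, assuming no warmup (none is listed) and a hypothetical total step count:

```python
def linear_lr(step, total_steps, initial_lr=5e-5):
    """Linear decay from initial_lr at step 0 down to 0 at total_steps."""
    return initial_lr * max(0.0, 1.0 - step / total_steps)

total = 1000  # hypothetical number of optimizer steps
print(linear_lr(0, total))     # 5e-05 (initial learning rate)
print(linear_lr(500, total))   # halfway: 2.5e-05
print(linear_lr(1000, total))  # 0.0
```

This mirrors what `transformers.get_linear_schedule_with_warmup` does when the warmup step count is zero.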
ISEARobots/vit-base-patch16-224-finetuned-flower
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-finetuned-flower This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.24.0 - Pytorch 2.1.0+cu118 - Datasets 2.7.1 - Tokenizers 0.13.3
[ "daisy", "dandelion", "roses", "sunflowers", "tulips" ]
arieg/spec_cls_80
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # arieg/spec_cls_80 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.7760 - Validation Loss: 2.7406 - Train Accuracy: 0.975 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'clipnorm': 1.0, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 7200, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 4.2523 | 4.0977 | 0.5312 | 0 | | 3.8658 | 3.7068 | 0.8562 | 1 | | 3.4605 | 3.3486 | 0.9375 | 2 | | 3.0940 | 3.0254 | 0.9563 | 3 | | 2.7760 | 2.7406 | 0.975 | 4 | ### Framework versions - Transformers 4.35.0 - TensorFlow 2.14.0 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "56248", "56249", "56470", "56471", "56472", "56474", "56493", "56495", "56496", "56497", "56498", "56499", "56273", "56516", "56517", "56518", "56519", "56520", "56521", "56639", "56640", "56641", "56645", "56274", "56646", "56648", "56649", "56650", "56651", "56686", "56687", "56688", "56689", "56690", "56275", "56691", "56692", "56693", "56694", "56695", "56696", "56795", "56796", "56797", "56798", "56465", "56799", "56800", "56801", "56802", "56803", "56804", "56805", "56888", "57164", "57175", "56466", "57176", "57177", "57178", "57179", "57180", "57344", "57360", "57371", "57417", "57418", "56467", "57440", "57442", "57500", "57569", "57626", "57627", "57628", "57629", "57630", "57639", "56468", "56469" ]
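The `PolynomialDecay` config in the card above (power 1.0, end learning rate 0.0, `cycle=False`) reduces to a straight linear decay over `decay_steps`. A small sketch of the formula Keras applies, parameterized with this card's values:

```python
def polynomial_decay(step, initial_lr=3e-5, decay_steps=7200,
                     end_lr=0.0, power=1.0):
    """Keras-style PolynomialDecay with cycle=False:
    clamp the step to decay_steps, then interpolate toward end_lr."""
    step = min(step, decay_steps)
    frac = (1 - step / decay_steps) ** power
    return (initial_lr - end_lr) * frac + end_lr

print(polynomial_decay(0))      # 3e-05 (initial_learning_rate)
print(polynomial_decay(3600))   # halfway: 1.5e-05
print(polynomial_decay(7200))   # 0.0 (end_learning_rate)
```

With `power=1.0` the curve is a straight line; raising `power` makes the decay front-loaded.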
arieg/spec_cls_80_v2
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # arieg/spec_cls_80_v2 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.0698 - Validation Loss: 1.0517 - Train Accuracy: 1.0 - Epoch: 9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'clipnorm': 1.0, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 14400, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 4.2243 | 4.0115 | 0.575 | 0 | | 3.6964 | 3.4678 | 0.9125 | 1 | | 3.1703 | 2.9932 | 0.9938 | 2 | | 2.7155 | 2.5826 | 0.9938 | 3 | | 2.3313 | 2.2229 | 1.0 | 4 | | 2.0025 | 1.9208 | 1.0 | 5 | | 1.7153 | 1.6639 | 1.0 | 6 | | 1.4721 | 1.4462 | 1.0 | 7 | | 1.2586 | 1.2279 | 1.0 | 8 | | 1.0698 | 1.0517 | 1.0 | 9 | ### Framework versions - Transformers 4.35.0 - TensorFlow 2.14.0 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "56248", "56249", "56470", "56471", "56472", "56474", "56493", "56495", "56496", "56497", "56498", "56499", "56273", "56516", "56517", "56518", "56519", "56520", "56521", "56639", "56640", "56641", "56645", "56274", "56646", "56648", "56649", "56650", "56651", "56686", "56687", "56688", "56689", "56690", "56275", "56691", "56692", "56693", "56694", "56695", "56696", "56795", "56796", "56797", "56798", "56465", "56799", "56800", "56801", "56802", "56803", "56804", "56805", "56888", "57164", "57175", "56466", "57176", "57177", "57178", "57179", "57180", "57344", "57360", "57371", "57417", "57418", "56467", "57440", "57442", "57500", "57569", "57626", "57627", "57628", "57629", "57630", "57639", "56468", "56469" ]
arieg/spec_cls_80_v4
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # arieg/spec_cls_80_v4 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.5655 - Validation Loss: 1.5375 - Train Accuracy: 0.9875 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'clipnorm': 1.0, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 7200, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 3.9963 | 3.4778 | 0.8625 | 0 | | 3.0199 | 2.7171 | 0.9563 | 1 | | 2.3593 | 2.2002 | 0.9875 | 2 | | 1.9034 | 1.8255 | 0.9938 | 3 | | 1.5655 | 1.5375 | 0.9875 | 4 | ### Framework versions - Transformers 4.35.0 - TensorFlow 2.14.0 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "56248", "56249", "56470", "56471", "56472", "56474", "56493", "56495", "56496", "56497", "56498", "56499", "56273", "56516", "56517", "56518", "56519", "56520", "56521", "56639", "56640", "56641", "56645", "56274", "56646", "56648", "56649", "56650", "56651", "56686", "56687", "56688", "56689", "56690", "56275", "56691", "56692", "56693", "56694", "56695", "56696", "56795", "56796", "56797", "56798", "56465", "56799", "56800", "56801", "56802", "56803", "56804", "56805", "56888", "57164", "57175", "56466", "57176", "57177", "57178", "57179", "57180", "57344", "57360", "57371", "57417", "57418", "56467", "57440", "57442", "57500", "57569", "57626", "57627", "57628", "57629", "57630", "57639", "56468", "56469" ]
arieg/food
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # arieg/food This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.0895 - Validation Loss: 1.1136 - Train Accuracy: 0.9938 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'clipnorm': 1.0, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 7200, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 1.9763 | 1.9595 | 1.0 | 0 | | 1.7042 | 1.7030 | 0.9938 | 1 | | 1.4680 | 1.4819 | 0.9938 | 2 | | 1.2665 | 1.2830 | 0.9938 | 3 | | 1.0895 | 1.1136 | 0.9938 | 4 | ### Framework versions - Transformers 4.35.0 - TensorFlow 2.14.0 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "56248", "56249", "56470", "56471", "56472", "56474", "56493", "56495", "56496", "56497", "56498", "56499", "56273", "56516", "56517", "56518", "56519", "56520", "56521", "56639", "56640", "56641", "56645", "56274", "56646", "56648", "56649", "56650", "56651", "56686", "56687", "56688", "56689", "56690", "56275", "56691", "56692", "56693", "56694", "56695", "56696", "56795", "56796", "56797", "56798", "56465", "56799", "56800", "56801", "56802", "56803", "56804", "56805", "56888", "57164", "57175", "56466", "57176", "57177", "57178", "57179", "57180", "57344", "57360", "57371", "57417", "57418", "56467", "57440", "57442", "57500", "57569", "57626", "57627", "57628", "57629", "57630", "57639", "56468", "56469" ]
hkivancoral/hushem_40x_deit_tiny_deneme_f1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_tiny_deneme_f1 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.9116 - Accuracy: 0.8222 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0902 | 1.0 | 107 | 0.7605 | 0.7556 | | 0.033 | 2.0 | 214 | 1.1867 | 0.7556 | | 0.01 | 2.99 | 321 | 1.4752 | 0.7111 | | 0.0002 | 4.0 | 429 | 0.8179 | 0.8444 | | 0.0025 | 5.0 | 536 | 0.9159 | 0.7778 | | 0.0 | 6.0 | 643 | 0.8372 | 0.8 | | 0.0 | 6.99 | 750 | 0.8831 | 0.8 | | 0.0 | 8.0 | 858 | 0.9010 | 0.8222 | | 0.0 | 9.0 | 965 | 0.9097 | 0.8222 | | 0.0 | 9.98 | 1070 | 0.9116 | 0.8222 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
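With `lr_scheduler_warmup_ratio: 0.1`, the first 10% of optimizer steps ramp the learning rate up linearly before the linear decay begins. A hedged sketch of that schedule, taking the roughly 1070 total steps from the training table above:

```python
def linear_schedule_with_warmup(step, total_steps=1070,
                                warmup_ratio=0.1, peak_lr=5e-5):
    """Linear warmup over warmup_ratio * total_steps, then linear decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_schedule_with_warmup(0))     # 0.0 (warmup starts from zero)
print(linear_schedule_with_warmup(107))   # 5e-05 (peak, end of warmup)
print(linear_schedule_with_warmup(1070))  # 0.0 (fully decayed)
```

Warmup like this is commonly used to stabilize the early steps of fine-tuning attention-based models.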
Am22000/classifier_image
DONE
[ "invoice", "random" ]
hkivancoral/hushem_40x_deit_tiny_deneme_f2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_40x_deit_tiny_deneme_f2 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.3750 - Accuracy: 0.8 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1308 | 1.0 | 107 | 0.7189 | 0.8 | | 0.0468 | 2.0 | 214 | 1.0561 | 0.7556 | | 0.0127 | 2.99 | 321 | 1.2955 | 0.7333 | | 0.0392 | 4.0 | 429 | 1.0074 | 0.8222 | | 0.0002 | 5.0 | 536 | 1.2425 | 0.8222 | | 0.0 | 6.0 | 643 | 1.3825 | 0.7778 | | 0.0 | 6.99 | 750 | 1.3699 | 0.7778 | | 0.0 | 8.0 | 858 | 1.3717 | 0.7778 | | 0.0 | 9.0 | 965 | 1.3739 | 0.8 | | 0.0 | 9.98 | 1070 | 1.3750 | 0.8 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
xanore/results
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Intro Just a ML-2 HSE course homework done by Zaryvnykh Amaliya, DSBA201 # Results This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0381 - Accuracy: 0.9867 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 16 - seed: 1337 - gradient_accumulation_steps: 16 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0984 | 0.98 | 26 | 0.0847 | 0.9725 | | 0.0493 | 2.0 | 53 | 0.0480 | 0.9842 | | 0.0407 | 2.97 | 79 | 0.0456 | 0.9867 | | 0.033 | 3.99 | 106 | 0.0400 | 0.9858 | | 0.0261 | 4.89 | 130 | 0.0388 | 0.9892 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "cat", "dog" ]
dwiedarioo/vit-base-patch16-224-in21k-brainmri
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # dwiedarioo/vit-base-patch16-224-in21k-brainmri This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2848 - Train Accuracy: 0.9969 - Train Top-3-accuracy: 0.9992 - Validation Loss: 0.3786 - Validation Accuracy: 0.9590 - Validation Top-3-accuracy: 0.9892 - Epoch: 7 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'module': 'transformers.optimization_tf', 'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 1230, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.8999999761581421, 'beta_2': 0.9990000128746033, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}, 'registered_name': 'AdamWeightDecay'}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16 ### Training results | Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch | |:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:| | 2.2199 | 0.4215 | 0.6564 | 1.8634 | 0.5702 | 0.8099 | 0 | | 1.5448 | 0.6976 | 0.8797 | 1.3110 | 0.7603 | 0.9028 | 1 | | 1.0494 | 0.8694 | 0.9519 | 0.9507 | 0.8855 | 0.9590 | 2 | | 0.7408 | 0.9381 | 0.9824 | 0.7499 | 0.9114 | 0.9806 | 3 | | 0.5428 | 0.9756 | 0.9939 | 0.5831 | 0.9460 | 0.9849 | 4 | | 0.4169 | 0.9901 | 0.9977 | 0.4895 | 0.9525 | 0.9914 | 5 | | 0.3371 | 0.9947 | 0.9977 | 0.4194 | 0.9611 | 0.9892 | 6 | | 0.2848 | 0.9969 | 0.9992 | 0.3786 | 0.9590 | 0.9892 | 7 | ### Framework versions - Transformers 4.35.0 - TensorFlow 2.14.0 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "carcinoma", "papiloma", "astrocitoma", "glioblastoma", "meningioma", "tuberculoma", "schwannoma", "neurocitoma", "granuloma", "_normal", "ganglioglioma", "germinoma", "ependimoma", "oligodendroglioma", "meduloblastoma" ]
Santipab/Braincode-BEiT-2e-5-lion-NSCLC
This model was used for a competition in the Brain Code Camp 2023.
This model was used for a competition in the Brain Code Camp 2023.
[ "adenocarcinoma", "large.cell", "normal", "squamous.cell" ]
platzi/platzi-vit-model-edgar-elias
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # platzi-vit-model-edgar-elias This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0969 - Accuracy: 0.9774 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1391 | 3.85 | 500 | 0.0969 | 0.9774 | ### Framework versions - Transformers 4.29.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.13.3
[ "angular_leaf_spot", "bean_rust", "healthy" ]
xanore/swin-tiny-patch4-window7-224
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224 This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0220 - Accuracy: 0.9925 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0963 | 1.0 | 56 | 0.0343 | 0.9875 | | 0.0481 | 1.99 | 112 | 0.0239 | 0.9912 | | 0.0338 | 2.99 | 168 | 0.0220 | 0.9925 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "cat", "dog" ]
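The `total_train_batch_size: 128` in the card above comes from gradient accumulation: gradients from 4 successive micro-batches of 32 are combined before each optimizer step. A minimal, framework-agnostic sketch of the bookkeeping (hypothetical two-parameter gradients for illustration):

```python
def effective_batch_size(per_device_batch, accumulation_steps, num_devices=1):
    """Batch size effectively seen by each optimizer update."""
    return per_device_batch * accumulation_steps * num_devices

def accumulate(grads):
    """Average per-micro-batch gradients, as if computed on one large batch."""
    return [sum(g) / len(grads) for g in zip(*grads)]

print(effective_batch_size(32, 4))  # 128, matching this card

# Gradients for two parameters from 4 micro-batches:
micro = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
print(accumulate(micro))  # [4.0, 5.0]
```

Accumulation trades extra forward/backward passes for the memory footprint of a smaller per-device batch.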
geetanshi/image_classification
DONE!!!
[ "invoice", "random" ]
Siddharta314/beans-model-classification
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # our-model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0134 - Accuracy: 0.9925 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1469 | 3.85 | 500 | 0.0134 | 0.9925 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
arieg/4_100_2
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # arieg/4_100_2 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1097 - Validation Loss: 0.1024 - Train Accuracy: 1.0 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'clipnorm': 1.0, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 1800, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.9324 | 0.5258 | 1.0 | 0 | | 0.3769 | 0.2497 | 1.0 | 1 | | 0.1975 | 0.1603 | 1.0 | 2 | | 0.1373 | 0.1214 | 1.0 | 3 | | 0.1097 | 0.1024 | 1.0 | 4 | ### Framework versions - Transformers 4.35.0 - TensorFlow 2.14.0 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "10", "140", "2", "5" ]
arieg/4_100_s
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # arieg/4_100_s This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0361 - Validation Loss: 0.0352 - Train Accuracy: 1.0 - Epoch: 19 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'clipnorm': 1.0, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 7200, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.9729 | 0.5902 | 1.0 | 0 | | 0.4190 | 0.2874 | 1.0 | 1 | | 0.2212 | 0.1722 | 1.0 | 2 | | 0.1512 | 0.1305 | 1.0 | 3 | | 0.1192 | 0.1058 | 1.0 | 4 | | 0.1007 | 0.0926 | 1.0 | 5 | | 0.0885 | 0.0827 | 1.0 | 6 | | 0.0796 | 0.0753 | 1.0 | 7 | | 0.0726 | 0.0689 | 1.0 | 8 | | 0.0668 | 0.0636 | 1.0 | 9 | | 0.0620 | 0.0594 | 1.0 | 10 | | 0.0578 | 0.0554 | 1.0 | 11 | | 0.0541 | 0.0524 | 1.0 | 12 | | 0.0507 | 0.0494 | 1.0 | 13 | | 0.0477 | 0.0459 | 1.0 | 14 | | 0.0450 | 0.0436 | 1.0 | 15 | | 0.0425 | 0.0413 | 1.0 | 16 | | 0.0402 | 0.0392 | 1.0 | 17 | | 0.0380 | 0.0371 | 1.0 | 18 | | 0.0361 | 0.0352 | 1.0 | 19 | ### Framework versions - Transformers 4.35.0 - TensorFlow 2.14.0 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "10", "140", "2", "5" ]
ailuropod4/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0800 - Accuracy: 0.9715 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2878 | 1.0 | 95 | 0.1676 | 0.9474 | | 0.2106 | 2.0 | 190 | 0.0828 | 0.9722 | | 0.1761 | 3.0 | 285 | 0.0800 | 0.9715 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.0+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "annualcrop", "forest", "herbaceousvegetation", "highway", "industrial", "pasture", "permanentcrop", "residential", "river", "sealake" ]
hkivancoral/hushem_1x_deit_tiny_adamax_lr001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_adamax_lr001_fold1 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.5661 - Accuracy: 0.4444 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.67 | 1 | 2.2715 | 0.2667 | | No log | 2.0 | 3 | 2.0269 | 0.4 | | No log | 2.67 | 4 | 1.6111 | 0.2889 | | No log | 4.0 | 6 | 1.4755 | 0.2444 | | No log | 4.67 | 7 | 1.3818 | 0.4667 | | No log | 6.0 | 9 | 1.3523 | 0.3111 | | 1.6844 | 6.67 | 10 | 1.4010 | 0.2444 | | 1.6844 | 8.0 | 12 | 1.2634 | 0.4444 | | 1.6844 | 8.67 | 13 | 1.3983 | 0.4222 | | 1.6844 | 10.0 | 15 | 1.7897 | 0.3778 | | 1.6844 | 10.67 | 16 | 1.7305 | 0.3111 | | 1.6844 | 12.0 | 18 | 1.3560 | 0.4667 | | 1.6844 | 12.67 | 19 | 1.8545 | 0.4222 | | 1.001 | 14.0 | 21 | 2.1000 | 0.3778 | | 1.001 | 14.67 | 22 | 1.2257 | 0.4889 | | 1.001 | 16.0 | 24 | 1.2741 | 0.4444 | | 1.001 | 16.67 | 25 | 1.9098 | 0.3556 | | 1.001 | 18.0 | 27 | 1.4981 | 0.3778 | | 1.001 | 18.67 | 28 | 1.0949 | 0.4222 | | 0.7366 | 20.0 | 30 | 1.1640 | 0.4222 | | 0.7366 | 20.67 | 31 | 1.5156 | 0.3556 | | 0.7366 | 22.0 | 33 | 1.8559 | 0.3556 | | 0.7366 | 22.67 | 34 | 1.5735 | 0.4444 | | 0.7366 | 24.0 | 36 | 1.3202 | 0.4222 | | 0.7366 | 24.67 | 37 | 1.3837 | 0.4222 | | 0.7366 | 26.0 | 39 | 1.6707 | 0.4 | | 0.4908 | 26.67 | 40 | 1.8712 | 0.3778 | | 0.4908 | 28.0 | 42 | 2.1885 | 0.3556 | | 0.4908 | 28.67 | 43 | 2.0505 | 0.3556 | | 0.4908 | 30.0 | 45 | 1.6855 | 0.4 | | 0.4908 | 30.67 | 46 | 1.5304 | 0.4222 | | 0.4908 | 32.0 | 48 | 1.5067 | 0.3778 | | 0.4908 | 32.67 | 49 | 1.5442 | 0.4222 | | 0.3287 | 33.33 | 50 | 1.5661 | 0.4444 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_adamax_lr001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_adamax_lr001_fold2 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.4814 - Accuracy: 0.5333 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.67 | 1 | 4.5182 | 0.2444 | | No log | 2.0 | 3 | 1.5416 | 0.2444 | | No log | 2.67 | 4 | 1.5662 | 0.2667 | | No log | 4.0 | 6 | 1.4453 | 0.2444 | | No log | 4.67 | 7 | 1.4082 | 0.2444 | | No log | 6.0 | 9 | 1.3188 | 0.4222 | | 1.9051 | 6.67 | 10 | 1.3266 | 0.3556 | | 1.9051 | 8.0 | 12 | 1.2375 | 0.4667 | | 1.9051 | 8.67 | 13 | 1.3632 | 0.3778 | | 1.9051 | 10.0 | 15 | 1.2064 | 0.4 | | 1.9051 | 10.67 | 16 | 1.5392 | 0.2889 | | 1.9051 | 12.0 | 18 | 1.1260 | 0.4889 | | 1.9051 | 12.67 | 19 | 1.0999 | 0.4667 | | 1.1808 | 14.0 | 21 | 1.2445 | 0.4222 | | 1.1808 | 14.67 | 22 | 1.2069 | 0.4444 | | 1.1808 | 16.0 | 24 | 1.0381 | 0.4889 | | 1.1808 | 16.67 | 25 | 1.0992 | 0.5111 | | 1.1808 | 18.0 | 27 | 1.1085 | 0.5333 | | 1.1808 | 18.67 | 28 | 1.0609 | 0.5111 | | 0.899 | 20.0 | 30 | 1.1754 | 0.5333 | | 0.899 | 20.67 | 31 | 1.1214 | 0.5333 | | 0.899 | 22.0 | 33 | 1.2625 | 0.4889 | | 0.899 | 22.67 | 34 | 1.2586 | 0.5111 | | 0.899 | 24.0 | 36 | 1.3423 | 0.4667 | | 0.899 | 24.67 | 37 | 1.4290 | 0.4667 | | 0.899 | 26.0 | 39 | 1.3722 | 0.5333 | | 0.4924 | 26.67 | 40 | 1.4024 | 0.5111 | | 0.4924 | 28.0 | 42 | 1.3396 | 0.5111 | | 0.4924 | 28.67 | 43 | 1.4100 | 0.4444 | | 0.4924 | 30.0 | 45 | 1.5561 | 0.4889 | | 0.4924 | 30.67 | 46 | 1.5223 | 0.5556 | | 0.4924 | 32.0 | 48 | 1.4581 | 0.5778 | | 0.4924 | 32.67 | 49 | 1.4627 | 0.5556 | | 0.1685 | 33.33 | 50 | 1.4814 | 0.5333 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
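The cards above all train with `lr_scheduler_type: linear` and `lr_scheduler_warmup_ratio: 0.1` over 50 total steps. As a rough illustration of what that schedule does to the base learning rate (e.g. 0.001 for the lr001 runs), here is a dependency-free sketch of a linear warmup-then-decay multiplier; the function name is ours, not the Trainer's actual implementation:

```python
def linear_schedule(step, total_steps, warmup_ratio=0.1):
    """Linear warmup to 1.0, then linear decay to 0.0.

    Returns the multiplier applied to the base learning rate.
    """
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    # Linear decay from 1.0 at the end of warmup to 0.0 at total_steps.
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# With 50 steps and a 0.1 warmup ratio, warmup lasts the first 5 steps.
multipliers = [round(linear_schedule(s, 50), 2) for s in (0, 5, 25, 50)]
```

This mirrors the shape of `get_linear_schedule_with_warmup` in `transformers`, reimplemented here without any dependencies so the numbers are easy to check by hand.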
hkivancoral/hushem_1x_deit_tiny_adamax_lr001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_adamax_lr001_fold3 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.6696 - Accuracy: 0.5814 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.67 | 1 | 4.2125 | 0.2558 | | No log | 2.0 | 3 | 1.4682 | 0.2558 | | No log | 2.67 | 4 | 1.6910 | 0.2558 | | No log | 4.0 | 6 | 1.4476 | 0.2558 | | No log | 4.67 | 7 | 1.3895 | 0.2558 | | No log | 6.0 | 9 | 1.3751 | 0.2558 | | 1.9073 | 6.67 | 10 | 1.3741 | 0.3953 | | 1.9073 | 8.0 | 12 | 1.3957 | 0.3488 | | 1.9073 | 8.67 | 13 | 1.3369 | 0.4419 | | 1.9073 | 10.0 | 15 | 1.2847 | 0.4186 | | 1.9073 | 10.67 | 16 | 1.3400 | 0.3953 | | 1.9073 | 12.0 | 18 | 1.2676 | 0.3953 | | 1.9073 | 12.67 | 19 | 1.2806 | 0.3721 | | 1.1656 | 14.0 | 21 | 1.3652 | 0.3023 | | 1.1656 | 14.67 | 22 | 1.3370 | 0.4419 | | 1.1656 | 16.0 | 24 | 1.5165 | 0.3721 | | 1.1656 | 16.67 | 25 | 1.5828 | 0.3953 | | 1.1656 | 18.0 | 27 | 1.3210 | 0.3953 | | 1.1656 | 18.67 | 28 | 1.3473 | 0.4419 | | 0.9249 | 20.0 | 30 | 1.4346 | 0.4651 | | 0.9249 | 20.67 | 31 | 1.3840 | 0.3953 | | 0.9249 | 22.0 | 33 | 1.3578 | 0.4884 | | 0.9249 | 22.67 | 34 | 1.3339 | 0.4884 | | 0.9249 | 24.0 | 36 | 1.3509 | 0.4884 | | 0.9249 | 24.67 | 37 | 1.3931 | 0.4884 | | 0.9249 | 26.0 | 39 | 1.5691 | 0.5116 | | 0.5495 | 26.67 | 40 | 1.5953 | 0.5349 | | 0.5495 | 28.0 | 42 | 1.6688 | 0.5814 | | 0.5495 | 28.67 | 43 | 1.6795 | 0.5581 | | 0.5495 | 30.0 | 45 | 1.6839 | 0.5814 | | 0.5495 | 30.67 | 46 | 1.6666 | 0.5814 | | 0.5495 | 32.0 | 48 | 1.6555 | 0.5814 | | 0.5495 | 32.67 | 49 | 1.6646 | 0.5814 | | 0.2333 | 33.33 | 50 | 1.6696 | 0.5814 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_adamax_lr001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_adamax_lr001_fold4 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7197 - Accuracy: 0.7619 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.67 | 1 | 4.5038 | 0.2619 | | No log | 2.0 | 3 | 1.5021 | 0.2381 | | No log | 2.67 | 4 | 1.6655 | 0.2619 | | No log | 4.0 | 6 | 1.3927 | 0.2381 | | No log | 4.67 | 7 | 1.4664 | 0.2381 | | No log | 6.0 | 9 | 1.4341 | 0.2381 | | 1.9815 | 6.67 | 10 | 1.3866 | 0.5238 | | 1.9815 | 8.0 | 12 | 1.4168 | 0.2381 | | 1.9815 | 8.67 | 13 | 1.3770 | 0.2381 | | 1.9815 | 10.0 | 15 | 1.3099 | 0.2619 | | 1.9815 | 10.67 | 16 | 1.3229 | 0.2381 | | 1.9815 | 12.0 | 18 | 1.2134 | 0.5 | | 1.9815 | 12.67 | 19 | 1.1451 | 0.5238 | | 1.3526 | 14.0 | 21 | 1.1341 | 0.6429 | | 1.3526 | 14.67 | 22 | 0.9936 | 0.5952 | | 1.3526 | 16.0 | 24 | 0.8768 | 0.6905 | | 1.3526 | 16.67 | 25 | 0.9003 | 0.7143 | | 1.3526 | 18.0 | 27 | 0.7438 | 0.7857 | | 1.3526 | 18.67 | 28 | 0.6744 | 0.7143 | | 1.0291 | 20.0 | 30 | 0.6946 | 0.7381 | | 1.0291 | 20.67 | 31 | 0.6723 | 0.7381 | | 1.0291 | 22.0 | 33 | 0.7030 | 0.7619 | | 1.0291 | 22.67 | 34 | 0.6565 | 0.7857 | | 1.0291 | 24.0 | 36 | 0.6394 | 0.7619 | | 1.0291 | 24.67 | 37 | 0.7519 | 0.7143 | | 1.0291 | 26.0 | 39 | 0.7489 | 0.6667 | | 0.712 | 26.67 | 40 | 0.5267 | 0.8095 | | 0.712 | 28.0 | 42 | 0.6166 | 0.7619 | | 0.712 | 28.67 | 43 | 0.7873 | 0.7143 | | 0.712 | 30.0 | 45 | 0.8388 | 0.7619 | | 0.712 | 30.67 | 46 | 0.7831 | 0.7381 | | 0.712 | 32.0 | 48 | 0.7151 | 0.7619 | | 0.712 | 32.67 | 49 | 0.7126 | 0.7619 | | 0.4557 | 33.33 | 50 | 0.7197 | 0.7619 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_adamax_lr001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_adamax_lr001_fold5 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.1657 - Accuracy: 0.6585 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.67 | 1 | 4.7722 | 0.2439 | | No log | 2.0 | 3 | 1.4567 | 0.2439 | | No log | 2.67 | 4 | 1.8233 | 0.2683 | | No log | 4.0 | 6 | 1.3918 | 0.2439 | | No log | 4.67 | 7 | 1.4247 | 0.2195 | | No log | 6.0 | 9 | 1.3988 | 0.2439 | | 1.9646 | 6.67 | 10 | 1.3700 | 0.3415 | | 1.9646 | 8.0 | 12 | 1.3164 | 0.3902 | | 1.9646 | 8.67 | 13 | 1.2953 | 0.3902 | | 1.9646 | 10.0 | 15 | 1.0825 | 0.5366 | | 1.9646 | 10.67 | 16 | 0.9280 | 0.7561 | | 1.9646 | 12.0 | 18 | 0.9474 | 0.5610 | | 1.9646 | 12.67 | 19 | 0.9791 | 0.5122 | | 1.1934 | 14.0 | 21 | 1.3039 | 0.3902 | | 1.1934 | 14.67 | 22 | 1.3242 | 0.3902 | | 1.1934 | 16.0 | 24 | 0.8880 | 0.6341 | | 1.1934 | 16.67 | 25 | 0.8367 | 0.6341 | | 1.1934 | 18.0 | 27 | 0.8476 | 0.6098 | | 1.1934 | 18.67 | 28 | 0.9406 | 0.5854 | | 0.8297 | 20.0 | 30 | 1.1819 | 0.4878 | | 0.8297 | 20.67 | 31 | 0.9194 | 0.5610 | | 0.8297 | 22.0 | 33 | 0.7486 | 0.6829 | | 0.8297 | 22.67 | 34 | 1.1493 | 0.6341 | | 0.8297 | 24.0 | 36 | 1.2217 | 0.5854 | | 0.8297 | 24.67 | 37 | 0.7746 | 0.6829 | | 0.8297 | 26.0 | 39 | 0.8320 | 0.6585 | | 0.5433 | 26.67 | 40 | 1.2210 | 0.5610 | | 0.5433 | 28.0 | 42 | 1.3782 | 0.5366 | | 0.5433 | 28.67 | 43 | 1.1529 | 0.6098 | | 0.5433 | 30.0 | 45 | 1.0361 | 0.6585 | | 0.5433 | 30.67 | 46 | 1.1089 | 0.6585 | | 0.5433 | 32.0 | 48 | 1.1802 | 0.6098 | | 0.5433 | 32.67 | 49 | 1.1774 | 0.6585 | | 0.2758 | 33.33 | 50 | 1.1657 | 0.6585 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_adamax_lr0001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_adamax_lr0001_fold1 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.3365 - Accuracy: 0.5778 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.67 | 1 | 1.6054 | 0.2444 | | No log | 2.0 | 3 | 1.3436 | 0.3556 | | No log | 2.67 | 4 | 1.3392 | 0.2889 | | No log | 4.0 | 6 | 1.3661 | 0.2444 | | No log | 4.67 | 7 | 1.3117 | 0.3333 | | No log | 6.0 | 9 | 1.4031 | 0.2889 | | 1.1803 | 6.67 | 10 | 1.2845 | 0.4222 | | 1.1803 | 8.0 | 12 | 1.3559 | 0.3333 | | 1.1803 | 8.67 | 13 | 1.3178 | 0.4 | | 1.1803 | 10.0 | 15 | 1.1302 | 0.5778 | | 1.1803 | 10.67 | 16 | 1.2145 | 0.5556 | | 1.1803 | 12.0 | 18 | 1.3484 | 0.4 | | 1.1803 | 12.67 | 19 | 1.1709 | 0.5333 | | 0.3935 | 14.0 | 21 | 1.1495 | 0.5556 | | 0.3935 | 14.67 | 22 | 1.2656 | 0.4889 | | 0.3935 | 16.0 | 24 | 1.1929 | 0.5333 | | 0.3935 | 16.67 | 25 | 1.1205 | 0.5556 | | 0.3935 | 18.0 | 27 | 1.1729 | 0.5333 | | 0.3935 | 18.67 | 28 | 1.2656 | 0.5111 | | 0.0911 | 20.0 | 30 | 1.3172 | 0.5556 | | 0.0911 | 20.67 | 31 | 1.2343 | 0.5556 | | 0.0911 | 22.0 | 33 | 1.1439 | 0.6 | | 0.0911 | 22.67 | 34 | 1.1167 | 0.6222 | | 0.0911 | 24.0 | 36 | 1.1537 | 0.6 | | 0.0911 | 24.67 | 37 | 1.2658 | 0.5778 | | 0.0911 | 26.0 | 39 | 1.3705 | 0.5556 | | 0.0269 | 26.67 | 40 | 1.3468 | 0.5778 | | 0.0269 | 28.0 | 42 | 1.2914 | 0.6 | | 0.0269 | 28.67 | 43 | 1.2807 | 0.6 | | 0.0269 | 30.0 | 45 | 1.2833 | 0.6 | | 0.0269 | 30.67 | 46 | 1.3004 | 0.5778 | | 0.0269 | 32.0 | 48 | 1.3271 | 0.5778 | | 0.0269 | 32.67 | 49 | 1.3342 | 0.5778 | | 0.0102 | 33.33 | 50 | 1.3365 | 0.5778 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_adamax_lr0001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_adamax_lr0001_fold2 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.9509 - Accuracy: 0.4667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.67 | 1 | 1.8647 | 0.2444 | | No log | 2.0 | 3 | 1.3951 | 0.2667 | | No log | 2.67 | 4 | 1.3922 | 0.2889 | | No log | 4.0 | 6 | 1.4419 | 0.2889 | | No log | 4.67 | 7 | 1.4528 | 0.2889 | | No log | 6.0 | 9 | 1.4637 | 0.3111 | | 1.1875 | 6.67 | 10 | 1.3956 | 0.3333 | | 1.1875 | 8.0 | 12 | 1.3768 | 0.4 | | 1.1875 | 8.67 | 13 | 1.4359 | 0.3778 | | 1.1875 | 10.0 | 15 | 1.4704 | 0.4 | | 1.1875 | 10.67 | 16 | 1.4280 | 0.3778 | | 1.1875 | 12.0 | 18 | 1.3838 | 0.4667 | | 1.1875 | 12.67 | 19 | 1.4103 | 0.4444 | | 0.4412 | 14.0 | 21 | 1.5312 | 0.4222 | | 0.4412 | 14.67 | 22 | 1.6068 | 0.4444 | | 0.4412 | 16.0 | 24 | 1.5834 | 0.4222 | | 0.4412 | 16.67 | 25 | 1.5809 | 0.4222 | | 0.4412 | 18.0 | 27 | 1.5887 | 0.4444 | | 0.4412 | 18.67 | 28 | 1.6461 | 0.4222 | | 0.0689 | 20.0 | 30 | 1.7954 | 0.4222 | | 0.0689 | 20.67 | 31 | 1.8270 | 0.4444 | | 0.0689 | 22.0 | 33 | 1.8461 | 0.4667 | | 0.0689 | 22.67 | 34 | 1.8602 | 0.4667 | | 0.0689 | 24.0 | 36 | 1.8842 | 0.4444 | | 0.0689 | 24.67 | 37 | 1.8985 | 0.4444 | | 0.0689 | 26.0 | 39 | 1.9148 | 0.4444 | | 0.0084 | 26.67 | 40 | 1.9205 | 0.4222 | | 0.0084 | 28.0 | 42 | 1.9310 | 0.4444 | | 0.0084 | 28.67 | 43 | 1.9361 | 0.4444 | | 0.0084 | 30.0 | 45 | 1.9427 | 0.4444 | | 0.0084 | 30.67 | 46 | 1.9452 | 0.4667 | | 0.0084 | 32.0 | 48 | 1.9490 | 0.4667 | | 0.0084 | 32.67 | 49 | 1.9503 | 0.4667 | | 0.0036 | 33.33 | 50 | 1.9509 | 0.4667 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_adamax_lr0001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_adamax_lr0001_fold3 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8641 - Accuracy: 0.6977 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.67 | 1 | 1.6041 | 0.2558 | | No log | 2.0 | 3 | 1.2890 | 0.3953 | | No log | 2.67 | 4 | 1.2944 | 0.3023 | | No log | 4.0 | 6 | 1.2013 | 0.4186 | | No log | 4.67 | 7 | 1.1135 | 0.4186 | | No log | 6.0 | 9 | 1.0796 | 0.5349 | | 1.2559 | 6.67 | 10 | 1.0570 | 0.5581 | | 1.2559 | 8.0 | 12 | 1.1038 | 0.4884 | | 1.2559 | 8.67 | 13 | 1.0764 | 0.4884 | | 1.2559 | 10.0 | 15 | 0.9749 | 0.5349 | | 1.2559 | 10.67 | 16 | 0.9354 | 0.5581 | | 1.2559 | 12.0 | 18 | 0.9274 | 0.6279 | | 1.2559 | 12.67 | 19 | 0.9435 | 0.6512 | | 0.4315 | 14.0 | 21 | 0.9225 | 0.6512 | | 0.4315 | 14.67 | 22 | 0.9168 | 0.6279 | | 0.4315 | 16.0 | 24 | 0.8830 | 0.6279 | | 0.4315 | 16.67 | 25 | 0.8956 | 0.6512 | | 0.4315 | 18.0 | 27 | 0.9038 | 0.6744 | | 0.4315 | 18.67 | 28 | 0.8913 | 0.6744 | | 0.058 | 20.0 | 30 | 0.8683 | 0.6512 | | 0.058 | 20.67 | 31 | 0.8553 | 0.6744 | | 0.058 | 22.0 | 33 | 0.8508 | 0.6977 | | 0.058 | 22.67 | 34 | 0.8546 | 0.6977 | | 0.058 | 24.0 | 36 | 0.8627 | 0.6977 | | 0.058 | 24.67 | 37 | 0.8639 | 0.6977 | | 0.058 | 26.0 | 39 | 0.8636 | 0.7209 | | 0.0086 | 26.67 | 40 | 0.8627 | 0.7209 | | 0.0086 | 28.0 | 42 | 0.8622 | 0.7209 | | 0.0086 | 28.67 | 43 | 0.8622 | 0.6977 | | 0.0086 | 30.0 | 45 | 0.8629 | 0.6977 | | 0.0086 | 30.67 | 46 | 0.8632 | 0.6977 | | 0.0086 | 32.0 | 48 | 0.8638 | 0.6977 | | 0.0086 | 32.67 | 49 | 0.8640 | 0.6977 | | 0.004 | 33.33 | 50 | 0.8641 | 0.6977 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_adamax_lr0001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_adamax_lr0001_fold4 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.5599 - Accuracy: 0.8333 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.67 | 1 | 1.6749 | 0.3095 | | No log | 2.0 | 3 | 1.3545 | 0.3333 | | No log | 2.67 | 4 | 1.3451 | 0.2857 | | No log | 4.0 | 6 | 1.2535 | 0.5238 | | No log | 4.67 | 7 | 1.2290 | 0.4286 | | No log | 6.0 | 9 | 1.1555 | 0.5 | | 1.2457 | 6.67 | 10 | 1.0938 | 0.5 | | 1.2457 | 8.0 | 12 | 0.9608 | 0.4762 | | 1.2457 | 8.67 | 13 | 0.8825 | 0.5952 | | 1.2457 | 10.0 | 15 | 0.7678 | 0.7143 | | 1.2457 | 10.67 | 16 | 0.7184 | 0.7857 | | 1.2457 | 12.0 | 18 | 0.6658 | 0.7619 | | 1.2457 | 12.67 | 19 | 0.6361 | 0.7619 | | 0.4167 | 14.0 | 21 | 0.6247 | 0.8095 | | 0.4167 | 14.67 | 22 | 0.6111 | 0.7857 | | 0.4167 | 16.0 | 24 | 0.5896 | 0.7857 | | 0.4167 | 16.67 | 25 | 0.5886 | 0.7381 | | 0.4167 | 18.0 | 27 | 0.6107 | 0.7619 | | 0.4167 | 18.67 | 28 | 0.6198 | 0.7619 | | 0.0627 | 20.0 | 30 | 0.6194 | 0.7619 | | 0.0627 | 20.67 | 31 | 0.6092 | 0.7619 | | 0.0627 | 22.0 | 33 | 0.5917 | 0.7857 | | 0.0627 | 22.67 | 34 | 0.5871 | 0.7857 | | 0.0627 | 24.0 | 36 | 0.5872 | 0.8095 | | 0.0627 | 24.67 | 37 | 0.5896 | 0.8095 | | 0.0627 | 26.0 | 39 | 0.5921 | 0.8095 | | 0.0081 | 26.67 | 40 | 0.5908 | 0.8095 | | 0.0081 | 28.0 | 42 | 0.5818 | 0.8095 | | 0.0081 | 28.67 | 43 | 0.5772 | 0.8095 | | 0.0081 | 30.0 | 45 | 0.5685 | 0.8095 | | 0.0081 | 30.67 | 46 | 0.5654 | 0.8095 | | 0.0081 | 32.0 | 48 | 0.5614 | 0.8333 | | 0.0081 | 32.67 | 49 | 0.5603 | 0.8333 | | 0.0038 | 33.33 | 50 | 0.5599 | 0.8333 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
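These cards report `total_train_batch_size: 128`, i.e. `train_batch_size: 32` times `gradient_accumulation_steps: 4`. To see why accumulating over four micro-batches of 32 is equivalent to one optimizer step on a batch of 128, here is a toy, framework-free sketch (the function names are illustrative and a scalar quadratic loss stands in for the network):

```python
def grad(w, batch):
    """Gradient of the mean loss 0.5*(w - x)**2 over one micro-batch."""
    return sum(w - x for x in batch) / len(batch)

def accumulate_and_step(w, micro_batches, lr=0.1):
    """One SGD step after accumulating gradients over several micro-batches.

    With equal-sized micro-batches, the averaged accumulated gradient equals
    the gradient over the concatenated batch, which is why the Trainer
    reports an effective batch size of 32 * 4 = 128.
    """
    accumulated = sum(grad(w, b) for b in micro_batches) / len(micro_batches)
    return w - lr * accumulated
```

(Adam/Adamax steps are not a plain `w - lr * g`, but the accumulation logic, summing gradients before a single update, is the same.)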
hkivancoral/hushem_1x_deit_tiny_adamax_lr0001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_adamax_lr0001_fold5 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8924 - Accuracy: 0.6585 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.67 | 1 | 1.9330 | 0.2439 | | No log | 2.0 | 3 | 1.4362 | 0.3659 | | No log | 2.67 | 4 | 1.3806 | 0.3902 | | No log | 4.0 | 6 | 1.3304 | 0.4634 | | No log | 4.67 | 7 | 1.3017 | 0.4390 | | No log | 6.0 | 9 | 1.1836 | 0.4878 | | 1.2323 | 6.67 | 10 | 1.1688 | 0.5610 | | 1.2323 | 8.0 | 12 | 1.1361 | 0.5366 | | 1.2323 | 8.67 | 13 | 1.1291 | 0.5366 | | 1.2323 | 10.0 | 15 | 1.0782 | 0.6098 | | 1.2323 | 10.67 | 16 | 1.0358 | 0.6585 | | 1.2323 | 12.0 | 18 | 1.0020 | 0.6098 | | 1.2323 | 12.67 | 19 | 1.0059 | 0.6098 | | 0.3527 | 14.0 | 21 | 0.9293 | 0.6098 | | 0.3527 | 14.67 | 22 | 0.9162 | 0.6341 | | 0.3527 | 16.0 | 24 | 0.9233 | 0.6098 | | 0.3527 | 16.67 | 25 | 0.9213 | 0.6098 | | 0.3527 | 18.0 | 27 | 0.9193 | 0.6098 | | 0.3527 | 18.67 | 28 | 0.9345 | 0.6098 | | 0.04 | 20.0 | 30 | 0.8872 | 0.6585 | | 0.04 | 20.67 | 31 | 0.8549 | 0.6829 | | 0.04 | 22.0 | 33 | 0.8221 | 0.6829 | | 0.04 | 22.67 | 34 | 0.8117 | 0.7073 | | 0.04 | 24.0 | 36 | 0.8041 | 0.7561 | | 0.04 | 24.67 | 37 | 0.8128 | 0.7561 | | 0.04 | 26.0 | 39 | 0.8413 | 0.6829 | | 0.0062 | 26.67 | 40 | 0.8565 | 0.6585 | | 0.0062 | 28.0 | 42 | 0.8789 | 0.6585 | | 0.0062 | 28.67 | 43 | 0.8864 | 0.6585 | | 0.0062 | 30.0 | 45 | 0.8920 | 0.6585 | | 0.0062 | 30.67 | 46 | 0.8925 | 0.6585 | | 0.0062 | 32.0 | 48 | 0.8929 | 0.6585 | | 0.0062 | 32.67 | 49 | 0.8927 | 0.6585 | | 0.0031 | 33.33 | 50 | 0.8924 | 0.6585 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
danielcfox/food_classifier
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # danielcfox/food_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3752 - Validation Loss: 0.3389 - Train Accuracy: 0.917 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 2.7929 | 1.6468 | 0.827 | 0 | | 1.2217 | 0.7691 | 0.92 | 1 | | 0.7054 | 0.5002 | 0.916 | 2 | | 0.4851 | 0.3574 | 0.927 | 3 | | 0.3752 | 0.3389 | 0.917 | 4 | ### Framework versions - Transformers 4.35.0 - TensorFlow 2.14.0 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
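The Keras card above schedules its learning rate with `PolynomialDecay` at `power: 1.0`, which reduces to plain linear decay from `initial_learning_rate: 3e-05` to `end_learning_rate: 0.0` over `decay_steps: 20000`. A small sketch of the formula (reimplemented in plain Python rather than imported from TensorFlow, so treat it as illustrative):

```python
def polynomial_decay(step, initial_lr=3e-05, end_lr=0.0,
                     decay_steps=20000, power=1.0):
    """lr = (initial - end) * (1 - step/decay_steps)**power + end."""
    step = min(step, decay_steps)  # with cycle=False the step is clamped
    fraction = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * fraction ** power + end_lr
```

With `power=1.0` the rate falls in a straight line: full rate at step 0, half at step 10000, and zero from step 20000 onward.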
hkivancoral/hushem_1x_deit_tiny_adamax_lr00001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_adamax_lr00001_fold1 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.3005 - Accuracy: 0.4222 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.67 | 1 | 1.4838 | 0.2222 | | No log | 2.0 | 3 | 1.4436 | 0.2222 | | No log | 2.67 | 4 | 1.4334 | 0.1778 | | No log | 4.0 | 6 | 1.4190 | 0.2667 | | No log | 4.67 | 7 | 1.4121 | 0.2889 | | No log | 6.0 | 9 | 1.3991 | 0.3111 | | 1.3869 | 6.67 | 10 | 1.3926 | 0.3333 | | 1.3869 | 8.0 | 12 | 1.3807 | 0.3556 | | 1.3869 | 8.67 | 13 | 1.3748 | 0.3556 | | 1.3869 | 10.0 | 15 | 1.3643 | 0.3778 | | 1.3869 | 10.67 | 16 | 1.3598 | 0.3778 | | 1.3869 | 12.0 | 18 | 1.3511 | 0.4 | | 1.3869 | 12.67 | 19 | 1.3478 | 0.3778 | | 1.1228 | 14.0 | 21 | 1.3405 | 0.4 | | 1.1228 | 14.67 | 22 | 1.3380 | 0.4 | | 1.1228 | 16.0 | 24 | 1.3323 | 0.4222 | | 1.1228 | 16.67 | 25 | 1.3292 | 0.4222 | | 1.1228 | 18.0 | 27 | 1.3250 | 0.4222 | | 1.1228 | 18.67 | 28 | 1.3231 | 0.4222 | | 0.9505 | 20.0 | 30 | 1.3201 | 0.4222 | | 0.9505 | 20.67 | 31 | 1.3189 | 0.4222 | | 0.9505 | 22.0 | 33 | 1.3162 | 0.4222 | | 0.9505 | 22.67 | 34 | 1.3147 | 0.4222 | | 0.9505 | 24.0 | 36 | 1.3120 | 0.4222 | | 0.9505 | 24.67 | 37 | 1.3113 | 0.4222 | | 0.9505 | 26.0 | 39 | 1.3090 | 0.4222 | | 0.8411 | 26.67 | 40 | 1.3078 | 0.4222 | | 0.8411 | 28.0 | 42 | 1.3057 | 0.4222 | | 0.8411 | 28.67 | 43 | 1.3047 | 0.4222 | | 0.8411 | 30.0 | 45 | 1.3028 | 0.4222 | | 0.8411 | 30.67 | 46 | 1.3020 | 0.4222 | | 0.8411 | 32.0 | 48 | 1.3010 | 0.4222 | | 0.8411 | 32.67 | 49 | 1.3007 | 0.4222 | | 0.7881 | 33.33 | 50 | 1.3005 | 0.4222 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_adamax_lr00001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_adamax_lr00001_fold2 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.4306 - Accuracy: 0.2889 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.67 | 1 | 1.6419 | 0.1111 | | No log | 2.0 | 3 | 1.5215 | 0.1556 | | No log | 2.67 | 4 | 1.5000 | 0.1778 | | No log | 4.0 | 6 | 1.4887 | 0.2444 | | No log | 4.67 | 7 | 1.4849 | 0.2222 | | No log | 6.0 | 9 | 1.4754 | 0.2444 | | 1.3642 | 6.67 | 10 | 1.4684 | 0.2667 | | 1.3642 | 8.0 | 12 | 1.4571 | 0.2667 | | 1.3642 | 8.67 | 13 | 1.4523 | 0.2667 | | 1.3642 | 10.0 | 15 | 1.4422 | 0.2667 | | 1.3642 | 10.67 | 16 | 1.4392 | 0.2444 | | 1.3642 | 12.0 | 18 | 1.4341 | 0.2444 | | 1.3642 | 12.67 | 19 | 1.4327 | 0.2444 | | 1.1012 | 14.0 | 21 | 1.4319 | 0.2667 | | 1.1012 | 14.67 | 22 | 1.4329 | 0.2667 | | 1.1012 | 16.0 | 24 | 1.4330 | 0.2889 | | 1.1012 | 16.67 | 25 | 1.4333 | 0.2889 | | 1.1012 | 18.0 | 27 | 1.4342 | 0.2889 | | 1.1012 | 18.67 | 28 | 1.4339 | 0.2889 | | 0.9232 | 20.0 | 30 | 1.4351 | 0.2889 | | 0.9232 | 20.67 | 31 | 1.4354 | 0.2889 | | 0.9232 | 22.0 | 33 | 1.4352 | 0.2889 | | 0.9232 | 22.67 | 34 | 1.4353 | 0.2889 | | 0.9232 | 24.0 | 36 | 1.4349 | 0.2889 | | 0.9232 | 24.67 | 37 | 1.4347 | 0.2889 | | 0.9232 | 26.0 | 39 | 1.4341 | 0.2889 | | 0.8235 | 26.67 | 40 | 1.4334 | 0.2889 | | 0.8235 | 28.0 | 42 | 1.4325 | 0.2889 | | 0.8235 | 28.67 | 43 | 1.4325 | 0.2889 | | 0.8235 | 30.0 | 45 | 1.4317 | 0.2889 | | 0.8235 | 30.67 | 46 | 1.4312 | 0.2889 | | 0.8235 | 32.0 | 48 | 1.4307 | 0.2889 | | 0.8235 | 32.67 | 49 | 1.4307 | 0.2889 | | 0.7752 | 33.33 | 50 | 1.4306 | 0.2889 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_adamax_lr00001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_adamax_lr00001_fold3 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.0802 - Accuracy: 0.4651 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.67 | 1 | 1.5307 | 0.2093 | | No log | 2.0 | 3 | 1.3769 | 0.3023 | | No log | 2.67 | 4 | 1.3327 | 0.3721 | | No log | 4.0 | 6 | 1.2794 | 0.4419 | | No log | 4.67 | 7 | 1.2620 | 0.4419 | | No log | 6.0 | 9 | 1.2352 | 0.4884 | | 1.4092 | 6.67 | 10 | 1.2244 | 0.4884 | | 1.4092 | 8.0 | 12 | 1.2093 | 0.4884 | | 1.4092 | 8.67 | 13 | 1.2029 | 0.4884 | | 1.4092 | 10.0 | 15 | 1.1956 | 0.4651 | | 1.4092 | 10.67 | 16 | 1.1914 | 0.4651 | | 1.4092 | 12.0 | 18 | 1.1838 | 0.4651 | | 1.4092 | 12.67 | 19 | 1.1805 | 0.4651 | | 1.1598 | 14.0 | 21 | 1.1690 | 0.4419 | | 1.1598 | 14.67 | 22 | 1.1624 | 0.4419 | | 1.1598 | 16.0 | 24 | 1.1483 | 0.4186 | | 1.1598 | 16.67 | 25 | 1.1431 | 0.4186 | | 1.1598 | 18.0 | 27 | 1.1284 | 0.4186 | | 1.1598 | 18.67 | 28 | 1.1216 | 0.4419 | | 0.9892 | 20.0 | 30 | 1.1096 | 0.4419 | | 0.9892 | 20.67 | 31 
| 1.1035 | 0.4651 | | 0.9892 | 22.0 | 33 | 1.0952 | 0.4651 | | 0.9892 | 22.67 | 34 | 1.0922 | 0.4651 | | 0.9892 | 24.0 | 36 | 1.0880 | 0.4651 | | 0.9892 | 24.67 | 37 | 1.0863 | 0.4651 | | 0.9892 | 26.0 | 39 | 1.0835 | 0.4651 | | 0.8902 | 26.67 | 40 | 1.0825 | 0.4651 | | 0.8902 | 28.0 | 42 | 1.0818 | 0.4651 | | 0.8902 | 28.67 | 43 | 1.0817 | 0.4651 | | 0.8902 | 30.0 | 45 | 1.0810 | 0.4651 | | 0.8902 | 30.67 | 46 | 1.0810 | 0.4651 | | 0.8902 | 32.0 | 48 | 1.0805 | 0.4651 | | 0.8902 | 32.67 | 49 | 1.0803 | 0.4651 | | 0.8497 | 33.33 | 50 | 1.0802 | 0.4651 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
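The cards above all use `lr_scheduler_type: linear` with `lr_scheduler_warmup_ratio: 0.1`. The shape of that schedule can be sketched in plain Python (a simplified re-implementation for illustration, not the `transformers` code itself; step counts taken from the 50-step run above):

```python
def linear_schedule_with_warmup(step, total_steps, warmup_ratio=0.1, base_lr=1e-5):
    """Linear warmup from 0 to base_lr, then linear decay back to 0 (simplified sketch)."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# With 50 optimizer steps total, warmup covers the first 5 steps.
total = 50
lrs = [linear_schedule_with_warmup(s, total) for s in range(total + 1)]
```

The learning rate peaks at `base_lr` right at the end of warmup (step 5 here) and reaches zero at the final step, which is why the loss curves in these tables flatten near the end of training.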
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_adamax_lr00001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_adamax_lr00001_fold4 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.1208 - Accuracy: 0.5952 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.67 | 1 | 1.5521 | 0.1429 | | No log | 2.0 | 3 | 1.4205 | 0.2857 | | No log | 2.67 | 4 | 1.3862 | 0.3571 | | No log | 4.0 | 6 | 1.3478 | 0.5238 | | No log | 4.67 | 7 | 1.3332 | 0.5238 | | No log | 6.0 | 9 | 1.3093 | 0.5238 | | 1.4089 | 6.67 | 10 | 1.2970 | 0.5476 | | 1.4089 | 8.0 | 12 | 1.2777 | 0.5714 | | 1.4089 | 8.67 | 13 | 1.2689 | 0.5714 | | 1.4089 | 10.0 | 15 | 1.2544 | 0.5714 | | 1.4089 | 10.67 | 16 | 1.2478 | 0.5714 | | 1.4089 | 12.0 | 18 | 1.2338 | 0.5714 | | 1.4089 | 12.67 | 19 | 1.2267 | 0.5714 | | 1.1506 | 14.0 | 21 | 1.2124 | 0.5714 | | 1.1506 | 14.67 | 22 | 1.2049 | 0.5714 | | 1.1506 | 16.0 | 24 | 1.1908 | 0.5714 | | 1.1506 | 16.67 | 25 | 1.1843 | 0.5952 | | 1.1506 | 18.0 | 27 | 1.1717 | 0.5952 | | 1.1506 | 18.67 | 28 | 1.1659 | 0.5952 | | 0.986 | 20.0 | 30 | 1.1576 | 0.5952 | | 0.986 | 20.67 | 31 | 
1.1537 | 0.5952 | | 0.986 | 22.0 | 33 | 1.1470 | 0.5952 | | 0.986 | 22.67 | 34 | 1.1439 | 0.5952 | | 0.986 | 24.0 | 36 | 1.1385 | 0.5714 | | 0.986 | 24.67 | 37 | 1.1362 | 0.5952 | | 0.986 | 26.0 | 39 | 1.1320 | 0.5952 | | 0.8708 | 26.67 | 40 | 1.1301 | 0.5952 | | 0.8708 | 28.0 | 42 | 1.1268 | 0.5952 | | 0.8708 | 28.67 | 43 | 1.1256 | 0.5952 | | 0.8708 | 30.0 | 45 | 1.1234 | 0.5952 | | 0.8708 | 30.67 | 46 | 1.1226 | 0.5952 | | 0.8708 | 32.0 | 48 | 1.1214 | 0.5952 | | 0.8708 | 32.67 | 49 | 1.1210 | 0.5952 | | 0.8182 | 33.33 | 50 | 1.1208 | 0.5952 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
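The `total_train_batch_size: 128` reported in these cards follows directly from the per-device batch size and gradient accumulation. A quick sanity check (values copied from the hyperparameter list; the single-device count is an assumption):

```python
train_batch_size = 32            # per-device batch size, from the card
gradient_accumulation_steps = 4  # from the card
num_devices = 1                  # assumed: training on a single GPU

# Effective batch size per optimizer step
total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 128
```

This also explains the fractional epoch values in the results tables: with so few training batches per epoch, one optimizer step can span less than a full epoch.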
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_adamax_lr00001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_adamax_lr00001_fold5 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.2718 - Accuracy: 0.4634 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.67 | 1 | 1.6411 | 0.1220 | | No log | 2.0 | 3 | 1.5318 | 0.1951 | | No log | 2.67 | 4 | 1.5063 | 0.1707 | | No log | 4.0 | 6 | 1.4764 | 0.3171 | | No log | 4.67 | 7 | 1.4625 | 0.3171 | | No log | 6.0 | 9 | 1.4352 | 0.3902 | | 1.379 | 6.67 | 10 | 1.4210 | 0.4390 | | 1.379 | 8.0 | 12 | 1.3993 | 0.4390 | | 1.379 | 8.67 | 13 | 1.3906 | 0.4390 | | 1.379 | 10.0 | 15 | 1.3747 | 0.4146 | | 1.379 | 10.67 | 16 | 1.3676 | 0.4146 | | 1.379 | 12.0 | 18 | 1.3554 | 0.4146 | | 1.379 | 12.67 | 19 | 1.3500 | 0.4146 | | 1.107 | 14.0 | 21 | 1.3389 | 0.4146 | | 1.107 | 14.67 | 22 | 1.3348 | 0.4146 | | 1.107 | 16.0 | 24 | 1.3265 | 0.4390 | | 1.107 | 16.67 | 25 | 1.3236 | 0.4634 | | 1.107 | 18.0 | 27 | 1.3162 | 0.4634 | | 1.107 | 18.67 | 28 | 1.3129 | 0.4390 | | 0.9495 | 20.0 | 30 | 1.3051 | 0.4390 | | 0.9495 | 20.67 | 31 | 1.3019 | 
0.4390 | | 0.9495 | 22.0 | 33 | 1.2961 | 0.4390 | | 0.9495 | 22.67 | 34 | 1.2934 | 0.4634 | | 0.9495 | 24.0 | 36 | 1.2879 | 0.4390 | | 0.9495 | 24.67 | 37 | 1.2851 | 0.4390 | | 0.9495 | 26.0 | 39 | 1.2815 | 0.4390 | | 0.8401 | 26.67 | 40 | 1.2802 | 0.4390 | | 0.8401 | 28.0 | 42 | 1.2775 | 0.4390 | | 0.8401 | 28.67 | 43 | 1.2761 | 0.4390 | | 0.8401 | 30.0 | 45 | 1.2740 | 0.4390 | | 0.8401 | 30.67 | 46 | 1.2734 | 0.4390 | | 0.8401 | 32.0 | 48 | 1.2723 | 0.4634 | | 0.8401 | 32.67 | 49 | 1.2719 | 0.4634 | | 0.7816 | 33.33 | 50 | 1.2718 | 0.4634 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_sgd_lr001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_sgd_lr001_fold1 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.4305 - Accuracy: 0.3111 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.6118 | 0.1778 | | 1.6924 | 2.0 | 12 | 1.5735 | 0.2222 | | 1.6924 | 3.0 | 18 | 1.5416 | 0.2222 | | 1.5478 | 4.0 | 24 | 1.5201 | 0.2444 | | 1.5093 | 5.0 | 30 | 1.4995 | 0.2889 | | 1.5093 | 6.0 | 36 | 1.4836 | 0.2889 | | 1.4614 | 7.0 | 42 | 1.4737 | 0.2889 | | 1.4614 | 8.0 | 48 | 1.4656 | 0.2889 | | 1.3895 | 9.0 | 54 | 1.4578 | 0.2222 | | 1.4002 | 10.0 | 60 | 1.4519 | 0.2667 | | 1.4002 | 11.0 | 66 | 1.4464 | 0.2667 | | 1.3595 | 12.0 | 72 | 1.4429 | 0.2667 | | 1.3595 | 13.0 | 78 | 1.4392 | 0.2667 | | 1.3506 | 14.0 | 84 | 1.4366 | 0.2222 | | 1.2804 | 15.0 | 90 | 1.4347 | 0.2 | | 1.2804 | 16.0 | 96 | 1.4330 | 0.2 | | 1.2746 | 17.0 | 102 | 1.4333 | 0.2667 | | 1.2746 | 18.0 | 108 | 1.4332 | 0.2667 | | 1.2774 | 19.0 | 114 | 1.4327 | 0.2667 | | 1.2547 | 20.0 | 120 | 1.4313 | 0.2667 | | 1.2547 | 21.0 | 126 | 1.4295 | 0.2667 | | 1.2313 | 22.0 | 132 | 1.4282 | 0.2889 | | 1.2313 | 
23.0 | 138 | 1.4285 | 0.2889 | | 1.2194 | 24.0 | 144 | 1.4285 | 0.2889 | | 1.2083 | 25.0 | 150 | 1.4272 | 0.2889 | | 1.2083 | 26.0 | 156 | 1.4286 | 0.3111 | | 1.1973 | 27.0 | 162 | 1.4278 | 0.3111 | | 1.1973 | 28.0 | 168 | 1.4278 | 0.3111 | | 1.1964 | 29.0 | 174 | 1.4276 | 0.3111 | | 1.2006 | 30.0 | 180 | 1.4293 | 0.3111 | | 1.2006 | 31.0 | 186 | 1.4290 | 0.3111 | | 1.1662 | 32.0 | 192 | 1.4295 | 0.3111 | | 1.1662 | 33.0 | 198 | 1.4297 | 0.3111 | | 1.1889 | 34.0 | 204 | 1.4294 | 0.3111 | | 1.1683 | 35.0 | 210 | 1.4293 | 0.3111 | | 1.1683 | 36.0 | 216 | 1.4299 | 0.3111 | | 1.1652 | 37.0 | 222 | 1.4302 | 0.3111 | | 1.1652 | 38.0 | 228 | 1.4307 | 0.3111 | | 1.1321 | 39.0 | 234 | 1.4308 | 0.3111 | | 1.1584 | 40.0 | 240 | 1.4306 | 0.3111 | | 1.1584 | 41.0 | 246 | 1.4304 | 0.3111 | | 1.1553 | 42.0 | 252 | 1.4305 | 0.3111 | | 1.1553 | 43.0 | 258 | 1.4305 | 0.3111 | | 1.168 | 44.0 | 264 | 1.4305 | 0.3111 | | 1.1533 | 45.0 | 270 | 1.4305 | 0.3111 | | 1.1533 | 46.0 | 276 | 1.4305 | 0.3111 | | 1.1682 | 47.0 | 282 | 1.4305 | 0.3111 | | 1.1682 | 48.0 | 288 | 1.4305 | 0.3111 | | 1.1255 | 49.0 | 294 | 1.4305 | 0.3111 | | 1.1698 | 50.0 | 300 | 1.4305 | 0.3111 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_sgd_lr001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_sgd_lr001_fold2 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.4182 - Accuracy: 0.3556 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.6125 | 0.1333 | | 1.5852 | 2.0 | 12 | 1.5871 | 0.1333 | | 1.5852 | 3.0 | 18 | 1.5653 | 0.1556 | | 1.5028 | 4.0 | 24 | 1.5474 | 0.1556 | | 1.4795 | 5.0 | 30 | 1.5322 | 0.1556 | | 1.4795 | 6.0 | 36 | 1.5188 | 0.1556 | | 1.4252 | 7.0 | 42 | 1.5071 | 0.1556 | | 1.4252 | 8.0 | 48 | 1.4989 | 0.1556 | | 1.3707 | 9.0 | 54 | 1.4901 | 0.1778 | | 1.365 | 10.0 | 60 | 1.4824 | 0.2 | | 1.365 | 11.0 | 66 | 1.4748 | 0.2 | | 1.3235 | 12.0 | 72 | 1.4694 | 0.2444 | | 1.3235 | 13.0 | 78 | 1.4635 | 0.2444 | | 1.3233 | 14.0 | 84 | 1.4596 | 0.2444 | | 1.2774 | 15.0 | 90 | 1.4554 | 0.2444 | | 1.2774 | 16.0 | 96 | 1.4518 | 0.2444 | | 1.2584 | 17.0 | 102 | 1.4482 | 0.2667 | | 1.2584 | 18.0 | 108 | 1.4450 | 0.2667 | | 1.2788 | 19.0 | 114 | 1.4423 | 0.2667 | | 1.2388 | 20.0 | 120 | 1.4398 | 0.2667 | | 1.2388 | 21.0 | 126 | 1.4370 | 0.2889 | | 1.2317 | 22.0 | 132 | 1.4351 | 0.2667 | | 1.2317 | 
23.0 | 138 | 1.4327 | 0.2889 | | 1.2286 | 24.0 | 144 | 1.4312 | 0.2889 | | 1.2033 | 25.0 | 150 | 1.4298 | 0.2889 | | 1.2033 | 26.0 | 156 | 1.4283 | 0.3111 | | 1.1965 | 27.0 | 162 | 1.4267 | 0.3111 | | 1.1965 | 28.0 | 168 | 1.4258 | 0.3111 | | 1.1963 | 29.0 | 174 | 1.4246 | 0.3111 | | 1.1946 | 30.0 | 180 | 1.4236 | 0.3111 | | 1.1946 | 31.0 | 186 | 1.4227 | 0.3333 | | 1.1805 | 32.0 | 192 | 1.4218 | 0.3556 | | 1.1805 | 33.0 | 198 | 1.4211 | 0.3556 | | 1.1439 | 34.0 | 204 | 1.4203 | 0.3556 | | 1.1699 | 35.0 | 210 | 1.4197 | 0.3556 | | 1.1699 | 36.0 | 216 | 1.4193 | 0.3556 | | 1.156 | 37.0 | 222 | 1.4190 | 0.3556 | | 1.156 | 38.0 | 228 | 1.4187 | 0.3556 | | 1.1475 | 39.0 | 234 | 1.4185 | 0.3556 | | 1.1517 | 40.0 | 240 | 1.4183 | 0.3556 | | 1.1517 | 41.0 | 246 | 1.4182 | 0.3556 | | 1.1468 | 42.0 | 252 | 1.4182 | 0.3556 | | 1.1468 | 43.0 | 258 | 1.4182 | 0.3556 | | 1.1597 | 44.0 | 264 | 1.4182 | 0.3556 | | 1.1542 | 45.0 | 270 | 1.4182 | 0.3556 | | 1.1542 | 46.0 | 276 | 1.4182 | 0.3556 | | 1.1604 | 47.0 | 282 | 1.4182 | 0.3556 | | 1.1604 | 48.0 | 288 | 1.4182 | 0.3556 | | 1.1576 | 49.0 | 294 | 1.4182 | 0.3556 | | 1.143 | 50.0 | 300 | 1.4182 | 0.3556 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_sgd_lr001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_sgd_lr001_fold3 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.3137 - Accuracy: 0.4186 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.5163 | 0.3023 | | 1.6001 | 2.0 | 12 | 1.4936 | 0.3023 | | 1.6001 | 3.0 | 18 | 1.4729 | 0.3023 | | 1.5411 | 4.0 | 24 | 1.4550 | 0.3023 | | 1.4977 | 5.0 | 30 | 1.4401 | 0.3023 | | 1.4977 | 6.0 | 36 | 1.4267 | 0.3023 | | 1.4396 | 7.0 | 42 | 1.4159 | 0.3023 | | 1.4396 | 8.0 | 48 | 1.4066 | 0.3023 | | 1.4314 | 9.0 | 54 | 1.3991 | 0.3023 | | 1.3704 | 10.0 | 60 | 1.3909 | 0.3023 | | 1.3704 | 11.0 | 66 | 1.3847 | 0.3023 | | 1.3552 | 12.0 | 72 | 1.3793 | 0.3023 | | 1.3552 | 13.0 | 78 | 1.3735 | 0.3256 | | 1.3421 | 14.0 | 84 | 1.3686 | 0.3488 | | 1.3202 | 15.0 | 90 | 1.3638 | 0.3488 | | 1.3202 | 16.0 | 96 | 1.3593 | 0.3721 | | 1.2948 | 17.0 | 102 | 1.3558 | 0.3953 | | 1.2948 | 18.0 | 108 | 1.3518 | 0.3953 | | 1.2928 | 19.0 | 114 | 1.3488 | 0.3953 | | 1.2647 | 20.0 | 120 | 1.3454 | 0.3953 | | 1.2647 | 21.0 | 126 | 1.3427 | 0.3953 | | 1.2556 | 22.0 | 132 | 1.3402 | 0.3953 | | 
1.2556 | 23.0 | 138 | 1.3379 | 0.3953 | | 1.253 | 24.0 | 144 | 1.3353 | 0.3953 | | 1.2437 | 25.0 | 150 | 1.3327 | 0.3953 | | 1.2437 | 26.0 | 156 | 1.3306 | 0.4186 | | 1.2239 | 27.0 | 162 | 1.3289 | 0.3953 | | 1.2239 | 28.0 | 168 | 1.3270 | 0.3953 | | 1.2275 | 29.0 | 174 | 1.3251 | 0.3953 | | 1.2028 | 30.0 | 180 | 1.3234 | 0.3953 | | 1.2028 | 31.0 | 186 | 1.3221 | 0.3953 | | 1.202 | 32.0 | 192 | 1.3205 | 0.3953 | | 1.202 | 33.0 | 198 | 1.3191 | 0.3953 | | 1.194 | 34.0 | 204 | 1.3178 | 0.3953 | | 1.1993 | 35.0 | 210 | 1.3169 | 0.4186 | | 1.1993 | 36.0 | 216 | 1.3160 | 0.4186 | | 1.1904 | 37.0 | 222 | 1.3153 | 0.4186 | | 1.1904 | 38.0 | 228 | 1.3147 | 0.4186 | | 1.1785 | 39.0 | 234 | 1.3142 | 0.4186 | | 1.2086 | 40.0 | 240 | 1.3139 | 0.4186 | | 1.2086 | 41.0 | 246 | 1.3138 | 0.4186 | | 1.1893 | 42.0 | 252 | 1.3137 | 0.4186 | | 1.1893 | 43.0 | 258 | 1.3137 | 0.4186 | | 1.2 | 44.0 | 264 | 1.3137 | 0.4186 | | 1.1775 | 45.0 | 270 | 1.3137 | 0.4186 | | 1.1775 | 46.0 | 276 | 1.3137 | 0.4186 | | 1.1852 | 47.0 | 282 | 1.3137 | 0.4186 | | 1.1852 | 48.0 | 288 | 1.3137 | 0.4186 | | 1.1783 | 49.0 | 294 | 1.3137 | 0.4186 | | 1.1702 | 50.0 | 300 | 1.3137 | 0.4186 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_sgd_lr001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_sgd_lr001_fold4 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.2453 - Accuracy: 0.4762 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.5316 | 0.1905 | | 1.5831 | 2.0 | 12 | 1.5015 | 0.1905 | | 1.5831 | 3.0 | 18 | 1.4762 | 0.1667 | | 1.5346 | 4.0 | 24 | 1.4541 | 0.1905 | | 1.5081 | 5.0 | 30 | 1.4366 | 0.2381 | | 1.5081 | 6.0 | 36 | 1.4200 | 0.2857 | | 1.4598 | 7.0 | 42 | 1.4054 | 0.2857 | | 1.4598 | 8.0 | 48 | 1.3912 | 0.2857 | | 1.4326 | 9.0 | 54 | 1.3788 | 0.3095 | | 1.3952 | 10.0 | 60 | 1.3675 | 0.3571 | | 1.3952 | 11.0 | 66 | 1.3571 | 0.3810 | | 1.3596 | 12.0 | 72 | 1.3480 | 0.3810 | | 1.3596 | 13.0 | 78 | 1.3393 | 0.3810 | | 1.363 | 14.0 | 84 | 1.3316 | 0.3810 | | 1.3301 | 15.0 | 90 | 1.3251 | 0.4048 | | 1.3301 | 16.0 | 96 | 1.3178 | 0.4048 | | 1.3095 | 17.0 | 102 | 1.3113 | 0.4048 | | 1.3095 | 18.0 | 108 | 1.3061 | 0.4048 | | 1.3044 | 19.0 | 114 | 1.3014 | 0.4048 | | 1.2995 | 20.0 | 120 | 1.2970 | 0.4048 | | 1.2995 | 21.0 | 126 | 1.2921 | 0.4048 | | 1.2717 | 22.0 | 132 | 1.2882 | 0.4048 | | 
1.2717 | 23.0 | 138 | 1.2838 | 0.4048 | | 1.2926 | 24.0 | 144 | 1.2801 | 0.4048 | | 1.2458 | 25.0 | 150 | 1.2760 | 0.4048 | | 1.2458 | 26.0 | 156 | 1.2723 | 0.4286 | | 1.2592 | 27.0 | 162 | 1.2686 | 0.4286 | | 1.2592 | 28.0 | 168 | 1.2659 | 0.4286 | | 1.2355 | 29.0 | 174 | 1.2631 | 0.4286 | | 1.2526 | 30.0 | 180 | 1.2605 | 0.4286 | | 1.2526 | 31.0 | 186 | 1.2579 | 0.4524 | | 1.2439 | 32.0 | 192 | 1.2557 | 0.4524 | | 1.2439 | 33.0 | 198 | 1.2536 | 0.4524 | | 1.1949 | 34.0 | 204 | 1.2519 | 0.4524 | | 1.2285 | 35.0 | 210 | 1.2501 | 0.4524 | | 1.2285 | 36.0 | 216 | 1.2488 | 0.4524 | | 1.2118 | 37.0 | 222 | 1.2477 | 0.4524 | | 1.2118 | 38.0 | 228 | 1.2468 | 0.4762 | | 1.2136 | 39.0 | 234 | 1.2462 | 0.4762 | | 1.2259 | 40.0 | 240 | 1.2457 | 0.4762 | | 1.2259 | 41.0 | 246 | 1.2454 | 0.4762 | | 1.2204 | 42.0 | 252 | 1.2453 | 0.4762 | | 1.2204 | 43.0 | 258 | 1.2453 | 0.4762 | | 1.2061 | 44.0 | 264 | 1.2453 | 0.4762 | | 1.2146 | 45.0 | 270 | 1.2453 | 0.4762 | | 1.2146 | 46.0 | 276 | 1.2453 | 0.4762 | | 1.2137 | 47.0 | 282 | 1.2453 | 0.4762 | | 1.2137 | 48.0 | 288 | 1.2453 | 0.4762 | | 1.2227 | 49.0 | 294 | 1.2453 | 0.4762 | | 1.2027 | 50.0 | 300 | 1.2453 | 0.4762 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_sgd_lr001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_sgd_lr001_fold5 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.3032 - Accuracy: 0.3902 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.6123 | 0.1220 | | 1.5785 | 2.0 | 12 | 1.5813 | 0.1463 | | 1.5785 | 3.0 | 18 | 1.5542 | 0.1463 | | 1.5395 | 4.0 | 24 | 1.5297 | 0.1463 | | 1.4749 | 5.0 | 30 | 1.5083 | 0.1951 | | 1.4749 | 6.0 | 36 | 1.4884 | 0.1951 | | 1.4296 | 7.0 | 42 | 1.4718 | 0.1707 | | 1.4296 | 8.0 | 48 | 1.4578 | 0.1463 | | 1.4059 | 9.0 | 54 | 1.4447 | 0.1463 | | 1.3876 | 10.0 | 60 | 1.4316 | 0.2195 | | 1.3876 | 11.0 | 66 | 1.4209 | 0.2195 | | 1.3523 | 12.0 | 72 | 1.4102 | 0.2195 | | 1.3523 | 13.0 | 78 | 1.4009 | 0.2439 | | 1.3412 | 14.0 | 84 | 1.3926 | 0.2439 | | 1.3216 | 15.0 | 90 | 1.3847 | 0.2927 | | 1.3216 | 16.0 | 96 | 1.3782 | 0.3171 | | 1.2923 | 17.0 | 102 | 1.3713 | 0.3415 | | 1.2923 | 18.0 | 108 | 1.3652 | 0.3415 | | 1.305 | 19.0 | 114 | 1.3592 | 0.3415 | | 1.2722 | 20.0 | 120 | 1.3536 | 0.3415 | | 1.2722 | 21.0 | 126 | 1.3490 | 0.3659 | | 1.2479 | 22.0 | 132 | 1.3441 | 0.3659 | | 
1.2479 | 23.0 | 138 | 1.3399 | 0.3659 | | 1.2818 | 24.0 | 144 | 1.3360 | 0.3659 | | 1.2363 | 25.0 | 150 | 1.3318 | 0.3659 | | 1.2363 | 26.0 | 156 | 1.3281 | 0.3659 | | 1.2375 | 27.0 | 162 | 1.3249 | 0.3659 | | 1.2375 | 28.0 | 168 | 1.3220 | 0.3659 | | 1.2164 | 29.0 | 174 | 1.3194 | 0.3659 | | 1.2359 | 30.0 | 180 | 1.3171 | 0.3902 | | 1.2359 | 31.0 | 186 | 1.3148 | 0.3902 | | 1.2121 | 32.0 | 192 | 1.3127 | 0.3902 | | 1.2121 | 33.0 | 198 | 1.3110 | 0.3902 | | 1.2131 | 34.0 | 204 | 1.3092 | 0.3902 | | 1.1973 | 35.0 | 210 | 1.3077 | 0.3902 | | 1.1973 | 36.0 | 216 | 1.3064 | 0.3902 | | 1.1836 | 37.0 | 222 | 1.3054 | 0.3902 | | 1.1836 | 38.0 | 228 | 1.3046 | 0.3902 | | 1.2087 | 39.0 | 234 | 1.3039 | 0.3902 | | 1.2019 | 40.0 | 240 | 1.3035 | 0.3902 | | 1.2019 | 41.0 | 246 | 1.3033 | 0.3902 | | 1.2033 | 42.0 | 252 | 1.3032 | 0.3902 | | 1.2033 | 43.0 | 258 | 1.3032 | 0.3902 | | 1.1754 | 44.0 | 264 | 1.3032 | 0.3902 | | 1.1907 | 45.0 | 270 | 1.3032 | 0.3902 | | 1.1907 | 46.0 | 276 | 1.3032 | 0.3902 | | 1.2082 | 47.0 | 282 | 1.3032 | 0.3902 | | 1.2082 | 48.0 | 288 | 1.3032 | 0.3902 | | 1.1699 | 49.0 | 294 | 1.3032 | 0.3902 | | 1.2038 | 50.0 | 300 | 1.3032 | 0.3902 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
alicelouis/Swin2e-4Lion
```python
from transformers import AutoImageProcessor, SwinForImageClassification
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("alicelouis/Swin2e-4Lion")
model = SwinForImageClassification.from_pretrained("alicelouis/Swin2e-4Lion")

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits

# model predicts one of the checkpoint's four lung-scan classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
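Turning the raw logits into a label and a confidence is just a softmax plus an argmax. A minimal pure-Python sketch, using the four class names this checkpoint predicts (the logit values below are made up for illustration):

```python
import math

# Class names for this checkpoint; ordering assumed to match id2label.
labels = ["adenocarcinoma", "large.cell", "normal", "squamous.cell"]

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Example logits (hypothetical, for illustration only).
logits = [2.1, -0.3, 0.4, -1.2]
probs = softmax(logits)
pred = labels[probs.index(max(probs))]
print(pred, round(max(probs), 3))
```

In the snippet above, `model.config.id2label` plays the role of the `labels` list here.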
[ "adenocarcinoma", "large.cell", "normal", "squamous.cell" ]
hkivancoral/hushem_1x_deit_tiny_sgd_lr0001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_sgd_lr0001_fold1 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.5010 - Accuracy: 0.2444 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.5317 | 0.2667 | | 1.5768 | 2.0 | 12 | 1.5295 | 0.2444 | | 1.5768 | 3.0 | 18 | 1.5277 | 0.2444 | | 1.5217 | 4.0 | 24 | 1.5259 | 0.2444 | | 1.5806 | 5.0 | 30 | 1.5243 | 0.2444 | | 1.5806 | 6.0 | 36 | 1.5226 | 0.2444 | | 1.5608 | 7.0 | 42 | 1.5211 | 0.2444 | | 1.5608 | 8.0 | 48 | 1.5198 | 0.2444 | | 1.5538 | 9.0 | 54 | 1.5184 | 0.2444 | | 1.5354 | 10.0 | 60 | 1.5172 | 0.2444 | | 1.5354 | 11.0 | 66 | 1.5159 | 0.2444 | | 1.5529 | 12.0 | 72 | 1.5148 | 0.2444 | | 1.5529 | 13.0 | 78 | 1.5137 | 0.2444 | | 1.5094 | 14.0 | 84 | 1.5127 | 0.2444 | | 1.5228 | 15.0 | 90 | 1.5118 | 0.2444 | | 1.5228 | 16.0 | 96 | 1.5108 | 0.2444 | | 1.5295 | 17.0 | 102 | 1.5100 | 0.2444 | | 1.5295 | 18.0 | 108 | 1.5092 | 0.2444 | | 1.5298 | 19.0 | 114 | 1.5084 | 0.2444 | | 1.5372 | 20.0 | 120 | 1.5077 | 0.2444 | | 1.5372 | 21.0 | 126 | 1.5071 | 0.2444 | | 1.5336 | 22.0 | 132 | 1.5065 | 0.2444 | | 
1.5336 | 23.0 | 138 | 1.5059 | 0.2444 | | 1.5077 | 24.0 | 144 | 1.5053 | 0.2444 | | 1.5022 | 25.0 | 150 | 1.5049 | 0.2444 | | 1.5022 | 26.0 | 156 | 1.5044 | 0.2444 | | 1.5158 | 27.0 | 162 | 1.5040 | 0.2444 | | 1.5158 | 28.0 | 168 | 1.5036 | 0.2444 | | 1.4961 | 29.0 | 174 | 1.5032 | 0.2444 | | 1.5155 | 30.0 | 180 | 1.5029 | 0.2444 | | 1.5155 | 31.0 | 186 | 1.5025 | 0.2444 | | 1.5093 | 32.0 | 192 | 1.5022 | 0.2444 | | 1.5093 | 33.0 | 198 | 1.5020 | 0.2444 | | 1.4596 | 34.0 | 204 | 1.5017 | 0.2444 | | 1.4894 | 35.0 | 210 | 1.5015 | 0.2444 | | 1.4894 | 36.0 | 216 | 1.5014 | 0.2444 | | 1.5058 | 37.0 | 222 | 1.5012 | 0.2444 | | 1.5058 | 38.0 | 228 | 1.5011 | 0.2444 | | 1.4675 | 39.0 | 234 | 1.5010 | 0.2444 | | 1.4822 | 40.0 | 240 | 1.5010 | 0.2444 | | 1.4822 | 41.0 | 246 | 1.5010 | 0.2444 | | 1.5008 | 42.0 | 252 | 1.5010 | 0.2444 | | 1.5008 | 43.0 | 258 | 1.5010 | 0.2444 | | 1.5075 | 44.0 | 264 | 1.5010 | 0.2444 | | 1.5338 | 45.0 | 270 | 1.5010 | 0.2444 | | 1.5338 | 46.0 | 276 | 1.5010 | 0.2444 | | 1.5016 | 47.0 | 282 | 1.5010 | 0.2444 | | 1.5016 | 48.0 | 288 | 1.5010 | 0.2444 | | 1.4777 | 49.0 | 294 | 1.5010 | 0.2444 | | 1.4813 | 50.0 | 300 | 1.5010 | 0.2444 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_sgd_lr0001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_sgd_lr0001_fold2 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.5853 - Accuracy: 0.1333 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.6380 | 0.1333 | | 1.612 | 2.0 | 12 | 1.6350 | 0.1333 | | 1.612 | 3.0 | 18 | 1.6320 | 0.1333 | | 1.5793 | 4.0 | 24 | 1.6293 | 0.1333 | | 1.6085 | 5.0 | 30 | 1.6268 | 0.1333 | | 1.6085 | 6.0 | 36 | 1.6242 | 0.1333 | | 1.5833 | 7.0 | 42 | 1.6217 | 0.1333 | | 1.5833 | 8.0 | 48 | 1.6197 | 0.1333 | | 1.5532 | 9.0 | 54 | 1.6175 | 0.1333 | | 1.5785 | 10.0 | 60 | 1.6153 | 0.1333 | | 1.5785 | 11.0 | 66 | 1.6132 | 0.1333 | | 1.5506 | 12.0 | 72 | 1.6113 | 0.1333 | | 1.5506 | 13.0 | 78 | 1.6095 | 0.1333 | | 1.5868 | 14.0 | 84 | 1.6078 | 0.1333 | | 1.532 | 15.0 | 90 | 1.6062 | 0.1333 | | 1.532 | 16.0 | 96 | 1.6045 | 0.1333 | | 1.5321 | 17.0 | 102 | 1.6029 | 0.1333 | | 1.5321 | 18.0 | 108 | 1.6015 | 0.1333 | | 1.5965 | 19.0 | 114 | 1.6001 | 0.1333 | | 1.5428 | 20.0 | 120 | 1.5987 | 0.1333 | | 1.5428 | 21.0 | 126 | 1.5975 | 0.1333 | | 1.5622 | 22.0 | 132 | 1.5964 | 0.1333 | | 1.5622 
| 23.0 | 138 | 1.5951 | 0.1333 | | 1.5259 | 24.0 | 144 | 1.5941 | 0.1333 | | 1.5339 | 25.0 | 150 | 1.5932 | 0.1333 | | 1.5339 | 26.0 | 156 | 1.5923 | 0.1333 | | 1.5237 | 27.0 | 162 | 1.5914 | 0.1333 | | 1.5237 | 28.0 | 168 | 1.5907 | 0.1333 | | 1.5539 | 29.0 | 174 | 1.5899 | 0.1333 | | 1.5487 | 30.0 | 180 | 1.5891 | 0.1333 | | 1.5487 | 31.0 | 186 | 1.5885 | 0.1333 | | 1.5317 | 32.0 | 192 | 1.5879 | 0.1333 | | 1.5317 | 33.0 | 198 | 1.5874 | 0.1333 | | 1.4989 | 34.0 | 204 | 1.5869 | 0.1333 | | 1.5301 | 35.0 | 210 | 1.5865 | 0.1333 | | 1.5301 | 36.0 | 216 | 1.5862 | 0.1333 | | 1.5061 | 37.0 | 222 | 1.5859 | 0.1333 | | 1.5061 | 38.0 | 228 | 1.5857 | 0.1333 | | 1.5205 | 39.0 | 234 | 1.5855 | 0.1333 | | 1.5267 | 40.0 | 240 | 1.5854 | 0.1333 | | 1.5267 | 41.0 | 246 | 1.5854 | 0.1333 | | 1.5211 | 42.0 | 252 | 1.5853 | 0.1333 | | 1.5211 | 43.0 | 258 | 1.5853 | 0.1333 | | 1.524 | 44.0 | 264 | 1.5853 | 0.1333 | | 1.5163 | 45.0 | 270 | 1.5853 | 0.1333 | | 1.5163 | 46.0 | 276 | 1.5853 | 0.1333 | | 1.5253 | 47.0 | 282 | 1.5853 | 0.1333 | | 1.5253 | 48.0 | 288 | 1.5853 | 0.1333 | | 1.5384 | 49.0 | 294 | 1.5853 | 0.1333 | | 1.5175 | 50.0 | 300 | 1.5853 | 0.1333 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_sgd_lr0001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_sgd_lr0001_fold3 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.4906 - Accuracy: 0.3023 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.5410 | 0.3023 | | 1.6256 | 2.0 | 12 | 1.5383 | 0.3023 | | 1.6256 | 3.0 | 18 | 1.5355 | 0.3023 | | 1.6209 | 4.0 | 24 | 1.5329 | 0.3023 | | 1.6213 | 5.0 | 30 | 1.5304 | 0.3023 | | 1.6213 | 6.0 | 36 | 1.5279 | 0.3023 | | 1.6054 | 7.0 | 42 | 1.5256 | 0.3023 | | 1.6054 | 8.0 | 48 | 1.5235 | 0.3023 | | 1.622 | 9.0 | 54 | 1.5216 | 0.3023 | | 1.5637 | 10.0 | 60 | 1.5195 | 0.3023 | | 1.5637 | 11.0 | 66 | 1.5177 | 0.3023 | | 1.5868 | 12.0 | 72 | 1.5160 | 0.3023 | | 1.5868 | 13.0 | 78 | 1.5142 | 0.3023 | | 1.5951 | 14.0 | 84 | 1.5125 | 0.3023 | | 1.5787 | 15.0 | 90 | 1.5108 | 0.3023 | | 1.5787 | 16.0 | 96 | 1.5091 | 0.3023 | | 1.5727 | 17.0 | 102 | 1.5077 | 0.3023 | | 1.5727 | 18.0 | 108 | 1.5063 | 0.3023 | | 1.5858 | 19.0 | 114 | 1.5049 | 0.3023 | | 1.5652 | 20.0 | 120 | 1.5036 | 0.3023 | | 1.5652 | 21.0 | 126 | 1.5024 | 0.3023 | | 1.5577 | 22.0 | 132 | 1.5012 | 0.3023 | | 
1.5577 | 23.0 | 138 | 1.5001 | 0.3023 | | 1.5855 | 24.0 | 144 | 1.4991 | 0.3023 | | 1.5594 | 25.0 | 150 | 1.4981 | 0.3023 | | 1.5594 | 26.0 | 156 | 1.4972 | 0.3023 | | 1.5496 | 27.0 | 162 | 1.4964 | 0.3023 | | 1.5496 | 28.0 | 168 | 1.4956 | 0.3023 | | 1.5543 | 29.0 | 174 | 1.4949 | 0.3023 | | 1.5415 | 30.0 | 180 | 1.4943 | 0.3023 | | 1.5415 | 31.0 | 186 | 1.4938 | 0.3023 | | 1.5408 | 32.0 | 192 | 1.4932 | 0.3023 | | 1.5408 | 33.0 | 198 | 1.4926 | 0.3023 | | 1.5602 | 34.0 | 204 | 1.4922 | 0.3023 | | 1.5429 | 35.0 | 210 | 1.4918 | 0.3023 | | 1.5429 | 36.0 | 216 | 1.4914 | 0.3023 | | 1.5494 | 37.0 | 222 | 1.4912 | 0.3023 | | 1.5494 | 38.0 | 228 | 1.4909 | 0.3023 | | 1.5361 | 39.0 | 234 | 1.4908 | 0.3023 | | 1.5628 | 40.0 | 240 | 1.4906 | 0.3023 | | 1.5628 | 41.0 | 246 | 1.4906 | 0.3023 | | 1.5458 | 42.0 | 252 | 1.4906 | 0.3023 | | 1.5458 | 43.0 | 258 | 1.4906 | 0.3023 | | 1.5716 | 44.0 | 264 | 1.4906 | 0.3023 | | 1.5384 | 45.0 | 270 | 1.4906 | 0.3023 | | 1.5384 | 46.0 | 276 | 1.4906 | 0.3023 | | 1.5475 | 47.0 | 282 | 1.4906 | 0.3023 | | 1.5475 | 48.0 | 288 | 1.4906 | 0.3023 | | 1.5338 | 49.0 | 294 | 1.4906 | 0.3023 | | 1.5337 | 50.0 | 300 | 1.4906 | 0.3023 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
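The hushem cards above all train for 50 epochs at 6 steps per epoch (300 optimizer steps) with `lr_scheduler_type: linear` and `lr_scheduler_warmup_ratio: 0.1`; the validation loss freezing over the final epochs is consistent with the linearly decayed learning rate approaching zero. A minimal sketch of that schedule — mirroring the shape of the linear-with-warmup scheduler these hyperparameters describe, not the Trainer's actual code:

```python
def linear_lr_with_warmup(step, base_lr=1e-4, total_steps=300, warmup_ratio=0.1):
    """Learning rate at a given optimizer step (0-indexed)."""
    warmup_steps = int(total_steps * warmup_ratio)  # 30 steps for these cards
    if step < warmup_steps:
        # Linear ramp from 0 up to base_lr during warmup.
        return base_lr * step / warmup_steps
    # Linear decay from base_lr down to 0 over the remaining steps.
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_lr_with_warmup(0))    # start of warmup: 0.0
print(linear_lr_with_warmup(30))   # peak: 0.0001
print(linear_lr_with_warmup(300))  # final step: 0.0
```

By step 240 (epoch 40) the rate is already down to roughly a fifth of its peak, which matches the near-identical eval losses in the tail of each table.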
hkivancoral/hushem_1x_deit_tiny_sgd_lr0001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_sgd_lr0001_fold4 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.4980 - Accuracy: 0.1667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.5601 | 0.1905 | | 1.6045 | 2.0 | 12 | 1.5566 | 0.1905 | | 1.6045 | 3.0 | 18 | 1.5532 | 0.1905 | | 1.6142 | 4.0 | 24 | 1.5500 | 0.1905 | | 1.6266 | 5.0 | 30 | 1.5471 | 0.1905 | | 1.6266 | 6.0 | 36 | 1.5442 | 0.1905 | | 1.6101 | 7.0 | 42 | 1.5414 | 0.1905 | | 1.6101 | 8.0 | 48 | 1.5385 | 0.1905 | | 1.6089 | 9.0 | 54 | 1.5358 | 0.1905 | | 1.5908 | 10.0 | 60 | 1.5333 | 0.1905 | | 1.5908 | 11.0 | 66 | 1.5308 | 0.1905 | | 1.5657 | 12.0 | 72 | 1.5284 | 0.1905 | | 1.5657 | 13.0 | 78 | 1.5261 | 0.1905 | | 1.6049 | 14.0 | 84 | 1.5240 | 0.1905 | | 1.5586 | 15.0 | 90 | 1.5222 | 0.1905 | | 1.5586 | 16.0 | 96 | 1.5201 | 0.1905 | | 1.5639 | 17.0 | 102 | 1.5182 | 0.1905 | | 1.5639 | 18.0 | 108 | 1.5166 | 0.1905 | | 1.5536 | 19.0 | 114 | 1.5151 | 0.1905 | | 1.5821 | 20.0 | 120 | 1.5136 | 0.1905 | | 1.5821 | 21.0 | 126 | 1.5122 | 0.1905 | | 1.5341 | 22.0 | 132 | 1.5109 | 0.1905 | | 
1.5341 | 23.0 | 138 | 1.5096 | 0.1905 | | 1.6078 | 24.0 | 144 | 1.5084 | 0.1905 | | 1.5121 | 25.0 | 150 | 1.5073 | 0.1905 | | 1.5121 | 26.0 | 156 | 1.5061 | 0.1905 | | 1.5521 | 27.0 | 162 | 1.5050 | 0.1905 | | 1.5521 | 28.0 | 168 | 1.5041 | 0.1905 | | 1.5505 | 29.0 | 174 | 1.5033 | 0.1905 | | 1.5712 | 30.0 | 180 | 1.5025 | 0.1905 | | 1.5712 | 31.0 | 186 | 1.5017 | 0.1905 | | 1.5865 | 32.0 | 192 | 1.5010 | 0.1905 | | 1.5865 | 33.0 | 198 | 1.5005 | 0.1905 | | 1.4766 | 34.0 | 204 | 1.4999 | 0.1905 | | 1.5501 | 35.0 | 210 | 1.4994 | 0.1905 | | 1.5501 | 36.0 | 216 | 1.4990 | 0.1905 | | 1.5465 | 37.0 | 222 | 1.4987 | 0.1667 | | 1.5465 | 38.0 | 228 | 1.4984 | 0.1667 | | 1.5254 | 39.0 | 234 | 1.4982 | 0.1667 | | 1.575 | 40.0 | 240 | 1.4980 | 0.1667 | | 1.575 | 41.0 | 246 | 1.4980 | 0.1667 | | 1.5455 | 42.0 | 252 | 1.4980 | 0.1667 | | 1.5455 | 43.0 | 258 | 1.4980 | 0.1667 | | 1.5648 | 44.0 | 264 | 1.4980 | 0.1667 | | 1.5279 | 45.0 | 270 | 1.4980 | 0.1667 | | 1.5279 | 46.0 | 276 | 1.4980 | 0.1667 | | 1.5492 | 47.0 | 282 | 1.4980 | 0.1667 | | 1.5492 | 48.0 | 288 | 1.4980 | 0.1667 | | 1.5479 | 49.0 | 294 | 1.4980 | 0.1667 | | 1.5321 | 50.0 | 300 | 1.4980 | 0.1667 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_sgd_lr0001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_sgd_lr0001_fold5 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.5789 - Accuracy: 0.1463 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.6455 | 0.1220 | | 1.6035 | 2.0 | 12 | 1.6420 | 0.1220 | | 1.6035 | 3.0 | 18 | 1.6386 | 0.1463 | | 1.6142 | 4.0 | 24 | 1.6353 | 0.1463 | | 1.5857 | 5.0 | 30 | 1.6321 | 0.1463 | | 1.5857 | 6.0 | 36 | 1.6289 | 0.1463 | | 1.5718 | 7.0 | 42 | 1.6259 | 0.1463 | | 1.5718 | 8.0 | 48 | 1.6232 | 0.1463 | | 1.5833 | 9.0 | 54 | 1.6206 | 0.1463 | | 1.5737 | 10.0 | 60 | 1.6178 | 0.1463 | | 1.5737 | 11.0 | 66 | 1.6153 | 0.1463 | | 1.5614 | 12.0 | 72 | 1.6128 | 0.1220 | | 1.5614 | 13.0 | 78 | 1.6104 | 0.1220 | | 1.5648 | 14.0 | 84 | 1.6081 | 0.1220 | | 1.5575 | 15.0 | 90 | 1.6060 | 0.1220 | | 1.5575 | 16.0 | 96 | 1.6040 | 0.1220 | | 1.5452 | 17.0 | 102 | 1.6020 | 0.1220 | | 1.5452 | 18.0 | 108 | 1.6002 | 0.1220 | | 1.5768 | 19.0 | 114 | 1.5984 | 0.1220 | | 1.5464 | 20.0 | 120 | 1.5966 | 0.1220 | | 1.5464 | 21.0 | 126 | 1.5950 | 0.1220 | | 1.5149 | 22.0 | 132 | 1.5934 | 0.1220 | | 
1.5149 | 23.0 | 138 | 1.5920 | 0.1220 | | 1.6056 | 24.0 | 144 | 1.5905 | 0.1220 | | 1.5161 | 25.0 | 150 | 1.5892 | 0.1220 | | 1.5161 | 26.0 | 156 | 1.5879 | 0.1220 | | 1.519 | 27.0 | 162 | 1.5868 | 0.1220 | | 1.519 | 28.0 | 168 | 1.5857 | 0.1220 | | 1.5531 | 29.0 | 174 | 1.5848 | 0.1220 | | 1.5347 | 30.0 | 180 | 1.5839 | 0.1220 | | 1.5347 | 31.0 | 186 | 1.5831 | 0.1220 | | 1.5238 | 32.0 | 192 | 1.5824 | 0.1220 | | 1.5238 | 33.0 | 198 | 1.5817 | 0.1463 | | 1.5463 | 34.0 | 204 | 1.5811 | 0.1463 | | 1.5219 | 35.0 | 210 | 1.5805 | 0.1463 | | 1.5219 | 36.0 | 216 | 1.5800 | 0.1463 | | 1.5056 | 37.0 | 222 | 1.5797 | 0.1463 | | 1.5056 | 38.0 | 228 | 1.5794 | 0.1463 | | 1.5505 | 39.0 | 234 | 1.5791 | 0.1463 | | 1.5261 | 40.0 | 240 | 1.5790 | 0.1463 | | 1.5261 | 41.0 | 246 | 1.5789 | 0.1463 | | 1.5175 | 42.0 | 252 | 1.5789 | 0.1463 | | 1.5175 | 43.0 | 258 | 1.5789 | 0.1463 | | 1.5317 | 44.0 | 264 | 1.5789 | 0.1463 | | 1.5241 | 45.0 | 270 | 1.5789 | 0.1463 | | 1.5241 | 46.0 | 276 | 1.5789 | 0.1463 | | 1.5533 | 47.0 | 282 | 1.5789 | 0.1463 | | 1.5533 | 48.0 | 288 | 1.5789 | 0.1463 | | 1.4945 | 49.0 | 294 | 1.5789 | 0.1463 | | 1.5379 | 50.0 | 300 | 1.5789 | 0.1463 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
moreover18/vit-base-patch16-224-in21k-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-in21k-finetuned-eurosat This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1770 - Accuracy: 0.9361 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.687 | 0.04 | 10 | 0.6778 | 0.6026 | | 0.6605 | 0.09 | 20 | 0.6359 | 0.7564 | | 0.6074 | 0.13 | 30 | 0.5734 | 0.7786 | | 0.5464 | 0.17 | 40 | 0.4877 | 0.8267 | | 0.4606 | 0.21 | 50 | 0.3836 | 0.8914 | | 0.379 | 0.26 | 60 | 0.3269 | 0.8877 | | 0.2746 | 0.3 | 70 | 0.2403 | 0.9198 | | 0.2974 | 0.34 | 80 | 0.2931 | 0.8890 | | 0.2459 | 0.39 | 90 | 0.2596 | 0.9016 | | 0.2507 | 0.43 | 100 | 0.2366 | 0.9123 | | 0.2627 | 0.47 | 110 | 0.2084 | 0.9224 | | 0.2481 | 0.51 | 120 | 0.2050 | 0.9270 | | 0.2372 | 0.56 | 130 | 0.2077 | 0.9267 | | 0.2468 | 0.6 | 140 | 0.2111 | 0.9230 | | 0.2272 | 0.64 | 150 | 0.1964 | 0.9267 | | 0.2568 | 0.68 | 160 | 0.1975 | 0.9270 | | 0.2608 | 0.73 | 170 | 0.2485 | 0.9048 | | 0.2641 | 0.77 | 180 | 0.2143 | 0.9227 | | 0.2347 | 0.81 | 190 | 0.1921 | 0.9307 | | 0.2231 | 0.86 | 200 | 0.1882 | 0.9315 | | 
0.2147 | 0.9 | 210 | 0.1865 | 0.9329 | | 0.2028 | 0.94 | 220 | 0.1901 | 0.9294 | | 0.1792 | 0.98 | 230 | 0.1868 | 0.9297 | | 0.2471 | 1.03 | 240 | 0.2104 | 0.9190 | | 0.1896 | 1.07 | 250 | 0.1840 | 0.9321 | | 0.2181 | 1.11 | 260 | 0.1800 | 0.9318 | | 0.1861 | 1.16 | 270 | 0.1815 | 0.9305 | | 0.1761 | 1.2 | 280 | 0.1886 | 0.9299 | | 0.1703 | 1.24 | 290 | 0.1802 | 0.9315 | | 0.184 | 1.28 | 300 | 0.1845 | 0.9321 | | 0.1864 | 1.33 | 310 | 0.1791 | 0.9342 | | 0.1857 | 1.37 | 320 | 0.1760 | 0.9347 | | 0.1558 | 1.41 | 330 | 0.1798 | 0.9318 | | 0.1852 | 1.45 | 340 | 0.1810 | 0.9323 | | 0.183 | 1.5 | 350 | 0.1775 | 0.9321 | | 0.2055 | 1.54 | 360 | 0.1789 | 0.9337 | | 0.207 | 1.58 | 370 | 0.2082 | 0.9208 | | 0.2264 | 1.63 | 380 | 0.1733 | 0.9339 | | 0.1954 | 1.67 | 390 | 0.1772 | 0.9337 | | 0.1676 | 1.71 | 400 | 0.1840 | 0.9302 | | 0.1727 | 1.75 | 410 | 0.1784 | 0.9305 | | 0.204 | 1.8 | 420 | 0.1731 | 0.9353 | | 0.1805 | 1.84 | 430 | 0.1805 | 0.9310 | | 0.1732 | 1.88 | 440 | 0.1773 | 0.9337 | | 0.1831 | 1.93 | 450 | 0.1768 | 0.9337 | | 0.1906 | 1.97 | 460 | 0.1967 | 0.9259 | | 0.1785 | 2.01 | 470 | 0.1765 | 0.9331 | | 0.1566 | 2.05 | 480 | 0.1749 | 0.9361 | | 0.1612 | 2.1 | 490 | 0.1718 | 0.9342 | | 0.1504 | 2.14 | 500 | 0.1770 | 0.9361 | | 0.1704 | 2.18 | 510 | 0.1721 | 0.9363 | | 0.1597 | 2.22 | 520 | 0.1711 | 0.9345 | | 0.1283 | 2.27 | 530 | 0.1775 | 0.9361 | | 0.1697 | 2.31 | 540 | 0.1722 | 0.9361 | | 0.1541 | 2.35 | 550 | 0.1729 | 0.9366 | | 0.1466 | 2.4 | 560 | 0.1708 | 0.9369 | | 0.1604 | 2.44 | 570 | 0.1720 | 0.9371 | | 0.1798 | 2.48 | 580 | 0.1718 | 0.9382 | | 0.134 | 2.52 | 590 | 0.1733 | 0.9371 | | 0.1215 | 2.57 | 600 | 0.1749 | 0.9369 | | 0.1284 | 2.61 | 610 | 0.1760 | 0.9358 | | 0.1449 | 2.65 | 620 | 0.1745 | 0.9361 | | 0.214 | 2.7 | 630 | 0.1729 | 0.9382 | | 0.1684 | 2.74 | 640 | 0.1724 | 0.9369 | | 0.143 | 2.78 | 650 | 0.1737 | 0.9377 | | 0.1491 | 2.82 | 660 | 0.1753 | 0.9366 | | 0.1636 | 2.87 | 670 | 0.1743 | 0.9371 | | 0.1672 | 2.91 | 680 | 0.1724 | 0.9377 | 
| 0.1501 | 2.95 | 690 | 0.1720 | 0.9374 | ### Framework versions - Transformers 4.35.0 - Pytorch 1.12.1+cu116 - Datasets 2.4.0 - Tokenizers 0.14.1
[ "not_people", "people" ]
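The eurosat card above lists `train_batch_size: 16`, `gradient_accumulation_steps: 4`, and `total_train_batch_size: 64`. The last value is derived, not independent: gradients from several micro-batches are accumulated before each optimizer step. A toy sketch of that arithmetic (not Trainer internals):

```python
def effective_batch_size(per_device_batch, accumulation_steps, num_devices=1):
    """Examples contributing to one optimizer step."""
    return per_device_batch * accumulation_steps * num_devices

# Values from the vit-base-patch16-224-in21k-finetuned-eurosat card.
print(effective_batch_size(16, 4))  # 64

# Averaging per-micro-batch gradients, each scaled by 1/accum_steps,
# matches one full-batch mean gradient (toy per-example "gradients"):
micro_batches = [[1.0, 2.0], [3.0, 5.0]]
accum = sum(sum(mb) / len(mb) / len(micro_batches) for mb in micro_batches)
full = sum(sum(mb) for mb in micro_batches) / 4
print(accum == full)  # True
```

This is why accumulation lets a 16-per-device setup behave like a batch of 64 without the memory cost.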
hkivancoral/hushem_1x_deit_tiny_rms_lr001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_rms_lr001_fold1 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.6888 - Accuracy: 0.3778 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 5.6136 | 0.2667 | | 3.3911 | 2.0 | 12 | 1.9975 | 0.2444 | | 3.3911 | 3.0 | 18 | 1.6108 | 0.2444 | | 1.8768 | 4.0 | 24 | 1.5122 | 0.2667 | | 1.5956 | 5.0 | 30 | 1.5150 | 0.2444 | | 1.5956 | 6.0 | 36 | 1.8016 | 0.2444 | | 1.4916 | 7.0 | 42 | 1.5690 | 0.4444 | | 1.4916 | 8.0 | 48 | 1.4991 | 0.2667 | | 1.4756 | 9.0 | 54 | 1.4574 | 0.2444 | | 1.4478 | 10.0 | 60 | 1.4022 | 0.2444 | | 1.4478 | 11.0 | 66 | 1.4406 | 0.2667 | | 1.421 | 12.0 | 72 | 1.3666 | 0.2444 | | 1.421 | 13.0 | 78 | 1.3200 | 0.2667 | | 1.4101 | 14.0 | 84 | 1.4311 | 0.2444 | | 1.366 | 15.0 | 90 | 1.5240 | 0.4 | | 1.366 | 16.0 | 96 | 1.1533 | 0.5333 | | 1.3311 | 17.0 | 102 | 1.1480 | 0.4667 | | 1.3311 | 18.0 | 108 | 1.5207 | 0.2444 | | 1.1912 | 19.0 | 114 | 1.6588 | 0.2889 | | 1.1923 | 20.0 | 120 | 1.4947 | 0.4667 | | 1.1923 | 21.0 | 126 | 1.3281 | 0.2444 | | 1.1796 | 22.0 | 132 | 1.3569 | 0.3333 | | 1.1796 | 
23.0 | 138 | 1.7298 | 0.2444 | | 1.1031 | 24.0 | 144 | 1.6401 | 0.3556 | | 1.2056 | 25.0 | 150 | 1.3732 | 0.2889 | | 1.2056 | 26.0 | 156 | 1.8651 | 0.3333 | | 1.1039 | 27.0 | 162 | 1.2494 | 0.4667 | | 1.1039 | 28.0 | 168 | 1.4459 | 0.2889 | | 1.0659 | 29.0 | 174 | 1.4875 | 0.3333 | | 1.0534 | 30.0 | 180 | 1.4599 | 0.3556 | | 1.0534 | 31.0 | 186 | 1.3781 | 0.3556 | | 1.0466 | 32.0 | 192 | 1.7266 | 0.4 | | 1.0466 | 33.0 | 198 | 1.5340 | 0.4222 | | 0.973 | 34.0 | 204 | 1.5429 | 0.3111 | | 1.0226 | 35.0 | 210 | 1.6233 | 0.3111 | | 1.0226 | 36.0 | 216 | 1.7204 | 0.3111 | | 0.9676 | 37.0 | 222 | 1.7918 | 0.4 | | 0.9676 | 38.0 | 228 | 1.6933 | 0.3333 | | 0.8379 | 39.0 | 234 | 1.6708 | 0.4222 | | 0.8938 | 40.0 | 240 | 1.6748 | 0.4 | | 0.8938 | 41.0 | 246 | 1.6963 | 0.3778 | | 0.8462 | 42.0 | 252 | 1.6888 | 0.3778 | | 0.8462 | 43.0 | 258 | 1.6888 | 0.3778 | | 0.8764 | 44.0 | 264 | 1.6888 | 0.3778 | | 0.8676 | 45.0 | 270 | 1.6888 | 0.3778 | | 0.8676 | 46.0 | 276 | 1.6888 | 0.3778 | | 0.8418 | 47.0 | 282 | 1.6888 | 0.3778 | | 0.8418 | 48.0 | 288 | 1.6888 | 0.3778 | | 0.8647 | 49.0 | 294 | 1.6888 | 0.3778 | | 0.872 | 50.0 | 300 | 1.6888 | 0.3778 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_rms_lr001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_rms_lr001_fold2 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.2600 - Accuracy: 0.3556 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 2.0874 | 0.2444 | | 4.6196 | 2.0 | 12 | 2.3422 | 0.2444 | | 4.6196 | 3.0 | 18 | 1.7914 | 0.2444 | | 1.8086 | 4.0 | 24 | 1.6082 | 0.2667 | | 1.5901 | 5.0 | 30 | 1.5144 | 0.2444 | | 1.5901 | 6.0 | 36 | 1.6190 | 0.2444 | | 1.5211 | 7.0 | 42 | 1.5231 | 0.2444 | | 1.5211 | 8.0 | 48 | 1.5027 | 0.2444 | | 1.4477 | 9.0 | 54 | 1.4266 | 0.2444 | | 1.4394 | 10.0 | 60 | 1.4345 | 0.2444 | | 1.4394 | 11.0 | 66 | 1.3152 | 0.4444 | | 1.3604 | 12.0 | 72 | 1.3376 | 0.2444 | | 1.3604 | 13.0 | 78 | 1.3260 | 0.2667 | | 1.3864 | 14.0 | 84 | 1.5120 | 0.2444 | | 1.3555 | 15.0 | 90 | 1.2685 | 0.3556 | | 1.3555 | 16.0 | 96 | 1.1751 | 0.4444 | | 1.3501 | 17.0 | 102 | 1.2648 | 0.4444 | | 1.3501 | 18.0 | 108 | 1.3992 | 0.3778 | | 1.2496 | 19.0 | 114 | 1.4208 | 0.2889 | | 1.2587 | 20.0 | 120 | 1.1782 | 0.4444 | | 1.2587 | 21.0 | 126 | 1.2882 | 0.4444 | | 1.2321 | 22.0 | 132 | 1.3142 | 0.4444 | | 
1.2321 | 23.0 | 138 | 1.1784 | 0.4222 | | 1.1985 | 24.0 | 144 | 1.2247 | 0.3778 | | 1.234 | 25.0 | 150 | 1.2329 | 0.3778 | | 1.234 | 26.0 | 156 | 1.2482 | 0.3556 | | 1.1913 | 27.0 | 162 | 1.4153 | 0.3111 | | 1.1913 | 28.0 | 168 | 1.2994 | 0.3333 | | 1.1911 | 29.0 | 174 | 1.1400 | 0.4667 | | 1.1955 | 30.0 | 180 | 1.2156 | 0.3778 | | 1.1955 | 31.0 | 186 | 1.2232 | 0.4 | | 1.1751 | 32.0 | 192 | 1.3853 | 0.2889 | | 1.1751 | 33.0 | 198 | 1.2309 | 0.3333 | | 1.1328 | 34.0 | 204 | 1.2338 | 0.3333 | | 1.195 | 35.0 | 210 | 1.2383 | 0.3333 | | 1.195 | 36.0 | 216 | 1.2991 | 0.3778 | | 1.1661 | 37.0 | 222 | 1.3228 | 0.3556 | | 1.1661 | 38.0 | 228 | 1.2550 | 0.3778 | | 1.0748 | 39.0 | 234 | 1.2591 | 0.3556 | | 1.1122 | 40.0 | 240 | 1.2234 | 0.3778 | | 1.1122 | 41.0 | 246 | 1.2608 | 0.3556 | | 1.102 | 42.0 | 252 | 1.2600 | 0.3556 | | 1.102 | 43.0 | 258 | 1.2600 | 0.3556 | | 1.0792 | 44.0 | 264 | 1.2600 | 0.3556 | | 1.1126 | 45.0 | 270 | 1.2600 | 0.3556 | | 1.1126 | 46.0 | 276 | 1.2600 | 0.3556 | | 1.0704 | 47.0 | 282 | 1.2600 | 0.3556 | | 1.0704 | 48.0 | 288 | 1.2600 | 0.3556 | | 1.1302 | 49.0 | 294 | 1.2600 | 0.3556 | | 1.0797 | 50.0 | 300 | 1.2600 | 0.3556 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_rms_lr001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_rms_lr001_fold3 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.7042 - Accuracy: 0.2791 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 3.1061 | 0.2326 | | 4.0184 | 2.0 | 12 | 1.7666 | 0.2558 | | 4.0184 | 3.0 | 18 | 1.6279 | 0.2558 | | 1.7385 | 4.0 | 24 | 1.9636 | 0.2558 | | 1.583 | 5.0 | 30 | 1.6503 | 0.2558 | | 1.583 | 6.0 | 36 | 1.4630 | 0.2326 | | 1.4859 | 7.0 | 42 | 3.2936 | 0.2326 | | 1.4859 | 8.0 | 48 | 2.0073 | 0.2558 | | 2.0303 | 9.0 | 54 | 1.4859 | 0.2326 | | 1.4062 | 10.0 | 60 | 1.6529 | 0.2326 | | 1.4062 | 11.0 | 66 | 1.4259 | 0.2791 | | 1.359 | 12.0 | 72 | 1.3892 | 0.2558 | | 1.359 | 13.0 | 78 | 1.4650 | 0.3023 | | 1.3464 | 14.0 | 84 | 1.4368 | 0.2558 | | 1.262 | 15.0 | 90 | 1.4241 | 0.2558 | | 1.262 | 16.0 | 96 | 1.6562 | 0.3023 | | 1.2521 | 17.0 | 102 | 1.3729 | 0.3023 | | 1.2521 | 18.0 | 108 | 1.5241 | 0.2093 | | 1.2212 | 19.0 | 114 | 1.5032 | 0.3023 | | 1.1882 | 20.0 | 120 | 1.4178 | 0.2558 | | 1.1882 | 21.0 | 126 | 1.8156 | 0.3023 | | 1.1382 | 22.0 | 132 | 1.5280 | 0.2558 | | 1.1382 | 
23.0 | 138 | 1.5037 | 0.2326 | | 1.0802 | 24.0 | 144 | 1.5058 | 0.3488 | | 1.1083 | 25.0 | 150 | 1.5421 | 0.2791 | | 1.1083 | 26.0 | 156 | 1.5398 | 0.2558 | | 1.0555 | 27.0 | 162 | 1.8560 | 0.2791 | | 1.0555 | 28.0 | 168 | 1.9193 | 0.2558 | | 1.0051 | 29.0 | 174 | 1.5934 | 0.3256 | | 0.958 | 30.0 | 180 | 1.6481 | 0.2791 | | 0.958 | 31.0 | 186 | 1.5950 | 0.2791 | | 0.9855 | 32.0 | 192 | 1.5539 | 0.2558 | | 0.9855 | 33.0 | 198 | 1.6644 | 0.2791 | | 0.9482 | 34.0 | 204 | 1.6743 | 0.2326 | | 0.9401 | 35.0 | 210 | 1.6352 | 0.3023 | | 0.9401 | 36.0 | 216 | 1.6896 | 0.2791 | | 0.9225 | 37.0 | 222 | 1.7369 | 0.2326 | | 0.9225 | 38.0 | 228 | 1.6916 | 0.2558 | | 0.8891 | 39.0 | 234 | 1.6919 | 0.2791 | | 0.8732 | 40.0 | 240 | 1.7104 | 0.2791 | | 0.8732 | 41.0 | 246 | 1.7028 | 0.2791 | | 0.8715 | 42.0 | 252 | 1.7042 | 0.2791 | | 0.8715 | 43.0 | 258 | 1.7042 | 0.2791 | | 0.8826 | 44.0 | 264 | 1.7042 | 0.2791 | | 0.8986 | 45.0 | 270 | 1.7042 | 0.2791 | | 0.8986 | 46.0 | 276 | 1.7042 | 0.2791 | | 0.8589 | 47.0 | 282 | 1.7042 | 0.2791 | | 0.8589 | 48.0 | 288 | 1.7042 | 0.2791 | | 0.9236 | 49.0 | 294 | 1.7042 | 0.2791 | | 0.8539 | 50.0 | 300 | 1.7042 | 0.2791 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_rms_lr001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_rms_lr001_fold4 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.1726 - Accuracy: 0.4524 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 5.8408 | 0.2619 | | 4.7138 | 2.0 | 12 | 1.8632 | 0.2381 | | 4.7138 | 3.0 | 18 | 1.9369 | 0.2619 | | 1.8439 | 4.0 | 24 | 1.7584 | 0.2381 | | 1.6449 | 5.0 | 30 | 1.4723 | 0.2619 | | 1.6449 | 6.0 | 36 | 1.7187 | 0.2381 | | 1.5171 | 7.0 | 42 | 1.4960 | 0.2381 | | 1.5171 | 8.0 | 48 | 1.3962 | 0.2619 | | 1.4701 | 9.0 | 54 | 1.4942 | 0.2619 | | 1.4652 | 10.0 | 60 | 1.3642 | 0.2381 | | 1.4652 | 11.0 | 66 | 1.4490 | 0.2619 | | 1.4547 | 12.0 | 72 | 1.1912 | 0.4524 | | 1.4547 | 13.0 | 78 | 1.4737 | 0.2857 | | 1.3944 | 14.0 | 84 | 1.2170 | 0.4286 | | 1.3536 | 15.0 | 90 | 1.3540 | 0.2381 | | 1.3536 | 16.0 | 96 | 1.0819 | 0.6190 | | 1.2835 | 17.0 | 102 | 1.1640 | 0.4286 | | 1.2835 | 18.0 | 108 | 1.2309 | 0.3333 | | 1.306 | 19.0 | 114 | 1.3288 | 0.2857 | | 1.2522 | 20.0 | 120 | 1.4561 | 0.2857 | | 1.2522 | 21.0 | 126 | 1.0774 | 0.4762 | | 1.2491 | 22.0 | 132 | 1.1807 | 0.4286 | | 
1.2491 | 23.0 | 138 | 1.1668 | 0.3810 | | 1.1882 | 24.0 | 144 | 1.2075 | 0.4286 | | 1.2028 | 25.0 | 150 | 1.2635 | 0.3333 | | 1.2028 | 26.0 | 156 | 1.1653 | 0.3810 | | 1.1822 | 27.0 | 162 | 1.1741 | 0.4048 | | 1.1822 | 28.0 | 168 | 1.4014 | 0.2619 | | 1.1086 | 29.0 | 174 | 1.0259 | 0.5476 | | 1.1111 | 30.0 | 180 | 1.1225 | 0.5238 | | 1.1111 | 31.0 | 186 | 1.1813 | 0.5 | | 1.0458 | 32.0 | 192 | 1.1678 | 0.4286 | | 1.0458 | 33.0 | 198 | 1.1915 | 0.4048 | | 1.1348 | 34.0 | 204 | 1.3148 | 0.5 | | 0.9776 | 35.0 | 210 | 1.0082 | 0.5238 | | 0.9776 | 36.0 | 216 | 0.9144 | 0.6190 | | 0.9456 | 37.0 | 222 | 1.0677 | 0.4762 | | 0.9456 | 38.0 | 228 | 1.0695 | 0.5238 | | 0.8714 | 39.0 | 234 | 1.1982 | 0.4762 | | 0.8643 | 40.0 | 240 | 1.1143 | 0.4048 | | 0.8643 | 41.0 | 246 | 1.1270 | 0.4524 | | 0.7971 | 42.0 | 252 | 1.1726 | 0.4524 | | 0.7971 | 43.0 | 258 | 1.1726 | 0.4524 | | 0.7662 | 44.0 | 264 | 1.1726 | 0.4524 | | 0.7801 | 45.0 | 270 | 1.1726 | 0.4524 | | 0.7801 | 46.0 | 276 | 1.1726 | 0.4524 | | 0.7773 | 47.0 | 282 | 1.1726 | 0.4524 | | 0.7773 | 48.0 | 288 | 1.1726 | 0.4524 | | 0.7728 | 49.0 | 294 | 1.1726 | 0.4524 | | 0.7828 | 50.0 | 300 | 1.1726 | 0.4524 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_rms_lr001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_rms_lr001_fold5 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.0599 - Accuracy: 0.5366 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 2.6067 | 0.2439 | | 4.0909 | 2.0 | 12 | 1.8085 | 0.2439 | | 4.0909 | 3.0 | 18 | 1.7809 | 0.2439 | | 2.0948 | 4.0 | 24 | 1.7586 | 0.2439 | | 1.6719 | 5.0 | 30 | 1.5135 | 0.2439 | | 1.6719 | 6.0 | 36 | 1.7849 | 0.2683 | | 1.5694 | 7.0 | 42 | 1.4636 | 0.3902 | | 1.5694 | 8.0 | 48 | 1.4809 | 0.2683 | | 1.519 | 9.0 | 54 | 1.3587 | 0.3415 | | 1.5241 | 10.0 | 60 | 1.3823 | 0.2439 | | 1.5241 | 11.0 | 66 | 1.3645 | 0.3415 | | 1.4557 | 12.0 | 72 | 1.2525 | 0.3659 | | 1.4557 | 13.0 | 78 | 1.2955 | 0.3171 | | 1.3674 | 14.0 | 84 | 1.3174 | 0.3415 | | 1.3868 | 15.0 | 90 | 1.2787 | 0.3415 | | 1.3868 | 16.0 | 96 | 1.6408 | 0.2683 | | 1.3152 | 17.0 | 102 | 1.2750 | 0.3171 | | 1.3152 | 18.0 | 108 | 1.0560 | 0.5366 | | 1.2693 | 19.0 | 114 | 1.3256 | 0.4878 | | 1.2554 | 20.0 | 120 | 1.3190 | 0.3902 | | 1.2554 | 21.0 | 126 | 1.2498 | 0.3902 | | 1.1813 | 22.0 | 132 | 1.2514 | 0.3902 | | 
1.1813 | 23.0 | 138 | 1.0907 | 0.5366 | | 1.1113 | 24.0 | 144 | 1.2821 | 0.3415 | | 1.1728 | 25.0 | 150 | 1.1433 | 0.4878 | | 1.1728 | 26.0 | 156 | 1.0143 | 0.5366 | | 1.1037 | 27.0 | 162 | 0.9542 | 0.5854 | | 1.1037 | 28.0 | 168 | 1.1443 | 0.5122 | | 1.0914 | 29.0 | 174 | 1.0904 | 0.4878 | | 1.1385 | 30.0 | 180 | 1.1995 | 0.4146 | | 1.1385 | 31.0 | 186 | 0.9746 | 0.6098 | | 1.0636 | 32.0 | 192 | 1.1104 | 0.4634 | | 1.0636 | 33.0 | 198 | 0.9890 | 0.6098 | | 1.0129 | 34.0 | 204 | 1.2113 | 0.3902 | | 0.999 | 35.0 | 210 | 1.0001 | 0.6098 | | 0.999 | 36.0 | 216 | 1.0972 | 0.5122 | | 0.9802 | 37.0 | 222 | 1.1639 | 0.4390 | | 0.9802 | 38.0 | 228 | 1.0730 | 0.5122 | | 0.9625 | 39.0 | 234 | 1.0471 | 0.4878 | | 0.9424 | 40.0 | 240 | 1.0692 | 0.5366 | | 0.9424 | 41.0 | 246 | 1.0654 | 0.5366 | | 0.9521 | 42.0 | 252 | 1.0599 | 0.5366 | | 0.9521 | 43.0 | 258 | 1.0599 | 0.5366 | | 0.9184 | 44.0 | 264 | 1.0599 | 0.5366 | | 0.9335 | 45.0 | 270 | 1.0599 | 0.5366 | | 0.9335 | 46.0 | 276 | 1.0599 | 0.5366 | | 0.9251 | 47.0 | 282 | 1.0599 | 0.5366 | | 0.9251 | 48.0 | 288 | 1.0599 | 0.5366 | | 0.9168 | 49.0 | 294 | 1.0599 | 0.5366 | | 0.8964 | 50.0 | 300 | 1.0599 | 0.5366 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_rms_lr0001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_rms_lr0001_fold1 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 3.0724 - Accuracy: 0.5778 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.7581 | 0.2444 | | 2.2176 | 2.0 | 12 | 1.4575 | 0.2444 | | 2.2176 | 3.0 | 18 | 1.4070 | 0.2444 | | 1.4625 | 4.0 | 24 | 1.4089 | 0.2444 | | 1.4303 | 5.0 | 30 | 1.4530 | 0.2667 | | 1.4303 | 6.0 | 36 | 1.3605 | 0.3778 | | 1.3759 | 7.0 | 42 | 1.4068 | 0.3778 | | 1.3759 | 8.0 | 48 | 1.3279 | 0.2889 | | 1.3199 | 9.0 | 54 | 1.5997 | 0.2444 | | 1.2335 | 10.0 | 60 | 1.5834 | 0.2444 | | 1.2335 | 11.0 | 66 | 1.6144 | 0.3333 | | 1.0406 | 12.0 | 72 | 1.1266 | 0.5111 | | 1.0406 | 13.0 | 78 | 1.6894 | 0.3556 | | 0.851 | 14.0 | 84 | 1.8080 | 0.4444 | | 0.5856 | 15.0 | 90 | 2.0552 | 0.3778 | | 0.5856 | 16.0 | 96 | 1.3379 | 0.4889 | | 0.3402 | 17.0 | 102 | 1.4787 | 0.4889 | | 0.3402 | 18.0 | 108 | 2.2439 | 0.4222 | | 0.233 | 19.0 | 114 | 1.7239 | 0.4889 | | 0.1016 | 20.0 | 120 | 2.5401 | 0.4222 | | 0.1016 | 21.0 | 126 | 1.5433 | 0.5778 | | 0.0994 | 22.0 | 132 | 1.8891 | 0.5333 | | 
0.0994 | 23.0 | 138 | 1.9405 | 0.4889 | | 0.0839 | 24.0 | 144 | 1.5418 | 0.5778 | | 0.0282 | 25.0 | 150 | 2.4010 | 0.5778 | | 0.0282 | 26.0 | 156 | 2.6175 | 0.5778 | | 0.0011 | 27.0 | 162 | 2.7024 | 0.5778 | | 0.0011 | 28.0 | 168 | 2.7954 | 0.5778 | | 0.0007 | 29.0 | 174 | 2.8362 | 0.5778 | | 0.0006 | 30.0 | 180 | 2.8852 | 0.5778 | | 0.0006 | 31.0 | 186 | 2.9050 | 0.5778 | | 0.0005 | 32.0 | 192 | 2.9414 | 0.5778 | | 0.0005 | 33.0 | 198 | 2.9746 | 0.5778 | | 0.0005 | 34.0 | 204 | 2.9947 | 0.5778 | | 0.0004 | 35.0 | 210 | 3.0141 | 0.5778 | | 0.0004 | 36.0 | 216 | 3.0300 | 0.5778 | | 0.0004 | 37.0 | 222 | 3.0447 | 0.5778 | | 0.0004 | 38.0 | 228 | 3.0565 | 0.5778 | | 0.0003 | 39.0 | 234 | 3.0642 | 0.5778 | | 0.0003 | 40.0 | 240 | 3.0696 | 0.5778 | | 0.0003 | 41.0 | 246 | 3.0717 | 0.5778 | | 0.0003 | 42.0 | 252 | 3.0724 | 0.5778 | | 0.0003 | 43.0 | 258 | 3.0724 | 0.5778 | | 0.0003 | 44.0 | 264 | 3.0724 | 0.5778 | | 0.0003 | 45.0 | 270 | 3.0724 | 0.5778 | | 0.0003 | 46.0 | 276 | 3.0724 | 0.5778 | | 0.0003 | 47.0 | 282 | 3.0724 | 0.5778 | | 0.0003 | 48.0 | 288 | 3.0724 | 0.5778 | | 0.0003 | 49.0 | 294 | 3.0724 | 0.5778 | | 0.0003 | 50.0 | 300 | 3.0724 | 0.5778 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
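The label list above (shared by every HuSHeM fold in this dump) orders the four sperm-head morphology classes by index. A minimal sketch of turning such a list into the `id2label`/`label2id` mappings an image-classification config typically carries (the class names are taken from the list above; the mapping code itself is an illustration, not part of the original cards):

```python
# Class names exactly as listed for the HuSHeM folds above.
labels = ["01_normal", "02_tapered", "03_pyriform", "04_amorphous"]

# Index-to-name and name-to-index mappings, in the shape a
# transformers-style config stores them.
id2label = {i: name for i, name in enumerate(labels)}
label2id = {name: i for i, name in enumerate(labels)}

print(id2label[2])  # -> 03_pyriform
```

The same four-class mapping applies to all folds, since each card uses the identical label list.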
hkivancoral/hushem_1x_deit_tiny_rms_lr0001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_rms_lr0001_fold2 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.7300 - Accuracy: 0.6889 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 3.0363 | 0.2444 | | 2.6477 | 2.0 | 12 | 1.6954 | 0.2444 | | 2.6477 | 3.0 | 18 | 1.4980 | 0.2444 | | 1.482 | 4.0 | 24 | 1.3454 | 0.3556 | | 1.4166 | 5.0 | 30 | 1.3094 | 0.4 | | 1.4166 | 6.0 | 36 | 1.6095 | 0.2444 | | 1.3414 | 7.0 | 42 | 1.9023 | 0.2444 | | 1.3414 | 8.0 | 48 | 1.3957 | 0.2222 | | 1.2396 | 9.0 | 54 | 1.1738 | 0.4 | | 1.2068 | 10.0 | 60 | 1.2312 | 0.4889 | | 1.2068 | 11.0 | 66 | 1.0903 | 0.6 | | 0.9263 | 12.0 | 72 | 0.9211 | 0.5778 | | 0.9263 | 13.0 | 78 | 1.1912 | 0.4444 | | 0.8539 | 14.0 | 84 | 1.2631 | 0.5333 | | 0.6672 | 15.0 | 90 | 1.2596 | 0.5111 | | 0.6672 | 16.0 | 96 | 1.3999 | 0.4889 | | 0.5299 | 17.0 | 102 | 1.2988 | 0.5556 | | 0.5299 | 18.0 | 108 | 1.3328 | 0.5333 | | 0.3853 | 19.0 | 114 | 1.0485 | 0.6222 | | 0.332 | 20.0 | 120 | 1.1428 | 0.5778 | | 0.332 | 21.0 | 126 | 1.0486 | 0.6444 | | 0.1829 | 22.0 | 132 | 1.0866 | 0.6667 | | 0.1829 | 23.0 
| 138 | 1.7727 | 0.5778 | | 0.111 | 24.0 | 144 | 1.2950 | 0.6889 | | 0.0444 | 25.0 | 150 | 1.4579 | 0.7111 | | 0.0444 | 26.0 | 156 | 1.4269 | 0.6889 | | 0.0017 | 27.0 | 162 | 1.4804 | 0.6889 | | 0.0017 | 28.0 | 168 | 1.5281 | 0.6889 | | 0.0007 | 29.0 | 174 | 1.5658 | 0.6667 | | 0.0005 | 30.0 | 180 | 1.5943 | 0.6667 | | 0.0005 | 31.0 | 186 | 1.6212 | 0.6667 | | 0.0004 | 32.0 | 192 | 1.6444 | 0.6667 | | 0.0004 | 33.0 | 198 | 1.6608 | 0.6667 | | 0.0003 | 34.0 | 204 | 1.6759 | 0.6667 | | 0.0003 | 35.0 | 210 | 1.6896 | 0.6667 | | 0.0003 | 36.0 | 216 | 1.7018 | 0.6667 | | 0.0003 | 37.0 | 222 | 1.7108 | 0.6889 | | 0.0003 | 38.0 | 228 | 1.7185 | 0.6889 | | 0.0003 | 39.0 | 234 | 1.7237 | 0.6889 | | 0.0002 | 40.0 | 240 | 1.7275 | 0.6889 | | 0.0002 | 41.0 | 246 | 1.7295 | 0.6889 | | 0.0003 | 42.0 | 252 | 1.7300 | 0.6889 | | 0.0003 | 43.0 | 258 | 1.7300 | 0.6889 | | 0.0002 | 44.0 | 264 | 1.7300 | 0.6889 | | 0.0002 | 45.0 | 270 | 1.7300 | 0.6889 | | 0.0002 | 46.0 | 276 | 1.7300 | 0.6889 | | 0.0002 | 47.0 | 282 | 1.7300 | 0.6889 | | 0.0002 | 48.0 | 288 | 1.7300 | 0.6889 | | 0.0002 | 49.0 | 294 | 1.7300 | 0.6889 | | 0.0002 | 50.0 | 300 | 1.7300 | 0.6889 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_rms_lr0001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_rms_lr0001_fold3 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 2.7920 - Accuracy: 0.4651 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.3959 | 0.2558 | | 2.0191 | 2.0 | 12 | 1.4540 | 0.2791 | | 2.0191 | 3.0 | 18 | 1.5040 | 0.3721 | | 1.4688 | 4.0 | 24 | 1.3687 | 0.3256 | | 1.3397 | 5.0 | 30 | 1.3082 | 0.4186 | | 1.3397 | 6.0 | 36 | 1.3917 | 0.3256 | | 1.1986 | 7.0 | 42 | 1.4209 | 0.3256 | | 1.1986 | 8.0 | 48 | 1.4510 | 0.3721 | | 1.0698 | 9.0 | 54 | 1.4225 | 0.3023 | | 0.8214 | 10.0 | 60 | 1.5289 | 0.4186 | | 0.8214 | 11.0 | 66 | 1.4884 | 0.4419 | | 0.5823 | 12.0 | 72 | 2.0101 | 0.3256 | | 0.5823 | 13.0 | 78 | 1.6036 | 0.5349 | | 0.4001 | 14.0 | 84 | 1.6332 | 0.4186 | | 0.2362 | 15.0 | 90 | 2.0095 | 0.4884 | | 0.2362 | 16.0 | 96 | 1.8563 | 0.5581 | | 0.1078 | 17.0 | 102 | 2.1555 | 0.5116 | | 0.1078 | 18.0 | 108 | 2.0019 | 0.5581 | | 0.0769 | 19.0 | 114 | 2.3852 | 0.4884 | | 0.0351 | 20.0 | 120 | 2.4880 | 0.5349 | | 0.0351 | 21.0 | 126 | 2.5950 | 0.4884 | | 0.001 | 22.0 | 132 | 2.5992 | 0.4884 | | 0.001 | 
23.0 | 138 | 2.6117 | 0.4884 | | 0.0006 | 24.0 | 144 | 2.6223 | 0.4884 | | 0.0005 | 25.0 | 150 | 2.6443 | 0.4884 | | 0.0005 | 26.0 | 156 | 2.6672 | 0.4884 | | 0.0004 | 27.0 | 162 | 2.6883 | 0.4884 | | 0.0004 | 28.0 | 168 | 2.6994 | 0.4884 | | 0.0003 | 29.0 | 174 | 2.7093 | 0.4884 | | 0.0003 | 30.0 | 180 | 2.7225 | 0.4884 | | 0.0003 | 31.0 | 186 | 2.7350 | 0.4884 | | 0.0003 | 32.0 | 192 | 2.7468 | 0.4651 | | 0.0003 | 33.0 | 198 | 2.7564 | 0.4651 | | 0.0003 | 34.0 | 204 | 2.7644 | 0.4651 | | 0.0002 | 35.0 | 210 | 2.7717 | 0.4651 | | 0.0002 | 36.0 | 216 | 2.7756 | 0.4651 | | 0.0002 | 37.0 | 222 | 2.7805 | 0.4651 | | 0.0002 | 38.0 | 228 | 2.7848 | 0.4651 | | 0.0002 | 39.0 | 234 | 2.7876 | 0.4651 | | 0.0002 | 40.0 | 240 | 2.7903 | 0.4651 | | 0.0002 | 41.0 | 246 | 2.7917 | 0.4651 | | 0.0002 | 42.0 | 252 | 2.7920 | 0.4651 | | 0.0002 | 43.0 | 258 | 2.7920 | 0.4651 | | 0.0002 | 44.0 | 264 | 2.7920 | 0.4651 | | 0.0002 | 45.0 | 270 | 2.7920 | 0.4651 | | 0.0002 | 46.0 | 276 | 2.7920 | 0.4651 | | 0.0002 | 47.0 | 282 | 2.7920 | 0.4651 | | 0.0002 | 48.0 | 288 | 2.7920 | 0.4651 | | 0.0002 | 49.0 | 294 | 2.7920 | 0.4651 | | 0.0002 | 50.0 | 300 | 2.7920 | 0.4651 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_rms_lr0001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_rms_lr0001_fold4 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.2563 - Accuracy: 0.7619 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.5375 | 0.2619 | | 2.1724 | 2.0 | 12 | 2.0732 | 0.2381 | | 2.1724 | 3.0 | 18 | 1.4131 | 0.2619 | | 1.6782 | 4.0 | 24 | 1.3425 | 0.4524 | | 1.4417 | 5.0 | 30 | 1.3458 | 0.2857 | | 1.4417 | 6.0 | 36 | 1.2594 | 0.6429 | | 1.3676 | 7.0 | 42 | 1.1740 | 0.4762 | | 1.3676 | 8.0 | 48 | 1.2511 | 0.3571 | | 1.2512 | 9.0 | 54 | 0.8438 | 0.6190 | | 0.8279 | 10.0 | 60 | 1.0096 | 0.5 | | 0.8279 | 11.0 | 66 | 0.7631 | 0.6667 | | 0.5322 | 12.0 | 72 | 0.6526 | 0.7857 | | 0.5322 | 13.0 | 78 | 0.6963 | 0.7143 | | 0.257 | 14.0 | 84 | 0.7429 | 0.7619 | | 0.1198 | 15.0 | 90 | 0.9632 | 0.6905 | | 0.1198 | 16.0 | 96 | 1.2325 | 0.7143 | | 0.0178 | 17.0 | 102 | 1.2090 | 0.7381 | | 0.0178 | 18.0 | 108 | 1.1054 | 0.7619 | | 0.0016 | 19.0 | 114 | 1.2184 | 0.7143 | | 0.0009 | 20.0 | 120 | 1.1716 | 0.7619 | | 0.0009 | 21.0 | 126 | 1.1784 | 0.7619 | | 0.0004 | 22.0 | 132 | 1.1866 | 0.7619 | | 0.0004 
| 23.0 | 138 | 1.1935 | 0.7619 | | 0.0003 | 24.0 | 144 | 1.1995 | 0.7619 | | 0.0003 | 25.0 | 150 | 1.2046 | 0.7619 | | 0.0003 | 26.0 | 156 | 1.2111 | 0.7619 | | 0.0003 | 27.0 | 162 | 1.2169 | 0.7619 | | 0.0003 | 28.0 | 168 | 1.2218 | 0.7619 | | 0.0002 | 29.0 | 174 | 1.2261 | 0.7619 | | 0.0002 | 30.0 | 180 | 1.2318 | 0.7619 | | 0.0002 | 31.0 | 186 | 1.2354 | 0.7619 | | 0.0002 | 32.0 | 192 | 1.2392 | 0.7619 | | 0.0002 | 33.0 | 198 | 1.2423 | 0.7619 | | 0.0002 | 34.0 | 204 | 1.2453 | 0.7619 | | 0.0002 | 35.0 | 210 | 1.2477 | 0.7619 | | 0.0002 | 36.0 | 216 | 1.2499 | 0.7619 | | 0.0002 | 37.0 | 222 | 1.2519 | 0.7619 | | 0.0002 | 38.0 | 228 | 1.2534 | 0.7619 | | 0.0002 | 39.0 | 234 | 1.2547 | 0.7619 | | 0.0002 | 40.0 | 240 | 1.2556 | 0.7619 | | 0.0002 | 41.0 | 246 | 1.2562 | 0.7619 | | 0.0002 | 42.0 | 252 | 1.2563 | 0.7619 | | 0.0002 | 43.0 | 258 | 1.2563 | 0.7619 | | 0.0002 | 44.0 | 264 | 1.2563 | 0.7619 | | 0.0002 | 45.0 | 270 | 1.2563 | 0.7619 | | 0.0002 | 46.0 | 276 | 1.2563 | 0.7619 | | 0.0002 | 47.0 | 282 | 1.2563 | 0.7619 | | 0.0002 | 48.0 | 288 | 1.2563 | 0.7619 | | 0.0002 | 49.0 | 294 | 1.2563 | 0.7619 | | 0.0002 | 50.0 | 300 | 1.2563 | 0.7619 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_rms_lr0001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_rms_lr0001_fold5 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 2.1537 - Accuracy: 0.5610 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.5741 | 0.2683 | | 1.8922 | 2.0 | 12 | 1.3978 | 0.2683 | | 1.8922 | 3.0 | 18 | 1.4032 | 0.2439 | | 1.5101 | 4.0 | 24 | 1.4021 | 0.2683 | | 1.38 | 5.0 | 30 | 2.1528 | 0.2439 | | 1.38 | 6.0 | 36 | 1.4141 | 0.2439 | | 1.4096 | 7.0 | 42 | 1.2484 | 0.4390 | | 1.4096 | 8.0 | 48 | 1.2607 | 0.4390 | | 1.2381 | 9.0 | 54 | 0.9950 | 0.5366 | | 1.1539 | 10.0 | 60 | 1.0350 | 0.5610 | | 1.1539 | 11.0 | 66 | 1.2716 | 0.3415 | | 0.9039 | 12.0 | 72 | 1.0596 | 0.5854 | | 0.9039 | 13.0 | 78 | 1.5972 | 0.4146 | | 0.6191 | 14.0 | 84 | 1.9855 | 0.4390 | | 0.4358 | 15.0 | 90 | 1.2403 | 0.4878 | | 0.4358 | 16.0 | 96 | 2.3374 | 0.4390 | | 0.2291 | 17.0 | 102 | 1.5475 | 0.4390 | | 0.2291 | 18.0 | 108 | 1.2789 | 0.6341 | | 0.1203 | 19.0 | 114 | 1.8441 | 0.4390 | | 0.0604 | 20.0 | 120 | 1.7948 | 0.4878 | | 0.0604 | 21.0 | 126 | 2.0211 | 0.4634 | | 0.0322 | 22.0 | 132 | 1.8178 | 0.5366 | | 0.0322 
| 23.0 | 138 | 2.0950 | 0.4878 | | 0.017 | 24.0 | 144 | 2.0410 | 0.5122 | | 0.0011 | 25.0 | 150 | 2.0405 | 0.5122 | | 0.0011 | 26.0 | 156 | 2.0495 | 0.5122 | | 0.0007 | 27.0 | 162 | 2.0594 | 0.5122 | | 0.0007 | 28.0 | 168 | 2.0747 | 0.5122 | | 0.0006 | 29.0 | 174 | 2.0825 | 0.5610 | | 0.0005 | 30.0 | 180 | 2.0915 | 0.5610 | | 0.0005 | 31.0 | 186 | 2.1017 | 0.5610 | | 0.0004 | 32.0 | 192 | 2.1110 | 0.5610 | | 0.0004 | 33.0 | 198 | 2.1199 | 0.5610 | | 0.0004 | 34.0 | 204 | 2.1276 | 0.5610 | | 0.0004 | 35.0 | 210 | 2.1335 | 0.5610 | | 0.0004 | 36.0 | 216 | 2.1398 | 0.5610 | | 0.0004 | 37.0 | 222 | 2.1439 | 0.5610 | | 0.0004 | 38.0 | 228 | 2.1473 | 0.5610 | | 0.0003 | 39.0 | 234 | 2.1497 | 0.5610 | | 0.0003 | 40.0 | 240 | 2.1519 | 0.5610 | | 0.0003 | 41.0 | 246 | 2.1532 | 0.5610 | | 0.0003 | 42.0 | 252 | 2.1537 | 0.5610 | | 0.0003 | 43.0 | 258 | 2.1537 | 0.5610 | | 0.0003 | 44.0 | 264 | 2.1537 | 0.5610 | | 0.0003 | 45.0 | 270 | 2.1537 | 0.5610 | | 0.0003 | 46.0 | 276 | 2.1537 | 0.5610 | | 0.0003 | 47.0 | 282 | 2.1537 | 0.5610 | | 0.0003 | 48.0 | 288 | 2.1537 | 0.5610 | | 0.0003 | 49.0 | 294 | 2.1537 | 0.5610 | | 0.0003 | 50.0 | 300 | 2.1537 | 0.5610 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
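Every fold's log shows 6 optimizer steps per epoch over 50 epochs, ending at step 300. A quick sanity check of that schedule (the training-set size here is an inference from batch size times steps per epoch, not a figure stated in the cards):

```python
steps_per_epoch = 6      # from the Step column: 6, 12, 18, ...
num_epochs = 50          # from the hyperparameters
train_batch_size = 32    # from the hyperparameters

# Total optimizer steps, matching the last row of each results table.
total_steps = steps_per_epoch * num_epochs
print(total_steps)  # -> 300

# Six batches of 32 bound the training split: at most 192 images
# (and more than 160, or only 5 steps per epoch would be logged).
max_train_images = steps_per_epoch * train_batch_size
print(max_train_images)  # -> 192
```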
hkivancoral/hushem_1x_deit_tiny_rms_lr00001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_rms_lr00001_fold1 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.2449 - Accuracy: 0.6889 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.2771 | 0.4 | | 1.263 | 2.0 | 12 | 1.0784 | 0.5556 | | 1.263 | 3.0 | 18 | 0.9616 | 0.5556 | | 0.5461 | 4.0 | 24 | 1.0339 | 0.6889 | | 0.2446 | 5.0 | 30 | 0.9939 | 0.6667 | | 0.2446 | 6.0 | 36 | 1.2442 | 0.4889 | | 0.0817 | 7.0 | 42 | 0.7980 | 0.6222 | | 0.0817 | 8.0 | 48 | 0.8675 | 0.6444 | | 0.0302 | 9.0 | 54 | 0.8969 | 0.6889 | | 0.009 | 10.0 | 60 | 0.9399 | 0.6222 | | 0.009 | 11.0 | 66 | 1.0591 | 0.7111 | | 0.0037 | 12.0 | 72 | 1.0283 | 0.6667 | | 0.0037 | 13.0 | 78 | 1.0855 | 0.6667 | | 0.0025 | 14.0 | 84 | 1.1121 | 0.6667 | | 0.0019 | 15.0 | 90 | 1.1082 | 0.6667 | | 0.0019 | 16.0 | 96 | 1.1158 | 0.6667 | | 0.0015 | 17.0 | 102 | 1.1382 | 0.6667 | | 0.0015 | 18.0 | 108 | 1.1574 | 0.6667 | | 0.0013 | 19.0 | 114 | 1.1739 | 0.6667 | | 0.0011 | 20.0 | 120 | 1.1736 | 0.6667 | | 0.0011 | 21.0 | 126 | 1.1594 | 0.6889 | | 0.001 | 22.0 | 132 | 1.1738 | 0.6889 | | 0.001 | 
23.0 | 138 | 1.1962 | 0.6667 | | 0.0009 | 24.0 | 144 | 1.1951 | 0.6889 | | 0.0008 | 25.0 | 150 | 1.2004 | 0.6889 | | 0.0008 | 26.0 | 156 | 1.1996 | 0.6889 | | 0.0008 | 27.0 | 162 | 1.2076 | 0.6889 | | 0.0008 | 28.0 | 168 | 1.2144 | 0.6889 | | 0.0007 | 29.0 | 174 | 1.2117 | 0.6889 | | 0.0007 | 30.0 | 180 | 1.2204 | 0.6889 | | 0.0007 | 31.0 | 186 | 1.2217 | 0.6889 | | 0.0006 | 32.0 | 192 | 1.2270 | 0.6889 | | 0.0006 | 33.0 | 198 | 1.2321 | 0.6889 | | 0.0006 | 34.0 | 204 | 1.2307 | 0.6889 | | 0.0006 | 35.0 | 210 | 1.2313 | 0.6889 | | 0.0006 | 36.0 | 216 | 1.2374 | 0.6889 | | 0.0006 | 37.0 | 222 | 1.2446 | 0.6889 | | 0.0006 | 38.0 | 228 | 1.2471 | 0.6889 | | 0.0005 | 39.0 | 234 | 1.2452 | 0.6889 | | 0.0006 | 40.0 | 240 | 1.2458 | 0.6889 | | 0.0006 | 41.0 | 246 | 1.2454 | 0.6889 | | 0.0005 | 42.0 | 252 | 1.2449 | 0.6889 | | 0.0005 | 43.0 | 258 | 1.2449 | 0.6889 | | 0.0005 | 44.0 | 264 | 1.2449 | 0.6889 | | 0.0005 | 45.0 | 270 | 1.2449 | 0.6889 | | 0.0005 | 46.0 | 276 | 1.2449 | 0.6889 | | 0.0005 | 47.0 | 282 | 1.2449 | 0.6889 | | 0.0005 | 48.0 | 288 | 1.2449 | 0.6889 | | 0.0005 | 49.0 | 294 | 1.2449 | 0.6889 | | 0.0005 | 50.0 | 300 | 1.2449 | 0.6889 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_rms_lr00001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_rms_lr00001_fold2 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.1351 - Accuracy: 0.6889 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.2498 | 0.4444 | | 1.3042 | 2.0 | 12 | 1.0549 | 0.6222 | | 1.3042 | 3.0 | 18 | 1.0499 | 0.6444 | | 0.513 | 4.0 | 24 | 1.0344 | 0.6444 | | 0.1791 | 5.0 | 30 | 1.0697 | 0.5333 | | 0.1791 | 6.0 | 36 | 1.0207 | 0.6889 | | 0.0528 | 7.0 | 42 | 0.9620 | 0.6889 | | 0.0528 | 8.0 | 48 | 1.0015 | 0.6444 | | 0.0178 | 9.0 | 54 | 0.9911 | 0.6667 | | 0.0068 | 10.0 | 60 | 1.0253 | 0.6667 | | 0.0068 | 11.0 | 66 | 1.0141 | 0.6889 | | 0.0039 | 12.0 | 72 | 1.0366 | 0.6444 | | 0.0039 | 13.0 | 78 | 1.0348 | 0.6889 | | 0.0028 | 14.0 | 84 | 1.0325 | 0.6889 | | 0.0023 | 15.0 | 90 | 1.0525 | 0.6889 | | 0.0023 | 16.0 | 96 | 1.0555 | 0.6889 | | 0.0019 | 17.0 | 102 | 1.0728 | 0.6667 | | 0.0019 | 18.0 | 108 | 1.0773 | 0.6889 | | 0.0016 | 19.0 | 114 | 1.0791 | 0.6667 | | 0.0014 | 20.0 | 120 | 1.0906 | 0.6667 | | 0.0014 | 21.0 | 126 | 1.0852 | 0.6889 | | 0.0013 | 22.0 | 132 | 1.0885 | 0.7111 | | 
0.0013 | 23.0 | 138 | 1.1009 | 0.6889 | | 0.0012 | 24.0 | 144 | 1.1078 | 0.6889 | | 0.001 | 25.0 | 150 | 1.1057 | 0.7111 | | 0.001 | 26.0 | 156 | 1.1088 | 0.7111 | | 0.001 | 27.0 | 162 | 1.1100 | 0.7111 | | 0.001 | 28.0 | 168 | 1.1174 | 0.6889 | | 0.0009 | 29.0 | 174 | 1.1173 | 0.6889 | | 0.0009 | 30.0 | 180 | 1.1217 | 0.6889 | | 0.0009 | 31.0 | 186 | 1.1218 | 0.6889 | | 0.0008 | 32.0 | 192 | 1.1230 | 0.6889 | | 0.0008 | 33.0 | 198 | 1.1264 | 0.6889 | | 0.0008 | 34.0 | 204 | 1.1266 | 0.6889 | | 0.0008 | 35.0 | 210 | 1.1281 | 0.6889 | | 0.0008 | 36.0 | 216 | 1.1299 | 0.6889 | | 0.0007 | 37.0 | 222 | 1.1316 | 0.6889 | | 0.0007 | 38.0 | 228 | 1.1339 | 0.6889 | | 0.0007 | 39.0 | 234 | 1.1344 | 0.6889 | | 0.0007 | 40.0 | 240 | 1.1349 | 0.6889 | | 0.0007 | 41.0 | 246 | 1.1350 | 0.6889 | | 0.0007 | 42.0 | 252 | 1.1351 | 0.6889 | | 0.0007 | 43.0 | 258 | 1.1351 | 0.6889 | | 0.0007 | 44.0 | 264 | 1.1351 | 0.6889 | | 0.0007 | 45.0 | 270 | 1.1351 | 0.6889 | | 0.0007 | 46.0 | 276 | 1.1351 | 0.6889 | | 0.0007 | 47.0 | 282 | 1.1351 | 0.6889 | | 0.0007 | 48.0 | 288 | 1.1351 | 0.6889 | | 0.0007 | 49.0 | 294 | 1.1351 | 0.6889 | | 0.0007 | 50.0 | 300 | 1.1351 | 0.6889 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_rms_lr00001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_rms_lr00001_fold3 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7349 - Accuracy: 0.6977 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.2207 | 0.4419 | | 1.2147 | 2.0 | 12 | 0.9891 | 0.6047 | | 1.2147 | 3.0 | 18 | 0.7510 | 0.7209 | | 0.576 | 4.0 | 24 | 0.7741 | 0.7209 | | 0.2188 | 5.0 | 30 | 0.7926 | 0.6279 | | 0.2188 | 6.0 | 36 | 0.8648 | 0.6047 | | 0.0657 | 7.0 | 42 | 0.9083 | 0.6279 | | 0.0657 | 8.0 | 48 | 0.6744 | 0.7209 | | 0.024 | 9.0 | 54 | 0.6865 | 0.6744 | | 0.0081 | 10.0 | 60 | 0.7121 | 0.7209 | | 0.0081 | 11.0 | 66 | 0.7038 | 0.6279 | | 0.0043 | 12.0 | 72 | 0.6990 | 0.6977 | | 0.0043 | 13.0 | 78 | 0.6958 | 0.6744 | | 0.003 | 14.0 | 84 | 0.7014 | 0.6744 | | 0.0024 | 15.0 | 90 | 0.6973 | 0.6744 | | 0.0024 | 16.0 | 96 | 0.7050 | 0.6744 | | 0.002 | 17.0 | 102 | 0.7045 | 0.6512 | | 0.002 | 18.0 | 108 | 0.7008 | 0.6512 | | 0.0017 | 19.0 | 114 | 0.7130 | 0.6744 | | 0.0015 | 20.0 | 120 | 0.7143 | 0.6744 | | 0.0015 | 21.0 | 126 | 0.7112 | 0.6744 | | 0.0013 | 22.0 | 132 | 0.7160 | 0.6744 | | 0.0013 
| 23.0 | 138 | 0.7131 | 0.6744 | | 0.0012 | 24.0 | 144 | 0.7144 | 0.6744 | | 0.0011 | 25.0 | 150 | 0.7160 | 0.6744 | | 0.0011 | 26.0 | 156 | 0.7202 | 0.6977 | | 0.001 | 27.0 | 162 | 0.7225 | 0.6977 | | 0.001 | 28.0 | 168 | 0.7211 | 0.6744 | | 0.001 | 29.0 | 174 | 0.7237 | 0.6977 | | 0.0009 | 30.0 | 180 | 0.7265 | 0.6977 | | 0.0009 | 31.0 | 186 | 0.7272 | 0.6977 | | 0.0008 | 32.0 | 192 | 0.7283 | 0.6977 | | 0.0008 | 33.0 | 198 | 0.7304 | 0.6977 | | 0.0008 | 34.0 | 204 | 0.7314 | 0.6977 | | 0.0008 | 35.0 | 210 | 0.7309 | 0.6977 | | 0.0008 | 36.0 | 216 | 0.7324 | 0.6977 | | 0.0008 | 37.0 | 222 | 0.7325 | 0.6977 | | 0.0008 | 38.0 | 228 | 0.7335 | 0.6977 | | 0.0007 | 39.0 | 234 | 0.7342 | 0.6977 | | 0.0007 | 40.0 | 240 | 0.7346 | 0.6977 | | 0.0007 | 41.0 | 246 | 0.7348 | 0.6977 | | 0.0007 | 42.0 | 252 | 0.7349 | 0.6977 | | 0.0007 | 43.0 | 258 | 0.7349 | 0.6977 | | 0.0007 | 44.0 | 264 | 0.7349 | 0.6977 | | 0.0007 | 45.0 | 270 | 0.7349 | 0.6977 | | 0.0007 | 46.0 | 276 | 0.7349 | 0.6977 | | 0.0007 | 47.0 | 282 | 0.7349 | 0.6977 | | 0.0007 | 48.0 | 288 | 0.7349 | 0.6977 | | 0.0007 | 49.0 | 294 | 0.7349 | 0.6977 | | 0.0007 | 50.0 | 300 | 0.7349 | 0.6977 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_rms_lr00001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_rms_lr00001_fold4 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.6971 - Accuracy: 0.7619 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.1869 | 0.4762 | | 1.3589 | 2.0 | 12 | 1.4321 | 0.2381 | | 1.3589 | 3.0 | 18 | 0.8086 | 0.7143 | | 0.7587 | 4.0 | 24 | 0.7860 | 0.6667 | | 0.3552 | 5.0 | 30 | 0.6443 | 0.7143 | | 0.3552 | 6.0 | 36 | 0.6345 | 0.7381 | | 0.1624 | 7.0 | 42 | 0.6029 | 0.7381 | | 0.1624 | 8.0 | 48 | 0.6145 | 0.6667 | | 0.0655 | 9.0 | 54 | 0.6448 | 0.6905 | | 0.0257 | 10.0 | 60 | 0.6084 | 0.7381 | | 0.0257 | 11.0 | 66 | 0.5594 | 0.7143 | | 0.0099 | 12.0 | 72 | 0.6088 | 0.7381 | | 0.0099 | 13.0 | 78 | 0.6402 | 0.7619 | | 0.0054 | 14.0 | 84 | 0.6319 | 0.7381 | | 0.0038 | 15.0 | 90 | 0.6323 | 0.7619 | | 0.0038 | 16.0 | 96 | 0.6432 | 0.7381 | | 0.0029 | 17.0 | 102 | 0.6446 | 0.7381 | | 0.0029 | 18.0 | 108 | 0.6470 | 0.7381 | | 0.0023 | 19.0 | 114 | 0.6562 | 0.7381 | | 0.002 | 20.0 | 120 | 0.6656 | 0.7381 | | 0.002 | 21.0 | 126 | 0.6696 | 0.7381 | | 0.0017 | 22.0 | 132 | 0.6739 | 0.7381 | | 
0.0017 | 23.0 | 138 | 0.6722 | 0.7619 | | 0.0015 | 24.0 | 144 | 0.6705 | 0.7619 | | 0.0014 | 25.0 | 150 | 0.6761 | 0.7619 | | 0.0014 | 26.0 | 156 | 0.6768 | 0.7619 | | 0.0012 | 27.0 | 162 | 0.6844 | 0.7619 | | 0.0012 | 28.0 | 168 | 0.6843 | 0.7619 | | 0.0012 | 29.0 | 174 | 0.6854 | 0.7619 | | 0.0011 | 30.0 | 180 | 0.6913 | 0.7619 | | 0.0011 | 31.0 | 186 | 0.6928 | 0.7619 | | 0.001 | 32.0 | 192 | 0.6912 | 0.7619 | | 0.001 | 33.0 | 198 | 0.6912 | 0.7619 | | 0.001 | 34.0 | 204 | 0.6924 | 0.7619 | | 0.0009 | 35.0 | 210 | 0.6912 | 0.7619 | | 0.0009 | 36.0 | 216 | 0.6935 | 0.7619 | | 0.0009 | 37.0 | 222 | 0.6948 | 0.7619 | | 0.0009 | 38.0 | 228 | 0.6957 | 0.7619 | | 0.0009 | 39.0 | 234 | 0.6966 | 0.7619 | | 0.0009 | 40.0 | 240 | 0.6969 | 0.7619 | | 0.0009 | 41.0 | 246 | 0.6971 | 0.7619 | | 0.0009 | 42.0 | 252 | 0.6971 | 0.7619 | | 0.0009 | 43.0 | 258 | 0.6971 | 0.7619 | | 0.0008 | 44.0 | 264 | 0.6971 | 0.7619 | | 0.0009 | 45.0 | 270 | 0.6971 | 0.7619 | | 0.0009 | 46.0 | 276 | 0.6971 | 0.7619 | | 0.0008 | 47.0 | 282 | 0.6971 | 0.7619 | | 0.0008 | 48.0 | 288 | 0.6971 | 0.7619 | | 0.0009 | 49.0 | 294 | 0.6971 | 0.7619 | | 0.0009 | 50.0 | 300 | 0.6971 | 0.7619 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_rms_lr00001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_rms_lr00001_fold5 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8230 - Accuracy: 0.7073 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.1737 | 0.4634 | | 1.1816 | 2.0 | 12 | 0.8675 | 0.5366 | | 1.1816 | 3.0 | 18 | 0.8079 | 0.6341 | | 0.5246 | 4.0 | 24 | 0.8632 | 0.5854 | | 0.2225 | 5.0 | 30 | 0.7815 | 0.5610 | | 0.2225 | 6.0 | 36 | 0.6787 | 0.6585 | | 0.0792 | 7.0 | 42 | 0.7052 | 0.6585 | | 0.0792 | 8.0 | 48 | 0.7120 | 0.6341 | | 0.029 | 9.0 | 54 | 0.8373 | 0.6585 | | 0.0096 | 10.0 | 60 | 0.6713 | 0.7317 | | 0.0096 | 11.0 | 66 | 0.7185 | 0.7073 | | 0.0045 | 12.0 | 72 | 0.7237 | 0.6829 | | 0.0045 | 13.0 | 78 | 0.7062 | 0.6829 | | 0.0033 | 14.0 | 84 | 0.7203 | 0.7073 | | 0.0025 | 15.0 | 90 | 0.7207 | 0.7073 | | 0.0025 | 16.0 | 96 | 0.7400 | 0.7073 | | 0.002 | 17.0 | 102 | 0.7337 | 0.6829 | | 0.002 | 18.0 | 108 | 0.7527 | 0.6829 | | 0.0017 | 19.0 | 114 | 0.7553 | 0.6829 | | 0.0015 | 20.0 | 120 | 0.7631 | 0.6829 | | 0.0015 | 21.0 | 126 | 0.7684 | 0.6829 | | 0.0014 | 22.0 | 132 | 0.7730 | 0.6829 | | 
0.0014 | 23.0 | 138 | 0.7803 | 0.6829 | | 0.0012 | 24.0 | 144 | 0.7869 | 0.6829 | | 0.0011 | 25.0 | 150 | 0.7854 | 0.6829 | | 0.0011 | 26.0 | 156 | 0.7958 | 0.6829 | | 0.001 | 27.0 | 162 | 0.7899 | 0.6829 | | 0.001 | 28.0 | 168 | 0.7956 | 0.6829 | | 0.001 | 29.0 | 174 | 0.8038 | 0.6829 | | 0.0009 | 30.0 | 180 | 0.8059 | 0.6829 | | 0.0009 | 31.0 | 186 | 0.8121 | 0.6829 | | 0.0008 | 32.0 | 192 | 0.8137 | 0.6829 | | 0.0008 | 33.0 | 198 | 0.8161 | 0.6829 | | 0.0008 | 34.0 | 204 | 0.8136 | 0.6829 | | 0.0008 | 35.0 | 210 | 0.8158 | 0.6829 | | 0.0008 | 36.0 | 216 | 0.8175 | 0.7073 | | 0.0007 | 37.0 | 222 | 0.8190 | 0.7073 | | 0.0007 | 38.0 | 228 | 0.8213 | 0.7073 | | 0.0007 | 39.0 | 234 | 0.8222 | 0.7073 | | 0.0007 | 40.0 | 240 | 0.8227 | 0.7073 | | 0.0007 | 41.0 | 246 | 0.8228 | 0.7073 | | 0.0007 | 42.0 | 252 | 0.8230 | 0.7073 | | 0.0007 | 43.0 | 258 | 0.8230 | 0.7073 | | 0.0007 | 44.0 | 264 | 0.8230 | 0.7073 | | 0.0007 | 45.0 | 270 | 0.8230 | 0.7073 | | 0.0007 | 46.0 | 276 | 0.8230 | 0.7073 | | 0.0007 | 47.0 | 282 | 0.8230 | 0.7073 | | 0.0007 | 48.0 | 288 | 0.8230 | 0.7073 | | 0.0007 | 49.0 | 294 | 0.8230 | 0.7073 | | 0.0007 | 50.0 | 300 | 0.8230 | 0.7073 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_sgd_lr00001_fold1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_sgd_lr00001_fold1 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.6574 - Accuracy: 0.2667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.6633 | 0.2667 | | 1.6088 | 2.0 | 12 | 1.6630 | 0.2667 | | 1.6088 | 3.0 | 18 | 1.6627 | 0.2667 | | 1.5763 | 4.0 | 24 | 1.6624 | 0.2667 | | 1.6076 | 5.0 | 30 | 1.6621 | 0.2667 | | 1.6076 | 6.0 | 36 | 1.6618 | 0.2667 | | 1.5951 | 7.0 | 42 | 1.6616 | 0.2667 | | 1.5951 | 8.0 | 48 | 1.6613 | 0.2667 | | 1.5898 | 9.0 | 54 | 1.6611 | 0.2667 | | 1.5905 | 10.0 | 60 | 1.6609 | 0.2667 | | 1.5905 | 11.0 | 66 | 1.6606 | 0.2667 | | 1.5785 | 12.0 | 72 | 1.6604 | 0.2667 | | 1.5785 | 13.0 | 78 | 1.6602 | 0.2667 | | 1.623 | 14.0 | 84 | 1.6600 | 0.2667 | | 1.5698 | 15.0 | 90 | 1.6598 | 0.2667 | | 1.5698 | 16.0 | 96 | 1.6596 | 0.2667 | | 1.5831 | 17.0 | 102 | 1.6594 | 0.2667 | | 1.5831 | 18.0 | 108 | 1.6593 | 0.2667 | | 1.6234 | 19.0 | 114 | 1.6591 | 0.2667 | | 1.605 | 20.0 | 120 | 1.6589 | 0.2667 | | 1.605 | 21.0 | 126 | 1.6588 | 0.2667 | | 1.6023 | 22.0 | 132 | 1.6586 | 0.2667 | | 
1.6023 | 23.0 | 138 | 1.6585 | 0.2667 | | 1.5903 | 24.0 | 144 | 1.6584 | 0.2667 | | 1.5877 | 25.0 | 150 | 1.6583 | 0.2667 | | 1.5877 | 26.0 | 156 | 1.6582 | 0.2667 | | 1.5697 | 27.0 | 162 | 1.6581 | 0.2667 | | 1.5697 | 28.0 | 168 | 1.6580 | 0.2667 | | 1.6252 | 29.0 | 174 | 1.6579 | 0.2667 | | 1.6032 | 30.0 | 180 | 1.6578 | 0.2667 | | 1.6032 | 31.0 | 186 | 1.6577 | 0.2667 | | 1.6035 | 32.0 | 192 | 1.6577 | 0.2667 | | 1.6035 | 33.0 | 198 | 1.6576 | 0.2667 | | 1.5747 | 34.0 | 204 | 1.6575 | 0.2667 | | 1.5966 | 35.0 | 210 | 1.6575 | 0.2667 | | 1.5966 | 36.0 | 216 | 1.6575 | 0.2667 | | 1.5685 | 37.0 | 222 | 1.6574 | 0.2667 | | 1.5685 | 38.0 | 228 | 1.6574 | 0.2667 | | 1.5973 | 39.0 | 234 | 1.6574 | 0.2667 | | 1.5951 | 40.0 | 240 | 1.6574 | 0.2667 | | 1.5951 | 41.0 | 246 | 1.6574 | 0.2667 | | 1.5959 | 42.0 | 252 | 1.6574 | 0.2667 | | 1.5959 | 43.0 | 258 | 1.6574 | 0.2667 | | 1.6121 | 44.0 | 264 | 1.6574 | 0.2667 | | 1.5823 | 45.0 | 270 | 1.6574 | 0.2667 | | 1.5823 | 46.0 | 276 | 1.6574 | 0.2667 | | 1.5911 | 47.0 | 282 | 1.6574 | 0.2667 | | 1.5911 | 48.0 | 288 | 1.6574 | 0.2667 | | 1.6171 | 49.0 | 294 | 1.6574 | 0.2667 | | 1.5945 | 50.0 | 300 | 1.6574 | 0.2667 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_sgd_lr00001_fold2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_sgd_lr00001_fold2 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.6351 - Accuracy: 0.1333 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.6407 | 0.1333 | | 1.6149 | 2.0 | 12 | 1.6404 | 0.1333 | | 1.6149 | 3.0 | 18 | 1.6401 | 0.1333 | | 1.588 | 4.0 | 24 | 1.6398 | 0.1333 | | 1.6243 | 5.0 | 30 | 1.6396 | 0.1333 | | 1.6243 | 6.0 | 36 | 1.6393 | 0.1333 | | 1.6041 | 7.0 | 42 | 1.6390 | 0.1333 | | 1.6041 | 8.0 | 48 | 1.6388 | 0.1333 | | 1.5784 | 9.0 | 54 | 1.6386 | 0.1333 | | 1.61 | 10.0 | 60 | 1.6383 | 0.1333 | | 1.61 | 11.0 | 66 | 1.6381 | 0.1333 | | 1.5857 | 12.0 | 72 | 1.6379 | 0.1333 | | 1.5857 | 13.0 | 78 | 1.6377 | 0.1333 | | 1.6282 | 14.0 | 84 | 1.6375 | 0.1333 | | 1.5739 | 15.0 | 90 | 1.6373 | 0.1333 | | 1.5739 | 16.0 | 96 | 1.6372 | 0.1333 | | 1.5784 | 17.0 | 102 | 1.6370 | 0.1333 | | 1.5784 | 18.0 | 108 | 1.6368 | 0.1333 | | 1.6525 | 19.0 | 114 | 1.6367 | 0.1333 | | 1.5978 | 20.0 | 120 | 1.6365 | 0.1333 | | 1.5978 | 21.0 | 126 | 1.6364 | 0.1333 | | 1.6239 | 22.0 | 132 | 1.6362 | 0.1333 | | 1.6239 
| 23.0 | 138 | 1.6361 | 0.1333 | | 1.581 | 24.0 | 144 | 1.6360 | 0.1333 | | 1.597 | 25.0 | 150 | 1.6359 | 0.1333 | | 1.597 | 26.0 | 156 | 1.6358 | 0.1333 | | 1.5864 | 27.0 | 162 | 1.6357 | 0.1333 | | 1.5864 | 28.0 | 168 | 1.6356 | 0.1333 | | 1.6236 | 29.0 | 174 | 1.6355 | 0.1333 | | 1.6201 | 30.0 | 180 | 1.6354 | 0.1333 | | 1.6201 | 31.0 | 186 | 1.6354 | 0.1333 | | 1.6018 | 32.0 | 192 | 1.6353 | 0.1333 | | 1.6018 | 33.0 | 198 | 1.6352 | 0.1333 | | 1.5711 | 34.0 | 204 | 1.6352 | 0.1333 | | 1.6003 | 35.0 | 210 | 1.6352 | 0.1333 | | 1.6003 | 36.0 | 216 | 1.6351 | 0.1333 | | 1.5762 | 37.0 | 222 | 1.6351 | 0.1333 | | 1.5762 | 38.0 | 228 | 1.6351 | 0.1333 | | 1.5979 | 39.0 | 234 | 1.6351 | 0.1333 | | 1.6035 | 40.0 | 240 | 1.6351 | 0.1333 | | 1.6035 | 41.0 | 246 | 1.6351 | 0.1333 | | 1.5976 | 42.0 | 252 | 1.6351 | 0.1333 | | 1.5976 | 43.0 | 258 | 1.6351 | 0.1333 | | 1.5981 | 44.0 | 264 | 1.6351 | 0.1333 | | 1.5912 | 45.0 | 270 | 1.6351 | 0.1333 | | 1.5912 | 46.0 | 276 | 1.6351 | 0.1333 | | 1.5981 | 47.0 | 282 | 1.6351 | 0.1333 | | 1.5981 | 48.0 | 288 | 1.6351 | 0.1333 | | 1.6158 | 49.0 | 294 | 1.6351 | 0.1333 | | 1.593 | 50.0 | 300 | 1.6351 | 0.1333 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_sgd_lr00001_fold3
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_sgd_lr00001_fold3 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.5383 - Accuracy: 0.3023 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.5437 | 0.3023 | | 1.6283 | 2.0 | 12 | 1.5434 | 0.3023 | | 1.6283 | 3.0 | 18 | 1.5431 | 0.3023 | | 1.63 | 4.0 | 24 | 1.5428 | 0.3023 | | 1.6367 | 5.0 | 30 | 1.5426 | 0.3023 | | 1.6367 | 6.0 | 36 | 1.5423 | 0.3023 | | 1.6273 | 7.0 | 42 | 1.5421 | 0.3023 | | 1.6273 | 8.0 | 48 | 1.5419 | 0.3023 | | 1.6489 | 9.0 | 54 | 1.5417 | 0.3023 | | 1.5924 | 10.0 | 60 | 1.5414 | 0.3023 | | 1.5924 | 11.0 | 66 | 1.5412 | 0.3023 | | 1.6227 | 12.0 | 72 | 1.5411 | 0.3023 | | 1.6227 | 13.0 | 78 | 1.5409 | 0.3023 | | 1.6367 | 14.0 | 84 | 1.5407 | 0.3023 | | 1.622 | 15.0 | 90 | 1.5405 | 0.3023 | | 1.622 | 16.0 | 96 | 1.5403 | 0.3023 | | 1.621 | 17.0 | 102 | 1.5401 | 0.3023 | | 1.621 | 18.0 | 108 | 1.5400 | 0.3023 | | 1.6386 | 19.0 | 114 | 1.5398 | 0.3023 | | 1.6207 | 20.0 | 120 | 1.5397 | 0.3023 | | 1.6207 | 21.0 | 126 | 1.5395 | 0.3023 | | 1.6152 | 22.0 | 132 | 1.5394 | 0.3023 | | 1.6152 | 
23.0 | 138 | 1.5393 | 0.3023 | | 1.6503 | 24.0 | 144 | 1.5392 | 0.3023 | | 1.6219 | 25.0 | 150 | 1.5390 | 0.3023 | | 1.6219 | 26.0 | 156 | 1.5389 | 0.3023 | | 1.6152 | 27.0 | 162 | 1.5389 | 0.3023 | | 1.6152 | 28.0 | 168 | 1.5388 | 0.3023 | | 1.6192 | 29.0 | 174 | 1.5387 | 0.3023 | | 1.6111 | 30.0 | 180 | 1.5386 | 0.3023 | | 1.6111 | 31.0 | 186 | 1.5386 | 0.3023 | | 1.6114 | 32.0 | 192 | 1.5385 | 0.3023 | | 1.6114 | 33.0 | 198 | 1.5384 | 0.3023 | | 1.6361 | 34.0 | 204 | 1.5384 | 0.3023 | | 1.6146 | 35.0 | 210 | 1.5384 | 0.3023 | | 1.6146 | 36.0 | 216 | 1.5383 | 0.3023 | | 1.6254 | 37.0 | 222 | 1.5383 | 0.3023 | | 1.6254 | 38.0 | 228 | 1.5383 | 0.3023 | | 1.6124 | 39.0 | 234 | 1.5383 | 0.3023 | | 1.6367 | 40.0 | 240 | 1.5383 | 0.3023 | | 1.6367 | 41.0 | 246 | 1.5383 | 0.3023 | | 1.6229 | 42.0 | 252 | 1.5383 | 0.3023 | | 1.6229 | 43.0 | 258 | 1.5383 | 0.3023 | | 1.6506 | 44.0 | 264 | 1.5383 | 0.3023 | | 1.6148 | 45.0 | 270 | 1.5383 | 0.3023 | | 1.6148 | 46.0 | 276 | 1.5383 | 0.3023 | | 1.6242 | 47.0 | 282 | 1.5383 | 0.3023 | | 1.6242 | 48.0 | 288 | 1.5383 | 0.3023 | | 1.6087 | 49.0 | 294 | 1.5383 | 0.3023 | | 1.6097 | 50.0 | 300 | 1.5383 | 0.3023 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_sgd_lr00001_fold4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_sgd_lr00001_fold4 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.5565 - Accuracy: 0.1905 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.5632 | 0.1905 | | 1.6067 | 2.0 | 12 | 1.5628 | 0.1905 | | 1.6067 | 3.0 | 18 | 1.5625 | 0.1905 | | 1.6234 | 4.0 | 24 | 1.5621 | 0.1905 | | 1.6412 | 5.0 | 30 | 1.5618 | 0.1905 | | 1.6412 | 6.0 | 36 | 1.5615 | 0.1905 | | 1.6304 | 7.0 | 42 | 1.5612 | 0.1905 | | 1.6304 | 8.0 | 48 | 1.5609 | 0.1905 | | 1.6339 | 9.0 | 54 | 1.5606 | 0.1905 | | 1.6208 | 10.0 | 60 | 1.5604 | 0.1905 | | 1.6208 | 11.0 | 66 | 1.5601 | 0.1905 | | 1.599 | 12.0 | 72 | 1.5598 | 0.1905 | | 1.599 | 13.0 | 78 | 1.5596 | 0.1905 | | 1.6454 | 14.0 | 84 | 1.5594 | 0.1905 | | 1.5993 | 15.0 | 90 | 1.5591 | 0.1905 | | 1.5993 | 16.0 | 96 | 1.5589 | 0.1905 | | 1.6104 | 17.0 | 102 | 1.5587 | 0.1905 | | 1.6104 | 18.0 | 108 | 1.5585 | 0.1905 | | 1.5995 | 19.0 | 114 | 1.5584 | 0.1905 | | 1.6359 | 20.0 | 120 | 1.5582 | 0.1905 | | 1.6359 | 21.0 | 126 | 1.5580 | 0.1905 | | 1.5868 | 22.0 | 132 | 1.5579 | 0.1905 | | 
1.5868 | 23.0 | 138 | 1.5577 | 0.1905 | | 1.67 | 24.0 | 144 | 1.5576 | 0.1905 | | 1.5662 | 25.0 | 150 | 1.5575 | 0.1905 | | 1.5662 | 26.0 | 156 | 1.5573 | 0.1905 | | 1.6118 | 27.0 | 162 | 1.5572 | 0.1905 | | 1.6118 | 28.0 | 168 | 1.5571 | 0.1905 | | 1.6163 | 29.0 | 174 | 1.5570 | 0.1905 | | 1.6392 | 30.0 | 180 | 1.5569 | 0.1905 | | 1.6392 | 31.0 | 186 | 1.5568 | 0.1905 | | 1.6602 | 32.0 | 192 | 1.5568 | 0.1905 | | 1.6602 | 33.0 | 198 | 1.5567 | 0.1905 | | 1.5354 | 34.0 | 204 | 1.5567 | 0.1905 | | 1.6205 | 35.0 | 210 | 1.5566 | 0.1905 | | 1.6205 | 36.0 | 216 | 1.5566 | 0.1905 | | 1.6201 | 37.0 | 222 | 1.5565 | 0.1905 | | 1.6201 | 38.0 | 228 | 1.5565 | 0.1905 | | 1.5923 | 39.0 | 234 | 1.5565 | 0.1905 | | 1.6521 | 40.0 | 240 | 1.5565 | 0.1905 | | 1.6521 | 41.0 | 246 | 1.5565 | 0.1905 | | 1.6177 | 42.0 | 252 | 1.5565 | 0.1905 | | 1.6177 | 43.0 | 258 | 1.5565 | 0.1905 | | 1.6437 | 44.0 | 264 | 1.5565 | 0.1905 | | 1.5948 | 45.0 | 270 | 1.5565 | 0.1905 | | 1.5948 | 46.0 | 276 | 1.5565 | 0.1905 | | 1.6236 | 47.0 | 282 | 1.5565 | 0.1905 | | 1.6236 | 48.0 | 288 | 1.5565 | 0.1905 | | 1.6168 | 49.0 | 294 | 1.5565 | 0.1905 | | 1.6032 | 50.0 | 300 | 1.5565 | 0.1905 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
hkivancoral/hushem_1x_deit_tiny_sgd_lr00001_fold5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hushem_1x_deit_tiny_sgd_lr00001_fold5 This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.6421 - Accuracy: 0.1220 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 6 | 1.6490 | 0.1220 | | 1.6062 | 2.0 | 12 | 1.6487 | 0.1220 | | 1.6062 | 3.0 | 18 | 1.6483 | 0.1220 | | 1.6229 | 4.0 | 24 | 1.6480 | 0.1220 | | 1.5995 | 5.0 | 30 | 1.6477 | 0.1220 | | 1.5995 | 6.0 | 36 | 1.6474 | 0.1220 | | 1.5906 | 7.0 | 42 | 1.6470 | 0.1220 | | 1.5906 | 8.0 | 48 | 1.6468 | 0.1220 | | 1.609 | 9.0 | 54 | 1.6465 | 0.1220 | | 1.6018 | 10.0 | 60 | 1.6462 | 0.1220 | | 1.6018 | 11.0 | 66 | 1.6459 | 0.1220 | | 1.5944 | 12.0 | 72 | 1.6457 | 0.1220 | | 1.5944 | 13.0 | 78 | 1.6454 | 0.1220 | | 1.6013 | 14.0 | 84 | 1.6452 | 0.1220 | | 1.5987 | 15.0 | 90 | 1.6449 | 0.1220 | | 1.5987 | 16.0 | 96 | 1.6447 | 0.1220 | | 1.5899 | 17.0 | 102 | 1.6445 | 0.1220 | | 1.5899 | 18.0 | 108 | 1.6443 | 0.1220 | | 1.626 | 19.0 | 114 | 1.6441 | 0.1220 | | 1.5972 | 20.0 | 120 | 1.6439 | 0.1220 | | 1.5972 | 21.0 | 126 | 1.6437 | 0.1220 | | 1.5649 | 22.0 | 132 | 1.6436 | 0.1220 | | 
1.5649 | 23.0 | 138 | 1.6434 | 0.1220 | | 1.6699 | 24.0 | 144 | 1.6433 | 0.1220 | | 1.5696 | 25.0 | 150 | 1.6431 | 0.1220 | | 1.5696 | 26.0 | 156 | 1.6430 | 0.1220 | | 1.5743 | 27.0 | 162 | 1.6429 | 0.1220 | | 1.5743 | 28.0 | 168 | 1.6427 | 0.1220 | | 1.6236 | 29.0 | 174 | 1.6426 | 0.1220 | | 1.5936 | 30.0 | 180 | 1.6426 | 0.1220 | | 1.5936 | 31.0 | 186 | 1.6425 | 0.1220 | | 1.5875 | 32.0 | 192 | 1.6424 | 0.1220 | | 1.5875 | 33.0 | 198 | 1.6423 | 0.1220 | | 1.6171 | 34.0 | 204 | 1.6423 | 0.1220 | | 1.5897 | 35.0 | 210 | 1.6422 | 0.1220 | | 1.5897 | 36.0 | 216 | 1.6422 | 0.1220 | | 1.5725 | 37.0 | 222 | 1.6421 | 0.1220 | | 1.5725 | 38.0 | 228 | 1.6421 | 0.1220 | | 1.6227 | 39.0 | 234 | 1.6421 | 0.1220 | | 1.5924 | 40.0 | 240 | 1.6421 | 0.1220 | | 1.5924 | 41.0 | 246 | 1.6421 | 0.1220 | | 1.5811 | 42.0 | 252 | 1.6421 | 0.1220 | | 1.5811 | 43.0 | 258 | 1.6421 | 0.1220 | | 1.6072 | 44.0 | 264 | 1.6421 | 0.1220 | | 1.5938 | 45.0 | 270 | 1.6421 | 0.1220 | | 1.5938 | 46.0 | 276 | 1.6421 | 0.1220 | | 1.6243 | 47.0 | 282 | 1.6421 | 0.1220 | | 1.6243 | 48.0 | 288 | 1.6421 | 0.1220 | | 1.5633 | 49.0 | 294 | 1.6421 | 0.1220 | | 1.6091 | 50.0 | 300 | 1.6421 | 0.1220 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "01_normal", "02_tapered", "03_pyriform", "04_amorphous" ]
zbbg1111/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0170 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.8 | 3 | 0.1346 | 1.0 | | No log | 1.87 | 7 | 0.0209 | 1.0 | | No log | 2.4 | 9 | 0.0170 | 1.0 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cpu - Datasets 2.14.6 - Tokenizers 0.14.1
[ "oilstorage", "residential" ]
anirudhmu/swin-tiny-patch4-window7-224-finetuned-soccer-binary
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-soccer-binary This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1138 - Accuracy: 0.9714 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1286 | 0.96 | 12 | 0.1138 | 0.9714 | | 0.1267 | 2.0 | 25 | 0.1283 | 0.9657 | | 0.121 | 2.96 | 37 | 0.1124 | 0.9657 | | 0.1142 | 4.0 | 50 | 0.1151 | 0.9657 | | 0.1069 | 4.96 | 62 | 0.1063 | 0.96 | | 0.1038 | 6.0 | 75 | 0.1210 | 0.96 | | 0.0935 | 6.96 | 87 | 0.1150 | 0.96 | | 0.1042 | 8.0 | 100 | 0.1038 | 0.9657 | | 0.0945 | 8.96 | 112 | 0.1071 | 0.96 | | 0.0891 | 9.6 | 120 | 0.1077 | 0.96 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "closeup", "overview" ]
dwiedarioo/vit-base-patch16-224-in21k-finalmultibrainmri
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # dwiedarioo/vit-base-patch16-224-in21k-finalmultibrainmri This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1240 - Train Accuracy: 0.9989 - Train Top-3-accuracy: 1.0 - Validation Loss: 0.2638 - Validation Accuracy: 0.9568 - Validation Top-3-accuracy: 0.9892 - Epoch: 10 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'module': 'transformers.optimization_tf', 'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 8200, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.8999999761581421, 'beta_2': 0.9990000128746033, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}, 'registered_name': 'AdamWeightDecay'}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16 ### Training results | Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch | |:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:| | 2.2501 | 0.3937 | 0.6346 | 1.8763 | 0.5551 | 0.8035 | 0 | | 1.5448 | 0.6808 | 0.8732 | 1.3666 | 0.7127 | 0.8812 | 1 | | 1.0471 | 0.8324 
| 0.9439 | 0.9732 | 0.8402 | 0.9568 | 2 | | 0.7074 | 0.9385 | 0.9828 | 0.7078 | 0.9266 | 0.9849 | 3 | | 0.4854 | 0.9748 | 0.9924 | 0.5190 | 0.9374 | 0.9892 | 4 | | 0.3465 | 0.9905 | 0.9962 | 0.4126 | 0.9482 | 0.9935 | 5 | | 0.2571 | 0.9950 | 0.9981 | 0.3267 | 0.9719 | 0.9957 | 6 | | 0.2031 | 0.9962 | 0.9992 | 0.2788 | 0.9741 | 0.9957 | 7 | | 0.1667 | 0.9985 | 1.0 | 0.2484 | 0.9698 | 0.9957 | 8 | | 0.1398 | 0.9992 | 1.0 | 0.2225 | 0.9719 | 0.9957 | 9 | | 0.1240 | 0.9989 | 1.0 | 0.2638 | 0.9568 | 0.9892 | 10 | ### Framework versions - Transformers 4.35.0 - TensorFlow 2.14.0 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "schwannoma", "germinoma", "tuberculoma", "oligodendroglioma", "meningioma", "_normal", "ganglioglioma", "glioblastoma", "meduloblastoma", "neurocitoma", "papiloma", "astrocitoma", "carcinoma", "ependimoma", "granuloma" ]
arieg/4_100_s_clr
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # arieg/4_100_s_clr This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0378 - Validation Loss: 0.0380 - Train Accuracy: 1.0 - Epoch: 19 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'clipnorm': 1.0, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 7200, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.9829 | 0.7003 | 0.875 | 0 | | 0.5404 | 0.3962 | 0.975 | 1 | | 0.3221 | 0.2131 | 0.975 | 2 | | 0.2120 | 0.1755 | 1.0 | 3 | | 0.1496 | 0.1308 | 1.0 | 4 | | 0.1181 | 0.1103 | 1.0 | 5 | | 0.0998 | 0.0973 | 1.0 | 6 | | 0.0878 | 0.0845 | 1.0 | 7 | | 0.0790 | 0.0793 | 1.0 | 8 | | 0.0721 | 0.0709 | 1.0 | 9 | | 0.0665 | 0.0657 | 1.0 | 10 | | 0.0614 | 0.0602 | 1.0 | 11 | | 0.0571 | 0.0565 | 1.0 | 12 | | 0.0534 | 0.0538 | 1.0 | 13 | | 0.0501 | 0.0499 | 1.0 | 14 | | 0.0472 | 0.0473 | 1.0 | 15 | | 0.0445 | 0.0445 | 1.0 | 16 | | 0.0421 | 0.0423 | 1.0 | 17 | | 0.0398 | 0.0397 | 1.0 | 18 | | 0.0378 | 0.0380 | 1.0 | 19 | ### Framework versions - Transformers 
4.35.0 - TensorFlow 2.14.0 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "10", "140", "2", "5" ]
arieg/4_00_s_200
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # arieg/4_100_s_200 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0155 - Validation Loss: 0.0151 - Train Accuracy: 1.0 - Epoch: 19 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'clipnorm': 1.0, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 14400, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.6483 | 0.2667 | 1.0 | 0 | | 0.1768 | 0.1322 | 1.0 | 1 | | 0.1096 | 0.0960 | 1.0 | 2 | | 0.0850 | 0.0781 | 1.0 | 3 | | 0.0710 | 0.0663 | 1.0 | 4 | | 0.0612 | 0.0576 | 1.0 | 5 | | 0.0534 | 0.0506 | 1.0 | 6 | | 0.0472 | 0.0448 | 1.0 | 7 | | 0.0420 | 0.0400 | 1.0 | 8 | | 0.0376 | 0.0359 | 1.0 | 9 | | 0.0339 | 0.0324 | 1.0 | 10 | | 0.0306 | 0.0294 | 1.0 | 11 | | 0.0278 | 0.0267 | 1.0 | 12 | | 0.0253 | 0.0244 | 1.0 | 13 | | 0.0232 | 0.0223 | 1.0 | 14 | | 0.0212 | 0.0205 | 1.0 | 15 | | 0.0196 | 0.0189 | 1.0 | 16 | | 0.0180 | 0.0175 | 1.0 | 17 | | 0.0167 | 0.0162 | 1.0 | 18 | | 0.0155 | 0.0151 | 1.0 | 19 | ### Framework versions - Transformers 4.35.0 
- TensorFlow 2.14.0 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "10", "140", "2", "5" ]
arieg/4_01_s_200
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # arieg/4_01_s_200 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0156 - Validation Loss: 0.0151 - Train Accuracy: 1.0 - Epoch: 19 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'clipnorm': 1.0, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 14400, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.7193 | 0.2997 | 1.0 | 0 | | 0.2007 | 0.1391 | 1.0 | 1 | | 0.1164 | 0.0981 | 1.0 | 2 | | 0.0881 | 0.0788 | 1.0 | 3 | | 0.0724 | 0.0664 | 1.0 | 4 | | 0.0618 | 0.0573 | 1.0 | 5 | | 0.0537 | 0.0502 | 1.0 | 6 | | 0.0474 | 0.0445 | 1.0 | 7 | | 0.0421 | 0.0397 | 1.0 | 8 | | 0.0377 | 0.0357 | 1.0 | 9 | | 0.0339 | 0.0322 | 1.0 | 10 | | 0.0307 | 0.0292 | 1.0 | 11 | | 0.0279 | 0.0266 | 1.0 | 12 | | 0.0254 | 0.0243 | 1.0 | 13 | | 0.0233 | 0.0223 | 1.0 | 14 | | 0.0214 | 0.0205 | 1.0 | 15 | | 0.0197 | 0.0189 | 1.0 | 16 | | 0.0182 | 0.0175 | 1.0 | 17 | | 0.0168 | 0.0162 | 1.0 | 18 | | 0.0156 | 0.0151 | 1.0 | 19 | ### Framework versions - Transformers 4.35.0 
- TensorFlow 2.14.0 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "141", "190", "193", "194" ]
dima806/vehicle_10_types_image_detection
Returns a vehicle type probability based on an input image, with about 93% accuracy. See https://www.kaggle.com/code/dima806/vehicle-10-types-detection-vit for more details.

```
Classification report:
              precision    recall  f1-score   support

         SUV     0.8780    0.9000    0.8889        40
         bus     1.0000    1.0000    1.0000        40
family sedan     0.8571    0.9000    0.8780        40
 fire engine     0.8444    0.9500    0.8941        40
 heavy truck     0.9459    0.8750    0.9091        40
        jeep     0.9512    0.9750    0.9630        40
     minibus     0.9500    0.9500    0.9500        40
  racing car     1.0000    0.9500    0.9744        40
        taxi     0.9750    0.9750    0.9750        40
       truck     0.9722    0.8750    0.9211        40

    accuracy                         0.9350       400
   macro avg     0.9374    0.9350    0.9354       400
weighted avg     0.9374    0.9350    0.9354       400
```
[ "suv", "bus", "family sedan", "fire engine", "heavy truck", "jeep", "minibus", "racing car", "taxi", "truck" ]
ArtificialMargoles/vit-base-patch16-224-finetuned-flower
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-finetuned-flower This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.24.0 - Pytorch 2.1.0+cu118 - Datasets 2.7.1 - Tokenizers 0.13.3
[ "daisy", "dandelion", "roses", "sunflowers", "tulips" ]
dwiedarioo/vit-base-patch16-224-in21k-final2multibrainmri
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# dwiedarioo/vit-base-patch16-224-in21k-final2multibrainmri

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0072
- Train Accuracy: 1.0
- Train Top-3-accuracy: 1.0
- Validation Loss: 0.1111
- Validation Accuracy: 0.9719
- Validation Top-3-accuracy: 0.9914
- Epoch: 49

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'module': 'transformers.optimization_tf', 'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 8200, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.8999999761581421, 'beta_2': 0.9990000128746033, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}, 'registered_name': 'AdamWeightDecay'}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16

### Training results

| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 2.2742 | 0.3856 | 0.6522 | 1.8596 | 0.6112 | 0.8337 | 0 |
| 1.5673 | 0.6919 | 0.8778 | 1.3120 | 0.7883 | 0.9136 | 1 |
| 1.0377 | 0.8622 | 0.9576 | 0.9078 | 0.8661 | 0.9611 | 2 |
| 0.6816 | 0.9511 | 0.9859 | 0.6497 | 0.9222 | 0.9849 | 3 |
| 0.4698 | 0.9805 | 0.9939 | 0.5104 | 0.9395 | 0.9870 | 4 |
| 0.3375 | 0.9897 | 0.9973 | 0.3975 | 0.9590 | 0.9892 | 5 |
| 0.2554 | 0.9966 | 0.9992 | 0.3107 | 0.9676 | 0.9978 | 6 |
| 0.2346 | 0.9905 | 0.9992 | 0.3804 | 0.9287 | 0.9914 | 7 |
| 0.1976 | 0.9935 | 0.9989 | 0.3250 | 0.9546 | 0.9914 | 8 |
| 0.1686 | 0.9939 | 0.9992 | 0.4980 | 0.8920 | 0.9762 | 9 |
| 0.1423 | 0.9969 | 0.9996 | 0.2129 | 0.9654 | 0.9957 | 10 |
| 0.1073 | 0.9992 | 1.0 | 0.1840 | 0.9741 | 0.9978 | 11 |
| 0.0925 | 0.9992 | 1.0 | 0.1714 | 0.9719 | 0.9978 | 12 |
| 0.0809 | 0.9992 | 1.0 | 0.1595 | 0.9719 | 0.9978 | 13 |
| 0.0715 | 0.9992 | 1.0 | 0.1503 | 0.9719 | 0.9978 | 14 |
| 0.0637 | 1.0 | 1.0 | 0.1426 | 0.9762 | 0.9978 | 15 |
| 0.0573 | 0.9996 | 1.0 | 0.1361 | 0.9784 | 0.9978 | 16 |
| 0.0516 | 1.0 | 1.0 | 0.1325 | 0.9784 | 0.9957 | 17 |
| 0.0469 | 1.0 | 1.0 | 0.1279 | 0.9784 | 0.9957 | 18 |
| 0.0427 | 1.0 | 1.0 | 0.1248 | 0.9784 | 0.9957 | 19 |
| 0.0392 | 1.0 | 1.0 | 0.1224 | 0.9784 | 0.9957 | 20 |
| 0.0359 | 1.0 | 1.0 | 0.1191 | 0.9784 | 0.9957 | 21 |
| 0.0331 | 1.0 | 1.0 | 0.1178 | 0.9762 | 0.9914 | 22 |
| 0.0306 | 1.0 | 1.0 | 0.1162 | 0.9784 | 0.9957 | 23 |
| 0.0284 | 1.0 | 1.0 | 0.1144 | 0.9784 | 0.9957 | 24 |
| 0.0264 | 1.0 | 1.0 | 0.1143 | 0.9741 | 0.9957 | 25 |
| 0.0246 | 1.0 | 1.0 | 0.1126 | 0.9762 | 0.9957 | 26 |
| 0.0230 | 1.0 | 1.0 | 0.1104 | 0.9784 | 0.9957 | 27 |
| 0.0215 | 1.0 | 1.0 | 0.1110 | 0.9762 | 0.9935 | 28 |
| 0.0201 | 1.0 | 1.0 | 0.1091 | 0.9762 | 0.9957 | 29 |
| 0.0189 | 1.0 | 1.0 | 0.1101 | 0.9741 | 0.9957 | 30 |
| 0.0178 | 1.0 | 1.0 | 0.1099 | 0.9762 | 0.9914 | 31 |
| 0.0167 | 1.0 | 1.0 | 0.1091 | 0.9762 | 0.9935 | 32 |
| 0.0158 | 1.0 | 1.0 | 0.1091 | 0.9762 | 0.9914 | 33 |
| 0.0149 | 1.0 | 1.0 | 0.1094 | 0.9741 | 0.9914 | 34 |
| 0.0141 | 1.0 | 1.0 | 0.1088 | 0.9719 | 0.9914 | 35 |
| 0.0134 | 1.0 | 1.0 | 0.1089 | 0.9762 | 0.9914 | 36 |
| 0.0127 | 1.0 | 1.0 | 0.1084 | 0.9741 | 0.9935 | 37 |
| 0.0120 | 1.0 | 1.0 | 0.1087 | 0.9741 | 0.9914 | 38 |
| 0.0114 | 1.0 | 1.0 | 0.1078 | 0.9741 | 0.9914 | 39 |
| 0.0109 | 1.0 | 1.0 | 0.1088 | 0.9719 | 0.9914 | 40 |
| 0.0104 | 1.0 | 1.0 | 0.1087 | 0.9719 | 0.9914 | 41 |
| 0.0099 | 1.0 | 1.0 | 0.1094 | 0.9719 | 0.9935 | 42 |
| 0.0094 | 1.0 | 1.0 | 0.1095 | 0.9719 | 0.9914 | 43 |
| 0.0090 | 1.0 | 1.0 | 0.1099 | 0.9719 | 0.9914 | 44 |
| 0.0086 | 1.0 | 1.0 | 0.1112 | 0.9719 | 0.9914 | 45 |
| 0.0082 | 1.0 | 1.0 | 0.1104 | 0.9719 | 0.9914 | 46 |
| 0.0079 | 1.0 | 1.0 | 0.1107 | 0.9719 | 0.9914 | 47 |
| 0.0075 | 1.0 | 1.0 | 0.1102 | 0.9741 | 0.9914 | 48 |
| 0.0072 | 1.0 | 1.0 | 0.1111 | 0.9719 | 0.9914 | 49 |

### Framework versions

- Transformers 4.35.0
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
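The card above reports Top-3-accuracy alongside plain accuracy. As a small illustration of what that metric measures (plain Python, not the Keras metric used during training), a prediction counts as a hit if the true class is among the three highest-scoring classes:

```python
def top_k_accuracy(scores, labels, k=3):
    """Fraction of examples whose true label is among the k highest-scoring classes."""
    hits = 0
    for row, label in zip(scores, labels):
        # Class indices sorted by descending score; keep the first k.
        top_k = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        hits += label in top_k
    return hits / len(labels)

# Toy scores over 4 classes for 3 examples (hypothetical values, not model output).
scores = [[0.1, 0.5, 0.2, 0.2],
          [0.7, 0.1, 0.1, 0.1],
          [0.2, 0.3, 0.4, 0.1]]
labels = [2, 3, 0]
print(top_k_accuracy(scores, labels, k=3))  # 2 of 3 true labels land in the top 3
```

With `k=1` this reduces to ordinary accuracy, which is why top-3 is always at least as high as the plain accuracy column.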
[ "schwannoma", "germinoma", "tuberculoma", "oligodendroglioma", "meningioma", "_normal", "ganglioglioma", "glioblastoma", "meduloblastoma", "neurocitoma", "papiloma", "astrocitoma", "carcinoma", "ependimoma", "granuloma" ]
arieg/bw_spec_cls_4_01_noise_200
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# arieg/bw_spec_cls_4_01_noise_200

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0370
- Train Categorical Accuracy: 0.2486
- Validation Loss: 0.0349
- Validation Categorical Accuracy: 0.2625
- Epoch: 9

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'clipnorm': 1.0, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 7200, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32

### Training results

| Train Loss | Train Categorical Accuracy | Validation Loss | Validation Categorical Accuracy | Epoch |
|:----------:|:--------------------------:|:---------------:|:-------------------------------:|:-----:|
| 0.6021 | 0.2458 | 0.2372 | 0.2625 | 0 |
| 0.1654 | 0.2486 | 0.1210 | 0.2625 | 1 |
| 0.1042 | 0.2486 | 0.0902 | 0.2625 | 2 |
| 0.0819 | 0.2486 | 0.0741 | 0.2625 | 3 |
| 0.0688 | 0.2486 | 0.0634 | 0.2625 | 4 |
| 0.0595 | 0.2486 | 0.0553 | 0.2625 | 5 |
| 0.0522 | 0.2486 | 0.0488 | 0.2625 | 6 |
| 0.0462 | 0.2486 | 0.0434 | 0.2625 | 7 |
| 0.0412 | 0.2486 | 0.0388 | 0.2625 | 8 |
| 0.0370 | 0.2486 | 0.0349 | 0.2625 | 9 |

### Framework versions

- Transformers 4.35.0
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
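The optimizer config above wraps a `PolynomialDecay` learning-rate schedule with `initial_learning_rate` 3e-05, `decay_steps` 7200, `end_learning_rate` 0.0, and `power` 1.0, which makes it a plain linear ramp down to zero. A rough sketch of the formula (a minimal re-implementation for illustration, not the Keras code itself):

```python
def polynomial_decay(step, initial_lr=3e-05, decay_steps=7200,
                     end_lr=0.0, power=1.0):
    """Polynomial decay with cycle=False: the step is clamped to
    decay_steps, then the LR is interpolated from initial_lr to end_lr."""
    step = min(step, decay_steps)
    fraction = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * fraction ** power + end_lr

print(polynomial_decay(0))     # 3e-05 at the start
print(polynomial_decay(3600))  # halfway through decay: 1.5e-05
print(polynomial_decay(7200))  # 0.0 at the end (and stays 0 afterwards)
```

Because `power=1.0` and `end_lr=0.0`, this behaves like the `linear` scheduler used by the PyTorch Trainer cards elsewhere in this dump.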
[ "141", "190", "193", "194" ]
parisapouya/vit-base-beans
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-beans

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0146
- Accuracy: 1.0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1021 | 1.54 | 100 | 0.0688 | 0.9774 |
| 0.0438 | 3.08 | 200 | 0.0146 | 1.0 |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
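This checkpoint predicts one of the three beans classes listed with the card. A minimal sketch of argmax decoding from a logit vector to a label name (hypothetical logits, plain Python rather than the `transformers` image-classification pipeline):

```python
labels = ["angular_leaf_spot", "bean_rust", "healthy"]
id2label = {i: name for i, name in enumerate(labels)}

def predict_label(logits):
    """Return the label whose logit is largest (argmax decoding)."""
    best = max(range(len(logits)), key=lambda i: logits[i])
    return id2label[best]

print(predict_label([0.2, 2.9, -1.1]))  # bean_rust
```

In practice the same `id2label` mapping is stored in the checkpoint's config, so the pipeline performs this lookup automatically.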
[ "angular_leaf_spot", "bean_rust", "healthy" ]
ger99/ger-vit-model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# ger-vit-model

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0070
- Accuracy: 1.0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1419 | 3.85 | 500 | 0.0070 | 1.0 |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
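The single logged row lets you back out the training-set size from the hyperparameters: step 500 at epoch 3.85 implies roughly 130 optimizer steps per epoch, and with train_batch_size 8 that is about 1040 training examples. A quick sanity check (arithmetic only; the exact dataset size is not stated in the card):

```python
def steps_per_epoch(step, epoch):
    """Infer optimizer steps per epoch from a (step, epoch) log entry."""
    return step / epoch

spe = steps_per_epoch(500, 3.85)
print(round(spe))      # ~130 steps per epoch
print(round(spe) * 8)  # ~1040 examples at train_batch_size=8
```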
[ "angular_leaf_spot", "bean_rust", "healthy" ]