| model_id | model_card | model_labels |
|---|---|---|
jordyvl/dit-base_tobacco-tiny_tobacco3482_kd_CEKD_t2.5_a0.5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-base_tobacco-tiny_tobacco3482_kd_CEKD_t2.5_a0.5
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the Tobacco3482 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6206
- Accuracy: 0.825
- Brier Loss: 0.2570
- NLL: 0.9939
- F1 Micro: 0.825
- F1 Macro: 0.8166
- ECE: 0.1370
- AURC: 0.0444
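For reference, the multiclass Brier loss reported above is the squared distance between the predicted probability vector and the one-hot label, averaged over samples. A minimal NumPy sketch (the sum-over-classes convention is an assumption; the evaluation code may differ):

```python
import numpy as np

def brier_loss(probs: np.ndarray, labels: np.ndarray) -> float:
    """Mean over samples of the squared error between predicted
    probabilities and the one-hot target distribution."""
    n, k = probs.shape
    onehot = np.eye(k)[labels]
    # Sum squared error over classes, then average over samples.
    return float(np.mean(np.sum((probs - onehot) ** 2, axis=1)))

# A perfectly confident, correct prediction scores 0.
probs = np.array([[1.0, 0.0], [0.0, 1.0]])
labels = np.array([0, 1])
print(brier_loss(probs, labels))  # → 0.0
```

Under this convention the loss ranges from 0 (perfect) to 2 (confidently wrong), so the ~0.26 above sits well toward the calibrated end.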
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
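The linear scheduler with `lr_scheduler_warmup_ratio: 0.1` ramps the learning rate from 0 to the peak over the first 10% of steps, then decays it linearly back to 0. A rough sketch of that schedule (not the exact Trainer implementation):

```python
def linear_warmup_lr(step: int, total_steps: int, peak_lr: float = 1e-4,
                     warmup_ratio: float = 0.1) -> float:
    """Linear warmup to peak_lr, then linear decay to zero."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    # Linear decay over the remaining steps.
    return peak_lr * (total_steps - step) / (total_steps - warmup_steps)

# 700 total steps (7 steps/epoch x 100 epochs, as in the table below).
print(linear_warmup_lr(70, 700))   # peak reached at end of warmup: 0.0001
print(linear_warmup_lr(700, 700))  # end of training: 0.0
```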
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | NLL | F1 Micro | F1 Macro | ECE | AURC |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 4.0511 | 0.22 | 0.9320 | 7.5162 | 0.22 | 0.0792 | 0.3104 | 0.7615 |
| No log | 2.0 | 14 | 3.4353 | 0.35 | 0.8214 | 5.1797 | 0.35 | 0.2316 | 0.2589 | 0.6234 |
| No log | 3.0 | 21 | 2.6406 | 0.47 | 0.6828 | 2.9202 | 0.47 | 0.3725 | 0.2781 | 0.3277 |
| No log | 4.0 | 28 | 2.0027 | 0.57 | 0.5596 | 1.8624 | 0.57 | 0.4971 | 0.2526 | 0.2124 |
| No log | 5.0 | 35 | 1.5018 | 0.65 | 0.4518 | 1.7094 | 0.65 | 0.6128 | 0.2242 | 0.1396 |
| No log | 6.0 | 42 | 1.3904 | 0.71 | 0.4105 | 1.8765 | 0.7100 | 0.7058 | 0.2202 | 0.1126 |
| No log | 7.0 | 49 | 1.1226 | 0.76 | 0.3558 | 1.8024 | 0.76 | 0.7029 | 0.1815 | 0.0841 |
| No log | 8.0 | 56 | 1.1810 | 0.73 | 0.3716 | 1.5642 | 0.7300 | 0.7027 | 0.1956 | 0.0834 |
| No log | 9.0 | 63 | 1.2131 | 0.73 | 0.3811 | 1.7544 | 0.7300 | 0.6774 | 0.2070 | 0.0872 |
| No log | 10.0 | 70 | 1.3986 | 0.72 | 0.4043 | 2.0161 | 0.72 | 0.7259 | 0.2021 | 0.1098 |
| No log | 11.0 | 77 | 1.1001 | 0.765 | 0.3202 | 1.9113 | 0.765 | 0.7578 | 0.1859 | 0.0678 |
| No log | 12.0 | 84 | 1.0429 | 0.77 | 0.3487 | 1.2955 | 0.7700 | 0.7663 | 0.1910 | 0.0827 |
| No log | 13.0 | 91 | 0.9864 | 0.77 | 0.3227 | 1.3721 | 0.7700 | 0.7734 | 0.1692 | 0.0710 |
| No log | 14.0 | 98 | 1.0068 | 0.74 | 0.3581 | 1.3362 | 0.74 | 0.7271 | 0.1848 | 0.0804 |
| No log | 15.0 | 105 | 0.8635 | 0.795 | 0.3009 | 1.4785 | 0.795 | 0.7810 | 0.1646 | 0.0538 |
| No log | 16.0 | 112 | 0.8157 | 0.81 | 0.2845 | 1.2525 | 0.81 | 0.7931 | 0.1545 | 0.0519 |
| No log | 17.0 | 119 | 0.8616 | 0.78 | 0.3186 | 1.4230 | 0.78 | 0.7705 | 0.1610 | 0.0647 |
| No log | 18.0 | 126 | 0.8034 | 0.8 | 0.2784 | 1.4410 | 0.8000 | 0.7811 | 0.1576 | 0.0489 |
| No log | 19.0 | 133 | 0.7601 | 0.805 | 0.2697 | 1.2885 | 0.805 | 0.7823 | 0.1499 | 0.0494 |
| No log | 20.0 | 140 | 0.7598 | 0.82 | 0.2709 | 1.3643 | 0.82 | 0.8090 | 0.1542 | 0.0516 |
| No log | 21.0 | 147 | 0.8221 | 0.79 | 0.2905 | 1.4031 | 0.79 | 0.7640 | 0.1612 | 0.0585 |
| No log | 22.0 | 154 | 0.7271 | 0.825 | 0.2599 | 1.0950 | 0.825 | 0.8147 | 0.1381 | 0.0454 |
| No log | 23.0 | 161 | 0.7556 | 0.795 | 0.2891 | 1.1111 | 0.795 | 0.7822 | 0.1413 | 0.0558 |
| No log | 24.0 | 168 | 0.7197 | 0.81 | 0.2759 | 1.1361 | 0.81 | 0.7905 | 0.1617 | 0.0500 |
| No log | 25.0 | 175 | 0.7192 | 0.83 | 0.2620 | 1.3395 | 0.83 | 0.8155 | 0.1459 | 0.0433 |
| No log | 26.0 | 182 | 0.7347 | 0.805 | 0.2821 | 1.1396 | 0.805 | 0.7868 | 0.1512 | 0.0541 |
| No log | 27.0 | 189 | 0.7402 | 0.815 | 0.2805 | 1.3562 | 0.815 | 0.7928 | 0.1489 | 0.0519 |
| No log | 28.0 | 196 | 0.6986 | 0.815 | 0.2562 | 1.1454 | 0.815 | 0.7944 | 0.1467 | 0.0443 |
| No log | 29.0 | 203 | 0.7148 | 0.81 | 0.2718 | 1.1404 | 0.81 | 0.7944 | 0.1440 | 0.0513 |
| No log | 30.0 | 210 | 0.7041 | 0.81 | 0.2796 | 1.3773 | 0.81 | 0.7998 | 0.1484 | 0.0494 |
| No log | 31.0 | 217 | 0.7428 | 0.815 | 0.2823 | 1.1146 | 0.815 | 0.7967 | 0.1626 | 0.0542 |
| No log | 32.0 | 224 | 0.6941 | 0.82 | 0.2682 | 1.1921 | 0.82 | 0.8098 | 0.1427 | 0.0478 |
| No log | 33.0 | 231 | 0.7170 | 0.81 | 0.2794 | 1.2244 | 0.81 | 0.7875 | 0.1407 | 0.0511 |
| No log | 34.0 | 238 | 0.7024 | 0.815 | 0.2805 | 1.0423 | 0.815 | 0.8043 | 0.1560 | 0.0512 |
| No log | 35.0 | 245 | 0.7299 | 0.81 | 0.2710 | 1.1835 | 0.81 | 0.7964 | 0.1475 | 0.0530 |
| No log | 36.0 | 252 | 0.6488 | 0.83 | 0.2500 | 1.1662 | 0.83 | 0.8117 | 0.1315 | 0.0431 |
| No log | 37.0 | 259 | 0.6877 | 0.815 | 0.2751 | 1.0878 | 0.815 | 0.7973 | 0.1381 | 0.0489 |
| No log | 38.0 | 266 | 0.7019 | 0.84 | 0.2620 | 1.2709 | 0.8400 | 0.8282 | 0.1607 | 0.0498 |
| No log | 39.0 | 273 | 0.6687 | 0.81 | 0.2680 | 1.3004 | 0.81 | 0.7959 | 0.1346 | 0.0465 |
| No log | 40.0 | 280 | 0.6813 | 0.81 | 0.2809 | 1.0539 | 0.81 | 0.7929 | 0.1628 | 0.0500 |
| No log | 41.0 | 287 | 0.6525 | 0.83 | 0.2493 | 1.1496 | 0.83 | 0.8176 | 0.1413 | 0.0437 |
| No log | 42.0 | 294 | 0.6526 | 0.835 | 0.2547 | 1.2429 | 0.835 | 0.8253 | 0.1420 | 0.0450 |
| No log | 43.0 | 301 | 0.6696 | 0.82 | 0.2717 | 1.0446 | 0.82 | 0.8118 | 0.1486 | 0.0501 |
| No log | 44.0 | 308 | 0.6555 | 0.83 | 0.2626 | 0.9948 | 0.83 | 0.8214 | 0.1366 | 0.0461 |
| No log | 45.0 | 315 | 0.6380 | 0.82 | 0.2600 | 1.2151 | 0.82 | 0.8026 | 0.1263 | 0.0428 |
| No log | 46.0 | 322 | 0.6356 | 0.82 | 0.2571 | 1.0923 | 0.82 | 0.8114 | 0.1443 | 0.0449 |
| No log | 47.0 | 329 | 0.6444 | 0.815 | 0.2638 | 1.0657 | 0.815 | 0.7980 | 0.1503 | 0.0476 |
| No log | 48.0 | 336 | 0.6337 | 0.82 | 0.2676 | 1.0650 | 0.82 | 0.8077 | 0.1370 | 0.0442 |
| No log | 49.0 | 343 | 0.6271 | 0.84 | 0.2541 | 1.1500 | 0.8400 | 0.8230 | 0.1365 | 0.0422 |
| No log | 50.0 | 350 | 0.6284 | 0.81 | 0.2588 | 1.2703 | 0.81 | 0.7964 | 0.1411 | 0.0425 |
| No log | 51.0 | 357 | 0.6507 | 0.82 | 0.2612 | 1.1306 | 0.82 | 0.7996 | 0.1558 | 0.0460 |
| No log | 52.0 | 364 | 0.6329 | 0.825 | 0.2602 | 1.2060 | 0.825 | 0.8146 | 0.1296 | 0.0439 |
| No log | 53.0 | 371 | 0.6342 | 0.825 | 0.2574 | 1.0132 | 0.825 | 0.8158 | 0.1467 | 0.0434 |
| No log | 54.0 | 378 | 0.6486 | 0.82 | 0.2633 | 1.1662 | 0.82 | 0.8060 | 0.1445 | 0.0466 |
| No log | 55.0 | 385 | 0.6245 | 0.825 | 0.2588 | 1.1358 | 0.825 | 0.8088 | 0.1428 | 0.0429 |
| No log | 56.0 | 392 | 0.6303 | 0.815 | 0.2616 | 0.9843 | 0.815 | 0.8013 | 0.1447 | 0.0458 |
| No log | 57.0 | 399 | 0.6196 | 0.82 | 0.2545 | 1.1936 | 0.82 | 0.8076 | 0.1516 | 0.0438 |
| No log | 58.0 | 406 | 0.6241 | 0.82 | 0.2620 | 1.0557 | 0.82 | 0.8100 | 0.1423 | 0.0450 |
| No log | 59.0 | 413 | 0.6278 | 0.82 | 0.2579 | 1.0777 | 0.82 | 0.8076 | 0.1382 | 0.0451 |
| No log | 60.0 | 420 | 0.6385 | 0.81 | 0.2651 | 0.9962 | 0.81 | 0.7910 | 0.1565 | 0.0467 |
| No log | 61.0 | 427 | 0.6328 | 0.82 | 0.2619 | 0.9968 | 0.82 | 0.8103 | 0.1299 | 0.0469 |
| No log | 62.0 | 434 | 0.6195 | 0.82 | 0.2571 | 0.9997 | 0.82 | 0.8062 | 0.1471 | 0.0438 |
| No log | 63.0 | 441 | 0.6150 | 0.825 | 0.2560 | 1.0061 | 0.825 | 0.8166 | 0.1498 | 0.0430 |
| No log | 64.0 | 448 | 0.6201 | 0.825 | 0.2574 | 1.0592 | 0.825 | 0.8166 | 0.1369 | 0.0442 |
| No log | 65.0 | 455 | 0.6281 | 0.815 | 0.2601 | 0.9990 | 0.815 | 0.8013 | 0.1449 | 0.0459 |
| No log | 66.0 | 462 | 0.6232 | 0.825 | 0.2538 | 1.0657 | 0.825 | 0.8166 | 0.1341 | 0.0442 |
| No log | 67.0 | 469 | 0.6242 | 0.82 | 0.2567 | 1.0622 | 0.82 | 0.8100 | 0.1432 | 0.0445 |
| No log | 68.0 | 476 | 0.6213 | 0.82 | 0.2598 | 1.0666 | 0.82 | 0.8100 | 0.1517 | 0.0447 |
| No log | 69.0 | 483 | 0.6268 | 0.82 | 0.2577 | 1.0106 | 0.82 | 0.8100 | 0.1365 | 0.0455 |
| No log | 70.0 | 490 | 0.6252 | 0.82 | 0.2579 | 0.9979 | 0.82 | 0.8100 | 0.1395 | 0.0451 |
| No log | 71.0 | 497 | 0.6251 | 0.82 | 0.2589 | 1.0606 | 0.82 | 0.8100 | 0.1485 | 0.0448 |
| 0.3286 | 72.0 | 504 | 0.6212 | 0.825 | 0.2571 | 1.0034 | 0.825 | 0.8166 | 0.1448 | 0.0443 |
| 0.3286 | 73.0 | 511 | 0.6212 | 0.82 | 0.2584 | 0.9940 | 0.82 | 0.8100 | 0.1499 | 0.0444 |
| 0.3286 | 74.0 | 518 | 0.6214 | 0.82 | 0.2576 | 0.9914 | 0.82 | 0.8100 | 0.1411 | 0.0448 |
| 0.3286 | 75.0 | 525 | 0.6233 | 0.82 | 0.2580 | 0.9966 | 0.82 | 0.8100 | 0.1592 | 0.0450 |
| 0.3286 | 76.0 | 532 | 0.6214 | 0.82 | 0.2568 | 0.9952 | 0.82 | 0.8100 | 0.1404 | 0.0448 |
| 0.3286 | 77.0 | 539 | 0.6217 | 0.825 | 0.2575 | 0.9951 | 0.825 | 0.8166 | 0.1361 | 0.0445 |
| 0.3286 | 78.0 | 546 | 0.6220 | 0.82 | 0.2569 | 0.9964 | 0.82 | 0.8100 | 0.1385 | 0.0450 |
| 0.3286 | 79.0 | 553 | 0.6225 | 0.82 | 0.2581 | 0.9950 | 0.82 | 0.8100 | 0.1485 | 0.0450 |
| 0.3286 | 80.0 | 560 | 0.6213 | 0.82 | 0.2578 | 0.9912 | 0.82 | 0.8100 | 0.1381 | 0.0446 |
| 0.3286 | 81.0 | 567 | 0.6209 | 0.82 | 0.2572 | 0.9948 | 0.82 | 0.8100 | 0.1415 | 0.0447 |
| 0.3286 | 82.0 | 574 | 0.6213 | 0.82 | 0.2578 | 0.9958 | 0.82 | 0.8100 | 0.1422 | 0.0449 |
| 0.3286 | 83.0 | 581 | 0.6220 | 0.82 | 0.2579 | 0.9947 | 0.82 | 0.8100 | 0.1553 | 0.0448 |
| 0.3286 | 84.0 | 588 | 0.6212 | 0.82 | 0.2574 | 0.9915 | 0.82 | 0.8100 | 0.1418 | 0.0447 |
| 0.3286 | 85.0 | 595 | 0.6220 | 0.82 | 0.2579 | 0.9937 | 0.82 | 0.8100 | 0.1628 | 0.0450 |
| 0.3286 | 86.0 | 602 | 0.6207 | 0.82 | 0.2572 | 0.9945 | 0.82 | 0.8100 | 0.1412 | 0.0447 |
| 0.3286 | 87.0 | 609 | 0.6212 | 0.82 | 0.2573 | 0.9940 | 0.82 | 0.8100 | 0.1414 | 0.0447 |
| 0.3286 | 88.0 | 616 | 0.6201 | 0.825 | 0.2570 | 0.9943 | 0.825 | 0.8166 | 0.1366 | 0.0443 |
| 0.3286 | 89.0 | 623 | 0.6210 | 0.82 | 0.2573 | 0.9944 | 0.82 | 0.8100 | 0.1414 | 0.0448 |
| 0.3286 | 90.0 | 630 | 0.6207 | 0.82 | 0.2572 | 0.9942 | 0.82 | 0.8100 | 0.1414 | 0.0447 |
| 0.3286 | 91.0 | 637 | 0.6210 | 0.82 | 0.2572 | 0.9952 | 0.82 | 0.8100 | 0.1415 | 0.0447 |
| 0.3286 | 92.0 | 644 | 0.6205 | 0.82 | 0.2572 | 0.9939 | 0.82 | 0.8100 | 0.1414 | 0.0447 |
| 0.3286 | 93.0 | 651 | 0.6207 | 0.825 | 0.2570 | 0.9938 | 0.825 | 0.8166 | 0.1373 | 0.0445 |
| 0.3286 | 94.0 | 658 | 0.6206 | 0.82 | 0.2572 | 0.9945 | 0.82 | 0.8100 | 0.1414 | 0.0447 |
| 0.3286 | 95.0 | 665 | 0.6203 | 0.825 | 0.2568 | 0.9951 | 0.825 | 0.8166 | 0.1370 | 0.0444 |
| 0.3286 | 96.0 | 672 | 0.6205 | 0.82 | 0.2571 | 0.9942 | 0.82 | 0.8100 | 0.1413 | 0.0448 |
| 0.3286 | 97.0 | 679 | 0.6206 | 0.825 | 0.2570 | 0.9943 | 0.825 | 0.8166 | 0.1370 | 0.0445 |
| 0.3286 | 98.0 | 686 | 0.6206 | 0.825 | 0.2570 | 0.9942 | 0.825 | 0.8166 | 0.1370 | 0.0445 |
| 0.3286 | 99.0 | 693 | 0.6206 | 0.825 | 0.2570 | 0.9940 | 0.825 | 0.8166 | 0.1370 | 0.0445 |
| 0.3286 | 100.0 | 700 | 0.6206 | 0.825 | 0.2570 | 0.9939 | 0.825 | 0.8166 | 0.1370 | 0.0444 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.2.0.dev20231112+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
[
"adve",
"email",
"form",
"letter",
"memo",
"news",
"note",
"report",
"resume",
"scientific"
] |
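The labels above are the ten Tobacco3482 document classes. In a transformers config they would typically be wired up as `id2label`/`label2id` mappings, e.g. (illustrative):

```python
labels = ["adve", "email", "form", "letter", "memo",
          "news", "note", "report", "resume", "scientific"]

# Index order must match the order the model head was trained with.
id2label = {i: lab for i, lab in enumerate(labels)}
label2id = {lab: i for i, lab in enumerate(labels)}

print(id2label[4])        # → memo
print(label2id["email"])  # → 1
```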
jordyvl/dit-base_tobacco-tiny_tobacco3482_kd_MSE
|
# dit-base_tobacco-tiny_tobacco3482_kd_MSE
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the Tobacco3482 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0108
- Accuracy: 0.815
- Brier Loss: 0.2593
- NLL: 1.1011
- F1 Micro: 0.815
- F1 Macro: 0.8014
- ECE: 0.1462
- AURC: 0.0442
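The ECE figure above (expected calibration error) bins predictions by confidence and averages the gap between each bin's confidence and its accuracy, weighted by bin size. A minimal sketch (the 10-bin choice is an assumption; the evaluation code may use a different binning):

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Bin-size-weighted |accuracy - confidence| over equal-width bins."""
    confidences = probs.max(axis=1)
    predictions = probs.argmax(axis=1)
    accuracies = (predictions == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(accuracies[mask].mean()
                                     - confidences[mask].mean())
    return float(ece)

# Fully confident, always-correct predictions are perfectly calibrated.
probs = np.array([[1.0, 0.0], [0.0, 1.0]])
print(expected_calibration_error(probs, np.array([0, 1])))  # → 0.0
```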
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | NLL | F1 Micro | F1 Macro | ECE | AURC |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 7.1562 | 0.195 | 0.9158 | 7.6908 | 0.195 | 0.1043 | 0.2991 | 0.7927 |
| No log | 2.0 | 14 | 6.3485 | 0.245 | 0.8600 | 4.2261 | 0.245 | 0.1794 | 0.2960 | 0.7445 |
| No log | 3.0 | 21 | 5.2488 | 0.41 | 0.7090 | 3.6293 | 0.41 | 0.3494 | 0.2826 | 0.3786 |
| No log | 4.0 | 28 | 4.0484 | 0.565 | 0.5698 | 1.7703 | 0.565 | 0.5209 | 0.2622 | 0.2372 |
| No log | 5.0 | 35 | 3.0368 | 0.655 | 0.4710 | 1.5921 | 0.655 | 0.6368 | 0.2222 | 0.1598 |
| No log | 6.0 | 42 | 2.6191 | 0.695 | 0.4219 | 1.6919 | 0.695 | 0.6535 | 0.2041 | 0.1200 |
| No log | 7.0 | 49 | 2.0941 | 0.725 | 0.3913 | 1.3852 | 0.7250 | 0.6844 | 0.2046 | 0.0966 |
| No log | 8.0 | 56 | 2.0668 | 0.725 | 0.4119 | 1.3829 | 0.7250 | 0.6811 | 0.1890 | 0.1045 |
| No log | 9.0 | 63 | 1.7456 | 0.79 | 0.3138 | 1.5258 | 0.79 | 0.7539 | 0.1521 | 0.0651 |
| No log | 10.0 | 70 | 1.5815 | 0.77 | 0.3391 | 1.2461 | 0.7700 | 0.7323 | 0.1593 | 0.0725 |
| No log | 11.0 | 77 | 1.5720 | 0.785 | 0.2895 | 1.3282 | 0.785 | 0.7659 | 0.1408 | 0.0522 |
| No log | 12.0 | 84 | 1.8886 | 0.78 | 0.3692 | 1.6238 | 0.78 | 0.7717 | 0.2015 | 0.0917 |
| No log | 13.0 | 91 | 1.6164 | 0.785 | 0.2918 | 1.6303 | 0.785 | 0.7925 | 0.1545 | 0.0564 |
| No log | 14.0 | 98 | 1.4318 | 0.785 | 0.3220 | 1.3070 | 0.785 | 0.7606 | 0.1430 | 0.0639 |
| No log | 15.0 | 105 | 1.2774 | 0.81 | 0.2807 | 1.2877 | 0.81 | 0.7939 | 0.1532 | 0.0595 |
| No log | 16.0 | 112 | 1.3797 | 0.8 | 0.2993 | 1.2409 | 0.8000 | 0.7759 | 0.1565 | 0.0700 |
| No log | 17.0 | 119 | 1.3629 | 0.795 | 0.3091 | 1.1781 | 0.795 | 0.7670 | 0.1712 | 0.0567 |
| No log | 18.0 | 126 | 1.5101 | 0.8 | 0.3192 | 1.3586 | 0.8000 | 0.7878 | 0.1919 | 0.0665 |
| No log | 19.0 | 133 | 1.3897 | 0.805 | 0.2857 | 1.4983 | 0.805 | 0.7851 | 0.1356 | 0.0516 |
| No log | 20.0 | 140 | 1.3821 | 0.795 | 0.3204 | 1.0916 | 0.795 | 0.7745 | 0.1678 | 0.0651 |
| No log | 21.0 | 147 | 1.2852 | 0.83 | 0.2621 | 1.5182 | 0.83 | 0.8246 | 0.1483 | 0.0486 |
| No log | 22.0 | 154 | 1.2080 | 0.815 | 0.2744 | 1.1921 | 0.815 | 0.7957 | 0.1319 | 0.0500 |
| No log | 23.0 | 161 | 1.4016 | 0.805 | 0.3165 | 1.3364 | 0.805 | 0.7844 | 0.1534 | 0.0624 |
| No log | 24.0 | 168 | 1.2883 | 0.825 | 0.2592 | 1.4946 | 0.825 | 0.8119 | 0.1549 | 0.0481 |
| No log | 25.0 | 175 | 1.1715 | 0.815 | 0.2676 | 1.3363 | 0.815 | 0.8054 | 0.1464 | 0.0494 |
| No log | 26.0 | 182 | 1.1844 | 0.825 | 0.2585 | 1.4938 | 0.825 | 0.8045 | 0.1572 | 0.0469 |
| No log | 27.0 | 189 | 1.1739 | 0.81 | 0.2959 | 1.0692 | 0.81 | 0.7909 | 0.1625 | 0.0550 |
| No log | 28.0 | 196 | 1.1944 | 0.815 | 0.2891 | 1.1811 | 0.815 | 0.7971 | 0.1430 | 0.0572 |
| No log | 29.0 | 203 | 1.2115 | 0.83 | 0.2597 | 1.4809 | 0.83 | 0.8101 | 0.1289 | 0.0469 |
| No log | 30.0 | 210 | 1.1622 | 0.81 | 0.2825 | 1.1104 | 0.81 | 0.7931 | 0.1463 | 0.0511 |
| No log | 31.0 | 217 | 1.2591 | 0.8 | 0.3096 | 1.2310 | 0.8000 | 0.7789 | 0.1719 | 0.0591 |
| No log | 32.0 | 224 | 1.1752 | 0.82 | 0.2687 | 1.4091 | 0.82 | 0.7959 | 0.1581 | 0.0504 |
| No log | 33.0 | 231 | 1.1114 | 0.815 | 0.2719 | 1.0945 | 0.815 | 0.7885 | 0.1492 | 0.0485 |
| No log | 34.0 | 238 | 1.1105 | 0.815 | 0.2727 | 1.1239 | 0.815 | 0.7962 | 0.1300 | 0.0479 |
| No log | 35.0 | 245 | 1.1662 | 0.825 | 0.2748 | 1.3396 | 0.825 | 0.8100 | 0.1571 | 0.0554 |
| No log | 36.0 | 252 | 1.1023 | 0.815 | 0.2757 | 1.1805 | 0.815 | 0.8031 | 0.1428 | 0.0504 |
| No log | 37.0 | 259 | 1.1060 | 0.84 | 0.2604 | 1.3305 | 0.8400 | 0.8319 | 0.1596 | 0.0487 |
| No log | 38.0 | 266 | 1.1123 | 0.81 | 0.2682 | 1.1122 | 0.81 | 0.7922 | 0.1310 | 0.0482 |
| No log | 39.0 | 273 | 1.0820 | 0.815 | 0.2669 | 1.1629 | 0.815 | 0.7955 | 0.1479 | 0.0490 |
| No log | 40.0 | 280 | 1.0972 | 0.805 | 0.2784 | 1.2442 | 0.805 | 0.7858 | 0.1576 | 0.0483 |
| No log | 41.0 | 287 | 1.0845 | 0.83 | 0.2705 | 1.1180 | 0.83 | 0.8221 | 0.1504 | 0.0468 |
| No log | 42.0 | 294 | 1.0769 | 0.82 | 0.2602 | 1.1173 | 0.82 | 0.8066 | 0.1458 | 0.0451 |
| No log | 43.0 | 301 | 1.1366 | 0.81 | 0.2939 | 1.0722 | 0.81 | 0.7958 | 0.1532 | 0.0526 |
| No log | 44.0 | 308 | 1.0716 | 0.82 | 0.2635 | 1.1839 | 0.82 | 0.8043 | 0.1403 | 0.0451 |
| No log | 45.0 | 315 | 1.0865 | 0.81 | 0.2770 | 1.3595 | 0.81 | 0.7929 | 0.1501 | 0.0528 |
| No log | 46.0 | 322 | 1.0768 | 0.82 | 0.2638 | 1.1161 | 0.82 | 0.8067 | 0.1462 | 0.0457 |
| No log | 47.0 | 329 | 1.0644 | 0.825 | 0.2552 | 1.2086 | 0.825 | 0.8098 | 0.1579 | 0.0439 |
| No log | 48.0 | 336 | 1.0511 | 0.815 | 0.2656 | 1.1019 | 0.815 | 0.8014 | 0.1518 | 0.0471 |
| No log | 49.0 | 343 | 1.0517 | 0.82 | 0.2717 | 1.0881 | 0.82 | 0.8044 | 0.1559 | 0.0473 |
| No log | 50.0 | 350 | 1.0824 | 0.81 | 0.2813 | 1.1022 | 0.81 | 0.7968 | 0.1538 | 0.0505 |
| No log | 51.0 | 357 | 1.1439 | 0.835 | 0.2634 | 1.3483 | 0.835 | 0.8206 | 0.1471 | 0.0496 |
| No log | 52.0 | 364 | 1.0444 | 0.83 | 0.2500 | 1.0999 | 0.83 | 0.8156 | 0.1310 | 0.0423 |
| No log | 53.0 | 371 | 1.0426 | 0.825 | 0.2644 | 1.1112 | 0.825 | 0.8053 | 0.1295 | 0.0474 |
| No log | 54.0 | 378 | 1.0341 | 0.825 | 0.2635 | 1.1053 | 0.825 | 0.8092 | 0.1467 | 0.0465 |
| No log | 55.0 | 385 | 1.0900 | 0.815 | 0.2762 | 1.1021 | 0.815 | 0.7990 | 0.1439 | 0.0480 |
| No log | 56.0 | 392 | 1.0423 | 0.845 | 0.2517 | 1.2594 | 0.845 | 0.8444 | 0.1497 | 0.0428 |
| No log | 57.0 | 399 | 1.0246 | 0.825 | 0.2634 | 1.0927 | 0.825 | 0.8130 | 0.1260 | 0.0454 |
| No log | 58.0 | 406 | 1.0365 | 0.835 | 0.2649 | 1.0825 | 0.835 | 0.8232 | 0.1291 | 0.0448 |
| No log | 59.0 | 413 | 1.0394 | 0.82 | 0.2668 | 1.0968 | 0.82 | 0.8045 | 0.1458 | 0.0460 |
| No log | 60.0 | 420 | 1.0261 | 0.815 | 0.2720 | 1.0883 | 0.815 | 0.8011 | 0.1409 | 0.0472 |
| No log | 61.0 | 427 | 1.0503 | 0.83 | 0.2543 | 1.3230 | 0.83 | 0.8132 | 0.1378 | 0.0455 |
| No log | 62.0 | 434 | 1.0400 | 0.82 | 0.2637 | 1.0958 | 0.82 | 0.8043 | 0.1397 | 0.0456 |
| No log | 63.0 | 441 | 1.0338 | 0.82 | 0.2629 | 1.0960 | 0.82 | 0.8042 | 0.1338 | 0.0435 |
| No log | 64.0 | 448 | 1.0373 | 0.84 | 0.2508 | 1.2817 | 0.8400 | 0.8260 | 0.1325 | 0.0433 |
| No log | 65.0 | 455 | 1.0266 | 0.83 | 0.2663 | 1.1057 | 0.83 | 0.8163 | 0.1383 | 0.0460 |
| No log | 66.0 | 462 | 1.0303 | 0.825 | 0.2549 | 1.1906 | 0.825 | 0.8098 | 0.1399 | 0.0450 |
| No log | 67.0 | 469 | 1.0224 | 0.82 | 0.2668 | 1.0920 | 0.82 | 0.8042 | 0.1252 | 0.0433 |
| No log | 68.0 | 476 | 1.0274 | 0.845 | 0.2526 | 1.1948 | 0.845 | 0.8368 | 0.1423 | 0.0442 |
| No log | 69.0 | 483 | 1.0145 | 0.82 | 0.2647 | 1.0884 | 0.82 | 0.8070 | 0.1345 | 0.0449 |
| No log | 70.0 | 490 | 1.0194 | 0.815 | 0.2606 | 1.1076 | 0.815 | 0.8014 | 0.1529 | 0.0446 |
| No log | 71.0 | 497 | 1.0153 | 0.825 | 0.2572 | 1.2484 | 0.825 | 0.8142 | 0.1425 | 0.0445 |
| 0.6377 | 72.0 | 504 | 1.0265 | 0.815 | 0.2607 | 1.1109 | 0.815 | 0.8039 | 0.1457 | 0.0445 |
| 0.6377 | 73.0 | 511 | 1.0081 | 0.82 | 0.2567 | 1.1031 | 0.82 | 0.8040 | 0.1321 | 0.0440 |
| 0.6377 | 74.0 | 518 | 1.0135 | 0.825 | 0.2600 | 1.1036 | 0.825 | 0.8074 | 0.1477 | 0.0450 |
| 0.6377 | 75.0 | 525 | 1.0053 | 0.82 | 0.2616 | 1.1012 | 0.82 | 0.8044 | 0.1542 | 0.0442 |
| 0.6377 | 76.0 | 532 | 1.0187 | 0.82 | 0.2598 | 1.1115 | 0.82 | 0.8069 | 0.1566 | 0.0445 |
| 0.6377 | 77.0 | 539 | 1.0127 | 0.82 | 0.2610 | 1.1024 | 0.82 | 0.8097 | 0.1489 | 0.0443 |
| 0.6377 | 78.0 | 546 | 1.0079 | 0.82 | 0.2581 | 1.1034 | 0.82 | 0.8069 | 0.1463 | 0.0434 |
| 0.6377 | 79.0 | 553 | 1.0097 | 0.815 | 0.2592 | 1.1030 | 0.815 | 0.8014 | 0.1478 | 0.0438 |
| 0.6377 | 80.0 | 560 | 1.0131 | 0.835 | 0.2556 | 1.1048 | 0.835 | 0.8281 | 0.1508 | 0.0441 |
| 0.6377 | 81.0 | 567 | 1.0183 | 0.82 | 0.2602 | 1.1057 | 0.82 | 0.8044 | 0.1417 | 0.0446 |
| 0.6377 | 82.0 | 574 | 1.0190 | 0.815 | 0.2665 | 1.0966 | 0.815 | 0.7987 | 0.1370 | 0.0462 |
| 0.6377 | 83.0 | 581 | 1.0117 | 0.815 | 0.2619 | 1.0974 | 0.815 | 0.8014 | 0.1614 | 0.0442 |
| 0.6377 | 84.0 | 588 | 1.0099 | 0.82 | 0.2557 | 1.1070 | 0.82 | 0.8044 | 0.1327 | 0.0436 |
| 0.6377 | 85.0 | 595 | 1.0088 | 0.82 | 0.2569 | 1.1037 | 0.82 | 0.8044 | 0.1446 | 0.0437 |
| 0.6377 | 86.0 | 602 | 1.0110 | 0.82 | 0.2596 | 1.0945 | 0.82 | 0.8043 | 0.1505 | 0.0442 |
| 0.6377 | 87.0 | 609 | 1.0151 | 0.815 | 0.2606 | 1.1046 | 0.815 | 0.8014 | 0.1416 | 0.0451 |
| 0.6377 | 88.0 | 616 | 1.0101 | 0.815 | 0.2587 | 1.1025 | 0.815 | 0.8014 | 0.1435 | 0.0440 |
| 0.6377 | 89.0 | 623 | 1.0106 | 0.815 | 0.2613 | 1.0976 | 0.815 | 0.8014 | 0.1489 | 0.0443 |
| 0.6377 | 90.0 | 630 | 1.0097 | 0.815 | 0.2590 | 1.0993 | 0.815 | 0.8014 | 0.1490 | 0.0439 |
| 0.6377 | 91.0 | 637 | 1.0098 | 0.815 | 0.2593 | 1.1024 | 0.815 | 0.8014 | 0.1510 | 0.0440 |
| 0.6377 | 92.0 | 644 | 1.0116 | 0.815 | 0.2600 | 1.1004 | 0.815 | 0.8014 | 0.1465 | 0.0442 |
| 0.6377 | 93.0 | 651 | 1.0107 | 0.815 | 0.2596 | 1.1005 | 0.815 | 0.8014 | 0.1548 | 0.0442 |
| 0.6377 | 94.0 | 658 | 1.0110 | 0.815 | 0.2599 | 1.0993 | 0.815 | 0.8014 | 0.1463 | 0.0440 |
| 0.6377 | 95.0 | 665 | 1.0106 | 0.815 | 0.2593 | 1.1011 | 0.815 | 0.8014 | 0.1409 | 0.0441 |
| 0.6377 | 96.0 | 672 | 1.0106 | 0.815 | 0.2596 | 1.1011 | 0.815 | 0.8014 | 0.1496 | 0.0442 |
| 0.6377 | 97.0 | 679 | 1.0109 | 0.815 | 0.2595 | 1.1007 | 0.815 | 0.8014 | 0.1462 | 0.0442 |
| 0.6377 | 98.0 | 686 | 1.0107 | 0.815 | 0.2593 | 1.1013 | 0.815 | 0.8014 | 0.1409 | 0.0441 |
| 0.6377 | 99.0 | 693 | 1.0107 | 0.815 | 0.2594 | 1.1009 | 0.815 | 0.8014 | 0.1462 | 0.0441 |
| 0.6377 | 100.0 | 700 | 1.0108 | 0.815 | 0.2593 | 1.1011 | 0.815 | 0.8014 | 0.1462 | 0.0442 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.2.0.dev20231112+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
[
"adve",
"email",
"form",
"letter",
"memo",
"news",
"note",
"report",
"resume",
"scientific"
] |
jordyvl/dit-base_tobacco-tiny_tobacco3482_kd_NKD_t1.0_g1.5
|
# dit-base_tobacco-tiny_tobacco3482_kd_NKD_t1.0_g1.5
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the Tobacco3482 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1418
- Accuracy: 0.84
- Brier Loss: 0.2718
- NLL: 0.9778
- F1 Micro: 0.8400
- F1 Macro: 0.8296
- ECE: 0.1553
- AURC: 0.0479
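The `_NKD_t1.0` suffix in the model name suggests a distillation loss with temperature 1.0. Temperature-scaled distillation in general softens teacher and student logits before comparing the resulting distributions; a generic Hinton-style sketch (not the exact NKD formulation, which decouples target and non-target terms):

```python
import math

def softmax(logits, t=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(z / t) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_kl(teacher_logits, student_logits, t=1.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by t**2 to keep gradient magnitudes comparable across t."""
    p = softmax(teacher_logits, t)
    q = softmax(student_logits, t)
    return t * t * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Identical logits give zero distillation loss.
print(kd_kl([2.0, 0.5, -1.0], [2.0, 0.5, -1.0], t=1.0))  # → 0.0
```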
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | NLL | F1 Micro | F1 Macro | ECE | AURC |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 5.6749 | 0.2 | 0.9075 | 8.1551 | 0.2000 | 0.1380 | 0.2949 | 0.8075 |
| No log | 2.0 | 14 | 5.1602 | 0.15 | 0.8781 | 7.4212 | 0.15 | 0.1415 | 0.2402 | 0.7606 |
| No log | 3.0 | 21 | 4.4947 | 0.46 | 0.7066 | 2.8419 | 0.46 | 0.4202 | 0.3230 | 0.3244 |
| No log | 4.0 | 28 | 3.9789 | 0.555 | 0.5827 | 1.6986 | 0.555 | 0.5525 | 0.2796 | 0.2218 |
| No log | 5.0 | 35 | 3.6991 | 0.65 | 0.4828 | 1.7197 | 0.65 | 0.6491 | 0.2315 | 0.1491 |
| No log | 6.0 | 42 | 3.6495 | 0.68 | 0.4691 | 1.8258 | 0.68 | 0.6586 | 0.2555 | 0.1358 |
| No log | 7.0 | 49 | 3.3912 | 0.75 | 0.3899 | 1.8385 | 0.75 | 0.7276 | 0.2237 | 0.0920 |
| No log | 8.0 | 56 | 3.3055 | 0.71 | 0.3792 | 1.5754 | 0.7100 | 0.6922 | 0.2130 | 0.1023 |
| No log | 9.0 | 63 | 3.3535 | 0.72 | 0.3836 | 1.7076 | 0.72 | 0.7195 | 0.2015 | 0.0978 |
| No log | 10.0 | 70 | 3.1877 | 0.785 | 0.3190 | 1.5736 | 0.785 | 0.7582 | 0.1905 | 0.0693 |
| No log | 11.0 | 77 | 3.5578 | 0.72 | 0.3812 | 2.2613 | 0.72 | 0.7241 | 0.1684 | 0.0846 |
| No log | 12.0 | 84 | 3.3589 | 0.775 | 0.3389 | 1.3228 | 0.775 | 0.7540 | 0.1665 | 0.0764 |
| No log | 13.0 | 91 | 3.1097 | 0.805 | 0.2856 | 1.4183 | 0.805 | 0.7929 | 0.1603 | 0.0484 |
| No log | 14.0 | 98 | 3.2661 | 0.815 | 0.3146 | 1.9097 | 0.815 | 0.8066 | 0.1753 | 0.0636 |
| No log | 15.0 | 105 | 3.3637 | 0.755 | 0.3361 | 1.5166 | 0.755 | 0.7492 | 0.1804 | 0.0720 |
| No log | 16.0 | 112 | 3.1495 | 0.8 | 0.2994 | 1.4586 | 0.8000 | 0.7926 | 0.1714 | 0.0604 |
| No log | 17.0 | 119 | 3.1573 | 0.8 | 0.2941 | 1.6755 | 0.8000 | 0.7899 | 0.1545 | 0.0577 |
| No log | 18.0 | 126 | 3.4445 | 0.77 | 0.3416 | 1.4075 | 0.7700 | 0.7503 | 0.1620 | 0.0807 |
| No log | 19.0 | 133 | 3.1292 | 0.805 | 0.2816 | 1.3835 | 0.805 | 0.7815 | 0.1768 | 0.0526 |
| No log | 20.0 | 140 | 3.4253 | 0.75 | 0.3459 | 2.0430 | 0.75 | 0.7591 | 0.1697 | 0.0706 |
| No log | 21.0 | 147 | 3.1645 | 0.81 | 0.3000 | 1.7363 | 0.81 | 0.8113 | 0.1711 | 0.0614 |
| No log | 22.0 | 154 | 3.0823 | 0.815 | 0.2791 | 1.5997 | 0.815 | 0.8020 | 0.1417 | 0.0556 |
| No log | 23.0 | 161 | 2.9898 | 0.83 | 0.2521 | 1.4274 | 0.83 | 0.8189 | 0.1589 | 0.0434 |
| No log | 24.0 | 168 | 3.0915 | 0.83 | 0.2770 | 1.3516 | 0.83 | 0.8173 | 0.1495 | 0.0538 |
| No log | 25.0 | 175 | 3.0405 | 0.825 | 0.2621 | 1.5191 | 0.825 | 0.8048 | 0.1329 | 0.0494 |
| No log | 26.0 | 182 | 3.0621 | 0.815 | 0.2735 | 1.0698 | 0.815 | 0.7955 | 0.1617 | 0.0522 |
| No log | 27.0 | 189 | 3.0228 | 0.835 | 0.2650 | 1.4235 | 0.835 | 0.8315 | 0.1565 | 0.0502 |
| No log | 28.0 | 196 | 3.0677 | 0.82 | 0.2778 | 1.5299 | 0.82 | 0.8165 | 0.1660 | 0.0557 |
| No log | 29.0 | 203 | 3.0272 | 0.825 | 0.2699 | 1.4726 | 0.825 | 0.8204 | 0.1643 | 0.0491 |
| No log | 30.0 | 210 | 3.1090 | 0.815 | 0.2892 | 1.3258 | 0.815 | 0.8026 | 0.1585 | 0.0536 |
| No log | 31.0 | 217 | 3.1069 | 0.81 | 0.2866 | 1.5638 | 0.81 | 0.8050 | 0.1473 | 0.0557 |
| No log | 32.0 | 224 | 3.0374 | 0.815 | 0.2765 | 1.2895 | 0.815 | 0.8045 | 0.1476 | 0.0527 |
| No log | 33.0 | 231 | 3.0503 | 0.815 | 0.2750 | 1.3113 | 0.815 | 0.7975 | 0.1531 | 0.0517 |
| No log | 34.0 | 238 | 2.9852 | 0.82 | 0.2613 | 1.4575 | 0.82 | 0.8110 | 0.1600 | 0.0448 |
| No log | 35.0 | 245 | 3.0437 | 0.83 | 0.2724 | 1.3491 | 0.83 | 0.8205 | 0.1622 | 0.0571 |
| No log | 36.0 | 252 | 3.0098 | 0.82 | 0.2717 | 1.2671 | 0.82 | 0.8055 | 0.1567 | 0.0519 |
| No log | 37.0 | 259 | 3.0025 | 0.845 | 0.2599 | 1.2628 | 0.845 | 0.8255 | 0.1342 | 0.0481 |
| No log | 38.0 | 266 | 3.1854 | 0.805 | 0.3015 | 1.2550 | 0.805 | 0.7956 | 0.1560 | 0.0601 |
| No log | 39.0 | 273 | 3.0704 | 0.82 | 0.2793 | 1.2393 | 0.82 | 0.8057 | 0.1566 | 0.0557 |
| No log | 40.0 | 280 | 3.0739 | 0.825 | 0.2842 | 1.2701 | 0.825 | 0.8169 | 0.1371 | 0.0513 |
| No log | 41.0 | 287 | 3.0465 | 0.835 | 0.2747 | 1.2598 | 0.835 | 0.8302 | 0.1449 | 0.0538 |
| No log | 42.0 | 294 | 3.0691 | 0.825 | 0.2773 | 1.1796 | 0.825 | 0.8137 | 0.1372 | 0.0511 |
| No log | 43.0 | 301 | 3.0734 | 0.84 | 0.2732 | 1.1765 | 0.8400 | 0.8282 | 0.1564 | 0.0565 |
| No log | 44.0 | 308 | 3.0262 | 0.845 | 0.2622 | 1.2152 | 0.845 | 0.8306 | 0.1457 | 0.0541 |
| No log | 45.0 | 315 | 3.0610 | 0.835 | 0.2727 | 1.2249 | 0.835 | 0.8261 | 0.1606 | 0.0544 |
| No log | 46.0 | 322 | 3.0358 | 0.84 | 0.2767 | 1.1020 | 0.8400 | 0.8323 | 0.1416 | 0.0527 |
| No log | 47.0 | 329 | 2.9893 | 0.835 | 0.2650 | 1.1536 | 0.835 | 0.8252 | 0.1386 | 0.0493 |
| No log | 48.0 | 336 | 3.0498 | 0.84 | 0.2726 | 1.1253 | 0.8400 | 0.8320 | 0.1302 | 0.0535 |
| No log | 49.0 | 343 | 2.9816 | 0.845 | 0.2585 | 1.2068 | 0.845 | 0.8355 | 0.1455 | 0.0451 |
| No log | 50.0 | 350 | 3.0431 | 0.835 | 0.2686 | 1.0596 | 0.835 | 0.8238 | 0.1542 | 0.0540 |
| No log | 51.0 | 357 | 3.0200 | 0.835 | 0.2639 | 1.1806 | 0.835 | 0.8290 | 0.1434 | 0.0501 |
| No log | 52.0 | 364 | 3.0217 | 0.845 | 0.2664 | 1.0846 | 0.845 | 0.8324 | 0.1671 | 0.0503 |
| No log | 53.0 | 371 | 3.0255 | 0.84 | 0.2649 | 1.1803 | 0.8400 | 0.8318 | 0.1350 | 0.0488 |
| No log | 54.0 | 378 | 3.0069 | 0.835 | 0.2616 | 1.2057 | 0.835 | 0.8190 | 0.1284 | 0.0496 |
| No log | 55.0 | 385 | 3.0609 | 0.815 | 0.2746 | 1.0378 | 0.815 | 0.7970 | 0.1422 | 0.0490 |
| No log | 56.0 | 392 | 3.0111 | 0.84 | 0.2622 | 1.1806 | 0.8400 | 0.8341 | 0.1428 | 0.0513 |
| No log | 57.0 | 399 | 3.0050 | 0.84 | 0.2643 | 1.1898 | 0.8400 | 0.8299 | 0.1452 | 0.0494 |
| No log | 58.0 | 406 | 3.0426 | 0.84 | 0.2662 | 1.0337 | 0.8400 | 0.8307 | 0.1397 | 0.0514 |
| No log | 59.0 | 413 | 3.0427 | 0.835 | 0.2682 | 1.0309 | 0.835 | 0.8247 | 0.1453 | 0.0491 |
| No log | 60.0 | 420 | 3.0449 | 0.83 | 0.2744 | 1.0039 | 0.83 | 0.8141 | 0.1436 | 0.0484 |
| No log | 61.0 | 427 | 3.0524 | 0.83 | 0.2729 | 1.1480 | 0.83 | 0.8162 | 0.1454 | 0.0477 |
| No log | 62.0 | 434 | 3.0290 | 0.835 | 0.2610 | 1.1757 | 0.835 | 0.8264 | 0.1476 | 0.0506 |
| No log | 63.0 | 441 | 3.0574 | 0.83 | 0.2712 | 1.0242 | 0.83 | 0.8161 | 0.1464 | 0.0485 |
| No log | 64.0 | 448 | 3.0436 | 0.835 | 0.2684 | 1.1326 | 0.835 | 0.8267 | 0.1417 | 0.0470 |
| No log | 65.0 | 455 | 3.0170 | 0.84 | 0.2610 | 1.1095 | 0.8400 | 0.8289 | 0.1520 | 0.0492 |
| No log | 66.0 | 462 | 3.0176 | 0.835 | 0.2623 | 1.1140 | 0.835 | 0.8225 | 0.1262 | 0.0459 |
| No log | 67.0 | 469 | 3.0712 | 0.84 | 0.2735 | 1.0884 | 0.8400 | 0.8296 | 0.1421 | 0.0516 |
| No log | 68.0 | 476 | 3.0258 | 0.84 | 0.2670 | 1.1388 | 0.8400 | 0.8279 | 0.1478 | 0.0461 |
| No log | 69.0 | 483 | 3.0838 | 0.835 | 0.2707 | 1.0937 | 0.835 | 0.8232 | 0.1425 | 0.0477 |
| No log | 70.0 | 490 | 3.1076 | 0.82 | 0.2819 | 1.0030 | 0.82 | 0.7998 | 0.1507 | 0.0480 |
| No log | 71.0 | 497 | 3.0696 | 0.84 | 0.2725 | 1.0175 | 0.8400 | 0.8349 | 0.1567 | 0.0501 |
| 2.6485 | 72.0 | 504 | 3.0535 | 0.84 | 0.2676 | 1.0079 | 0.8400 | 0.8253 | 0.1351 | 0.0477 |
| 2.6485 | 73.0 | 511 | 3.0326 | 0.83 | 0.2667 | 0.9792 | 0.83 | 0.8093 | 0.1334 | 0.0464 |
| 2.6485 | 74.0 | 518 | 3.0271 | 0.835 | 0.2616 | 1.0865 | 0.835 | 0.8193 | 0.1223 | 0.0454 |
| 2.6485 | 75.0 | 525 | 3.0894 | 0.83 | 0.2732 | 0.9764 | 0.83 | 0.8123 | 0.1446 | 0.0489 |
| 2.6485 | 76.0 | 532 | 3.0905 | 0.835 | 0.2730 | 1.0736 | 0.835 | 0.8232 | 0.1578 | 0.0485 |
| 2.6485 | 77.0 | 539 | 3.0507 | 0.84 | 0.2646 | 1.0716 | 0.8400 | 0.8279 | 0.1424 | 0.0469 |
| 2.6485 | 78.0 | 546 | 3.0981 | 0.845 | 0.2712 | 0.9916 | 0.845 | 0.8324 | 0.1452 | 0.0508 |
| 2.6485 | 79.0 | 553 | 3.0820 | 0.84 | 0.2728 | 0.9791 | 0.8400 | 0.8296 | 0.1403 | 0.0473 |
| 2.6485 | 80.0 | 560 | 3.0978 | 0.84 | 0.2733 | 0.9864 | 0.8400 | 0.8296 | 0.1480 | 0.0485 |
| 2.6485 | 81.0 | 567 | 3.0936 | 0.84 | 0.2716 | 0.9955 | 0.8400 | 0.8296 | 0.1483 | 0.0474 |
| 2.6485 | 82.0 | 574 | 3.0937 | 0.845 | 0.2685 | 0.9875 | 0.845 | 0.8324 | 0.1459 | 0.0486 |
| 2.6485 | 83.0 | 581 | 3.0940 | 0.84 | 0.2719 | 0.9863 | 0.8400 | 0.8296 | 0.1481 | 0.0470 |
| 2.6485 | 84.0 | 588 | 3.0745 | 0.84 | 0.2656 | 1.0795 | 0.8400 | 0.8323 | 0.1460 | 0.0476 |
| 2.6485 | 85.0 | 595 | 3.1089 | 0.845 | 0.2681 | 1.0050 | 0.845 | 0.8324 | 0.1568 | 0.0492 |
| 2.6485 | 86.0 | 602 | 3.0880 | 0.84 | 0.2695 | 1.0607 | 0.8400 | 0.8296 | 0.1409 | 0.0474 |
| 2.6485 | 87.0 | 609 | 3.0848 | 0.84 | 0.2666 | 0.9996 | 0.8400 | 0.8296 | 0.1425 | 0.0470 |
| 2.6485 | 88.0 | 616 | 3.1144 | 0.84 | 0.2682 | 0.9937 | 0.8400 | 0.8296 | 0.1380 | 0.0482 |
| 2.6485 | 89.0 | 623 | 3.1316 | 0.84 | 0.2711 | 0.9884 | 0.8400 | 0.8296 | 0.1484 | 0.0490 |
| 2.6485 | 90.0 | 630 | 3.1312 | 0.84 | 0.2726 | 0.9732 | 0.8400 | 0.8296 | 0.1525 | 0.0488 |
| 2.6485 | 91.0 | 637 | 3.1312 | 0.84 | 0.2723 | 0.9794 | 0.8400 | 0.8296 | 0.1475 | 0.0481 |
| 2.6485 | 92.0 | 644 | 3.1426 | 0.84 | 0.2731 | 0.9728 | 0.8400 | 0.8296 | 0.1478 | 0.0491 |
| 2.6485 | 93.0 | 651 | 3.1351 | 0.84 | 0.2709 | 0.9741 | 0.8400 | 0.8296 | 0.1438 | 0.0483 |
| 2.6485 | 94.0 | 658 | 3.1390 | 0.84 | 0.2716 | 0.9764 | 0.8400 | 0.8296 | 0.1576 | 0.0483 |
| 2.6485 | 95.0 | 665 | 3.1366 | 0.84 | 0.2711 | 0.9795 | 0.8400 | 0.8296 | 0.1480 | 0.0484 |
| 2.6485 | 96.0 | 672 | 3.1337 | 0.84 | 0.2710 | 0.9828 | 0.8400 | 0.8296 | 0.1475 | 0.0478 |
| 2.6485 | 97.0 | 679 | 3.1431 | 0.84 | 0.2723 | 0.9767 | 0.8400 | 0.8296 | 0.1587 | 0.0480 |
| 2.6485 | 98.0 | 686 | 3.1388 | 0.84 | 0.2713 | 0.9808 | 0.8400 | 0.8296 | 0.1476 | 0.0480 |
| 2.6485 | 99.0 | 693 | 3.1420 | 0.84 | 0.2718 | 0.9778 | 0.8400 | 0.8296 | 0.1560 | 0.0480 |
| 2.6485 | 100.0 | 700 | 3.1418 | 0.84 | 0.2718 | 0.9778 | 0.8400 | 0.8296 | 0.1553 | 0.0479 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.2.0.dev20231112+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
[
"adve",
"email",
"form",
"letter",
"memo",
"news",
"note",
"report",
"resume",
"scientific"
] |
jordyvl/dit-base_tobacco-tiny_tobacco3482_hint
|
# dit-base_tobacco-tiny_tobacco3482_hint
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1196
- Accuracy: 0.805
- Brier Loss: 0.3299
- Nll: 1.3687
- F1 Micro: 0.805
- F1 Macro: 0.7917
- Ece: 0.1606
- Aurc: 0.0741
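The Brier loss reported above is the mean squared distance between the predicted probability vector and the one-hot true label. A minimal plain-Python sketch of that metric (illustrative only, not the exact evaluation code used here):

```python
def brier_loss(probs, labels, num_classes):
    """Mean multiclass Brier score: average squared distance between
    each predicted probability vector and the one-hot true label."""
    total = 0.0
    for p, y in zip(probs, labels):
        onehot = [1.0 if c == y else 0.0 for c in range(num_classes)]
        total += sum((pc - yc) ** 2 for pc, yc in zip(p, onehot))
    return total / len(probs)

# A perfectly confident correct prediction contributes 0;
# a uniform 2-class prediction contributes 0.5.
print(brier_loss([[1.0, 0.0], [0.5, 0.5]], [0, 1], 2))
```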
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
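With `lr_scheduler_type: linear` and `lr_scheduler_warmup_ratio: 0.1`, the learning rate ramps up linearly over the first 10% of steps and then decays linearly to zero. A small sketch of that schedule (assuming it mirrors the standard linear-with-warmup scheduler; total step counts are illustrative):

```python
def linear_schedule_lr(step, total_steps, base_lr, warmup_ratio=0.1):
    """Learning rate at `step` for linear warmup followed by linear decay."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # ramp from 0 up to base_lr
        return base_lr * step / max(1, warmup_steps)
    # decay from base_lr back down to 0 at total_steps
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

# e.g. 100 epochs x 7 steps = 700 total steps, peak lr 1e-4 at step 70
print(linear_schedule_lr(70, 700, 1e-4))
```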
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 3.7183 | 0.225 | 0.9191 | 7.9210 | 0.225 | 0.1037 | 0.3197 | 0.7539 |
| No log | 2.0 | 14 | 3.2602 | 0.365 | 0.7808 | 4.4817 | 0.3650 | 0.2558 | 0.2908 | 0.5116 |
| No log | 3.0 | 21 | 2.8475 | 0.53 | 0.6305 | 2.7026 | 0.53 | 0.4263 | 0.2496 | 0.2812 |
| No log | 4.0 | 28 | 2.6118 | 0.575 | 0.5382 | 1.8007 | 0.575 | 0.5478 | 0.2435 | 0.2022 |
| No log | 5.0 | 35 | 2.4289 | 0.66 | 0.4404 | 1.6436 | 0.66 | 0.6317 | 0.2124 | 0.1236 |
| No log | 6.0 | 42 | 2.4007 | 0.69 | 0.4123 | 1.9255 | 0.69 | 0.6896 | 0.2038 | 0.1195 |
| No log | 7.0 | 49 | 2.3239 | 0.745 | 0.3775 | 1.9428 | 0.745 | 0.7365 | 0.1838 | 0.0957 |
| No log | 8.0 | 56 | 2.3109 | 0.735 | 0.3660 | 1.6472 | 0.735 | 0.7370 | 0.1729 | 0.0842 |
| No log | 9.0 | 63 | 2.3263 | 0.74 | 0.3867 | 1.5935 | 0.74 | 0.7422 | 0.1803 | 0.0944 |
| No log | 10.0 | 70 | 2.3878 | 0.735 | 0.4037 | 1.6604 | 0.735 | 0.7039 | 0.2057 | 0.0909 |
| No log | 11.0 | 77 | 2.5009 | 0.715 | 0.4220 | 1.8526 | 0.715 | 0.6918 | 0.2184 | 0.1017 |
| No log | 12.0 | 84 | 2.5428 | 0.72 | 0.4169 | 1.8799 | 0.72 | 0.6894 | 0.2000 | 0.0979 |
| No log | 13.0 | 91 | 2.5197 | 0.765 | 0.3884 | 1.8802 | 0.765 | 0.7268 | 0.1895 | 0.0933 |
| No log | 14.0 | 98 | 2.4928 | 0.725 | 0.4147 | 1.6096 | 0.7250 | 0.6923 | 0.2078 | 0.0940 |
| No log | 15.0 | 105 | 2.4172 | 0.765 | 0.3748 | 1.6754 | 0.765 | 0.7317 | 0.1874 | 0.0824 |
| No log | 16.0 | 112 | 2.4116 | 0.76 | 0.3873 | 1.4155 | 0.76 | 0.7280 | 0.1972 | 0.0890 |
| No log | 17.0 | 119 | 2.9196 | 0.73 | 0.4500 | 1.8493 | 0.7300 | 0.7220 | 0.2290 | 0.1657 |
| No log | 18.0 | 126 | 2.5815 | 0.765 | 0.4175 | 2.0326 | 0.765 | 0.7467 | 0.2125 | 0.0990 |
| No log | 19.0 | 133 | 2.7076 | 0.735 | 0.4475 | 1.8526 | 0.735 | 0.7152 | 0.2062 | 0.1049 |
| No log | 20.0 | 140 | 2.6951 | 0.71 | 0.4709 | 2.0521 | 0.7100 | 0.7258 | 0.2380 | 0.1188 |
| No log | 21.0 | 147 | 2.4037 | 0.765 | 0.4013 | 1.8740 | 0.765 | 0.7691 | 0.2009 | 0.0949 |
| No log | 22.0 | 154 | 2.6585 | 0.73 | 0.4303 | 2.0299 | 0.7300 | 0.7170 | 0.2110 | 0.1004 |
| No log | 23.0 | 161 | 2.4320 | 0.75 | 0.3950 | 1.8720 | 0.75 | 0.7340 | 0.2004 | 0.0895 |
| No log | 24.0 | 168 | 2.4891 | 0.74 | 0.4199 | 1.6458 | 0.74 | 0.7458 | 0.2227 | 0.1128 |
| No log | 25.0 | 175 | 2.6550 | 0.705 | 0.4833 | 1.7755 | 0.705 | 0.7042 | 0.2478 | 0.1145 |
| No log | 26.0 | 182 | 2.3191 | 0.765 | 0.3965 | 1.6941 | 0.765 | 0.7415 | 0.1954 | 0.0731 |
| No log | 27.0 | 189 | 2.3000 | 0.785 | 0.3763 | 1.3143 | 0.785 | 0.7416 | 0.2026 | 0.0712 |
| No log | 28.0 | 196 | 2.2047 | 0.78 | 0.3409 | 1.4818 | 0.78 | 0.7694 | 0.1759 | 0.0688 |
| No log | 29.0 | 203 | 2.3587 | 0.77 | 0.3781 | 1.6779 | 0.7700 | 0.7571 | 0.1937 | 0.0766 |
| No log | 30.0 | 210 | 2.5027 | 0.75 | 0.4400 | 1.8454 | 0.75 | 0.7338 | 0.2232 | 0.0817 |
| No log | 31.0 | 217 | 2.4092 | 0.77 | 0.3899 | 1.5010 | 0.7700 | 0.7498 | 0.1987 | 0.0710 |
| No log | 32.0 | 224 | 2.7655 | 0.74 | 0.4520 | 2.0720 | 0.74 | 0.7177 | 0.2266 | 0.0855 |
| No log | 33.0 | 231 | 2.3814 | 0.76 | 0.3979 | 1.4053 | 0.76 | 0.7352 | 0.1982 | 0.0754 |
| No log | 34.0 | 238 | 2.3946 | 0.775 | 0.3790 | 1.6969 | 0.775 | 0.7387 | 0.1908 | 0.0876 |
| No log | 35.0 | 245 | 2.5158 | 0.775 | 0.4064 | 1.4329 | 0.775 | 0.7428 | 0.2068 | 0.0929 |
| No log | 36.0 | 252 | 2.4920 | 0.75 | 0.4281 | 1.5724 | 0.75 | 0.7470 | 0.2161 | 0.1028 |
| No log | 37.0 | 259 | 2.4541 | 0.765 | 0.3842 | 1.5272 | 0.765 | 0.7163 | 0.1918 | 0.0853 |
| No log | 38.0 | 266 | 2.3785 | 0.82 | 0.3319 | 1.6360 | 0.82 | 0.7944 | 0.1732 | 0.0811 |
| No log | 39.0 | 273 | 2.3721 | 0.77 | 0.3831 | 1.2724 | 0.7700 | 0.7649 | 0.1880 | 0.0818 |
| No log | 40.0 | 280 | 2.5684 | 0.795 | 0.3494 | 1.7291 | 0.795 | 0.7724 | 0.1771 | 0.1126 |
| No log | 41.0 | 287 | 2.4835 | 0.78 | 0.3799 | 1.7700 | 0.78 | 0.7594 | 0.2024 | 0.0821 |
| No log | 42.0 | 294 | 2.4690 | 0.795 | 0.3678 | 1.6379 | 0.795 | 0.7804 | 0.1836 | 0.0840 |
| No log | 43.0 | 301 | 2.3069 | 0.77 | 0.3809 | 1.4970 | 0.7700 | 0.7621 | 0.1807 | 0.0784 |
| No log | 44.0 | 308 | 2.4424 | 0.795 | 0.3558 | 1.7162 | 0.795 | 0.7857 | 0.1879 | 0.0667 |
| No log | 45.0 | 315 | 2.0018 | 0.86 | 0.2387 | 1.4924 | 0.8600 | 0.8558 | 0.1180 | 0.0526 |
| No log | 46.0 | 322 | 2.4174 | 0.785 | 0.3593 | 1.7688 | 0.785 | 0.7662 | 0.1859 | 0.0773 |
| No log | 47.0 | 329 | 2.1816 | 0.84 | 0.2862 | 1.5093 | 0.8400 | 0.8230 | 0.1459 | 0.0679 |
| No log | 48.0 | 336 | 2.3006 | 0.78 | 0.3601 | 1.7675 | 0.78 | 0.7496 | 0.1847 | 0.0752 |
| No log | 49.0 | 343 | 2.2952 | 0.81 | 0.3231 | 1.4851 | 0.81 | 0.7865 | 0.1691 | 0.0656 |
| No log | 50.0 | 350 | 2.1346 | 0.8 | 0.3132 | 1.4625 | 0.8000 | 0.7937 | 0.1660 | 0.0789 |
| No log | 51.0 | 357 | 2.2935 | 0.81 | 0.3383 | 1.5764 | 0.81 | 0.7914 | 0.1770 | 0.0717 |
| No log | 52.0 | 364 | 2.1792 | 0.825 | 0.3116 | 1.5652 | 0.825 | 0.8163 | 0.1454 | 0.0654 |
| No log | 53.0 | 371 | 2.1231 | 0.81 | 0.3066 | 1.3012 | 0.81 | 0.8062 | 0.1552 | 0.0604 |
| No log | 54.0 | 378 | 1.9712 | 0.825 | 0.2854 | 1.2891 | 0.825 | 0.8137 | 0.1430 | 0.0521 |
| No log | 55.0 | 385 | 2.0133 | 0.825 | 0.2839 | 1.3994 | 0.825 | 0.8086 | 0.1433 | 0.0560 |
| No log | 56.0 | 392 | 1.9978 | 0.835 | 0.2800 | 1.4348 | 0.835 | 0.8232 | 0.1415 | 0.0573 |
| No log | 57.0 | 399 | 1.9847 | 0.83 | 0.2825 | 1.3907 | 0.83 | 0.8153 | 0.1421 | 0.0560 |
| No log | 58.0 | 406 | 1.9892 | 0.83 | 0.2832 | 1.4502 | 0.83 | 0.8153 | 0.1503 | 0.0566 |
| No log | 59.0 | 413 | 1.9848 | 0.83 | 0.2851 | 1.4506 | 0.83 | 0.8156 | 0.1462 | 0.0560 |
| No log | 60.0 | 420 | 1.9871 | 0.835 | 0.2910 | 1.4527 | 0.835 | 0.8191 | 0.1608 | 0.0566 |
| No log | 61.0 | 427 | 1.9914 | 0.825 | 0.2932 | 1.4490 | 0.825 | 0.8095 | 0.1464 | 0.0587 |
| No log | 62.0 | 434 | 1.9908 | 0.825 | 0.2958 | 1.4459 | 0.825 | 0.8095 | 0.1493 | 0.0597 |
| No log | 63.0 | 441 | 1.9954 | 0.825 | 0.3012 | 1.4480 | 0.825 | 0.8095 | 0.1469 | 0.0606 |
| No log | 64.0 | 448 | 2.0111 | 0.82 | 0.3050 | 1.4487 | 0.82 | 0.8026 | 0.1507 | 0.0619 |
| No log | 65.0 | 455 | 2.0212 | 0.82 | 0.3046 | 1.4469 | 0.82 | 0.8026 | 0.1604 | 0.0634 |
| No log | 66.0 | 462 | 2.0170 | 0.82 | 0.3059 | 1.4443 | 0.82 | 0.8040 | 0.1539 | 0.0639 |
| No log | 67.0 | 469 | 2.0170 | 0.815 | 0.3056 | 1.4496 | 0.815 | 0.8019 | 0.1534 | 0.0643 |
| No log | 68.0 | 476 | 2.0316 | 0.82 | 0.3115 | 1.4522 | 0.82 | 0.8026 | 0.1606 | 0.0645 |
| No log | 69.0 | 483 | 2.0335 | 0.805 | 0.3132 | 1.3831 | 0.805 | 0.7855 | 0.1607 | 0.0654 |
| No log | 70.0 | 490 | 2.0362 | 0.815 | 0.3106 | 1.3834 | 0.815 | 0.7989 | 0.1614 | 0.0655 |
| No log | 71.0 | 497 | 2.0318 | 0.815 | 0.3105 | 1.3893 | 0.815 | 0.7947 | 0.1541 | 0.0662 |
| 1.3661 | 72.0 | 504 | 2.0434 | 0.815 | 0.3135 | 1.4473 | 0.815 | 0.7955 | 0.1579 | 0.0653 |
| 1.3661 | 73.0 | 511 | 2.0517 | 0.81 | 0.3139 | 1.3838 | 0.81 | 0.7917 | 0.1564 | 0.0680 |
| 1.3661 | 74.0 | 518 | 2.0594 | 0.82 | 0.3162 | 1.3783 | 0.82 | 0.7975 | 0.1626 | 0.0681 |
| 1.3661 | 75.0 | 525 | 2.0628 | 0.815 | 0.3210 | 1.3752 | 0.815 | 0.7944 | 0.1598 | 0.0706 |
| 1.3661 | 76.0 | 532 | 2.0605 | 0.81 | 0.3158 | 1.3711 | 0.81 | 0.7886 | 0.1639 | 0.0684 |
| 1.3661 | 77.0 | 539 | 2.0718 | 0.815 | 0.3187 | 1.3860 | 0.815 | 0.7944 | 0.1710 | 0.0705 |
| 1.3661 | 78.0 | 546 | 2.0749 | 0.815 | 0.3168 | 1.3658 | 0.815 | 0.7958 | 0.1569 | 0.0713 |
| 1.3661 | 79.0 | 553 | 2.0796 | 0.83 | 0.3188 | 1.3016 | 0.83 | 0.8147 | 0.1646 | 0.0722 |
| 1.3661 | 80.0 | 560 | 2.0746 | 0.81 | 0.3210 | 1.3758 | 0.81 | 0.7916 | 0.1580 | 0.0729 |
| 1.3661 | 81.0 | 567 | 2.0819 | 0.815 | 0.3194 | 1.3686 | 0.815 | 0.7913 | 0.1576 | 0.0722 |
| 1.3661 | 82.0 | 574 | 2.0866 | 0.825 | 0.3182 | 1.3627 | 0.825 | 0.8085 | 0.1602 | 0.0718 |
| 1.3661 | 83.0 | 581 | 2.0942 | 0.815 | 0.3246 | 1.3249 | 0.815 | 0.8008 | 0.1591 | 0.0727 |
| 1.3661 | 84.0 | 588 | 2.0938 | 0.815 | 0.3246 | 1.3680 | 0.815 | 0.7984 | 0.1848 | 0.0727 |
| 1.3661 | 85.0 | 595 | 2.0912 | 0.82 | 0.3222 | 1.3662 | 0.82 | 0.8012 | 0.1594 | 0.0702 |
| 1.3661 | 86.0 | 602 | 2.0941 | 0.82 | 0.3234 | 1.3764 | 0.82 | 0.8012 | 0.1576 | 0.0738 |
| 1.3661 | 87.0 | 609 | 2.1037 | 0.8 | 0.3304 | 1.3821 | 0.8000 | 0.7821 | 0.1599 | 0.0740 |
| 1.3661 | 88.0 | 616 | 2.1098 | 0.805 | 0.3288 | 1.3587 | 0.805 | 0.7932 | 0.1678 | 0.0718 |
| 1.3661 | 89.0 | 623 | 2.1119 | 0.81 | 0.3276 | 1.3636 | 0.81 | 0.7945 | 0.1622 | 0.0728 |
| 1.3661 | 90.0 | 630 | 2.1078 | 0.805 | 0.3279 | 1.3641 | 0.805 | 0.7914 | 0.1734 | 0.0737 |
| 1.3661 | 91.0 | 637 | 2.1110 | 0.8 | 0.3296 | 1.3686 | 0.8000 | 0.7879 | 0.1636 | 0.0744 |
| 1.3661 | 92.0 | 644 | 2.1150 | 0.8 | 0.3317 | 1.3685 | 0.8000 | 0.7879 | 0.1730 | 0.0742 |
| 1.3661 | 93.0 | 651 | 2.1146 | 0.8 | 0.3303 | 1.3693 | 0.8000 | 0.7881 | 0.1631 | 0.0742 |
| 1.3661 | 94.0 | 658 | 2.1153 | 0.805 | 0.3292 | 1.3657 | 0.805 | 0.7917 | 0.1676 | 0.0734 |
| 1.3661 | 95.0 | 665 | 2.1188 | 0.805 | 0.3298 | 1.3683 | 0.805 | 0.7917 | 0.1690 | 0.0735 |
| 1.3661 | 96.0 | 672 | 2.1183 | 0.805 | 0.3291 | 1.3691 | 0.805 | 0.7914 | 0.1687 | 0.0742 |
| 1.3661 | 97.0 | 679 | 2.1155 | 0.81 | 0.3271 | 1.3664 | 0.81 | 0.7942 | 0.1599 | 0.0743 |
| 1.3661 | 98.0 | 686 | 2.1183 | 0.805 | 0.3285 | 1.3673 | 0.805 | 0.7914 | 0.1638 | 0.0740 |
| 1.3661 | 99.0 | 693 | 2.1179 | 0.805 | 0.3297 | 1.3686 | 0.805 | 0.7917 | 0.1613 | 0.0741 |
| 1.3661 | 100.0 | 700 | 2.1196 | 0.805 | 0.3299 | 1.3687 | 0.805 | 0.7917 | 0.1606 | 0.0741 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.2.0.dev20231112+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
[
"adve",
"email",
"form",
"letter",
"memo",
"news",
"note",
"report",
"resume",
"scientific"
] |
hkivancoral/hushem_5x_deit_small_sgd_00001_fold1
|
# hushem_5x_deit_small_sgd_00001_fold1
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6074
- Accuracy: 0.2222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4194 | 1.0 | 27 | 1.6162 | 0.2222 |
| 1.4315 | 2.0 | 54 | 1.6158 | 0.2222 |
| 1.4532 | 3.0 | 81 | 1.6154 | 0.2222 |
| 1.4652 | 4.0 | 108 | 1.6150 | 0.2222 |
| 1.4244 | 5.0 | 135 | 1.6147 | 0.2222 |
| 1.4622 | 6.0 | 162 | 1.6143 | 0.2222 |
| 1.4528 | 7.0 | 189 | 1.6140 | 0.2222 |
| 1.4262 | 8.0 | 216 | 1.6136 | 0.2222 |
| 1.4181 | 9.0 | 243 | 1.6133 | 0.2222 |
| 1.4163 | 10.0 | 270 | 1.6130 | 0.2222 |
| 1.4463 | 11.0 | 297 | 1.6127 | 0.2222 |
| 1.4137 | 12.0 | 324 | 1.6124 | 0.2222 |
| 1.4131 | 13.0 | 351 | 1.6121 | 0.2222 |
| 1.4148 | 14.0 | 378 | 1.6118 | 0.2222 |
| 1.444 | 15.0 | 405 | 1.6115 | 0.2222 |
| 1.4135 | 16.0 | 432 | 1.6113 | 0.2222 |
| 1.4356 | 17.0 | 459 | 1.6110 | 0.2222 |
| 1.4146 | 18.0 | 486 | 1.6108 | 0.2222 |
| 1.4096 | 19.0 | 513 | 1.6105 | 0.2222 |
| 1.4038 | 20.0 | 540 | 1.6103 | 0.2222 |
| 1.3926 | 21.0 | 567 | 1.6101 | 0.2222 |
| 1.4332 | 22.0 | 594 | 1.6099 | 0.2222 |
| 1.4214 | 23.0 | 621 | 1.6097 | 0.2222 |
| 1.4083 | 24.0 | 648 | 1.6095 | 0.2222 |
| 1.4271 | 25.0 | 675 | 1.6093 | 0.2222 |
| 1.4496 | 26.0 | 702 | 1.6091 | 0.2222 |
| 1.4117 | 27.0 | 729 | 1.6090 | 0.2222 |
| 1.403 | 28.0 | 756 | 1.6088 | 0.2222 |
| 1.3913 | 29.0 | 783 | 1.6087 | 0.2222 |
| 1.4302 | 30.0 | 810 | 1.6085 | 0.2222 |
| 1.4037 | 31.0 | 837 | 1.6084 | 0.2222 |
| 1.4442 | 32.0 | 864 | 1.6083 | 0.2222 |
| 1.4272 | 33.0 | 891 | 1.6082 | 0.2222 |
| 1.4095 | 34.0 | 918 | 1.6080 | 0.2222 |
| 1.4234 | 35.0 | 945 | 1.6079 | 0.2222 |
| 1.4343 | 36.0 | 972 | 1.6079 | 0.2222 |
| 1.4253 | 37.0 | 999 | 1.6078 | 0.2222 |
| 1.4109 | 38.0 | 1026 | 1.6077 | 0.2222 |
| 1.4096 | 39.0 | 1053 | 1.6076 | 0.2222 |
| 1.3772 | 40.0 | 1080 | 1.6076 | 0.2222 |
| 1.4046 | 41.0 | 1107 | 1.6075 | 0.2222 |
| 1.384 | 42.0 | 1134 | 1.6075 | 0.2222 |
| 1.4202 | 43.0 | 1161 | 1.6075 | 0.2222 |
| 1.3963 | 44.0 | 1188 | 1.6074 | 0.2222 |
| 1.4183 | 45.0 | 1215 | 1.6074 | 0.2222 |
| 1.3888 | 46.0 | 1242 | 1.6074 | 0.2222 |
| 1.4088 | 47.0 | 1269 | 1.6074 | 0.2222 |
| 1.393 | 48.0 | 1296 | 1.6074 | 0.2222 |
| 1.4397 | 49.0 | 1323 | 1.6074 | 0.2222 |
| 1.4472 | 50.0 | 1350 | 1.6074 | 0.2222 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_5x_deit_small_sgd_00001_fold2
|
# hushem_5x_deit_small_sgd_00001_fold2
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4921
- Accuracy: 0.2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5247 | 1.0 | 27 | 1.5122 | 0.1778 |
| 1.5208 | 2.0 | 54 | 1.5113 | 0.1778 |
| 1.5431 | 3.0 | 81 | 1.5104 | 0.1778 |
| 1.5874 | 4.0 | 108 | 1.5095 | 0.1778 |
| 1.5185 | 5.0 | 135 | 1.5086 | 0.1778 |
| 1.5124 | 6.0 | 162 | 1.5078 | 0.1778 |
| 1.4656 | 7.0 | 189 | 1.5070 | 0.1778 |
| 1.5113 | 8.0 | 216 | 1.5062 | 0.1778 |
| 1.5043 | 9.0 | 243 | 1.5054 | 0.1778 |
| 1.505 | 10.0 | 270 | 1.5047 | 0.1778 |
| 1.4599 | 11.0 | 297 | 1.5040 | 0.1778 |
| 1.5036 | 12.0 | 324 | 1.5033 | 0.1778 |
| 1.5237 | 13.0 | 351 | 1.5026 | 0.1778 |
| 1.511 | 14.0 | 378 | 1.5019 | 0.1778 |
| 1.5324 | 15.0 | 405 | 1.5013 | 0.1778 |
| 1.5272 | 16.0 | 432 | 1.5007 | 0.1778 |
| 1.5263 | 17.0 | 459 | 1.5002 | 0.1778 |
| 1.4937 | 18.0 | 486 | 1.4996 | 0.1778 |
| 1.5117 | 19.0 | 513 | 1.4991 | 0.1778 |
| 1.516 | 20.0 | 540 | 1.4985 | 0.1778 |
| 1.5298 | 21.0 | 567 | 1.4981 | 0.1778 |
| 1.5031 | 22.0 | 594 | 1.4976 | 0.1778 |
| 1.496 | 23.0 | 621 | 1.4971 | 0.1778 |
| 1.4984 | 24.0 | 648 | 1.4967 | 0.2 |
| 1.4849 | 25.0 | 675 | 1.4963 | 0.2 |
| 1.5277 | 26.0 | 702 | 1.4959 | 0.2 |
| 1.4813 | 27.0 | 729 | 1.4955 | 0.2 |
| 1.5008 | 28.0 | 756 | 1.4952 | 0.2 |
| 1.5143 | 29.0 | 783 | 1.4948 | 0.2 |
| 1.5063 | 30.0 | 810 | 1.4945 | 0.2 |
| 1.5197 | 31.0 | 837 | 1.4942 | 0.2 |
| 1.4689 | 32.0 | 864 | 1.4940 | 0.2 |
| 1.5261 | 33.0 | 891 | 1.4937 | 0.2 |
| 1.5047 | 34.0 | 918 | 1.4935 | 0.2 |
| 1.4608 | 35.0 | 945 | 1.4933 | 0.2 |
| 1.5134 | 36.0 | 972 | 1.4931 | 0.2 |
| 1.4999 | 37.0 | 999 | 1.4929 | 0.2 |
| 1.4901 | 38.0 | 1026 | 1.4928 | 0.2 |
| 1.4933 | 39.0 | 1053 | 1.4926 | 0.2 |
| 1.5285 | 40.0 | 1080 | 1.4925 | 0.2 |
| 1.5189 | 41.0 | 1107 | 1.4924 | 0.2 |
| 1.5357 | 42.0 | 1134 | 1.4923 | 0.2 |
| 1.5726 | 43.0 | 1161 | 1.4923 | 0.2 |
| 1.4926 | 44.0 | 1188 | 1.4922 | 0.2 |
| 1.4915 | 45.0 | 1215 | 1.4922 | 0.2 |
| 1.4934 | 46.0 | 1242 | 1.4921 | 0.2 |
| 1.5214 | 47.0 | 1269 | 1.4921 | 0.2 |
| 1.5071 | 48.0 | 1296 | 1.4921 | 0.2 |
| 1.5711 | 49.0 | 1323 | 1.4921 | 0.2 |
| 1.4665 | 50.0 | 1350 | 1.4921 | 0.2 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_5x_deit_small_sgd_00001_fold3
|
# hushem_5x_deit_small_sgd_00001_fold3
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5763
- Accuracy: 0.2093
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5114 | 1.0 | 28 | 1.6002 | 0.1860 |
| 1.516 | 2.0 | 56 | 1.5991 | 0.2093 |
| 1.448 | 3.0 | 84 | 1.5981 | 0.2093 |
| 1.5009 | 4.0 | 112 | 1.5971 | 0.2093 |
| 1.646 | 5.0 | 140 | 1.5960 | 0.2093 |
| 1.55 | 6.0 | 168 | 1.5950 | 0.2093 |
| 1.5383 | 7.0 | 196 | 1.5941 | 0.2093 |
| 1.4892 | 8.0 | 224 | 1.5932 | 0.2093 |
| 1.5021 | 9.0 | 252 | 1.5922 | 0.2093 |
| 1.5057 | 10.0 | 280 | 1.5913 | 0.2093 |
| 1.5326 | 11.0 | 308 | 1.5905 | 0.2093 |
| 1.5462 | 12.0 | 336 | 1.5897 | 0.2093 |
| 1.5032 | 13.0 | 364 | 1.5889 | 0.2093 |
| 1.4978 | 14.0 | 392 | 1.5881 | 0.2093 |
| 1.4757 | 15.0 | 420 | 1.5874 | 0.2093 |
| 1.4813 | 16.0 | 448 | 1.5867 | 0.2093 |
| 1.4635 | 17.0 | 476 | 1.5860 | 0.2093 |
| 1.511 | 18.0 | 504 | 1.5853 | 0.2093 |
| 1.4905 | 19.0 | 532 | 1.5847 | 0.2093 |
| 1.4302 | 20.0 | 560 | 1.5841 | 0.2093 |
| 1.5108 | 21.0 | 588 | 1.5835 | 0.2093 |
| 1.504 | 22.0 | 616 | 1.5829 | 0.2093 |
| 1.4924 | 23.0 | 644 | 1.5824 | 0.2093 |
| 1.5112 | 24.0 | 672 | 1.5819 | 0.2093 |
| 1.5256 | 25.0 | 700 | 1.5814 | 0.2093 |
| 1.4493 | 26.0 | 728 | 1.5809 | 0.2093 |
| 1.4989 | 27.0 | 756 | 1.5805 | 0.2093 |
| 1.5477 | 28.0 | 784 | 1.5801 | 0.2093 |
| 1.5116 | 29.0 | 812 | 1.5797 | 0.2093 |
| 1.4275 | 30.0 | 840 | 1.5793 | 0.2093 |
| 1.453 | 31.0 | 868 | 1.5790 | 0.2093 |
| 1.524 | 32.0 | 896 | 1.5786 | 0.2093 |
| 1.5026 | 33.0 | 924 | 1.5783 | 0.2093 |
| 1.4458 | 34.0 | 952 | 1.5780 | 0.2093 |
| 1.4708 | 35.0 | 980 | 1.5778 | 0.2093 |
| 1.5117 | 36.0 | 1008 | 1.5776 | 0.2093 |
| 1.5594 | 37.0 | 1036 | 1.5773 | 0.2093 |
| 1.5028 | 38.0 | 1064 | 1.5771 | 0.2093 |
| 1.4691 | 39.0 | 1092 | 1.5770 | 0.2093 |
| 1.5214 | 40.0 | 1120 | 1.5768 | 0.2093 |
| 1.5285 | 41.0 | 1148 | 1.5767 | 0.2093 |
| 1.4667 | 42.0 | 1176 | 1.5766 | 0.2093 |
| 1.4652 | 43.0 | 1204 | 1.5765 | 0.2093 |
| 1.4952 | 44.0 | 1232 | 1.5765 | 0.2093 |
| 1.4825 | 45.0 | 1260 | 1.5764 | 0.2093 |
| 1.4816 | 46.0 | 1288 | 1.5764 | 0.2093 |
| 1.4911 | 47.0 | 1316 | 1.5764 | 0.2093 |
| 1.5027 | 48.0 | 1344 | 1.5763 | 0.2093 |
| 1.4424 | 49.0 | 1372 | 1.5763 | 0.2093 |
| 1.4881 | 50.0 | 1400 | 1.5763 | 0.2093 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
jordyvl/dit-base_tobacco-tiny_tobacco3482_simkd
|
# dit-base_tobacco-tiny_tobacco3482_simkd
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7298
- Accuracy: 0.8
- Brier Loss: 0.3356
- Nll: 1.1950
- F1 Micro: 0.8000
- F1 Macro: 0.7677
- Ece: 0.2868
- Aurc: 0.0614
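The ECE value above is the expected calibration error: predictions are binned by confidence, and the gap between each bin's mean confidence and its accuracy is averaged, weighted by bin size. A plain-Python sketch of that computation (illustrative, assuming the usual equal-width binning):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: weighted average |mean confidence - accuracy| over
    equal-width confidence bins."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # conf == 1.0 -> last bin
        bins[idx].append((conf, ok))
    ece, n = 0.0, len(confidences)
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        acc = sum(1.0 for _, ok in b if ok) / len(b)
        ece += (len(b) / n) * abs(avg_conf - acc)
    return ece

# two predictions at 0.95 confidence, only one correct -> gap of 0.45
print(expected_calibration_error([0.95, 0.95], [True, False]))
```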
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 50 | 1.0044 | 0.11 | 0.8970 | 5.3755 | 0.11 | 0.0297 | 0.1810 | 0.9082 |
| No log | 2.0 | 100 | 0.9997 | 0.27 | 0.8946 | 5.6759 | 0.27 | 0.1038 | 0.2752 | 0.7229 |
| No log | 3.0 | 150 | 0.9946 | 0.345 | 0.8902 | 4.6234 | 0.345 | 0.1969 | 0.3377 | 0.6577 |
| No log | 4.0 | 200 | 0.9814 | 0.4 | 0.8686 | 3.0912 | 0.4000 | 0.2605 | 0.3687 | 0.3808 |
| No log | 5.0 | 250 | 0.9618 | 0.56 | 0.8277 | 2.9065 | 0.56 | 0.4439 | 0.4769 | 0.2239 |
| No log | 6.0 | 300 | 0.9225 | 0.58 | 0.7429 | 2.5647 | 0.58 | 0.4408 | 0.4561 | 0.1944 |
| No log | 7.0 | 350 | 0.8843 | 0.705 | 0.6414 | 2.4145 | 0.705 | 0.5531 | 0.4493 | 0.1261 |
| No log | 8.0 | 400 | 0.8627 | 0.685 | 0.5773 | 2.4171 | 0.685 | 0.5755 | 0.3710 | 0.1378 |
| No log | 9.0 | 450 | 0.8252 | 0.73 | 0.5158 | 1.6133 | 0.7300 | 0.6403 | 0.3706 | 0.1066 |
| 0.9306 | 10.0 | 500 | 0.8164 | 0.74 | 0.4861 | 1.9299 | 0.74 | 0.6672 | 0.3352 | 0.1090 |
| 0.9306 | 11.0 | 550 | 0.8350 | 0.67 | 0.5078 | 2.0291 | 0.67 | 0.6083 | 0.3271 | 0.1514 |
| 0.9306 | 12.0 | 600 | 0.8089 | 0.695 | 0.4680 | 1.6726 | 0.695 | 0.6065 | 0.3049 | 0.1040 |
| 0.9306 | 13.0 | 650 | 0.7847 | 0.78 | 0.4097 | 1.3710 | 0.78 | 0.7067 | 0.3090 | 0.0825 |
| 0.9306 | 14.0 | 700 | 0.7793 | 0.8 | 0.3952 | 1.4382 | 0.8000 | 0.7351 | 0.3189 | 0.0823 |
| 0.9306 | 15.0 | 750 | 0.7756 | 0.775 | 0.3979 | 1.2640 | 0.775 | 0.6997 | 0.2950 | 0.0835 |
| 0.9306 | 16.0 | 800 | 0.7888 | 0.765 | 0.3927 | 1.2499 | 0.765 | 0.6894 | 0.3175 | 0.0719 |
| 0.9306 | 17.0 | 850 | 0.7596 | 0.795 | 0.3603 | 1.1834 | 0.795 | 0.7250 | 0.2930 | 0.0673 |
| 0.9306 | 18.0 | 900 | 0.7581 | 0.795 | 0.3580 | 1.1902 | 0.795 | 0.7241 | 0.3104 | 0.0665 |
| 0.9306 | 19.0 | 950 | 0.7546 | 0.81 | 0.3547 | 1.1055 | 0.81 | 0.7583 | 0.3024 | 0.0621 |
| 0.7329 | 20.0 | 1000 | 0.7520 | 0.81 | 0.3547 | 1.1284 | 0.81 | 0.7533 | 0.3209 | 0.0581 |
| 0.7329 | 21.0 | 1050 | 0.7669 | 0.775 | 0.3906 | 1.3812 | 0.775 | 0.7502 | 0.3212 | 0.0794 |
| 0.7329 | 22.0 | 1100 | 0.7532 | 0.81 | 0.3591 | 1.0982 | 0.81 | 0.7836 | 0.3035 | 0.0708 |
| 0.7329 | 23.0 | 1150 | 0.7519 | 0.805 | 0.3643 | 1.0628 | 0.805 | 0.7742 | 0.2813 | 0.0732 |
| 0.7329 | 24.0 | 1200 | 0.7494 | 0.795 | 0.3614 | 1.1123 | 0.795 | 0.7618 | 0.2988 | 0.0699 |
| 0.7329 | 25.0 | 1250 | 0.7517 | 0.79 | 0.3696 | 1.0703 | 0.79 | 0.7606 | 0.3081 | 0.0800 |
| 0.7329 | 26.0 | 1300 | 0.7513 | 0.795 | 0.3629 | 1.1020 | 0.795 | 0.7769 | 0.2797 | 0.0722 |
| 0.7329 | 27.0 | 1350 | 0.7485 | 0.795 | 0.3552 | 1.0352 | 0.795 | 0.7671 | 0.2678 | 0.0684 |
| 0.7329 | 28.0 | 1400 | 0.7442 | 0.805 | 0.3471 | 1.0956 | 0.805 | 0.7706 | 0.2807 | 0.0630 |
| 0.7329 | 29.0 | 1450 | 0.7473 | 0.795 | 0.3592 | 1.1204 | 0.795 | 0.7685 | 0.2897 | 0.0722 |
| 0.6917 | 30.0 | 1500 | 0.7449 | 0.815 | 0.3482 | 1.0584 | 0.815 | 0.7862 | 0.2949 | 0.0629 |
| 0.6917 | 31.0 | 1550 | 0.7443 | 0.8 | 0.3512 | 1.1010 | 0.8000 | 0.7770 | 0.2954 | 0.0622 |
| 0.6917 | 32.0 | 1600 | 0.7454 | 0.785 | 0.3543 | 1.0994 | 0.785 | 0.7631 | 0.2957 | 0.0639 |
| 0.6917 | 33.0 | 1650 | 0.7421 | 0.815 | 0.3449 | 1.1826 | 0.815 | 0.7853 | 0.2996 | 0.0592 |
| 0.6917 | 34.0 | 1700 | 0.7454 | 0.79 | 0.3559 | 1.1000 | 0.79 | 0.7597 | 0.2964 | 0.0659 |
| 0.6917 | 35.0 | 1750 | 0.7418 | 0.815 | 0.3477 | 1.1616 | 0.815 | 0.7867 | 0.3133 | 0.0617 |
| 0.6917 | 36.0 | 1800 | 0.7425 | 0.815 | 0.3464 | 1.1274 | 0.815 | 0.7949 | 0.3173 | 0.0578 |
| 0.6917 | 37.0 | 1850 | 0.7421 | 0.8 | 0.3448 | 1.1909 | 0.8000 | 0.7732 | 0.2900 | 0.0639 |
| 0.6917 | 38.0 | 1900 | 0.7415 | 0.795 | 0.3471 | 1.1816 | 0.795 | 0.7594 | 0.2860 | 0.0655 |
| 0.6917 | 39.0 | 1950 | 0.7405 | 0.78 | 0.3502 | 1.1084 | 0.78 | 0.7491 | 0.2709 | 0.0650 |
| 0.6764 | 40.0 | 2000 | 0.7398 | 0.81 | 0.3457 | 1.1746 | 0.81 | 0.7797 | 0.2973 | 0.0603 |
| 0.6764 | 41.0 | 2050 | 0.7394 | 0.805 | 0.3437 | 1.1201 | 0.805 | 0.7764 | 0.2915 | 0.0626 |
| 0.6764 | 42.0 | 2100 | 0.7380 | 0.81 | 0.3420 | 1.0987 | 0.81 | 0.7861 | 0.2815 | 0.0583 |
| 0.6764 | 43.0 | 2150 | 0.7386 | 0.8 | 0.3437 | 1.1855 | 0.8000 | 0.7667 | 0.2804 | 0.0617 |
| 0.6764 | 44.0 | 2200 | 0.7398 | 0.795 | 0.3437 | 1.1138 | 0.795 | 0.7660 | 0.2719 | 0.0614 |
| 0.6764 | 45.0 | 2250 | 0.7384 | 0.805 | 0.3441 | 1.1100 | 0.805 | 0.7699 | 0.3065 | 0.0628 |
| 0.6764 | 46.0 | 2300 | 0.7389 | 0.79 | 0.3488 | 1.1079 | 0.79 | 0.7552 | 0.2615 | 0.0647 |
| 0.6764 | 47.0 | 2350 | 0.7368 | 0.8 | 0.3440 | 1.1095 | 0.8000 | 0.7698 | 0.2908 | 0.0624 |
| 0.6764 | 48.0 | 2400 | 0.7365 | 0.8 | 0.3452 | 1.0995 | 0.8000 | 0.7739 | 0.2838 | 0.0645 |
| 0.6764 | 49.0 | 2450 | 0.7365 | 0.8 | 0.3367 | 1.0442 | 0.8000 | 0.7712 | 0.2735 | 0.0585 |
| 0.6662 | 50.0 | 2500 | 0.7342 | 0.815 | 0.3379 | 1.1009 | 0.815 | 0.7815 | 0.2964 | 0.0584 |
| 0.6662 | 51.0 | 2550 | 0.7340 | 0.805 | 0.3358 | 1.0985 | 0.805 | 0.7723 | 0.2635 | 0.0593 |
| 0.6662 | 52.0 | 2600 | 0.7370 | 0.8 | 0.3429 | 1.1227 | 0.8000 | 0.7709 | 0.2841 | 0.0603 |
| 0.6662 | 53.0 | 2650 | 0.7325 | 0.81 | 0.3380 | 1.1110 | 0.81 | 0.7790 | 0.3022 | 0.0601 |
| 0.6662 | 54.0 | 2700 | 0.7320 | 0.8 | 0.3363 | 1.0621 | 0.8000 | 0.7647 | 0.2815 | 0.0607 |
| 0.6662 | 55.0 | 2750 | 0.7324 | 0.805 | 0.3321 | 0.9926 | 0.805 | 0.7693 | 0.2972 | 0.0600 |
| 0.6662 | 56.0 | 2800 | 0.7318 | 0.805 | 0.3364 | 1.0537 | 0.805 | 0.7681 | 0.2554 | 0.0612 |
| 0.6662 | 57.0 | 2850 | 0.7311 | 0.82 | 0.3355 | 1.1133 | 0.82 | 0.7862 | 0.2776 | 0.0594 |
| 0.6662 | 58.0 | 2900 | 0.7317 | 0.81 | 0.3331 | 1.0662 | 0.81 | 0.7797 | 0.2600 | 0.0579 |
| 0.6662 | 59.0 | 2950 | 0.7327 | 0.805 | 0.3382 | 1.1876 | 0.805 | 0.7735 | 0.2797 | 0.0621 |
| 0.6577 | 60.0 | 3000 | 0.7322 | 0.8 | 0.3356 | 1.1864 | 0.8000 | 0.7680 | 0.2797 | 0.0612 |
| 0.6577 | 61.0 | 3050 | 0.7327 | 0.795 | 0.3391 | 1.1347 | 0.795 | 0.7614 | 0.2883 | 0.0641 |
| 0.6577 | 62.0 | 3100 | 0.7315 | 0.815 | 0.3364 | 1.1227 | 0.815 | 0.7848 | 0.2681 | 0.0599 |
| 0.6577 | 63.0 | 3150 | 0.7316 | 0.805 | 0.3392 | 1.0608 | 0.805 | 0.7717 | 0.2742 | 0.0632 |
| 0.6577 | 64.0 | 3200 | 0.7313 | 0.82 | 0.3341 | 1.0601 | 0.82 | 0.7878 | 0.2950 | 0.0583 |
| 0.6577 | 65.0 | 3250 | 0.7322 | 0.805 | 0.3388 | 1.1837 | 0.805 | 0.7747 | 0.2806 | 0.0638 |
| 0.6577 | 66.0 | 3300 | 0.7311 | 0.805 | 0.3373 | 1.0157 | 0.805 | 0.7757 | 0.2880 | 0.0629 |
| 0.6577 | 67.0 | 3350 | 0.7310 | 0.805 | 0.3344 | 1.1878 | 0.805 | 0.7766 | 0.2499 | 0.0609 |
| 0.6577 | 68.0 | 3400 | 0.7326 | 0.805 | 0.3391 | 1.0847 | 0.805 | 0.7729 | 0.2824 | 0.0636 |
| 0.6577 | 69.0 | 3450 | 0.7302 | 0.805 | 0.3376 | 1.1932 | 0.805 | 0.7778 | 0.2789 | 0.0617 |
| 0.6528 | 70.0 | 3500 | 0.7305 | 0.81 | 0.3359 | 0.9988 | 0.81 | 0.7787 | 0.2769 | 0.0622 |
| 0.6528 | 71.0 | 3550 | 0.7300 | 0.81 | 0.3328 | 1.0833 | 0.81 | 0.7776 | 0.2914 | 0.0594 |
| 0.6528 | 72.0 | 3600 | 0.7300 | 0.81 | 0.3343 | 1.1426 | 0.81 | 0.7776 | 0.2843 | 0.0594 |
| 0.6528 | 73.0 | 3650 | 0.7285 | 0.805 | 0.3341 | 1.1237 | 0.805 | 0.7701 | 0.2723 | 0.0614 |
| 0.6528 | 74.0 | 3700 | 0.7303 | 0.81 | 0.3368 | 1.1928 | 0.81 | 0.7768 | 0.2926 | 0.0612 |
| 0.6528 | 75.0 | 3750 | 0.7290 | 0.805 | 0.3318 | 1.0669 | 0.805 | 0.7709 | 0.2810 | 0.0603 |
| 0.6528 | 76.0 | 3800 | 0.7316 | 0.8 | 0.3382 | 1.1392 | 0.8000 | 0.7687 | 0.2505 | 0.0636 |
| 0.6528 | 77.0 | 3850 | 0.7284 | 0.8 | 0.3337 | 1.1338 | 0.8000 | 0.7720 | 0.2677 | 0.0610 |
| 0.6528 | 78.0 | 3900 | 0.7303 | 0.805 | 0.3373 | 1.1969 | 0.805 | 0.7729 | 0.2745 | 0.0618 |
| 0.6528 | 79.0 | 3950 | 0.7297 | 0.805 | 0.3369 | 1.1970 | 0.805 | 0.7743 | 0.2731 | 0.0606 |
| 0.6489 | 80.0 | 4000 | 0.7296 | 0.795 | 0.3362 | 1.1328 | 0.795 | 0.7656 | 0.2620 | 0.0627 |
| 0.6489 | 81.0 | 4050 | 0.7295 | 0.805 | 0.3363 | 1.1358 | 0.805 | 0.7726 | 0.2540 | 0.0608 |
| 0.6489 | 82.0 | 4100 | 0.7290 | 0.795 | 0.3341 | 1.1389 | 0.795 | 0.7668 | 0.2661 | 0.0630 |
| 0.6489 | 83.0 | 4150 | 0.7289 | 0.8 | 0.3364 | 1.0597 | 0.8000 | 0.7678 | 0.2838 | 0.0615 |
| 0.6489 | 84.0 | 4200 | 0.7291 | 0.805 | 0.3351 | 1.1277 | 0.805 | 0.7743 | 0.2621 | 0.0608 |
| 0.6489 | 85.0 | 4250 | 0.7297 | 0.795 | 0.3353 | 1.1953 | 0.795 | 0.7668 | 0.2666 | 0.0622 |
| 0.6489 | 86.0 | 4300 | 0.7286 | 0.805 | 0.3339 | 1.1278 | 0.805 | 0.7735 | 0.2668 | 0.0608 |
| 0.6489 | 87.0 | 4350 | 0.7298 | 0.8 | 0.3361 | 1.1423 | 0.8000 | 0.7677 | 0.2613 | 0.0614 |
| 0.6489 | 88.0 | 4400 | 0.7296 | 0.805 | 0.3346 | 1.1927 | 0.805 | 0.7743 | 0.2789 | 0.0612 |
| 0.6489 | 89.0 | 4450 | 0.7299 | 0.8 | 0.3359 | 1.1950 | 0.8000 | 0.7686 | 0.2500 | 0.0613 |
| 0.6462 | 90.0 | 4500 | 0.7297 | 0.805 | 0.3354 | 1.1934 | 0.805 | 0.7743 | 0.2939 | 0.0613 |
| 0.6462 | 91.0 | 4550 | 0.7294 | 0.8 | 0.3353 | 1.1313 | 0.8000 | 0.7685 | 0.2808 | 0.0610 |
| 0.6462 | 92.0 | 4600 | 0.7297 | 0.805 | 0.3356 | 1.1349 | 0.805 | 0.7765 | 0.2668 | 0.0614 |
| 0.6462 | 93.0 | 4650 | 0.7298 | 0.8 | 0.3354 | 1.1954 | 0.8000 | 0.7685 | 0.2700 | 0.0613 |
| 0.6462 | 94.0 | 4700 | 0.7301 | 0.8 | 0.3362 | 1.1951 | 0.8000 | 0.7677 | 0.2722 | 0.0616 |
| 0.6462 | 95.0 | 4750 | 0.7299 | 0.805 | 0.3360 | 1.1957 | 0.805 | 0.7743 | 0.2619 | 0.0614 |
| 0.6462 | 96.0 | 4800 | 0.7299 | 0.805 | 0.3357 | 1.1946 | 0.805 | 0.7743 | 0.2892 | 0.0611 |
| 0.6462 | 97.0 | 4850 | 0.7297 | 0.8 | 0.3355 | 1.1954 | 0.8000 | 0.7686 | 0.2703 | 0.0613 |
| 0.6462 | 98.0 | 4900 | 0.7298 | 0.8 | 0.3359 | 1.1952 | 0.8000 | 0.7677 | 0.2892 | 0.0615 |
| 0.6462 | 99.0 | 4950 | 0.7298 | 0.8 | 0.3357 | 1.1951 | 0.8000 | 0.7677 | 0.2720 | 0.0614 |
| 0.645 | 100.0 | 5000 | 0.7298 | 0.8 | 0.3356 | 1.1950 | 0.8000 | 0.7677 | 0.2868 | 0.0614 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.2.0.dev20231112+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
[
"adve",
"email",
"form",
"letter",
"memo",
"news",
"note",
"report",
"resume",
"scientific"
]
jordyvl/dit-base_tobacco-tiny_tobacco3482_og_simkd
# dit-base_tobacco-tiny_tobacco3482_og_simkd
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 318.4368
- Accuracy: 0.805
- Brier Loss: 0.3825
- Nll: 1.1523
- F1 Micro: 0.805
- F1 Macro: 0.7673
- Ece: 0.2987
- Aurc: 0.0702
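The Brier loss and expected calibration error (ECE) reported above can be sketched in a few lines; this is an illustrative reimplementation of the two calibration metrics, not the exact evaluation code used by the trainer.

```python
def brier_loss(probs, labels):
    """Mean squared error between predicted class probabilities and one-hot labels."""
    n_classes = len(probs[0])
    total = 0.0
    for p, y in zip(probs, labels):
        onehot = [1.0 if i == y else 0.0 for i in range(n_classes)]
        total += sum((pi - oi) ** 2 for pi, oi in zip(p, onehot))
    return total / len(probs)

def expected_calibration_error(probs, labels, n_bins=10):
    """Bin predictions by confidence; average the |confidence - accuracy| gap, weighted by bin size."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        conf = max(p)
        pred = p.index(conf)
        bins[min(int(conf * n_bins), n_bins - 1)].append((conf, float(pred == y)))
    ece = 0.0
    for b in bins:
        if b:
            avg_conf = sum(c for c, _ in b) / len(b)
            accuracy = sum(a for _, a in b) / len(b)
            ece += (len(b) / len(probs)) * abs(avg_conf - accuracy)
    return ece
```

Lower is better for both: the Brier loss penalizes the squared distance from one-hot labels, and ECE measures how far average confidence drifts from accuracy within each confidence bin.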
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
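With `lr_scheduler_type: linear` and `warmup_ratio: 0.1`, the learning rate ramps linearly to its peak over the first 10% of steps and then decays linearly to zero. A minimal sketch, assuming the usual Hugging Face definition of the linear schedule:

```python
def linear_schedule_lr(step, total_steps, peak_lr=1e-4, warmup_ratio=0.1):
    """Linear warmup to peak_lr, then linear decay to 0 (HF-style sketch)."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # ramp up from 0 to peak_lr over the warmup phase
        return peak_lr * step / max(1, warmup_steps)
    # decay from peak_lr back down to 0 over the remaining steps
    return peak_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
```

At 7 steps per epoch for 100 epochs (700 steps total, matching the results table), warmup ends at step 70.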
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 328.9614 | 0.155 | 0.8984 | 7.4608 | 0.155 | 0.0353 | 0.2035 | 0.8760 |
| No log | 2.0 | 14 | 328.8199 | 0.235 | 0.8940 | 6.4907 | 0.235 | 0.1148 | 0.2643 | 0.7444 |
| No log | 3.0 | 21 | 328.4224 | 0.38 | 0.8711 | 2.8184 | 0.38 | 0.3279 | 0.3440 | 0.4817 |
| No log | 4.0 | 28 | 327.5357 | 0.51 | 0.8072 | 2.0744 | 0.51 | 0.4221 | 0.4111 | 0.3319 |
| No log | 5.0 | 35 | 326.2037 | 0.53 | 0.6860 | 2.0669 | 0.53 | 0.4313 | 0.3619 | 0.2744 |
| No log | 6.0 | 42 | 324.8763 | 0.565 | 0.6008 | 1.9437 | 0.565 | 0.4477 | 0.3009 | 0.2469 |
| No log | 7.0 | 49 | 323.9205 | 0.6 | 0.5390 | 1.7694 | 0.6 | 0.4647 | 0.2365 | 0.1978 |
| No log | 8.0 | 56 | 323.2227 | 0.65 | 0.4632 | 1.7803 | 0.65 | 0.5195 | 0.2313 | 0.1422 |
| No log | 9.0 | 63 | 322.5265 | 0.74 | 0.4177 | 1.7538 | 0.74 | 0.6302 | 0.2442 | 0.1113 |
| No log | 10.0 | 70 | 322.1928 | 0.705 | 0.4013 | 1.5880 | 0.705 | 0.5864 | 0.2147 | 0.1118 |
| No log | 11.0 | 77 | 322.2687 | 0.795 | 0.4006 | 1.2854 | 0.795 | 0.7476 | 0.2719 | 0.0942 |
| No log | 12.0 | 84 | 321.6652 | 0.725 | 0.3754 | 1.3462 | 0.7250 | 0.6521 | 0.2238 | 0.0920 |
| No log | 13.0 | 91 | 322.3688 | 0.785 | 0.3951 | 1.3209 | 0.785 | 0.7260 | 0.2712 | 0.0805 |
| No log | 14.0 | 98 | 321.7083 | 0.72 | 0.3915 | 1.4854 | 0.72 | 0.6220 | 0.1963 | 0.0986 |
| No log | 15.0 | 105 | 321.6171 | 0.8 | 0.3614 | 1.3397 | 0.8000 | 0.7427 | 0.2531 | 0.0741 |
| No log | 16.0 | 112 | 321.0427 | 0.77 | 0.3502 | 1.1461 | 0.7700 | 0.7082 | 0.1976 | 0.0769 |
| No log | 17.0 | 119 | 321.1529 | 0.735 | 0.3827 | 1.5751 | 0.735 | 0.6769 | 0.1926 | 0.0973 |
| No log | 18.0 | 126 | 321.0808 | 0.78 | 0.3611 | 1.2529 | 0.78 | 0.7199 | 0.2242 | 0.0762 |
| No log | 19.0 | 133 | 321.6684 | 0.795 | 0.3835 | 1.1789 | 0.795 | 0.7506 | 0.2823 | 0.0712 |
| No log | 20.0 | 140 | 321.2322 | 0.78 | 0.3682 | 1.1715 | 0.78 | 0.7356 | 0.2532 | 0.0752 |
| No log | 21.0 | 147 | 320.4927 | 0.795 | 0.3458 | 1.3764 | 0.795 | 0.7504 | 0.2178 | 0.0710 |
| No log | 22.0 | 154 | 320.8896 | 0.8 | 0.3568 | 1.0908 | 0.8000 | 0.7536 | 0.2709 | 0.0677 |
| No log | 23.0 | 161 | 320.9060 | 0.785 | 0.3774 | 1.1571 | 0.785 | 0.7414 | 0.2712 | 0.0719 |
| No log | 24.0 | 168 | 320.9026 | 0.795 | 0.3718 | 1.0871 | 0.795 | 0.7465 | 0.2718 | 0.0690 |
| No log | 25.0 | 175 | 320.7932 | 0.805 | 0.3601 | 1.0998 | 0.805 | 0.7699 | 0.2620 | 0.0614 |
| No log | 26.0 | 182 | 321.2285 | 0.735 | 0.4164 | 1.8530 | 0.735 | 0.7051 | 0.2814 | 0.0889 |
| No log | 27.0 | 189 | 320.8364 | 0.775 | 0.4028 | 1.4063 | 0.775 | 0.7412 | 0.2687 | 0.0836 |
| No log | 28.0 | 196 | 320.0800 | 0.785 | 0.3548 | 1.2123 | 0.785 | 0.7394 | 0.2055 | 0.0740 |
| No log | 29.0 | 203 | 319.9995 | 0.79 | 0.3526 | 1.2296 | 0.79 | 0.7381 | 0.2363 | 0.0691 |
| No log | 30.0 | 210 | 320.0685 | 0.795 | 0.3588 | 1.2765 | 0.795 | 0.7447 | 0.2310 | 0.0725 |
| No log | 31.0 | 217 | 320.0981 | 0.805 | 0.3699 | 1.0128 | 0.805 | 0.7690 | 0.2868 | 0.0701 |
| No log | 32.0 | 224 | 320.5063 | 0.8 | 0.3900 | 1.1437 | 0.8000 | 0.7650 | 0.3141 | 0.0679 |
| No log | 33.0 | 231 | 319.8609 | 0.795 | 0.3549 | 1.2051 | 0.795 | 0.7526 | 0.2485 | 0.0697 |
| No log | 34.0 | 238 | 319.6974 | 0.81 | 0.3600 | 1.0124 | 0.81 | 0.7724 | 0.2671 | 0.0672 |
| No log | 35.0 | 245 | 319.5988 | 0.795 | 0.3513 | 1.1480 | 0.795 | 0.7540 | 0.2425 | 0.0679 |
| No log | 36.0 | 252 | 319.6317 | 0.8 | 0.3544 | 1.2190 | 0.8000 | 0.7607 | 0.2449 | 0.0674 |
| No log | 37.0 | 259 | 319.6821 | 0.81 | 0.3531 | 1.0714 | 0.81 | 0.7672 | 0.2590 | 0.0662 |
| No log | 38.0 | 266 | 319.7618 | 0.805 | 0.3754 | 1.0421 | 0.805 | 0.7625 | 0.2973 | 0.0701 |
| No log | 39.0 | 273 | 319.9920 | 0.775 | 0.3843 | 1.0821 | 0.775 | 0.7374 | 0.2801 | 0.0723 |
| No log | 40.0 | 280 | 319.3407 | 0.765 | 0.3633 | 1.2213 | 0.765 | 0.7041 | 0.2274 | 0.0767 |
| No log | 41.0 | 287 | 319.2732 | 0.765 | 0.3696 | 1.2638 | 0.765 | 0.7184 | 0.2315 | 0.0835 |
| No log | 42.0 | 294 | 319.5948 | 0.805 | 0.3685 | 1.0782 | 0.805 | 0.7625 | 0.2678 | 0.0661 |
| No log | 43.0 | 301 | 319.7181 | 0.8 | 0.3776 | 1.0004 | 0.8000 | 0.7507 | 0.2598 | 0.0672 |
| No log | 44.0 | 308 | 319.1170 | 0.77 | 0.3619 | 1.2129 | 0.7700 | 0.7159 | 0.2557 | 0.0787 |
| No log | 45.0 | 315 | 319.5949 | 0.8 | 0.3809 | 1.1448 | 0.8000 | 0.7670 | 0.2868 | 0.0688 |
| No log | 46.0 | 322 | 319.0327 | 0.79 | 0.3675 | 1.2386 | 0.79 | 0.7315 | 0.2546 | 0.0790 |
| No log | 47.0 | 329 | 319.3806 | 0.805 | 0.3665 | 1.1368 | 0.805 | 0.7620 | 0.2737 | 0.0700 |
| No log | 48.0 | 336 | 319.4999 | 0.795 | 0.3836 | 1.0256 | 0.795 | 0.7550 | 0.2800 | 0.0748 |
| No log | 49.0 | 343 | 319.2553 | 0.8 | 0.3660 | 1.2011 | 0.8000 | 0.7573 | 0.2698 | 0.0679 |
| No log | 50.0 | 350 | 319.3495 | 0.805 | 0.3836 | 1.1055 | 0.805 | 0.7634 | 0.3004 | 0.0671 |
| No log | 51.0 | 357 | 319.1643 | 0.8 | 0.3660 | 1.1980 | 0.8000 | 0.7497 | 0.2641 | 0.0709 |
| No log | 52.0 | 364 | 319.1483 | 0.795 | 0.3651 | 1.0776 | 0.795 | 0.7561 | 0.2856 | 0.0683 |
| No log | 53.0 | 371 | 319.0104 | 0.79 | 0.3724 | 1.1653 | 0.79 | 0.7422 | 0.2512 | 0.0724 |
| No log | 54.0 | 378 | 319.1622 | 0.795 | 0.3814 | 1.2807 | 0.795 | 0.7456 | 0.2644 | 0.0759 |
| No log | 55.0 | 385 | 319.1554 | 0.8 | 0.3694 | 1.2710 | 0.8000 | 0.7570 | 0.2877 | 0.0667 |
| No log | 56.0 | 392 | 319.2158 | 0.79 | 0.3795 | 1.1678 | 0.79 | 0.7509 | 0.2942 | 0.0692 |
| No log | 57.0 | 399 | 319.1813 | 0.795 | 0.3839 | 1.1243 | 0.795 | 0.7529 | 0.2835 | 0.0733 |
| No log | 58.0 | 406 | 318.7599 | 0.81 | 0.3632 | 1.1484 | 0.81 | 0.7738 | 0.3030 | 0.0691 |
| No log | 59.0 | 413 | 319.0827 | 0.805 | 0.3792 | 1.2070 | 0.805 | 0.7685 | 0.2901 | 0.0674 |
| No log | 60.0 | 420 | 318.6928 | 0.805 | 0.3661 | 1.1517 | 0.805 | 0.7534 | 0.2492 | 0.0719 |
| No log | 61.0 | 427 | 318.8309 | 0.805 | 0.3714 | 1.2785 | 0.805 | 0.7517 | 0.2674 | 0.0699 |
| No log | 62.0 | 434 | 318.9468 | 0.8 | 0.3794 | 1.1549 | 0.8000 | 0.7566 | 0.2862 | 0.0707 |
| No log | 63.0 | 441 | 318.8059 | 0.785 | 0.3774 | 1.2460 | 0.785 | 0.7487 | 0.2721 | 0.0752 |
| No log | 64.0 | 448 | 318.7155 | 0.81 | 0.3659 | 1.1963 | 0.81 | 0.7660 | 0.2676 | 0.0680 |
| No log | 65.0 | 455 | 318.8439 | 0.795 | 0.3799 | 1.0230 | 0.795 | 0.7464 | 0.2797 | 0.0700 |
| No log | 66.0 | 462 | 318.7784 | 0.79 | 0.3783 | 1.3168 | 0.79 | 0.7503 | 0.2618 | 0.0804 |
| No log | 67.0 | 469 | 318.9019 | 0.795 | 0.3802 | 1.2003 | 0.795 | 0.7503 | 0.2934 | 0.0702 |
| No log | 68.0 | 476 | 318.6647 | 0.8 | 0.3728 | 1.1395 | 0.8000 | 0.7590 | 0.2718 | 0.0699 |
| No log | 69.0 | 483 | 318.3780 | 0.8 | 0.3688 | 1.2812 | 0.8000 | 0.7602 | 0.2690 | 0.0728 |
| No log | 70.0 | 490 | 318.8004 | 0.8 | 0.3779 | 1.0682 | 0.8000 | 0.7607 | 0.2887 | 0.0682 |
| No log | 71.0 | 497 | 318.7021 | 0.8 | 0.3748 | 1.1101 | 0.8000 | 0.7545 | 0.2977 | 0.0691 |
| 322.4844 | 72.0 | 504 | 318.3595 | 0.79 | 0.3779 | 1.2333 | 0.79 | 0.7386 | 0.2617 | 0.0843 |
| 322.4844 | 73.0 | 511 | 318.5725 | 0.805 | 0.3740 | 1.2108 | 0.805 | 0.7674 | 0.2762 | 0.0677 |
| 322.4844 | 74.0 | 518 | 318.7131 | 0.81 | 0.3822 | 1.2048 | 0.81 | 0.7660 | 0.2971 | 0.0696 |
| 322.4844 | 75.0 | 525 | 318.6258 | 0.775 | 0.3806 | 1.1511 | 0.775 | 0.7228 | 0.2824 | 0.0743 |
| 322.4844 | 76.0 | 532 | 318.5414 | 0.8 | 0.3746 | 1.2136 | 0.8000 | 0.7563 | 0.2872 | 0.0708 |
| 322.4844 | 77.0 | 539 | 318.5404 | 0.795 | 0.3765 | 1.1414 | 0.795 | 0.7551 | 0.2905 | 0.0707 |
| 322.4844 | 78.0 | 546 | 318.5820 | 0.8 | 0.3806 | 1.1653 | 0.8000 | 0.7573 | 0.2888 | 0.0707 |
| 322.4844 | 79.0 | 553 | 318.5909 | 0.8 | 0.3838 | 1.2343 | 0.8000 | 0.7563 | 0.2778 | 0.0754 |
| 322.4844 | 80.0 | 560 | 318.6398 | 0.795 | 0.3874 | 1.1097 | 0.795 | 0.7520 | 0.3045 | 0.0727 |
| 322.4844 | 81.0 | 567 | 318.6250 | 0.795 | 0.3860 | 1.1612 | 0.795 | 0.7542 | 0.3079 | 0.0727 |
| 322.4844 | 82.0 | 574 | 318.5269 | 0.795 | 0.3825 | 1.2812 | 0.795 | 0.7451 | 0.2723 | 0.0737 |
| 322.4844 | 83.0 | 581 | 318.5790 | 0.795 | 0.3846 | 1.1575 | 0.795 | 0.7455 | 0.2984 | 0.0723 |
| 322.4844 | 84.0 | 588 | 318.4343 | 0.795 | 0.3826 | 1.2088 | 0.795 | 0.7532 | 0.2852 | 0.0746 |
| 322.4844 | 85.0 | 595 | 318.3853 | 0.795 | 0.3792 | 1.2784 | 0.795 | 0.7456 | 0.3003 | 0.0729 |
| 322.4844 | 86.0 | 602 | 318.5143 | 0.805 | 0.3854 | 1.1745 | 0.805 | 0.7636 | 0.3071 | 0.0705 |
| 322.4844 | 87.0 | 609 | 318.3533 | 0.805 | 0.3763 | 1.1579 | 0.805 | 0.7679 | 0.2805 | 0.0694 |
| 322.4844 | 88.0 | 616 | 318.4745 | 0.795 | 0.3860 | 1.0964 | 0.795 | 0.7539 | 0.2952 | 0.0712 |
| 322.4844 | 89.0 | 623 | 318.4909 | 0.805 | 0.3829 | 1.1544 | 0.805 | 0.7673 | 0.3035 | 0.0700 |
| 322.4844 | 90.0 | 630 | 318.4910 | 0.8 | 0.3828 | 1.1537 | 0.8000 | 0.7497 | 0.2730 | 0.0717 |
| 322.4844 | 91.0 | 637 | 318.5176 | 0.8 | 0.3855 | 1.1613 | 0.8000 | 0.7552 | 0.2815 | 0.0718 |
| 322.4844 | 92.0 | 644 | 318.4100 | 0.795 | 0.3810 | 1.2215 | 0.795 | 0.7532 | 0.2696 | 0.0731 |
| 322.4844 | 93.0 | 651 | 318.3500 | 0.805 | 0.3765 | 1.2181 | 0.805 | 0.7702 | 0.2790 | 0.0705 |
| 322.4844 | 94.0 | 658 | 318.3257 | 0.805 | 0.3785 | 1.2218 | 0.805 | 0.7678 | 0.3114 | 0.0704 |
| 322.4844 | 95.0 | 665 | 318.3990 | 0.8 | 0.3823 | 1.1485 | 0.8000 | 0.7585 | 0.2901 | 0.0710 |
| 322.4844 | 96.0 | 672 | 318.5006 | 0.81 | 0.3862 | 1.1518 | 0.81 | 0.7724 | 0.2925 | 0.0698 |
| 322.4844 | 97.0 | 679 | 318.3142 | 0.8 | 0.3780 | 1.1608 | 0.8000 | 0.7557 | 0.2916 | 0.0716 |
| 322.4844 | 98.0 | 686 | 318.3767 | 0.795 | 0.3819 | 1.2208 | 0.795 | 0.7526 | 0.2764 | 0.0731 |
| 322.4844 | 99.0 | 693 | 318.4233 | 0.8 | 0.3810 | 1.1532 | 0.8000 | 0.7557 | 0.2786 | 0.0706 |
| 322.4844 | 100.0 | 700 | 318.4368 | 0.805 | 0.3825 | 1.1523 | 0.805 | 0.7673 | 0.2987 | 0.0702 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.2.0.dev20231112+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
[
"adve",
"email",
"form",
"letter",
"memo",
"news",
"note",
"report",
"resume",
"scientific"
]
hkivancoral/hushem_5x_deit_small_sgd_00001_fold4
# hushem_5x_deit_small_sgd_00001_fold4
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4657
- Accuracy: 0.2857
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5422 | 1.0 | 28 | 1.4834 | 0.2619 |
| 1.6237 | 2.0 | 56 | 1.4826 | 0.2619 |
| 1.5562 | 3.0 | 84 | 1.4817 | 0.2619 |
| 1.5833 | 4.0 | 112 | 1.4810 | 0.2619 |
| 1.5467 | 5.0 | 140 | 1.4803 | 0.2619 |
| 1.5372 | 6.0 | 168 | 1.4795 | 0.2857 |
| 1.5683 | 7.0 | 196 | 1.4788 | 0.2857 |
| 1.5057 | 8.0 | 224 | 1.4781 | 0.2857 |
| 1.5994 | 9.0 | 252 | 1.4774 | 0.2857 |
| 1.5076 | 10.0 | 280 | 1.4768 | 0.2857 |
| 1.5466 | 11.0 | 308 | 1.4762 | 0.2857 |
| 1.544 | 12.0 | 336 | 1.4756 | 0.2857 |
| 1.5866 | 13.0 | 364 | 1.4750 | 0.2857 |
| 1.5384 | 14.0 | 392 | 1.4744 | 0.2857 |
| 1.6111 | 15.0 | 420 | 1.4739 | 0.2857 |
| 1.5625 | 16.0 | 448 | 1.4733 | 0.2857 |
| 1.547 | 17.0 | 476 | 1.4728 | 0.2857 |
| 1.5362 | 18.0 | 504 | 1.4723 | 0.2857 |
| 1.5318 | 19.0 | 532 | 1.4718 | 0.2857 |
| 1.5453 | 20.0 | 560 | 1.4714 | 0.2857 |
| 1.5434 | 21.0 | 588 | 1.4709 | 0.2857 |
| 1.548 | 22.0 | 616 | 1.4705 | 0.2857 |
| 1.5105 | 23.0 | 644 | 1.4701 | 0.2857 |
| 1.5176 | 24.0 | 672 | 1.4697 | 0.2857 |
| 1.5194 | 25.0 | 700 | 1.4694 | 0.2857 |
| 1.5543 | 26.0 | 728 | 1.4690 | 0.2857 |
| 1.5727 | 27.0 | 756 | 1.4687 | 0.2857 |
| 1.5476 | 28.0 | 784 | 1.4684 | 0.2857 |
| 1.5163 | 29.0 | 812 | 1.4681 | 0.2857 |
| 1.4767 | 30.0 | 840 | 1.4678 | 0.2857 |
| 1.5623 | 31.0 | 868 | 1.4676 | 0.2857 |
| 1.4924 | 32.0 | 896 | 1.4674 | 0.2857 |
| 1.5673 | 33.0 | 924 | 1.4672 | 0.2857 |
| 1.4842 | 34.0 | 952 | 1.4670 | 0.2857 |
| 1.4908 | 35.0 | 980 | 1.4668 | 0.2857 |
| 1.5184 | 36.0 | 1008 | 1.4666 | 0.2857 |
| 1.5315 | 37.0 | 1036 | 1.4664 | 0.2857 |
| 1.4892 | 38.0 | 1064 | 1.4663 | 0.2857 |
| 1.5241 | 39.0 | 1092 | 1.4662 | 0.2857 |
| 1.5587 | 40.0 | 1120 | 1.4661 | 0.2857 |
| 1.5867 | 41.0 | 1148 | 1.4660 | 0.2857 |
| 1.5357 | 42.0 | 1176 | 1.4659 | 0.2857 |
| 1.479 | 43.0 | 1204 | 1.4659 | 0.2857 |
| 1.4798 | 44.0 | 1232 | 1.4658 | 0.2857 |
| 1.5998 | 45.0 | 1260 | 1.4658 | 0.2857 |
| 1.5487 | 46.0 | 1288 | 1.4658 | 0.2857 |
| 1.5234 | 47.0 | 1316 | 1.4657 | 0.2857 |
| 1.5142 | 48.0 | 1344 | 1.4657 | 0.2857 |
| 1.5259 | 49.0 | 1372 | 1.4657 | 0.2857 |
| 1.5344 | 50.0 | 1400 | 1.4657 | 0.2857 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
]
hkivancoral/hushem_5x_deit_small_sgd_00001_fold5
# hushem_5x_deit_small_sgd_00001_fold5
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4968
- Accuracy: 0.2439
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
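The Adam optimizer listed above maintains exponential moving averages of each gradient and its square, applies bias correction, and scales the step by the inverse root of the second moment. A single-parameter sketch of one update, for illustration only:

```python
import math

def adam_step(param, grad, m, v, t, lr=1e-5, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter; returns (param, m, v)."""
    m = beta1 * m + (1 - beta1) * grad       # first-moment EMA
    v = beta2 * v + (1 - beta2) * grad ** 2  # second-moment EMA
    m_hat = m / (1 - beta1 ** t)             # bias correction at step t (1-indexed)
    v_hat = v / (1 - beta2 ** t)
    param -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v
```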
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5975 | 1.0 | 28 | 1.5197 | 0.2195 |
| 1.5191 | 2.0 | 56 | 1.5186 | 0.2195 |
| 1.5652 | 3.0 | 84 | 1.5176 | 0.2195 |
| 1.5368 | 4.0 | 112 | 1.5166 | 0.2195 |
| 1.5533 | 5.0 | 140 | 1.5156 | 0.2195 |
| 1.5934 | 6.0 | 168 | 1.5147 | 0.2195 |
| 1.5997 | 7.0 | 196 | 1.5138 | 0.2195 |
| 1.543 | 8.0 | 224 | 1.5129 | 0.2195 |
| 1.5785 | 9.0 | 252 | 1.5120 | 0.2195 |
| 1.5476 | 10.0 | 280 | 1.5112 | 0.2195 |
| 1.5374 | 11.0 | 308 | 1.5104 | 0.2195 |
| 1.5776 | 12.0 | 336 | 1.5096 | 0.2195 |
| 1.552 | 13.0 | 364 | 1.5088 | 0.2195 |
| 1.5084 | 14.0 | 392 | 1.5081 | 0.2195 |
| 1.5475 | 15.0 | 420 | 1.5073 | 0.2195 |
| 1.5527 | 16.0 | 448 | 1.5067 | 0.2195 |
| 1.5461 | 17.0 | 476 | 1.5060 | 0.2195 |
| 1.553 | 18.0 | 504 | 1.5054 | 0.2195 |
| 1.5466 | 19.0 | 532 | 1.5047 | 0.2195 |
| 1.5068 | 20.0 | 560 | 1.5041 | 0.2195 |
| 1.5792 | 21.0 | 588 | 1.5036 | 0.2195 |
| 1.5408 | 22.0 | 616 | 1.5030 | 0.2195 |
| 1.4869 | 23.0 | 644 | 1.5025 | 0.2195 |
| 1.5203 | 24.0 | 672 | 1.5020 | 0.2439 |
| 1.5205 | 25.0 | 700 | 1.5016 | 0.2439 |
| 1.5334 | 26.0 | 728 | 1.5011 | 0.2439 |
| 1.5195 | 27.0 | 756 | 1.5007 | 0.2439 |
| 1.555 | 28.0 | 784 | 1.5003 | 0.2439 |
| 1.5231 | 29.0 | 812 | 1.4999 | 0.2439 |
| 1.5521 | 30.0 | 840 | 1.4996 | 0.2439 |
| 1.5405 | 31.0 | 868 | 1.4992 | 0.2439 |
| 1.5223 | 32.0 | 896 | 1.4989 | 0.2439 |
| 1.533 | 33.0 | 924 | 1.4986 | 0.2439 |
| 1.5569 | 34.0 | 952 | 1.4984 | 0.2439 |
| 1.5415 | 35.0 | 980 | 1.4981 | 0.2439 |
| 1.5242 | 36.0 | 1008 | 1.4979 | 0.2439 |
| 1.5342 | 37.0 | 1036 | 1.4977 | 0.2439 |
| 1.51 | 38.0 | 1064 | 1.4975 | 0.2439 |
| 1.4915 | 39.0 | 1092 | 1.4974 | 0.2439 |
| 1.533 | 40.0 | 1120 | 1.4972 | 0.2439 |
| 1.559 | 41.0 | 1148 | 1.4971 | 0.2439 |
| 1.5496 | 42.0 | 1176 | 1.4970 | 0.2439 |
| 1.5368 | 43.0 | 1204 | 1.4969 | 0.2439 |
| 1.5602 | 44.0 | 1232 | 1.4969 | 0.2439 |
| 1.5291 | 45.0 | 1260 | 1.4968 | 0.2439 |
| 1.5316 | 46.0 | 1288 | 1.4968 | 0.2439 |
| 1.5518 | 47.0 | 1316 | 1.4968 | 0.2439 |
| 1.5141 | 48.0 | 1344 | 1.4968 | 0.2439 |
| 1.515 | 49.0 | 1372 | 1.4968 | 0.2439 |
| 1.544 | 50.0 | 1400 | 1.4968 | 0.2439 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
]
jordyvl/dit-base_tobacco-small_tobacco3482_kd
# dit-base_tobacco-small_tobacco3482_kd
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5105
- Accuracy: 0.815
- Brier Loss: 0.2790
- Nll: 1.4944
- F1 Micro: 0.815
- F1 Macro: 0.7942
- Ece: 0.1287
- Aurc: 0.0524
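The `kd` suffix marks this as a knowledge-distillation run (presumably with the dit-base tobacco model as teacher and this vit-small as student). The card does not state the distillation loss, so the sketch below uses the common Hinton-style formulation; the temperature `T` and mixing weight `alpha` are illustrative values, not confirmed hyperparameters.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, label, T=2.5, alpha=0.5):
    """Hinton-style KD: alpha * CE(student, label) + (1 - alpha) * T^2 * KL(teacher_T || student_T)."""
    ce = -math.log(softmax(student_logits)[label])
    p_s = softmax(student_logits, T)
    p_t = softmax(teacher_logits, T)
    kl = sum(pt * math.log(pt / ps) for pt, ps in zip(p_t, p_s))
    return alpha * ce + (1 - alpha) * (T ** 2) * kl
```

When student and teacher logits agree exactly, the KL term vanishes and only the weighted cross-entropy remains.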
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
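The step counts in the results table follow from the batch size: 7 optimizer steps per epoch at batch size 128 imply a training split of at most 7 x 128 = 896 examples, and 100 epochs give the 700 total steps shown. A quick sketch (the split size of 800 below is hypothetical, chosen only to be consistent with 7 steps per epoch):

```python
import math

def steps_per_epoch(n_examples, batch_size):
    """Optimizer steps per epoch, assuming the last partial batch is kept."""
    return math.ceil(n_examples / batch_size)

# 800 is a hypothetical train-split size consistent with the table's 7 steps/epoch.
total_steps = steps_per_epoch(800, 128) * 100  # 7 steps/epoch * 100 epochs = 700
```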
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 2.2378 | 0.17 | 0.8975 | 4.4036 | 0.17 | 0.1418 | 0.2519 | 0.8078 |
| No log | 2.0 | 14 | 1.7484 | 0.38 | 0.7667 | 4.1809 | 0.38 | 0.2513 | 0.3132 | 0.4423 |
| No log | 3.0 | 21 | 1.1417 | 0.55 | 0.5683 | 1.8669 | 0.55 | 0.4592 | 0.2551 | 0.2287 |
| No log | 4.0 | 28 | 0.8020 | 0.685 | 0.4327 | 1.7476 | 0.685 | 0.6393 | 0.2274 | 0.1292 |
| No log | 5.0 | 35 | 0.8347 | 0.645 | 0.4502 | 1.6809 | 0.645 | 0.6306 | 0.1939 | 0.1346 |
| No log | 6.0 | 42 | 0.6546 | 0.735 | 0.3657 | 1.5210 | 0.735 | 0.7191 | 0.1995 | 0.0901 |
| No log | 7.0 | 49 | 0.6447 | 0.76 | 0.3375 | 1.5117 | 0.76 | 0.7450 | 0.1781 | 0.0875 |
| No log | 8.0 | 56 | 0.7089 | 0.775 | 0.3650 | 1.4823 | 0.775 | 0.7554 | 0.2026 | 0.0971 |
| No log | 9.0 | 63 | 0.5721 | 0.785 | 0.3083 | 1.4053 | 0.785 | 0.7633 | 0.1647 | 0.0651 |
| No log | 10.0 | 70 | 0.5953 | 0.795 | 0.3130 | 1.4301 | 0.795 | 0.7971 | 0.1661 | 0.0701 |
| No log | 11.0 | 77 | 0.6352 | 0.79 | 0.3131 | 1.5018 | 0.79 | 0.7607 | 0.1503 | 0.0789 |
| No log | 12.0 | 84 | 0.7999 | 0.735 | 0.3916 | 1.7141 | 0.735 | 0.7065 | 0.2143 | 0.1178 |
| No log | 13.0 | 91 | 0.6602 | 0.8 | 0.3099 | 1.8022 | 0.8000 | 0.7746 | 0.1709 | 0.0805 |
| No log | 14.0 | 98 | 0.6529 | 0.785 | 0.3298 | 1.3607 | 0.785 | 0.7658 | 0.1771 | 0.0858 |
| No log | 15.0 | 105 | 0.6170 | 0.8 | 0.3098 | 1.3676 | 0.8000 | 0.7838 | 0.1630 | 0.0723 |
| No log | 16.0 | 112 | 0.6484 | 0.775 | 0.3342 | 1.2826 | 0.775 | 0.7752 | 0.1837 | 0.0827 |
| No log | 17.0 | 119 | 0.5817 | 0.78 | 0.3019 | 1.6577 | 0.78 | 0.7730 | 0.1566 | 0.0582 |
| No log | 18.0 | 126 | 0.6528 | 0.78 | 0.3376 | 1.5044 | 0.78 | 0.7788 | 0.1687 | 0.0768 |
| No log | 19.0 | 133 | 0.6241 | 0.805 | 0.3038 | 1.3465 | 0.805 | 0.7796 | 0.1498 | 0.0759 |
| No log | 20.0 | 140 | 0.5610 | 0.79 | 0.2948 | 1.4395 | 0.79 | 0.7716 | 0.1515 | 0.0708 |
| No log | 21.0 | 147 | 0.6829 | 0.78 | 0.3241 | 1.3252 | 0.78 | 0.7687 | 0.1782 | 0.0852 |
| No log | 22.0 | 154 | 0.5443 | 0.795 | 0.3117 | 1.4374 | 0.795 | 0.7822 | 0.1730 | 0.0679 |
| No log | 23.0 | 161 | 0.6968 | 0.78 | 0.3474 | 1.7830 | 0.78 | 0.7880 | 0.1745 | 0.0813 |
| No log | 24.0 | 168 | 0.7422 | 0.75 | 0.3639 | 1.5379 | 0.75 | 0.7238 | 0.1982 | 0.0940 |
| No log | 25.0 | 175 | 0.5756 | 0.785 | 0.3150 | 1.4739 | 0.785 | 0.7723 | 0.1615 | 0.0675 |
| No log | 26.0 | 182 | 0.6127 | 0.805 | 0.3036 | 1.5553 | 0.805 | 0.7990 | 0.1416 | 0.0659 |
| No log | 27.0 | 189 | 0.5852 | 0.795 | 0.3104 | 1.5149 | 0.795 | 0.7808 | 0.1583 | 0.0625 |
| No log | 28.0 | 196 | 0.5421 | 0.83 | 0.2808 | 1.4320 | 0.83 | 0.8147 | 0.1475 | 0.0558 |
| No log | 29.0 | 203 | 0.5588 | 0.79 | 0.2888 | 1.5801 | 0.79 | 0.7723 | 0.1465 | 0.0580 |
| No log | 30.0 | 210 | 0.5532 | 0.795 | 0.2892 | 1.5724 | 0.795 | 0.7790 | 0.1453 | 0.0576 |
| No log | 31.0 | 217 | 0.5050 | 0.835 | 0.2685 | 1.4206 | 0.835 | 0.8221 | 0.1459 | 0.0549 |
| No log | 32.0 | 224 | 0.5067 | 0.82 | 0.2762 | 1.4460 | 0.82 | 0.8017 | 0.1494 | 0.0538 |
| No log | 33.0 | 231 | 0.5200 | 0.815 | 0.2798 | 1.5300 | 0.815 | 0.7973 | 0.1442 | 0.0541 |
| No log | 34.0 | 238 | 0.5110 | 0.825 | 0.2802 | 1.6009 | 0.825 | 0.8095 | 0.1462 | 0.0537 |
| No log | 35.0 | 245 | 0.5125 | 0.815 | 0.2804 | 1.5209 | 0.815 | 0.8013 | 0.1555 | 0.0540 |
| No log | 36.0 | 252 | 0.4981 | 0.82 | 0.2728 | 1.4498 | 0.82 | 0.8032 | 0.1557 | 0.0522 |
| No log | 37.0 | 259 | 0.5196 | 0.82 | 0.2796 | 1.5297 | 0.82 | 0.8057 | 0.1396 | 0.0523 |
| No log | 38.0 | 266 | 0.5034 | 0.82 | 0.2755 | 1.4577 | 0.82 | 0.8000 | 0.1449 | 0.0524 |
| No log | 39.0 | 273 | 0.5190 | 0.815 | 0.2810 | 1.5240 | 0.815 | 0.8003 | 0.1516 | 0.0533 |
| No log | 40.0 | 280 | 0.4926 | 0.83 | 0.2697 | 1.4598 | 0.83 | 0.8161 | 0.1248 | 0.0514 |
| No log | 41.0 | 287 | 0.5117 | 0.815 | 0.2808 | 1.5168 | 0.815 | 0.7965 | 0.1306 | 0.0525 |
| No log | 42.0 | 294 | 0.5034 | 0.825 | 0.2721 | 1.5263 | 0.825 | 0.8143 | 0.1389 | 0.0533 |
| No log | 43.0 | 301 | 0.5073 | 0.815 | 0.2762 | 1.5308 | 0.815 | 0.7916 | 0.1452 | 0.0511 |
| No log | 44.0 | 308 | 0.5017 | 0.825 | 0.2751 | 1.5202 | 0.825 | 0.8095 | 0.1473 | 0.0525 |
| No log | 45.0 | 315 | 0.5052 | 0.815 | 0.2783 | 1.5143 | 0.815 | 0.7965 | 0.1451 | 0.0525 |
| No log | 46.0 | 322 | 0.5043 | 0.83 | 0.2743 | 1.5172 | 0.83 | 0.8172 | 0.1481 | 0.0517 |
| No log | 47.0 | 329 | 0.5057 | 0.825 | 0.2767 | 1.5164 | 0.825 | 0.8089 | 0.1325 | 0.0520 |
| No log | 48.0 | 336 | 0.5033 | 0.82 | 0.2752 | 1.5168 | 0.82 | 0.8061 | 0.1430 | 0.0523 |
| No log | 49.0 | 343 | 0.5042 | 0.82 | 0.2755 | 1.5163 | 0.82 | 0.8061 | 0.1394 | 0.0517 |
| No log | 50.0 | 350 | 0.5068 | 0.82 | 0.2767 | 1.5153 | 0.82 | 0.8061 | 0.1471 | 0.0517 |
| No log | 51.0 | 357 | 0.5048 | 0.82 | 0.2759 | 1.5137 | 0.82 | 0.8061 | 0.1419 | 0.0519 |
| No log | 52.0 | 364 | 0.5044 | 0.825 | 0.2759 | 1.5112 | 0.825 | 0.8064 | 0.1342 | 0.0518 |
| No log | 53.0 | 371 | 0.5046 | 0.825 | 0.2756 | 1.5122 | 0.825 | 0.8116 | 0.1388 | 0.0514 |
| No log | 54.0 | 378 | 0.5078 | 0.815 | 0.2777 | 1.5111 | 0.815 | 0.7984 | 0.1442 | 0.0519 |
| No log | 55.0 | 385 | 0.5059 | 0.815 | 0.2767 | 1.5109 | 0.815 | 0.7984 | 0.1351 | 0.0518 |
| No log | 56.0 | 392 | 0.5087 | 0.82 | 0.2779 | 1.5089 | 0.82 | 0.8061 | 0.1391 | 0.0518 |
| No log | 57.0 | 399 | 0.5072 | 0.82 | 0.2771 | 1.5094 | 0.82 | 0.8061 | 0.1339 | 0.0517 |
| No log | 58.0 | 406 | 0.5079 | 0.82 | 0.2776 | 1.5074 | 0.82 | 0.8061 | 0.1366 | 0.0518 |
| No log | 59.0 | 413 | 0.5072 | 0.82 | 0.2771 | 1.5072 | 0.82 | 0.8061 | 0.1308 | 0.0518 |
| No log | 60.0 | 420 | 0.5084 | 0.825 | 0.2776 | 1.5059 | 0.825 | 0.8116 | 0.1303 | 0.0520 |
| No log | 61.0 | 427 | 0.5074 | 0.82 | 0.2772 | 1.5066 | 0.82 | 0.8038 | 0.1244 | 0.0520 |
| No log | 62.0 | 434 | 0.5090 | 0.82 | 0.2781 | 1.5053 | 0.82 | 0.8061 | 0.1367 | 0.0519 |
| No log | 63.0 | 441 | 0.5094 | 0.825 | 0.2779 | 1.5050 | 0.825 | 0.8116 | 0.1305 | 0.0520 |
| No log | 64.0 | 448 | 0.5098 | 0.82 | 0.2782 | 1.5049 | 0.82 | 0.8038 | 0.1314 | 0.0520 |
| No log | 65.0 | 455 | 0.5086 | 0.82 | 0.2780 | 1.5038 | 0.82 | 0.8038 | 0.1249 | 0.0520 |
| No log | 66.0 | 462 | 0.5103 | 0.82 | 0.2787 | 1.5023 | 0.82 | 0.8038 | 0.1222 | 0.0522 |
| No log | 67.0 | 469 | 0.5095 | 0.82 | 0.2782 | 1.5025 | 0.82 | 0.8038 | 0.1228 | 0.0521 |
| No log | 68.0 | 476 | 0.5095 | 0.82 | 0.2783 | 1.5027 | 0.82 | 0.8038 | 0.1330 | 0.0522 |
| No log | 69.0 | 483 | 0.5097 | 0.82 | 0.2785 | 1.5015 | 0.82 | 0.8038 | 0.1228 | 0.0521 |
| No log | 70.0 | 490 | 0.5109 | 0.82 | 0.2788 | 1.5005 | 0.82 | 0.8038 | 0.1322 | 0.0520 |
| No log | 71.0 | 497 | 0.5096 | 0.82 | 0.2784 | 1.5012 | 0.82 | 0.8038 | 0.1320 | 0.0522 |
| 0.1366 | 72.0 | 504 | 0.5095 | 0.82 | 0.2784 | 1.5011 | 0.82 | 0.8038 | 0.1219 | 0.0522 |
| 0.1366 | 73.0 | 511 | 0.5109 | 0.82 | 0.2791 | 1.4998 | 0.82 | 0.8038 | 0.1249 | 0.0523 |
| 0.1366 | 74.0 | 518 | 0.5100 | 0.82 | 0.2786 | 1.5000 | 0.82 | 0.8038 | 0.1219 | 0.0521 |
| 0.1366 | 75.0 | 525 | 0.5096 | 0.82 | 0.2784 | 1.5000 | 0.82 | 0.8038 | 0.1238 | 0.0521 |
| 0.1366 | 76.0 | 532 | 0.5104 | 0.82 | 0.2787 | 1.4988 | 0.82 | 0.8038 | 0.1341 | 0.0523 |
| 0.1366 | 77.0 | 539 | 0.5105 | 0.82 | 0.2788 | 1.4985 | 0.82 | 0.8038 | 0.1340 | 0.0521 |
| 0.1366 | 78.0 | 546 | 0.5103 | 0.82 | 0.2788 | 1.4985 | 0.82 | 0.8038 | 0.1338 | 0.0520 |
| 0.1366 | 79.0 | 553 | 0.5105 | 0.82 | 0.2788 | 1.4983 | 0.82 | 0.8038 | 0.1317 | 0.0522 |
| 0.1366 | 80.0 | 560 | 0.5106 | 0.82 | 0.2789 | 1.4977 | 0.82 | 0.8038 | 0.1337 | 0.0523 |
| 0.1366 | 81.0 | 567 | 0.5108 | 0.82 | 0.2790 | 1.4971 | 0.82 | 0.8038 | 0.1339 | 0.0523 |
| 0.1366 | 82.0 | 574 | 0.5107 | 0.82 | 0.2790 | 1.4970 | 0.82 | 0.8038 | 0.1317 | 0.0521 |
| 0.1366 | 83.0 | 581 | 0.5108 | 0.82 | 0.2790 | 1.4968 | 0.82 | 0.8038 | 0.1339 | 0.0522 |
| 0.1366 | 84.0 | 588 | 0.5105 | 0.82 | 0.2789 | 1.4966 | 0.82 | 0.8038 | 0.1340 | 0.0522 |
| 0.1366 | 85.0 | 595 | 0.5106 | 0.82 | 0.2789 | 1.4961 | 0.82 | 0.8038 | 0.1338 | 0.0523 |
| 0.1366 | 86.0 | 602 | 0.5109 | 0.82 | 0.2790 | 1.4958 | 0.82 | 0.8038 | 0.1336 | 0.0524 |
| 0.1366 | 87.0 | 609 | 0.5105 | 0.815 | 0.2789 | 1.4956 | 0.815 | 0.7942 | 0.1290 | 0.0525 |
| 0.1366 | 88.0 | 616 | 0.5105 | 0.815 | 0.2790 | 1.4954 | 0.815 | 0.7942 | 0.1290 | 0.0525 |
| 0.1366 | 89.0 | 623 | 0.5106 | 0.815 | 0.2790 | 1.4952 | 0.815 | 0.7942 | 0.1290 | 0.0526 |
| 0.1366 | 90.0 | 630 | 0.5106 | 0.82 | 0.2790 | 1.4951 | 0.82 | 0.8038 | 0.1338 | 0.0523 |
| 0.1366 | 91.0 | 637 | 0.5107 | 0.815 | 0.2790 | 1.4949 | 0.815 | 0.7942 | 0.1289 | 0.0526 |
| 0.1366 | 92.0 | 644 | 0.5107 | 0.815 | 0.2790 | 1.4947 | 0.815 | 0.7942 | 0.1289 | 0.0526 |
| 0.1366 | 93.0 | 651 | 0.5107 | 0.815 | 0.2790 | 1.4947 | 0.815 | 0.7942 | 0.1289 | 0.0525 |
| 0.1366 | 94.0 | 658 | 0.5107 | 0.82 | 0.2790 | 1.4946 | 0.82 | 0.8038 | 0.1335 | 0.0523 |
| 0.1366 | 95.0 | 665 | 0.5106 | 0.82 | 0.2790 | 1.4946 | 0.82 | 0.8038 | 0.1335 | 0.0523 |
| 0.1366 | 96.0 | 672 | 0.5105 | 0.815 | 0.2790 | 1.4945 | 0.815 | 0.7942 | 0.1289 | 0.0524 |
| 0.1366 | 97.0 | 679 | 0.5105 | 0.815 | 0.2790 | 1.4945 | 0.815 | 0.7942 | 0.1289 | 0.0524 |
| 0.1366 | 98.0 | 686 | 0.5105 | 0.815 | 0.2790 | 1.4944 | 0.815 | 0.7942 | 0.1289 | 0.0524 |
| 0.1366 | 99.0 | 693 | 0.5105 | 0.815 | 0.2790 | 1.4944 | 0.815 | 0.7942 | 0.1287 | 0.0524 |
| 0.1366 | 100.0 | 700 | 0.5105 | 0.815 | 0.2790 | 1.4944 | 0.815 | 0.7942 | 0.1287 | 0.0524 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.2.0.dev20231112+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
[
"adve",
"email",
"form",
"letter",
"memo",
"news",
"note",
"report",
"resume",
"scientific"
]
hkivancoral/hushem_5x_deit_small_sgd_0001_fold1
# hushem_5x_deit_small_sgd_0001_fold1
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3695
- Accuracy: 0.3111
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5195 | 1.0 | 27 | 1.4999 | 0.2889 |
| 1.4914 | 2.0 | 54 | 1.4892 | 0.2889 |
| 1.5108 | 3.0 | 81 | 1.4789 | 0.2889 |
| 1.5345 | 4.0 | 108 | 1.4698 | 0.2667 |
| 1.4684 | 5.0 | 135 | 1.4617 | 0.2667 |
| 1.4525 | 6.0 | 162 | 1.4534 | 0.2667 |
| 1.4298 | 7.0 | 189 | 1.4465 | 0.2667 |
| 1.4569 | 8.0 | 216 | 1.4397 | 0.2444 |
| 1.4283 | 9.0 | 243 | 1.4337 | 0.2444 |
| 1.4203 | 10.0 | 270 | 1.4280 | 0.2444 |
| 1.3871 | 11.0 | 297 | 1.4228 | 0.2444 |
| 1.4156 | 12.0 | 324 | 1.4180 | 0.2444 |
| 1.4346 | 13.0 | 351 | 1.4134 | 0.2444 |
| 1.4076 | 14.0 | 378 | 1.4093 | 0.2444 |
| 1.425 | 15.0 | 405 | 1.4059 | 0.2444 |
| 1.4406 | 16.0 | 432 | 1.4025 | 0.2444 |
| 1.4069 | 17.0 | 459 | 1.3996 | 0.2444 |
| 1.3779 | 18.0 | 486 | 1.3968 | 0.2444 |
| 1.3991 | 19.0 | 513 | 1.3941 | 0.2667 |
| 1.3962 | 20.0 | 540 | 1.3918 | 0.2667 |
| 1.3954 | 21.0 | 567 | 1.3897 | 0.2889 |
| 1.3886 | 22.0 | 594 | 1.3877 | 0.2889 |
| 1.3775 | 23.0 | 621 | 1.3858 | 0.2889 |
| 1.3714 | 24.0 | 648 | 1.3842 | 0.2889 |
| 1.4056 | 25.0 | 675 | 1.3826 | 0.2889 |
| 1.4026 | 26.0 | 702 | 1.3812 | 0.2889 |
| 1.359 | 27.0 | 729 | 1.3799 | 0.2889 |
| 1.3709 | 28.0 | 756 | 1.3787 | 0.2889 |
| 1.3667 | 29.0 | 783 | 1.3776 | 0.2889 |
| 1.3672 | 30.0 | 810 | 1.3766 | 0.2889 |
| 1.3762 | 31.0 | 837 | 1.3757 | 0.2889 |
| 1.3384 | 32.0 | 864 | 1.3749 | 0.2889 |
| 1.3698 | 33.0 | 891 | 1.3742 | 0.2889 |
| 1.3636 | 34.0 | 918 | 1.3735 | 0.3111 |
| 1.3439 | 35.0 | 945 | 1.3729 | 0.3111 |
| 1.3571 | 36.0 | 972 | 1.3723 | 0.3111 |
| 1.3688 | 37.0 | 999 | 1.3718 | 0.3111 |
| 1.3527 | 38.0 | 1026 | 1.3714 | 0.3111 |
| 1.3641 | 39.0 | 1053 | 1.3710 | 0.3111 |
| 1.3538 | 40.0 | 1080 | 1.3707 | 0.3111 |
| 1.3693 | 41.0 | 1107 | 1.3704 | 0.3111 |
| 1.3789 | 42.0 | 1134 | 1.3701 | 0.3111 |
| 1.3917 | 43.0 | 1161 | 1.3699 | 0.3111 |
| 1.3524 | 44.0 | 1188 | 1.3698 | 0.3111 |
| 1.367 | 45.0 | 1215 | 1.3696 | 0.3111 |
| 1.3553 | 46.0 | 1242 | 1.3696 | 0.3111 |
| 1.3523 | 47.0 | 1269 | 1.3695 | 0.3111 |
| 1.3646 | 48.0 | 1296 | 1.3695 | 0.3111 |
| 1.3891 | 49.0 | 1323 | 1.3695 | 0.3111 |
| 1.3396 | 50.0 | 1350 | 1.3695 | 0.3111 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_5x_deit_small_sgd_0001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_small_sgd_0001_fold2
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3896
- Accuracy: 0.3333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5183 | 1.0 | 27 | 1.5038 | 0.1778 |
| 1.502 | 2.0 | 54 | 1.4946 | 0.2 |
| 1.5109 | 3.0 | 81 | 1.4859 | 0.2222 |
| 1.5446 | 4.0 | 108 | 1.4781 | 0.2444 |
| 1.4687 | 5.0 | 135 | 1.4710 | 0.2444 |
| 1.4554 | 6.0 | 162 | 1.4641 | 0.2889 |
| 1.4113 | 7.0 | 189 | 1.4582 | 0.2889 |
| 1.4434 | 8.0 | 216 | 1.4525 | 0.2667 |
| 1.4243 | 9.0 | 243 | 1.4473 | 0.2667 |
| 1.4268 | 10.0 | 270 | 1.4425 | 0.2889 |
| 1.386 | 11.0 | 297 | 1.4382 | 0.2889 |
| 1.4235 | 12.0 | 324 | 1.4341 | 0.2667 |
| 1.4228 | 13.0 | 351 | 1.4304 | 0.2667 |
| 1.4091 | 14.0 | 378 | 1.4269 | 0.2889 |
| 1.4135 | 15.0 | 405 | 1.4239 | 0.2667 |
| 1.4228 | 16.0 | 432 | 1.4210 | 0.2889 |
| 1.4188 | 17.0 | 459 | 1.4184 | 0.2889 |
| 1.3824 | 18.0 | 486 | 1.4159 | 0.3333 |
| 1.3861 | 19.0 | 513 | 1.4136 | 0.3111 |
| 1.393 | 20.0 | 540 | 1.4115 | 0.3111 |
| 1.4051 | 21.0 | 567 | 1.4096 | 0.3111 |
| 1.373 | 22.0 | 594 | 1.4077 | 0.3333 |
| 1.3737 | 23.0 | 621 | 1.4060 | 0.3333 |
| 1.3668 | 24.0 | 648 | 1.4044 | 0.3556 |
| 1.362 | 25.0 | 675 | 1.4030 | 0.3556 |
| 1.3931 | 26.0 | 702 | 1.4016 | 0.3556 |
| 1.3504 | 27.0 | 729 | 1.4003 | 0.3556 |
| 1.3706 | 28.0 | 756 | 1.3992 | 0.3556 |
| 1.359 | 29.0 | 783 | 1.3981 | 0.3556 |
| 1.3774 | 30.0 | 810 | 1.3972 | 0.3556 |
| 1.3678 | 31.0 | 837 | 1.3963 | 0.3556 |
| 1.3418 | 32.0 | 864 | 1.3955 | 0.3556 |
| 1.3702 | 33.0 | 891 | 1.3947 | 0.3556 |
| 1.3589 | 34.0 | 918 | 1.3940 | 0.3556 |
| 1.3212 | 35.0 | 945 | 1.3933 | 0.3333 |
| 1.3648 | 36.0 | 972 | 1.3928 | 0.3333 |
| 1.3509 | 37.0 | 999 | 1.3922 | 0.3333 |
| 1.3506 | 38.0 | 1026 | 1.3917 | 0.3333 |
| 1.3673 | 39.0 | 1053 | 1.3913 | 0.3333 |
| 1.3657 | 40.0 | 1080 | 1.3910 | 0.3333 |
| 1.3651 | 41.0 | 1107 | 1.3906 | 0.3333 |
| 1.3688 | 42.0 | 1134 | 1.3904 | 0.3333 |
| 1.3871 | 43.0 | 1161 | 1.3901 | 0.3333 |
| 1.3307 | 44.0 | 1188 | 1.3899 | 0.3333 |
| 1.3505 | 45.0 | 1215 | 1.3898 | 0.3333 |
| 1.3367 | 46.0 | 1242 | 1.3897 | 0.3333 |
| 1.3605 | 47.0 | 1269 | 1.3896 | 0.3333 |
| 1.3556 | 48.0 | 1296 | 1.3896 | 0.3333 |
| 1.3876 | 49.0 | 1323 | 1.3896 | 0.3333 |
| 1.3357 | 50.0 | 1350 | 1.3896 | 0.3333 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_5x_deit_small_sgd_0001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_small_sgd_0001_fold3
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4264
- Accuracy: 0.2093
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5059 | 1.0 | 28 | 1.5906 | 0.2093 |
| 1.4969 | 2.0 | 56 | 1.5795 | 0.2093 |
| 1.4242 | 3.0 | 84 | 1.5692 | 0.2093 |
| 1.4628 | 4.0 | 112 | 1.5599 | 0.2093 |
| 1.5851 | 5.0 | 140 | 1.5506 | 0.2093 |
| 1.4881 | 6.0 | 168 | 1.5418 | 0.2093 |
| 1.4744 | 7.0 | 196 | 1.5338 | 0.2326 |
| 1.427 | 8.0 | 224 | 1.5266 | 0.2558 |
| 1.4274 | 9.0 | 252 | 1.5192 | 0.2558 |
| 1.4168 | 10.0 | 280 | 1.5125 | 0.2558 |
| 1.4418 | 11.0 | 308 | 1.5064 | 0.2558 |
| 1.4361 | 12.0 | 336 | 1.5007 | 0.2558 |
| 1.4139 | 13.0 | 364 | 1.4950 | 0.2558 |
| 1.3932 | 14.0 | 392 | 1.4898 | 0.2558 |
| 1.4041 | 15.0 | 420 | 1.4850 | 0.2326 |
| 1.3745 | 16.0 | 448 | 1.4806 | 0.2326 |
| 1.3653 | 17.0 | 476 | 1.4764 | 0.2093 |
| 1.3841 | 18.0 | 504 | 1.4723 | 0.2093 |
| 1.3735 | 19.0 | 532 | 1.4687 | 0.2093 |
| 1.3391 | 20.0 | 560 | 1.4653 | 0.2093 |
| 1.3879 | 21.0 | 588 | 1.4620 | 0.2093 |
| 1.3861 | 22.0 | 616 | 1.4589 | 0.2093 |
| 1.3726 | 23.0 | 644 | 1.4561 | 0.2093 |
| 1.3725 | 24.0 | 672 | 1.4534 | 0.2093 |
| 1.3587 | 25.0 | 700 | 1.4508 | 0.2093 |
| 1.3359 | 26.0 | 728 | 1.4485 | 0.2093 |
| 1.3627 | 27.0 | 756 | 1.4462 | 0.2326 |
| 1.3855 | 28.0 | 784 | 1.4442 | 0.2326 |
| 1.353 | 29.0 | 812 | 1.4424 | 0.2093 |
| 1.301 | 30.0 | 840 | 1.4407 | 0.2093 |
| 1.3248 | 31.0 | 868 | 1.4390 | 0.2093 |
| 1.3654 | 32.0 | 896 | 1.4375 | 0.2093 |
| 1.364 | 33.0 | 924 | 1.4361 | 0.2093 |
| 1.322 | 34.0 | 952 | 1.4347 | 0.2093 |
| 1.3619 | 35.0 | 980 | 1.4335 | 0.2093 |
| 1.3562 | 36.0 | 1008 | 1.4324 | 0.2093 |
| 1.4034 | 37.0 | 1036 | 1.4314 | 0.2093 |
| 1.3401 | 38.0 | 1064 | 1.4304 | 0.2093 |
| 1.3307 | 39.0 | 1092 | 1.4297 | 0.2093 |
| 1.3736 | 40.0 | 1120 | 1.4290 | 0.2093 |
| 1.3675 | 41.0 | 1148 | 1.4284 | 0.2093 |
| 1.3234 | 42.0 | 1176 | 1.4279 | 0.2093 |
| 1.3321 | 43.0 | 1204 | 1.4274 | 0.2093 |
| 1.3436 | 44.0 | 1232 | 1.4270 | 0.2093 |
| 1.3719 | 45.0 | 1260 | 1.4268 | 0.2093 |
| 1.3462 | 46.0 | 1288 | 1.4266 | 0.2093 |
| 1.3448 | 47.0 | 1316 | 1.4265 | 0.2093 |
| 1.3465 | 48.0 | 1344 | 1.4264 | 0.2093 |
| 1.2951 | 49.0 | 1372 | 1.4264 | 0.2093 |
| 1.3665 | 50.0 | 1400 | 1.4264 | 0.2093 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_5x_deit_small_sgd_0001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_small_sgd_0001_fold4
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3721
- Accuracy: 0.2619
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5364 | 1.0 | 28 | 1.4759 | 0.2857 |
| 1.6032 | 2.0 | 56 | 1.4677 | 0.2857 |
| 1.5235 | 3.0 | 84 | 1.4598 | 0.2857 |
| 1.5363 | 4.0 | 112 | 1.4530 | 0.2857 |
| 1.4963 | 5.0 | 140 | 1.4466 | 0.2857 |
| 1.4798 | 6.0 | 168 | 1.4404 | 0.2857 |
| 1.4963 | 7.0 | 196 | 1.4349 | 0.2857 |
| 1.441 | 8.0 | 224 | 1.4297 | 0.3095 |
| 1.5032 | 9.0 | 252 | 1.4249 | 0.3095 |
| 1.4231 | 10.0 | 280 | 1.4205 | 0.3095 |
| 1.4482 | 11.0 | 308 | 1.4164 | 0.3095 |
| 1.4398 | 12.0 | 336 | 1.4127 | 0.3095 |
| 1.468 | 13.0 | 364 | 1.4093 | 0.3095 |
| 1.4278 | 14.0 | 392 | 1.4061 | 0.3095 |
| 1.4624 | 15.0 | 420 | 1.4032 | 0.3095 |
| 1.438 | 16.0 | 448 | 1.4004 | 0.2857 |
| 1.4401 | 17.0 | 476 | 1.3979 | 0.2857 |
| 1.416 | 18.0 | 504 | 1.3956 | 0.3095 |
| 1.4033 | 19.0 | 532 | 1.3934 | 0.3333 |
| 1.4123 | 20.0 | 560 | 1.3916 | 0.3333 |
| 1.4056 | 21.0 | 588 | 1.3899 | 0.3095 |
| 1.4089 | 22.0 | 616 | 1.3883 | 0.3333 |
| 1.3801 | 23.0 | 644 | 1.3868 | 0.3333 |
| 1.3733 | 24.0 | 672 | 1.3854 | 0.3095 |
| 1.3798 | 25.0 | 700 | 1.3840 | 0.3095 |
| 1.4051 | 26.0 | 728 | 1.3828 | 0.3095 |
| 1.4017 | 27.0 | 756 | 1.3817 | 0.3095 |
| 1.4006 | 28.0 | 784 | 1.3807 | 0.3095 |
| 1.368 | 29.0 | 812 | 1.3797 | 0.3095 |
| 1.3628 | 30.0 | 840 | 1.3788 | 0.3333 |
| 1.3803 | 31.0 | 868 | 1.3780 | 0.2619 |
| 1.3495 | 32.0 | 896 | 1.3773 | 0.2619 |
| 1.393 | 33.0 | 924 | 1.3766 | 0.2619 |
| 1.3379 | 34.0 | 952 | 1.3760 | 0.2619 |
| 1.3609 | 35.0 | 980 | 1.3754 | 0.2619 |
| 1.3521 | 36.0 | 1008 | 1.3748 | 0.2619 |
| 1.3648 | 37.0 | 1036 | 1.3744 | 0.2619 |
| 1.341 | 38.0 | 1064 | 1.3740 | 0.2619 |
| 1.3689 | 39.0 | 1092 | 1.3736 | 0.2619 |
| 1.3877 | 40.0 | 1120 | 1.3733 | 0.2619 |
| 1.4062 | 41.0 | 1148 | 1.3730 | 0.2619 |
| 1.3585 | 42.0 | 1176 | 1.3727 | 0.2619 |
| 1.3339 | 43.0 | 1204 | 1.3725 | 0.2619 |
| 1.3351 | 44.0 | 1232 | 1.3724 | 0.2619 |
| 1.3978 | 45.0 | 1260 | 1.3722 | 0.2619 |
| 1.3819 | 46.0 | 1288 | 1.3721 | 0.2619 |
| 1.3511 | 47.0 | 1316 | 1.3721 | 0.2619 |
| 1.3593 | 48.0 | 1344 | 1.3721 | 0.2619 |
| 1.3691 | 49.0 | 1372 | 1.3721 | 0.2619 |
| 1.3757 | 50.0 | 1400 | 1.3721 | 0.2619 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
jordyvl/dit-base_tobacco-small_tobacco3482_kd_CEKD_t2.5_a0.5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-base_tobacco-small_tobacco3482_kd_CEKD_t2.5_a0.5
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6146
- Accuracy: 0.8
- Brier Loss: 0.2784
- Nll: 1.4268
- F1 Micro: 0.8000
- F1 Macro: 0.7846
- Ece: 0.1626
- Aurc: 0.0474
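Brier loss and ECE are both computed from the model's predicted class probabilities: Brier loss is the mean squared error against one-hot labels, and ECE buckets predictions by confidence and averages the gap between confidence and accuracy per bucket. A minimal pure-Python sketch of both (the equal-width 10-bin scheme is an assumption; the evaluation script may bin differently):

```python
def brier_loss(probs, labels):
    """Mean over examples of the squared error between probs and one-hot labels."""
    total = 0.0
    for p, y in zip(probs, labels):
        total += sum((pi - (1.0 if i == y else 0.0)) ** 2 for i, pi in enumerate(p))
    return total / len(probs)

def ece(probs, labels, n_bins=10):
    """Expected calibration error over equal-width confidence bins."""
    stats = [[0, 0.0, 0.0] for _ in range(n_bins)]  # per bin: count, sum_acc, sum_conf
    for p, y in zip(probs, labels):
        conf = max(p)
        pred = p.index(conf)
        b = min(n_bins - 1, int(conf * n_bins))  # bin index for this confidence
        stats[b][0] += 1
        stats[b][1] += 1.0 if pred == y else 0.0
        stats[b][2] += conf
    n = len(probs)
    return sum(c / n * abs(a / c - s / c) for c, a, s in stats if c)
```

A well-calibrated model has ECE near 0; the 0.16 reported here means confidence overshoots accuracy by about 16 points on average.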
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 4.1581 | 0.18 | 0.8974 | 4.2254 | 0.18 | 0.1559 | 0.2651 | 0.8061 |
| No log | 2.0 | 14 | 3.2929 | 0.355 | 0.7710 | 4.0541 | 0.3550 | 0.2167 | 0.2742 | 0.4326 |
| No log | 3.0 | 21 | 2.2155 | 0.55 | 0.5837 | 2.0462 | 0.55 | 0.4296 | 0.2323 | 0.2481 |
| No log | 4.0 | 28 | 1.5197 | 0.7 | 0.4370 | 1.7716 | 0.7 | 0.6411 | 0.2342 | 0.1327 |
| No log | 5.0 | 35 | 1.2831 | 0.715 | 0.4289 | 1.7142 | 0.715 | 0.6859 | 0.2047 | 0.1211 |
| No log | 6.0 | 42 | 1.2204 | 0.72 | 0.3989 | 1.6102 | 0.72 | 0.6999 | 0.1961 | 0.1066 |
| No log | 7.0 | 49 | 0.9767 | 0.755 | 0.3317 | 1.5919 | 0.755 | 0.7148 | 0.1724 | 0.0775 |
| No log | 8.0 | 56 | 0.8875 | 0.785 | 0.3049 | 1.4209 | 0.785 | 0.7633 | 0.1478 | 0.0716 |
| No log | 9.0 | 63 | 0.9311 | 0.79 | 0.3185 | 1.5420 | 0.79 | 0.7474 | 0.1645 | 0.0741 |
| No log | 10.0 | 70 | 0.8116 | 0.835 | 0.2672 | 1.5127 | 0.835 | 0.8232 | 0.1463 | 0.0563 |
| No log | 11.0 | 77 | 0.8315 | 0.805 | 0.3054 | 1.6275 | 0.805 | 0.7897 | 0.1695 | 0.0618 |
| No log | 12.0 | 84 | 0.7678 | 0.815 | 0.2917 | 1.5009 | 0.815 | 0.8012 | 0.1469 | 0.0542 |
| No log | 13.0 | 91 | 0.7249 | 0.81 | 0.2816 | 1.4685 | 0.81 | 0.7880 | 0.1437 | 0.0576 |
| No log | 14.0 | 98 | 0.8116 | 0.815 | 0.2894 | 1.5975 | 0.815 | 0.7941 | 0.1481 | 0.0604 |
| No log | 15.0 | 105 | 0.7985 | 0.81 | 0.3098 | 1.4721 | 0.81 | 0.7819 | 0.1646 | 0.0662 |
| No log | 16.0 | 112 | 0.6839 | 0.815 | 0.2781 | 1.4357 | 0.815 | 0.7992 | 0.1589 | 0.0529 |
| No log | 17.0 | 119 | 0.6590 | 0.82 | 0.2670 | 1.4487 | 0.82 | 0.8061 | 0.1336 | 0.0461 |
| No log | 18.0 | 126 | 0.7253 | 0.81 | 0.2938 | 1.5163 | 0.81 | 0.7951 | 0.1617 | 0.0558 |
| No log | 19.0 | 133 | 0.6935 | 0.795 | 0.2949 | 1.4516 | 0.795 | 0.7758 | 0.1736 | 0.0531 |
| No log | 20.0 | 140 | 0.6991 | 0.795 | 0.2875 | 1.3932 | 0.795 | 0.7735 | 0.1584 | 0.0519 |
| No log | 21.0 | 147 | 0.7059 | 0.815 | 0.2966 | 1.5011 | 0.815 | 0.7927 | 0.1579 | 0.0565 |
| No log | 22.0 | 154 | 0.6754 | 0.79 | 0.2896 | 1.4549 | 0.79 | 0.7742 | 0.1534 | 0.0531 |
| No log | 23.0 | 161 | 0.6981 | 0.785 | 0.2989 | 1.4261 | 0.785 | 0.7705 | 0.1490 | 0.0530 |
| No log | 24.0 | 168 | 0.6503 | 0.805 | 0.2842 | 1.4998 | 0.805 | 0.7885 | 0.1415 | 0.0512 |
| No log | 25.0 | 175 | 0.6680 | 0.79 | 0.2891 | 1.4228 | 0.79 | 0.7742 | 0.1504 | 0.0519 |
| No log | 26.0 | 182 | 0.6835 | 0.81 | 0.2948 | 1.4400 | 0.81 | 0.7944 | 0.1545 | 0.0516 |
| No log | 27.0 | 189 | 0.6495 | 0.81 | 0.2846 | 1.4433 | 0.81 | 0.7868 | 0.1552 | 0.0503 |
| No log | 28.0 | 196 | 0.6450 | 0.81 | 0.2851 | 1.4037 | 0.81 | 0.7913 | 0.1476 | 0.0498 |
| No log | 29.0 | 203 | 0.6634 | 0.815 | 0.2861 | 1.4186 | 0.815 | 0.7966 | 0.1397 | 0.0521 |
| No log | 30.0 | 210 | 0.6212 | 0.805 | 0.2739 | 1.4265 | 0.805 | 0.7902 | 0.1444 | 0.0482 |
| No log | 31.0 | 217 | 0.6271 | 0.815 | 0.2800 | 1.4392 | 0.815 | 0.7986 | 0.1370 | 0.0494 |
| No log | 32.0 | 224 | 0.6256 | 0.8 | 0.2786 | 1.3677 | 0.8000 | 0.7811 | 0.1454 | 0.0496 |
| No log | 33.0 | 231 | 0.6219 | 0.805 | 0.2779 | 1.4276 | 0.805 | 0.7857 | 0.1580 | 0.0465 |
| No log | 34.0 | 238 | 0.6203 | 0.81 | 0.2779 | 1.4392 | 0.81 | 0.7914 | 0.1275 | 0.0470 |
| No log | 35.0 | 245 | 0.6193 | 0.81 | 0.2793 | 1.4258 | 0.81 | 0.7934 | 0.1438 | 0.0483 |
| No log | 36.0 | 252 | 0.6261 | 0.83 | 0.2743 | 1.4227 | 0.83 | 0.8098 | 0.1482 | 0.0501 |
| No log | 37.0 | 259 | 0.6190 | 0.815 | 0.2776 | 1.4301 | 0.815 | 0.7977 | 0.1446 | 0.0484 |
| No log | 38.0 | 266 | 0.6210 | 0.805 | 0.2867 | 1.4958 | 0.805 | 0.7878 | 0.1477 | 0.0496 |
| No log | 39.0 | 273 | 0.5974 | 0.805 | 0.2771 | 1.5068 | 0.805 | 0.7901 | 0.1381 | 0.0476 |
| No log | 40.0 | 280 | 0.6224 | 0.8 | 0.2869 | 1.4325 | 0.8000 | 0.7869 | 0.1443 | 0.0472 |
| No log | 41.0 | 287 | 0.6178 | 0.805 | 0.2796 | 1.4316 | 0.805 | 0.7912 | 0.1454 | 0.0471 |
| No log | 42.0 | 294 | 0.6194 | 0.825 | 0.2765 | 1.5001 | 0.825 | 0.8059 | 0.1401 | 0.0474 |
| No log | 43.0 | 301 | 0.6224 | 0.805 | 0.2769 | 1.4268 | 0.805 | 0.7888 | 0.1398 | 0.0493 |
| No log | 44.0 | 308 | 0.6265 | 0.8 | 0.2819 | 1.4401 | 0.8000 | 0.7846 | 0.1422 | 0.0481 |
| No log | 45.0 | 315 | 0.6275 | 0.8 | 0.2819 | 1.4206 | 0.8000 | 0.7847 | 0.1465 | 0.0487 |
| No log | 46.0 | 322 | 0.6173 | 0.805 | 0.2806 | 1.3618 | 0.805 | 0.7870 | 0.1383 | 0.0478 |
| No log | 47.0 | 329 | 0.6177 | 0.81 | 0.2804 | 1.4988 | 0.81 | 0.7906 | 0.1468 | 0.0488 |
| No log | 48.0 | 336 | 0.6175 | 0.81 | 0.2788 | 1.4356 | 0.81 | 0.7917 | 0.1460 | 0.0476 |
| No log | 49.0 | 343 | 0.6209 | 0.81 | 0.2775 | 1.4290 | 0.81 | 0.7925 | 0.1603 | 0.0478 |
| No log | 50.0 | 350 | 0.6244 | 0.815 | 0.2780 | 1.3662 | 0.815 | 0.7974 | 0.1322 | 0.0480 |
| No log | 51.0 | 357 | 0.6176 | 0.81 | 0.2777 | 1.4307 | 0.81 | 0.7941 | 0.1258 | 0.0478 |
| No log | 52.0 | 364 | 0.6150 | 0.805 | 0.2774 | 1.4310 | 0.805 | 0.7896 | 0.1369 | 0.0477 |
| No log | 53.0 | 371 | 0.6164 | 0.81 | 0.2772 | 1.4298 | 0.81 | 0.7941 | 0.1391 | 0.0479 |
| No log | 54.0 | 378 | 0.6137 | 0.81 | 0.2766 | 1.4291 | 0.81 | 0.7928 | 0.1358 | 0.0474 |
| No log | 55.0 | 385 | 0.6163 | 0.81 | 0.2776 | 1.4298 | 0.81 | 0.7928 | 0.1278 | 0.0475 |
| No log | 56.0 | 392 | 0.6148 | 0.81 | 0.2776 | 1.4286 | 0.81 | 0.7928 | 0.1480 | 0.0471 |
| No log | 57.0 | 399 | 0.6154 | 0.81 | 0.2773 | 1.4290 | 0.81 | 0.7928 | 0.1485 | 0.0474 |
| No log | 58.0 | 406 | 0.6143 | 0.8 | 0.2781 | 1.4281 | 0.8000 | 0.7852 | 0.1405 | 0.0473 |
| No log | 59.0 | 413 | 0.6158 | 0.805 | 0.2785 | 1.4295 | 0.805 | 0.7899 | 0.1455 | 0.0473 |
| No log | 60.0 | 420 | 0.6146 | 0.805 | 0.2774 | 1.4310 | 0.805 | 0.7899 | 0.1346 | 0.0472 |
| No log | 61.0 | 427 | 0.6154 | 0.805 | 0.2780 | 1.4292 | 0.805 | 0.7899 | 0.1451 | 0.0472 |
| No log | 62.0 | 434 | 0.6148 | 0.805 | 0.2780 | 1.4304 | 0.805 | 0.7905 | 0.1543 | 0.0473 |
| No log | 63.0 | 441 | 0.6150 | 0.8 | 0.2783 | 1.4284 | 0.8000 | 0.7846 | 0.1502 | 0.0473 |
| No log | 64.0 | 448 | 0.6143 | 0.805 | 0.2780 | 1.4294 | 0.805 | 0.7899 | 0.1453 | 0.0470 |
| No log | 65.0 | 455 | 0.6152 | 0.805 | 0.2782 | 1.4298 | 0.805 | 0.7899 | 0.1373 | 0.0469 |
| No log | 66.0 | 462 | 0.6148 | 0.8 | 0.2781 | 1.4287 | 0.8000 | 0.7852 | 0.1492 | 0.0475 |
| No log | 67.0 | 469 | 0.6134 | 0.805 | 0.2776 | 1.4286 | 0.805 | 0.7899 | 0.1526 | 0.0470 |
| No log | 68.0 | 476 | 0.6150 | 0.8 | 0.2785 | 1.4270 | 0.8000 | 0.7846 | 0.1497 | 0.0474 |
| No log | 69.0 | 483 | 0.6145 | 0.8 | 0.2783 | 1.4281 | 0.8000 | 0.7846 | 0.1483 | 0.0471 |
| No log | 70.0 | 490 | 0.6145 | 0.805 | 0.2778 | 1.4292 | 0.805 | 0.7899 | 0.1472 | 0.0471 |
| No log | 71.0 | 497 | 0.6143 | 0.805 | 0.2779 | 1.4284 | 0.805 | 0.7899 | 0.1529 | 0.0470 |
| 0.2616 | 72.0 | 504 | 0.6148 | 0.805 | 0.2780 | 1.4276 | 0.805 | 0.7899 | 0.1414 | 0.0471 |
| 0.2616 | 73.0 | 511 | 0.6147 | 0.8 | 0.2781 | 1.4285 | 0.8000 | 0.7852 | 0.1400 | 0.0473 |
| 0.2616 | 74.0 | 518 | 0.6147 | 0.8 | 0.2783 | 1.4281 | 0.8000 | 0.7846 | 0.1501 | 0.0473 |
| 0.2616 | 75.0 | 525 | 0.6150 | 0.8 | 0.2784 | 1.4269 | 0.8000 | 0.7846 | 0.1417 | 0.0473 |
| 0.2616 | 76.0 | 532 | 0.6143 | 0.805 | 0.2782 | 1.4273 | 0.805 | 0.7899 | 0.1524 | 0.0470 |
| 0.2616 | 77.0 | 539 | 0.6147 | 0.805 | 0.2782 | 1.4277 | 0.805 | 0.7899 | 0.1526 | 0.0470 |
| 0.2616 | 78.0 | 546 | 0.6149 | 0.8 | 0.2785 | 1.4277 | 0.8000 | 0.7846 | 0.1572 | 0.0474 |
| 0.2616 | 79.0 | 553 | 0.6147 | 0.805 | 0.2782 | 1.4276 | 0.805 | 0.7899 | 0.1529 | 0.0471 |
| 0.2616 | 80.0 | 560 | 0.6145 | 0.805 | 0.2783 | 1.4278 | 0.805 | 0.7899 | 0.1527 | 0.0471 |
| 0.2616 | 81.0 | 567 | 0.6147 | 0.8 | 0.2783 | 1.4277 | 0.8000 | 0.7846 | 0.1483 | 0.0472 |
| 0.2616 | 82.0 | 574 | 0.6146 | 0.8 | 0.2783 | 1.4275 | 0.8000 | 0.7846 | 0.1623 | 0.0473 |
| 0.2616 | 83.0 | 581 | 0.6145 | 0.8 | 0.2783 | 1.4274 | 0.8000 | 0.7846 | 0.1571 | 0.0473 |
| 0.2616 | 84.0 | 588 | 0.6146 | 0.8 | 0.2782 | 1.4276 | 0.8000 | 0.7846 | 0.1538 | 0.0473 |
| 0.2616 | 85.0 | 595 | 0.6146 | 0.805 | 0.2783 | 1.4274 | 0.805 | 0.7899 | 0.1493 | 0.0471 |
| 0.2616 | 86.0 | 602 | 0.6147 | 0.8 | 0.2784 | 1.4269 | 0.8000 | 0.7846 | 0.1627 | 0.0473 |
| 0.2616 | 87.0 | 609 | 0.6146 | 0.8 | 0.2783 | 1.4270 | 0.8000 | 0.7846 | 0.1623 | 0.0472 |
| 0.2616 | 88.0 | 616 | 0.6145 | 0.805 | 0.2783 | 1.4272 | 0.805 | 0.7899 | 0.1579 | 0.0470 |
| 0.2616 | 89.0 | 623 | 0.6146 | 0.8 | 0.2784 | 1.4272 | 0.8000 | 0.7846 | 0.1627 | 0.0474 |
| 0.2616 | 90.0 | 630 | 0.6147 | 0.8 | 0.2783 | 1.4270 | 0.8000 | 0.7846 | 0.1536 | 0.0473 |
| 0.2616 | 91.0 | 637 | 0.6147 | 0.8 | 0.2784 | 1.4268 | 0.8000 | 0.7846 | 0.1627 | 0.0475 |
| 0.2616 | 92.0 | 644 | 0.6145 | 0.805 | 0.2783 | 1.4268 | 0.805 | 0.7899 | 0.1582 | 0.0471 |
| 0.2616 | 93.0 | 651 | 0.6145 | 0.8 | 0.2784 | 1.4269 | 0.8000 | 0.7846 | 0.1626 | 0.0474 |
| 0.2616 | 94.0 | 658 | 0.6146 | 0.8 | 0.2784 | 1.4268 | 0.8000 | 0.7846 | 0.1626 | 0.0473 |
| 0.2616 | 95.0 | 665 | 0.6147 | 0.8 | 0.2784 | 1.4268 | 0.8000 | 0.7846 | 0.1626 | 0.0473 |
| 0.2616 | 96.0 | 672 | 0.6146 | 0.8 | 0.2784 | 1.4269 | 0.8000 | 0.7846 | 0.1626 | 0.0474 |
| 0.2616 | 97.0 | 679 | 0.6146 | 0.8 | 0.2784 | 1.4269 | 0.8000 | 0.7846 | 0.1626 | 0.0474 |
| 0.2616 | 98.0 | 686 | 0.6146 | 0.8 | 0.2784 | 1.4269 | 0.8000 | 0.7846 | 0.1626 | 0.0474 |
| 0.2616 | 99.0 | 693 | 0.6146 | 0.8 | 0.2784 | 1.4268 | 0.8000 | 0.7846 | 0.1626 | 0.0474 |
| 0.2616 | 100.0 | 700 | 0.6146 | 0.8 | 0.2784 | 1.4268 | 0.8000 | 0.7846 | 0.1626 | 0.0474 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.2.0.dev20231112+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
[
"adve",
"email",
"form",
"letter",
"memo",
"news",
"note",
"report",
"resume",
"scientific"
] |
hkivancoral/hushem_5x_deit_small_sgd_0001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_small_sgd_0001_fold5
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3702
- Accuracy: 0.2683
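Each `hushem_5x_deit_small_sgd_0001_fold*` card reports one fold of a 5-fold cross-validation run, so the per-fold eval accuracies can be aggregated into a single mean ± standard deviation summary. A convenience sketch (values copied from the five fold cards; not part of the training pipeline):

```python
import statistics

# eval accuracies reported by the five fold cards (sgd, lr 0.0001)
fold_accuracies = {
    "fold1": 0.3111,
    "fold2": 0.3333,
    "fold3": 0.2093,
    "fold4": 0.2619,
    "fold5": 0.2683,
}

mean_acc = statistics.mean(fold_accuracies.values())
std_acc = statistics.stdev(fold_accuracies.values())
print(f"5-fold accuracy: {mean_acc:.4f} +/- {std_acc:.4f}")
```

The spread across folds (roughly 0.21 to 0.33) is large relative to the mean, which is worth keeping in mind when comparing single-fold numbers between runs.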
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5914 | 1.0 | 28 | 1.5102 | 0.2195 |
| 1.502 | 2.0 | 56 | 1.4998 | 0.2439 |
| 1.5359 | 3.0 | 84 | 1.4897 | 0.2439 |
| 1.4953 | 4.0 | 112 | 1.4806 | 0.2195 |
| 1.505 | 5.0 | 140 | 1.4721 | 0.2439 |
| 1.5366 | 6.0 | 168 | 1.4645 | 0.2439 |
| 1.5251 | 7.0 | 196 | 1.4572 | 0.2439 |
| 1.4698 | 8.0 | 224 | 1.4506 | 0.2439 |
| 1.4915 | 9.0 | 252 | 1.4443 | 0.2439 |
| 1.4618 | 10.0 | 280 | 1.4384 | 0.2439 |
| 1.4473 | 11.0 | 308 | 1.4329 | 0.2439 |
| 1.4682 | 12.0 | 336 | 1.4279 | 0.2439 |
| 1.4426 | 13.0 | 364 | 1.4233 | 0.2439 |
| 1.4128 | 14.0 | 392 | 1.4190 | 0.2683 |
| 1.4363 | 15.0 | 420 | 1.4150 | 0.2683 |
| 1.4383 | 16.0 | 448 | 1.4113 | 0.2683 |
| 1.4168 | 17.0 | 476 | 1.4079 | 0.2683 |
| 1.4317 | 18.0 | 504 | 1.4047 | 0.2683 |
| 1.4208 | 19.0 | 532 | 1.4016 | 0.2927 |
| 1.4021 | 20.0 | 560 | 1.3989 | 0.2927 |
| 1.4325 | 21.0 | 588 | 1.3963 | 0.2927 |
| 1.4072 | 22.0 | 616 | 1.3940 | 0.2927 |
| 1.3729 | 23.0 | 644 | 1.3918 | 0.2927 |
| 1.3955 | 24.0 | 672 | 1.3898 | 0.2927 |
| 1.3868 | 25.0 | 700 | 1.3879 | 0.2927 |
| 1.3985 | 26.0 | 728 | 1.3861 | 0.2683 |
| 1.3854 | 27.0 | 756 | 1.3845 | 0.2683 |
| 1.3968 | 28.0 | 784 | 1.3830 | 0.2683 |
| 1.3689 | 29.0 | 812 | 1.3816 | 0.2683 |
| 1.4069 | 30.0 | 840 | 1.3803 | 0.2683 |
| 1.387 | 31.0 | 868 | 1.3791 | 0.2683 |
| 1.3786 | 32.0 | 896 | 1.3780 | 0.2683 |
| 1.3773 | 33.0 | 924 | 1.3769 | 0.2683 |
| 1.3779 | 34.0 | 952 | 1.3760 | 0.2683 |
| 1.3797 | 35.0 | 980 | 1.3751 | 0.2683 |
| 1.3671 | 36.0 | 1008 | 1.3744 | 0.2683 |
| 1.3638 | 37.0 | 1036 | 1.3737 | 0.2683 |
| 1.3614 | 38.0 | 1064 | 1.3731 | 0.2683 |
| 1.3646 | 39.0 | 1092 | 1.3725 | 0.2683 |
| 1.3609 | 40.0 | 1120 | 1.3720 | 0.2683 |
| 1.3899 | 41.0 | 1148 | 1.3716 | 0.2683 |
| 1.3896 | 42.0 | 1176 | 1.3712 | 0.2683 |
| 1.3725 | 43.0 | 1204 | 1.3709 | 0.2683 |
| 1.3896 | 44.0 | 1232 | 1.3706 | 0.2683 |
| 1.3695 | 45.0 | 1260 | 1.3704 | 0.2683 |
| 1.3698 | 46.0 | 1288 | 1.3703 | 0.2683 |
| 1.3813 | 47.0 | 1316 | 1.3702 | 0.2683 |
| 1.3636 | 48.0 | 1344 | 1.3702 | 0.2683 |
| 1.3528 | 49.0 | 1372 | 1.3702 | 0.2683 |
| 1.3747 | 50.0 | 1400 | 1.3702 | 0.2683 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_5x_deit_small_sgd_001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_small_sgd_001_fold1
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2065
- Accuracy: 0.4444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.469 | 1.0 | 27 | 1.4260 | 0.2444 |
| 1.3863 | 2.0 | 54 | 1.3825 | 0.2889 |
| 1.3604 | 3.0 | 81 | 1.3624 | 0.3333 |
| 1.3505 | 4.0 | 108 | 1.3566 | 0.4 |
| 1.2837 | 5.0 | 135 | 1.3537 | 0.4 |
| 1.2588 | 6.0 | 162 | 1.3503 | 0.4 |
| 1.243 | 7.0 | 189 | 1.3481 | 0.4 |
| 1.2286 | 8.0 | 216 | 1.3453 | 0.4222 |
| 1.1857 | 9.0 | 243 | 1.3417 | 0.4222 |
| 1.1652 | 10.0 | 270 | 1.3373 | 0.4222 |
| 1.135 | 11.0 | 297 | 1.3328 | 0.4222 |
| 1.1257 | 12.0 | 324 | 1.3276 | 0.4222 |
| 1.1465 | 13.0 | 351 | 1.3209 | 0.4444 |
| 1.0952 | 14.0 | 378 | 1.3151 | 0.4444 |
| 1.0919 | 15.0 | 405 | 1.3098 | 0.4444 |
| 1.0847 | 16.0 | 432 | 1.3049 | 0.4222 |
| 1.0446 | 17.0 | 459 | 1.2985 | 0.4667 |
| 1.0054 | 18.0 | 486 | 1.2928 | 0.4444 |
| 1.0169 | 19.0 | 513 | 1.2866 | 0.4222 |
| 1.0001 | 20.0 | 540 | 1.2827 | 0.4222 |
| 0.9603 | 21.0 | 567 | 1.2785 | 0.4222 |
| 0.9735 | 22.0 | 594 | 1.2740 | 0.4444 |
| 0.9444 | 23.0 | 621 | 1.2697 | 0.4667 |
| 0.9156 | 24.0 | 648 | 1.2621 | 0.4444 |
| 0.9658 | 25.0 | 675 | 1.2568 | 0.4444 |
| 0.9699 | 26.0 | 702 | 1.2527 | 0.4444 |
| 0.8985 | 27.0 | 729 | 1.2504 | 0.4444 |
| 0.8958 | 28.0 | 756 | 1.2453 | 0.4222 |
| 0.9214 | 29.0 | 783 | 1.2429 | 0.4444 |
| 0.9068 | 30.0 | 810 | 1.2374 | 0.4222 |
| 0.8659 | 31.0 | 837 | 1.2336 | 0.4222 |
| 0.8774 | 32.0 | 864 | 1.2303 | 0.4222 |
| 0.8587 | 33.0 | 891 | 1.2279 | 0.4222 |
| 0.8719 | 34.0 | 918 | 1.2241 | 0.4222 |
| 0.8448 | 35.0 | 945 | 1.2216 | 0.4222 |
| 0.8563 | 36.0 | 972 | 1.2203 | 0.4222 |
| 0.8555 | 37.0 | 999 | 1.2179 | 0.4222 |
| 0.8252 | 38.0 | 1026 | 1.2169 | 0.4222 |
| 0.83 | 39.0 | 1053 | 1.2146 | 0.4222 |
| 0.8062 | 40.0 | 1080 | 1.2129 | 0.4222 |
| 0.8472 | 41.0 | 1107 | 1.2110 | 0.4222 |
| 0.8075 | 42.0 | 1134 | 1.2099 | 0.4222 |
| 0.8415 | 43.0 | 1161 | 1.2087 | 0.4222 |
| 0.8064 | 44.0 | 1188 | 1.2081 | 0.4444 |
| 0.8219 | 45.0 | 1215 | 1.2076 | 0.4444 |
| 0.8297 | 46.0 | 1242 | 1.2069 | 0.4444 |
| 0.8108 | 47.0 | 1269 | 1.2066 | 0.4444 |
| 0.8128 | 48.0 | 1296 | 1.2065 | 0.4444 |
| 0.8385 | 49.0 | 1323 | 1.2065 | 0.4444 |
| 0.8247 | 50.0 | 1350 | 1.2065 | 0.4444 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_5x_deit_small_sgd_001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_small_sgd_001_fold2
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3629
- Accuracy: 0.4444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4659 | 1.0 | 27 | 1.4409 | 0.2889 |
| 1.3866 | 2.0 | 54 | 1.4026 | 0.3556 |
| 1.3486 | 3.0 | 81 | 1.3816 | 0.3333 |
| 1.3477 | 4.0 | 108 | 1.3676 | 0.3111 |
| 1.2816 | 5.0 | 135 | 1.3557 | 0.3333 |
| 1.2558 | 6.0 | 162 | 1.3444 | 0.3556 |
| 1.2259 | 7.0 | 189 | 1.3343 | 0.3556 |
| 1.2042 | 8.0 | 216 | 1.3245 | 0.3556 |
| 1.1683 | 9.0 | 243 | 1.3158 | 0.4 |
| 1.1515 | 10.0 | 270 | 1.3086 | 0.4222 |
| 1.1156 | 11.0 | 297 | 1.3037 | 0.4222 |
| 1.1061 | 12.0 | 324 | 1.2999 | 0.4444 |
| 1.0903 | 13.0 | 351 | 1.3002 | 0.4444 |
| 1.0661 | 14.0 | 378 | 1.3028 | 0.4444 |
| 1.0598 | 15.0 | 405 | 1.3085 | 0.4444 |
| 1.0378 | 16.0 | 432 | 1.3130 | 0.4444 |
| 1.0191 | 17.0 | 459 | 1.3179 | 0.4444 |
| 0.9884 | 18.0 | 486 | 1.3238 | 0.4444 |
| 0.9629 | 19.0 | 513 | 1.3282 | 0.4444 |
| 0.9575 | 20.0 | 540 | 1.3319 | 0.4222 |
| 0.9397 | 21.0 | 567 | 1.3353 | 0.4222 |
| 0.9296 | 22.0 | 594 | 1.3380 | 0.4222 |
| 0.9149 | 23.0 | 621 | 1.3408 | 0.4222 |
| 0.9023 | 24.0 | 648 | 1.3446 | 0.4222 |
| 0.8747 | 25.0 | 675 | 1.3454 | 0.4667 |
| 0.9184 | 26.0 | 702 | 1.3472 | 0.4444 |
| 0.8454 | 27.0 | 729 | 1.3479 | 0.4444 |
| 0.8505 | 28.0 | 756 | 1.3510 | 0.4444 |
| 0.8567 | 29.0 | 783 | 1.3517 | 0.4444 |
| 0.8854 | 30.0 | 810 | 1.3544 | 0.4667 |
| 0.834 | 31.0 | 837 | 1.3546 | 0.4444 |
| 0.8438 | 32.0 | 864 | 1.3560 | 0.4444 |
| 0.8236 | 33.0 | 891 | 1.3564 | 0.4444 |
| 0.8208 | 34.0 | 918 | 1.3570 | 0.4444 |
| 0.8066 | 35.0 | 945 | 1.3589 | 0.4444 |
| 0.8073 | 36.0 | 972 | 1.3591 | 0.4444 |
| 0.8089 | 37.0 | 999 | 1.3595 | 0.4444 |
| 0.777 | 38.0 | 1026 | 1.3599 | 0.4444 |
| 0.7828 | 39.0 | 1053 | 1.3610 | 0.4444 |
| 0.787 | 40.0 | 1080 | 1.3609 | 0.4444 |
| 0.8016 | 41.0 | 1107 | 1.3612 | 0.4444 |
| 0.7822 | 42.0 | 1134 | 1.3619 | 0.4444 |
| 0.8105 | 43.0 | 1161 | 1.3621 | 0.4444 |
| 0.7646 | 44.0 | 1188 | 1.3622 | 0.4444 |
| 0.7928 | 45.0 | 1215 | 1.3624 | 0.4444 |
| 0.7714 | 46.0 | 1242 | 1.3625 | 0.4444 |
| 0.7741 | 47.0 | 1269 | 1.3627 | 0.4444 |
| 0.7688 | 48.0 | 1296 | 1.3629 | 0.4444 |
| 0.834 | 49.0 | 1323 | 1.3629 | 0.4444 |
| 0.7751 | 50.0 | 1350 | 1.3629 | 0.4444 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_5x_deit_small_sgd_001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_small_sgd_001_fold3
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1692
- Accuracy: 0.4186
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
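As a sanity check, the schedule lengths implied by these hyperparameters can be derived from the results table below (a minimal sketch: the per-epoch step count of 28 is read off the Step column, and the warmup length is an assumption based on how a warmup *ratio* is typically converted to steps for a linear scheduler):

```python
# Sketch: schedule lengths implied by the hyperparameters above.
# steps_per_epoch is read off the Step column of the results table;
# warmup_steps assumes warmup_ratio is applied to the total step count.
steps_per_epoch = 28
num_epochs = 50
lr_scheduler_warmup_ratio = 0.1

total_steps = steps_per_epoch * num_epochs
warmup_steps = round(total_steps * lr_scheduler_warmup_ratio)
print(total_steps, warmup_steps)  # 1400 140
```

The final Step value of 1400 in the table matches this arithmetic.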
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4612 | 1.0 | 28 | 1.5118 | 0.2558 |
| 1.3805 | 2.0 | 56 | 1.4497 | 0.2093 |
| 1.3041 | 3.0 | 84 | 1.4123 | 0.2093 |
| 1.3096 | 4.0 | 112 | 1.3866 | 0.2558 |
| 1.3306 | 5.0 | 140 | 1.3677 | 0.2558 |
| 1.2628 | 6.0 | 168 | 1.3531 | 0.2791 |
| 1.2467 | 7.0 | 196 | 1.3404 | 0.2791 |
| 1.2266 | 8.0 | 224 | 1.3306 | 0.3023 |
| 1.194 | 9.0 | 252 | 1.3214 | 0.3023 |
| 1.1663 | 10.0 | 280 | 1.3122 | 0.2791 |
| 1.1821 | 11.0 | 308 | 1.3053 | 0.3023 |
| 1.1428 | 12.0 | 336 | 1.2975 | 0.3023 |
| 1.1221 | 13.0 | 364 | 1.2914 | 0.3023 |
| 1.0981 | 14.0 | 392 | 1.2852 | 0.3023 |
| 1.1081 | 15.0 | 420 | 1.2792 | 0.3488 |
| 1.0434 | 16.0 | 448 | 1.2755 | 0.3488 |
| 1.0661 | 17.0 | 476 | 1.2676 | 0.3721 |
| 1.0349 | 18.0 | 504 | 1.2624 | 0.3721 |
| 1.0251 | 19.0 | 532 | 1.2563 | 0.3721 |
| 0.9878 | 20.0 | 560 | 1.2508 | 0.3721 |
| 1.0276 | 21.0 | 588 | 1.2459 | 0.3721 |
| 0.9848 | 22.0 | 616 | 1.2419 | 0.3721 |
| 0.9825 | 23.0 | 644 | 1.2362 | 0.3721 |
| 0.9289 | 24.0 | 672 | 1.2312 | 0.3721 |
| 0.9221 | 25.0 | 700 | 1.2274 | 0.3721 |
| 0.9187 | 26.0 | 728 | 1.2222 | 0.3721 |
| 0.9248 | 27.0 | 756 | 1.2177 | 0.3953 |
| 0.9505 | 28.0 | 784 | 1.2135 | 0.3721 |
| 0.9022 | 29.0 | 812 | 1.2094 | 0.3721 |
| 0.8445 | 30.0 | 840 | 1.2060 | 0.3721 |
| 0.861 | 31.0 | 868 | 1.2023 | 0.3953 |
| 0.9005 | 32.0 | 896 | 1.1985 | 0.3953 |
| 0.8936 | 33.0 | 924 | 1.1956 | 0.3953 |
| 0.8469 | 34.0 | 952 | 1.1923 | 0.3953 |
| 0.9675 | 35.0 | 980 | 1.1892 | 0.3953 |
| 0.8615 | 36.0 | 1008 | 1.1869 | 0.3953 |
| 0.8944 | 37.0 | 1036 | 1.1838 | 0.3953 |
| 0.8374 | 38.0 | 1064 | 1.1820 | 0.3953 |
| 0.8431 | 39.0 | 1092 | 1.1797 | 0.3953 |
| 0.8451 | 40.0 | 1120 | 1.1776 | 0.3953 |
| 0.8614 | 41.0 | 1148 | 1.1756 | 0.3953 |
| 0.8347 | 42.0 | 1176 | 1.1741 | 0.3953 |
| 0.8159 | 43.0 | 1204 | 1.1724 | 0.4419 |
| 0.8229 | 44.0 | 1232 | 1.1713 | 0.4419 |
| 0.8046 | 45.0 | 1260 | 1.1703 | 0.4186 |
| 0.8129 | 46.0 | 1288 | 1.1698 | 0.4186 |
| 0.8156 | 47.0 | 1316 | 1.1693 | 0.4186 |
| 0.8259 | 48.0 | 1344 | 1.1692 | 0.4186 |
| 0.7943 | 49.0 | 1372 | 1.1692 | 0.4186 |
| 0.8288 | 50.0 | 1400 | 1.1692 | 0.4186 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
jordyvl/dit-base_tobacco-small_tobacco3482_kd_MSE
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-base_tobacco-small_tobacco3482_kd_MSE
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7746
- Accuracy: 0.81
- Brier Loss: 0.2775
- Nll: 1.1981
- F1 Micro: 0.81
- F1 Macro: 0.7980
- Ece: 0.1403
- Aurc: 0.0500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 7.0083 | 0.14 | 0.9250 | 5.4655 | 0.14 | 0.1468 | 0.2824 | 0.8920 |
| No log | 2.0 | 14 | 5.8247 | 0.35 | 0.7844 | 3.6804 | 0.35 | 0.2240 | 0.2541 | 0.5601 |
| No log | 3.0 | 21 | 4.1788 | 0.49 | 0.6140 | 1.8305 | 0.49 | 0.4563 | 0.2512 | 0.2825 |
| No log | 4.0 | 28 | 2.7911 | 0.66 | 0.4534 | 1.6541 | 0.66 | 0.5604 | 0.2299 | 0.1475 |
| No log | 5.0 | 35 | 2.3354 | 0.74 | 0.3892 | 1.8678 | 0.74 | 0.6851 | 0.2104 | 0.0989 |
| No log | 6.0 | 42 | 1.9675 | 0.73 | 0.3585 | 1.3943 | 0.7300 | 0.6822 | 0.1846 | 0.0930 |
| No log | 7.0 | 49 | 1.7187 | 0.79 | 0.3190 | 1.3921 | 0.79 | 0.7510 | 0.1739 | 0.0760 |
| No log | 8.0 | 56 | 1.6507 | 0.77 | 0.3469 | 1.3682 | 0.7700 | 0.7289 | 0.1834 | 0.0851 |
| No log | 9.0 | 63 | 1.2713 | 0.79 | 0.3040 | 1.4042 | 0.79 | 0.7622 | 0.1505 | 0.0540 |
| No log | 10.0 | 70 | 1.1461 | 0.805 | 0.2852 | 1.3953 | 0.805 | 0.7849 | 0.1371 | 0.0522 |
| No log | 11.0 | 77 | 1.1328 | 0.81 | 0.2713 | 1.3113 | 0.81 | 0.7901 | 0.1371 | 0.0442 |
| No log | 12.0 | 84 | 1.2818 | 0.8 | 0.3192 | 1.2680 | 0.8000 | 0.7808 | 0.1674 | 0.0725 |
| No log | 13.0 | 91 | 1.0493 | 0.805 | 0.2767 | 1.2512 | 0.805 | 0.7846 | 0.1451 | 0.0535 |
| No log | 14.0 | 98 | 0.9657 | 0.815 | 0.2802 | 1.1796 | 0.815 | 0.7965 | 0.1680 | 0.0487 |
| No log | 15.0 | 105 | 0.9910 | 0.82 | 0.2695 | 1.3658 | 0.82 | 0.8000 | 0.1400 | 0.0475 |
| No log | 16.0 | 112 | 0.9828 | 0.81 | 0.2823 | 1.3175 | 0.81 | 0.7974 | 0.1390 | 0.0549 |
| No log | 17.0 | 119 | 0.9279 | 0.8 | 0.2815 | 1.3727 | 0.8000 | 0.7882 | 0.1599 | 0.0454 |
| No log | 18.0 | 126 | 1.0076 | 0.805 | 0.2929 | 1.2999 | 0.805 | 0.7825 | 0.1480 | 0.0562 |
| No log | 19.0 | 133 | 0.9524 | 0.82 | 0.2705 | 1.3029 | 0.82 | 0.8122 | 0.1481 | 0.0454 |
| No log | 20.0 | 140 | 1.0584 | 0.795 | 0.3010 | 1.3019 | 0.795 | 0.7699 | 0.1669 | 0.0650 |
| No log | 21.0 | 147 | 0.9390 | 0.805 | 0.2775 | 1.4073 | 0.805 | 0.7888 | 0.1211 | 0.0513 |
| No log | 22.0 | 154 | 0.9857 | 0.81 | 0.2895 | 1.2894 | 0.81 | 0.7879 | 0.1469 | 0.0548 |
| No log | 23.0 | 161 | 0.9137 | 0.795 | 0.2809 | 1.4461 | 0.795 | 0.7872 | 0.1528 | 0.0472 |
| No log | 24.0 | 168 | 0.8545 | 0.815 | 0.2844 | 1.2582 | 0.815 | 0.7981 | 0.1466 | 0.0484 |
| No log | 25.0 | 175 | 0.8860 | 0.81 | 0.2766 | 1.4525 | 0.81 | 0.8010 | 0.1241 | 0.0457 |
| No log | 26.0 | 182 | 0.8624 | 0.83 | 0.2813 | 1.1993 | 0.83 | 0.8222 | 0.1536 | 0.0512 |
| No log | 27.0 | 189 | 0.9119 | 0.805 | 0.2894 | 1.4164 | 0.805 | 0.7869 | 0.1576 | 0.0519 |
| No log | 28.0 | 196 | 0.9072 | 0.82 | 0.2753 | 1.2927 | 0.82 | 0.8149 | 0.1292 | 0.0514 |
| No log | 29.0 | 203 | 0.8428 | 0.8 | 0.2805 | 1.3065 | 0.8000 | 0.7820 | 0.1368 | 0.0502 |
| No log | 30.0 | 210 | 0.8696 | 0.81 | 0.2858 | 1.2825 | 0.81 | 0.7989 | 0.1454 | 0.0524 |
| No log | 31.0 | 217 | 0.8542 | 0.8 | 0.2861 | 1.2029 | 0.8000 | 0.7766 | 0.1412 | 0.0496 |
| No log | 32.0 | 224 | 0.8576 | 0.805 | 0.2896 | 1.3371 | 0.805 | 0.7814 | 0.1513 | 0.0515 |
| No log | 33.0 | 231 | 0.8615 | 0.8 | 0.2859 | 1.2347 | 0.8000 | 0.7826 | 0.1473 | 0.0522 |
| No log | 34.0 | 238 | 0.8474 | 0.805 | 0.2807 | 1.3510 | 0.805 | 0.7946 | 0.1493 | 0.0524 |
| No log | 35.0 | 245 | 0.9058 | 0.79 | 0.3035 | 1.2005 | 0.79 | 0.7768 | 0.1497 | 0.0553 |
| No log | 36.0 | 252 | 0.8461 | 0.805 | 0.2897 | 1.2770 | 0.805 | 0.7906 | 0.1599 | 0.0513 |
| No log | 37.0 | 259 | 0.8461 | 0.805 | 0.2962 | 1.1989 | 0.805 | 0.7912 | 0.1527 | 0.0533 |
| No log | 38.0 | 266 | 0.8646 | 0.815 | 0.2817 | 1.3653 | 0.815 | 0.8031 | 0.1355 | 0.0499 |
| No log | 39.0 | 273 | 0.8306 | 0.8 | 0.2905 | 1.1852 | 0.8000 | 0.7862 | 0.1528 | 0.0549 |
| No log | 40.0 | 280 | 0.8561 | 0.815 | 0.2838 | 1.2577 | 0.815 | 0.8005 | 0.1431 | 0.0544 |
| No log | 41.0 | 287 | 0.8236 | 0.805 | 0.2836 | 1.2093 | 0.805 | 0.7925 | 0.1376 | 0.0490 |
| No log | 42.0 | 294 | 0.8221 | 0.805 | 0.2853 | 1.1929 | 0.805 | 0.7805 | 0.1397 | 0.0524 |
| No log | 43.0 | 301 | 0.7834 | 0.815 | 0.2666 | 1.2720 | 0.815 | 0.8006 | 0.1316 | 0.0496 |
| No log | 44.0 | 308 | 0.8022 | 0.8 | 0.2839 | 1.2009 | 0.8000 | 0.7870 | 0.1457 | 0.0514 |
| No log | 45.0 | 315 | 0.8009 | 0.81 | 0.2735 | 1.3505 | 0.81 | 0.7970 | 0.1359 | 0.0494 |
| No log | 46.0 | 322 | 0.8029 | 0.81 | 0.2775 | 1.1956 | 0.81 | 0.7983 | 0.1476 | 0.0509 |
| No log | 47.0 | 329 | 0.7979 | 0.82 | 0.2818 | 1.2005 | 0.82 | 0.8049 | 0.1466 | 0.0488 |
| No log | 48.0 | 336 | 0.7763 | 0.815 | 0.2784 | 1.1905 | 0.815 | 0.7970 | 0.1358 | 0.0512 |
| No log | 49.0 | 343 | 0.7917 | 0.81 | 0.2802 | 1.2136 | 0.81 | 0.7989 | 0.1429 | 0.0486 |
| No log | 50.0 | 350 | 0.8223 | 0.825 | 0.2809 | 1.1860 | 0.825 | 0.8042 | 0.1567 | 0.0520 |
| No log | 51.0 | 357 | 0.7952 | 0.82 | 0.2747 | 1.2074 | 0.82 | 0.8078 | 0.1377 | 0.0484 |
| No log | 52.0 | 364 | 0.7868 | 0.825 | 0.2714 | 1.2850 | 0.825 | 0.8170 | 0.1371 | 0.0476 |
| No log | 53.0 | 371 | 0.8111 | 0.805 | 0.2869 | 1.1892 | 0.805 | 0.7954 | 0.1467 | 0.0524 |
| No log | 54.0 | 378 | 0.7739 | 0.81 | 0.2755 | 1.1946 | 0.81 | 0.7953 | 0.1567 | 0.0486 |
| No log | 55.0 | 385 | 0.7930 | 0.825 | 0.2825 | 1.2000 | 0.825 | 0.8087 | 0.1546 | 0.0518 |
| No log | 56.0 | 392 | 0.7826 | 0.815 | 0.2789 | 1.1953 | 0.815 | 0.8031 | 0.1353 | 0.0514 |
| No log | 57.0 | 399 | 0.7716 | 0.82 | 0.2714 | 1.3115 | 0.82 | 0.8079 | 0.1207 | 0.0470 |
| No log | 58.0 | 406 | 0.8036 | 0.815 | 0.2878 | 1.1875 | 0.815 | 0.7945 | 0.1469 | 0.0531 |
| No log | 59.0 | 413 | 0.7714 | 0.82 | 0.2722 | 1.2787 | 0.82 | 0.8128 | 0.1264 | 0.0467 |
| No log | 60.0 | 420 | 0.7671 | 0.825 | 0.2720 | 1.2722 | 0.825 | 0.8136 | 0.1378 | 0.0476 |
| No log | 61.0 | 427 | 0.7885 | 0.815 | 0.2834 | 1.1798 | 0.815 | 0.8007 | 0.1480 | 0.0526 |
| No log | 62.0 | 434 | 0.7621 | 0.82 | 0.2706 | 1.3459 | 0.82 | 0.8102 | 0.1156 | 0.0482 |
| No log | 63.0 | 441 | 0.7691 | 0.81 | 0.2797 | 1.1379 | 0.81 | 0.7959 | 0.1429 | 0.0506 |
| No log | 64.0 | 448 | 0.7699 | 0.81 | 0.2776 | 1.1964 | 0.81 | 0.7974 | 0.1473 | 0.0494 |
| No log | 65.0 | 455 | 0.7693 | 0.82 | 0.2739 | 1.2089 | 0.82 | 0.8106 | 0.1390 | 0.0481 |
| No log | 66.0 | 462 | 0.7891 | 0.81 | 0.2805 | 1.1989 | 0.81 | 0.7927 | 0.1530 | 0.0513 |
| No log | 67.0 | 469 | 0.7806 | 0.82 | 0.2798 | 1.2033 | 0.82 | 0.8068 | 0.1408 | 0.0485 |
| No log | 68.0 | 476 | 0.7877 | 0.82 | 0.2815 | 1.1896 | 0.82 | 0.8054 | 0.1376 | 0.0501 |
| No log | 69.0 | 483 | 0.7649 | 0.825 | 0.2731 | 1.1567 | 0.825 | 0.8155 | 0.1371 | 0.0479 |
| No log | 70.0 | 490 | 0.7740 | 0.82 | 0.2764 | 1.1929 | 0.82 | 0.8107 | 0.1250 | 0.0511 |
| No log | 71.0 | 497 | 0.7657 | 0.82 | 0.2744 | 1.2762 | 0.82 | 0.8068 | 0.1374 | 0.0488 |
| 0.4804 | 72.0 | 504 | 0.7887 | 0.805 | 0.2839 | 1.1851 | 0.805 | 0.7914 | 0.1524 | 0.0513 |
| 0.4804 | 73.0 | 511 | 0.7662 | 0.815 | 0.2759 | 1.1973 | 0.815 | 0.8010 | 0.1395 | 0.0496 |
| 0.4804 | 74.0 | 518 | 0.7706 | 0.825 | 0.2742 | 1.2020 | 0.825 | 0.8196 | 0.1398 | 0.0492 |
| 0.4804 | 75.0 | 525 | 0.7780 | 0.815 | 0.2802 | 1.1881 | 0.815 | 0.7970 | 0.1392 | 0.0505 |
| 0.4804 | 76.0 | 532 | 0.7731 | 0.825 | 0.2745 | 1.2695 | 0.825 | 0.8152 | 0.1548 | 0.0485 |
| 0.4804 | 77.0 | 539 | 0.7743 | 0.825 | 0.2762 | 1.2039 | 0.825 | 0.8109 | 0.1326 | 0.0490 |
| 0.4804 | 78.0 | 546 | 0.7782 | 0.805 | 0.2792 | 1.2001 | 0.805 | 0.7905 | 0.1381 | 0.0506 |
| 0.4804 | 79.0 | 553 | 0.7786 | 0.81 | 0.2807 | 1.1929 | 0.81 | 0.7980 | 0.1394 | 0.0505 |
| 0.4804 | 80.0 | 560 | 0.7759 | 0.82 | 0.2772 | 1.1973 | 0.82 | 0.8081 | 0.1296 | 0.0494 |
| 0.4804 | 81.0 | 567 | 0.7703 | 0.82 | 0.2758 | 1.2069 | 0.82 | 0.8096 | 0.1405 | 0.0491 |
| 0.4804 | 82.0 | 574 | 0.7749 | 0.81 | 0.2777 | 1.1996 | 0.81 | 0.7980 | 0.1502 | 0.0501 |
| 0.4804 | 83.0 | 581 | 0.7768 | 0.815 | 0.2777 | 1.2009 | 0.815 | 0.8052 | 0.1237 | 0.0496 |
| 0.4804 | 84.0 | 588 | 0.7761 | 0.815 | 0.2778 | 1.1986 | 0.815 | 0.8008 | 0.1333 | 0.0495 |
| 0.4804 | 85.0 | 595 | 0.7771 | 0.815 | 0.2780 | 1.1984 | 0.815 | 0.8008 | 0.1335 | 0.0497 |
| 0.4804 | 86.0 | 602 | 0.7755 | 0.81 | 0.2777 | 1.1987 | 0.81 | 0.7980 | 0.1327 | 0.0501 |
| 0.4804 | 87.0 | 609 | 0.7749 | 0.81 | 0.2776 | 1.1974 | 0.81 | 0.7980 | 0.1261 | 0.0499 |
| 0.4804 | 88.0 | 616 | 0.7746 | 0.815 | 0.2776 | 1.1981 | 0.815 | 0.8052 | 0.1238 | 0.0497 |
| 0.4804 | 89.0 | 623 | 0.7744 | 0.81 | 0.2776 | 1.1981 | 0.81 | 0.7980 | 0.1283 | 0.0500 |
| 0.4804 | 90.0 | 630 | 0.7743 | 0.81 | 0.2774 | 1.1987 | 0.81 | 0.7980 | 0.1346 | 0.0499 |
| 0.4804 | 91.0 | 637 | 0.7741 | 0.81 | 0.2774 | 1.1981 | 0.81 | 0.7980 | 0.1379 | 0.0499 |
| 0.4804 | 92.0 | 644 | 0.7742 | 0.81 | 0.2774 | 1.1982 | 0.81 | 0.7980 | 0.1403 | 0.0499 |
| 0.4804 | 93.0 | 651 | 0.7745 | 0.81 | 0.2775 | 1.1982 | 0.81 | 0.7980 | 0.1403 | 0.0500 |
| 0.4804 | 94.0 | 658 | 0.7746 | 0.81 | 0.2776 | 1.1978 | 0.81 | 0.7980 | 0.1316 | 0.0500 |
| 0.4804 | 95.0 | 665 | 0.7745 | 0.81 | 0.2775 | 1.1982 | 0.81 | 0.7980 | 0.1380 | 0.0499 |
| 0.4804 | 96.0 | 672 | 0.7746 | 0.81 | 0.2775 | 1.1981 | 0.81 | 0.7980 | 0.1315 | 0.0500 |
| 0.4804 | 97.0 | 679 | 0.7746 | 0.81 | 0.2775 | 1.1981 | 0.81 | 0.7980 | 0.1403 | 0.0500 |
| 0.4804 | 98.0 | 686 | 0.7746 | 0.81 | 0.2775 | 1.1981 | 0.81 | 0.7980 | 0.1403 | 0.0500 |
| 0.4804 | 99.0 | 693 | 0.7746 | 0.81 | 0.2775 | 1.1981 | 0.81 | 0.7980 | 0.1403 | 0.0500 |
| 0.4804 | 100.0 | 700 | 0.7746 | 0.81 | 0.2775 | 1.1981 | 0.81 | 0.7980 | 0.1403 | 0.0500 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.2.0.dev20231112+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
[
"adve",
"email",
"form",
"letter",
"memo",
"news",
"note",
"report",
"resume",
"scientific"
] |
hkivancoral/hushem_5x_deit_small_sgd_001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_small_sgd_001_fold4
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0586
- Accuracy: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.487 | 1.0 | 28 | 1.4172 | 0.3095 |
| 1.4655 | 2.0 | 56 | 1.3829 | 0.3333 |
| 1.3519 | 3.0 | 84 | 1.3645 | 0.2619 |
| 1.3235 | 4.0 | 112 | 1.3524 | 0.2619 |
| 1.3019 | 5.0 | 140 | 1.3421 | 0.2857 |
| 1.2683 | 6.0 | 168 | 1.3287 | 0.2857 |
| 1.2448 | 7.0 | 196 | 1.3147 | 0.2857 |
| 1.2154 | 8.0 | 224 | 1.3011 | 0.2619 |
| 1.1886 | 9.0 | 252 | 1.2876 | 0.3571 |
| 1.1547 | 10.0 | 280 | 1.2739 | 0.3810 |
| 1.1374 | 11.0 | 308 | 1.2618 | 0.3810 |
| 1.1111 | 12.0 | 336 | 1.2488 | 0.3810 |
| 1.1298 | 13.0 | 364 | 1.2398 | 0.4048 |
| 1.0797 | 14.0 | 392 | 1.2302 | 0.4048 |
| 1.0414 | 15.0 | 420 | 1.2217 | 0.4286 |
| 1.061 | 16.0 | 448 | 1.2120 | 0.4286 |
| 1.0634 | 17.0 | 476 | 1.2016 | 0.4524 |
| 1.0054 | 18.0 | 504 | 1.1928 | 0.4524 |
| 0.9762 | 19.0 | 532 | 1.1844 | 0.4524 |
| 1.0106 | 20.0 | 560 | 1.1764 | 0.4524 |
| 0.9235 | 21.0 | 588 | 1.1685 | 0.4524 |
| 0.9458 | 22.0 | 616 | 1.1599 | 0.4762 |
| 0.9326 | 23.0 | 644 | 1.1543 | 0.5 |
| 0.9222 | 24.0 | 672 | 1.1465 | 0.5 |
| 0.8846 | 25.0 | 700 | 1.1391 | 0.5 |
| 0.8795 | 26.0 | 728 | 1.1307 | 0.4762 |
| 0.8711 | 27.0 | 756 | 1.1242 | 0.5238 |
| 0.8921 | 28.0 | 784 | 1.1184 | 0.5238 |
| 0.8796 | 29.0 | 812 | 1.1133 | 0.5238 |
| 0.8567 | 30.0 | 840 | 1.1054 | 0.5238 |
| 0.8632 | 31.0 | 868 | 1.1011 | 0.5238 |
| 0.8179 | 32.0 | 896 | 1.0965 | 0.5238 |
| 0.8418 | 33.0 | 924 | 1.0917 | 0.5238 |
| 0.8097 | 34.0 | 952 | 1.0866 | 0.5238 |
| 0.8474 | 35.0 | 980 | 1.0818 | 0.5238 |
| 0.7989 | 36.0 | 1008 | 1.0776 | 0.5238 |
| 0.7935 | 37.0 | 1036 | 1.0750 | 0.5238 |
| 0.8104 | 38.0 | 1064 | 1.0725 | 0.5238 |
| 0.8018 | 39.0 | 1092 | 1.0698 | 0.5238 |
| 0.797 | 40.0 | 1120 | 1.0673 | 0.5238 |
| 0.8004 | 41.0 | 1148 | 1.0654 | 0.5238 |
| 0.775 | 42.0 | 1176 | 1.0641 | 0.5 |
| 0.7606 | 43.0 | 1204 | 1.0623 | 0.5 |
| 0.7649 | 44.0 | 1232 | 1.0613 | 0.5 |
| 0.7627 | 45.0 | 1260 | 1.0601 | 0.5 |
| 0.7807 | 46.0 | 1288 | 1.0595 | 0.5 |
| 0.7697 | 47.0 | 1316 | 1.0588 | 0.5 |
| 0.7683 | 48.0 | 1344 | 1.0586 | 0.5 |
| 0.783 | 49.0 | 1372 | 1.0586 | 0.5 |
| 0.7862 | 50.0 | 1400 | 1.0586 | 0.5 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_5x_deit_small_sgd_001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_small_sgd_001_fold5
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0219
- Accuracy: 0.5854
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5389 | 1.0 | 28 | 1.4343 | 0.2439 |
| 1.3921 | 2.0 | 56 | 1.3847 | 0.2683 |
| 1.3749 | 3.0 | 84 | 1.3585 | 0.3659 |
| 1.3126 | 4.0 | 112 | 1.3409 | 0.3659 |
| 1.3117 | 5.0 | 140 | 1.3251 | 0.3659 |
| 1.3121 | 6.0 | 168 | 1.3119 | 0.3415 |
| 1.2628 | 7.0 | 196 | 1.2980 | 0.3415 |
| 1.2308 | 8.0 | 224 | 1.2843 | 0.3659 |
| 1.2428 | 9.0 | 252 | 1.2711 | 0.4146 |
| 1.1961 | 10.0 | 280 | 1.2591 | 0.4146 |
| 1.1795 | 11.0 | 308 | 1.2486 | 0.3902 |
| 1.1594 | 12.0 | 336 | 1.2381 | 0.3902 |
| 1.1371 | 13.0 | 364 | 1.2260 | 0.3902 |
| 1.1217 | 14.0 | 392 | 1.2140 | 0.4390 |
| 1.0975 | 15.0 | 420 | 1.2018 | 0.4634 |
| 1.1139 | 16.0 | 448 | 1.1910 | 0.4878 |
| 1.0797 | 17.0 | 476 | 1.1802 | 0.4878 |
| 1.0813 | 18.0 | 504 | 1.1682 | 0.4878 |
| 1.0619 | 19.0 | 532 | 1.1572 | 0.4878 |
| 1.0398 | 20.0 | 560 | 1.1467 | 0.5122 |
| 1.0215 | 21.0 | 588 | 1.1362 | 0.5122 |
| 1.0014 | 22.0 | 616 | 1.1280 | 0.5366 |
| 1.0047 | 23.0 | 644 | 1.1216 | 0.5610 |
| 0.9823 | 24.0 | 672 | 1.1144 | 0.5610 |
| 0.9814 | 25.0 | 700 | 1.1058 | 0.5610 |
| 0.9822 | 26.0 | 728 | 1.0976 | 0.5610 |
| 0.9448 | 27.0 | 756 | 1.0916 | 0.5366 |
| 0.9805 | 28.0 | 784 | 1.0839 | 0.5366 |
| 0.9187 | 29.0 | 812 | 1.0780 | 0.5366 |
| 0.9659 | 30.0 | 840 | 1.0725 | 0.5366 |
| 0.9135 | 31.0 | 868 | 1.0663 | 0.5610 |
| 0.889 | 32.0 | 896 | 1.0628 | 0.5610 |
| 0.9089 | 33.0 | 924 | 1.0587 | 0.5610 |
| 0.9062 | 34.0 | 952 | 1.0524 | 0.5610 |
| 0.9029 | 35.0 | 980 | 1.0479 | 0.5610 |
| 0.8924 | 36.0 | 1008 | 1.0439 | 0.5854 |
| 0.8694 | 37.0 | 1036 | 1.0402 | 0.5854 |
| 0.8578 | 38.0 | 1064 | 1.0365 | 0.5610 |
| 0.8992 | 39.0 | 1092 | 1.0340 | 0.5854 |
| 0.8586 | 40.0 | 1120 | 1.0317 | 0.5854 |
| 0.8737 | 41.0 | 1148 | 1.0296 | 0.5854 |
| 0.8517 | 42.0 | 1176 | 1.0278 | 0.5854 |
| 0.8537 | 43.0 | 1204 | 1.0257 | 0.5854 |
| 0.8642 | 44.0 | 1232 | 1.0243 | 0.5854 |
| 0.871 | 45.0 | 1260 | 1.0234 | 0.5854 |
| 0.8594 | 46.0 | 1288 | 1.0226 | 0.5854 |
| 0.8675 | 47.0 | 1316 | 1.0221 | 0.5854 |
| 0.874 | 48.0 | 1344 | 1.0219 | 0.5854 |
| 0.8459 | 49.0 | 1372 | 1.0219 | 0.5854 |
| 0.8538 | 50.0 | 1400 | 1.0219 | 0.5854 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
piecurus/convnext-tiny-224-finetuned-eurosat-albumentations
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224-finetuned-eurosat-albumentations
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2257
- Accuracy: 0.9622
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
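Note that the effective batch size here comes from gradient accumulation: each optimizer step accumulates 4 micro-batches of 32, which is where the listed `total_train_batch_size` of 128 comes from (a minimal sketch of that arithmetic):

```python
# Sketch: how the reported total_train_batch_size is derived.
train_batch_size = 32            # per-device micro-batch size
gradient_accumulation_steps = 4  # micro-batches per optimizer step
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 128, matching the value listed above
```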
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2446 | 1.0 | 190 | 0.2257 | 0.9622 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"annualcrop",
"forest",
"herbaceousvegetation",
"highway",
"industrial",
"pasture",
"permanentcrop",
"residential",
"river",
"sealake"
] |
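The label list above gives the EuroSAT scene categories in class-index order; a fine-tuned model's config usually carries this as an `id2label` mapping (a minimal sketch, reconstructing the mapping from the list rather than reading it from any actual config):

```python
# Sketch: reconstructing id2label/label2id from the label list above.
labels = [
    "annualcrop", "forest", "herbaceousvegetation", "highway",
    "industrial", "pasture", "permanentcrop", "residential",
    "river", "sealake",
]
id2label = {i: name for i, name in enumerate(labels)}
label2id = {name: i for i, name in id2label.items()}
print(id2label[3], label2id["river"])  # highway 8
```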
hkivancoral/hushem_5x_deit_base_adamax_0001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_base_adamax_0001_fold1
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1576
- Accuracy: 0.7111
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.238 | 1.0 | 27 | 1.3398 | 0.2667 |
| 0.9269 | 2.0 | 54 | 1.2685 | 0.4444 |
| 0.6591 | 3.0 | 81 | 1.1740 | 0.5333 |
| 0.5112 | 4.0 | 108 | 1.1289 | 0.5556 |
| 0.3169 | 5.0 | 135 | 1.0720 | 0.5778 |
| 0.2415 | 6.0 | 162 | 0.9458 | 0.6 |
| 0.1769 | 7.0 | 189 | 0.9250 | 0.6 |
| 0.0983 | 8.0 | 216 | 0.8893 | 0.6667 |
| 0.0567 | 9.0 | 243 | 0.8959 | 0.7111 |
| 0.0295 | 10.0 | 270 | 1.0130 | 0.5778 |
| 0.0202 | 11.0 | 297 | 0.9509 | 0.6889 |
| 0.0113 | 12.0 | 324 | 0.9586 | 0.7111 |
| 0.0094 | 13.0 | 351 | 0.9844 | 0.6889 |
| 0.0072 | 14.0 | 378 | 0.9965 | 0.7333 |
| 0.0063 | 15.0 | 405 | 1.0087 | 0.7111 |
| 0.005 | 16.0 | 432 | 1.0089 | 0.6889 |
| 0.0041 | 17.0 | 459 | 1.0347 | 0.6889 |
| 0.004 | 18.0 | 486 | 1.0569 | 0.6889 |
| 0.0034 | 19.0 | 513 | 1.0522 | 0.6889 |
| 0.0031 | 20.0 | 540 | 1.0681 | 0.6889 |
| 0.0027 | 21.0 | 567 | 1.0686 | 0.6889 |
| 0.0026 | 22.0 | 594 | 1.0745 | 0.6889 |
| 0.0023 | 23.0 | 621 | 1.0948 | 0.6889 |
| 0.0022 | 24.0 | 648 | 1.0979 | 0.6889 |
| 0.0021 | 25.0 | 675 | 1.0958 | 0.6889 |
| 0.0021 | 26.0 | 702 | 1.1008 | 0.6889 |
| 0.0018 | 27.0 | 729 | 1.1079 | 0.6889 |
| 0.0017 | 28.0 | 756 | 1.1114 | 0.6889 |
| 0.0019 | 29.0 | 783 | 1.1187 | 0.6889 |
| 0.0017 | 30.0 | 810 | 1.1246 | 0.6889 |
| 0.0016 | 31.0 | 837 | 1.1229 | 0.6889 |
| 0.0016 | 32.0 | 864 | 1.1290 | 0.6889 |
| 0.0014 | 33.0 | 891 | 1.1312 | 0.6889 |
| 0.0015 | 34.0 | 918 | 1.1349 | 0.6889 |
| 0.0014 | 35.0 | 945 | 1.1402 | 0.6889 |
| 0.0013 | 36.0 | 972 | 1.1442 | 0.6889 |
| 0.0013 | 37.0 | 999 | 1.1434 | 0.6889 |
| 0.0012 | 38.0 | 1026 | 1.1425 | 0.7111 |
| 0.0012 | 39.0 | 1053 | 1.1512 | 0.6889 |
| 0.0011 | 40.0 | 1080 | 1.1497 | 0.6889 |
| 0.0012 | 41.0 | 1107 | 1.1525 | 0.6889 |
| 0.0012 | 42.0 | 1134 | 1.1548 | 0.6889 |
| 0.0012 | 43.0 | 1161 | 1.1577 | 0.6889 |
| 0.0011 | 44.0 | 1188 | 1.1573 | 0.6889 |
| 0.0011 | 45.0 | 1215 | 1.1575 | 0.6889 |
| 0.0011 | 46.0 | 1242 | 1.1575 | 0.7111 |
| 0.0011 | 47.0 | 1269 | 1.1575 | 0.7111 |
| 0.0011 | 48.0 | 1296 | 1.1576 | 0.7111 |
| 0.0011 | 49.0 | 1323 | 1.1576 | 0.7111 |
| 0.0011 | 50.0 | 1350 | 1.1576 | 0.7111 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_5x_deit_base_adamax_0001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_base_adamax_0001_fold2
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3720
- Accuracy: 0.6667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2573 | 1.0 | 27 | 1.3177 | 0.3778 |
| 0.9474 | 2.0 | 54 | 1.2698 | 0.4667 |
| 0.6743 | 3.0 | 81 | 1.1709 | 0.5333 |
| 0.53 | 4.0 | 108 | 1.1238 | 0.6 |
| 0.3327 | 5.0 | 135 | 1.1060 | 0.6 |
| 0.2187 | 6.0 | 162 | 1.0991 | 0.6444 |
| 0.1497 | 7.0 | 189 | 1.1072 | 0.6444 |
| 0.086 | 8.0 | 216 | 1.1220 | 0.6444 |
| 0.0449 | 9.0 | 243 | 1.1215 | 0.6444 |
| 0.0257 | 10.0 | 270 | 1.1368 | 0.6667 |
| 0.0174 | 11.0 | 297 | 1.1587 | 0.6667 |
| 0.0102 | 12.0 | 324 | 1.1715 | 0.6889 |
| 0.0083 | 13.0 | 351 | 1.2117 | 0.6889 |
| 0.0067 | 14.0 | 378 | 1.2042 | 0.6889 |
| 0.0061 | 15.0 | 405 | 1.2320 | 0.6889 |
| 0.0048 | 16.0 | 432 | 1.2396 | 0.6889 |
| 0.0043 | 17.0 | 459 | 1.2501 | 0.6889 |
| 0.0039 | 18.0 | 486 | 1.2585 | 0.6667 |
| 0.0034 | 19.0 | 513 | 1.2714 | 0.6889 |
| 0.0031 | 20.0 | 540 | 1.2786 | 0.6889 |
| 0.0029 | 21.0 | 567 | 1.2831 | 0.6667 |
| 0.0026 | 22.0 | 594 | 1.2886 | 0.6667 |
| 0.0022 | 23.0 | 621 | 1.2985 | 0.6667 |
| 0.0022 | 24.0 | 648 | 1.3036 | 0.6667 |
| 0.002 | 25.0 | 675 | 1.3071 | 0.6667 |
| 0.002 | 26.0 | 702 | 1.3150 | 0.6667 |
| 0.0017 | 27.0 | 729 | 1.3222 | 0.6667 |
| 0.0018 | 28.0 | 756 | 1.3235 | 0.6667 |
| 0.0018 | 29.0 | 783 | 1.3294 | 0.6667 |
| 0.0017 | 30.0 | 810 | 1.3351 | 0.6667 |
| 0.0015 | 31.0 | 837 | 1.3358 | 0.6667 |
| 0.0016 | 32.0 | 864 | 1.3406 | 0.6667 |
| 0.0015 | 33.0 | 891 | 1.3434 | 0.6667 |
| 0.0014 | 34.0 | 918 | 1.3481 | 0.6667 |
| 0.0013 | 35.0 | 945 | 1.3523 | 0.6667 |
| 0.0013 | 36.0 | 972 | 1.3535 | 0.6667 |
| 0.0013 | 37.0 | 999 | 1.3558 | 0.6667 |
| 0.0012 | 38.0 | 1026 | 1.3590 | 0.6667 |
| 0.0012 | 39.0 | 1053 | 1.3619 | 0.6667 |
| 0.0011 | 40.0 | 1080 | 1.3634 | 0.6667 |
| 0.0012 | 41.0 | 1107 | 1.3657 | 0.6667 |
| 0.0011 | 42.0 | 1134 | 1.3669 | 0.6667 |
| 0.0011 | 43.0 | 1161 | 1.3696 | 0.6667 |
| 0.0011 | 44.0 | 1188 | 1.3699 | 0.6667 |
| 0.0011 | 45.0 | 1215 | 1.3707 | 0.6667 |
| 0.0011 | 46.0 | 1242 | 1.3712 | 0.6667 |
| 0.0011 | 47.0 | 1269 | 1.3718 | 0.6667 |
| 0.0011 | 48.0 | 1296 | 1.3720 | 0.6667 |
| 0.0011 | 49.0 | 1323 | 1.3720 | 0.6667 |
| 0.0011 | 50.0 | 1350 | 1.3720 | 0.6667 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
jordyvl/dit-base_tobacco-small_tobacco3482_kd_NKD_t1.0_g1.5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-base_tobacco-small_tobacco3482_kd_NKD_t1.0_g1.5
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1084
- Accuracy: 0.825
- Brier Loss: 0.2907
- Nll: 1.2013
- F1 Micro: 0.825
- F1 Macro: 0.8171
- Ece: 0.1500
- Aurc: 0.0459
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 5.5631 | 0.135 | 0.9164 | 5.3726 | 0.135 | 0.1126 | 0.2543 | 0.8397 |
| No log | 2.0 | 14 | 4.9048 | 0.35 | 0.8238 | 3.0911 | 0.35 | 0.2637 | 0.3348 | 0.6709 |
| No log | 3.0 | 21 | 4.1439 | 0.49 | 0.6650 | 1.8580 | 0.49 | 0.4532 | 0.2990 | 0.2902 |
| No log | 4.0 | 28 | 3.5518 | 0.66 | 0.4867 | 1.6397 | 0.66 | 0.6303 | 0.2902 | 0.1489 |
| No log | 5.0 | 35 | 3.3371 | 0.755 | 0.3981 | 1.6213 | 0.755 | 0.7261 | 0.2670 | 0.0984 |
| No log | 6.0 | 42 | 3.4978 | 0.69 | 0.4211 | 1.5668 | 0.69 | 0.6792 | 0.2240 | 0.1170 |
| No log | 7.0 | 49 | 3.0945 | 0.795 | 0.3094 | 1.5507 | 0.795 | 0.7653 | 0.1765 | 0.0622 |
| No log | 8.0 | 56 | 3.0882 | 0.775 | 0.3056 | 1.5470 | 0.775 | 0.7500 | 0.1826 | 0.0634 |
| No log | 9.0 | 63 | 3.1861 | 0.745 | 0.3331 | 1.6432 | 0.745 | 0.7362 | 0.1822 | 0.0754 |
| No log | 10.0 | 70 | 2.9849 | 0.81 | 0.2789 | 1.5850 | 0.81 | 0.7802 | 0.1559 | 0.0548 |
| No log | 11.0 | 77 | 3.0131 | 0.795 | 0.3012 | 1.4820 | 0.795 | 0.7720 | 0.1627 | 0.0567 |
| No log | 12.0 | 84 | 2.9054 | 0.795 | 0.2734 | 1.4141 | 0.795 | 0.7843 | 0.1501 | 0.0535 |
| No log | 13.0 | 91 | 2.9704 | 0.815 | 0.2720 | 1.4241 | 0.815 | 0.8144 | 0.1584 | 0.0536 |
| No log | 14.0 | 98 | 2.9393 | 0.815 | 0.2627 | 1.4735 | 0.815 | 0.7902 | 0.1582 | 0.0504 |
| No log | 15.0 | 105 | 3.0346 | 0.805 | 0.2963 | 1.3649 | 0.805 | 0.7973 | 0.1617 | 0.0564 |
| No log | 16.0 | 112 | 2.9648 | 0.79 | 0.2839 | 1.6270 | 0.79 | 0.7722 | 0.1418 | 0.0525 |
| No log | 17.0 | 119 | 3.0458 | 0.82 | 0.2960 | 1.3476 | 0.82 | 0.8048 | 0.1575 | 0.0622 |
| No log | 18.0 | 126 | 2.8571 | 0.82 | 0.2754 | 1.3958 | 0.82 | 0.8081 | 0.1482 | 0.0493 |
| No log | 19.0 | 133 | 2.9429 | 0.775 | 0.2971 | 1.4302 | 0.775 | 0.7617 | 0.1616 | 0.0575 |
| No log | 20.0 | 140 | 2.8274 | 0.825 | 0.2698 | 1.3759 | 0.825 | 0.8081 | 0.1520 | 0.0449 |
| No log | 21.0 | 147 | 2.8769 | 0.81 | 0.2713 | 1.3604 | 0.81 | 0.8086 | 0.1390 | 0.0466 |
| No log | 22.0 | 154 | 2.8787 | 0.805 | 0.2694 | 1.3016 | 0.805 | 0.7975 | 0.1522 | 0.0435 |
| No log | 23.0 | 161 | 2.8771 | 0.825 | 0.2646 | 1.4753 | 0.825 | 0.8215 | 0.1414 | 0.0485 |
| No log | 24.0 | 168 | 2.8950 | 0.805 | 0.2774 | 1.2783 | 0.805 | 0.7754 | 0.1406 | 0.0495 |
| No log | 25.0 | 175 | 2.9780 | 0.825 | 0.2829 | 1.3207 | 0.825 | 0.8332 | 0.1402 | 0.0496 |
| No log | 26.0 | 182 | 2.8906 | 0.82 | 0.2653 | 1.3097 | 0.82 | 0.8007 | 0.1380 | 0.0454 |
| No log | 27.0 | 189 | 2.9385 | 0.82 | 0.2778 | 1.3039 | 0.82 | 0.8211 | 0.1489 | 0.0469 |
| No log | 28.0 | 196 | 2.8644 | 0.83 | 0.2618 | 1.4004 | 0.83 | 0.8325 | 0.1358 | 0.0494 |
| No log | 29.0 | 203 | 2.8761 | 0.82 | 0.2720 | 1.2220 | 0.82 | 0.8192 | 0.1411 | 0.0463 |
| No log | 30.0 | 210 | 2.8594 | 0.83 | 0.2620 | 1.3323 | 0.83 | 0.8130 | 0.1257 | 0.0448 |
| No log | 31.0 | 217 | 2.8946 | 0.825 | 0.2658 | 1.3388 | 0.825 | 0.8236 | 0.1322 | 0.0427 |
| No log | 32.0 | 224 | 2.8698 | 0.825 | 0.2712 | 1.3141 | 0.825 | 0.8107 | 0.1467 | 0.0473 |
| No log | 33.0 | 231 | 2.8106 | 0.83 | 0.2563 | 1.3750 | 0.83 | 0.8178 | 0.1126 | 0.0422 |
| No log | 34.0 | 238 | 2.9752 | 0.8 | 0.2881 | 1.3007 | 0.8000 | 0.7902 | 0.1522 | 0.0499 |
| No log | 35.0 | 245 | 2.8919 | 0.815 | 0.2886 | 1.3057 | 0.815 | 0.8149 | 0.1472 | 0.0468 |
| No log | 36.0 | 252 | 2.8863 | 0.81 | 0.2833 | 1.1973 | 0.81 | 0.8006 | 0.1453 | 0.0458 |
| No log | 37.0 | 259 | 2.8283 | 0.845 | 0.2685 | 1.2743 | 0.845 | 0.8438 | 0.1481 | 0.0451 |
| No log | 38.0 | 266 | 2.9174 | 0.815 | 0.2825 | 1.2658 | 0.815 | 0.7965 | 0.1408 | 0.0530 |
| No log | 39.0 | 273 | 2.8837 | 0.82 | 0.2775 | 1.2946 | 0.82 | 0.8050 | 0.1440 | 0.0472 |
| No log | 40.0 | 280 | 2.8585 | 0.835 | 0.2654 | 1.2830 | 0.835 | 0.8169 | 0.1450 | 0.0467 |
| No log | 41.0 | 287 | 2.9323 | 0.82 | 0.2809 | 1.2833 | 0.82 | 0.8085 | 0.1342 | 0.0490 |
| No log | 42.0 | 294 | 2.9525 | 0.82 | 0.2847 | 1.2331 | 0.82 | 0.8055 | 0.1352 | 0.0481 |
| No log | 43.0 | 301 | 2.9005 | 0.83 | 0.2819 | 1.2643 | 0.83 | 0.8225 | 0.1548 | 0.0482 |
| No log | 44.0 | 308 | 2.8388 | 0.83 | 0.2634 | 1.2662 | 0.83 | 0.8152 | 0.1286 | 0.0460 |
| No log | 45.0 | 315 | 2.8962 | 0.82 | 0.2752 | 1.3291 | 0.82 | 0.8127 | 0.1442 | 0.0496 |
| No log | 46.0 | 322 | 2.9479 | 0.815 | 0.2883 | 1.2433 | 0.815 | 0.7968 | 0.1540 | 0.0523 |
| No log | 47.0 | 329 | 2.8795 | 0.825 | 0.2737 | 1.2477 | 0.825 | 0.8260 | 0.1295 | 0.0447 |
| No log | 48.0 | 336 | 2.9872 | 0.815 | 0.2992 | 1.2556 | 0.815 | 0.8029 | 0.1379 | 0.0510 |
| No log | 49.0 | 343 | 2.8156 | 0.84 | 0.2536 | 1.2715 | 0.8400 | 0.8263 | 0.1240 | 0.0422 |
| No log | 50.0 | 350 | 2.9534 | 0.81 | 0.2924 | 1.3383 | 0.81 | 0.7937 | 0.1471 | 0.0478 |
| No log | 51.0 | 357 | 2.8604 | 0.855 | 0.2549 | 1.2566 | 0.855 | 0.8547 | 0.1318 | 0.0411 |
| No log | 52.0 | 364 | 2.9769 | 0.825 | 0.2828 | 1.2325 | 0.825 | 0.8160 | 0.1407 | 0.0480 |
| No log | 53.0 | 371 | 2.8717 | 0.84 | 0.2635 | 1.2511 | 0.8400 | 0.8342 | 0.1254 | 0.0434 |
| No log | 54.0 | 378 | 2.9313 | 0.825 | 0.2704 | 1.2676 | 0.825 | 0.8159 | 0.1310 | 0.0477 |
| No log | 55.0 | 385 | 2.8552 | 0.82 | 0.2638 | 1.2417 | 0.82 | 0.8031 | 0.1490 | 0.0435 |
| No log | 56.0 | 392 | 2.9680 | 0.845 | 0.2729 | 1.2530 | 0.845 | 0.8414 | 0.1349 | 0.0452 |
| No log | 57.0 | 399 | 2.9440 | 0.83 | 0.2796 | 1.2344 | 0.83 | 0.8222 | 0.1367 | 0.0450 |
| No log | 58.0 | 406 | 3.0577 | 0.815 | 0.2913 | 1.2232 | 0.815 | 0.8068 | 0.1447 | 0.0488 |
| No log | 59.0 | 413 | 2.8861 | 0.835 | 0.2643 | 1.2618 | 0.835 | 0.8280 | 0.1354 | 0.0422 |
| No log | 60.0 | 420 | 3.0007 | 0.825 | 0.2822 | 1.2352 | 0.825 | 0.8136 | 0.1342 | 0.0449 |
| No log | 61.0 | 427 | 2.9368 | 0.835 | 0.2746 | 1.2437 | 0.835 | 0.8258 | 0.1402 | 0.0437 |
| No log | 62.0 | 434 | 2.9202 | 0.835 | 0.2709 | 1.2281 | 0.835 | 0.8258 | 0.1435 | 0.0435 |
| No log | 63.0 | 441 | 2.9720 | 0.835 | 0.2768 | 1.2129 | 0.835 | 0.8354 | 0.1444 | 0.0460 |
| No log | 64.0 | 448 | 2.9993 | 0.835 | 0.2815 | 1.2250 | 0.835 | 0.8245 | 0.1526 | 0.0451 |
| No log | 65.0 | 455 | 2.9628 | 0.83 | 0.2725 | 1.2477 | 0.83 | 0.8190 | 0.1405 | 0.0439 |
| No log | 66.0 | 462 | 3.0418 | 0.825 | 0.2863 | 1.2244 | 0.825 | 0.8142 | 0.1447 | 0.0473 |
| No log | 67.0 | 469 | 3.0196 | 0.83 | 0.2797 | 1.2317 | 0.83 | 0.8223 | 0.1450 | 0.0463 |
| No log | 68.0 | 476 | 3.0227 | 0.835 | 0.2834 | 1.2362 | 0.835 | 0.8270 | 0.1416 | 0.0446 |
| No log | 69.0 | 483 | 3.0343 | 0.835 | 0.2837 | 1.2377 | 0.835 | 0.8310 | 0.1423 | 0.0455 |
| No log | 70.0 | 490 | 2.9982 | 0.835 | 0.2755 | 1.2247 | 0.835 | 0.8245 | 0.1306 | 0.0443 |
| No log | 71.0 | 497 | 3.0230 | 0.825 | 0.2860 | 1.2302 | 0.825 | 0.8171 | 0.1376 | 0.0464 |
| 2.5595 | 72.0 | 504 | 3.0254 | 0.83 | 0.2843 | 1.2190 | 0.83 | 0.8222 | 0.1386 | 0.0463 |
| 2.5595 | 73.0 | 511 | 3.0295 | 0.825 | 0.2851 | 1.2206 | 0.825 | 0.8192 | 0.1417 | 0.0462 |
| 2.5595 | 74.0 | 518 | 3.0381 | 0.83 | 0.2845 | 1.2130 | 0.83 | 0.8243 | 0.1423 | 0.0457 |
| 2.5595 | 75.0 | 525 | 3.0258 | 0.825 | 0.2837 | 1.2210 | 0.825 | 0.8171 | 0.1431 | 0.0460 |
| 2.5595 | 76.0 | 532 | 3.0694 | 0.825 | 0.2886 | 1.2091 | 0.825 | 0.8171 | 0.1533 | 0.0476 |
| 2.5595 | 77.0 | 539 | 3.0924 | 0.825 | 0.2939 | 1.2130 | 0.825 | 0.8171 | 0.1515 | 0.0473 |
| 2.5595 | 78.0 | 546 | 3.0956 | 0.82 | 0.2921 | 1.2081 | 0.82 | 0.8140 | 0.1539 | 0.0482 |
| 2.5595 | 79.0 | 553 | 3.0859 | 0.825 | 0.2884 | 1.2109 | 0.825 | 0.8220 | 0.1480 | 0.0468 |
| 2.5595 | 80.0 | 560 | 3.0740 | 0.825 | 0.2894 | 1.2081 | 0.825 | 0.8136 | 0.1399 | 0.0459 |
| 2.5595 | 81.0 | 567 | 3.0776 | 0.825 | 0.2901 | 1.2066 | 0.825 | 0.8171 | 0.1502 | 0.0462 |
| 2.5595 | 82.0 | 574 | 3.0736 | 0.83 | 0.2869 | 1.2100 | 0.83 | 0.8251 | 0.1405 | 0.0462 |
| 2.5595 | 83.0 | 581 | 3.0943 | 0.825 | 0.2919 | 1.2065 | 0.825 | 0.8171 | 0.1503 | 0.0464 |
| 2.5595 | 84.0 | 588 | 3.0857 | 0.825 | 0.2908 | 1.2080 | 0.825 | 0.8171 | 0.1456 | 0.0461 |
| 2.5595 | 85.0 | 595 | 3.0874 | 0.825 | 0.2890 | 1.2063 | 0.825 | 0.8171 | 0.1457 | 0.0461 |
| 2.5595 | 86.0 | 602 | 3.0863 | 0.825 | 0.2880 | 1.2069 | 0.825 | 0.8171 | 0.1453 | 0.0459 |
| 2.5595 | 87.0 | 609 | 3.0844 | 0.825 | 0.2882 | 1.2059 | 0.825 | 0.8171 | 0.1457 | 0.0456 |
| 2.5595 | 88.0 | 616 | 3.1011 | 0.825 | 0.2909 | 1.2034 | 0.825 | 0.8171 | 0.1557 | 0.0462 |
| 2.5595 | 89.0 | 623 | 3.1033 | 0.825 | 0.2912 | 1.2033 | 0.825 | 0.8171 | 0.1528 | 0.0463 |
| 2.5595 | 90.0 | 630 | 3.1004 | 0.825 | 0.2903 | 1.2029 | 0.825 | 0.8171 | 0.1541 | 0.0461 |
| 2.5595 | 91.0 | 637 | 3.0998 | 0.825 | 0.2900 | 1.2033 | 0.825 | 0.8171 | 0.1499 | 0.0459 |
| 2.5595 | 92.0 | 644 | 3.1039 | 0.825 | 0.2904 | 1.2023 | 0.825 | 0.8171 | 0.1535 | 0.0460 |
| 2.5595 | 93.0 | 651 | 3.1058 | 0.825 | 0.2906 | 1.2020 | 0.825 | 0.8171 | 0.1498 | 0.0460 |
| 2.5595 | 94.0 | 658 | 3.1057 | 0.825 | 0.2906 | 1.2022 | 0.825 | 0.8171 | 0.1504 | 0.0459 |
| 2.5595 | 95.0 | 665 | 3.1066 | 0.825 | 0.2908 | 1.2018 | 0.825 | 0.8171 | 0.1509 | 0.0460 |
| 2.5595 | 96.0 | 672 | 3.1069 | 0.825 | 0.2906 | 1.2018 | 0.825 | 0.8171 | 0.1506 | 0.0459 |
| 2.5595 | 97.0 | 679 | 3.1079 | 0.825 | 0.2906 | 1.2013 | 0.825 | 0.8171 | 0.1497 | 0.0459 |
| 2.5595 | 98.0 | 686 | 3.1085 | 0.825 | 0.2907 | 1.2013 | 0.825 | 0.8171 | 0.1500 | 0.0459 |
| 2.5595 | 99.0 | 693 | 3.1083 | 0.825 | 0.2907 | 1.2013 | 0.825 | 0.8171 | 0.1499 | 0.0460 |
| 2.5595 | 100.0 | 700 | 3.1084 | 0.825 | 0.2907 | 1.2013 | 0.825 | 0.8171 | 0.1500 | 0.0459 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.2.0.dev20231112+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
[
"adve",
"email",
"form",
"letter",
"memo",
"news",
"note",
"report",
"resume",
"scientific"
] |
hkivancoral/hushem_5x_deit_base_adamax_0001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_base_adamax_0001_fold3
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4419
- Accuracy: 0.8372
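This card covers a single fold of a 5-fold run (`5x_..._fold3`); the headline result for the experiment would normally be the mean and standard deviation of validation accuracy across all five folds. A sketch of that aggregation, with placeholder values for the other folds (only the 0.8372 entry comes from this card; substitute the real numbers from the sibling fold cards):

```python
import statistics

# Per-fold validation accuracies. Only fold 3 (0.8372) is from this
# card; the other four values are illustrative placeholders.
fold_accuracies = [0.90, 0.84, 0.8372, 0.88, 0.86]

mean_acc = statistics.mean(fold_accuracies)
std_acc = statistics.stdev(fold_accuracies)
print(f"5-fold accuracy: {mean_acc:.4f} ± {std_acc:.4f}")
```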
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3275 | 1.0 | 28 | 1.2372 | 0.5814 |
| 1.0641 | 2.0 | 56 | 1.0484 | 0.6977 |
| 0.7591 | 3.0 | 84 | 0.8760 | 0.7442 |
| 0.5652 | 4.0 | 112 | 0.7360 | 0.8140 |
| 0.3906 | 5.0 | 140 | 0.6489 | 0.8372 |
| 0.3059 | 6.0 | 168 | 0.5954 | 0.8605 |
| 0.1994 | 7.0 | 196 | 0.5269 | 0.8372 |
| 0.134 | 8.0 | 224 | 0.5174 | 0.8605 |
| 0.0783 | 9.0 | 252 | 0.4602 | 0.8605 |
| 0.0454 | 10.0 | 280 | 0.4569 | 0.8372 |
| 0.0318 | 11.0 | 308 | 0.4393 | 0.8837 |
| 0.018 | 12.0 | 336 | 0.4222 | 0.8605 |
| 0.0132 | 13.0 | 364 | 0.4453 | 0.8837 |
| 0.0088 | 14.0 | 392 | 0.4098 | 0.8837 |
| 0.0068 | 15.0 | 420 | 0.4226 | 0.8605 |
| 0.0058 | 16.0 | 448 | 0.4268 | 0.8605 |
| 0.0055 | 17.0 | 476 | 0.4132 | 0.8605 |
| 0.0045 | 18.0 | 504 | 0.4342 | 0.8605 |
| 0.004 | 19.0 | 532 | 0.4228 | 0.8605 |
| 0.0033 | 20.0 | 560 | 0.4271 | 0.8372 |
| 0.0033 | 21.0 | 588 | 0.4254 | 0.8372 |
| 0.0029 | 22.0 | 616 | 0.4205 | 0.8372 |
| 0.0027 | 23.0 | 644 | 0.4207 | 0.8372 |
| 0.0024 | 24.0 | 672 | 0.4248 | 0.8605 |
| 0.0022 | 25.0 | 700 | 0.4229 | 0.8372 |
| 0.0021 | 26.0 | 728 | 0.4293 | 0.8372 |
| 0.002 | 27.0 | 756 | 0.4267 | 0.8372 |
| 0.002 | 28.0 | 784 | 0.4239 | 0.8605 |
| 0.0018 | 29.0 | 812 | 0.4273 | 0.8372 |
| 0.0018 | 30.0 | 840 | 0.4313 | 0.8372 |
| 0.0016 | 31.0 | 868 | 0.4289 | 0.8372 |
| 0.0016 | 32.0 | 896 | 0.4329 | 0.8372 |
| 0.0016 | 33.0 | 924 | 0.4313 | 0.8372 |
| 0.0014 | 34.0 | 952 | 0.4362 | 0.8372 |
| 0.0016 | 35.0 | 980 | 0.4336 | 0.8372 |
| 0.0014 | 36.0 | 1008 | 0.4353 | 0.8372 |
| 0.0014 | 37.0 | 1036 | 0.4446 | 0.8372 |
| 0.0013 | 38.0 | 1064 | 0.4482 | 0.8372 |
| 0.0013 | 39.0 | 1092 | 0.4496 | 0.8372 |
| 0.0012 | 40.0 | 1120 | 0.4442 | 0.8372 |
| 0.0013 | 41.0 | 1148 | 0.4456 | 0.8372 |
| 0.0013 | 42.0 | 1176 | 0.4450 | 0.8372 |
| 0.0012 | 43.0 | 1204 | 0.4433 | 0.8372 |
| 0.0012 | 44.0 | 1232 | 0.4424 | 0.8372 |
| 0.0011 | 45.0 | 1260 | 0.4418 | 0.8372 |
| 0.0011 | 46.0 | 1288 | 0.4417 | 0.8372 |
| 0.0011 | 47.0 | 1316 | 0.4421 | 0.8372 |
| 0.0011 | 48.0 | 1344 | 0.4419 | 0.8372 |
| 0.0011 | 49.0 | 1372 | 0.4419 | 0.8372 |
| 0.0011 | 50.0 | 1400 | 0.4419 | 0.8372 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_5x_deit_base_adamax_0001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_base_adamax_0001_fold4
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0332
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7295 | 1.0 | 28 | 0.4220 | 0.8095 |
| 0.0888 | 2.0 | 56 | 0.1615 | 0.9286 |
| 0.0158 | 3.0 | 84 | 0.0867 | 0.9524 |
| 0.0047 | 4.0 | 112 | 0.1052 | 0.9524 |
| 0.001 | 5.0 | 140 | 0.1157 | 0.9524 |
| 0.0007 | 6.0 | 168 | 0.0506 | 0.9762 |
| 0.0004 | 7.0 | 196 | 0.0523 | 0.9762 |
| 0.0003 | 8.0 | 224 | 0.0521 | 1.0 |
| 0.0003 | 9.0 | 252 | 0.0486 | 1.0 |
| 0.0002 | 10.0 | 280 | 0.0478 | 1.0 |
| 0.0002 | 11.0 | 308 | 0.0461 | 1.0 |
| 0.0002 | 12.0 | 336 | 0.0436 | 1.0 |
| 0.0002 | 13.0 | 364 | 0.0422 | 1.0 |
| 0.0002 | 14.0 | 392 | 0.0427 | 1.0 |
| 0.0002 | 15.0 | 420 | 0.0410 | 1.0 |
| 0.0001 | 16.0 | 448 | 0.0408 | 1.0 |
| 0.0001 | 17.0 | 476 | 0.0398 | 1.0 |
| 0.0001 | 18.0 | 504 | 0.0389 | 1.0 |
| 0.0001 | 19.0 | 532 | 0.0389 | 1.0 |
| 0.0001 | 20.0 | 560 | 0.0386 | 1.0 |
| 0.0001 | 21.0 | 588 | 0.0377 | 1.0 |
| 0.0001 | 22.0 | 616 | 0.0375 | 1.0 |
| 0.0001 | 23.0 | 644 | 0.0368 | 1.0 |
| 0.0001 | 24.0 | 672 | 0.0368 | 1.0 |
| 0.0001 | 25.0 | 700 | 0.0364 | 1.0 |
| 0.0001 | 26.0 | 728 | 0.0360 | 1.0 |
| 0.0001 | 27.0 | 756 | 0.0354 | 1.0 |
| 0.0001 | 28.0 | 784 | 0.0352 | 1.0 |
| 0.0001 | 29.0 | 812 | 0.0345 | 1.0 |
| 0.0001 | 30.0 | 840 | 0.0346 | 1.0 |
| 0.0001 | 31.0 | 868 | 0.0344 | 1.0 |
| 0.0001 | 32.0 | 896 | 0.0342 | 1.0 |
| 0.0001 | 33.0 | 924 | 0.0343 | 1.0 |
| 0.0001 | 34.0 | 952 | 0.0340 | 1.0 |
| 0.0001 | 35.0 | 980 | 0.0336 | 1.0 |
| 0.0001 | 36.0 | 1008 | 0.0333 | 1.0 |
| 0.0001 | 37.0 | 1036 | 0.0331 | 1.0 |
| 0.0001 | 38.0 | 1064 | 0.0333 | 1.0 |
| 0.0001 | 39.0 | 1092 | 0.0331 | 1.0 |
| 0.0001 | 40.0 | 1120 | 0.0332 | 1.0 |
| 0.0001 | 41.0 | 1148 | 0.0332 | 1.0 |
| 0.0001 | 42.0 | 1176 | 0.0331 | 1.0 |
| 0.0001 | 43.0 | 1204 | 0.0330 | 1.0 |
| 0.0001 | 44.0 | 1232 | 0.0331 | 1.0 |
| 0.0001 | 45.0 | 1260 | 0.0330 | 1.0 |
| 0.0001 | 46.0 | 1288 | 0.0331 | 1.0 |
| 0.0001 | 47.0 | 1316 | 0.0332 | 1.0 |
| 0.0001 | 48.0 | 1344 | 0.0332 | 1.0 |
| 0.0001 | 49.0 | 1372 | 0.0332 | 1.0 |
| 0.0001 | 50.0 | 1400 | 0.0332 | 1.0 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
jordyvl/dit-base_tobacco-small_tobacco3482_hint
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-base_tobacco-small_tobacco3482_hint
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9099
- Accuracy: 0.85
- Brier Loss: 0.2772
- Nll: 1.4757
- F1 Micro: 0.85
- F1 Macro: 0.8366
- Ece: 0.1392
- Aurc: 0.0460
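The `_hint` suffix suggests FitNets-style hint training, where the student is additionally trained to match an intermediate teacher representation via mean-squared error, alongside the usual task loss. A minimal sketch under that assumption (in practice the student features first pass through a learned regressor so the dimensions agree):

```python
def hint_loss(student_feats, teacher_feats):
    """Mean-squared error between dimension-matched student and teacher
    feature vectors -- the FitNets-style 'hint' term. This is a generic
    sketch, not necessarily the exact loss used for this run."""
    assert len(student_feats) == len(teacher_feats)
    return sum((s - t) ** 2 for s, t in zip(student_feats, teacher_feats)) / len(student_feats)

print(hint_loss([0.1, 0.4, -0.2], [0.1, 0.4, -0.2]))  # → 0.0
print(hint_loss([0.0, 0.0], [1.0, -1.0]))             # → 1.0
```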
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 50 | 3.2233 | 0.44 | 0.7001 | 2.8339 | 0.44 | 0.3067 | 0.2724 | 0.3661 |
| No log | 2.0 | 100 | 2.3954 | 0.705 | 0.4016 | 1.5814 | 0.705 | 0.6657 | 0.2046 | 0.1093 |
| No log | 3.0 | 150 | 2.1938 | 0.735 | 0.3560 | 1.5685 | 0.735 | 0.7026 | 0.1879 | 0.0858 |
| No log | 4.0 | 200 | 2.0989 | 0.74 | 0.3533 | 1.5416 | 0.74 | 0.7058 | 0.2015 | 0.0896 |
| No log | 5.0 | 250 | 2.0203 | 0.795 | 0.3169 | 1.5407 | 0.795 | 0.7861 | 0.1773 | 0.0919 |
| No log | 6.0 | 300 | 2.1849 | 0.675 | 0.4531 | 1.6333 | 0.675 | 0.6701 | 0.2207 | 0.1166 |
| No log | 7.0 | 350 | 2.2223 | 0.745 | 0.4113 | 1.4333 | 0.745 | 0.7293 | 0.2045 | 0.0980 |
| No log | 8.0 | 400 | 2.1696 | 0.715 | 0.4221 | 1.6537 | 0.715 | 0.6723 | 0.2069 | 0.1040 |
| No log | 9.0 | 450 | 2.4443 | 0.735 | 0.4291 | 1.5392 | 0.735 | 0.7458 | 0.2236 | 0.1323 |
| 1.8536 | 10.0 | 500 | 2.0474 | 0.775 | 0.3649 | 1.6156 | 0.775 | 0.7528 | 0.1915 | 0.0844 |
| 1.8536 | 11.0 | 550 | 2.0046 | 0.81 | 0.3170 | 1.6225 | 0.81 | 0.7920 | 0.1547 | 0.0639 |
| 1.8536 | 12.0 | 600 | 2.4864 | 0.725 | 0.4602 | 1.5678 | 0.7250 | 0.7308 | 0.2415 | 0.1218 |
| 1.8536 | 13.0 | 650 | 1.8413 | 0.83 | 0.2698 | 1.6361 | 0.83 | 0.8117 | 0.1349 | 0.0674 |
| 1.8536 | 14.0 | 700 | 2.1304 | 0.815 | 0.3281 | 1.5685 | 0.815 | 0.7936 | 0.1715 | 0.0703 |
| 1.8536 | 15.0 | 750 | 2.5075 | 0.71 | 0.4652 | 1.9297 | 0.7100 | 0.6877 | 0.2281 | 0.1099 |
| 1.8536 | 16.0 | 800 | 2.4854 | 0.73 | 0.4462 | 1.5241 | 0.7300 | 0.7176 | 0.2282 | 0.1097 |
| 1.8536 | 17.0 | 850 | 2.1252 | 0.805 | 0.3210 | 1.5685 | 0.805 | 0.7907 | 0.1650 | 0.0804 |
| 1.8536 | 18.0 | 900 | 1.9249 | 0.86 | 0.2473 | 1.7031 | 0.8600 | 0.8689 | 0.1244 | 0.0528 |
| 1.8536 | 19.0 | 950 | 2.0943 | 0.835 | 0.2840 | 1.4696 | 0.835 | 0.8267 | 0.1439 | 0.0652 |
| 1.0941 | 20.0 | 1000 | 1.8548 | 0.845 | 0.2566 | 1.3059 | 0.845 | 0.8403 | 0.1333 | 0.0558 |
| 1.0941 | 21.0 | 1050 | 2.1487 | 0.805 | 0.3362 | 1.4556 | 0.805 | 0.8051 | 0.1665 | 0.0764 |
| 1.0941 | 22.0 | 1100 | 2.2147 | 0.81 | 0.3149 | 1.4884 | 0.81 | 0.8081 | 0.1710 | 0.0984 |
| 1.0941 | 23.0 | 1150 | 2.1111 | 0.84 | 0.2898 | 1.5426 | 0.8400 | 0.8410 | 0.1489 | 0.0848 |
| 1.0941 | 24.0 | 1200 | 2.2432 | 0.85 | 0.2884 | 1.7273 | 0.85 | 0.8482 | 0.1532 | 0.0765 |
| 1.0941 | 25.0 | 1250 | 2.3105 | 0.75 | 0.4190 | 1.4648 | 0.75 | 0.7396 | 0.2177 | 0.1074 |
| 1.0941 | 26.0 | 1300 | 2.0587 | 0.795 | 0.3444 | 1.6181 | 0.795 | 0.7960 | 0.1641 | 0.0799 |
| 1.0941 | 27.0 | 1350 | 2.4465 | 0.8 | 0.3517 | 2.0076 | 0.8000 | 0.7770 | 0.1731 | 0.0849 |
| 1.0941 | 28.0 | 1400 | 2.1351 | 0.825 | 0.3132 | 1.5650 | 0.825 | 0.8315 | 0.1631 | 0.0553 |
| 1.0941 | 29.0 | 1450 | 1.9746 | 0.86 | 0.2451 | 1.5908 | 0.8600 | 0.8374 | 0.1267 | 0.0537 |
| 0.9575 | 30.0 | 1500 | 2.0257 | 0.855 | 0.2737 | 1.6541 | 0.855 | 0.8121 | 0.1352 | 0.0480 |
| 0.9575 | 31.0 | 1550 | 1.9631 | 0.84 | 0.3037 | 1.7341 | 0.8400 | 0.8201 | 0.1515 | 0.0423 |
| 0.9575 | 32.0 | 1600 | 2.4215 | 0.785 | 0.3909 | 1.4042 | 0.785 | 0.7740 | 0.2018 | 0.0708 |
| 0.9575 | 33.0 | 1650 | 2.2159 | 0.795 | 0.3492 | 1.7639 | 0.795 | 0.7716 | 0.1721 | 0.0537 |
| 0.9575 | 34.0 | 1700 | 2.3363 | 0.82 | 0.3132 | 1.9858 | 0.82 | 0.7993 | 0.1610 | 0.0845 |
| 0.9575 | 35.0 | 1750 | 2.2187 | 0.84 | 0.2884 | 1.5376 | 0.8400 | 0.8182 | 0.1523 | 0.0803 |
| 0.9575 | 36.0 | 1800 | 2.3407 | 0.825 | 0.3206 | 1.8292 | 0.825 | 0.8028 | 0.1588 | 0.0719 |
| 0.9575 | 37.0 | 1850 | 2.4302 | 0.815 | 0.3353 | 1.7611 | 0.815 | 0.8091 | 0.1654 | 0.0920 |
| 0.9575 | 38.0 | 1900 | 2.3307 | 0.815 | 0.3269 | 1.8263 | 0.815 | 0.8043 | 0.1675 | 0.0876 |
| 0.9575 | 39.0 | 1950 | 2.2905 | 0.825 | 0.3217 | 1.7612 | 0.825 | 0.8116 | 0.1639 | 0.0841 |
| 0.8923 | 40.0 | 2000 | 2.2699 | 0.83 | 0.3225 | 1.7537 | 0.83 | 0.8186 | 0.1655 | 0.0792 |
| 0.8923 | 41.0 | 2050 | 2.2327 | 0.83 | 0.3179 | 1.7534 | 0.83 | 0.8186 | 0.1559 | 0.0764 |
| 0.8923 | 42.0 | 2100 | 2.2852 | 0.825 | 0.3230 | 1.6737 | 0.825 | 0.8150 | 0.1611 | 0.0760 |
| 0.8923 | 43.0 | 2150 | 2.2597 | 0.825 | 0.3221 | 1.6727 | 0.825 | 0.8147 | 0.1610 | 0.0734 |
| 0.8923 | 44.0 | 2200 | 2.2492 | 0.83 | 0.3176 | 1.6692 | 0.83 | 0.8169 | 0.1619 | 0.0720 |
| 0.8923 | 45.0 | 2250 | 2.2208 | 0.825 | 0.3182 | 1.6737 | 0.825 | 0.8124 | 0.1627 | 0.0707 |
| 0.8923 | 46.0 | 2300 | 2.2192 | 0.825 | 0.3209 | 1.6771 | 0.825 | 0.8121 | 0.1650 | 0.0712 |
| 0.8923 | 47.0 | 2350 | 2.2127 | 0.825 | 0.3198 | 1.6187 | 0.825 | 0.8124 | 0.1636 | 0.0684 |
| 0.8923 | 48.0 | 2400 | 2.2079 | 0.825 | 0.3208 | 1.6760 | 0.825 | 0.8121 | 0.1632 | 0.0707 |
| 0.8923 | 49.0 | 2450 | 2.1995 | 0.825 | 0.3187 | 1.5377 | 0.825 | 0.8124 | 0.1656 | 0.0702 |
| 0.8511 | 50.0 | 2500 | 2.1877 | 0.825 | 0.3158 | 1.6098 | 0.825 | 0.8124 | 0.1600 | 0.0690 |
| 0.8511 | 51.0 | 2550 | 2.1698 | 0.825 | 0.3167 | 1.5353 | 0.825 | 0.8124 | 0.1607 | 0.0695 |
| 0.8511 | 52.0 | 2600 | 2.1667 | 0.825 | 0.3133 | 1.5303 | 0.825 | 0.8121 | 0.1596 | 0.0680 |
| 0.8511 | 53.0 | 2650 | 2.1791 | 0.83 | 0.3170 | 1.5332 | 0.83 | 0.8149 | 0.1608 | 0.0690 |
| 0.8511 | 54.0 | 2700 | 2.1621 | 0.83 | 0.3148 | 1.5274 | 0.83 | 0.8146 | 0.1551 | 0.0693 |
| 0.8511 | 55.0 | 2750 | 2.1572 | 0.83 | 0.3119 | 1.5318 | 0.83 | 0.8149 | 0.1532 | 0.0680 |
| 0.8511 | 56.0 | 2800 | 2.1587 | 0.83 | 0.3100 | 1.5232 | 0.83 | 0.8148 | 0.1524 | 0.0712 |
| 0.8511 | 57.0 | 2850 | 2.1596 | 0.83 | 0.3101 | 1.5234 | 0.83 | 0.8146 | 0.1560 | 0.0696 |
| 0.8511 | 58.0 | 2900 | 2.1048 | 0.835 | 0.3047 | 1.5231 | 0.835 | 0.8189 | 0.1442 | 0.0676 |
| 0.8511 | 59.0 | 2950 | 2.4279 | 0.76 | 0.4096 | 1.4535 | 0.76 | 0.7538 | 0.2078 | 0.0731 |
| 0.8335 | 60.0 | 3000 | 2.2098 | 0.775 | 0.4036 | 1.4180 | 0.775 | 0.7565 | 0.2010 | 0.0870 |
| 0.8335 | 61.0 | 3050 | 2.0122 | 0.85 | 0.2596 | 1.5903 | 0.85 | 0.8272 | 0.1349 | 0.0779 |
| 0.8335 | 62.0 | 3100 | 2.2465 | 0.815 | 0.3311 | 1.6852 | 0.815 | 0.7899 | 0.1672 | 0.0658 |
| 0.8335 | 63.0 | 3150 | 2.1239 | 0.84 | 0.2963 | 1.6390 | 0.8400 | 0.8305 | 0.1458 | 0.0878 |
| 0.8335 | 64.0 | 3200 | 2.1931 | 0.82 | 0.3181 | 1.7037 | 0.82 | 0.8199 | 0.1654 | 0.0719 |
| 0.8335 | 65.0 | 3250 | 1.8262 | 0.855 | 0.2493 | 1.4845 | 0.855 | 0.8335 | 0.1297 | 0.0456 |
| 0.8335 | 66.0 | 3300 | 1.9467 | 0.845 | 0.2657 | 1.4217 | 0.845 | 0.8326 | 0.1361 | 0.0498 |
| 0.8335 | 67.0 | 3350 | 1.9371 | 0.85 | 0.2680 | 1.4175 | 0.85 | 0.8405 | 0.1293 | 0.0506 |
| 0.8335 | 68.0 | 3400 | 1.9172 | 0.85 | 0.2656 | 1.4203 | 0.85 | 0.8405 | 0.1331 | 0.0503 |
| 0.8335 | 69.0 | 3450 | 1.8872 | 0.845 | 0.2664 | 1.4327 | 0.845 | 0.8324 | 0.1360 | 0.0493 |
| 0.8281 | 70.0 | 3500 | 1.9045 | 0.845 | 0.2715 | 1.4920 | 0.845 | 0.8324 | 0.1377 | 0.0496 |
| 0.8281 | 71.0 | 3550 | 1.8954 | 0.845 | 0.2684 | 1.4919 | 0.845 | 0.8338 | 0.1385 | 0.0499 |
| 0.8281 | 72.0 | 3600 | 1.9222 | 0.85 | 0.2698 | 1.4870 | 0.85 | 0.8375 | 0.1356 | 0.0499 |
| 0.8281 | 73.0 | 3650 | 1.9004 | 0.845 | 0.2691 | 1.4912 | 0.845 | 0.8335 | 0.1377 | 0.0484 |
| 0.8281 | 74.0 | 3700 | 1.9168 | 0.85 | 0.2693 | 1.4903 | 0.85 | 0.8375 | 0.1338 | 0.0495 |
| 0.8281 | 75.0 | 3750 | 1.8970 | 0.85 | 0.2700 | 1.4908 | 0.85 | 0.8366 | 0.1416 | 0.0477 |
| 0.8281 | 76.0 | 3800 | 1.9089 | 0.85 | 0.2705 | 1.4867 | 0.85 | 0.8366 | 0.1373 | 0.0480 |
| 0.8281 | 77.0 | 3850 | 1.8902 | 0.85 | 0.2697 | 1.4896 | 0.85 | 0.8366 | 0.1407 | 0.0464 |
| 0.8281 | 78.0 | 3900 | 1.8889 | 0.85 | 0.2710 | 1.4882 | 0.85 | 0.8366 | 0.1421 | 0.0472 |
| 0.8281 | 79.0 | 3950 | 1.9080 | 0.85 | 0.2712 | 1.4876 | 0.85 | 0.8366 | 0.1345 | 0.0476 |
| 0.8047 | 80.0 | 4000 | 1.9011 | 0.85 | 0.2703 | 1.4864 | 0.85 | 0.8366 | 0.1373 | 0.0472 |
| 0.8047 | 81.0 | 4050 | 1.9112 | 0.85 | 0.2735 | 1.4867 | 0.85 | 0.8366 | 0.1379 | 0.0465 |
| 0.8047 | 82.0 | 4100 | 1.8850 | 0.85 | 0.2728 | 1.4872 | 0.85 | 0.8366 | 0.1419 | 0.0462 |
| 0.8047 | 83.0 | 4150 | 1.9074 | 0.85 | 0.2740 | 1.4862 | 0.85 | 0.8366 | 0.1369 | 0.0463 |
| 0.8047 | 84.0 | 4200 | 1.8804 | 0.85 | 0.2714 | 1.4818 | 0.85 | 0.8366 | 0.1376 | 0.0461 |
| 0.8047 | 85.0 | 4250 | 1.9092 | 0.85 | 0.2757 | 1.4825 | 0.85 | 0.8366 | 0.1437 | 0.0463 |
| 0.8047 | 86.0 | 4300 | 1.8985 | 0.85 | 0.2745 | 1.4827 | 0.85 | 0.8366 | 0.1390 | 0.0460 |
| 0.8047 | 87.0 | 4350 | 1.9091 | 0.85 | 0.2731 | 1.4808 | 0.85 | 0.8366 | 0.1403 | 0.0466 |
| 0.8047 | 88.0 | 4400 | 1.9037 | 0.85 | 0.2754 | 1.4836 | 0.85 | 0.8366 | 0.1383 | 0.0459 |
| 0.8047 | 89.0 | 4450 | 1.8950 | 0.85 | 0.2750 | 1.4798 | 0.85 | 0.8366 | 0.1386 | 0.0452 |
| 0.7971 | 90.0 | 4500 | 1.9115 | 0.85 | 0.2755 | 1.4785 | 0.85 | 0.8366 | 0.1387 | 0.0461 |
| 0.7971 | 91.0 | 4550 | 1.9061 | 0.85 | 0.2757 | 1.4791 | 0.85 | 0.8366 | 0.1451 | 0.0460 |
| 0.7971 | 92.0 | 4600 | 1.9058 | 0.85 | 0.2757 | 1.4785 | 0.85 | 0.8366 | 0.1392 | 0.0464 |
| 0.7971 | 93.0 | 4650 | 1.9128 | 0.85 | 0.2724 | 1.4769 | 0.85 | 0.8366 | 0.1341 | 0.0468 |
| 0.7971 | 94.0 | 4700 | 1.9115 | 0.85 | 0.2770 | 1.4771 | 0.85 | 0.8366 | 0.1388 | 0.0463 |
| 0.7971 | 95.0 | 4750 | 1.9097 | 0.85 | 0.2761 | 1.4761 | 0.85 | 0.8366 | 0.1382 | 0.0462 |
| 0.7971 | 96.0 | 4800 | 1.9025 | 0.85 | 0.2761 | 1.4759 | 0.85 | 0.8366 | 0.1385 | 0.0460 |
| 0.7971 | 97.0 | 4850 | 1.9153 | 0.85 | 0.2775 | 1.4757 | 0.85 | 0.8366 | 0.1394 | 0.0463 |
| 0.7971 | 98.0 | 4900 | 1.9084 | 0.85 | 0.2765 | 1.4755 | 0.85 | 0.8366 | 0.1388 | 0.0460 |
| 0.7971 | 99.0 | 4950 | 1.9087 | 0.85 | 0.2772 | 1.4757 | 0.85 | 0.8366 | 0.1392 | 0.0460 |
| 0.7931 | 100.0 | 5000 | 1.9099 | 0.85 | 0.2772 | 1.4757 | 0.85 | 0.8366 | 0.1392 | 0.0460 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.2.0.dev20231112+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
[
"adve",
"email",
"form",
"letter",
"memo",
"news",
"note",
"report",
"resume",
"scientific"
] |
jordyvl/dit-base_tobacco-small_tobacco3482_simkd
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-base_tobacco-small_tobacco3482_simkd
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6962
- Accuracy: 0.85
- Brier Loss: 0.2700
- Nll: 0.9667
- F1 Micro: 0.85
- F1 Macro: 0.8241
- Ece: 0.2479
- Aurc: 0.0379
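These cards report calibration metrics alongside accuracy: the Brier loss is the squared error between the predicted class distribution and the one-hot label, and ECE bins predictions by confidence and takes a weighted average of |accuracy − confidence| per bin. A minimal sketch of both (equal-width binning is one common choice; the exact binning behind the numbers above may differ):

```python
def brier_score(probs, label):
    """Multiclass Brier score for one example: squared error between
    the predicted distribution and the one-hot target."""
    return sum((p - (1.0 if i == label else 0.0)) ** 2 for i, p in enumerate(probs))

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE with equal-width confidence bins: weighted average of
    |bin accuracy - bin confidence| over non-empty bins."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        acc = sum(1.0 for _, ok in b if ok) / len(b)
        ece += (len(b) / n) * abs(acc - avg_conf)
    return ece

# A perfectly confident, correct prediction has zero Brier score.
print(brier_score([0.0, 1.0, 0.0], 1))  # → 0.0
```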
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 50 | 1.0013 | 0.18 | 0.8965 | 4.5407 | 0.18 | 0.1379 | 0.2160 | 0.6680 |
| No log | 2.0 | 100 | 0.9916 | 0.3 | 0.8871 | 3.1090 | 0.3 | 0.1526 | 0.3057 | 0.4735 |
| No log | 3.0 | 150 | 0.9644 | 0.51 | 0.8433 | 2.4502 | 0.51 | 0.3257 | 0.4499 | 0.2544 |
| No log | 4.0 | 200 | 0.9207 | 0.575 | 0.7585 | 2.1964 | 0.575 | 0.3958 | 0.4563 | 0.2193 |
| No log | 5.0 | 250 | 0.8726 | 0.635 | 0.6620 | 2.3923 | 0.635 | 0.5105 | 0.4321 | 0.1730 |
| No log | 6.0 | 300 | 0.8303 | 0.665 | 0.5604 | 1.4922 | 0.665 | 0.5869 | 0.3717 | 0.1305 |
| No log | 7.0 | 350 | 0.7994 | 0.745 | 0.4490 | 1.3772 | 0.745 | 0.6541 | 0.3557 | 0.0853 |
| No log | 8.0 | 400 | 0.7822 | 0.79 | 0.4124 | 1.2076 | 0.79 | 0.7109 | 0.3035 | 0.0873 |
| No log | 9.0 | 450 | 0.7808 | 0.78 | 0.3955 | 1.5529 | 0.78 | 0.7041 | 0.3123 | 0.0763 |
| 0.8704 | 10.0 | 500 | 0.7923 | 0.695 | 0.4296 | 1.7171 | 0.695 | 0.6150 | 0.3012 | 0.1039 |
| 0.8704 | 11.0 | 550 | 0.7848 | 0.745 | 0.4327 | 1.6327 | 0.745 | 0.6972 | 0.2800 | 0.1321 |
| 0.8704 | 12.0 | 600 | 0.7600 | 0.825 | 0.3579 | 1.2569 | 0.825 | 0.7621 | 0.3015 | 0.0624 |
| 0.8704 | 13.0 | 650 | 0.7570 | 0.79 | 0.3554 | 1.4638 | 0.79 | 0.7706 | 0.2964 | 0.0621 |
| 0.8704 | 14.0 | 700 | 0.7504 | 0.81 | 0.3434 | 1.5597 | 0.81 | 0.7714 | 0.2930 | 0.0589 |
| 0.8704 | 15.0 | 750 | 0.7481 | 0.8 | 0.3439 | 1.3827 | 0.8000 | 0.7641 | 0.2805 | 0.0675 |
| 0.8704 | 16.0 | 800 | 0.7358 | 0.81 | 0.3357 | 1.4522 | 0.81 | 0.7889 | 0.3077 | 0.0610 |
| 0.8704 | 17.0 | 850 | 0.7294 | 0.82 | 0.3179 | 1.0458 | 0.82 | 0.7820 | 0.2909 | 0.0564 |
| 0.8704 | 18.0 | 900 | 0.7229 | 0.815 | 0.3092 | 1.2562 | 0.815 | 0.7862 | 0.2719 | 0.0496 |
| 0.8704 | 19.0 | 950 | 0.7186 | 0.825 | 0.3069 | 1.0425 | 0.825 | 0.7977 | 0.2824 | 0.0558 |
| 0.6968 | 20.0 | 1000 | 0.7156 | 0.83 | 0.3031 | 0.9897 | 0.83 | 0.8039 | 0.2660 | 0.0490 |
| 0.6968 | 21.0 | 1050 | 0.7135 | 0.82 | 0.3014 | 1.0562 | 0.82 | 0.7887 | 0.2745 | 0.0462 |
| 0.6968 | 22.0 | 1100 | 0.7116 | 0.835 | 0.2997 | 0.9822 | 0.835 | 0.8102 | 0.2817 | 0.0452 |
| 0.6968 | 23.0 | 1150 | 0.7114 | 0.82 | 0.3047 | 0.9197 | 0.82 | 0.7937 | 0.2669 | 0.0484 |
| 0.6968 | 24.0 | 1200 | 0.7111 | 0.8 | 0.3032 | 0.9744 | 0.8000 | 0.7690 | 0.2624 | 0.0504 |
| 0.6968 | 25.0 | 1250 | 0.7076 | 0.805 | 0.3025 | 0.9884 | 0.805 | 0.7677 | 0.2538 | 0.0478 |
| 0.6968 | 26.0 | 1300 | 0.7074 | 0.82 | 0.3037 | 0.9954 | 0.82 | 0.7877 | 0.2592 | 0.0496 |
| 0.6968 | 27.0 | 1350 | 0.7053 | 0.825 | 0.2998 | 0.9712 | 0.825 | 0.7885 | 0.2628 | 0.0454 |
| 0.6968 | 28.0 | 1400 | 0.7046 | 0.82 | 0.2936 | 0.9780 | 0.82 | 0.7886 | 0.2573 | 0.0438 |
| 0.6968 | 29.0 | 1450 | 0.7068 | 0.82 | 0.3000 | 0.9943 | 0.82 | 0.7895 | 0.2382 | 0.0447 |
| 0.6551 | 30.0 | 1500 | 0.7045 | 0.83 | 0.2881 | 0.9107 | 0.83 | 0.8010 | 0.2363 | 0.0439 |
| 0.6551 | 31.0 | 1550 | 0.7033 | 0.825 | 0.2936 | 0.9794 | 0.825 | 0.7858 | 0.2556 | 0.0433 |
| 0.6551 | 32.0 | 1600 | 0.7014 | 0.82 | 0.2890 | 0.9799 | 0.82 | 0.7895 | 0.2495 | 0.0418 |
| 0.6551 | 33.0 | 1650 | 0.7020 | 0.815 | 0.2921 | 0.9658 | 0.815 | 0.7820 | 0.2556 | 0.0449 |
| 0.6551 | 34.0 | 1700 | 0.7012 | 0.835 | 0.2885 | 1.0419 | 0.835 | 0.8042 | 0.2581 | 0.0417 |
| 0.6551 | 35.0 | 1750 | 0.7013 | 0.835 | 0.2902 | 0.9773 | 0.835 | 0.8035 | 0.2522 | 0.0435 |
| 0.6551 | 36.0 | 1800 | 0.7016 | 0.825 | 0.2884 | 0.9815 | 0.825 | 0.7851 | 0.2518 | 0.0432 |
| 0.6551 | 37.0 | 1850 | 0.7007 | 0.835 | 0.2888 | 0.9724 | 0.835 | 0.8133 | 0.2486 | 0.0438 |
| 0.6551 | 38.0 | 1900 | 0.6984 | 0.825 | 0.2847 | 0.9650 | 0.825 | 0.7897 | 0.2487 | 0.0415 |
| 0.6551 | 39.0 | 1950 | 0.7001 | 0.84 | 0.2843 | 1.0535 | 0.8400 | 0.8104 | 0.2566 | 0.0418 |
| 0.6381 | 40.0 | 2000 | 0.6990 | 0.825 | 0.2843 | 0.9673 | 0.825 | 0.7963 | 0.2396 | 0.0429 |
| 0.6381 | 41.0 | 2050 | 0.7002 | 0.84 | 0.2875 | 1.0599 | 0.8400 | 0.8098 | 0.2618 | 0.0413 |
| 0.6381 | 42.0 | 2100 | 0.6967 | 0.83 | 0.2791 | 0.9676 | 0.83 | 0.7929 | 0.2441 | 0.0403 |
| 0.6381 | 43.0 | 2150 | 0.6978 | 0.835 | 0.2802 | 0.9771 | 0.835 | 0.8071 | 0.2526 | 0.0416 |
| 0.6381 | 44.0 | 2200 | 0.6969 | 0.84 | 0.2795 | 0.9478 | 0.8400 | 0.8164 | 0.2464 | 0.0418 |
| 0.6381 | 45.0 | 2250 | 0.6971 | 0.835 | 0.2760 | 0.9712 | 0.835 | 0.8030 | 0.2333 | 0.0392 |
| 0.6381 | 46.0 | 2300 | 0.6985 | 0.84 | 0.2813 | 0.9692 | 0.8400 | 0.8072 | 0.2403 | 0.0404 |
| 0.6381 | 47.0 | 2350 | 0.6976 | 0.835 | 0.2796 | 1.0420 | 0.835 | 0.8042 | 0.2374 | 0.0406 |
| 0.6381 | 48.0 | 2400 | 0.6965 | 0.85 | 0.2778 | 0.9753 | 0.85 | 0.8205 | 0.2653 | 0.0403 |
| 0.6381 | 49.0 | 2450 | 0.6969 | 0.825 | 0.2747 | 0.9606 | 0.825 | 0.7871 | 0.2478 | 0.0394 |
| 0.6274 | 50.0 | 2500 | 0.6954 | 0.835 | 0.2746 | 0.9572 | 0.835 | 0.8070 | 0.2395 | 0.0406 |
| 0.6274 | 51.0 | 2550 | 0.6972 | 0.835 | 0.2755 | 1.0383 | 0.835 | 0.8070 | 0.2484 | 0.0391 |
| 0.6274 | 52.0 | 2600 | 0.6955 | 0.83 | 0.2752 | 0.9699 | 0.83 | 0.7998 | 0.2562 | 0.0406 |
| 0.6274 | 53.0 | 2650 | 0.6950 | 0.835 | 0.2693 | 0.9563 | 0.835 | 0.8030 | 0.2300 | 0.0373 |
| 0.6274 | 54.0 | 2700 | 0.6960 | 0.83 | 0.2727 | 0.9646 | 0.83 | 0.7977 | 0.2347 | 0.0399 |
| 0.6274 | 55.0 | 2750 | 0.6946 | 0.83 | 0.2711 | 0.9603 | 0.83 | 0.8058 | 0.2279 | 0.0384 |
| 0.6274 | 56.0 | 2800 | 0.6940 | 0.835 | 0.2726 | 0.9579 | 0.835 | 0.8088 | 0.2478 | 0.0380 |
| 0.6274 | 57.0 | 2850 | 0.6951 | 0.835 | 0.2732 | 0.9594 | 0.835 | 0.8090 | 0.2336 | 0.0418 |
| 0.6274 | 58.0 | 2900 | 0.6936 | 0.84 | 0.2684 | 0.9575 | 0.8400 | 0.8079 | 0.2490 | 0.0373 |
| 0.6274 | 59.0 | 2950 | 0.6949 | 0.835 | 0.2701 | 0.9543 | 0.835 | 0.8088 | 0.2261 | 0.0389 |
| 0.6207 | 60.0 | 3000 | 0.6939 | 0.84 | 0.2697 | 0.9574 | 0.8400 | 0.8161 | 0.2339 | 0.0378 |
| 0.6207 | 61.0 | 3050 | 0.6952 | 0.84 | 0.2706 | 0.9611 | 0.8400 | 0.8080 | 0.2306 | 0.0379 |
| 0.6207 | 62.0 | 3100 | 0.6940 | 0.835 | 0.2691 | 0.9523 | 0.835 | 0.8086 | 0.2451 | 0.0382 |
| 0.6207 | 63.0 | 3150 | 0.6946 | 0.835 | 0.2672 | 0.9627 | 0.835 | 0.8088 | 0.2347 | 0.0374 |
| 0.6207 | 64.0 | 3200 | 0.6949 | 0.84 | 0.2713 | 0.9602 | 0.8400 | 0.8139 | 0.2404 | 0.0384 |
| 0.6207 | 65.0 | 3250 | 0.6944 | 0.835 | 0.2662 | 0.9603 | 0.835 | 0.8079 | 0.2308 | 0.0377 |
| 0.6207 | 66.0 | 3300 | 0.6946 | 0.835 | 0.2698 | 0.9593 | 0.835 | 0.8088 | 0.2352 | 0.0390 |
| 0.6207 | 67.0 | 3350 | 0.6934 | 0.83 | 0.2658 | 0.9558 | 0.83 | 0.8060 | 0.2260 | 0.0384 |
| 0.6207 | 68.0 | 3400 | 0.6944 | 0.83 | 0.2689 | 0.9517 | 0.83 | 0.8058 | 0.2208 | 0.0399 |
| 0.6207 | 69.0 | 3450 | 0.6946 | 0.835 | 0.2698 | 0.9553 | 0.835 | 0.8042 | 0.2331 | 0.0383 |
| 0.6156 | 70.0 | 3500 | 0.6948 | 0.83 | 0.2690 | 0.9549 | 0.83 | 0.8058 | 0.2280 | 0.0391 |
| 0.6156 | 71.0 | 3550 | 0.6936 | 0.84 | 0.2676 | 0.9532 | 0.8400 | 0.8122 | 0.2346 | 0.0383 |
| 0.6156 | 72.0 | 3600 | 0.6946 | 0.835 | 0.2667 | 0.9545 | 0.835 | 0.8088 | 0.2492 | 0.0379 |
| 0.6156 | 73.0 | 3650 | 0.6939 | 0.84 | 0.2670 | 0.9534 | 0.8400 | 0.8139 | 0.2466 | 0.0377 |
| 0.6156 | 74.0 | 3700 | 0.6948 | 0.835 | 0.2695 | 0.9522 | 0.835 | 0.8086 | 0.2312 | 0.0390 |
| 0.6156 | 75.0 | 3750 | 0.6951 | 0.835 | 0.2701 | 0.9622 | 0.835 | 0.8111 | 0.2158 | 0.0397 |
| 0.6156 | 76.0 | 3800 | 0.6949 | 0.84 | 0.2682 | 0.9606 | 0.8400 | 0.8139 | 0.2415 | 0.0382 |
| 0.6156 | 77.0 | 3850 | 0.6950 | 0.84 | 0.2684 | 0.9629 | 0.8400 | 0.8118 | 0.2493 | 0.0381 |
| 0.6156 | 78.0 | 3900 | 0.6946 | 0.835 | 0.2685 | 0.9522 | 0.835 | 0.8111 | 0.2360 | 0.0390 |
| 0.6156 | 79.0 | 3950 | 0.6944 | 0.84 | 0.2668 | 0.9544 | 0.8400 | 0.8118 | 0.2377 | 0.0372 |
| 0.612 | 80.0 | 4000 | 0.6954 | 0.84 | 0.2692 | 0.9579 | 0.8400 | 0.8139 | 0.2321 | 0.0381 |
| 0.612 | 81.0 | 4050 | 0.6956 | 0.84 | 0.2701 | 0.9606 | 0.8400 | 0.8139 | 0.2354 | 0.0382 |
| 0.612 | 82.0 | 4100 | 0.6952 | 0.835 | 0.2686 | 0.9600 | 0.835 | 0.8086 | 0.2540 | 0.0381 |
| 0.612 | 83.0 | 4150 | 0.6955 | 0.835 | 0.2689 | 0.9571 | 0.835 | 0.8086 | 0.2465 | 0.0383 |
| 0.612 | 84.0 | 4200 | 0.6952 | 0.84 | 0.2689 | 0.9583 | 0.8400 | 0.8159 | 0.2308 | 0.0387 |
| 0.612 | 85.0 | 4250 | 0.6956 | 0.835 | 0.2702 | 0.9618 | 0.835 | 0.8042 | 0.2365 | 0.0386 |
| 0.612 | 86.0 | 4300 | 0.6950 | 0.835 | 0.2683 | 0.9572 | 0.835 | 0.8086 | 0.2228 | 0.0382 |
| 0.612 | 87.0 | 4350 | 0.6949 | 0.84 | 0.2692 | 0.9583 | 0.8400 | 0.8118 | 0.2497 | 0.0381 |
| 0.612 | 88.0 | 4400 | 0.6953 | 0.845 | 0.2695 | 0.9617 | 0.845 | 0.8209 | 0.2558 | 0.0386 |
| 0.612 | 89.0 | 4450 | 0.6952 | 0.845 | 0.2689 | 0.9611 | 0.845 | 0.8209 | 0.2251 | 0.0383 |
| 0.6097 | 90.0 | 4500 | 0.6961 | 0.835 | 0.2701 | 0.9645 | 0.835 | 0.8042 | 0.2444 | 0.0386 |
| 0.6097 | 91.0 | 4550 | 0.6954 | 0.845 | 0.2689 | 0.9619 | 0.845 | 0.8209 | 0.2324 | 0.0383 |
| 0.6097 | 92.0 | 4600 | 0.6959 | 0.845 | 0.2700 | 0.9636 | 0.845 | 0.8209 | 0.2277 | 0.0388 |
| 0.6097 | 93.0 | 4650 | 0.6959 | 0.85 | 0.2694 | 0.9654 | 0.85 | 0.8241 | 0.2396 | 0.0379 |
| 0.6097 | 94.0 | 4700 | 0.6960 | 0.85 | 0.2696 | 0.9643 | 0.85 | 0.8241 | 0.2471 | 0.0379 |
| 0.6097 | 95.0 | 4750 | 0.6959 | 0.85 | 0.2694 | 0.9650 | 0.85 | 0.8241 | 0.2233 | 0.0378 |
| 0.6097 | 96.0 | 4800 | 0.6962 | 0.845 | 0.2700 | 0.9666 | 0.845 | 0.8144 | 0.2558 | 0.0382 |
| 0.6097 | 97.0 | 4850 | 0.6962 | 0.85 | 0.2699 | 0.9662 | 0.85 | 0.8241 | 0.2400 | 0.0381 |
| 0.6097 | 98.0 | 4900 | 0.6962 | 0.85 | 0.2700 | 0.9662 | 0.85 | 0.8241 | 0.2396 | 0.0380 |
| 0.6097 | 99.0 | 4950 | 0.6963 | 0.85 | 0.2700 | 0.9667 | 0.85 | 0.8241 | 0.2478 | 0.0379 |
| 0.6083 | 100.0 | 5000 | 0.6962 | 0.85 | 0.2700 | 0.9667 | 0.85 | 0.8241 | 0.2479 | 0.0379 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.2.0.dev20231112+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
[
"adve",
"email",
"form",
"letter",
"memo",
"news",
"note",
"report",
"resume",
"scientific"
] |
jordyvl/dit-base_tobacco-small_tobacco3482_og_simkd
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-base_tobacco-small_tobacco3482_og_simkd
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 309.8690
- Accuracy: 0.815
- Brier Loss: 0.3313
- Nll: 1.1190
- F1 Micro: 0.815
- F1 Macro: 0.7825
- Ece: 0.2569
- Aurc: 0.0659
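The Brier Loss reported above is the multiclass Brier score: the squared distance between the predicted probability vector and the one-hot target, averaged over the evaluation set. A minimal sketch of the per-example score (not the exact evaluation code used for this card):

```python
def brier_score(probs, label, num_classes):
    """Multiclass Brier score for one example: squared error between
    the predicted distribution and the one-hot target."""
    onehot = [1.0 if i == label else 0.0 for i in range(num_classes)]
    return sum((p - t) ** 2 for p, t in zip(probs, onehot))

# A confident correct prediction scores near 0; uniform guessing scores higher.
confident = brier_score([0.9, 0.05, 0.05], label=0, num_classes=3)  # 0.015
uniform = brier_score([1 / 3, 1 / 3, 1 / 3], label=0, num_classes=3)
```

Lower is better, so the drop from ~0.84 at epoch 1 to ~0.33 at convergence in the table below reflects both higher accuracy and sharper probabilities.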
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
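With `lr_scheduler_type: linear` and `lr_scheduler_warmup_ratio: 0.1`, the learning rate ramps linearly from zero over the first 10% of steps and then decays linearly to zero. A rough sketch of that multiplier, assuming 5000 total steps (100 epochs of 50 steps, matching the table below):

```python
def linear_schedule_with_warmup(step, total_steps, warmup_ratio=0.1):
    """LR multiplier: linear ramp over the warmup phase, then linear
    decay to 0 at total_steps (a sketch of the Trainer's linear schedule)."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

base_lr = 1e-4
total = 5000
peak = base_lr * linear_schedule_with_warmup(500, total)   # end of warmup
late = base_lr * linear_schedule_with_warmup(4500, total)  # 90% through training
```

The peak rate equals the configured `learning_rate`; by the late epochs the effective rate is an order of magnitude smaller, which is consistent with the slow metric drift in the final rows.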
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 50 | 328.2257 | 0.365 | 0.8441 | 5.5835 | 0.3650 | 0.2447 | 0.3371 | 0.4390 |
| No log | 2.0 | 100 | 325.5961 | 0.58 | 0.6442 | 1.8499 | 0.58 | 0.4727 | 0.3233 | 0.2414 |
| No log | 3.0 | 150 | 323.3813 | 0.66 | 0.4759 | 1.5815 | 0.66 | 0.5348 | 0.2704 | 0.1489 |
| No log | 4.0 | 200 | 322.5013 | 0.715 | 0.4234 | 1.6240 | 0.715 | 0.6009 | 0.2382 | 0.1142 |
| No log | 5.0 | 250 | 321.7315 | 0.755 | 0.3532 | 1.2868 | 0.755 | 0.6596 | 0.2141 | 0.0687 |
| No log | 6.0 | 300 | 320.5884 | 0.775 | 0.3668 | 1.4106 | 0.775 | 0.7233 | 0.2284 | 0.0922 |
| No log | 7.0 | 350 | 320.8456 | 0.775 | 0.3638 | 1.4833 | 0.775 | 0.7172 | 0.2487 | 0.0666 |
| No log | 8.0 | 400 | 319.6829 | 0.785 | 0.3308 | 1.3914 | 0.785 | 0.7203 | 0.1959 | 0.0674 |
| No log | 9.0 | 450 | 319.7741 | 0.815 | 0.3459 | 1.3920 | 0.815 | 0.7832 | 0.2541 | 0.0681 |
| 325.907 | 10.0 | 500 | 319.4605 | 0.775 | 0.3162 | 1.2997 | 0.775 | 0.6987 | 0.2140 | 0.0575 |
| 325.907 | 11.0 | 550 | 318.6996 | 0.81 | 0.3190 | 1.2271 | 0.81 | 0.7670 | 0.2110 | 0.0614 |
| 325.907 | 12.0 | 600 | 318.0233 | 0.81 | 0.3183 | 1.2432 | 0.81 | 0.7673 | 0.2134 | 0.0624 |
| 325.907 | 13.0 | 650 | 318.2606 | 0.79 | 0.3259 | 1.2187 | 0.79 | 0.7457 | 0.2299 | 0.0591 |
| 325.907 | 14.0 | 700 | 317.7428 | 0.83 | 0.3183 | 1.3279 | 0.83 | 0.8035 | 0.2449 | 0.0512 |
| 325.907 | 15.0 | 750 | 317.7053 | 0.81 | 0.3251 | 1.2097 | 0.81 | 0.7604 | 0.2193 | 0.0566 |
| 325.907 | 16.0 | 800 | 317.3470 | 0.84 | 0.3142 | 1.2606 | 0.8400 | 0.8132 | 0.2272 | 0.0484 |
| 325.907 | 17.0 | 850 | 316.8029 | 0.815 | 0.3202 | 1.1571 | 0.815 | 0.7748 | 0.2316 | 0.0563 |
| 325.907 | 18.0 | 900 | 316.9777 | 0.805 | 0.3442 | 1.1453 | 0.805 | 0.7645 | 0.2432 | 0.0625 |
| 325.907 | 19.0 | 950 | 316.2359 | 0.815 | 0.3219 | 1.1399 | 0.815 | 0.7717 | 0.2404 | 0.0603 |
| 320.1296 | 20.0 | 1000 | 316.1051 | 0.8 | 0.3220 | 1.1807 | 0.8000 | 0.7500 | 0.2412 | 0.0576 |
| 320.1296 | 21.0 | 1050 | 315.8117 | 0.845 | 0.3099 | 1.0976 | 0.845 | 0.8084 | 0.2530 | 0.0547 |
| 320.1296 | 22.0 | 1100 | 315.7457 | 0.82 | 0.3238 | 1.1904 | 0.82 | 0.7663 | 0.2507 | 0.0548 |
| 320.1296 | 23.0 | 1150 | 315.6591 | 0.82 | 0.3357 | 1.4044 | 0.82 | 0.7925 | 0.2639 | 0.0586 |
| 320.1296 | 24.0 | 1200 | 315.4048 | 0.82 | 0.3270 | 1.0817 | 0.82 | 0.7681 | 0.2575 | 0.0629 |
| 320.1296 | 25.0 | 1250 | 314.9790 | 0.81 | 0.3309 | 1.2002 | 0.81 | 0.7732 | 0.2279 | 0.0656 |
| 320.1296 | 26.0 | 1300 | 314.6778 | 0.79 | 0.3189 | 1.1219 | 0.79 | 0.7464 | 0.2014 | 0.0576 |
| 320.1296 | 27.0 | 1350 | 314.7844 | 0.8 | 0.3345 | 1.0655 | 0.8000 | 0.7555 | 0.2398 | 0.0661 |
| 320.1296 | 28.0 | 1400 | 314.4464 | 0.815 | 0.3175 | 1.1116 | 0.815 | 0.7636 | 0.2426 | 0.0532 |
| 320.1296 | 29.0 | 1450 | 314.3737 | 0.845 | 0.3271 | 1.1042 | 0.845 | 0.8072 | 0.2595 | 0.0531 |
| 317.1926 | 30.0 | 1500 | 313.9464 | 0.82 | 0.3225 | 1.1270 | 0.82 | 0.7841 | 0.2087 | 0.0609 |
| 317.1926 | 31.0 | 1550 | 314.0068 | 0.835 | 0.3187 | 1.1834 | 0.835 | 0.8070 | 0.2470 | 0.0522 |
| 317.1926 | 32.0 | 1600 | 313.8198 | 0.81 | 0.3271 | 1.0324 | 0.81 | 0.7642 | 0.2484 | 0.0609 |
| 317.1926 | 33.0 | 1650 | 313.7599 | 0.83 | 0.3193 | 1.0993 | 0.83 | 0.7910 | 0.2382 | 0.0536 |
| 317.1926 | 34.0 | 1700 | 313.4889 | 0.82 | 0.3224 | 1.1743 | 0.82 | 0.7823 | 0.2587 | 0.0546 |
| 317.1926 | 35.0 | 1750 | 313.2496 | 0.825 | 0.3324 | 1.1041 | 0.825 | 0.7988 | 0.2404 | 0.0652 |
| 317.1926 | 36.0 | 1800 | 313.1823 | 0.83 | 0.3207 | 1.0900 | 0.83 | 0.8007 | 0.2505 | 0.0581 |
| 317.1926 | 37.0 | 1850 | 313.1304 | 0.83 | 0.3367 | 1.2073 | 0.83 | 0.7973 | 0.2615 | 0.0571 |
| 317.1926 | 38.0 | 1900 | 313.2971 | 0.815 | 0.3398 | 1.1045 | 0.815 | 0.7709 | 0.2411 | 0.0641 |
| 317.1926 | 39.0 | 1950 | 313.0526 | 0.815 | 0.3352 | 1.1023 | 0.815 | 0.7744 | 0.2455 | 0.0616 |
| 315.1897 | 40.0 | 2000 | 312.7858 | 0.84 | 0.3231 | 1.0983 | 0.8400 | 0.8096 | 0.2619 | 0.0538 |
| 315.1897 | 41.0 | 2050 | 312.5119 | 0.815 | 0.3290 | 1.1174 | 0.815 | 0.7858 | 0.2540 | 0.0604 |
| 315.1897 | 42.0 | 2100 | 312.5961 | 0.82 | 0.3305 | 1.2144 | 0.82 | 0.7787 | 0.2480 | 0.0572 |
| 315.1897 | 43.0 | 2150 | 312.3510 | 0.825 | 0.3357 | 1.1367 | 0.825 | 0.7936 | 0.2398 | 0.0658 |
| 315.1897 | 44.0 | 2200 | 312.4015 | 0.81 | 0.3303 | 1.1015 | 0.81 | 0.7837 | 0.2488 | 0.0598 |
| 315.1897 | 45.0 | 2250 | 312.2003 | 0.825 | 0.3286 | 1.1810 | 0.825 | 0.7953 | 0.2480 | 0.0614 |
| 315.1897 | 46.0 | 2300 | 312.1683 | 0.825 | 0.3283 | 1.1112 | 0.825 | 0.7881 | 0.2414 | 0.0587 |
| 315.1897 | 47.0 | 2350 | 312.2554 | 0.815 | 0.3433 | 1.1313 | 0.815 | 0.7709 | 0.2579 | 0.0694 |
| 315.1897 | 48.0 | 2400 | 312.0919 | 0.825 | 0.3364 | 1.1074 | 0.825 | 0.7963 | 0.2471 | 0.0636 |
| 315.1897 | 49.0 | 2450 | 312.0760 | 0.82 | 0.3412 | 1.1076 | 0.82 | 0.7859 | 0.2554 | 0.0661 |
| 313.7276 | 50.0 | 2500 | 311.7450 | 0.83 | 0.3245 | 1.1723 | 0.83 | 0.7994 | 0.2512 | 0.0558 |
| 313.7276 | 51.0 | 2550 | 311.5801 | 0.835 | 0.3236 | 1.1056 | 0.835 | 0.7954 | 0.2581 | 0.0576 |
| 313.7276 | 52.0 | 2600 | 311.7016 | 0.83 | 0.3235 | 1.1182 | 0.83 | 0.7988 | 0.2462 | 0.0560 |
| 313.7276 | 53.0 | 2650 | 311.0808 | 0.81 | 0.3308 | 1.0526 | 0.81 | 0.7716 | 0.2401 | 0.0687 |
| 313.7276 | 54.0 | 2700 | 311.3835 | 0.81 | 0.3304 | 1.1210 | 0.81 | 0.7803 | 0.2442 | 0.0604 |
| 313.7276 | 55.0 | 2750 | 311.1007 | 0.825 | 0.3285 | 1.1265 | 0.825 | 0.7931 | 0.2569 | 0.0639 |
| 313.7276 | 56.0 | 2800 | 311.3446 | 0.81 | 0.3273 | 1.1896 | 0.81 | 0.7810 | 0.2342 | 0.0622 |
| 313.7276 | 57.0 | 2850 | 311.0753 | 0.825 | 0.3327 | 1.1225 | 0.825 | 0.7929 | 0.2659 | 0.0668 |
| 313.7276 | 58.0 | 2900 | 311.3600 | 0.825 | 0.3320 | 1.1142 | 0.825 | 0.8000 | 0.2524 | 0.0640 |
| 313.7276 | 59.0 | 2950 | 310.8636 | 0.83 | 0.3242 | 1.1157 | 0.83 | 0.8022 | 0.2416 | 0.0633 |
| 312.6368 | 60.0 | 3000 | 310.7809 | 0.815 | 0.3386 | 1.2166 | 0.815 | 0.7820 | 0.2571 | 0.0702 |
| 312.6368 | 61.0 | 3050 | 310.9625 | 0.825 | 0.3273 | 1.1168 | 0.825 | 0.7923 | 0.2362 | 0.0608 |
| 312.6368 | 62.0 | 3100 | 311.1122 | 0.81 | 0.3369 | 1.1021 | 0.81 | 0.7700 | 0.2433 | 0.0633 |
| 312.6368 | 63.0 | 3150 | 311.1530 | 0.82 | 0.3351 | 1.1108 | 0.82 | 0.7780 | 0.2584 | 0.0615 |
| 312.6368 | 64.0 | 3200 | 310.9366 | 0.8 | 0.3288 | 1.1112 | 0.8000 | 0.7616 | 0.2545 | 0.0609 |
| 312.6368 | 65.0 | 3250 | 310.7639 | 0.82 | 0.3379 | 1.0992 | 0.82 | 0.7898 | 0.2407 | 0.0710 |
| 312.6368 | 66.0 | 3300 | 310.5876 | 0.81 | 0.3287 | 1.1197 | 0.81 | 0.7763 | 0.2270 | 0.0654 |
| 312.6368 | 67.0 | 3350 | 310.7344 | 0.805 | 0.3387 | 1.1279 | 0.805 | 0.7646 | 0.2354 | 0.0679 |
| 312.6368 | 68.0 | 3400 | 310.2750 | 0.825 | 0.3323 | 1.1367 | 0.825 | 0.7971 | 0.2514 | 0.0673 |
| 312.6368 | 69.0 | 3450 | 310.5080 | 0.815 | 0.3298 | 1.1049 | 0.815 | 0.7845 | 0.2329 | 0.0664 |
| 311.7616 | 70.0 | 3500 | 310.6353 | 0.81 | 0.3305 | 1.1098 | 0.81 | 0.7745 | 0.2346 | 0.0633 |
| 311.7616 | 71.0 | 3550 | 310.3249 | 0.825 | 0.3286 | 1.1117 | 0.825 | 0.7951 | 0.2455 | 0.0641 |
| 311.7616 | 72.0 | 3600 | 310.5689 | 0.825 | 0.3248 | 1.1079 | 0.825 | 0.7911 | 0.2388 | 0.0586 |
| 311.7616 | 73.0 | 3650 | 310.4175 | 0.82 | 0.3298 | 1.1169 | 0.82 | 0.7859 | 0.2338 | 0.0630 |
| 311.7616 | 74.0 | 3700 | 310.1338 | 0.815 | 0.3313 | 1.1236 | 0.815 | 0.7902 | 0.2558 | 0.0677 |
| 311.7616 | 75.0 | 3750 | 310.4428 | 0.825 | 0.3310 | 1.1269 | 0.825 | 0.7972 | 0.2458 | 0.0606 |
| 311.7616 | 76.0 | 3800 | 310.3477 | 0.81 | 0.3317 | 1.1060 | 0.81 | 0.7775 | 0.2392 | 0.0654 |
| 311.7616 | 77.0 | 3850 | 310.2144 | 0.815 | 0.3294 | 1.1076 | 0.815 | 0.7857 | 0.2387 | 0.0627 |
| 311.7616 | 78.0 | 3900 | 310.1073 | 0.82 | 0.3296 | 1.1246 | 0.82 | 0.7993 | 0.2496 | 0.0634 |
| 311.7616 | 79.0 | 3950 | 310.1449 | 0.805 | 0.3246 | 1.1134 | 0.805 | 0.7734 | 0.2277 | 0.0627 |
| 311.1587 | 80.0 | 4000 | 310.1684 | 0.81 | 0.3327 | 1.1094 | 0.81 | 0.7781 | 0.2493 | 0.0660 |
| 311.1587 | 81.0 | 4050 | 310.1772 | 0.815 | 0.3311 | 1.1129 | 0.815 | 0.7876 | 0.2447 | 0.0668 |
| 311.1587 | 82.0 | 4100 | 309.9326 | 0.805 | 0.3295 | 1.1172 | 0.805 | 0.7716 | 0.2508 | 0.0666 |
| 311.1587 | 83.0 | 4150 | 310.1067 | 0.805 | 0.3330 | 1.1209 | 0.805 | 0.7756 | 0.2252 | 0.0653 |
| 311.1587 | 84.0 | 4200 | 309.9362 | 0.825 | 0.3288 | 1.1150 | 0.825 | 0.8024 | 0.2500 | 0.0637 |
| 311.1587 | 85.0 | 4250 | 309.6593 | 0.81 | 0.3292 | 1.1226 | 0.81 | 0.7723 | 0.2306 | 0.0680 |
| 311.1587 | 86.0 | 4300 | 309.9828 | 0.8 | 0.3310 | 1.1252 | 0.8000 | 0.7643 | 0.2474 | 0.0662 |
| 311.1587 | 87.0 | 4350 | 310.0325 | 0.825 | 0.3322 | 1.1136 | 0.825 | 0.7935 | 0.2633 | 0.0634 |
| 311.1587 | 88.0 | 4400 | 309.8688 | 0.815 | 0.3320 | 1.1145 | 0.815 | 0.7824 | 0.2478 | 0.0675 |
| 311.1587 | 89.0 | 4450 | 310.0577 | 0.81 | 0.3324 | 1.1160 | 0.81 | 0.7810 | 0.2475 | 0.0648 |
| 310.732 | 90.0 | 4500 | 309.8999 | 0.81 | 0.3273 | 1.1120 | 0.81 | 0.7720 | 0.2356 | 0.0624 |
| 310.732 | 91.0 | 4550 | 309.7399 | 0.815 | 0.3256 | 1.1164 | 0.815 | 0.7824 | 0.2502 | 0.0649 |
| 310.732 | 92.0 | 4600 | 309.9419 | 0.805 | 0.3287 | 1.1183 | 0.805 | 0.7751 | 0.2353 | 0.0640 |
| 310.732 | 93.0 | 4650 | 309.9055 | 0.81 | 0.3268 | 1.1194 | 0.81 | 0.7761 | 0.2429 | 0.0613 |
| 310.732 | 94.0 | 4700 | 309.7320 | 0.82 | 0.3275 | 1.1117 | 0.82 | 0.7914 | 0.2408 | 0.0654 |
| 310.732 | 95.0 | 4750 | 309.9635 | 0.81 | 0.3334 | 1.1067 | 0.81 | 0.7747 | 0.2317 | 0.0637 |
| 310.732 | 96.0 | 4800 | 309.9630 | 0.805 | 0.3304 | 1.1165 | 0.805 | 0.7712 | 0.2316 | 0.0631 |
| 310.732 | 97.0 | 4850 | 309.8564 | 0.815 | 0.3263 | 1.1130 | 0.815 | 0.7870 | 0.2355 | 0.0619 |
| 310.732 | 98.0 | 4900 | 309.7815 | 0.815 | 0.3298 | 1.1198 | 0.815 | 0.7857 | 0.2386 | 0.0634 |
| 310.732 | 99.0 | 4950 | 309.8337 | 0.81 | 0.3354 | 1.0806 | 0.81 | 0.7818 | 0.2480 | 0.0672 |
| 310.5225 | 100.0 | 5000 | 309.8690 | 0.815 | 0.3313 | 1.1190 | 0.815 | 0.7825 | 0.2569 | 0.0659 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.2.0.dev20231112+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
[
"adve",
"email",
"form",
"letter",
"memo",
"news",
"note",
"report",
"resume",
"scientific"
] |
jordyvl/resnet101-base_tobacco-cnn_tobacco3482_kd
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet101-base_tobacco-cnn_tobacco3482_kd
This model is a fine-tuned version of [bdpc/resnet101-base_tobacco](https://huggingface.co/bdpc/resnet101-base_tobacco) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8158
- Accuracy: 0.565
- Brier Loss: 0.6104
- Nll: 2.6027
- F1 Micro: 0.565
- F1 Macro: 0.4783
- Ece: 0.2677
- Aurc: 0.2516
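The Ece column is the expected calibration error: predictions are binned by confidence, and the per-bin gap between average confidence and accuracy is averaged with weights proportional to bin size. A minimal sketch, assuming equal-width bins:

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin by confidence, then average |avg confidence - accuracy|
    per bin, weighted by the fraction of examples in the bin."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        acc = sum(1.0 for _, ok in b if ok) / len(b)
        ece += (len(b) / n) * abs(avg_conf - acc)
    return ece

# Four predictions at 95% confidence but only 75% accuracy -> ECE of 0.2.
overconf = expected_calibration_error([0.95] * 4, [True, True, True, False])
```

An ECE of 0.2677, as reported here, indicates substantial overconfidence relative to the 0.565 accuracy.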
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 4 | 1.4988 | 0.065 | 0.9007 | 9.6504 | 0.065 | 0.0267 | 0.1512 | 0.9377 |
| No log | 2.0 | 8 | 1.4615 | 0.155 | 0.8961 | 7.9200 | 0.155 | 0.0268 | 0.2328 | 0.9605 |
| No log | 3.0 | 12 | 1.4913 | 0.155 | 0.9531 | 11.6402 | 0.155 | 0.0268 | 0.3390 | 0.8899 |
| No log | 4.0 | 16 | 2.2747 | 0.155 | 1.4111 | 11.0294 | 0.155 | 0.0268 | 0.7077 | 0.7068 |
| No log | 5.0 | 20 | 2.4543 | 0.155 | 1.4359 | 8.3074 | 0.155 | 0.0268 | 0.7226 | 0.6151 |
| No log | 6.0 | 24 | 1.9614 | 0.155 | 1.1785 | 6.6431 | 0.155 | 0.0283 | 0.5497 | 0.6022 |
| No log | 7.0 | 28 | 1.6280 | 0.18 | 0.9978 | 5.8468 | 0.18 | 0.0488 | 0.4014 | 0.6135 |
| No log | 8.0 | 32 | 1.3465 | 0.225 | 0.8993 | 5.6177 | 0.225 | 0.0740 | 0.3378 | 0.5786 |
| No log | 9.0 | 36 | 1.2597 | 0.225 | 0.8794 | 5.0542 | 0.225 | 0.0727 | 0.3403 | 0.5658 |
| No log | 10.0 | 40 | 1.1149 | 0.27 | 0.8181 | 4.5188 | 0.27 | 0.1222 | 0.2890 | 0.5230 |
| No log | 11.0 | 44 | 0.9805 | 0.31 | 0.7600 | 3.8687 | 0.31 | 0.1726 | 0.2703 | 0.4690 |
| No log | 12.0 | 48 | 1.0099 | 0.335 | 0.7732 | 3.6652 | 0.335 | 0.2095 | 0.2892 | 0.4739 |
| No log | 13.0 | 52 | 1.0522 | 0.335 | 0.7919 | 3.3843 | 0.335 | 0.2562 | 0.3006 | 0.6402 |
| No log | 14.0 | 56 | 1.0566 | 0.32 | 0.7868 | 3.4244 | 0.32 | 0.2373 | 0.3023 | 0.6094 |
| No log | 15.0 | 60 | 0.9670 | 0.405 | 0.7333 | 3.3926 | 0.405 | 0.3189 | 0.3013 | 0.4037 |
| No log | 16.0 | 64 | 1.0979 | 0.31 | 0.7877 | 3.3045 | 0.31 | 0.2262 | 0.2792 | 0.5720 |
| No log | 17.0 | 68 | 0.9022 | 0.44 | 0.6913 | 3.2277 | 0.44 | 0.3429 | 0.2902 | 0.3657 |
| No log | 18.0 | 72 | 1.2120 | 0.315 | 0.8075 | 4.1289 | 0.315 | 0.2323 | 0.2857 | 0.5909 |
| No log | 19.0 | 76 | 1.1945 | 0.39 | 0.7974 | 4.2350 | 0.39 | 0.3292 | 0.3271 | 0.5989 |
| No log | 20.0 | 80 | 1.3861 | 0.345 | 0.7981 | 5.2605 | 0.345 | 0.2700 | 0.2832 | 0.5299 |
| No log | 21.0 | 84 | 1.2243 | 0.33 | 0.8073 | 4.5262 | 0.33 | 0.2545 | 0.3068 | 0.6133 |
| No log | 22.0 | 88 | 1.0455 | 0.38 | 0.7238 | 2.7133 | 0.38 | 0.3084 | 0.2901 | 0.4855 |
| No log | 23.0 | 92 | 0.9044 | 0.45 | 0.6814 | 3.4361 | 0.45 | 0.3273 | 0.2927 | 0.3246 |
| No log | 24.0 | 96 | 0.8930 | 0.495 | 0.6596 | 3.3412 | 0.495 | 0.4185 | 0.2882 | 0.3070 |
| No log | 25.0 | 100 | 0.8665 | 0.485 | 0.6534 | 2.9998 | 0.485 | 0.4154 | 0.2641 | 0.3298 |
| No log | 26.0 | 104 | 1.0458 | 0.375 | 0.7579 | 3.1074 | 0.375 | 0.3333 | 0.2735 | 0.5293 |
| No log | 27.0 | 108 | 1.0170 | 0.41 | 0.7321 | 2.8884 | 0.41 | 0.3468 | 0.2976 | 0.4566 |
| No log | 28.0 | 112 | 1.0956 | 0.395 | 0.7464 | 3.3094 | 0.395 | 0.3255 | 0.3154 | 0.4684 |
| No log | 29.0 | 116 | 1.0805 | 0.39 | 0.7544 | 3.2115 | 0.39 | 0.3193 | 0.3014 | 0.4594 |
| No log | 30.0 | 120 | 1.2358 | 0.375 | 0.7733 | 4.3992 | 0.375 | 0.3058 | 0.2845 | 0.4876 |
| No log | 31.0 | 124 | 1.0532 | 0.4 | 0.7458 | 2.7398 | 0.4000 | 0.3614 | 0.2890 | 0.4961 |
| No log | 32.0 | 128 | 1.0166 | 0.365 | 0.7355 | 2.5093 | 0.3650 | 0.2862 | 0.2728 | 0.5057 |
| No log | 33.0 | 132 | 0.9395 | 0.48 | 0.6807 | 2.6211 | 0.48 | 0.4394 | 0.2843 | 0.3719 |
| No log | 34.0 | 136 | 0.8718 | 0.52 | 0.6538 | 2.6802 | 0.52 | 0.4697 | 0.2954 | 0.3051 |
| No log | 35.0 | 140 | 0.8339 | 0.51 | 0.6362 | 3.1084 | 0.51 | 0.4373 | 0.2654 | 0.3006 |
| No log | 36.0 | 144 | 0.8411 | 0.51 | 0.6359 | 2.7881 | 0.51 | 0.4286 | 0.2759 | 0.2906 |
| No log | 37.0 | 148 | 0.8556 | 0.505 | 0.6402 | 2.5519 | 0.505 | 0.4076 | 0.2522 | 0.3060 |
| No log | 38.0 | 152 | 1.0928 | 0.395 | 0.7438 | 2.8660 | 0.395 | 0.3337 | 0.2815 | 0.4724 |
| No log | 39.0 | 156 | 1.3830 | 0.39 | 0.8135 | 4.7392 | 0.39 | 0.3094 | 0.2879 | 0.5239 |
| No log | 40.0 | 160 | 1.2180 | 0.38 | 0.7760 | 3.8384 | 0.38 | 0.3106 | 0.2614 | 0.5109 |
| No log | 41.0 | 164 | 1.1337 | 0.365 | 0.7486 | 2.8843 | 0.3650 | 0.2948 | 0.2665 | 0.4630 |
| No log | 42.0 | 168 | 0.8814 | 0.53 | 0.6425 | 2.3353 | 0.53 | 0.4645 | 0.2968 | 0.2973 |
| No log | 43.0 | 172 | 0.8324 | 0.515 | 0.6174 | 2.4407 | 0.515 | 0.4517 | 0.2847 | 0.2742 |
| No log | 44.0 | 176 | 0.8477 | 0.53 | 0.6282 | 2.5469 | 0.53 | 0.4615 | 0.2712 | 0.2831 |
| No log | 45.0 | 180 | 0.8307 | 0.515 | 0.6190 | 2.4871 | 0.515 | 0.4404 | 0.2594 | 0.2845 |
| No log | 46.0 | 184 | 0.8116 | 0.53 | 0.6070 | 2.4944 | 0.53 | 0.4410 | 0.2337 | 0.2451 |
| No log | 47.0 | 188 | 0.8349 | 0.54 | 0.6260 | 2.2843 | 0.54 | 0.4423 | 0.2911 | 0.2616 |
| No log | 48.0 | 192 | 0.8298 | 0.555 | 0.6178 | 2.2946 | 0.555 | 0.4725 | 0.2568 | 0.2482 |
| No log | 49.0 | 196 | 0.8252 | 0.565 | 0.6141 | 2.3311 | 0.565 | 0.4762 | 0.2810 | 0.2504 |
| No log | 50.0 | 200 | 0.8158 | 0.565 | 0.6104 | 2.6027 | 0.565 | 0.4783 | 0.2677 | 0.2516 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.2.0.dev20231112+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
[
"adve",
"email",
"form",
"letter",
"memo",
"news",
"note",
"report",
"resume",
"scientific"
] |
hkivancoral/hushem_5x_deit_base_adamax_0001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_base_adamax_0001_fold5
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4553
- Accuracy: 0.9024
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7557 | 1.0 | 28 | 0.4830 | 0.7805 |
| 0.1392 | 2.0 | 56 | 0.3138 | 0.8293 |
| 0.0115 | 3.0 | 84 | 0.3481 | 0.8537 |
| 0.0078 | 4.0 | 112 | 0.3098 | 0.8537 |
| 0.0013 | 5.0 | 140 | 0.3283 | 0.9024 |
| 0.0023 | 6.0 | 168 | 0.5654 | 0.8537 |
| 0.0004 | 7.0 | 196 | 0.4129 | 0.9024 |
| 0.0003 | 8.0 | 224 | 0.4041 | 0.9024 |
| 0.0002 | 9.0 | 252 | 0.4192 | 0.9024 |
| 0.0002 | 10.0 | 280 | 0.4257 | 0.9024 |
| 0.0002 | 11.0 | 308 | 0.4271 | 0.9024 |
| 0.0002 | 12.0 | 336 | 0.4272 | 0.9024 |
| 0.0001 | 13.0 | 364 | 0.4303 | 0.9024 |
| 0.0002 | 14.0 | 392 | 0.4309 | 0.9024 |
| 0.0001 | 15.0 | 420 | 0.4302 | 0.9024 |
| 0.0001 | 16.0 | 448 | 0.4300 | 0.9024 |
| 0.0001 | 17.0 | 476 | 0.4319 | 0.9024 |
| 0.0001 | 18.0 | 504 | 0.4342 | 0.9024 |
| 0.0001 | 19.0 | 532 | 0.4349 | 0.9024 |
| 0.0001 | 20.0 | 560 | 0.4354 | 0.9024 |
| 0.0001 | 21.0 | 588 | 0.4378 | 0.9024 |
| 0.0001 | 22.0 | 616 | 0.4393 | 0.9024 |
| 0.0001 | 23.0 | 644 | 0.4414 | 0.9024 |
| 0.0001 | 24.0 | 672 | 0.4417 | 0.9024 |
| 0.0001 | 25.0 | 700 | 0.4428 | 0.9024 |
| 0.0001 | 26.0 | 728 | 0.4429 | 0.9024 |
| 0.0001 | 27.0 | 756 | 0.4437 | 0.9024 |
| 0.0001 | 28.0 | 784 | 0.4437 | 0.9024 |
| 0.0001 | 29.0 | 812 | 0.4449 | 0.9024 |
| 0.0001 | 30.0 | 840 | 0.4459 | 0.9024 |
| 0.0001 | 31.0 | 868 | 0.4470 | 0.9024 |
| 0.0001 | 32.0 | 896 | 0.4471 | 0.9024 |
| 0.0001 | 33.0 | 924 | 0.4499 | 0.9024 |
| 0.0001 | 34.0 | 952 | 0.4499 | 0.9024 |
| 0.0001 | 35.0 | 980 | 0.4504 | 0.9024 |
| 0.0001 | 36.0 | 1008 | 0.4504 | 0.9024 |
| 0.0001 | 37.0 | 1036 | 0.4513 | 0.9024 |
| 0.0001 | 38.0 | 1064 | 0.4525 | 0.9024 |
| 0.0001 | 39.0 | 1092 | 0.4530 | 0.9024 |
| 0.0001 | 40.0 | 1120 | 0.4533 | 0.9024 |
| 0.0001 | 41.0 | 1148 | 0.4538 | 0.9024 |
| 0.0001 | 42.0 | 1176 | 0.4539 | 0.9024 |
| 0.0001 | 43.0 | 1204 | 0.4547 | 0.9024 |
| 0.0001 | 44.0 | 1232 | 0.4551 | 0.9024 |
| 0.0001 | 45.0 | 1260 | 0.4551 | 0.9024 |
| 0.0001 | 46.0 | 1288 | 0.4551 | 0.9024 |
| 0.0001 | 47.0 | 1316 | 0.4553 | 0.9024 |
| 0.0001 | 48.0 | 1344 | 0.4553 | 0.9024 |
| 0.0001 | 49.0 | 1372 | 0.4553 | 0.9024 |
| 0.0001 | 50.0 | 1400 | 0.4553 | 0.9024 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
jordyvl/resnet101-base_tobacco-cnn_tobacco3482_kd_CEKD_t1.0_a0.5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet101-base_tobacco-cnn_tobacco3482_kd_CEKD_t1.0_a0.5
This model is a fine-tuned version of [bdpc/resnet101-base_tobacco](https://huggingface.co/bdpc/resnet101-base_tobacco) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8577
- Accuracy: 0.53
- Brier Loss: 0.6406
- Nll: 2.1208
- F1 Micro: 0.53
- F1 Macro: 0.4957
- Ece: 0.3004
- Aurc: 0.3168
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 4 | 1.4267 | 0.05 | 0.9008 | 9.6592 | 0.0500 | 0.0177 | 0.1432 | 0.9439 |
| No log | 2.0 | 8 | 1.4006 | 0.155 | 0.8969 | 7.9140 | 0.155 | 0.0268 | 0.2365 | 0.9603 |
| No log | 3.0 | 12 | 1.4621 | 0.155 | 0.9457 | 13.3695 | 0.155 | 0.0268 | 0.3013 | 0.9107 |
| No log | 4.0 | 16 | 2.1836 | 0.155 | 1.3252 | 12.8977 | 0.155 | 0.0268 | 0.6400 | 0.7514 |
| No log | 5.0 | 20 | 2.4365 | 0.155 | 1.3998 | 8.4435 | 0.155 | 0.0268 | 0.7030 | 0.6102 |
| No log | 6.0 | 24 | 2.1554 | 0.155 | 1.2534 | 6.9190 | 0.155 | 0.0279 | 0.5987 | 0.6271 |
| No log | 7.0 | 28 | 1.5617 | 0.175 | 0.9637 | 5.7454 | 0.175 | 0.0462 | 0.3802 | 0.6485 |
| No log | 8.0 | 32 | 1.3267 | 0.245 | 0.8707 | 5.2368 | 0.245 | 0.0835 | 0.2961 | 0.5438 |
| No log | 9.0 | 36 | 1.2434 | 0.19 | 0.8886 | 5.0360 | 0.19 | 0.0471 | 0.3198 | 0.7720 |
| No log | 10.0 | 40 | 1.0721 | 0.305 | 0.8123 | 4.5157 | 0.305 | 0.1762 | 0.2684 | 0.5269 |
| No log | 11.0 | 44 | 1.1256 | 0.22 | 0.8429 | 3.9215 | 0.22 | 0.1083 | 0.2812 | 0.7346 |
| No log | 12.0 | 48 | 0.9865 | 0.35 | 0.7676 | 3.4553 | 0.35 | 0.2565 | 0.2884 | 0.4790 |
| No log | 13.0 | 52 | 1.0206 | 0.355 | 0.7899 | 3.3582 | 0.3550 | 0.2278 | 0.2954 | 0.5883 |
| No log | 14.0 | 56 | 0.9096 | 0.415 | 0.6994 | 3.2174 | 0.415 | 0.3147 | 0.2563 | 0.3596 |
| No log | 15.0 | 60 | 0.9187 | 0.415 | 0.7129 | 3.2059 | 0.415 | 0.2742 | 0.2941 | 0.3971 |
| No log | 16.0 | 64 | 0.8905 | 0.395 | 0.6956 | 2.9931 | 0.395 | 0.2618 | 0.2590 | 0.3826 |
| No log | 17.0 | 68 | 0.9108 | 0.425 | 0.7073 | 3.1634 | 0.425 | 0.2855 | 0.2995 | 0.3685 |
| No log | 18.0 | 72 | 0.8769 | 0.465 | 0.6706 | 3.1088 | 0.465 | 0.3652 | 0.2855 | 0.3261 |
| No log | 19.0 | 76 | 0.8585 | 0.475 | 0.6687 | 2.8710 | 0.4750 | 0.3884 | 0.2916 | 0.3282 |
| No log | 20.0 | 80 | 0.9822 | 0.405 | 0.7378 | 2.8889 | 0.405 | 0.3570 | 0.2850 | 0.4895 |
| No log | 21.0 | 84 | 0.9324 | 0.445 | 0.6992 | 2.7975 | 0.445 | 0.3553 | 0.3021 | 0.3762 |
| No log | 22.0 | 88 | 1.0330 | 0.42 | 0.7350 | 2.7487 | 0.4200 | 0.3506 | 0.2984 | 0.4771 |
| No log | 23.0 | 92 | 0.8755 | 0.455 | 0.6674 | 2.5903 | 0.455 | 0.3415 | 0.2570 | 0.3352 |
| No log | 24.0 | 96 | 0.8651 | 0.47 | 0.6443 | 2.8456 | 0.47 | 0.3800 | 0.2451 | 0.2975 |
| No log | 25.0 | 100 | 0.9567 | 0.445 | 0.7150 | 2.7083 | 0.445 | 0.3727 | 0.2667 | 0.4676 |
| No log | 26.0 | 104 | 1.0224 | 0.42 | 0.7376 | 2.4408 | 0.4200 | 0.3367 | 0.2968 | 0.5019 |
| No log | 27.0 | 108 | 0.8365 | 0.525 | 0.6407 | 2.6426 | 0.525 | 0.4496 | 0.2960 | 0.2657 |
| No log | 28.0 | 112 | 0.9798 | 0.425 | 0.7287 | 2.6379 | 0.425 | 0.3489 | 0.2640 | 0.4668 |
| No log | 29.0 | 116 | 0.9226 | 0.44 | 0.6965 | 2.5748 | 0.44 | 0.3669 | 0.2561 | 0.4054 |
| No log | 30.0 | 120 | 0.8303 | 0.49 | 0.6398 | 2.4839 | 0.49 | 0.3924 | 0.2981 | 0.2936 |
| No log | 31.0 | 124 | 0.8426 | 0.52 | 0.6478 | 2.5282 | 0.52 | 0.4322 | 0.3109 | 0.3084 |
| No log | 32.0 | 128 | 0.9111 | 0.45 | 0.6970 | 2.3870 | 0.45 | 0.3947 | 0.2837 | 0.4448 |
| No log | 33.0 | 132 | 0.8723 | 0.51 | 0.6524 | 2.6124 | 0.51 | 0.4170 | 0.2536 | 0.3365 |
| No log | 34.0 | 136 | 0.8936 | 0.47 | 0.6671 | 2.8892 | 0.47 | 0.3814 | 0.2436 | 0.3357 |
| No log | 35.0 | 140 | 1.2870 | 0.42 | 0.7660 | 4.4020 | 0.4200 | 0.3468 | 0.2860 | 0.4606 |
| No log | 36.0 | 144 | 0.9991 | 0.455 | 0.7289 | 2.6973 | 0.455 | 0.4132 | 0.3272 | 0.4684 |
| No log | 37.0 | 148 | 1.6352 | 0.365 | 0.8356 | 4.7695 | 0.3650 | 0.3020 | 0.3312 | 0.6069 |
| No log | 38.0 | 152 | 1.3014 | 0.39 | 0.8213 | 2.9436 | 0.39 | 0.3382 | 0.3262 | 0.5476 |
| No log | 39.0 | 156 | 1.0294 | 0.415 | 0.7361 | 2.7188 | 0.415 | 0.3446 | 0.2454 | 0.4632 |
| No log | 40.0 | 160 | 0.8825 | 0.52 | 0.6538 | 2.3887 | 0.52 | 0.4608 | 0.2721 | 0.3186 |
| No log | 41.0 | 164 | 0.8572 | 0.54 | 0.6288 | 2.4201 | 0.54 | 0.4822 | 0.2963 | 0.2899 |
| No log | 42.0 | 168 | 0.8393 | 0.535 | 0.6291 | 2.3587 | 0.535 | 0.4726 | 0.2824 | 0.2937 |
| No log | 43.0 | 172 | 0.8369 | 0.515 | 0.6303 | 2.4060 | 0.515 | 0.4583 | 0.2689 | 0.2903 |
| No log | 44.0 | 176 | 0.8458 | 0.49 | 0.6346 | 2.3323 | 0.49 | 0.4428 | 0.2526 | 0.2951 |
| No log | 45.0 | 180 | 0.8446 | 0.49 | 0.6367 | 2.2207 | 0.49 | 0.4289 | 0.2655 | 0.3041 |
| No log | 46.0 | 184 | 0.8324 | 0.54 | 0.6289 | 2.3685 | 0.54 | 0.4779 | 0.2571 | 0.2873 |
| No log | 47.0 | 188 | 0.8658 | 0.515 | 0.6486 | 2.3922 | 0.515 | 0.4584 | 0.2623 | 0.3100 |
| No log | 48.0 | 192 | 0.8516 | 0.525 | 0.6410 | 2.4448 | 0.525 | 0.4700 | 0.3006 | 0.3044 |
| No log | 49.0 | 196 | 0.8520 | 0.55 | 0.6350 | 2.2049 | 0.55 | 0.4947 | 0.3030 | 0.2980 |
| No log | 50.0 | 200 | 0.8577 | 0.53 | 0.6406 | 2.1208 | 0.53 | 0.4957 | 0.3004 | 0.3168 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.2.0.dev20231112+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
[
"adve",
"email",
"form",
"letter",
"memo",
"news",
"note",
"report",
"resume",
"scientific"
] |
jordyvl/resnet101-base_tobacco-cnn_tobacco3482_kd_MSE
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet101-base_tobacco-cnn_tobacco3482_kd_MSE
This model is a fine-tuned version of [bdpc/resnet101-base_tobacco](https://huggingface.co/bdpc/resnet101-base_tobacco) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1315
- Accuracy: 0.365
- Brier Loss: 0.7313
- Nll: 5.5846
- F1 Micro: 0.3650
- F1 Macro: 0.2369
- Ece: 0.2526
- Aurc: 0.4412
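The Brier loss reported above is the mean squared difference between the predicted class-probability vector and the one-hot true label. A minimal sketch of how such a value can be computed (an illustration of the metric, not the evaluation code used for this card):

```python
import numpy as np

def brier_loss(probs, labels):
    """Mean squared error between predicted probability vectors
    and one-hot encoded true labels, averaged over examples."""
    probs = np.asarray(probs, dtype=float)
    onehot = np.eye(probs.shape[1])[labels]
    return float(np.mean(np.sum((probs - onehot) ** 2, axis=1)))

# Toy example: two samples, three classes.
probs = [[0.7, 0.2, 0.1],
         [0.1, 0.6, 0.3]]
print(round(brier_loss(probs, [0, 2]), 4))  # → 0.5
```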
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 4 | 1.1507 | 0.1 | 0.8998 | 9.9508 | 0.1000 | 0.0462 | 0.1728 | 0.9208 |
| No log | 2.0 | 8 | 0.9900 | 0.155 | 0.8924 | 9.6289 | 0.155 | 0.0268 | 0.2400 | 0.9571 |
| No log | 3.0 | 12 | 0.8441 | 0.155 | 0.9273 | 8.9944 | 0.155 | 0.0268 | 0.3276 | 0.9345 |
| No log | 4.0 | 16 | 1.4048 | 0.155 | 1.3149 | 8.9869 | 0.155 | 0.0268 | 0.6569 | 0.6091 |
| No log | 5.0 | 20 | 1.0761 | 0.155 | 1.1553 | 8.9185 | 0.155 | 0.0272 | 0.5441 | 0.6112 |
| No log | 6.0 | 24 | 1.1745 | 0.155 | 1.1386 | 9.2644 | 0.155 | 0.0304 | 0.4982 | 0.6120 |
| No log | 7.0 | 28 | 0.4686 | 0.225 | 0.8829 | 7.3879 | 0.225 | 0.0724 | 0.3173 | 0.5804 |
| No log | 8.0 | 32 | 0.3535 | 0.24 | 0.8393 | 7.0880 | 0.24 | 0.0797 | 0.2963 | 0.5518 |
| No log | 9.0 | 36 | 0.2519 | 0.295 | 0.8157 | 6.6738 | 0.295 | 0.1375 | 0.2944 | 0.4810 |
| No log | 10.0 | 40 | 0.2957 | 0.265 | 0.8432 | 6.8903 | 0.265 | 0.1030 | 0.3171 | 0.5807 |
| No log | 11.0 | 44 | 0.5224 | 0.21 | 0.8832 | 8.6128 | 0.2100 | 0.0987 | 0.2948 | 0.6814 |
| No log | 12.0 | 48 | 0.4088 | 0.18 | 0.8807 | 7.0533 | 0.18 | 0.0309 | 0.2966 | 0.7466 |
| No log | 13.0 | 52 | 0.5082 | 0.225 | 0.8732 | 8.3126 | 0.225 | 0.0606 | 0.2761 | 0.7285 |
| No log | 14.0 | 56 | 0.5253 | 0.18 | 0.8905 | 8.3229 | 0.18 | 0.0305 | 0.2973 | 0.7838 |
| No log | 15.0 | 60 | 0.5612 | 0.225 | 0.8579 | 7.9410 | 0.225 | 0.0642 | 0.2690 | 0.7108 |
| No log | 16.0 | 64 | 0.2805 | 0.28 | 0.8094 | 6.0275 | 0.28 | 0.1475 | 0.2633 | 0.5701 |
| No log | 17.0 | 68 | 0.3076 | 0.32 | 0.8151 | 6.1462 | 0.32 | 0.1641 | 0.2852 | 0.6162 |
| No log | 18.0 | 72 | 0.3824 | 0.29 | 0.8072 | 6.0214 | 0.29 | 0.1681 | 0.2900 | 0.6048 |
| No log | 19.0 | 76 | 0.5089 | 0.19 | 0.8701 | 8.9391 | 0.19 | 0.0418 | 0.2582 | 0.7152 |
| No log | 20.0 | 80 | 0.1490 | 0.335 | 0.7347 | 5.7349 | 0.335 | 0.1786 | 0.2500 | 0.4430 |
| No log | 21.0 | 84 | 0.3448 | 0.255 | 0.8455 | 6.6598 | 0.255 | 0.0998 | 0.3124 | 0.7183 |
| No log | 22.0 | 88 | 0.6254 | 0.22 | 0.8413 | 6.9926 | 0.22 | 0.0966 | 0.2654 | 0.7197 |
| No log | 23.0 | 92 | 0.5464 | 0.215 | 0.8909 | 8.4952 | 0.2150 | 0.0570 | 0.2931 | 0.7084 |
| No log | 24.0 | 96 | 0.4465 | 0.24 | 0.8445 | 7.2319 | 0.24 | 0.1396 | 0.2575 | 0.6667 |
| No log | 25.0 | 100 | 0.3967 | 0.215 | 0.8547 | 6.7234 | 0.2150 | 0.0962 | 0.2913 | 0.7053 |
| No log | 26.0 | 104 | 0.2459 | 0.295 | 0.8041 | 5.1627 | 0.295 | 0.1901 | 0.2525 | 0.6590 |
| No log | 27.0 | 108 | 0.4125 | 0.19 | 0.8595 | 7.1181 | 0.19 | 0.0551 | 0.2707 | 0.7087 |
| No log | 28.0 | 112 | 0.1686 | 0.36 | 0.7309 | 5.1322 | 0.36 | 0.2178 | 0.2296 | 0.4432 |
| No log | 29.0 | 116 | 0.3573 | 0.205 | 0.8664 | 6.6815 | 0.205 | 0.0523 | 0.2753 | 0.7131 |
| No log | 30.0 | 120 | 0.1634 | 0.32 | 0.7416 | 5.6798 | 0.32 | 0.1862 | 0.2473 | 0.4616 |
| No log | 31.0 | 124 | 0.1404 | 0.35 | 0.7295 | 5.6538 | 0.35 | 0.2152 | 0.2688 | 0.4389 |
| No log | 32.0 | 128 | 0.1435 | 0.325 | 0.7415 | 5.5376 | 0.325 | 0.1439 | 0.2567 | 0.4489 |
| No log | 33.0 | 132 | 0.1428 | 0.33 | 0.7292 | 5.5151 | 0.33 | 0.1791 | 0.2502 | 0.4403 |
| No log | 34.0 | 136 | 0.1602 | 0.33 | 0.7371 | 5.8829 | 0.33 | 0.1941 | 0.2542 | 0.4481 |
| No log | 35.0 | 140 | 0.1663 | 0.325 | 0.7398 | 5.6501 | 0.325 | 0.1880 | 0.2443 | 0.4564 |
| No log | 36.0 | 144 | 0.1637 | 0.35 | 0.7422 | 5.9440 | 0.35 | 0.2053 | 0.2748 | 0.4361 |
| No log | 37.0 | 148 | 0.1520 | 0.325 | 0.7317 | 5.3284 | 0.325 | 0.1787 | 0.2677 | 0.4531 |
| No log | 38.0 | 152 | 0.1585 | 0.335 | 0.7385 | 5.9712 | 0.335 | 0.1939 | 0.2648 | 0.4483 |
| No log | 39.0 | 156 | 0.1491 | 0.335 | 0.7334 | 5.6729 | 0.335 | 0.1912 | 0.2533 | 0.4404 |
| No log | 40.0 | 160 | 0.1367 | 0.32 | 0.7297 | 5.7350 | 0.32 | 0.1818 | 0.2512 | 0.4498 |
| No log | 41.0 | 164 | 0.2089 | 0.335 | 0.7583 | 5.2150 | 0.335 | 0.2073 | 0.2822 | 0.4712 |
| No log | 42.0 | 168 | 0.1612 | 0.335 | 0.7323 | 4.9145 | 0.335 | 0.2058 | 0.2696 | 0.4482 |
| No log | 43.0 | 172 | 0.1616 | 0.335 | 0.7349 | 5.4305 | 0.335 | 0.1916 | 0.2650 | 0.4493 |
| No log | 44.0 | 176 | 0.1477 | 0.335 | 0.7335 | 5.3482 | 0.335 | 0.1761 | 0.2478 | 0.4410 |
| No log | 45.0 | 180 | 0.1426 | 0.34 | 0.7321 | 5.4265 | 0.34 | 0.2018 | 0.2307 | 0.4483 |
| No log | 46.0 | 184 | 0.1531 | 0.345 | 0.7351 | 5.2269 | 0.345 | 0.2108 | 0.2812 | 0.4572 |
| No log | 47.0 | 188 | 0.1426 | 0.34 | 0.7299 | 5.1412 | 0.34 | 0.2040 | 0.2418 | 0.4443 |
| No log | 48.0 | 192 | 0.1321 | 0.335 | 0.7353 | 5.2955 | 0.335 | 0.2017 | 0.2515 | 0.4547 |
| No log | 49.0 | 196 | 0.1330 | 0.34 | 0.7332 | 5.5391 | 0.34 | 0.2065 | 0.2485 | 0.4524 |
| No log | 50.0 | 200 | 0.1315 | 0.365 | 0.7313 | 5.5846 | 0.3650 | 0.2369 | 0.2526 | 0.4412 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.2.0.dev20231112+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
[
"adve",
"email",
"form",
"letter",
"memo",
"news",
"note",
"report",
"resume",
"scientific"
] |
jordyvl/resnet101-base_tobacco-cnn_tobacco3482_kd_NKD_t1.0_g1.5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet101-base_tobacco-cnn_tobacco3482_kd_NKD_t1.0_g1.5
This model is a fine-tuned version of [bdpc/resnet101-base_tobacco](https://huggingface.co/bdpc/resnet101-base_tobacco) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2266
- Accuracy: 0.385
- Brier Loss: 0.7374
- Nll: 4.0859
- F1 Micro: 0.3850
- F1 Macro: 0.2652
- Ece: 0.2858
- Aurc: 0.4261
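The Ece column is the expected calibration error: predictions are grouped into confidence bins, and the gap between each bin's mean confidence and its accuracy is averaged, weighted by bin size. A rough sketch under the common equal-width, 10-bin convention (the exact binning scheme used by this evaluation is an assumption):

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Equal-width binning ECE: the size-weighted average gap between
    mean confidence and accuracy inside each confidence bin."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels)
    conf = probs.max(axis=1)  # confidence of the top prediction
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return float(ece)

# Toy example: four binary predictions, one of them wrong.
probs = [[0.92, 0.08], [0.78, 0.22], [0.61, 0.39], [0.55, 0.45]]
print(round(expected_calibration_error(probs, [0, 1, 0, 0]), 4))  # → 0.425
```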
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 4 | 3.9610 | 0.05 | 0.9004 | 9.1922 | 0.0500 | 0.0100 | 0.1457 | 0.9423 |
| No log | 2.0 | 8 | 3.8752 | 0.155 | 0.8934 | 8.5838 | 0.155 | 0.0268 | 0.2356 | 0.9630 |
| No log | 3.0 | 12 | 3.8665 | 0.155 | 0.9261 | 8.8645 | 0.155 | 0.0268 | 0.3113 | 0.7263 |
| No log | 4.0 | 16 | 5.0552 | 0.155 | 1.3190 | 8.8736 | 0.155 | 0.0268 | 0.6667 | 0.6226 |
| No log | 5.0 | 20 | 4.9755 | 0.155 | 1.2873 | 8.9603 | 0.155 | 0.0270 | 0.6315 | 0.6033 |
| No log | 6.0 | 24 | 4.6069 | 0.155 | 1.1443 | 7.0637 | 0.155 | 0.0301 | 0.5057 | 0.6065 |
| No log | 7.0 | 28 | 3.7058 | 0.22 | 0.9193 | 6.6528 | 0.22 | 0.0685 | 0.3454 | 0.5737 |
| No log | 8.0 | 32 | 3.3000 | 0.25 | 0.8140 | 6.8642 | 0.25 | 0.1011 | 0.2638 | 0.5377 |
| No log | 9.0 | 36 | 3.3805 | 0.195 | 0.8768 | 6.5108 | 0.195 | 0.0779 | 0.2955 | 0.7532 |
| No log | 10.0 | 40 | 3.4626 | 0.2 | 0.8985 | 6.3933 | 0.2000 | 0.0745 | 0.3154 | 0.7384 |
| No log | 11.0 | 44 | 3.2088 | 0.32 | 0.7621 | 6.0433 | 0.32 | 0.1695 | 0.2375 | 0.4457 |
| No log | 12.0 | 48 | 3.4543 | 0.22 | 0.8720 | 6.1413 | 0.22 | 0.1065 | 0.3144 | 0.7026 |
| No log | 13.0 | 52 | 3.5300 | 0.225 | 0.8684 | 7.0938 | 0.225 | 0.1182 | 0.2747 | 0.7110 |
| No log | 14.0 | 56 | 3.5981 | 0.215 | 0.8821 | 7.5146 | 0.2150 | 0.0978 | 0.3047 | 0.7351 |
| No log | 15.0 | 60 | 3.5641 | 0.23 | 0.8895 | 7.7554 | 0.23 | 0.0944 | 0.2985 | 0.7568 |
| No log | 16.0 | 64 | 3.5853 | 0.235 | 0.8698 | 6.6949 | 0.235 | 0.1292 | 0.2634 | 0.6518 |
| No log | 17.0 | 68 | 3.5539 | 0.255 | 0.8597 | 7.5062 | 0.255 | 0.1331 | 0.2821 | 0.6332 |
| No log | 18.0 | 72 | 3.5725 | 0.265 | 0.8569 | 7.4117 | 0.265 | 0.1396 | 0.2708 | 0.5940 |
| No log | 19.0 | 76 | 3.5207 | 0.27 | 0.8415 | 6.5482 | 0.27 | 0.1542 | 0.2592 | 0.5619 |
| No log | 20.0 | 80 | 3.5360 | 0.26 | 0.8573 | 7.4207 | 0.26 | 0.1358 | 0.2942 | 0.5949 |
| No log | 21.0 | 84 | 3.2807 | 0.345 | 0.7933 | 4.8232 | 0.345 | 0.2077 | 0.2903 | 0.5385 |
| No log | 22.0 | 88 | 3.1633 | 0.39 | 0.7217 | 4.3843 | 0.39 | 0.2417 | 0.2547 | 0.3857 |
| No log | 23.0 | 92 | 3.2159 | 0.39 | 0.7463 | 4.4691 | 0.39 | 0.2481 | 0.2923 | 0.3756 |
| No log | 24.0 | 96 | 3.1650 | 0.375 | 0.7248 | 4.4043 | 0.375 | 0.2276 | 0.2433 | 0.3809 |
| No log | 25.0 | 100 | 3.2000 | 0.375 | 0.7470 | 4.7004 | 0.375 | 0.2473 | 0.2671 | 0.4264 |
| No log | 26.0 | 104 | 3.4356 | 0.27 | 0.8326 | 6.6479 | 0.27 | 0.1466 | 0.2636 | 0.5640 |
| No log | 27.0 | 108 | 3.5761 | 0.285 | 0.8347 | 6.5689 | 0.285 | 0.1796 | 0.2537 | 0.6182 |
| No log | 28.0 | 112 | 3.5778 | 0.26 | 0.8546 | 7.0753 | 0.26 | 0.1380 | 0.2629 | 0.5870 |
| No log | 29.0 | 116 | 3.1280 | 0.39 | 0.7075 | 4.5179 | 0.39 | 0.2450 | 0.2479 | 0.3759 |
| No log | 30.0 | 120 | 3.1559 | 0.37 | 0.7268 | 4.3444 | 0.37 | 0.2413 | 0.2588 | 0.3941 |
| No log | 31.0 | 124 | 3.1493 | 0.39 | 0.7133 | 4.6188 | 0.39 | 0.2305 | 0.2338 | 0.3686 |
| No log | 32.0 | 128 | 3.1287 | 0.39 | 0.7015 | 4.0848 | 0.39 | 0.2379 | 0.2271 | 0.3655 |
| No log | 33.0 | 132 | 3.1409 | 0.395 | 0.7048 | 4.0026 | 0.395 | 0.2290 | 0.2210 | 0.3606 |
| No log | 34.0 | 136 | 3.1691 | 0.375 | 0.7210 | 4.4086 | 0.375 | 0.2215 | 0.2495 | 0.3800 |
| No log | 35.0 | 140 | 3.1529 | 0.4 | 0.7117 | 4.1376 | 0.4000 | 0.2487 | 0.2200 | 0.3605 |
| No log | 36.0 | 144 | 3.1088 | 0.4 | 0.6989 | 4.0773 | 0.4000 | 0.2645 | 0.2478 | 0.3641 |
| No log | 37.0 | 148 | 3.2158 | 0.4 | 0.7230 | 4.1145 | 0.4000 | 0.2603 | 0.2517 | 0.3761 |
| No log | 38.0 | 152 | 3.1351 | 0.39 | 0.7064 | 4.3952 | 0.39 | 0.2398 | 0.2475 | 0.3606 |
| No log | 39.0 | 156 | 3.1239 | 0.395 | 0.7001 | 4.0496 | 0.395 | 0.2569 | 0.2364 | 0.3583 |
| No log | 40.0 | 160 | 3.1855 | 0.385 | 0.7169 | 4.0634 | 0.3850 | 0.2274 | 0.2467 | 0.3687 |
| No log | 41.0 | 164 | 3.1938 | 0.37 | 0.7098 | 3.9505 | 0.37 | 0.2146 | 0.2207 | 0.3781 |
| No log | 42.0 | 168 | 3.3495 | 0.395 | 0.7438 | 4.0247 | 0.395 | 0.2428 | 0.2901 | 0.3973 |
| No log | 43.0 | 172 | 3.2352 | 0.395 | 0.7115 | 3.9875 | 0.395 | 0.2431 | 0.2651 | 0.3790 |
| No log | 44.0 | 176 | 3.2838 | 0.39 | 0.7223 | 3.8867 | 0.39 | 0.2246 | 0.2590 | 0.3824 |
| No log | 45.0 | 180 | 3.3175 | 0.395 | 0.7304 | 4.2165 | 0.395 | 0.2286 | 0.2549 | 0.3811 |
| No log | 46.0 | 184 | 3.1183 | 0.395 | 0.6916 | 3.9786 | 0.395 | 0.2338 | 0.2345 | 0.3581 |
| No log | 47.0 | 188 | 3.1608 | 0.395 | 0.7049 | 3.7245 | 0.395 | 0.2580 | 0.2429 | 0.3668 |
| No log | 48.0 | 192 | 3.2144 | 0.38 | 0.7316 | 3.9593 | 0.38 | 0.2512 | 0.2517 | 0.4202 |
| No log | 49.0 | 196 | 3.2781 | 0.365 | 0.7561 | 3.9721 | 0.3650 | 0.2440 | 0.2429 | 0.4654 |
| No log | 50.0 | 200 | 3.2266 | 0.385 | 0.7374 | 4.0859 | 0.3850 | 0.2652 | 0.2858 | 0.4261 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.2.0.dev20231112+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
[
"adve",
"email",
"form",
"letter",
"memo",
"news",
"note",
"report",
"resume",
"scientific"
] |
jordyvl/resnet101-base_tobacco-cnn_tobacco3482_hint
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet101-base_tobacco-cnn_tobacco3482_hint
This model is a fine-tuned version of [bdpc/resnet101-base_tobacco](https://huggingface.co/bdpc/resnet101-base_tobacco) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 24.6607
- Accuracy: 0.57
- Brier Loss: 0.6012
- Nll: 2.9238
- F1 Micro: 0.57
- F1 Macro: 0.5344
- Ece: 0.2496
- Aurc: 0.2274
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 27.2504 | 0.07 | 0.9006 | 8.5876 | 0.07 | 0.0131 | 0.1634 | 0.9646 |
| No log | 2.0 | 14 | 27.1186 | 0.155 | 0.9229 | 12.2960 | 0.155 | 0.0268 | 0.2967 | 0.8769 |
| No log | 3.0 | 21 | 27.9163 | 0.155 | 1.3722 | 11.2040 | 0.155 | 0.0268 | 0.6887 | 0.5963 |
| No log | 4.0 | 28 | 28.2724 | 0.155 | 1.4334 | 9.3615 | 0.155 | 0.0273 | 0.7029 | 0.6185 |
| No log | 5.0 | 35 | 26.9699 | 0.175 | 1.0316 | 5.3928 | 0.175 | 0.0465 | 0.4168 | 0.5989 |
| No log | 6.0 | 42 | 26.1797 | 0.23 | 0.8746 | 3.9558 | 0.23 | 0.0993 | 0.3120 | 0.5627 |
| No log | 7.0 | 49 | 25.8507 | 0.25 | 0.8299 | 3.2357 | 0.25 | 0.1721 | 0.2686 | 0.6618 |
| No log | 8.0 | 56 | 25.7515 | 0.24 | 0.8336 | 2.7738 | 0.24 | 0.1579 | 0.2670 | 0.6619 |
| No log | 9.0 | 63 | 25.3041 | 0.39 | 0.7346 | 2.5881 | 0.39 | 0.2914 | 0.2649 | 0.4362 |
| No log | 10.0 | 70 | 25.1996 | 0.375 | 0.7406 | 2.7338 | 0.375 | 0.2616 | 0.2903 | 0.4923 |
| No log | 11.0 | 77 | 25.0418 | 0.44 | 0.6756 | 3.2534 | 0.44 | 0.3173 | 0.2520 | 0.3197 |
| No log | 12.0 | 84 | 25.3664 | 0.35 | 0.8231 | 3.6209 | 0.35 | 0.2628 | 0.2924 | 0.5484 |
| No log | 13.0 | 91 | 25.0353 | 0.44 | 0.6927 | 3.5523 | 0.44 | 0.3230 | 0.2842 | 0.3332 |
| No log | 14.0 | 98 | 25.2980 | 0.36 | 0.8265 | 3.3953 | 0.36 | 0.2859 | 0.3158 | 0.5347 |
| No log | 15.0 | 105 | 24.8521 | 0.425 | 0.6604 | 3.0888 | 0.425 | 0.3379 | 0.2641 | 0.3096 |
| No log | 16.0 | 112 | 24.8368 | 0.46 | 0.6622 | 2.7863 | 0.46 | 0.3626 | 0.2771 | 0.3429 |
| No log | 17.0 | 119 | 25.0490 | 0.355 | 0.7909 | 2.9342 | 0.3550 | 0.2764 | 0.3300 | 0.5313 |
| No log | 18.0 | 126 | 24.9950 | 0.4 | 0.7521 | 3.5010 | 0.4000 | 0.3467 | 0.2801 | 0.4721 |
| No log | 19.0 | 133 | 24.7232 | 0.505 | 0.6259 | 2.9709 | 0.505 | 0.4017 | 0.2799 | 0.2807 |
| No log | 20.0 | 140 | 24.7500 | 0.5 | 0.6408 | 3.1274 | 0.5 | 0.4278 | 0.2398 | 0.2752 |
| No log | 21.0 | 147 | 24.5976 | 0.54 | 0.5922 | 2.7847 | 0.54 | 0.4872 | 0.2422 | 0.2319 |
| No log | 22.0 | 154 | 24.9329 | 0.42 | 0.7518 | 2.9924 | 0.4200 | 0.3777 | 0.3094 | 0.4446 |
| No log | 23.0 | 161 | 24.6088 | 0.535 | 0.6089 | 2.8494 | 0.535 | 0.5067 | 0.2756 | 0.2770 |
| No log | 24.0 | 168 | 25.1851 | 0.39 | 0.8175 | 3.7625 | 0.39 | 0.3513 | 0.3211 | 0.5049 |
| No log | 25.0 | 175 | 24.5058 | 0.585 | 0.5754 | 2.6524 | 0.585 | 0.5707 | 0.2296 | 0.2227 |
| No log | 26.0 | 182 | 25.2073 | 0.435 | 0.7812 | 3.0365 | 0.435 | 0.3839 | 0.3190 | 0.5012 |
| No log | 27.0 | 189 | 24.7752 | 0.54 | 0.6558 | 2.9071 | 0.54 | 0.4667 | 0.2669 | 0.2898 |
| No log | 28.0 | 196 | 24.8546 | 0.515 | 0.6697 | 2.5989 | 0.515 | 0.4397 | 0.2943 | 0.3817 |
| No log | 29.0 | 203 | 24.5759 | 0.56 | 0.5969 | 2.6234 | 0.56 | 0.5342 | 0.2609 | 0.2493 |
| No log | 30.0 | 210 | 24.7052 | 0.53 | 0.6198 | 2.9462 | 0.53 | 0.4811 | 0.2779 | 0.2766 |
| No log | 31.0 | 217 | 24.5828 | 0.545 | 0.6038 | 2.7967 | 0.545 | 0.4979 | 0.2455 | 0.2369 |
| No log | 32.0 | 224 | 24.6622 | 0.545 | 0.6220 | 2.8878 | 0.545 | 0.4925 | 0.2854 | 0.2682 |
| No log | 33.0 | 231 | 24.6253 | 0.57 | 0.5991 | 3.1607 | 0.57 | 0.5327 | 0.2869 | 0.2518 |
| No log | 34.0 | 238 | 24.6230 | 0.535 | 0.6351 | 2.5626 | 0.535 | 0.5245 | 0.2766 | 0.3077 |
| No log | 35.0 | 245 | 24.5803 | 0.59 | 0.5900 | 2.8215 | 0.59 | 0.5564 | 0.2724 | 0.2563 |
| No log | 36.0 | 252 | 24.5679 | 0.57 | 0.5709 | 3.1573 | 0.57 | 0.5089 | 0.2523 | 0.2222 |
| No log | 37.0 | 259 | 24.5375 | 0.575 | 0.5631 | 2.9349 | 0.575 | 0.5381 | 0.2279 | 0.2007 |
| No log | 38.0 | 266 | 24.6423 | 0.565 | 0.6072 | 2.6772 | 0.565 | 0.5340 | 0.2587 | 0.2247 |
| No log | 39.0 | 273 | 24.6706 | 0.575 | 0.6139 | 2.9241 | 0.575 | 0.5291 | 0.2318 | 0.2416 |
| No log | 40.0 | 280 | 24.6007 | 0.575 | 0.5774 | 2.9918 | 0.575 | 0.5323 | 0.2575 | 0.2138 |
| No log | 41.0 | 287 | 24.7587 | 0.565 | 0.6231 | 2.9588 | 0.565 | 0.5023 | 0.2685 | 0.2665 |
| No log | 42.0 | 294 | 24.5681 | 0.56 | 0.5786 | 2.9999 | 0.56 | 0.5153 | 0.2558 | 0.2093 |
| No log | 43.0 | 301 | 24.5971 | 0.59 | 0.5687 | 3.0595 | 0.59 | 0.5365 | 0.2532 | 0.2004 |
| No log | 44.0 | 308 | 24.6424 | 0.58 | 0.5918 | 2.9073 | 0.58 | 0.5432 | 0.2470 | 0.2113 |
| No log | 45.0 | 315 | 24.5998 | 0.58 | 0.5705 | 3.0442 | 0.58 | 0.5488 | 0.2769 | 0.2011 |
| No log | 46.0 | 322 | 24.5625 | 0.62 | 0.5561 | 2.9855 | 0.62 | 0.5869 | 0.2492 | 0.2069 |
| No log | 47.0 | 329 | 24.6409 | 0.57 | 0.5817 | 2.8587 | 0.57 | 0.5400 | 0.2480 | 0.2239 |
| No log | 48.0 | 336 | 24.6218 | 0.57 | 0.5958 | 2.8299 | 0.57 | 0.5426 | 0.2725 | 0.2251 |
| No log | 49.0 | 343 | 24.5568 | 0.585 | 0.5762 | 2.9178 | 0.585 | 0.5590 | 0.2374 | 0.2102 |
| No log | 50.0 | 350 | 24.6607 | 0.57 | 0.6012 | 2.9238 | 0.57 | 0.5344 | 0.2496 | 0.2274 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.2.0.dev20231112+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
[
"adve",
"email",
"form",
"letter",
"memo",
"news",
"note",
"report",
"resume",
"scientific"
] |
jordyvl/resnet101-base_tobacco-cnn_tobacco3482_simkd
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet101-base_tobacco-cnn_tobacco3482_simkd
This model is a fine-tuned version of [bdpc/resnet101-base_tobacco](https://huggingface.co/bdpc/resnet101-base_tobacco) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 13.1229
- Accuracy: 0.295
- Brier Loss: 0.7636
- Nll: 6.8757
- F1 Micro: 0.295
- F1 Macro: 0.1150
- Ece: 0.2446
- Aurc: 0.4919
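The Aurc column is the area under the risk-coverage curve: examples are sorted by decreasing confidence, and the error rate ("risk") among the most-confident fraction is averaged over all coverage levels. A small sketch of one common empirical definition (whether this evaluation uses exactly this variant is an assumption):

```python
import numpy as np

def aurc(confidences, correct):
    """Average selective risk over all coverage levels, admitting
    examples in order of decreasing confidence."""
    order = np.argsort(-np.asarray(confidences, dtype=float))
    errors = 1.0 - np.asarray(correct, dtype=float)[order]
    k = np.arange(1, len(errors) + 1)
    risks = np.cumsum(errors) / k  # risk when keeping only the top-k examples
    return float(risks.mean())

# Toy example: the third-most-confident prediction is wrong.
print(round(aurc([0.9, 0.8, 0.7, 0.6], [1, 1, 0, 1]), 4))  # 7/48 ≈ 0.1458
```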
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 0.2512 | 0.18 | 0.9617 | 7.0686 | 0.18 | 0.0305 | 0.3439 | 0.7810 |
| No log | 2.0 | 14 | 0.3629 | 0.18 | 1.0943 | 7.0153 | 0.18 | 0.0305 | 0.4345 | 0.8186 |
| No log | 3.0 | 21 | 0.4745 | 0.18 | 1.1577 | 6.9805 | 0.18 | 0.0305 | 0.5034 | 0.8029 |
| No log | 4.0 | 28 | 0.6953 | 0.18 | 1.1290 | 6.9352 | 0.18 | 0.0305 | 0.4731 | 0.8367 |
| No log | 5.0 | 35 | 173.4450 | 0.18 | 1.1346 | 6.8314 | 0.18 | 0.0305 | 0.4615 | 0.8814 |
| No log | 6.0 | 42 | 412.7549 | 0.18 | 1.1098 | 6.8364 | 0.18 | 0.0305 | 0.4420 | 0.8716 |
| No log | 7.0 | 49 | 148.0839 | 0.18 | 1.0291 | 6.9271 | 0.18 | 0.0305 | 0.3960 | 0.7698 |
| No log | 8.0 | 56 | 61.2696 | 0.18 | 0.9674 | 6.9593 | 0.18 | 0.0305 | 0.3413 | 0.7924 |
| No log | 9.0 | 63 | 175.4512 | 0.18 | 0.9708 | 6.9854 | 0.18 | 0.0305 | 0.3549 | 0.8252 |
| No log | 10.0 | 70 | 139.2036 | 0.18 | 0.9400 | 6.9022 | 0.18 | 0.0305 | 0.3300 | 0.7760 |
| No log | 11.0 | 77 | 12.5605 | 0.295 | 0.8656 | 6.9766 | 0.295 | 0.1138 | 0.3093 | 0.5354 |
| No log | 12.0 | 84 | 2.3147 | 0.18 | 0.9363 | 6.9778 | 0.18 | 0.0305 | 0.3084 | 0.7507 |
| No log | 13.0 | 91 | 75.2050 | 0.18 | 0.9543 | 9.1566 | 0.18 | 0.0305 | 0.2990 | 0.7716 |
| No log | 14.0 | 98 | 37.4873 | 0.18 | 0.9410 | 9.1473 | 0.18 | 0.0305 | 0.3029 | 0.7517 |
| No log | 15.0 | 105 | 8.5750 | 0.18 | 0.9304 | 9.1440 | 0.18 | 0.0305 | 0.3033 | 0.7718 |
| No log | 16.0 | 112 | 21.5310 | 0.18 | 0.9232 | 9.1349 | 0.18 | 0.0305 | 0.3122 | 0.7717 |
| No log | 17.0 | 119 | 66.9546 | 0.18 | 0.9287 | 9.1376 | 0.18 | 0.0305 | 0.2920 | 0.7715 |
| No log | 18.0 | 126 | 2.6525 | 0.285 | 0.8357 | 7.0773 | 0.285 | 0.1143 | 0.3156 | 0.5306 |
| No log | 19.0 | 133 | 7.7253 | 0.24 | 0.8574 | 7.0190 | 0.24 | 0.0880 | 0.2948 | 0.7186 |
| No log | 20.0 | 140 | 30.0305 | 0.285 | 0.8086 | 6.9862 | 0.285 | 0.1133 | 0.3001 | 0.5273 |
| No log | 21.0 | 147 | 3.9243 | 0.18 | 0.8680 | 7.4799 | 0.18 | 0.0306 | 0.2739 | 0.7704 |
| No log | 22.0 | 154 | 4.4660 | 0.18 | 0.8831 | 8.9935 | 0.18 | 0.0308 | 0.2652 | 0.7313 |
| No log | 23.0 | 161 | 3.9728 | 0.18 | 0.8719 | 8.9609 | 0.18 | 0.0308 | 0.2600 | 0.7651 |
| No log | 24.0 | 168 | 2.6913 | 0.285 | 0.8089 | 6.9969 | 0.285 | 0.1146 | 0.2873 | 0.5122 |
| No log | 25.0 | 175 | 1.3141 | 0.29 | 0.8086 | 7.0227 | 0.29 | 0.1156 | 0.3154 | 0.5256 |
| No log | 26.0 | 182 | 13.5853 | 0.29 | 0.7782 | 6.8763 | 0.29 | 0.1168 | 0.2735 | 0.5045 |
| No log | 27.0 | 189 | 11.9763 | 0.3 | 0.7730 | 6.8499 | 0.3 | 0.1171 | 0.2740 | 0.4971 |
| No log | 28.0 | 196 | 1.6467 | 0.285 | 0.8067 | 7.1641 | 0.285 | 0.1144 | 0.2870 | 0.5193 |
| No log | 29.0 | 203 | 30.5306 | 0.285 | 0.8424 | 7.1576 | 0.285 | 0.1129 | 0.2686 | 0.6662 |
| No log | 30.0 | 210 | 13.5964 | 0.18 | 0.8584 | 7.0972 | 0.18 | 0.0305 | 0.2704 | 0.7307 |
| No log | 31.0 | 217 | 98.3061 | 0.29 | 0.8274 | 7.0330 | 0.29 | 0.1167 | 0.3163 | 0.5653 |
| No log | 32.0 | 224 | 53.0911 | 0.29 | 0.7984 | 6.9311 | 0.29 | 0.1167 | 0.2911 | 0.5181 |
| No log | 33.0 | 231 | 2.2010 | 0.265 | 0.8291 | 6.9883 | 0.265 | 0.1037 | 0.2945 | 0.6039 |
| No log | 34.0 | 238 | 3.6255 | 0.295 | 0.7836 | 6.8954 | 0.295 | 0.1176 | 0.2636 | 0.5025 |
| No log | 35.0 | 245 | 0.9640 | 0.3 | 0.7571 | 6.7913 | 0.3 | 0.1170 | 0.2388 | 0.4746 |
| No log | 36.0 | 252 | 1.1935 | 0.295 | 0.7711 | 6.7993 | 0.295 | 0.1175 | 0.2619 | 0.4779 |
| No log | 37.0 | 259 | 12.7465 | 0.305 | 0.7650 | 6.8142 | 0.305 | 0.1205 | 0.2512 | 0.4798 |
| No log | 38.0 | 266 | 56.6876 | 0.305 | 0.7840 | 6.8750 | 0.305 | 0.1205 | 0.2835 | 0.4985 |
| No log | 39.0 | 273 | 122.6602 | 0.295 | 0.7919 | 6.9220 | 0.295 | 0.1116 | 0.2493 | 0.5312 |
| No log | 40.0 | 280 | 14.4685 | 0.295 | 0.7757 | 6.8232 | 0.295 | 0.1162 | 0.2575 | 0.4988 |
| No log | 41.0 | 287 | 3.9605 | 0.295 | 0.7601 | 6.7809 | 0.295 | 0.1138 | 0.2437 | 0.4911 |
| No log | 42.0 | 294 | 7.9424 | 0.295 | 0.7567 | 6.7609 | 0.295 | 0.1138 | 0.2398 | 0.4883 |
| No log | 43.0 | 301 | 17.7810 | 0.295 | 0.7713 | 6.8075 | 0.295 | 0.1175 | 0.2503 | 0.5090 |
| No log | 44.0 | 308 | 30.8773 | 0.295 | 0.7747 | 6.8248 | 0.295 | 0.1127 | 0.2651 | 0.5149 |
| No log | 45.0 | 315 | 16.3877 | 0.29 | 0.7736 | 6.8888 | 0.29 | 0.1117 | 0.2641 | 0.5026 |
| No log | 46.0 | 322 | 7.4195 | 0.29 | 0.7674 | 6.8179 | 0.29 | 0.1117 | 0.2621 | 0.4991 |
| No log | 47.0 | 329 | 9.6560 | 0.295 | 0.7694 | 6.8960 | 0.295 | 0.1138 | 0.2604 | 0.4963 |
| No log | 48.0 | 336 | 6.6040 | 0.29 | 0.7622 | 6.7835 | 0.29 | 0.1117 | 0.2271 | 0.4958 |
| No log | 49.0 | 343 | 10.3365 | 0.29 | 0.7640 | 6.8293 | 0.29 | 0.1117 | 0.2583 | 0.4941 |
| No log | 50.0 | 350 | 13.1229 | 0.295 | 0.7636 | 6.8757 | 0.295 | 0.1150 | 0.2446 | 0.4919 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.2.0.dev20231112+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
[
"adve",
"email",
"form",
"letter",
"memo",
"news",
"note",
"report",
"resume",
"scientific"
] |
jordyvl/resnet101-base_tobacco-cnn_tobacco3482_og_simkd
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet101-base_tobacco-cnn_tobacco3482_og_simkd
This model is a fine-tuned version of [bdpc/resnet101-base_tobacco](https://huggingface.co/bdpc/resnet101-base_tobacco) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1263
- Accuracy: 0.295
- Brier Loss: 0.7485
- Nll: 6.2362
- F1 Micro: 0.295
- F1 Macro: 0.1126
- Ece: 0.2177
- Aurc: 0.4648
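The Nll column is a negative log-likelihood: the negative log of the probability the model assigns to the true class. A minimal sketch with a mean reduction (the exact reduction used by this evaluation is an assumption):

```python
import numpy as np

def nll(probs, labels):
    """Mean negative log-probability of the true class."""
    probs = np.asarray(probs, dtype=float)
    true_p = probs[np.arange(len(labels)), labels]
    return float(-np.mean(np.log(true_p)))

# Toy example: two binary predictions.
print(round(nll([[0.5, 0.5], [0.25, 0.75]], [0, 1]), 4))  # → 0.4904
```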
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 0.1887 | 0.18 | 0.8839 | 8.4886 | 0.18 | 0.0305 | 0.2321 | 0.8333 |
| No log | 2.0 | 14 | 0.1901 | 0.18 | 0.8775 | 6.8406 | 0.18 | 0.0305 | 0.2593 | 0.8187 |
| No log | 3.0 | 21 | 0.2085 | 0.28 | 0.9045 | 7.0658 | 0.28 | 0.1005 | 0.3168 | 0.6083 |
| No log | 4.0 | 28 | 0.2493 | 0.155 | 0.9936 | 8.0153 | 0.155 | 0.0300 | 0.3996 | 0.6329 |
| No log | 5.0 | 35 | 0.2952 | 0.16 | 1.0253 | 8.3946 | 0.16 | 0.0355 | 0.4181 | 0.6026 |
| No log | 6.0 | 42 | 0.2344 | 0.195 | 0.9022 | 7.0548 | 0.195 | 0.0485 | 0.3118 | 0.8445 |
| No log | 7.0 | 49 | 0.1665 | 0.18 | 0.9004 | 6.7765 | 0.18 | 0.0310 | 0.2888 | 0.7617 |
| No log | 8.0 | 56 | 0.1696 | 0.18 | 0.9279 | 9.0648 | 0.18 | 0.0309 | 0.3021 | 0.7025 |
| No log | 9.0 | 63 | 0.1715 | 0.18 | 0.9330 | 9.0774 | 0.18 | 0.0305 | 0.2992 | 0.7525 |
| No log | 10.0 | 70 | 0.1369 | 0.285 | 0.8092 | 6.9372 | 0.285 | 0.1134 | 0.2993 | 0.4899 |
| No log | 11.0 | 77 | 0.1584 | 0.18 | 0.8953 | 8.9899 | 0.18 | 0.0310 | 0.2666 | 0.7495 |
| No log | 12.0 | 84 | 0.1690 | 0.18 | 0.8896 | 8.9605 | 0.18 | 0.0310 | 0.2593 | 0.7452 |
| No log | 13.0 | 91 | 0.1636 | 0.18 | 0.8848 | 8.9907 | 0.18 | 0.0310 | 0.2661 | 0.7474 |
| No log | 14.0 | 98 | 0.1685 | 0.18 | 0.8815 | 8.9991 | 0.18 | 0.0309 | 0.2676 | 0.7750 |
| No log | 15.0 | 105 | 0.1678 | 0.18 | 0.8807 | 8.9352 | 0.18 | 0.0305 | 0.2658 | 0.7448 |
| No log | 16.0 | 112 | 0.1599 | 0.18 | 0.8848 | 9.0210 | 0.18 | 0.0309 | 0.2707 | 0.7742 |
| No log | 17.0 | 119 | 0.1553 | 0.18 | 0.8559 | 7.2132 | 0.18 | 0.0305 | 0.2569 | 0.7479 |
| No log | 18.0 | 126 | 0.1620 | 0.18 | 0.8728 | 8.8826 | 0.18 | 0.0308 | 0.2472 | 0.7289 |
| No log | 19.0 | 133 | 0.1631 | 0.18 | 0.8600 | 8.8681 | 0.18 | 0.0305 | 0.2689 | 0.7046 |
| No log | 20.0 | 140 | 0.1616 | 0.18 | 0.8702 | 8.8768 | 0.18 | 0.0305 | 0.2532 | 0.7489 |
| No log | 21.0 | 147 | 0.1521 | 0.18 | 0.8505 | 6.9939 | 0.18 | 0.0310 | 0.2687 | 0.7479 |
| No log | 22.0 | 154 | 0.1290 | 0.285 | 0.7742 | 6.8763 | 0.285 | 0.1123 | 0.2907 | 0.4899 |
| No log | 23.0 | 161 | 0.1256 | 0.305 | 0.7453 | 6.2659 | 0.305 | 0.1190 | 0.2133 | 0.4457 |
| No log | 24.0 | 168 | 0.1257 | 0.305 | 0.7527 | 6.7983 | 0.305 | 0.1192 | 0.2483 | 0.4694 |
| No log | 25.0 | 175 | 0.1256 | 0.295 | 0.7483 | 6.7540 | 0.295 | 0.1106 | 0.2233 | 0.4632 |
| No log | 26.0 | 182 | 0.1277 | 0.3 | 0.7590 | 6.6632 | 0.3 | 0.1214 | 0.2641 | 0.4563 |
| No log | 27.0 | 189 | 0.1644 | 0.18 | 0.8539 | 8.7216 | 0.18 | 0.0306 | 0.2483 | 0.7170 |
| No log | 28.0 | 196 | 0.1268 | 0.305 | 0.7494 | 6.5633 | 0.305 | 0.1146 | 0.2379 | 0.4509 |
| No log | 29.0 | 203 | 0.1246 | 0.305 | 0.7376 | 6.2718 | 0.305 | 0.1158 | 0.2319 | 0.4326 |
| No log | 30.0 | 210 | 0.1249 | 0.3 | 0.7428 | 6.5246 | 0.3 | 0.1138 | 0.2463 | 0.4449 |
| No log | 31.0 | 217 | 0.1284 | 0.295 | 0.7474 | 6.4668 | 0.295 | 0.1116 | 0.2566 | 0.4550 |
| No log | 32.0 | 224 | 0.1715 | 0.18 | 0.8599 | 8.5902 | 0.18 | 0.0310 | 0.2413 | 0.7447 |
| No log | 33.0 | 231 | 0.1566 | 0.18 | 0.8495 | 7.4352 | 0.18 | 0.0308 | 0.2624 | 0.7110 |
| No log | 34.0 | 238 | 0.1370 | 0.275 | 0.7990 | 6.5052 | 0.275 | 0.1096 | 0.2760 | 0.5186 |
| No log | 35.0 | 245 | 0.1289 | 0.3 | 0.7569 | 6.4685 | 0.3 | 0.1212 | 0.2643 | 0.4524 |
| No log | 36.0 | 252 | 0.1557 | 0.18 | 0.8493 | 6.8218 | 0.18 | 0.0305 | 0.2574 | 0.7401 |
| No log | 37.0 | 259 | 0.1629 | 0.18 | 0.8558 | 8.5068 | 0.18 | 0.0310 | 0.2522 | 0.7466 |
| No log | 38.0 | 266 | 0.1386 | 0.275 | 0.8117 | 6.4244 | 0.275 | 0.1053 | 0.2455 | 0.5912 |
| No log | 39.0 | 273 | 0.1601 | 0.18 | 0.8508 | 8.3697 | 0.18 | 0.0305 | 0.2445 | 0.7048 |
| No log | 40.0 | 280 | 0.1510 | 0.185 | 0.8428 | 6.8710 | 0.185 | 0.0369 | 0.2517 | 0.7155 |
| No log | 41.0 | 287 | 0.1315 | 0.29 | 0.7675 | 6.1897 | 0.29 | 0.1167 | 0.2708 | 0.4594 |
| No log | 42.0 | 294 | 0.1235 | 0.3 | 0.7405 | 6.1762 | 0.3 | 0.1158 | 0.2338 | 0.4496 |
| No log | 43.0 | 301 | 0.1250 | 0.295 | 0.7456 | 6.3789 | 0.295 | 0.1174 | 0.2524 | 0.4548 |
| No log | 44.0 | 308 | 0.1249 | 0.285 | 0.7440 | 6.3862 | 0.285 | 0.1097 | 0.2405 | 0.4680 |
| No log | 45.0 | 315 | 0.1245 | 0.29 | 0.7428 | 6.4641 | 0.29 | 0.1117 | 0.2403 | 0.4623 |
| No log | 46.0 | 322 | 0.1245 | 0.295 | 0.7440 | 6.5208 | 0.295 | 0.1149 | 0.2385 | 0.4610 |
| No log | 47.0 | 329 | 0.1250 | 0.29 | 0.7464 | 6.2221 | 0.29 | 0.1117 | 0.2332 | 0.4674 |
| No log | 48.0 | 336 | 0.1263 | 0.295 | 0.7458 | 6.3085 | 0.295 | 0.1126 | 0.2375 | 0.4670 |
| No log | 49.0 | 343 | 0.1252 | 0.29 | 0.7469 | 6.0647 | 0.29 | 0.1117 | 0.2410 | 0.4679 |
| No log | 50.0 | 350 | 0.1263 | 0.295 | 0.7485 | 6.2362 | 0.295 | 0.1126 | 0.2177 | 0.4648 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.2.0.dev20231112+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
[
"adve",
"email",
"form",
"letter",
"memo",
"news",
"note",
"report",
"resume",
"scientific"
] |
dima806/smoker_image_classification
|
Predicts whether the person in an image is smoking, with about 97% accuracy.
See https://www.kaggle.com/code/dima806/smoker-image-detection-vit for more details.
```
Classification report:
precision recall f1-score support
notsmoking 0.9907 0.9464 0.9680 112
smoking 0.9487 0.9911 0.9694 112
accuracy 0.9688 224
macro avg 0.9697 0.9688 0.9687 224
weighted avg 0.9697 0.9688 0.9687 224
```
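The per-class scores above can be recovered from a 2×2 confusion matrix; the matrix below is reconstructed from the report's precision, recall, and support figures (an illustration, not the notebook's own evaluation code):

```python
import numpy as np

# Confusion matrix consistent with the report above
# (rows = true class, cols = predicted class):
#               pred notsmoking  pred smoking
cm = np.array([[106, 6],    # true notsmoking
               [1, 111]])   # true smoking

tp = np.diag(cm)
precision = tp / cm.sum(axis=0)
recall = tp / cm.sum(axis=1)
f1 = 2 * precision * recall / (precision + recall)
accuracy = tp.sum() / cm.sum()

print([round(p, 4) for p in precision])  # → [0.9907, 0.9487]
print([round(r, 4) for r in recall])     # → [0.9464, 0.9911]
print([round(x, 4) for x in f1])         # → [0.968, 0.9694]
print(round(accuracy, 4))                # → 0.9688
```

The macro average in the report is simply the unweighted mean of these per-class scores; it matches the weighted average here because both classes have equal support (112 each).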
|
[
"notsmoking",
"smoking"
] |
cjade100/vit-base-patch16-224-finetuned-flower
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-flower
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 2.1.0+cu118
- Datasets 2.7.1
- Tokenizers 0.13.3
|
[
"daisy",
"dandelion",
"roses",
"sunflowers",
"tulips"
] |
dima806/mammals_45_types_image_classification
|
Predicts one of 45 common mammal types from an image with about 96% accuracy.
See https://www.kaggle.com/code/dima806/mammals-45-types-image-classification-vit for more details.
```
Classification report:
precision recall f1-score support
african_elephant 1.0000 1.0000 1.0000 71
alpaca 0.9200 0.9718 0.9452 71
american_bison 1.0000 1.0000 1.0000 71
anteater 0.9853 0.9437 0.9640 71
arctic_fox 0.9286 0.9155 0.9220 71
armadillo 0.9726 1.0000 0.9861 71
baboon 0.9718 0.9718 0.9718 71
badger 1.0000 0.9718 0.9857 71
blue_whale 0.9710 0.9437 0.9571 71
brown_bear 0.9722 0.9859 0.9790 71
camel 0.9861 1.0000 0.9930 71
dolphin 0.8974 0.9859 0.9396 71
giraffe 0.9857 0.9718 0.9787 71
groundhog 0.9714 0.9577 0.9645 71
highland_cattle 0.9859 0.9859 0.9859 71
horse 1.0000 0.9859 0.9929 71
jackal 0.9577 0.9444 0.9510 72
kangaroo 0.8415 0.9583 0.8961 72
koala 0.9589 0.9859 0.9722 71
manatee 0.9861 0.9861 0.9861 72
mongoose 0.9483 0.7746 0.8527 71
mountain_goat 0.9855 0.9577 0.9714 71
opossum 1.0000 0.9577 0.9784 71
orangutan 1.0000 1.0000 1.0000 71
otter 1.0000 0.9577 0.9784 71
polar_bear 0.9706 0.9296 0.9496 71
porcupine 1.0000 0.9722 0.9859 72
red_panda 0.9718 0.9718 0.9718 71
rhinoceros 0.9859 0.9859 0.9859 71
sea_lion 0.7600 0.8028 0.7808 71
seal 0.8308 0.7500 0.7883 72
snow_leopard 1.0000 1.0000 1.0000 71
squirrel 0.9444 0.9577 0.9510 71
sugar_glider 0.8554 1.0000 0.9221 71
tapir 1.0000 1.0000 1.0000 71
vampire_bat 1.0000 0.9861 0.9930 72
vicuna 1.0000 0.8873 0.9403 71
walrus 0.9342 0.9861 0.9595 72
warthog 0.9571 0.9437 0.9504 71
water_buffalo 0.9333 0.9859 0.9589 71
weasel 0.9583 0.9583 0.9583 72
wildebeest 0.9577 0.9444 0.9510 72
wombat 0.8947 0.9577 0.9252 71
yak 1.0000 0.9437 0.9710 71
zebra 0.9595 1.0000 0.9793 71
accuracy 0.9572 3204
macro avg 0.9587 0.9573 0.9572 3204
weighted avg 0.9586 0.9572 0.9572 3204
```
|
[
"african_elephant",
"alpaca",
"american_bison",
"anteater",
"arctic_fox",
"armadillo",
"baboon",
"badger",
"blue_whale",
"brown_bear",
"camel",
"dolphin",
"giraffe",
"groundhog",
"highland_cattle",
"horse",
"jackal",
"kangaroo",
"koala",
"manatee",
"mongoose",
"mountain_goat",
"opossum",
"orangutan",
"otter",
"polar_bear",
"porcupine",
"red_panda",
"rhinoceros",
"sea_lion",
"seal",
"snow_leopard",
"squirrel",
"sugar_glider",
"tapir",
"vampire_bat",
"vicuna",
"walrus",
"warthog",
"water_buffalo",
"weasel",
"wildebeest",
"wombat",
"yak",
"zebra"
] |
phuong-tk-nguyen/resnet-50-finetuned-cifar10
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-finetuned-cifar10
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9060
- Accuracy: 0.5076
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
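The `total_train_batch_size` above follows from gradient accumulation: the optimizer steps once per `gradient_accumulation_steps` forward/backward passes. A minimal sketch of that relation (assuming a single-device run, as the card does not state otherwise):

```python
# Effective (total) train batch size with gradient accumulation.
train_batch_size = 32
gradient_accumulation_steps = 4
num_devices = 1  # assumption: single GPU

total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 128, matching the value reported above
```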
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.3058 | 0.03 | 10 | 2.3106 | 0.0794 |
| 2.3033 | 0.06 | 20 | 2.3026 | 0.0892 |
| 2.3012 | 0.09 | 30 | 2.2971 | 0.1042 |
| 2.2914 | 0.11 | 40 | 2.2890 | 0.1254 |
| 2.2869 | 0.14 | 50 | 2.2816 | 0.16 |
| 2.2785 | 0.17 | 60 | 2.2700 | 0.1902 |
| 2.2712 | 0.2 | 70 | 2.2602 | 0.2354 |
| 2.2619 | 0.23 | 80 | 2.2501 | 0.2688 |
| 2.2509 | 0.26 | 90 | 2.2383 | 0.3022 |
| 2.2382 | 0.28 | 100 | 2.2229 | 0.3268 |
| 2.2255 | 0.31 | 110 | 2.2084 | 0.353 |
| 2.2164 | 0.34 | 120 | 2.1939 | 0.3608 |
| 2.2028 | 0.37 | 130 | 2.1829 | 0.3668 |
| 2.1977 | 0.4 | 140 | 2.1646 | 0.401 |
| 2.1844 | 0.43 | 150 | 2.1441 | 0.4244 |
| 2.1689 | 0.45 | 160 | 2.1323 | 0.437 |
| 2.1555 | 0.48 | 170 | 2.1159 | 0.4462 |
| 2.1448 | 0.51 | 180 | 2.0992 | 0.45 |
| 2.1313 | 0.54 | 190 | 2.0810 | 0.4642 |
| 2.1189 | 0.57 | 200 | 2.0589 | 0.4708 |
| 2.1111 | 0.6 | 210 | 2.0430 | 0.4828 |
| 2.0905 | 0.63 | 220 | 2.0288 | 0.4938 |
| 2.082 | 0.65 | 230 | 2.0089 | 0.4938 |
| 2.0646 | 0.68 | 240 | 1.9970 | 0.5014 |
| 2.0636 | 0.71 | 250 | 1.9778 | 0.4946 |
| 2.0579 | 0.74 | 260 | 1.9609 | 0.49 |
| 2.028 | 0.77 | 270 | 1.9602 | 0.4862 |
| 2.0447 | 0.8 | 280 | 1.9460 | 0.4934 |
| 2.0168 | 0.82 | 290 | 1.9369 | 0.505 |
| 2.0126 | 0.85 | 300 | 1.9317 | 0.4926 |
| 2.0099 | 0.88 | 310 | 1.9235 | 0.4952 |
| 1.9978 | 0.91 | 320 | 1.9174 | 0.4972 |
| 1.9951 | 0.94 | 330 | 1.9119 | 0.507 |
| 1.9823 | 0.97 | 340 | 1.9120 | 0.4992 |
| 1.985 | 1.0 | 350 | 1.9064 | 0.5022 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.1
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck"
] |
PK-B/roof_classifier
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# PK-B/roof_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.6844
- Validation Loss: 2.3315
- Train Accuracy: 0.425
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 1770, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
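The optimizer config above encodes a linear learning-rate schedule (Keras `PolynomialDecay` with `power=1.0` reduces to linear decay). A minimal re-implementation of that formula in plain Python, using the card's values:

```python
def polynomial_decay(step, initial_lr=3e-05, decay_steps=1770,
                     end_lr=0.0, power=1.0):
    """Keras-style PolynomialDecay; with power=1.0 this is plain linear decay."""
    step = min(step, decay_steps)
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * frac ** power + end_lr

print(polynomial_decay(0))     # 3e-05 at the start of training
print(polynomial_decay(1770))  # 0.0 once decay_steps is reached
```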
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.9736 | 2.9756 | 0.05 | 0 |
| 2.9016 | 2.9430 | 0.1 | 1 |
| 2.8192 | 2.9084 | 0.1 | 2 |
| 2.7004 | 2.8564 | 0.175 | 3 |
| 2.6005 | 2.8109 | 0.175 | 4 |
| 2.4981 | 2.7452 | 0.225 | 5 |
| 2.3819 | 2.6988 | 0.2125 | 6 |
| 2.2867 | 2.6998 | 0.25 | 7 |
| 2.1804 | 2.6510 | 0.275 | 8 |
| 2.1115 | 2.5307 | 0.3375 | 9 |
| 2.0161 | 2.5523 | 0.3 | 10 |
| 1.9189 | 2.5310 | 0.2875 | 11 |
| 1.8863 | 2.4733 | 0.3375 | 12 |
| 1.7518 | 2.4233 | 0.3625 | 13 |
| 1.6844 | 2.3315 | 0.425 | 14 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.14.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"atlas_pinnacle_wwood_shade",
"atlas_pinnacle_wwood_sun",
"gaf_wwood_shade",
"gaf_wwood_sun",
"iko_cornerstone_shade",
"iko_cornerstone_sun",
"malarkey_wwood_shade",
"malarkey_wwood_sun",
"oc_driftwood_shade",
"oc_driftwood_sun",
"tamko_wwood_shade",
"tamko_wwood_sun",
"cteed_maxdef_wwood_shade",
"cteed_maxdef_wwood_sun",
"cteed_wwood_shade",
"cteed_wwood_sun",
"gaf_mission_brown_shade",
"gaf_mission_brown_sun",
"gaf_pewter_gray_shade",
"gaf_pewter_gray_sun"
] |
andakm/swin-tiny-patch4-window7-224
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3635
- Accuracy: 0.5294
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.8 | 3 | 1.7560 | 0.3137 |
| No log | 1.87 | 7 | 1.6225 | 0.3725 |
| 1.7919 | 2.93 | 11 | 1.5661 | 0.4510 |
| 1.7919 | 4.0 | 15 | 1.5332 | 0.4510 |
| 1.7919 | 4.8 | 18 | 1.4522 | 0.5294 |
| 1.5187 | 5.87 | 22 | 1.3873 | 0.4902 |
| 1.5187 | 6.93 | 26 | 1.3741 | 0.4902 |
| 1.2773 | 8.0 | 30 | 1.3635 | 0.5294 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"1-series",
"3-series",
"4-series",
"5-series",
"6-series",
"7-series",
"8-series",
"m3",
"m4",
"m5"
] |
phuong-tk-nguyen/vit-base-patch16-224-finetuned-cifar10
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-cifar10
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0564
- Accuracy: 0.9844
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
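With `lr_scheduler_type: linear` and `lr_scheduler_warmup_ratio: 0.1`, the learning rate ramps up linearly for the first 10% of steps and then decays linearly to zero. A sketch of that multiplier (the function name is illustrative; 350 total steps matches the results table below):

```python
def linear_schedule_with_warmup(step, total_steps=350, warmup_ratio=0.1):
    """Multiplier applied to the peak learning rate (5e-05 above):
    linear warmup for the first warmup_ratio of steps, then linear decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)  # 35 steps here
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print(linear_schedule_with_warmup(0))    # 0.0 at the first step
print(linear_schedule_with_warmup(35))   # 1.0 (peak, end of warmup)
print(linear_schedule_with_warmup(350))  # 0.0 at the last step
```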
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.4597 | 0.03 | 10 | 2.2902 | 0.1662 |
| 2.1429 | 0.06 | 20 | 1.7855 | 0.5086 |
| 1.6466 | 0.09 | 30 | 1.0829 | 0.8484 |
| 0.9962 | 0.11 | 40 | 0.4978 | 0.9288 |
| 0.6127 | 0.14 | 50 | 0.2717 | 0.9508 |
| 0.4544 | 0.17 | 60 | 0.1942 | 0.9588 |
| 0.4352 | 0.2 | 70 | 0.1504 | 0.9672 |
| 0.374 | 0.23 | 80 | 0.1221 | 0.9718 |
| 0.3261 | 0.26 | 90 | 0.1057 | 0.9772 |
| 0.34 | 0.28 | 100 | 0.0943 | 0.979 |
| 0.284 | 0.31 | 110 | 0.0958 | 0.9754 |
| 0.3151 | 0.34 | 120 | 0.0866 | 0.9776 |
| 0.3004 | 0.37 | 130 | 0.0838 | 0.9788 |
| 0.3334 | 0.4 | 140 | 0.0798 | 0.9806 |
| 0.3018 | 0.43 | 150 | 0.0800 | 0.9778 |
| 0.2957 | 0.45 | 160 | 0.0749 | 0.9808 |
| 0.2952 | 0.48 | 170 | 0.0704 | 0.9814 |
| 0.3084 | 0.51 | 180 | 0.0720 | 0.9812 |
| 0.3015 | 0.54 | 190 | 0.0708 | 0.983 |
| 0.2763 | 0.57 | 200 | 0.0672 | 0.9832 |
| 0.3376 | 0.6 | 210 | 0.0700 | 0.982 |
| 0.285 | 0.63 | 220 | 0.0657 | 0.9828 |
| 0.2857 | 0.65 | 230 | 0.0629 | 0.9836 |
| 0.2644 | 0.68 | 240 | 0.0612 | 0.9842 |
| 0.2461 | 0.71 | 250 | 0.0601 | 0.9836 |
| 0.2802 | 0.74 | 260 | 0.0589 | 0.9842 |
| 0.2481 | 0.77 | 270 | 0.0604 | 0.9838 |
| 0.2641 | 0.8 | 280 | 0.0591 | 0.9846 |
| 0.2737 | 0.82 | 290 | 0.0581 | 0.9842 |
| 0.2391 | 0.85 | 300 | 0.0565 | 0.9852 |
| 0.2283 | 0.88 | 310 | 0.0558 | 0.986 |
| 0.2626 | 0.91 | 320 | 0.0559 | 0.9852 |
| 0.2325 | 0.94 | 330 | 0.0563 | 0.9846 |
| 0.2459 | 0.97 | 340 | 0.0565 | 0.9846 |
| 0.2474 | 1.0 | 350 | 0.0564 | 0.9844 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.1
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck"
] |
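Label lists like the one above are typically wired into a model config as `id2label`/`label2id` mappings, with ids assigned by position. A minimal sketch of that convention (the variable names are illustrative):

```python
labels = ["airplane", "automobile", "bird", "cat", "deer",
          "dog", "frog", "horse", "ship", "truck"]

# Positional id <-> label mappings, as used in a transformers model config.
id2label = {i: name for i, name in enumerate(labels)}
label2id = {name: i for i, name in id2label.items()}

print(id2label[0])        # airplane
print(label2id["truck"])  # 9
```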
HarshaSingamshetty1/roof_classifier
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# HarshaSingamshetty1/roof_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.6380
- Validation Loss: 2.1987
- Train Accuracy: 0.4375
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 1770, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.9764 | 2.9730 | 0.05 | 0 |
| 2.8746 | 2.9232 | 0.125 | 1 |
| 2.7792 | 2.8802 | 0.1375 | 2 |
| 2.6648 | 2.8491 | 0.225 | 3 |
| 2.5573 | 2.7563 | 0.1625 | 4 |
| 2.4614 | 2.7155 | 0.2875 | 5 |
| 2.3453 | 2.7005 | 0.2 | 6 |
| 2.2737 | 2.6443 | 0.2875 | 7 |
| 2.1555 | 2.5396 | 0.3625 | 8 |
| 2.0694 | 2.4244 | 0.425 | 9 |
| 2.0112 | 2.3738 | 0.4875 | 10 |
| 1.8867 | 2.3843 | 0.4125 | 11 |
| 1.8217 | 2.2878 | 0.45 | 12 |
| 1.7253 | 2.2642 | 0.475 | 13 |
| 1.6380 | 2.1987 | 0.4375 | 14 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.14.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"atlas_pinnacle_wwood_shade",
"atlas_pinnacle_wwood_sun",
"gaf_wwood_shade",
"gaf_wwood_sun",
"iko_cornerstone_shade",
"iko_cornerstone_sun",
"malarkey_wwood_shade",
"malarkey_wwood_sun",
"oc_driftwood_shade",
"oc_driftwood_sun",
"tamko_wwood_shade",
"tamko_wwood_sun",
"cteed_maxdef_wwood_shade",
"cteed_maxdef_wwood_sun",
"cteed_wwood_shade",
"cteed_wwood_sun",
"gaf_mission_brown_shade",
"gaf_mission_brown_sun",
"gaf_pewter_gray_shade",
"gaf_pewter_gray_sun"
] |
Iust1n2/resnet-18-finetuned-wikiart
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-18-finetuned-wikiart
This model is a fine-tuned version of [microsoft/resnet-18](https://huggingface.co/microsoft/resnet-18) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6256
- Accuracy: 0.6247
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0382 | 1.0 | 2037 | 1.8771 | 0.5938 |
| 1.8027 | 2.0 | 4074 | 1.6860 | 0.6160 |
| 1.7033 | 3.0 | 6111 | 1.6256 | 0.6247 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"unknown artist",
"boris-kustodiev",
"camille-pissarro",
"childe-hassam",
"claude-monet",
"edgar-degas",
"eugene-boudin",
"gustave-dore",
"ilya-repin",
"ivan-aivazovsky",
"ivan-shishkin",
"john-singer-sargent",
"marc-chagall",
"martiros-saryan",
"nicholas-roerich",
"pablo-picasso",
"paul-cezanne",
"pierre-auguste-renoir",
"pyotr-konchalovsky",
"raphael-kirchner",
"rembrandt",
"salvador-dali",
"vincent-van-gogh",
"hieronymus-bosch",
"leonardo-da-vinci",
"albrecht-durer",
"edouard-cortes",
"sam-francis",
"juan-gris",
"lucas-cranach-the-elder",
"paul-gauguin",
"konstantin-makovsky",
"egon-schiele",
"thomas-eakins",
"gustave-moreau",
"francisco-goya",
"edvard-munch",
"henri-matisse",
"fra-angelico",
"maxime-maufra",
"jan-matejko",
"mstislav-dobuzhinsky",
"alfred-sisley",
"mary-cassatt",
"gustave-loiseau",
"fernando-botero",
"zinaida-serebriakova",
"georges-seurat",
"isaac-levitan",
"joaquãn-sorolla",
"jacek-malczewski",
"berthe-morisot",
"andy-warhol",
"arkhip-kuindzhi",
"niko-pirosmani",
"james-tissot",
"vasily-polenov",
"valentin-serov",
"pietro-perugino",
"pierre-bonnard",
"ferdinand-hodler",
"bartolome-esteban-murillo",
"giovanni-boldini",
"henri-martin",
"gustav-klimt",
"vasily-perov",
"odilon-redon",
"tintoretto",
"gene-davis",
"raphael",
"john-henry-twachtman",
"henri-de-toulouse-lautrec",
"antoine-blanchard",
"david-burliuk",
"camille-corot",
"konstantin-korovin",
"ivan-bilibin",
"titian",
"maurice-prendergast",
"edouard-manet",
"peter-paul-rubens",
"aubrey-beardsley",
"paolo-veronese",
"joshua-reynolds",
"kuzma-petrov-vodkin",
"gustave-caillebotte",
"lucian-freud",
"michelangelo",
"dante-gabriel-rossetti",
"felix-vallotton",
"nikolay-bogdanov-belsky",
"georges-braque",
"vasily-surikov",
"fernand-leger",
"konstantin-somov",
"katsushika-hokusai",
"sir-lawrence-alma-tadema",
"vasily-vereshchagin",
"ernst-ludwig-kirchner",
"mikhail-vrubel",
"orest-kiprensky",
"william-merritt-chase",
"aleksey-savrasov",
"hans-memling",
"amedeo-modigliani",
"ivan-kramskoy",
"utagawa-kuniyoshi",
"gustave-courbet",
"william-turner",
"theo-van-rysselberghe",
"joseph-wright",
"edward-burne-jones",
"koloman-moser",
"viktor-vasnetsov",
"anthony-van-dyck",
"raoul-dufy",
"frans-hals",
"hans-holbein-the-younger",
"ilya-mashkov",
"henri-fantin-latour",
"m.c.-escher",
"el-greco",
"mikalojus-ciurlionis",
"james-mcneill-whistler",
"karl-bryullov",
"jacob-jordaens",
"thomas-gainsborough",
"eugene-delacroix",
"canaletto"
] |
fashxp/car_manufacturer_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# car_manufacturer_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7826
- Accuracy: 0.3394
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 7 | 3.1387 | 0.2018 |
| 2.8998 | 2.0 | 14 | 3.1029 | 0.2018 |
| 2.7326 | 3.0 | 21 | 3.0453 | 0.2294 |
| 2.7326 | 4.0 | 28 | 3.0104 | 0.2385 |
| 2.5797 | 5.0 | 35 | 2.9655 | 0.2477 |
| 2.4873 | 6.0 | 42 | 2.9166 | 0.3211 |
| 2.4873 | 7.0 | 49 | 2.9122 | 0.2569 |
| 2.3408 | 8.0 | 56 | 2.8122 | 0.3119 |
| 2.2696 | 9.0 | 63 | 2.8159 | 0.3578 |
| 2.1527 | 10.0 | 70 | 2.8589 | 0.2752 |
| 2.1527 | 11.0 | 77 | 2.8248 | 0.2936 |
| 2.0649 | 12.0 | 84 | 2.7709 | 0.2936 |
| 2.0855 | 13.0 | 91 | 2.8183 | 0.2477 |
| 2.0855 | 14.0 | 98 | 2.7552 | 0.2569 |
| 1.9347 | 15.0 | 105 | 2.7826 | 0.3394 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"ac cars",
"alfa romeo",
"chrysler",
"citroen",
"datsun",
"dodge",
"ferrari",
"fiat",
"ford",
"jaguar",
"lamborghini",
"lincoln",
"amc",
"mazda",
"mercedes",
"mercury",
"mga",
"morris",
"oldsmobile",
"opel",
"peugeot",
"plymouth",
"pontiac",
"aston martin",
"porsche",
"renault",
"saab",
"toyota",
"trabant",
"triumph",
"volkswagen",
"volvo",
"audi",
"austin healey",
"bmw",
"buick",
"cadillac",
"chevrolet"
] |
anirudhmu/swin-tiny-patch4-window7-224-finetuned-soccer-binary2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-soccer-binary2
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1078
- Accuracy: 0.9719
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4085 | 0.99 | 20 | 0.1740 | 0.9544 |
| 0.1281 | 1.98 | 40 | 0.1078 | 0.9719 |
| 0.108 | 2.96 | 60 | 0.0978 | 0.9684 |
| 0.1077 | 4.0 | 81 | 0.1006 | 0.9684 |
| 0.0916 | 4.99 | 101 | 0.0954 | 0.9649 |
| 0.0824 | 5.98 | 121 | 0.0935 | 0.9684 |
| 0.0859 | 6.96 | 141 | 0.0975 | 0.9684 |
| 0.0927 | 8.0 | 162 | 0.0949 | 0.9684 |
| 0.0836 | 8.99 | 182 | 0.0928 | 0.9684 |
| 0.0958 | 9.88 | 200 | 0.0940 | 0.9684 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"closeup",
"overview",
"logo"
] |
phuong-tk-nguyen/swin-base-patch4-window7-224-in22k-finetuned-cifar10
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-base-patch4-window7-224-in22k-finetuned-cifar10
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-base-patch4-window7-224-in22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0414
- Accuracy: 0.9858
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.303 | 0.03 | 10 | 2.1672 | 0.2334 |
| 2.0158 | 0.06 | 20 | 1.6672 | 0.657 |
| 1.4855 | 0.09 | 30 | 0.8292 | 0.8704 |
| 0.7451 | 0.11 | 40 | 0.2578 | 0.93 |
| 0.5618 | 0.14 | 50 | 0.1476 | 0.962 |
| 0.4545 | 0.17 | 60 | 0.1248 | 0.9642 |
| 0.4587 | 0.2 | 70 | 0.0941 | 0.9748 |
| 0.3911 | 0.23 | 80 | 0.0944 | 0.9712 |
| 0.3839 | 0.26 | 90 | 0.0848 | 0.9756 |
| 0.3864 | 0.28 | 100 | 0.0744 | 0.978 |
| 0.3141 | 0.31 | 110 | 0.0673 | 0.98 |
| 0.3764 | 0.34 | 120 | 0.0706 | 0.9764 |
| 0.3003 | 0.37 | 130 | 0.0600 | 0.984 |
| 0.3566 | 0.4 | 140 | 0.0562 | 0.9826 |
| 0.2855 | 0.43 | 150 | 0.0567 | 0.9816 |
| 0.3351 | 0.45 | 160 | 0.0543 | 0.9828 |
| 0.2977 | 0.48 | 170 | 0.0568 | 0.9798 |
| 0.2924 | 0.51 | 180 | 0.0577 | 0.9804 |
| 0.2884 | 0.54 | 190 | 0.0551 | 0.983 |
| 0.3067 | 0.57 | 200 | 0.0487 | 0.983 |
| 0.3159 | 0.6 | 210 | 0.0513 | 0.984 |
| 0.2795 | 0.63 | 220 | 0.0460 | 0.9846 |
| 0.3113 | 0.65 | 230 | 0.0495 | 0.9832 |
| 0.2882 | 0.68 | 240 | 0.0475 | 0.9838 |
| 0.263 | 0.71 | 250 | 0.0449 | 0.9854 |
| 0.2686 | 0.74 | 260 | 0.0510 | 0.9826 |
| 0.2705 | 0.77 | 270 | 0.0483 | 0.9846 |
| 0.2807 | 0.8 | 280 | 0.0430 | 0.9854 |
| 0.2583 | 0.82 | 290 | 0.0452 | 0.9858 |
| 0.2346 | 0.85 | 300 | 0.0435 | 0.9858 |
| 0.2294 | 0.88 | 310 | 0.0434 | 0.986 |
| 0.2608 | 0.91 | 320 | 0.0433 | 0.986 |
| 0.2642 | 0.94 | 330 | 0.0425 | 0.9866 |
| 0.2781 | 0.97 | 340 | 0.0417 | 0.986 |
| 0.247 | 1.0 | 350 | 0.0414 | 0.9858 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.1
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck"
] |
edwinpalegre/vit-base-trashnet-demo
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-trashnet-demo
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the edwinpalegre/trashnet-enhanced dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0701
- Accuracy: 0.9822
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2636 | 0.4 | 100 | 0.2388 | 0.9394 |
| 0.1748 | 0.8 | 200 | 0.1414 | 0.9623 |
| 0.1231 | 1.2 | 300 | 0.1565 | 0.9545 |
| 0.0769 | 1.61 | 400 | 0.1074 | 0.9713 |
| 0.0556 | 2.01 | 500 | 0.0994 | 0.9726 |
| 0.0295 | 2.41 | 600 | 0.0720 | 0.9812 |
| 0.0311 | 2.81 | 700 | 0.0774 | 0.9806 |
| 0.0061 | 3.21 | 800 | 0.0703 | 0.9822 |
| 0.0289 | 3.61 | 900 | 0.0701 | 0.9822 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"biodegradable",
"cardboard",
"glass",
"metal",
"paper",
"plastic",
"trash"
] |
Gracoy/swinv2-base-patch4-window8-256-Kaggle_test_20231123
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-base-patch4-window8-256-Kaggle_test_20231123
This model is a fine-tuned version of [microsoft/swinv2-base-patch4-window8-256](https://huggingface.co/microsoft/swinv2-base-patch4-window8-256) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2066
- Accuracy: 0.9315
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2882 | 1.0 | 64 | 0.2488 | 0.9311 |
| 0.2568 | 2.0 | 128 | 0.2522 | 0.9311 |
| 0.1961 | 3.0 | 192 | 0.2066 | 0.9315 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"0",
"1"
] |
parotnes/my_awesome_food_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6835
- Accuracy: 0.894
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7311 | 0.99 | 62 | 2.5508 | 0.833 |
| 1.8635 | 2.0 | 125 | 1.8232 | 0.9 |
| 1.6152 | 2.98 | 186 | 1.6835 | 0.894 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
Sharon8y/my_awesome_food_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model was trained from scratch on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4322
- Accuracy: 0.8855
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4246 | 0.99 | 41 | 1.0122 | 0.7470 |
| 0.3689 | 1.99 | 82 | 0.8057 | 0.7831 |
| 0.1659 | 2.98 | 123 | 0.4473 | 0.8855 |
| 0.1287 | 4.0 | 165 | 0.5580 | 0.8434 |
| 0.0944 | 4.97 | 205 | 0.4322 | 0.8855 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"acerolas",
"apples",
"apricots",
"avocados",
"bananas",
"blackberries",
"blueberries",
"cantaloupes",
"cherries",
"coconuts",
"figs",
"grapefruits",
"grapes",
"guava",
"kiwifruit",
"lemons",
"limes",
"mangos",
"olives",
"oranges",
"passionfruit",
"peaches",
"pears",
"pineapples",
"plums",
"pomegranates",
"raspberries",
"strawberries",
"tomatoes",
"watermelons"
] |
danieltur/my_awesome_catdog_model
|
# my_awesome_catdog_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cats_vs_dogs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0083
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0132 | 0.99 | 62 | 0.0121 | 1.0 |
| 0.0092 | 2.0 | 125 | 0.0089 | 1.0 |
| 0.0083 | 2.98 | 186 | 0.0083 | 1.0 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"cat",
"dog"
] |
parotnes/my_awesome_animal_model
|
# my_awesome_animal_model
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9721
- Accuracy: 0.966
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.675 | 0.99 | 62 | 1.6475 | 0.966 |
| 1.0692 | 2.0 | 125 | 1.1535 | 0.966 |
| 0.8611 | 2.98 | 186 | 0.9721 | 0.966 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"cane",
"cavallo",
"elefante",
"farfalla",
"gallina",
"gatto",
"mucca",
"pecora",
"ragno",
"scoiattolo"
] |
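The label array above stores classes in index order, which is how a checkpoint's `id2label`/`label2id` mapping is derived. A small sketch of that convention (plain Python, not a library call):

```python
# Reconstruct index <-> label mappings from the class list above.
labels = ["cane", "cavallo", "elefante", "farfalla", "gallina",
          "gatto", "mucca", "pecora", "ragno", "scoiattolo"]

id2label = {i: name for i, name in enumerate(labels)}
label2id = {name: i for i, name in enumerate(labels)}

print(id2label[3])        # farfalla
print(label2id["ragno"])  # 8
```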
scottglover020/convnextv2-tiny-1k-224-finetuned-citrico-2615
|
# convnextv2-tiny-1k-224-finetuned-beans
This model is a fine-tuned version of [facebook/convnextv2-tiny-1k-224](https://huggingface.co/facebook/convnextv2-tiny-1k-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9837
- Accuracy: 0.6381
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7177 | 1.0 | 269 | 0.6939 | 0.5224 |
| 0.6132 | 2.0 | 538 | 0.6692 | 0.6101 |
| 0.642 | 3.0 | 807 | 0.6729 | 0.5951 |
| 0.6431 | 4.0 | 1076 | 0.6700 | 0.5690 |
| 0.5081 | 5.0 | 1345 | 0.7537 | 0.6213 |
| 0.4114 | 6.0 | 1614 | 0.9249 | 0.6175 |
| 0.3991 | 7.0 | 1883 | 0.9837 | 0.6381 |
| 0.2194 | 8.0 | 2152 | 1.4350 | 0.5802 |
| 0.0834 | 9.0 | 2421 | 1.3808 | 0.6138 |
| 0.15 | 10.0 | 2690 | 1.3277 | 0.6306 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"belly",
"notbelly",
"unclear"
] |
SirSkandrani/food_classifier
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# SirSkandrani/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3560
- Validation Loss: 0.3026
- Train Accuracy: 0.93
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
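With `power=1.0`, `cycle=False`, and `end_learning_rate=0.0`, this `PolynomialDecay` schedule reduces to a linear ramp from 3e-05 down to zero over 20,000 steps. A rough pure-Python sketch of the formula (mirroring, not calling, the Keras implementation):

```python
def polynomial_decay(step, initial_lr=3e-05, end_lr=0.0,
                     decay_steps=20000, power=1.0):
    """Learning rate at `step` under a Keras-style PolynomialDecay (cycle=False)."""
    step = min(step, decay_steps)  # clamp once the decay horizon is passed
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * frac ** power + end_lr

print(polynomial_decay(0))       # 3e-05 at the start
print(polynomial_decay(10000))   # 1.5e-05, halfway through the decay
print(polynomial_decay(25000))   # 0.0 once decay_steps is exceeded
```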
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.7916 | 1.6000 | 0.841 | 0 |
| 1.2008 | 0.7763 | 0.904 | 1 |
| 0.6724 | 0.4730 | 0.92 | 2 |
| 0.4895 | 0.3631 | 0.919 | 3 |
| 0.3560 | 0.3026 | 0.93 | 4 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.14.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
parotnes/my_animals_model
|
# my_animals_model
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2858
- Accuracy: 0.1566
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 100
- eval_batch_size: 100
- seed: 42
- gradient_accumulation_steps: 5
- total_train_batch_size: 500
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
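The `linear` scheduler with `warmup_ratio: 0.1` ramps the learning rate up over roughly the first tenth of training and then decays it linearly to zero. A rough sketch under the numbers above (the 126 total steps come from the results table below; this mirrors, but is not, the Transformers implementation):

```python
def linear_lr(step, total_steps=126, warmup_ratio=0.1, base_lr=5e-07):
    """Linear warmup to base_lr, then linear decay to zero."""
    warmup_steps = int(total_steps * warmup_ratio)  # 12 steps here
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_lr(0))    # 0.0 before any warmup
print(linear_lr(12))   # 5e-07 at the peak
print(linear_lr(126))  # 0.0 after full decay
```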
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.287 | 1.0 | 42 | 2.2859 | 0.1595 |
| 2.2873 | 2.0 | 84 | 2.2870 | 0.1610 |
| 2.287 | 3.0 | 126 | 2.2858 | 0.1566 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"t - shirt / top",
"trouser",
"pullover",
"dress",
"coat",
"sandal",
"shirt",
"sneaker",
"bag",
"ankle boot"
] |
hkivancoral/hushem_5x_deit_base_adamax_00001_fold1
|
# hushem_5x_deit_base_adamax_00001_fold1
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1362
- Accuracy: 0.6444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
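The results table below reports 27 optimizer steps per epoch at batch size 32, which bounds the size of this fold's training split. A small sanity-check sketch (assumes no gradient accumulation and that the last, possibly partial, batch is kept):

```python
import math

steps_per_epoch = 27   # from the results table
batch_size = 32

min_samples = (steps_per_epoch - 1) * batch_size + 1   # 833
max_samples = steps_per_epoch * batch_size             # 864

# Both bounds reproduce the observed step count.
assert math.ceil(min_samples / batch_size) == steps_per_epoch
assert math.ceil(max_samples / batch_size) == steps_per_epoch
print(min_samples, max_samples)  # 833 864
```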
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2172 | 1.0 | 27 | 1.2445 | 0.4 |
| 0.8523 | 2.0 | 54 | 1.0947 | 0.4667 |
| 0.5686 | 3.0 | 81 | 1.0185 | 0.5778 |
| 0.4004 | 4.0 | 108 | 0.9768 | 0.5778 |
| 0.2464 | 5.0 | 135 | 0.9587 | 0.5778 |
| 0.1691 | 6.0 | 162 | 0.8886 | 0.6222 |
| 0.1191 | 7.0 | 189 | 0.9107 | 0.6 |
| 0.0619 | 8.0 | 216 | 0.8951 | 0.6444 |
| 0.0336 | 9.0 | 243 | 0.9574 | 0.6 |
| 0.0186 | 10.0 | 270 | 0.9860 | 0.5778 |
| 0.0125 | 11.0 | 297 | 0.9869 | 0.6 |
| 0.0084 | 12.0 | 324 | 1.0113 | 0.6 |
| 0.0076 | 13.0 | 351 | 0.9936 | 0.6 |
| 0.0057 | 14.0 | 378 | 1.0048 | 0.6 |
| 0.0052 | 15.0 | 405 | 1.0120 | 0.6 |
| 0.0044 | 16.0 | 432 | 1.0086 | 0.6222 |
| 0.0038 | 17.0 | 459 | 1.0209 | 0.6222 |
| 0.0036 | 18.0 | 486 | 1.0433 | 0.6222 |
| 0.0032 | 19.0 | 513 | 1.0446 | 0.6444 |
| 0.0029 | 20.0 | 540 | 1.0517 | 0.6444 |
| 0.0025 | 21.0 | 567 | 1.0577 | 0.6444 |
| 0.0023 | 22.0 | 594 | 1.0550 | 0.6444 |
| 0.0022 | 23.0 | 621 | 1.0799 | 0.6444 |
| 0.002 | 24.0 | 648 | 1.0753 | 0.6444 |
| 0.002 | 25.0 | 675 | 1.0830 | 0.6444 |
| 0.002 | 26.0 | 702 | 1.0841 | 0.6444 |
| 0.0018 | 27.0 | 729 | 1.0884 | 0.6444 |
| 0.0017 | 28.0 | 756 | 1.0904 | 0.6444 |
| 0.0017 | 29.0 | 783 | 1.1034 | 0.6444 |
| 0.0016 | 30.0 | 810 | 1.1073 | 0.6444 |
| 0.0015 | 31.0 | 837 | 1.1021 | 0.6444 |
| 0.0015 | 32.0 | 864 | 1.1089 | 0.6444 |
| 0.0014 | 33.0 | 891 | 1.1157 | 0.6444 |
| 0.0014 | 34.0 | 918 | 1.1170 | 0.6444 |
| 0.0013 | 35.0 | 945 | 1.1193 | 0.6444 |
| 0.0012 | 36.0 | 972 | 1.1215 | 0.6444 |
| 0.0013 | 37.0 | 999 | 1.1225 | 0.6444 |
| 0.0012 | 38.0 | 1026 | 1.1226 | 0.6444 |
| 0.0012 | 39.0 | 1053 | 1.1299 | 0.6444 |
| 0.0011 | 40.0 | 1080 | 1.1301 | 0.6444 |
| 0.0012 | 41.0 | 1107 | 1.1312 | 0.6444 |
| 0.0011 | 42.0 | 1134 | 1.1308 | 0.6444 |
| 0.0012 | 43.0 | 1161 | 1.1360 | 0.6444 |
| 0.001 | 44.0 | 1188 | 1.1351 | 0.6444 |
| 0.0011 | 45.0 | 1215 | 1.1359 | 0.6444 |
| 0.0011 | 46.0 | 1242 | 1.1364 | 0.6444 |
| 0.001 | 47.0 | 1269 | 1.1364 | 0.6444 |
| 0.0011 | 48.0 | 1296 | 1.1362 | 0.6444 |
| 0.0011 | 49.0 | 1323 | 1.1362 | 0.6444 |
| 0.0011 | 50.0 | 1350 | 1.1362 | 0.6444 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_5x_deit_base_adamax_00001_fold2
|
# hushem_5x_deit_base_adamax_00001_fold2
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3720
- Accuracy: 0.6667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2573 | 1.0 | 27 | 1.3177 | 0.3778 |
| 0.9474 | 2.0 | 54 | 1.2698 | 0.4667 |
| 0.6743 | 3.0 | 81 | 1.1709 | 0.5333 |
| 0.53 | 4.0 | 108 | 1.1238 | 0.6 |
| 0.3327 | 5.0 | 135 | 1.1060 | 0.6 |
| 0.2187 | 6.0 | 162 | 1.0991 | 0.6444 |
| 0.1497 | 7.0 | 189 | 1.1072 | 0.6444 |
| 0.086 | 8.0 | 216 | 1.1220 | 0.6444 |
| 0.0449 | 9.0 | 243 | 1.1215 | 0.6444 |
| 0.0257 | 10.0 | 270 | 1.1368 | 0.6667 |
| 0.0174 | 11.0 | 297 | 1.1587 | 0.6667 |
| 0.0102 | 12.0 | 324 | 1.1715 | 0.6889 |
| 0.0083 | 13.0 | 351 | 1.2117 | 0.6889 |
| 0.0067 | 14.0 | 378 | 1.2042 | 0.6889 |
| 0.0061 | 15.0 | 405 | 1.2320 | 0.6889 |
| 0.0048 | 16.0 | 432 | 1.2396 | 0.6889 |
| 0.0043 | 17.0 | 459 | 1.2501 | 0.6889 |
| 0.0039 | 18.0 | 486 | 1.2585 | 0.6667 |
| 0.0034 | 19.0 | 513 | 1.2714 | 0.6889 |
| 0.0031 | 20.0 | 540 | 1.2786 | 0.6889 |
| 0.0029 | 21.0 | 567 | 1.2831 | 0.6667 |
| 0.0026 | 22.0 | 594 | 1.2886 | 0.6667 |
| 0.0022 | 23.0 | 621 | 1.2985 | 0.6667 |
| 0.0022 | 24.0 | 648 | 1.3036 | 0.6667 |
| 0.002 | 25.0 | 675 | 1.3071 | 0.6667 |
| 0.002 | 26.0 | 702 | 1.3150 | 0.6667 |
| 0.0017 | 27.0 | 729 | 1.3222 | 0.6667 |
| 0.0018 | 28.0 | 756 | 1.3235 | 0.6667 |
| 0.0018 | 29.0 | 783 | 1.3294 | 0.6667 |
| 0.0017 | 30.0 | 810 | 1.3351 | 0.6667 |
| 0.0015 | 31.0 | 837 | 1.3358 | 0.6667 |
| 0.0016 | 32.0 | 864 | 1.3406 | 0.6667 |
| 0.0015 | 33.0 | 891 | 1.3434 | 0.6667 |
| 0.0014 | 34.0 | 918 | 1.3481 | 0.6667 |
| 0.0013 | 35.0 | 945 | 1.3523 | 0.6667 |
| 0.0013 | 36.0 | 972 | 1.3535 | 0.6667 |
| 0.0013 | 37.0 | 999 | 1.3558 | 0.6667 |
| 0.0012 | 38.0 | 1026 | 1.3590 | 0.6667 |
| 0.0012 | 39.0 | 1053 | 1.3619 | 0.6667 |
| 0.0011 | 40.0 | 1080 | 1.3634 | 0.6667 |
| 0.0012 | 41.0 | 1107 | 1.3657 | 0.6667 |
| 0.0011 | 42.0 | 1134 | 1.3669 | 0.6667 |
| 0.0011 | 43.0 | 1161 | 1.3696 | 0.6667 |
| 0.0011 | 44.0 | 1188 | 1.3699 | 0.6667 |
| 0.0011 | 45.0 | 1215 | 1.3707 | 0.6667 |
| 0.0011 | 46.0 | 1242 | 1.3712 | 0.6667 |
| 0.0011 | 47.0 | 1269 | 1.3718 | 0.6667 |
| 0.0011 | 48.0 | 1296 | 1.3720 | 0.6667 |
| 0.0011 | 49.0 | 1323 | 1.3720 | 0.6667 |
| 0.0011 | 50.0 | 1350 | 1.3720 | 0.6667 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
bortle/astrophotography-object-classifier-alpha5
|
# astrophotography-object-classifier-alpha5
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1827
- Accuracy: 0.9516
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.2639 | 1.0 | 2575 | 0.2192 | 0.9461 |
| 0.2457 | 2.0 | 5150 | 0.2065 | 0.9464 |
| 0.3157 | 3.0 | 7725 | 0.1827 | 0.9516 |
| 0.3149 | 4.0 | 10300 | 0.1855 | 0.9488 |
| 0.1212 | 5.0 | 12875 | 0.2079 | 0.9480 |
| 0.078 | 6.0 | 15450 | 0.2008 | 0.9516 |
| 0.3493 | 7.0 | 18025 | 0.2038 | 0.9497 |
| 0.131 | 8.0 | 20600 | 0.2059 | 0.9510 |
| 0.2658 | 9.0 | 23175 | 0.2089 | 0.9510 |
| 0.0762 | 10.0 | 25750 | 0.2068 | 0.9541 |
| 0.127 | 11.0 | 28325 | 0.1986 | 0.9543 |
| 0.181 | 12.0 | 30900 | 0.2227 | 0.9513 |
| 0.1072 | 13.0 | 33475 | 0.2303 | 0.9502 |
| 0.0179 | 14.0 | 36050 | 0.2240 | 0.9483 |
| 0.1447 | 15.0 | 38625 | 0.2364 | 0.9505 |
| 0.0933 | 16.0 | 41200 | 0.2372 | 0.9532 |
| 0.17 | 17.0 | 43775 | 0.2166 | 0.9557 |
| 0.0463 | 18.0 | 46350 | 0.2852 | 0.9461 |
| 0.1207 | 19.0 | 48925 | 0.2653 | 0.9508 |
| 0.1761 | 20.0 | 51500 | 0.2443 | 0.9521 |
| 0.1441 | 21.0 | 54075 | 0.2464 | 0.9535 |
| 0.1279 | 22.0 | 56650 | 0.2681 | 0.9499 |
| 0.1811 | 23.0 | 59225 | 0.2626 | 0.9538 |
| 0.1737 | 24.0 | 61800 | 0.2604 | 0.9541 |
| 0.0275 | 25.0 | 64375 | 0.2625 | 0.9510 |
| 0.1757 | 26.0 | 66950 | 0.2819 | 0.9488 |
| 0.1257 | 27.0 | 69525 | 0.2708 | 0.9521 |
| 0.1097 | 28.0 | 72100 | 0.2801 | 0.9519 |
| 0.0772 | 29.0 | 74675 | 0.2870 | 0.9499 |
| 0.132 | 30.0 | 77250 | 0.2824 | 0.9497 |
| 0.0652 | 31.0 | 79825 | 0.2628 | 0.9538 |
| 0.0324 | 32.0 | 82400 | 0.3223 | 0.9453 |
| 0.1774 | 33.0 | 84975 | 0.2749 | 0.9549 |
| 0.1178 | 34.0 | 87550 | 0.2905 | 0.9513 |
| 0.0804 | 35.0 | 90125 | 0.3100 | 0.9480 |
| 0.0617 | 36.0 | 92700 | 0.3131 | 0.9475 |
| 0.0348 | 37.0 | 95275 | 0.3341 | 0.9486 |
| 0.0057 | 38.0 | 97850 | 0.3225 | 0.9466 |
| 0.0409 | 39.0 | 100425 | 0.3206 | 0.9483 |
| 0.1052 | 40.0 | 103000 | 0.3212 | 0.9494 |
| 0.0943 | 41.0 | 105575 | 0.3075 | 0.9508 |
| 0.0018 | 42.0 | 108150 | 0.3062 | 0.9519 |
| 0.0287 | 43.0 | 110725 | 0.3224 | 0.9469 |
| 0.0384 | 44.0 | 113300 | 0.3086 | 0.9488 |
| 0.1214 | 45.0 | 115875 | 0.3145 | 0.9494 |
| 0.1735 | 46.0 | 118450 | 0.3191 | 0.9494 |
| 0.0477 | 47.0 | 121025 | 0.3004 | 0.9521 |
| 0.0221 | 48.0 | 123600 | 0.3205 | 0.9480 |
| 0.0939 | 49.0 | 126175 | 0.3431 | 0.9486 |
| 0.0599 | 50.0 | 128750 | 0.3167 | 0.9516 |
| 0.1785 | 51.0 | 131325 | 0.3274 | 0.9513 |
| 0.1039 | 52.0 | 133900 | 0.3114 | 0.9519 |
| 0.0527 | 53.0 | 136475 | 0.3252 | 0.9477 |
| 0.0584 | 54.0 | 139050 | 0.3200 | 0.9510 |
| 0.1022 | 55.0 | 141625 | 0.3284 | 0.9491 |
| 0.013 | 56.0 | 144200 | 0.3386 | 0.9475 |
| 0.0488 | 57.0 | 146775 | 0.3290 | 0.9505 |
| 0.0514 | 58.0 | 149350 | 0.3126 | 0.9535 |
| 0.0184 | 59.0 | 151925 | 0.3196 | 0.9532 |
| 0.1233 | 60.0 | 154500 | 0.3270 | 0.9516 |
| 0.1667 | 61.0 | 157075 | 0.3250 | 0.9502 |
| 0.0497 | 62.0 | 159650 | 0.3375 | 0.9466 |
| 0.0445 | 63.0 | 162225 | 0.3493 | 0.9502 |
| 0.114 | 64.0 | 164800 | 0.3368 | 0.9488 |
| 0.048 | 65.0 | 167375 | 0.3358 | 0.9510 |
| 0.2337 | 66.0 | 169950 | 0.3330 | 0.9510 |
| 0.0705 | 67.0 | 172525 | 0.3480 | 0.9510 |
| 0.094 | 68.0 | 175100 | 0.3508 | 0.9497 |
| 0.0498 | 69.0 | 177675 | 0.3328 | 0.9508 |
| 0.0535 | 70.0 | 180250 | 0.3558 | 0.9499 |
| 0.0217 | 71.0 | 182825 | 0.3583 | 0.9488 |
| 0.0264 | 72.0 | 185400 | 0.3600 | 0.9477 |
| 0.0108 | 73.0 | 187975 | 0.3629 | 0.9491 |
| 0.0446 | 74.0 | 190550 | 0.3570 | 0.9508 |
| 0.0702 | 75.0 | 193125 | 0.3600 | 0.9502 |
| 0.141 | 76.0 | 195700 | 0.3428 | 0.9527 |
| 0.0226 | 77.0 | 198275 | 0.3594 | 0.9502 |
| 0.0055 | 78.0 | 200850 | 0.3653 | 0.9508 |
| 0.1442 | 79.0 | 203425 | 0.3437 | 0.9530 |
| 0.0834 | 80.0 | 206000 | 0.3431 | 0.9524 |
| 0.0388 | 81.0 | 208575 | 0.3426 | 0.9521 |
| 0.0321 | 82.0 | 211150 | 0.3555 | 0.9497 |
| 0.051 | 83.0 | 213725 | 0.3730 | 0.9505 |
| 0.0049 | 84.0 | 216300 | 0.3549 | 0.9527 |
| 0.043 | 85.0 | 218875 | 0.3592 | 0.9524 |
| 0.0284 | 86.0 | 221450 | 0.3749 | 0.9499 |
| 0.0923 | 87.0 | 224025 | 0.3527 | 0.9513 |
| 0.1188 | 88.0 | 226600 | 0.3725 | 0.9486 |
| 0.1493 | 89.0 | 229175 | 0.3560 | 0.9521 |
| 0.0164 | 90.0 | 231750 | 0.3573 | 0.9508 |
| 0.0477 | 91.0 | 234325 | 0.3679 | 0.9502 |
| 0.0827 | 92.0 | 236900 | 0.3683 | 0.9486 |
| 0.0799 | 93.0 | 239475 | 0.3667 | 0.9510 |
| 0.0413 | 94.0 | 242050 | 0.3604 | 0.9516 |
| 0.071 | 95.0 | 244625 | 0.3725 | 0.9483 |
| 0.2079 | 96.0 | 247200 | 0.3688 | 0.9483 |
| 0.0665 | 97.0 | 249775 | 0.3576 | 0.9521 |
| 0.0673 | 98.0 | 252350 | 0.3636 | 0.9513 |
| 0.062 | 99.0 | 254925 | 0.3688 | 0.9513 |
| 0.1217 | 100.0 | 257500 | 0.3742 | 0.9508 |
| 0.0951 | 101.0 | 260075 | 0.3718 | 0.9491 |
| 0.0118 | 102.0 | 262650 | 0.3849 | 0.9491 |
| 0.0307 | 103.0 | 265225 | 0.3644 | 0.9535 |
| 0.0157 | 104.0 | 267800 | 0.3647 | 0.9524 |
| 0.0125 | 105.0 | 270375 | 0.3994 | 0.9486 |
| 0.0213 | 106.0 | 272950 | 0.3775 | 0.9499 |
| 0.1249 | 107.0 | 275525 | 0.3902 | 0.9491 |
| 0.0333 | 108.0 | 278100 | 0.3637 | 0.9516 |
| 0.0545 | 109.0 | 280675 | 0.3663 | 0.9521 |
| 0.1136 | 110.0 | 283250 | 0.3847 | 0.9502 |
| 0.0751 | 111.0 | 285825 | 0.3818 | 0.9513 |
| 0.001 | 112.0 | 288400 | 0.3811 | 0.9521 |
| 0.0282 | 113.0 | 290975 | 0.3843 | 0.9510 |
| 0.1117 | 114.0 | 293550 | 0.3790 | 0.9521 |
| 0.0022 | 115.0 | 296125 | 0.3717 | 0.9521 |
| 0.0203 | 116.0 | 298700 | 0.3794 | 0.9530 |
| 0.0437 | 117.0 | 301275 | 0.3807 | 0.9527 |
| 0.0045 | 118.0 | 303850 | 0.3821 | 0.9530 |
| 0.0015 | 119.0 | 306425 | 0.3867 | 0.9527 |
| 0.1152 | 120.0 | 309000 | 0.3842 | 0.9521 |
| 0.0748 | 121.0 | 311575 | 0.3839 | 0.9527 |
| 0.0955 | 122.0 | 314150 | 0.3805 | 0.9516 |
| 0.0043 | 123.0 | 316725 | 0.3833 | 0.9521 |
| 0.0249 | 124.0 | 319300 | 0.3745 | 0.9497 |
| 0.0002 | 125.0 | 321875 | 0.3744 | 0.9519 |
| 0.0169 | 126.0 | 324450 | 0.3808 | 0.9510 |
| 0.0277 | 127.0 | 327025 | 0.3735 | 0.9524 |
| 0.0082 | 128.0 | 329600 | 0.3831 | 0.9527 |
| 0.0737 | 129.0 | 332175 | 0.3891 | 0.9524 |
| 0.0517 | 130.0 | 334750 | 0.3839 | 0.9530 |
| 0.0218 | 131.0 | 337325 | 0.3863 | 0.9527 |
| 0.0228 | 132.0 | 339900 | 0.3913 | 0.9519 |
| 0.0094 | 133.0 | 342475 | 0.3968 | 0.9513 |
| 0.0784 | 134.0 | 345050 | 0.3871 | 0.9532 |
| 0.0116 | 135.0 | 347625 | 0.3890 | 0.9538 |
| 0.015 | 136.0 | 350200 | 0.3846 | 0.9530 |
| 0.0307 | 137.0 | 352775 | 0.3850 | 0.9530 |
| 0.0081 | 138.0 | 355350 | 0.3852 | 0.9532 |
| 0.0705 | 139.0 | 357925 | 0.3859 | 0.9527 |
| 0.0442 | 140.0 | 360500 | 0.3871 | 0.9524 |
| 0.0888 | 141.0 | 363075 | 0.3851 | 0.9535 |
| 0.0169 | 142.0 | 365650 | 0.3908 | 0.9527 |
| 0.0132 | 143.0 | 368225 | 0.3923 | 0.9527 |
| 0.0349 | 144.0 | 370800 | 0.3880 | 0.9527 |
| 0.0014 | 145.0 | 373375 | 0.3875 | 0.9535 |
| 0.0495 | 146.0 | 375950 | 0.3898 | 0.9535 |
| 0.0006 | 147.0 | 378525 | 0.3908 | 0.9530 |
| 0.0226 | 148.0 | 381100 | 0.3899 | 0.9527 |
| 0.0927 | 149.0 | 383675 | 0.3895 | 0.9527 |
| 0.081 | 150.0 | 386250 | 0.3896 | 0.9527 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"diffuse_nebula",
"galaxy",
"sun",
"supernova_remnant",
"globular_cluster",
"jupiter",
"mars",
"milky_way",
"moon",
"open_cluster",
"planetary_nebula",
"saturn"
] |
hkivancoral/hushem_5x_deit_base_adamax_00001_fold3
|
# hushem_5x_deit_base_adamax_00001_fold3
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4419
- Accuracy: 0.8372
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3275 | 1.0 | 28 | 1.2372 | 0.5814 |
| 1.0641 | 2.0 | 56 | 1.0484 | 0.6977 |
| 0.7591 | 3.0 | 84 | 0.8760 | 0.7442 |
| 0.5652 | 4.0 | 112 | 0.7360 | 0.8140 |
| 0.3906 | 5.0 | 140 | 0.6489 | 0.8372 |
| 0.3059 | 6.0 | 168 | 0.5954 | 0.8605 |
| 0.1994 | 7.0 | 196 | 0.5269 | 0.8372 |
| 0.134 | 8.0 | 224 | 0.5174 | 0.8605 |
| 0.0783 | 9.0 | 252 | 0.4602 | 0.8605 |
| 0.0454 | 10.0 | 280 | 0.4569 | 0.8372 |
| 0.0318 | 11.0 | 308 | 0.4393 | 0.8837 |
| 0.018 | 12.0 | 336 | 0.4222 | 0.8605 |
| 0.0132 | 13.0 | 364 | 0.4453 | 0.8837 |
| 0.0088 | 14.0 | 392 | 0.4098 | 0.8837 |
| 0.0068 | 15.0 | 420 | 0.4226 | 0.8605 |
| 0.0058 | 16.0 | 448 | 0.4268 | 0.8605 |
| 0.0055 | 17.0 | 476 | 0.4132 | 0.8605 |
| 0.0045 | 18.0 | 504 | 0.4342 | 0.8605 |
| 0.004 | 19.0 | 532 | 0.4228 | 0.8605 |
| 0.0033 | 20.0 | 560 | 0.4271 | 0.8372 |
| 0.0033 | 21.0 | 588 | 0.4254 | 0.8372 |
| 0.0029 | 22.0 | 616 | 0.4205 | 0.8372 |
| 0.0027 | 23.0 | 644 | 0.4207 | 0.8372 |
| 0.0024 | 24.0 | 672 | 0.4248 | 0.8605 |
| 0.0022 | 25.0 | 700 | 0.4229 | 0.8372 |
| 0.0021 | 26.0 | 728 | 0.4293 | 0.8372 |
| 0.002 | 27.0 | 756 | 0.4267 | 0.8372 |
| 0.002 | 28.0 | 784 | 0.4239 | 0.8605 |
| 0.0018 | 29.0 | 812 | 0.4273 | 0.8372 |
| 0.0018 | 30.0 | 840 | 0.4313 | 0.8372 |
| 0.0016 | 31.0 | 868 | 0.4289 | 0.8372 |
| 0.0016 | 32.0 | 896 | 0.4329 | 0.8372 |
| 0.0016 | 33.0 | 924 | 0.4313 | 0.8372 |
| 0.0014 | 34.0 | 952 | 0.4362 | 0.8372 |
| 0.0016 | 35.0 | 980 | 0.4336 | 0.8372 |
| 0.0014 | 36.0 | 1008 | 0.4353 | 0.8372 |
| 0.0014 | 37.0 | 1036 | 0.4446 | 0.8372 |
| 0.0013 | 38.0 | 1064 | 0.4482 | 0.8372 |
| 0.0013 | 39.0 | 1092 | 0.4496 | 0.8372 |
| 0.0012 | 40.0 | 1120 | 0.4442 | 0.8372 |
| 0.0013 | 41.0 | 1148 | 0.4456 | 0.8372 |
| 0.0013 | 42.0 | 1176 | 0.4450 | 0.8372 |
| 0.0012 | 43.0 | 1204 | 0.4433 | 0.8372 |
| 0.0012 | 44.0 | 1232 | 0.4424 | 0.8372 |
| 0.0011 | 45.0 | 1260 | 0.4418 | 0.8372 |
| 0.0011 | 46.0 | 1288 | 0.4417 | 0.8372 |
| 0.0011 | 47.0 | 1316 | 0.4421 | 0.8372 |
| 0.0011 | 48.0 | 1344 | 0.4419 | 0.8372 |
| 0.0011 | 49.0 | 1372 | 0.4419 | 0.8372 |
| 0.0011 | 50.0 | 1400 | 0.4419 | 0.8372 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_5x_deit_base_adamax_00001_fold4
|
# hushem_5x_deit_base_adamax_00001_fold4
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2070
- Accuracy: 0.9286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2779 | 1.0 | 28 | 1.2578 | 0.5 |
| 0.9904 | 2.0 | 56 | 1.0864 | 0.5952 |
| 0.7136 | 3.0 | 84 | 0.8757 | 0.7381 |
| 0.5283 | 4.0 | 112 | 0.7271 | 0.8095 |
| 0.3401 | 5.0 | 140 | 0.5900 | 0.8333 |
| 0.2667 | 6.0 | 168 | 0.4970 | 0.8571 |
| 0.1719 | 7.0 | 196 | 0.4291 | 0.8810 |
| 0.1351 | 8.0 | 224 | 0.3736 | 0.8810 |
| 0.0756 | 9.0 | 252 | 0.3239 | 0.8571 |
| 0.0457 | 10.0 | 280 | 0.2724 | 0.9286 |
| 0.0311 | 11.0 | 308 | 0.2513 | 0.9286 |
| 0.0183 | 12.0 | 336 | 0.2397 | 0.9524 |
| 0.0115 | 13.0 | 364 | 0.2242 | 0.9286 |
| 0.0092 | 14.0 | 392 | 0.2124 | 0.9286 |
| 0.0064 | 15.0 | 420 | 0.2027 | 0.9286 |
| 0.0052 | 16.0 | 448 | 0.2099 | 0.9286 |
| 0.0048 | 17.0 | 476 | 0.2208 | 0.9286 |
| 0.0041 | 18.0 | 504 | 0.2156 | 0.9286 |
| 0.0036 | 19.0 | 532 | 0.2081 | 0.9286 |
| 0.0034 | 20.0 | 560 | 0.2100 | 0.9286 |
| 0.003 | 21.0 | 588 | 0.2099 | 0.9286 |
| 0.0028 | 22.0 | 616 | 0.2113 | 0.9286 |
| 0.0024 | 23.0 | 644 | 0.2110 | 0.9286 |
| 0.0023 | 24.0 | 672 | 0.2106 | 0.9286 |
| 0.0022 | 25.0 | 700 | 0.2101 | 0.9286 |
| 0.002 | 26.0 | 728 | 0.2088 | 0.9286 |
| 0.002 | 27.0 | 756 | 0.2066 | 0.9286 |
| 0.0018 | 28.0 | 784 | 0.2096 | 0.9286 |
| 0.0018 | 29.0 | 812 | 0.2064 | 0.9286 |
| 0.0016 | 30.0 | 840 | 0.2088 | 0.9286 |
| 0.0016 | 31.0 | 868 | 0.2088 | 0.9286 |
| 0.0015 | 32.0 | 896 | 0.2078 | 0.9286 |
| 0.0015 | 33.0 | 924 | 0.2057 | 0.9286 |
| 0.0014 | 34.0 | 952 | 0.2073 | 0.9286 |
| 0.0014 | 35.0 | 980 | 0.2070 | 0.9286 |
| 0.0014 | 36.0 | 1008 | 0.2069 | 0.9286 |
| 0.0013 | 37.0 | 1036 | 0.2071 | 0.9286 |
| 0.0013 | 38.0 | 1064 | 0.2055 | 0.9286 |
| 0.0013 | 39.0 | 1092 | 0.2077 | 0.9286 |
| 0.0011 | 40.0 | 1120 | 0.2076 | 0.9286 |
| 0.0012 | 41.0 | 1148 | 0.2068 | 0.9286 |
| 0.0012 | 42.0 | 1176 | 0.2086 | 0.9286 |
| 0.0011 | 43.0 | 1204 | 0.2084 | 0.9286 |
| 0.0011 | 44.0 | 1232 | 0.2077 | 0.9286 |
| 0.0011 | 45.0 | 1260 | 0.2078 | 0.9286 |
| 0.0011 | 46.0 | 1288 | 0.2072 | 0.9286 |
| 0.0011 | 47.0 | 1316 | 0.2070 | 0.9286 |
| 0.0011 | 48.0 | 1344 | 0.2070 | 0.9286 |
| 0.0012 | 49.0 | 1372 | 0.2070 | 0.9286 |
| 0.0011 | 50.0 | 1400 | 0.2070 | 0.9286 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
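For readers wiring these checkpoints into inference code, the label list above maps class indices to class names; a minimal, hypothetical sketch (the logit values below are made up for illustration):

```python
# Class names in index order, taken from the label list above.
id2label = ["01_normal", "02_tapered", "03_pyriform", "04_amorphous"]

def predict_label(logits):
    # Argmax over the four class logits, then look up the class name.
    best = max(range(len(logits)), key=lambda i: logits[i])
    return id2label[best]

print(predict_label([0.1, 2.3, 0.5, 1.1]))  # "02_tapered"
```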
hkivancoral/hushem_5x_deit_base_adamax_00001_fold5
|
# hushem_5x_deit_base_adamax_00001_fold5
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5126
- Accuracy: 0.8780
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2842 | 1.0 | 28 | 1.2327 | 0.4634 |
| 1.0428 | 2.0 | 56 | 1.0859 | 0.5854 |
| 0.7338 | 3.0 | 84 | 0.9001 | 0.6585 |
| 0.526 | 4.0 | 112 | 0.7803 | 0.6829 |
| 0.3644 | 5.0 | 140 | 0.6952 | 0.6829 |
| 0.2797 | 6.0 | 168 | 0.5854 | 0.7073 |
| 0.1574 | 7.0 | 196 | 0.5546 | 0.7561 |
| 0.1102 | 8.0 | 224 | 0.5073 | 0.7805 |
| 0.0668 | 9.0 | 252 | 0.4703 | 0.8049 |
| 0.0352 | 10.0 | 280 | 0.4612 | 0.8293 |
| 0.0202 | 11.0 | 308 | 0.4468 | 0.8537 |
| 0.0126 | 12.0 | 336 | 0.4508 | 0.8537 |
| 0.0083 | 13.0 | 364 | 0.4495 | 0.8537 |
| 0.0078 | 14.0 | 392 | 0.4584 | 0.8780 |
| 0.0058 | 15.0 | 420 | 0.4565 | 0.8537 |
| 0.0051 | 16.0 | 448 | 0.4617 | 0.8780 |
| 0.0041 | 17.0 | 476 | 0.4620 | 0.8537 |
| 0.0038 | 18.0 | 504 | 0.4684 | 0.8780 |
| 0.0035 | 19.0 | 532 | 0.4714 | 0.8780 |
| 0.0033 | 20.0 | 560 | 0.4764 | 0.8780 |
| 0.0027 | 21.0 | 588 | 0.4824 | 0.8780 |
| 0.0027 | 22.0 | 616 | 0.4828 | 0.8780 |
| 0.0023 | 23.0 | 644 | 0.4858 | 0.8780 |
| 0.0021 | 24.0 | 672 | 0.4874 | 0.8780 |
| 0.002 | 25.0 | 700 | 0.4903 | 0.8780 |
| 0.002 | 26.0 | 728 | 0.4912 | 0.8780 |
| 0.0019 | 27.0 | 756 | 0.4926 | 0.8780 |
| 0.0017 | 28.0 | 784 | 0.4940 | 0.8780 |
| 0.0016 | 29.0 | 812 | 0.4962 | 0.8780 |
| 0.0016 | 30.0 | 840 | 0.4964 | 0.8780 |
| 0.0015 | 31.0 | 868 | 0.4991 | 0.8780 |
| 0.0015 | 32.0 | 896 | 0.5002 | 0.8780 |
| 0.0013 | 33.0 | 924 | 0.5024 | 0.8780 |
| 0.0014 | 34.0 | 952 | 0.5035 | 0.8780 |
| 0.0013 | 35.0 | 980 | 0.5045 | 0.8780 |
| 0.0013 | 36.0 | 1008 | 0.5049 | 0.8780 |
| 0.0012 | 37.0 | 1036 | 0.5067 | 0.8780 |
| 0.0012 | 38.0 | 1064 | 0.5085 | 0.8780 |
| 0.0012 | 39.0 | 1092 | 0.5083 | 0.8780 |
| 0.0011 | 40.0 | 1120 | 0.5095 | 0.8780 |
| 0.0011 | 41.0 | 1148 | 0.5101 | 0.8780 |
| 0.0011 | 42.0 | 1176 | 0.5103 | 0.8780 |
| 0.0011 | 43.0 | 1204 | 0.5116 | 0.8780 |
| 0.0011 | 44.0 | 1232 | 0.5122 | 0.8780 |
| 0.0011 | 45.0 | 1260 | 0.5120 | 0.8780 |
| 0.0011 | 46.0 | 1288 | 0.5123 | 0.8780 |
| 0.001 | 47.0 | 1316 | 0.5124 | 0.8780 |
| 0.0011 | 48.0 | 1344 | 0.5126 | 0.8780 |
| 0.001 | 49.0 | 1372 | 0.5126 | 0.8780 |
| 0.0011 | 50.0 | 1400 | 0.5126 | 0.8780 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_5x_deit_base_rms_00001_fold1
|
# hushem_5x_deit_base_rms_00001_fold1
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3013
- Accuracy: 0.8
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7665 | 1.0 | 27 | 0.8626 | 0.6667 |
| 0.1538 | 2.0 | 54 | 0.9929 | 0.6444 |
| 0.0306 | 3.0 | 81 | 1.0027 | 0.6667 |
| 0.0064 | 4.0 | 108 | 0.7842 | 0.7778 |
| 0.0027 | 5.0 | 135 | 0.8679 | 0.7556 |
| 0.0017 | 6.0 | 162 | 0.8696 | 0.7778 |
| 0.0013 | 7.0 | 189 | 0.8740 | 0.8 |
| 0.0009 | 8.0 | 216 | 0.8908 | 0.8 |
| 0.0007 | 9.0 | 243 | 0.9347 | 0.8 |
| 0.0006 | 10.0 | 270 | 0.9426 | 0.8 |
| 0.0005 | 11.0 | 297 | 0.9525 | 0.8 |
| 0.0004 | 12.0 | 324 | 0.9729 | 0.8 |
| 0.0003 | 13.0 | 351 | 0.9639 | 0.8 |
| 0.0003 | 14.0 | 378 | 0.9770 | 0.8 |
| 0.0002 | 15.0 | 405 | 1.0081 | 0.8 |
| 0.0002 | 16.0 | 432 | 1.0083 | 0.8 |
| 0.0002 | 17.0 | 459 | 1.0108 | 0.8 |
| 0.0002 | 18.0 | 486 | 1.0375 | 0.8 |
| 0.0001 | 19.0 | 513 | 1.0395 | 0.8 |
| 0.0001 | 20.0 | 540 | 1.0606 | 0.8 |
| 0.0001 | 21.0 | 567 | 1.0558 | 0.8 |
| 0.0001 | 22.0 | 594 | 1.0825 | 0.8 |
| 0.0001 | 23.0 | 621 | 1.0953 | 0.8 |
| 0.0001 | 24.0 | 648 | 1.1052 | 0.8 |
| 0.0001 | 25.0 | 675 | 1.1190 | 0.8 |
| 0.0001 | 26.0 | 702 | 1.1262 | 0.8 |
| 0.0001 | 27.0 | 729 | 1.1329 | 0.8 |
| 0.0 | 28.0 | 756 | 1.1463 | 0.8 |
| 0.0 | 29.0 | 783 | 1.1643 | 0.8 |
| 0.0 | 30.0 | 810 | 1.1628 | 0.8 |
| 0.0 | 31.0 | 837 | 1.1766 | 0.8 |
| 0.0 | 32.0 | 864 | 1.1948 | 0.8 |
| 0.0 | 33.0 | 891 | 1.2037 | 0.8 |
| 0.0 | 34.0 | 918 | 1.2175 | 0.8 |
| 0.0 | 35.0 | 945 | 1.2224 | 0.8 |
| 0.0 | 36.0 | 972 | 1.2274 | 0.8 |
| 0.0 | 37.0 | 999 | 1.2352 | 0.8 |
| 0.0 | 38.0 | 1026 | 1.2512 | 0.8 |
| 0.0 | 39.0 | 1053 | 1.2560 | 0.8 |
| 0.0 | 40.0 | 1080 | 1.2629 | 0.8 |
| 0.0 | 41.0 | 1107 | 1.2729 | 0.8 |
| 0.0 | 42.0 | 1134 | 1.2812 | 0.8 |
| 0.0 | 43.0 | 1161 | 1.2836 | 0.8 |
| 0.0 | 44.0 | 1188 | 1.2893 | 0.8 |
| 0.0 | 45.0 | 1215 | 1.2944 | 0.8 |
| 0.0 | 46.0 | 1242 | 1.2996 | 0.8 |
| 0.0 | 47.0 | 1269 | 1.3018 | 0.8 |
| 0.0 | 48.0 | 1296 | 1.3014 | 0.8 |
| 0.0 | 49.0 | 1323 | 1.3013 | 0.8 |
| 0.0 | 50.0 | 1350 | 1.3013 | 0.8 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_5x_deit_base_rms_00001_fold2
|
# hushem_5x_deit_base_rms_00001_fold2
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6978
- Accuracy: 0.7333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7885 | 1.0 | 27 | 1.0771 | 0.6222 |
| 0.1275 | 2.0 | 54 | 1.2167 | 0.6667 |
| 0.0229 | 3.0 | 81 | 1.0976 | 0.6667 |
| 0.0044 | 4.0 | 108 | 1.1187 | 0.6889 |
| 0.0023 | 5.0 | 135 | 1.1431 | 0.7111 |
| 0.0016 | 6.0 | 162 | 1.1708 | 0.7111 |
| 0.0013 | 7.0 | 189 | 1.2048 | 0.7333 |
| 0.0009 | 8.0 | 216 | 1.2211 | 0.7111 |
| 0.0007 | 9.0 | 243 | 1.2394 | 0.7333 |
| 0.0006 | 10.0 | 270 | 1.2768 | 0.7111 |
| 0.0005 | 11.0 | 297 | 1.2868 | 0.7333 |
| 0.0004 | 12.0 | 324 | 1.2957 | 0.7333 |
| 0.0003 | 13.0 | 351 | 1.3184 | 0.7333 |
| 0.0003 | 14.0 | 378 | 1.3307 | 0.7333 |
| 0.0003 | 15.0 | 405 | 1.3520 | 0.7333 |
| 0.0002 | 16.0 | 432 | 1.3706 | 0.7333 |
| 0.0002 | 17.0 | 459 | 1.3793 | 0.7333 |
| 0.0002 | 18.0 | 486 | 1.3973 | 0.7333 |
| 0.0001 | 19.0 | 513 | 1.4099 | 0.7333 |
| 0.0001 | 20.0 | 540 | 1.4201 | 0.7333 |
| 0.0001 | 21.0 | 567 | 1.4342 | 0.7333 |
| 0.0001 | 22.0 | 594 | 1.4529 | 0.7333 |
| 0.0001 | 23.0 | 621 | 1.4659 | 0.7333 |
| 0.0001 | 24.0 | 648 | 1.4737 | 0.7333 |
| 0.0001 | 25.0 | 675 | 1.4832 | 0.7333 |
| 0.0001 | 26.0 | 702 | 1.5042 | 0.7333 |
| 0.0001 | 27.0 | 729 | 1.5167 | 0.7333 |
| 0.0001 | 28.0 | 756 | 1.5245 | 0.7333 |
| 0.0 | 29.0 | 783 | 1.5454 | 0.7333 |
| 0.0 | 30.0 | 810 | 1.5513 | 0.7333 |
| 0.0 | 31.0 | 837 | 1.5670 | 0.7333 |
| 0.0 | 32.0 | 864 | 1.5721 | 0.7333 |
| 0.0 | 33.0 | 891 | 1.5854 | 0.7333 |
| 0.0 | 34.0 | 918 | 1.5951 | 0.7333 |
| 0.0 | 35.0 | 945 | 1.6025 | 0.7333 |
| 0.0 | 36.0 | 972 | 1.6174 | 0.7333 |
| 0.0 | 37.0 | 999 | 1.6316 | 0.7333 |
| 0.0 | 38.0 | 1026 | 1.6382 | 0.7333 |
| 0.0 | 39.0 | 1053 | 1.6457 | 0.7333 |
| 0.0 | 40.0 | 1080 | 1.6550 | 0.7333 |
| 0.0 | 41.0 | 1107 | 1.6656 | 0.7333 |
| 0.0 | 42.0 | 1134 | 1.6708 | 0.7333 |
| 0.0 | 43.0 | 1161 | 1.6757 | 0.7333 |
| 0.0 | 44.0 | 1188 | 1.6846 | 0.7333 |
| 0.0 | 45.0 | 1215 | 1.6907 | 0.7333 |
| 0.0 | 46.0 | 1242 | 1.6942 | 0.7333 |
| 0.0 | 47.0 | 1269 | 1.6967 | 0.7333 |
| 0.0 | 48.0 | 1296 | 1.6977 | 0.7333 |
| 0.0 | 49.0 | 1323 | 1.6978 | 0.7333 |
| 0.0 | 50.0 | 1350 | 1.6978 | 0.7333 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
phuong-tk-nguyen/resnet-50-finetuned-cifar10-finetuned-cifar10
|
# resnet-50-finetuned-cifar10-finetuned-cifar10
This model was trained from scratch on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7363
- Accuracy: 0.561
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
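Note that the `total_train_batch_size` above follows directly from the two settings before it; a one-line check (pure arithmetic, no assumptions beyond the values listed):

```python
train_batch_size = 32
gradient_accumulation_steps = 4

# Effective batch size seen by the optimizer per update step.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 128, matching the value reported above
```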
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9934 | 0.14 | 10 | 1.8738 | 0.517 |
| 1.9493 | 0.28 | 20 | 1.8358 | 0.532 |
| 1.9328 | 0.43 | 30 | 1.7941 | 0.515 |
| 1.9175 | 0.57 | 40 | 1.7683 | 0.531 |
| 1.8875 | 0.71 | 50 | 1.7649 | 0.545 |
| 1.8752 | 0.85 | 60 | 1.7309 | 0.559 |
| 1.8881 | 0.99 | 70 | 1.7363 | 0.561 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.1
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck"
] |
notepsk/food_classifier
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# notepsk/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.7870
- Validation Loss: 1.5762
- Train Accuracy: 0.869
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 4000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
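The `PolynomialDecay` config above (power 1.0, end LR 0.0, `cycle: False`) is just a linear ramp from 3e-05 down to zero over 4000 steps; a minimal sketch of the schedule, assuming Keras' documented decay formula:

```python
def polynomial_decay(step, initial_lr=3e-05, end_lr=0.0,
                     decay_steps=4000, power=1.0):
    # With cycle=False, Keras clips the step at decay_steps.
    step = min(step, decay_steps)
    return (initial_lr - end_lr) * (1 - step / decay_steps) ** power + end_lr

print(polynomial_decay(0))     # 3e-05 (starting LR)
print(polynomial_decay(2000))  # 1.5e-05 (halfway, since power=1.0)
print(polynomial_decay(4000))  # 0.0 (fully decayed)
```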
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.7870 | 1.5762 | 0.869 | 0 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.14.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
hkivancoral/hushem_5x_deit_base_rms_00001_fold3
|
# hushem_5x_deit_base_rms_00001_fold3
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6600
- Accuracy: 0.8837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7956 | 1.0 | 28 | 0.6205 | 0.7674 |
| 0.1411 | 2.0 | 56 | 0.3123 | 0.8837 |
| 0.014 | 3.0 | 84 | 0.3844 | 0.9070 |
| 0.0034 | 4.0 | 112 | 0.3408 | 0.8837 |
| 0.0022 | 5.0 | 140 | 0.4472 | 0.8605 |
| 0.0015 | 6.0 | 168 | 0.3917 | 0.8605 |
| 0.0011 | 7.0 | 196 | 0.3836 | 0.8837 |
| 0.0008 | 8.0 | 224 | 0.4047 | 0.8837 |
| 0.0006 | 9.0 | 252 | 0.4079 | 0.8605 |
| 0.0005 | 10.0 | 280 | 0.4138 | 0.8837 |
| 0.0004 | 11.0 | 308 | 0.4271 | 0.8837 |
| 0.0004 | 12.0 | 336 | 0.4048 | 0.8837 |
| 0.0003 | 13.0 | 364 | 0.4452 | 0.8837 |
| 0.0002 | 14.0 | 392 | 0.4491 | 0.8837 |
| 0.0002 | 15.0 | 420 | 0.4640 | 0.8837 |
| 0.0002 | 16.0 | 448 | 0.4755 | 0.8837 |
| 0.0002 | 17.0 | 476 | 0.4421 | 0.8837 |
| 0.0001 | 18.0 | 504 | 0.4868 | 0.8837 |
| 0.0001 | 19.0 | 532 | 0.5095 | 0.8837 |
| 0.0001 | 20.0 | 560 | 0.5094 | 0.8837 |
| 0.0001 | 21.0 | 588 | 0.5135 | 0.8837 |
| 0.0001 | 22.0 | 616 | 0.5162 | 0.8837 |
| 0.0001 | 23.0 | 644 | 0.5296 | 0.8837 |
| 0.0001 | 24.0 | 672 | 0.5403 | 0.8837 |
| 0.0001 | 25.0 | 700 | 0.5417 | 0.8837 |
| 0.0001 | 26.0 | 728 | 0.5505 | 0.8837 |
| 0.0 | 27.0 | 756 | 0.5557 | 0.8837 |
| 0.0 | 28.0 | 784 | 0.5868 | 0.8837 |
| 0.0 | 29.0 | 812 | 0.5803 | 0.8837 |
| 0.0 | 30.0 | 840 | 0.5730 | 0.8837 |
| 0.0 | 31.0 | 868 | 0.5921 | 0.8837 |
| 0.0 | 32.0 | 896 | 0.5971 | 0.8837 |
| 0.0 | 33.0 | 924 | 0.5949 | 0.8837 |
| 0.0 | 34.0 | 952 | 0.6083 | 0.8837 |
| 0.0 | 35.0 | 980 | 0.5834 | 0.8837 |
| 0.0 | 36.0 | 1008 | 0.6025 | 0.8605 |
| 0.0 | 37.0 | 1036 | 0.6316 | 0.8837 |
| 0.0 | 38.0 | 1064 | 0.6619 | 0.8837 |
| 0.0 | 39.0 | 1092 | 0.6540 | 0.8837 |
| 0.0 | 40.0 | 1120 | 0.6507 | 0.8837 |
| 0.0 | 41.0 | 1148 | 0.6507 | 0.8837 |
| 0.0 | 42.0 | 1176 | 0.6547 | 0.8837 |
| 0.0 | 43.0 | 1204 | 0.6523 | 0.8837 |
| 0.0 | 44.0 | 1232 | 0.6524 | 0.8837 |
| 0.0 | 45.0 | 1260 | 0.6538 | 0.8837 |
| 0.0 | 46.0 | 1288 | 0.6554 | 0.8837 |
| 0.0 | 47.0 | 1316 | 0.6605 | 0.8837 |
| 0.0 | 48.0 | 1344 | 0.6600 | 0.8837 |
| 0.0 | 49.0 | 1372 | 0.6600 | 0.8837 |
| 0.0 | 50.0 | 1400 | 0.6600 | 0.8837 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_5x_deit_base_rms_00001_fold4
|
# hushem_5x_deit_base_rms_00001_fold4
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1652
- Accuracy: 0.9524
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8112 | 1.0 | 28 | 0.6674 | 0.6667 |
| 0.1706 | 2.0 | 56 | 0.2999 | 0.8810 |
| 0.0445 | 3.0 | 84 | 0.2095 | 0.9524 |
| 0.0064 | 4.0 | 112 | 0.0984 | 0.9762 |
| 0.0024 | 5.0 | 140 | 0.0814 | 0.9762 |
| 0.0019 | 6.0 | 168 | 0.0875 | 0.9762 |
| 0.0013 | 7.0 | 196 | 0.1085 | 0.9762 |
| 0.001 | 8.0 | 224 | 0.0872 | 0.9762 |
| 0.0008 | 9.0 | 252 | 0.0829 | 0.9762 |
| 0.0006 | 10.0 | 280 | 0.0900 | 0.9762 |
| 0.0005 | 11.0 | 308 | 0.0916 | 0.9762 |
| 0.0004 | 12.0 | 336 | 0.0984 | 0.9762 |
| 0.0004 | 13.0 | 364 | 0.0990 | 0.9762 |
| 0.0003 | 14.0 | 392 | 0.0990 | 0.9762 |
| 0.0003 | 15.0 | 420 | 0.0986 | 0.9762 |
| 0.0002 | 16.0 | 448 | 0.1009 | 0.9762 |
| 0.0002 | 17.0 | 476 | 0.1045 | 0.9762 |
| 0.0002 | 18.0 | 504 | 0.1034 | 0.9762 |
| 0.0001 | 19.0 | 532 | 0.1105 | 0.9762 |
| 0.0001 | 20.0 | 560 | 0.1077 | 0.9762 |
| 0.0001 | 21.0 | 588 | 0.1166 | 0.9762 |
| 0.0001 | 22.0 | 616 | 0.1228 | 0.9762 |
| 0.0001 | 23.0 | 644 | 0.1152 | 0.9762 |
| 0.0001 | 24.0 | 672 | 0.1166 | 0.9762 |
| 0.0001 | 25.0 | 700 | 0.1199 | 0.9762 |
| 0.0001 | 26.0 | 728 | 0.1209 | 0.9762 |
| 0.0001 | 27.0 | 756 | 0.1278 | 0.9762 |
| 0.0001 | 28.0 | 784 | 0.1240 | 0.9762 |
| 0.0 | 29.0 | 812 | 0.1343 | 0.9762 |
| 0.0 | 30.0 | 840 | 0.1301 | 0.9762 |
| 0.0 | 31.0 | 868 | 0.1433 | 0.9762 |
| 0.0 | 32.0 | 896 | 0.1322 | 0.9762 |
| 0.0 | 33.0 | 924 | 0.1376 | 0.9762 |
| 0.0 | 34.0 | 952 | 0.1447 | 0.9524 |
| 0.0 | 35.0 | 980 | 0.1392 | 0.9762 |
| 0.0 | 36.0 | 1008 | 0.1480 | 0.9524 |
| 0.0 | 37.0 | 1036 | 0.1479 | 0.9762 |
| 0.0 | 38.0 | 1064 | 0.1555 | 0.9524 |
| 0.0 | 39.0 | 1092 | 0.1571 | 0.9524 |
| 0.0 | 40.0 | 1120 | 0.1585 | 0.9524 |
| 0.0 | 41.0 | 1148 | 0.1656 | 0.9524 |
| 0.0 | 42.0 | 1176 | 0.1596 | 0.9524 |
| 0.0 | 43.0 | 1204 | 0.1611 | 0.9524 |
| 0.0 | 44.0 | 1232 | 0.1624 | 0.9524 |
| 0.0 | 45.0 | 1260 | 0.1633 | 0.9524 |
| 0.0 | 46.0 | 1288 | 0.1648 | 0.9524 |
| 0.0 | 47.0 | 1316 | 0.1649 | 0.9524 |
| 0.0 | 48.0 | 1344 | 0.1652 | 0.9524 |
| 0.0 | 49.0 | 1372 | 0.1652 | 0.9524 |
| 0.0 | 50.0 | 1400 | 0.1652 | 0.9524 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_5x_deit_base_rms_00001_fold5
|
# hushem_5x_deit_base_rms_00001_fold5
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6351
- Accuracy: 0.9024
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8457 | 1.0 | 28 | 0.5947 | 0.7317 |
| 0.1932 | 2.0 | 56 | 0.3789 | 0.8780 |
| 0.0378 | 3.0 | 84 | 0.3371 | 0.9024 |
| 0.0061 | 4.0 | 112 | 0.3727 | 0.9024 |
| 0.0027 | 5.0 | 140 | 0.3487 | 0.9024 |
| 0.0018 | 6.0 | 168 | 0.3750 | 0.9024 |
| 0.0012 | 7.0 | 196 | 0.3872 | 0.9024 |
| 0.0009 | 8.0 | 224 | 0.3976 | 0.9024 |
| 0.0007 | 9.0 | 252 | 0.4053 | 0.9024 |
| 0.0006 | 10.0 | 280 | 0.4125 | 0.9024 |
| 0.0005 | 11.0 | 308 | 0.4192 | 0.9024 |
| 0.0004 | 12.0 | 336 | 0.4329 | 0.9024 |
| 0.0003 | 13.0 | 364 | 0.4400 | 0.9024 |
| 0.0003 | 14.0 | 392 | 0.4408 | 0.9024 |
| 0.0002 | 15.0 | 420 | 0.4473 | 0.9024 |
| 0.0002 | 16.0 | 448 | 0.4630 | 0.9024 |
| 0.0002 | 17.0 | 476 | 0.4703 | 0.9024 |
| 0.0002 | 18.0 | 504 | 0.4685 | 0.9024 |
| 0.0001 | 19.0 | 532 | 0.4848 | 0.9024 |
| 0.0001 | 20.0 | 560 | 0.5034 | 0.9024 |
| 0.0001 | 21.0 | 588 | 0.5008 | 0.9024 |
| 0.0001 | 22.0 | 616 | 0.5129 | 0.9024 |
| 0.0001 | 23.0 | 644 | 0.5167 | 0.9024 |
| 0.0001 | 24.0 | 672 | 0.5213 | 0.9024 |
| 0.0001 | 25.0 | 700 | 0.5209 | 0.9024 |
| 0.0001 | 26.0 | 728 | 0.5340 | 0.9024 |
| 0.0001 | 27.0 | 756 | 0.5439 | 0.9024 |
| 0.0 | 28.0 | 784 | 0.5491 | 0.9024 |
| 0.0 | 29.0 | 812 | 0.5502 | 0.9024 |
| 0.0 | 30.0 | 840 | 0.5577 | 0.9024 |
| 0.0 | 31.0 | 868 | 0.5662 | 0.9024 |
| 0.0 | 32.0 | 896 | 0.5801 | 0.9024 |
| 0.0 | 33.0 | 924 | 0.5760 | 0.9024 |
| 0.0 | 34.0 | 952 | 0.5820 | 0.9024 |
| 0.0 | 35.0 | 980 | 0.5825 | 0.9024 |
| 0.0 | 36.0 | 1008 | 0.5963 | 0.9024 |
| 0.0 | 37.0 | 1036 | 0.6052 | 0.9024 |
| 0.0 | 38.0 | 1064 | 0.6015 | 0.9024 |
| 0.0 | 39.0 | 1092 | 0.6109 | 0.9024 |
| 0.0 | 40.0 | 1120 | 0.6162 | 0.9024 |
| 0.0 | 41.0 | 1148 | 0.6213 | 0.9024 |
| 0.0 | 42.0 | 1176 | 0.6284 | 0.9024 |
| 0.0 | 43.0 | 1204 | 0.6259 | 0.9024 |
| 0.0 | 44.0 | 1232 | 0.6257 | 0.9024 |
| 0.0 | 45.0 | 1260 | 0.6306 | 0.9024 |
| 0.0 | 46.0 | 1288 | 0.6336 | 0.9024 |
| 0.0 | 47.0 | 1316 | 0.6353 | 0.9024 |
| 0.0 | 48.0 | 1344 | 0.6351 | 0.9024 |
| 0.0 | 49.0 | 1372 | 0.6351 | 0.9024 |
| 0.0 | 50.0 | 1400 | 0.6351 | 0.9024 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_5x_deit_base_rms_0001_fold1
|
# hushem_5x_deit_base_rms_0001_fold1
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9961
- Accuracy: 0.7111
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4401 | 1.0 | 27 | 1.3889 | 0.2444 |
| 1.4795 | 2.0 | 54 | 1.6032 | 0.2444 |
| 1.2229 | 3.0 | 81 | 1.1436 | 0.5111 |
| 0.8987 | 4.0 | 108 | 1.0040 | 0.5556 |
| 0.4853 | 5.0 | 135 | 1.0534 | 0.6222 |
| 0.1456 | 6.0 | 162 | 1.8360 | 0.5556 |
| 0.0696 | 7.0 | 189 | 1.2156 | 0.7333 |
| 0.0874 | 8.0 | 216 | 0.7950 | 0.7556 |
| 0.0365 | 9.0 | 243 | 1.6830 | 0.7111 |
| 0.0006 | 10.0 | 270 | 1.6730 | 0.7111 |
| 0.0002 | 11.0 | 297 | 1.6991 | 0.7111 |
| 0.0002 | 12.0 | 324 | 1.7182 | 0.7111 |
| 0.0001 | 13.0 | 351 | 1.7320 | 0.7111 |
| 0.0001 | 14.0 | 378 | 1.7414 | 0.7111 |
| 0.0001 | 15.0 | 405 | 1.7505 | 0.7111 |
| 0.0001 | 16.0 | 432 | 1.7579 | 0.7111 |
| 0.0001 | 17.0 | 459 | 1.7666 | 0.7111 |
| 0.0001 | 18.0 | 486 | 1.7749 | 0.7111 |
| 0.0001 | 19.0 | 513 | 1.7836 | 0.7333 |
| 0.0 | 20.0 | 540 | 1.7919 | 0.7333 |
| 0.0 | 21.0 | 567 | 1.8002 | 0.7111 |
| 0.0 | 22.0 | 594 | 1.8101 | 0.7111 |
| 0.0 | 23.0 | 621 | 1.8191 | 0.7111 |
| 0.0 | 24.0 | 648 | 1.8264 | 0.7111 |
| 0.0 | 25.0 | 675 | 1.8362 | 0.7111 |
| 0.0 | 26.0 | 702 | 1.8441 | 0.7111 |
| 0.0 | 27.0 | 729 | 1.8521 | 0.7111 |
| 0.0 | 28.0 | 756 | 1.8613 | 0.7111 |
| 0.0 | 29.0 | 783 | 1.8701 | 0.7111 |
| 0.0 | 30.0 | 810 | 1.8780 | 0.7111 |
| 0.0 | 31.0 | 837 | 1.8862 | 0.7111 |
| 0.0 | 32.0 | 864 | 1.8953 | 0.7111 |
| 0.0 | 33.0 | 891 | 1.9042 | 0.7111 |
| 0.0 | 34.0 | 918 | 1.9125 | 0.7111 |
| 0.0 | 35.0 | 945 | 1.9206 | 0.7111 |
| 0.0 | 36.0 | 972 | 1.9289 | 0.7111 |
| 0.0 | 37.0 | 999 | 1.9371 | 0.7111 |
| 0.0 | 38.0 | 1026 | 1.9452 | 0.7111 |
| 0.0 | 39.0 | 1053 | 1.9530 | 0.7111 |
| 0.0 | 40.0 | 1080 | 1.9602 | 0.7111 |
| 0.0 | 41.0 | 1107 | 1.9674 | 0.7111 |
| 0.0 | 42.0 | 1134 | 1.9741 | 0.7111 |
| 0.0 | 43.0 | 1161 | 1.9798 | 0.7111 |
| 0.0 | 44.0 | 1188 | 1.9852 | 0.7111 |
| 0.0 | 45.0 | 1215 | 1.9896 | 0.7111 |
| 0.0 | 46.0 | 1242 | 1.9931 | 0.7111 |
| 0.0 | 47.0 | 1269 | 1.9953 | 0.7111 |
| 0.0 | 48.0 | 1296 | 1.9961 | 0.7111 |
| 0.0 | 49.0 | 1323 | 1.9961 | 0.7111 |
| 0.0 | 50.0 | 1350 | 1.9961 | 0.7111 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_5x_deit_base_rms_0001_fold2
|
# hushem_5x_deit_base_rms_0001_fold2
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 5.1764
- Accuracy: 0.5333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4732 | 1.0 | 27 | 1.5871 | 0.2667 |
| 1.4137 | 2.0 | 54 | 1.4271 | 0.2667 |
| 1.462 | 3.0 | 81 | 1.4098 | 0.2667 |
| 1.4423 | 4.0 | 108 | 1.4316 | 0.2444 |
| 1.4677 | 5.0 | 135 | 1.1736 | 0.6 |
| 1.1753 | 6.0 | 162 | 1.3090 | 0.4889 |
| 1.0628 | 7.0 | 189 | 1.1008 | 0.4 |
| 0.8856 | 8.0 | 216 | 1.3194 | 0.4667 |
| 0.7266 | 9.0 | 243 | 1.5517 | 0.4667 |
| 0.7206 | 10.0 | 270 | 1.5964 | 0.4222 |
| 0.6825 | 11.0 | 297 | 1.9511 | 0.5333 |
| 0.6024 | 12.0 | 324 | 1.1289 | 0.5111 |
| 0.7093 | 13.0 | 351 | 1.6051 | 0.4667 |
| 0.5446 | 14.0 | 378 | 1.0604 | 0.5333 |
| 0.4716 | 15.0 | 405 | 2.6293 | 0.5778 |
| 0.4728 | 16.0 | 432 | 3.2908 | 0.4889 |
| 0.5099 | 17.0 | 459 | 2.0246 | 0.5333 |
| 0.4809 | 18.0 | 486 | 3.4545 | 0.5333 |
| 0.3484 | 19.0 | 513 | 2.2451 | 0.5111 |
| 0.352 | 20.0 | 540 | 2.8572 | 0.4889 |
| 0.3258 | 21.0 | 567 | 3.5970 | 0.5556 |
| 0.2785 | 22.0 | 594 | 3.6404 | 0.5556 |
| 0.3005 | 23.0 | 621 | 3.6333 | 0.5111 |
| 0.2089 | 24.0 | 648 | 4.2561 | 0.5333 |
| 0.1996 | 25.0 | 675 | 3.8526 | 0.5111 |
| 0.1044 | 26.0 | 702 | 4.1245 | 0.5333 |
| 0.2042 | 27.0 | 729 | 3.9154 | 0.5556 |
| 0.1371 | 28.0 | 756 | 3.3906 | 0.5556 |
| 0.1014 | 29.0 | 783 | 4.2534 | 0.5556 |
| 0.0761 | 30.0 | 810 | 3.8328 | 0.5778 |
| 0.0321 | 31.0 | 837 | 4.5117 | 0.5556 |
| 0.1194 | 32.0 | 864 | 4.5296 | 0.5333 |
| 0.0072 | 33.0 | 891 | 4.9299 | 0.5333 |
| 0.0276 | 34.0 | 918 | 5.0433 | 0.5111 |
| 0.0121 | 35.0 | 945 | 4.9519 | 0.5333 |
| 0.0051 | 36.0 | 972 | 4.9546 | 0.5333 |
| 0.0001 | 37.0 | 999 | 4.9700 | 0.5111 |
| 0.0001 | 38.0 | 1026 | 4.9962 | 0.5111 |
| 0.0 | 39.0 | 1053 | 5.0319 | 0.5111 |
| 0.0 | 40.0 | 1080 | 5.0566 | 0.5111 |
| 0.0001 | 41.0 | 1107 | 5.0812 | 0.5333 |
| 0.0 | 42.0 | 1134 | 5.1051 | 0.5333 |
| 0.0 | 43.0 | 1161 | 5.1228 | 0.5333 |
| 0.0 | 44.0 | 1188 | 5.1393 | 0.5333 |
| 0.0 | 45.0 | 1215 | 5.1531 | 0.5333 |
| 0.0 | 46.0 | 1242 | 5.1647 | 0.5333 |
| 0.0 | 47.0 | 1269 | 5.1724 | 0.5333 |
| 0.0 | 48.0 | 1296 | 5.1763 | 0.5333 |
| 0.0 | 49.0 | 1323 | 5.1764 | 0.5333 |
| 0.0 | 50.0 | 1350 | 5.1764 | 0.5333 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_5x_deit_base_rms_0001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_base_rms_0001_fold3
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4738
- Accuracy: 0.7907
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4909 | 1.0 | 28 | 1.4982 | 0.2558 |
| 1.0101 | 2.0 | 56 | 0.9495 | 0.6744 |
| 0.4042 | 3.0 | 84 | 0.4221 | 0.7442 |
| 0.1388 | 4.0 | 112 | 0.7310 | 0.7442 |
| 0.1089 | 5.0 | 140 | 1.2245 | 0.8140 |
| 0.0672 | 6.0 | 168 | 0.8270 | 0.7907 |
| 0.0498 | 7.0 | 196 | 0.7672 | 0.7674 |
| 0.0008 | 8.0 | 224 | 1.0591 | 0.7907 |
| 0.0002 | 9.0 | 252 | 1.0196 | 0.8140 |
| 0.0001 | 10.0 | 280 | 1.0272 | 0.8140 |
| 0.0001 | 11.0 | 308 | 1.0400 | 0.8140 |
| 0.0001 | 12.0 | 336 | 1.0507 | 0.8140 |
| 0.0001 | 13.0 | 364 | 1.0623 | 0.7907 |
| 0.0001 | 14.0 | 392 | 1.0738 | 0.7907 |
| 0.0001 | 15.0 | 420 | 1.0809 | 0.7907 |
| 0.0 | 16.0 | 448 | 1.0928 | 0.7907 |
| 0.0 | 17.0 | 476 | 1.1047 | 0.7907 |
| 0.0 | 18.0 | 504 | 1.1192 | 0.7907 |
| 0.0 | 19.0 | 532 | 1.1317 | 0.7907 |
| 0.0 | 20.0 | 560 | 1.1436 | 0.7907 |
| 0.0 | 21.0 | 588 | 1.1568 | 0.7907 |
| 0.0 | 22.0 | 616 | 1.1654 | 0.7907 |
| 0.0 | 23.0 | 644 | 1.1790 | 0.7907 |
| 0.0 | 24.0 | 672 | 1.1944 | 0.7907 |
| 0.0 | 25.0 | 700 | 1.2077 | 0.7907 |
| 0.0 | 26.0 | 728 | 1.2229 | 0.7907 |
| 0.0 | 27.0 | 756 | 1.2379 | 0.7907 |
| 0.0 | 28.0 | 784 | 1.2518 | 0.7907 |
| 0.0 | 29.0 | 812 | 1.2649 | 0.7907 |
| 0.0 | 30.0 | 840 | 1.2762 | 0.7907 |
| 0.0 | 31.0 | 868 | 1.2927 | 0.7907 |
| 0.0 | 32.0 | 896 | 1.3064 | 0.7907 |
| 0.0 | 33.0 | 924 | 1.3200 | 0.7907 |
| 0.0 | 34.0 | 952 | 1.3332 | 0.7907 |
| 0.0 | 35.0 | 980 | 1.3480 | 0.7907 |
| 0.0 | 36.0 | 1008 | 1.3592 | 0.7907 |
| 0.0 | 37.0 | 1036 | 1.3743 | 0.7907 |
| 0.0 | 38.0 | 1064 | 1.3941 | 0.7907 |
| 0.0 | 39.0 | 1092 | 1.4057 | 0.7907 |
| 0.0 | 40.0 | 1120 | 1.4180 | 0.7907 |
| 0.0 | 41.0 | 1148 | 1.4282 | 0.7907 |
| 0.0 | 42.0 | 1176 | 1.4383 | 0.7907 |
| 0.0 | 43.0 | 1204 | 1.4471 | 0.7907 |
| 0.0 | 44.0 | 1232 | 1.4565 | 0.7907 |
| 0.0 | 45.0 | 1260 | 1.4629 | 0.7907 |
| 0.0 | 46.0 | 1288 | 1.4680 | 0.7907 |
| 0.0 | 47.0 | 1316 | 1.4718 | 0.7907 |
| 0.0 | 48.0 | 1344 | 1.4738 | 0.7907 |
| 0.0 | 49.0 | 1372 | 1.4738 | 0.7907 |
| 0.0 | 50.0 | 1400 | 1.4738 | 0.7907 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_5x_deit_base_rms_0001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_base_rms_0001_fold4
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5888
- Accuracy: 0.9048
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6398 | 1.0 | 28 | 1.4620 | 0.2381 |
| 1.4471 | 2.0 | 56 | 1.4867 | 0.2619 |
| 1.4043 | 3.0 | 84 | 1.4639 | 0.2381 |
| 1.6225 | 4.0 | 112 | 1.1986 | 0.4524 |
| 1.0459 | 5.0 | 140 | 1.1310 | 0.4762 |
| 0.7275 | 6.0 | 168 | 0.7753 | 0.6429 |
| 0.4185 | 7.0 | 196 | 0.5503 | 0.7857 |
| 0.2249 | 8.0 | 224 | 0.5491 | 0.8571 |
| 0.0749 | 9.0 | 252 | 0.2650 | 0.9286 |
| 0.0643 | 10.0 | 280 | 0.5070 | 0.8333 |
| 0.083 | 11.0 | 308 | 0.5183 | 0.8810 |
| 0.0258 | 12.0 | 336 | 0.5166 | 0.8571 |
| 0.0004 | 13.0 | 364 | 0.4395 | 0.9524 |
| 0.03 | 14.0 | 392 | 0.5344 | 0.9048 |
| 0.0374 | 15.0 | 420 | 1.0859 | 0.8095 |
| 0.032 | 16.0 | 448 | 0.4372 | 0.9048 |
| 0.0018 | 17.0 | 476 | 0.4691 | 0.9048 |
| 0.0319 | 18.0 | 504 | 0.5620 | 0.8810 |
| 0.022 | 19.0 | 532 | 0.4782 | 0.9048 |
| 0.0002 | 20.0 | 560 | 0.4687 | 0.9048 |
| 0.0001 | 21.0 | 588 | 0.4749 | 0.9048 |
| 0.0001 | 22.0 | 616 | 0.4799 | 0.9048 |
| 0.0001 | 23.0 | 644 | 0.4865 | 0.9048 |
| 0.0001 | 24.0 | 672 | 0.4924 | 0.9048 |
| 0.0001 | 25.0 | 700 | 0.4977 | 0.9048 |
| 0.0001 | 26.0 | 728 | 0.5030 | 0.9048 |
| 0.0 | 27.0 | 756 | 0.5085 | 0.9048 |
| 0.0 | 28.0 | 784 | 0.5132 | 0.9048 |
| 0.0 | 29.0 | 812 | 0.5184 | 0.9048 |
| 0.0 | 30.0 | 840 | 0.5233 | 0.9048 |
| 0.0 | 31.0 | 868 | 0.5283 | 0.9048 |
| 0.0 | 32.0 | 896 | 0.5333 | 0.9048 |
| 0.0 | 33.0 | 924 | 0.5383 | 0.9048 |
| 0.0 | 34.0 | 952 | 0.5430 | 0.9048 |
| 0.0 | 35.0 | 980 | 0.5476 | 0.9048 |
| 0.0 | 36.0 | 1008 | 0.5522 | 0.9048 |
| 0.0 | 37.0 | 1036 | 0.5569 | 0.9048 |
| 0.0 | 38.0 | 1064 | 0.5613 | 0.9048 |
| 0.0 | 39.0 | 1092 | 0.5655 | 0.9048 |
| 0.0 | 40.0 | 1120 | 0.5694 | 0.9048 |
| 0.0 | 41.0 | 1148 | 0.5725 | 0.9048 |
| 0.0 | 42.0 | 1176 | 0.5761 | 0.9048 |
| 0.0 | 43.0 | 1204 | 0.5794 | 0.9048 |
| 0.0 | 44.0 | 1232 | 0.5824 | 0.9048 |
| 0.0 | 45.0 | 1260 | 0.5848 | 0.9048 |
| 0.0 | 46.0 | 1288 | 0.5868 | 0.9048 |
| 0.0 | 47.0 | 1316 | 0.5882 | 0.9048 |
| 0.0 | 48.0 | 1344 | 0.5888 | 0.9048 |
| 0.0 | 49.0 | 1372 | 0.5888 | 0.9048 |
| 0.0 | 50.0 | 1400 | 0.5888 | 0.9048 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
harrytechiz/vit-base-patch16-224-blur_vs_clean
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-blur_vs_clean
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0714
- Accuracy: 0.9754
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
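The `total_train_batch_size: 128` above follows directly from gradient accumulation: parameters are updated once per `gradient_accumulation_steps` forward/backward passes, so the effective batch size is the per-device batch size times the accumulation steps. A one-line check of that relationship:

```python
# Effective batch size under gradient accumulation:
# one optimizer update per `gradient_accumulation_steps` micro-batches.
per_device_batch_size = 32
gradient_accumulation_steps = 4
total_train_batch_size = per_device_batch_size * gradient_accumulation_steps
```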
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0539 | 1.0 | 151 | 0.1078 | 0.9596 |
| 0.0611 | 2.0 | 302 | 0.0846 | 0.9698 |
| 0.049 | 3.0 | 453 | 0.0714 | 0.9754 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
[
"blur",
"clean"
] |
phuong-tk-nguyen/vit-base-patch16-224-newly-trained
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-newly-trained
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1996
- Accuracy: 0.964
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2183 | 0.14 | 10 | 1.6296 | 0.629 |
| 1.4213 | 0.28 | 20 | 0.8637 | 0.899 |
| 0.86 | 0.43 | 30 | 0.4598 | 0.949 |
| 0.614 | 0.57 | 40 | 0.2998 | 0.96 |
| 0.48 | 0.71 | 50 | 0.2337 | 0.967 |
| 0.4123 | 0.85 | 60 | 0.2091 | 0.964 |
| 0.4511 | 0.99 | 70 | 0.1996 | 0.964 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.1
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck"
] |
hkivancoral/hushem_5x_deit_base_rms_0001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_deit_base_rms_0001_fold5
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3118
- Accuracy: 0.8537
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5446 | 1.0 | 28 | 1.3850 | 0.2195 |
| 1.371 | 2.0 | 56 | 1.0037 | 0.4878 |
| 0.7358 | 3.0 | 84 | 0.5519 | 0.7561 |
| 0.2869 | 4.0 | 112 | 0.6592 | 0.7561 |
| 0.1411 | 5.0 | 140 | 0.6324 | 0.8780 |
| 0.0266 | 6.0 | 168 | 0.8126 | 0.8049 |
| 0.0011 | 7.0 | 196 | 0.7003 | 0.8537 |
| 0.0012 | 8.0 | 224 | 1.2708 | 0.8049 |
| 0.0223 | 9.0 | 252 | 0.7784 | 0.8780 |
| 0.0109 | 10.0 | 280 | 1.2289 | 0.7805 |
| 0.0002 | 11.0 | 308 | 0.9688 | 0.8537 |
| 0.03 | 12.0 | 336 | 0.8929 | 0.8537 |
| 0.0037 | 13.0 | 364 | 0.7649 | 0.8537 |
| 0.0119 | 14.0 | 392 | 0.9677 | 0.8049 |
| 0.0001 | 15.0 | 420 | 1.0107 | 0.7805 |
| 0.0001 | 16.0 | 448 | 1.0261 | 0.7805 |
| 0.0001 | 17.0 | 476 | 1.0390 | 0.7805 |
| 0.0001 | 18.0 | 504 | 1.0514 | 0.7805 |
| 0.0001 | 19.0 | 532 | 1.0626 | 0.7805 |
| 0.0 | 20.0 | 560 | 1.0741 | 0.7805 |
| 0.0 | 21.0 | 588 | 1.0847 | 0.7805 |
| 0.0 | 22.0 | 616 | 1.0958 | 0.7805 |
| 0.0 | 23.0 | 644 | 1.1069 | 0.7805 |
| 0.0 | 24.0 | 672 | 1.1169 | 0.7805 |
| 0.0 | 25.0 | 700 | 1.1262 | 0.8049 |
| 0.0 | 26.0 | 728 | 1.1359 | 0.8049 |
| 0.0 | 27.0 | 756 | 1.1455 | 0.8049 |
| 0.0 | 28.0 | 784 | 1.1554 | 0.8049 |
| 0.0 | 29.0 | 812 | 1.1647 | 0.8049 |
| 0.0 | 30.0 | 840 | 1.1746 | 0.8049 |
| 0.0 | 31.0 | 868 | 1.1846 | 0.8049 |
| 0.0 | 32.0 | 896 | 1.1951 | 0.8049 |
| 0.0 | 33.0 | 924 | 1.2053 | 0.8293 |
| 0.0 | 34.0 | 952 | 1.2145 | 0.8293 |
| 0.0 | 35.0 | 980 | 1.2243 | 0.8537 |
| 0.0 | 36.0 | 1008 | 1.2340 | 0.8537 |
| 0.0 | 37.0 | 1036 | 1.2436 | 0.8537 |
| 0.0 | 38.0 | 1064 | 1.2528 | 0.8537 |
| 0.0 | 39.0 | 1092 | 1.2615 | 0.8537 |
| 0.0 | 40.0 | 1120 | 1.2699 | 0.8537 |
| 0.0 | 41.0 | 1148 | 1.2781 | 0.8537 |
| 0.0 | 42.0 | 1176 | 1.2859 | 0.8537 |
| 0.0 | 43.0 | 1204 | 1.2920 | 0.8537 |
| 0.0 | 44.0 | 1232 | 1.2978 | 0.8537 |
| 0.0 | 45.0 | 1260 | 1.3031 | 0.8537 |
| 0.0 | 46.0 | 1288 | 1.3073 | 0.8537 |
| 0.0 | 47.0 | 1316 | 1.3103 | 0.8537 |
| 0.0 | 48.0 | 1344 | 1.3117 | 0.8537 |
| 0.0 | 49.0 | 1372 | 1.3118 | 0.8537 |
| 0.0 | 50.0 | 1400 | 1.3118 | 0.8537 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
phuong-tk-nguyen/swin-base-patch4-window7-224-in22k-newly-trained
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-base-patch4-window7-224-in22k-newly-trained
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-base-patch4-window7-224-in22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1335
- Accuracy: 0.959
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2459 | 0.14 | 10 | 1.7346 | 0.575 |
| 1.4338 | 0.28 | 20 | 0.7222 | 0.841 |
| 0.8059 | 0.43 | 30 | 0.3252 | 0.915 |
| 0.5772 | 0.57 | 40 | 0.2071 | 0.942 |
| 0.5599 | 0.71 | 50 | 0.1553 | 0.958 |
| 0.4473 | 0.85 | 60 | 0.1373 | 0.958 |
| 0.4292 | 0.99 | 70 | 0.1335 | 0.959 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.1
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck"
] |
MaksymDrobchak/vit-base-patch16-224-finetuned-flower
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-flower
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 2.1.0+cu118
- Datasets 2.7.1
- Tokenizers 0.13.3
|
[
"daisy",
"dandelion",
"roses",
"sunflowers",
"tulips"
] |
mansee/swin-tiny-patch4-window7-224-spa_saloon_classification-spa-saloon
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-spa_saloon_classification-spa-saloon
This model is a fine-tuned version of [100rab25/swin-tiny-patch4-window7-224-spa_saloon_classification](https://huggingface.co/100rab25/swin-tiny-patch4-window7-224-spa_saloon_classification) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0971
- Accuracy: 0.9652
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2504 | 0.99 | 20 | 0.1401 | 0.9512 |
| 0.2051 | 1.98 | 40 | 0.1083 | 0.9652 |
| 0.1894 | 2.96 | 60 | 0.0939 | 0.9652 |
| 0.1115 | 4.0 | 81 | 0.0880 | 0.9686 |
| 0.117 | 4.94 | 100 | 0.0971 | 0.9652 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"ambience",
"hair_style",
"manicure",
"massage_room",
"others",
"pedicure"
] |
shubhamWi91/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4049
- Accuracy: 0.8243
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5093 | 0.98 | 37 | 0.4578 | 0.7776 |
| 0.4411 | 1.99 | 75 | 0.4189 | 0.8131 |
| 0.4177 | 2.94 | 111 | 0.4049 | 0.8243 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.5
- Tokenizers 0.14.1
|
[
"cap",
"no_cap"
] |
dima806/card_type_image_detection
|
Returns the card type for a given image, with about 66% accuracy.
See https://www.kaggle.com/code/dima806/card-types-image-detection-vit for more details.
```
Classification report:
precision recall f1-score support
ace of clubs 0.8000 0.9474 0.8675 38
ace of diamonds 0.6604 0.9211 0.7692 38
ace of hearts 0.7727 0.8947 0.8293 38
ace of spades 0.6129 1.0000 0.7600 38
eight of clubs 0.6500 0.3421 0.4483 38
eight of diamonds 0.7500 0.5385 0.6269 39
eight of hearts 0.5000 0.1842 0.2692 38
eight of spades 0.7273 0.2105 0.3265 38
five of clubs 0.8438 0.6923 0.7606 39
five of diamonds 0.7750 0.8158 0.7949 38
five of hearts 0.7949 0.8158 0.8052 38
five of spades 0.7368 0.7368 0.7368 38
four of clubs 0.7333 0.8684 0.7952 38
four of diamonds 0.8571 0.6316 0.7273 38
four of hearts 0.7368 0.7368 0.7368 38
four of spades 0.9000 0.6923 0.7826 39
jack of clubs 0.7037 0.5000 0.5846 38
jack of diamonds 0.5806 0.4737 0.5217 38
jack of hearts 0.8889 0.2105 0.3404 38
jack of spades 0.4000 0.2051 0.2712 39
joker 0.9487 0.9737 0.9610 38
king of clubs 0.3721 0.8421 0.5161 38
king of diamonds 0.4865 0.9474 0.6429 38
king of hearts 0.5472 0.7436 0.6304 39
king of spades 0.4203 0.7632 0.5421 38
nine of clubs 0.5909 0.6842 0.6341 38
nine of diamonds 0.8095 0.4474 0.5763 38
nine of hearts 0.5455 0.6154 0.5783 39
nine of spades 0.4615 0.7895 0.5825 38
queen of clubs 0.2727 0.1538 0.1967 39
queen of diamonds 0.6250 0.1282 0.2128 39
queen of hearts 0.6216 0.6053 0.6133 38
queen of spades 0.7353 0.6579 0.6944 38
seven of clubs 0.5333 0.6316 0.5783 38
seven of diamonds 0.3571 0.3947 0.3750 38
seven of hearts 0.7143 0.7895 0.7500 38
seven of spades 0.7742 0.6316 0.6957 38
six of clubs 0.7368 0.7179 0.7273 39
six of diamonds 0.4462 0.7632 0.5631 38
six of hearts 0.8462 0.5789 0.6875 38
six of spades 0.7879 0.6842 0.7324 38
ten of clubs 0.8889 0.6316 0.7385 38
ten of diamonds 0.6136 0.7105 0.6585 38
ten of hearts 0.7021 0.8684 0.7765 38
ten of spades 0.8529 0.7632 0.8056 38
three of clubs 0.7561 0.7949 0.7750 39
three of diamonds 0.7419 0.6053 0.6667 38
three of hearts 0.7273 0.8205 0.7711 39
three of spades 0.6744 0.7632 0.7160 38
two of clubs 0.7179 0.7368 0.7273 38
two of diamonds 0.7667 0.6053 0.6765 38
two of hearts 0.7647 0.6842 0.7222 38
two of spades 0.7949 0.8158 0.8052 38
accuracy 0.6553 2025
macro avg 0.6804 0.6559 0.6431 2025
weighted avg 0.6802 0.6553 0.6427 2025
```
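As a quick sanity check (not part of the original card), any per-class F1 in the report above can be recomputed from the reported precision and recall via the standard harmonic-mean formula:

```python
# F1 is the harmonic mean of precision and recall: F1 = 2PR / (P + R).
def f1(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

ace_of_clubs = f1(0.8000, 0.9474)   # matches the 0.8675 reported above
```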
|
[
"ace of clubs",
"ace of diamonds",
"ace of hearts",
"ace of spades",
"eight of clubs",
"eight of diamonds",
"eight of hearts",
"eight of spades",
"five of clubs",
"five of diamonds",
"five of hearts",
"five of spades",
"four of clubs",
"four of diamonds",
"four of hearts",
"four of spades",
"jack of clubs",
"jack of diamonds",
"jack of hearts",
"jack of spades",
"joker",
"king of clubs",
"king of diamonds",
"king of hearts",
"king of spades",
"nine of clubs",
"nine of diamonds",
"nine of hearts",
"nine of spades",
"queen of clubs",
"queen of diamonds",
"queen of hearts",
"queen of spades",
"seven of clubs",
"seven of diamonds",
"seven of hearts",
"seven of spades",
"six of clubs",
"six of diamonds",
"six of hearts",
"six of spades",
"ten of clubs",
"ten of diamonds",
"ten of hearts",
"ten of spades",
"three of clubs",
"three of diamonds",
"three of hearts",
"three of spades",
"two of clubs",
"two of diamonds",
"two of hearts",
"two of spades"
] |
Zendel/my_awesome_food_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5709
- Accuracy: 0.918
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7167 | 0.99 | 62 | 2.5215 | 0.862 |
| 1.8648 | 2.0 | 125 | 1.7438 | 0.891 |
| 1.6405 | 2.98 | 186 | 1.5709 | 0.918 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
Artef/results
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [umm-maybe/AI-image-detector](https://huggingface.co/umm-maybe/AI-image-detector) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.1365
- eval_accuracy: 0.9612
- eval_runtime: 20.7986
- eval_samples_per_second: 28.511
- eval_steps_per_second: 3.606
- epoch: 1.86
- step: 550
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Tokenizers 0.15.2
|
[
"artificial",
"human"
] |
hkivancoral/hushem_5x_beit_base_adamax_001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_5x_beit_base_adamax_001_fold1
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 6.2787
- Accuracy: 0.4444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4418 | 1.0 | 27 | 1.4079 | 0.2444 |
| 1.2606 | 2.0 | 54 | 1.4566 | 0.4 |
| 1.1141 | 3.0 | 81 | 1.4147 | 0.3111 |
| 0.9738 | 4.0 | 108 | 1.7371 | 0.3556 |
| 0.7887 | 5.0 | 135 | 1.5516 | 0.3778 |
| 0.7198 | 6.0 | 162 | 1.3626 | 0.4 |
| 0.8269 | 7.0 | 189 | 1.5448 | 0.3778 |
| 0.8171 | 8.0 | 216 | 1.4576 | 0.4 |
| 0.7255 | 9.0 | 243 | 2.3915 | 0.3778 |
| 0.6369 | 10.0 | 270 | 1.6627 | 0.3778 |
| 0.6809 | 11.0 | 297 | 1.5201 | 0.3556 |
| 0.6237 | 12.0 | 324 | 1.3289 | 0.4222 |
| 0.6768 | 13.0 | 351 | 1.6115 | 0.3556 |
| 0.6336 | 14.0 | 378 | 2.0397 | 0.3778 |
| 0.5238 | 15.0 | 405 | 1.5857 | 0.3778 |
| 0.5016 | 16.0 | 432 | 1.4047 | 0.4444 |
| 0.4321 | 17.0 | 459 | 2.2039 | 0.3556 |
| 0.4791 | 18.0 | 486 | 2.3823 | 0.3778 |
| 0.484 | 19.0 | 513 | 1.4706 | 0.4222 |
| 0.4812 | 20.0 | 540 | 1.6485 | 0.4222 |
| 0.4413 | 21.0 | 567 | 1.7092 | 0.4 |
| 0.4306 | 22.0 | 594 | 1.8582 | 0.4 |
| 0.37 | 23.0 | 621 | 1.8653 | 0.3778 |
| 0.3048 | 24.0 | 648 | 1.6342 | 0.4444 |
| 0.3515 | 25.0 | 675 | 1.5211 | 0.4889 |
| 0.3558 | 26.0 | 702 | 1.9714 | 0.4222 |
| 0.2599 | 27.0 | 729 | 1.7243 | 0.4667 |
| 0.267 | 28.0 | 756 | 1.7049 | 0.5111 |
| 0.2625 | 29.0 | 783 | 2.1704 | 0.4222 |
| 0.2368 | 30.0 | 810 | 2.2942 | 0.4667 |
| 0.2036 | 31.0 | 837 | 2.0691 | 0.4667 |
| 0.1938 | 32.0 | 864 | 2.7340 | 0.4 |
| 0.1597 | 33.0 | 891 | 3.0661 | 0.4 |
| 0.1166 | 34.0 | 918 | 2.8536 | 0.4667 |
| 0.1248 | 35.0 | 945 | 2.9508 | 0.4444 |
| 0.121 | 36.0 | 972 | 3.2153 | 0.4667 |
| 0.0801 | 37.0 | 999 | 3.0021 | 0.4222 |
| 0.0529 | 38.0 | 1026 | 3.3247 | 0.4222 |
| 0.0434 | 39.0 | 1053 | 4.0394 | 0.4667 |
| 0.0599 | 40.0 | 1080 | 4.1062 | 0.4889 |
| 0.0437 | 41.0 | 1107 | 5.3485 | 0.4667 |
| 0.0045 | 42.0 | 1134 | 5.3122 | 0.4667 |
| 0.0368 | 43.0 | 1161 | 5.1937 | 0.4667 |
| 0.0032 | 44.0 | 1188 | 5.6803 | 0.4889 |
| 0.0061 | 45.0 | 1215 | 5.8620 | 0.4444 |
| 0.0035 | 46.0 | 1242 | 5.9016 | 0.4889 |
| 0.0011 | 47.0 | 1269 | 6.3136 | 0.4444 |
| 0.0277 | 48.0 | 1296 | 6.2816 | 0.4444 |
| 0.0067 | 49.0 | 1323 | 6.2787 | 0.4444 |
| 0.0372 | 50.0 | 1350 | 6.2787 | 0.4444 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_1x_beit_base_adamax_001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_1x_beit_base_adamax_001_fold1
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6981
- Accuracy: 0.4444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 1.5800 | 0.2444 |
| 2.0893 | 2.0 | 12 | 1.3869 | 0.3333 |
| 2.0893 | 3.0 | 18 | 1.3893 | 0.2444 |
| 1.4148 | 4.0 | 24 | 1.3366 | 0.3333 |
| 1.3117 | 5.0 | 30 | 1.3938 | 0.2889 |
| 1.3117 | 6.0 | 36 | 1.5221 | 0.3778 |
| 1.2096 | 7.0 | 42 | 1.7519 | 0.4222 |
| 1.2096 | 8.0 | 48 | 1.6213 | 0.2444 |
| 1.1162 | 9.0 | 54 | 1.4721 | 0.2889 |
| 1.0871 | 10.0 | 60 | 1.3748 | 0.3333 |
| 1.0871 | 11.0 | 66 | 1.7274 | 0.4667 |
| 1.0753 | 12.0 | 72 | 3.1976 | 0.3333 |
| 1.0753 | 13.0 | 78 | 1.3693 | 0.4222 |
| 1.2635 | 14.0 | 84 | 1.5090 | 0.3778 |
| 0.9248 | 15.0 | 90 | 1.4886 | 0.5333 |
| 0.9248 | 16.0 | 96 | 1.4765 | 0.4444 |
| 0.8798 | 17.0 | 102 | 1.9348 | 0.4222 |
| 0.8798 | 18.0 | 108 | 1.3064 | 0.4667 |
| 0.8666 | 19.0 | 114 | 1.5832 | 0.4444 |
| 0.7171 | 20.0 | 120 | 2.1360 | 0.4444 |
| 0.7171 | 21.0 | 126 | 1.7636 | 0.4444 |
| 0.7588 | 22.0 | 132 | 2.3529 | 0.3556 |
| 0.7588 | 23.0 | 138 | 2.7880 | 0.3556 |
| 0.6002 | 24.0 | 144 | 1.8764 | 0.4222 |
| 0.5204 | 25.0 | 150 | 2.9921 | 0.4 |
| 0.5204 | 26.0 | 156 | 2.6311 | 0.4444 |
| 0.4748 | 27.0 | 162 | 2.1490 | 0.4889 |
| 0.4748 | 28.0 | 168 | 2.4874 | 0.4889 |
| 0.4423 | 29.0 | 174 | 1.9273 | 0.4444 |
| 0.3826 | 30.0 | 180 | 3.0375 | 0.4222 |
| 0.3826 | 31.0 | 186 | 3.0775 | 0.4667 |
| 0.3486 | 32.0 | 192 | 2.5400 | 0.4 |
| 0.3486 | 33.0 | 198 | 3.1424 | 0.4444 |
| 0.3116 | 34.0 | 204 | 2.9144 | 0.4667 |
| 0.2168 | 35.0 | 210 | 3.3792 | 0.4444 |
| 0.2168 | 36.0 | 216 | 3.7895 | 0.4667 |
| 0.2383 | 37.0 | 222 | 3.1800 | 0.4889 |
| 0.2383 | 38.0 | 228 | 3.3532 | 0.4444 |
| 0.1463 | 39.0 | 234 | 3.6524 | 0.4222 |
| 0.1584 | 40.0 | 240 | 3.6346 | 0.4444 |
| 0.1584 | 41.0 | 246 | 3.6838 | 0.4444 |
| 0.1431 | 42.0 | 252 | 3.6981 | 0.4444 |
| 0.1431 | 43.0 | 258 | 3.6981 | 0.4444 |
| 0.1356 | 44.0 | 264 | 3.6981 | 0.4444 |
| 0.139 | 45.0 | 270 | 3.6981 | 0.4444 |
| 0.139 | 46.0 | 276 | 3.6981 | 0.4444 |
| 0.1502 | 47.0 | 282 | 3.6981 | 0.4444 |
| 0.1502 | 48.0 | 288 | 3.6981 | 0.4444 |
| 0.128 | 49.0 | 294 | 3.6981 | 0.4444 |
| 0.1474 | 50.0 | 300 | 3.6981 | 0.4444 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
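The class list above is in index order, so it doubles as the model's `id2label` mapping. A minimal sketch of decoding a predicted class index, assuming this four-class ordering:

```python
# Hypothetical decoding helper; assumes the four-class ordering listed above.
labels = ["01_normal", "02_tapered", "03_pyriform", "04_amorphous"]
id2label = {i: name for i, name in enumerate(labels)}

def decode(index: int) -> str:
    """Return the label name for a predicted class index (e.g. an argmax over logits)."""
    return id2label[index]

print(decode(2))  # "03_pyriform"
```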
Sharon8y/my_hotdog_model
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_hotdog_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5346
- Accuracy: 0.81
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
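The `total_train_batch_size` of 128 follows from gradient accumulation: each optimizer step aggregates gradients over 4 forward passes of 32 samples. Plain arithmetic, shown here only to make the relationship explicit:

```python
train_batch_size = 32
gradient_accumulation_steps = 4

# Effective batch size per optimizer update.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 128
```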
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.95 | 9 | 2.1083 | 0.5967 |
| 2.2301 | 2.0 | 19 | 1.8377 | 0.7067 |
| 1.9275 | 2.95 | 28 | 1.6582 | 0.78 |
| 1.6897 | 4.0 | 38 | 1.5653 | 0.79 |
| 1.5374 | 4.74 | 45 | 1.5346 | 0.81 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"baked potato",
"burger",
"crispy chicken",
"donut",
"fries",
"hot dog",
"pizza",
"sandwich",
"taco",
"taquito"
] |
hkivancoral/hushem_1x_beit_base_adamax_001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_1x_beit_base_adamax_001_fold2
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3645
- Accuracy: 0.5556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 1.4123 | 0.2444 |
| 1.8567 | 2.0 | 12 | 1.3969 | 0.2444 |
| 1.8567 | 3.0 | 18 | 1.3773 | 0.4 |
| 1.4001 | 4.0 | 24 | 1.3688 | 0.3778 |
| 1.3691 | 5.0 | 30 | 1.3640 | 0.2444 |
| 1.3691 | 6.0 | 36 | 1.2556 | 0.5111 |
| 1.3116 | 7.0 | 42 | 1.4009 | 0.2667 |
| 1.3116 | 8.0 | 48 | 1.2324 | 0.4222 |
| 1.1799 | 9.0 | 54 | 1.1289 | 0.5111 |
| 1.1098 | 10.0 | 60 | 1.5348 | 0.2667 |
| 1.1098 | 11.0 | 66 | 1.2341 | 0.4222 |
| 1.0933 | 12.0 | 72 | 1.3191 | 0.4667 |
| 1.0933 | 13.0 | 78 | 1.3567 | 0.4 |
| 0.986 | 14.0 | 84 | 1.1728 | 0.3778 |
| 0.9075 | 15.0 | 90 | 1.1993 | 0.5111 |
| 0.9075 | 16.0 | 96 | 1.1869 | 0.3556 |
| 0.8205 | 17.0 | 102 | 1.3241 | 0.5333 |
| 0.8205 | 18.0 | 108 | 1.2073 | 0.5333 |
| 0.9036 | 19.0 | 114 | 1.2788 | 0.4889 |
| 0.7712 | 20.0 | 120 | 1.2208 | 0.4667 |
| 0.7712 | 21.0 | 126 | 1.2263 | 0.5333 |
| 0.6949 | 22.0 | 132 | 1.1609 | 0.4889 |
| 0.6949 | 23.0 | 138 | 1.1919 | 0.4222 |
| 0.7053 | 24.0 | 144 | 1.2190 | 0.5111 |
| 0.6439 | 25.0 | 150 | 1.2569 | 0.5556 |
| 0.6439 | 26.0 | 156 | 1.3636 | 0.5333 |
| 0.6537 | 27.0 | 162 | 1.4293 | 0.5778 |
| 0.6537 | 28.0 | 168 | 1.2396 | 0.5111 |
| 0.6181 | 29.0 | 174 | 1.3037 | 0.5556 |
| 0.5097 | 30.0 | 180 | 1.3049 | 0.5778 |
| 0.5097 | 31.0 | 186 | 1.1406 | 0.5333 |
| 0.5782 | 32.0 | 192 | 1.2396 | 0.5333 |
| 0.5782 | 33.0 | 198 | 1.2877 | 0.5111 |
| 0.5897 | 34.0 | 204 | 1.3944 | 0.5778 |
| 0.4972 | 35.0 | 210 | 1.2439 | 0.5556 |
| 0.4972 | 36.0 | 216 | 1.2993 | 0.5556 |
| 0.4729 | 37.0 | 222 | 1.3034 | 0.5556 |
| 0.4729 | 38.0 | 228 | 1.3631 | 0.5556 |
| 0.3719 | 39.0 | 234 | 1.4220 | 0.5778 |
| 0.4329 | 40.0 | 240 | 1.3836 | 0.5111 |
| 0.4329 | 41.0 | 246 | 1.3661 | 0.5556 |
| 0.3819 | 42.0 | 252 | 1.3645 | 0.5556 |
| 0.3819 | 43.0 | 258 | 1.3645 | 0.5556 |
| 0.3664 | 44.0 | 264 | 1.3645 | 0.5556 |
| 0.4152 | 45.0 | 270 | 1.3645 | 0.5556 |
| 0.4152 | 46.0 | 276 | 1.3645 | 0.5556 |
| 0.3637 | 47.0 | 282 | 1.3645 | 0.5556 |
| 0.3637 | 48.0 | 288 | 1.3645 | 0.5556 |
| 0.394 | 49.0 | 294 | 1.3645 | 0.5556 |
| 0.3776 | 50.0 | 300 | 1.3645 | 0.5556 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_1x_beit_base_adamax_001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_1x_beit_base_adamax_001_fold3
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8097
- Accuracy: 0.5349
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 1.4381 | 0.2326 |
| 2.0527 | 2.0 | 12 | 1.4022 | 0.2558 |
| 2.0527 | 3.0 | 18 | 1.3682 | 0.3256 |
| 1.3782 | 4.0 | 24 | 1.3387 | 0.3953 |
| 1.2679 | 5.0 | 30 | 1.3721 | 0.3256 |
| 1.2679 | 6.0 | 36 | 1.7451 | 0.3488 |
| 1.2756 | 7.0 | 42 | 1.3183 | 0.3953 |
| 1.2756 | 8.0 | 48 | 1.4225 | 0.3023 |
| 1.173 | 9.0 | 54 | 1.4215 | 0.3953 |
| 1.1959 | 10.0 | 60 | 1.4072 | 0.3721 |
| 1.1959 | 11.0 | 66 | 1.4852 | 0.4186 |
| 1.1344 | 12.0 | 72 | 1.4523 | 0.2791 |
| 1.1344 | 13.0 | 78 | 1.4043 | 0.4651 |
| 1.0854 | 14.0 | 84 | 1.3638 | 0.3953 |
| 1.1124 | 15.0 | 90 | 1.4323 | 0.3953 |
| 1.1124 | 16.0 | 96 | 1.4664 | 0.4884 |
| 1.0108 | 17.0 | 102 | 1.5473 | 0.3721 |
| 1.0108 | 18.0 | 108 | 1.2300 | 0.4651 |
| 0.9443 | 19.0 | 114 | 1.2523 | 0.4419 |
| 0.9125 | 20.0 | 120 | 1.4134 | 0.3721 |
| 0.9125 | 21.0 | 126 | 1.1280 | 0.4884 |
| 0.8328 | 22.0 | 132 | 1.1054 | 0.4884 |
| 0.8328 | 23.0 | 138 | 1.6081 | 0.4419 |
| 0.7565 | 24.0 | 144 | 1.0331 | 0.5349 |
| 0.7135 | 25.0 | 150 | 1.6384 | 0.5116 |
| 0.7135 | 26.0 | 156 | 1.9524 | 0.4651 |
| 0.7048 | 27.0 | 162 | 1.1399 | 0.5349 |
| 0.7048 | 28.0 | 168 | 1.0504 | 0.5581 |
| 0.7074 | 29.0 | 174 | 1.0452 | 0.5581 |
| 0.7008 | 30.0 | 180 | 1.4757 | 0.5581 |
| 0.7008 | 31.0 | 186 | 1.0663 | 0.4419 |
| 0.5976 | 32.0 | 192 | 1.0991 | 0.5349 |
| 0.5976 | 33.0 | 198 | 1.5330 | 0.5814 |
| 0.5565 | 34.0 | 204 | 1.1511 | 0.5349 |
| 0.458 | 35.0 | 210 | 1.5836 | 0.5349 |
| 0.458 | 36.0 | 216 | 1.4225 | 0.5581 |
| 0.5542 | 37.0 | 222 | 1.4182 | 0.6047 |
| 0.5542 | 38.0 | 228 | 1.3407 | 0.5581 |
| 0.3706 | 39.0 | 234 | 1.4368 | 0.5581 |
| 0.3087 | 40.0 | 240 | 1.6899 | 0.5814 |
| 0.3087 | 41.0 | 246 | 1.8110 | 0.5116 |
| 0.3001 | 42.0 | 252 | 1.8097 | 0.5349 |
| 0.3001 | 43.0 | 258 | 1.8097 | 0.5349 |
| 0.3061 | 44.0 | 264 | 1.8097 | 0.5349 |
| 0.2986 | 45.0 | 270 | 1.8097 | 0.5349 |
| 0.2986 | 46.0 | 276 | 1.8097 | 0.5349 |
| 0.2791 | 47.0 | 282 | 1.8097 | 0.5349 |
| 0.2791 | 48.0 | 288 | 1.8097 | 0.5349 |
| 0.2908 | 49.0 | 294 | 1.8097 | 0.5349 |
| 0.2986 | 50.0 | 300 | 1.8097 | 0.5349 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_1x_beit_base_adamax_001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_1x_beit_base_adamax_001_fold4
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3503
- Accuracy: 0.4524
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 1.4229 | 0.2381 |
| 2.0151 | 2.0 | 12 | 1.3893 | 0.2619 |
| 2.0151 | 3.0 | 18 | 1.3408 | 0.3333 |
| 1.3963 | 4.0 | 24 | 1.3326 | 0.3095 |
| 1.3169 | 5.0 | 30 | 1.2412 | 0.4762 |
| 1.3169 | 6.0 | 36 | 1.0247 | 0.5476 |
| 1.2588 | 7.0 | 42 | 1.2101 | 0.3571 |
| 1.2588 | 8.0 | 48 | 1.0013 | 0.5238 |
| 1.1685 | 9.0 | 54 | 1.3288 | 0.4524 |
| 1.1624 | 10.0 | 60 | 1.0173 | 0.5 |
| 1.1624 | 11.0 | 66 | 1.2213 | 0.4762 |
| 1.163 | 12.0 | 72 | 1.3131 | 0.4286 |
| 1.163 | 13.0 | 78 | 1.0794 | 0.5238 |
| 1.0128 | 14.0 | 84 | 1.2744 | 0.3810 |
| 1.1156 | 15.0 | 90 | 1.2253 | 0.5 |
| 1.1156 | 16.0 | 96 | 1.2674 | 0.4048 |
| 0.9374 | 17.0 | 102 | 1.1623 | 0.4524 |
| 0.9374 | 18.0 | 108 | 1.5694 | 0.4048 |
| 0.9149 | 19.0 | 114 | 1.0570 | 0.5476 |
| 0.912 | 20.0 | 120 | 1.2919 | 0.4286 |
| 0.912 | 21.0 | 126 | 1.4307 | 0.5 |
| 0.6869 | 22.0 | 132 | 1.5771 | 0.5238 |
| 0.6869 | 23.0 | 138 | 2.1692 | 0.3571 |
| 0.6883 | 24.0 | 144 | 1.5822 | 0.5714 |
| 0.7288 | 25.0 | 150 | 2.0687 | 0.4524 |
| 0.7288 | 26.0 | 156 | 2.1992 | 0.4524 |
| 0.4823 | 27.0 | 162 | 2.2715 | 0.5238 |
| 0.4823 | 28.0 | 168 | 3.3968 | 0.4286 |
| 0.4173 | 29.0 | 174 | 2.2538 | 0.5476 |
| 0.4253 | 30.0 | 180 | 3.6242 | 0.3810 |
| 0.4253 | 31.0 | 186 | 2.4386 | 0.5952 |
| 0.3088 | 32.0 | 192 | 3.2728 | 0.4762 |
| 0.3088 | 33.0 | 198 | 3.5241 | 0.5476 |
| 0.1666 | 34.0 | 204 | 3.5230 | 0.5 |
| 0.2645 | 35.0 | 210 | 3.7888 | 0.4286 |
| 0.2645 | 36.0 | 216 | 4.2240 | 0.5238 |
| 0.1416 | 37.0 | 222 | 4.2393 | 0.5 |
| 0.1416 | 38.0 | 228 | 4.0612 | 0.4762 |
| 0.1169 | 39.0 | 234 | 4.3686 | 0.4524 |
| 0.0781 | 40.0 | 240 | 4.2437 | 0.4762 |
| 0.0781 | 41.0 | 246 | 4.2703 | 0.4286 |
| 0.06 | 42.0 | 252 | 4.3503 | 0.4524 |
| 0.06 | 43.0 | 258 | 4.3503 | 0.4524 |
| 0.0264 | 44.0 | 264 | 4.3503 | 0.4524 |
| 0.1093 | 45.0 | 270 | 4.3503 | 0.4524 |
| 0.1093 | 46.0 | 276 | 4.3503 | 0.4524 |
| 0.0479 | 47.0 | 282 | 4.3503 | 0.4524 |
| 0.0479 | 48.0 | 288 | 4.3503 | 0.4524 |
| 0.0488 | 49.0 | 294 | 4.3503 | 0.4524 |
| 0.0619 | 50.0 | 300 | 4.3503 | 0.4524 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_1x_beit_base_adamax_001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_1x_beit_base_adamax_001_fold5
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2278
- Accuracy: 0.7317
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 1.4268 | 0.2439 |
| 1.7859 | 2.0 | 12 | 1.3982 | 0.2439 |
| 1.7859 | 3.0 | 18 | 1.3119 | 0.4878 |
| 1.3869 | 4.0 | 24 | 1.2627 | 0.4146 |
| 1.329 | 5.0 | 30 | 1.0564 | 0.5610 |
| 1.329 | 6.0 | 36 | 1.2486 | 0.2927 |
| 1.2971 | 7.0 | 42 | 1.2260 | 0.3415 |
| 1.2971 | 8.0 | 48 | 1.1669 | 0.5122 |
| 1.2043 | 9.0 | 54 | 1.2078 | 0.4390 |
| 1.166 | 10.0 | 60 | 1.1291 | 0.4390 |
| 1.166 | 11.0 | 66 | 1.4793 | 0.2683 |
| 1.2368 | 12.0 | 72 | 1.1712 | 0.4390 |
| 1.2368 | 13.0 | 78 | 1.1600 | 0.4146 |
| 1.0841 | 14.0 | 84 | 1.1286 | 0.4146 |
| 1.1358 | 15.0 | 90 | 1.0309 | 0.4878 |
| 1.1358 | 16.0 | 96 | 1.0536 | 0.3902 |
| 1.0304 | 17.0 | 102 | 0.9535 | 0.4878 |
| 1.0304 | 18.0 | 108 | 1.1738 | 0.3659 |
| 0.9971 | 19.0 | 114 | 0.9220 | 0.5122 |
| 0.9482 | 20.0 | 120 | 1.0234 | 0.6829 |
| 0.9482 | 21.0 | 126 | 1.0465 | 0.5366 |
| 0.9578 | 22.0 | 132 | 1.0713 | 0.5854 |
| 0.9578 | 23.0 | 138 | 1.1190 | 0.5122 |
| 1.0032 | 24.0 | 144 | 1.0303 | 0.6341 |
| 0.9765 | 25.0 | 150 | 0.9143 | 0.6098 |
| 0.9765 | 26.0 | 156 | 0.9675 | 0.6098 |
| 0.8768 | 27.0 | 162 | 0.8561 | 0.6341 |
| 0.8768 | 28.0 | 168 | 1.0406 | 0.4878 |
| 0.813 | 29.0 | 174 | 1.2443 | 0.6098 |
| 0.8566 | 30.0 | 180 | 0.8255 | 0.6341 |
| 0.8566 | 31.0 | 186 | 0.8471 | 0.6829 |
| 0.7675 | 32.0 | 192 | 0.9851 | 0.6829 |
| 0.7675 | 33.0 | 198 | 1.1042 | 0.6829 |
| 0.7167 | 34.0 | 204 | 1.0172 | 0.6829 |
| 0.6799 | 35.0 | 210 | 1.1228 | 0.5366 |
| 0.6799 | 36.0 | 216 | 1.1880 | 0.7317 |
| 0.6558 | 37.0 | 222 | 1.1922 | 0.7317 |
| 0.6558 | 38.0 | 228 | 1.4663 | 0.6585 |
| 0.5997 | 39.0 | 234 | 1.0459 | 0.7317 |
| 0.579 | 40.0 | 240 | 1.1555 | 0.7073 |
| 0.579 | 41.0 | 246 | 1.1889 | 0.7073 |
| 0.5728 | 42.0 | 252 | 1.2278 | 0.7317 |
| 0.5728 | 43.0 | 258 | 1.2278 | 0.7317 |
| 0.5177 | 44.0 | 264 | 1.2278 | 0.7317 |
| 0.5591 | 45.0 | 270 | 1.2278 | 0.7317 |
| 0.5591 | 46.0 | 276 | 1.2278 | 0.7317 |
| 0.5528 | 47.0 | 282 | 1.2278 | 0.7317 |
| 0.5528 | 48.0 | 288 | 1.2278 | 0.7317 |
| 0.575 | 49.0 | 294 | 1.2278 | 0.7317 |
| 0.5528 | 50.0 | 300 | 1.2278 | 0.7317 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_1x_beit_base_adamax_0001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_1x_beit_base_adamax_0001_fold1
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1942
- Accuracy: 0.7333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 1.1737 | 0.5111 |
| 1.2996 | 2.0 | 12 | 0.6731 | 0.7111 |
| 1.2996 | 3.0 | 18 | 0.5816 | 0.7778 |
| 0.3034 | 4.0 | 24 | 0.5950 | 0.7778 |
| 0.0484 | 5.0 | 30 | 0.7873 | 0.7333 |
| 0.0484 | 6.0 | 36 | 0.7472 | 0.7556 |
| 0.0106 | 7.0 | 42 | 0.8528 | 0.8 |
| 0.0106 | 8.0 | 48 | 0.7211 | 0.7778 |
| 0.0205 | 9.0 | 54 | 0.6347 | 0.7778 |
| 0.0012 | 10.0 | 60 | 0.6115 | 0.8 |
| 0.0012 | 11.0 | 66 | 0.6050 | 0.8222 |
| 0.0005 | 12.0 | 72 | 0.6253 | 0.8222 |
| 0.0005 | 13.0 | 78 | 0.7723 | 0.8 |
| 0.0021 | 14.0 | 84 | 0.9287 | 0.8 |
| 0.0003 | 15.0 | 90 | 1.0136 | 0.7778 |
| 0.0003 | 16.0 | 96 | 0.9985 | 0.7778 |
| 0.0004 | 17.0 | 102 | 0.9348 | 0.7778 |
| 0.0004 | 18.0 | 108 | 0.8985 | 0.8 |
| 0.0003 | 19.0 | 114 | 0.8733 | 0.8222 |
| 0.0009 | 20.0 | 120 | 0.8790 | 0.8222 |
| 0.0009 | 21.0 | 126 | 1.1330 | 0.7778 |
| 0.0002 | 22.0 | 132 | 1.2620 | 0.7556 |
| 0.0002 | 23.0 | 138 | 1.3184 | 0.7556 |
| 0.0003 | 24.0 | 144 | 1.3104 | 0.7778 |
| 0.0003 | 25.0 | 150 | 1.2554 | 0.7556 |
| 0.0003 | 26.0 | 156 | 1.2162 | 0.7556 |
| 0.0002 | 27.0 | 162 | 1.1923 | 0.7333 |
| 0.0002 | 28.0 | 168 | 1.1869 | 0.7333 |
| 0.0002 | 29.0 | 174 | 1.1546 | 0.7333 |
| 0.0002 | 30.0 | 180 | 1.1302 | 0.7556 |
| 0.0002 | 31.0 | 186 | 1.1214 | 0.7556 |
| 0.0003 | 32.0 | 192 | 1.1205 | 0.7556 |
| 0.0003 | 33.0 | 198 | 1.1222 | 0.7556 |
| 0.0018 | 34.0 | 204 | 1.1316 | 0.7556 |
| 0.0004 | 35.0 | 210 | 1.1630 | 0.7556 |
| 0.0004 | 36.0 | 216 | 1.1838 | 0.7333 |
| 0.0002 | 37.0 | 222 | 1.1946 | 0.7333 |
| 0.0002 | 38.0 | 228 | 1.1949 | 0.7333 |
| 0.0004 | 39.0 | 234 | 1.1930 | 0.7333 |
| 0.0002 | 40.0 | 240 | 1.1932 | 0.7333 |
| 0.0002 | 41.0 | 246 | 1.1940 | 0.7333 |
| 0.0002 | 42.0 | 252 | 1.1942 | 0.7333 |
| 0.0002 | 43.0 | 258 | 1.1942 | 0.7333 |
| 0.0002 | 44.0 | 264 | 1.1942 | 0.7333 |
| 0.0002 | 45.0 | 270 | 1.1942 | 0.7333 |
| 0.0002 | 46.0 | 276 | 1.1942 | 0.7333 |
| 0.0002 | 47.0 | 282 | 1.1942 | 0.7333 |
| 0.0002 | 48.0 | 288 | 1.1942 | 0.7333 |
| 0.0003 | 49.0 | 294 | 1.1942 | 0.7333 |
| 0.0001 | 50.0 | 300 | 1.1942 | 0.7333 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_1x_beit_base_adamax_0001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_1x_beit_base_adamax_0001_fold2
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0153
- Accuracy: 0.8
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 1.2331 | 0.5556 |
| 1.3348 | 2.0 | 12 | 0.8218 | 0.6889 |
| 1.3348 | 3.0 | 18 | 0.6484 | 0.7556 |
| 0.3555 | 4.0 | 24 | 0.8513 | 0.7556 |
| 0.1239 | 5.0 | 30 | 0.7326 | 0.7333 |
| 0.1239 | 6.0 | 36 | 0.6190 | 0.8 |
| 0.0625 | 7.0 | 42 | 1.0407 | 0.7333 |
| 0.0625 | 8.0 | 48 | 0.7902 | 0.8 |
| 0.0045 | 9.0 | 54 | 0.8103 | 0.7778 |
| 0.0021 | 10.0 | 60 | 1.0314 | 0.8 |
| 0.0021 | 11.0 | 66 | 1.1219 | 0.7556 |
| 0.0013 | 12.0 | 72 | 1.0834 | 0.7556 |
| 0.0013 | 13.0 | 78 | 1.0270 | 0.7333 |
| 0.0006 | 14.0 | 84 | 1.0518 | 0.7556 |
| 0.0005 | 15.0 | 90 | 1.0755 | 0.7556 |
| 0.0005 | 16.0 | 96 | 1.1073 | 0.7556 |
| 0.0005 | 17.0 | 102 | 1.1726 | 0.7556 |
| 0.0005 | 18.0 | 108 | 1.2002 | 0.7556 |
| 0.0005 | 19.0 | 114 | 1.1838 | 0.7556 |
| 0.0007 | 20.0 | 120 | 1.1860 | 0.7556 |
| 0.0007 | 21.0 | 126 | 1.2997 | 0.7556 |
| 0.0003 | 22.0 | 132 | 1.3311 | 0.7556 |
| 0.0003 | 23.0 | 138 | 1.3197 | 0.7556 |
| 0.0002 | 24.0 | 144 | 1.2630 | 0.7556 |
| 0.0003 | 25.0 | 150 | 1.1925 | 0.7556 |
| 0.0003 | 26.0 | 156 | 1.1444 | 0.7778 |
| 0.0002 | 27.0 | 162 | 1.1105 | 0.7778 |
| 0.0002 | 28.0 | 168 | 1.0790 | 0.7778 |
| 0.0002 | 29.0 | 174 | 1.0616 | 0.7778 |
| 0.0002 | 30.0 | 180 | 1.0495 | 0.7778 |
| 0.0002 | 31.0 | 186 | 1.0431 | 0.7778 |
| 0.0002 | 32.0 | 192 | 1.0407 | 0.7778 |
| 0.0002 | 33.0 | 198 | 1.0375 | 0.8 |
| 0.0107 | 34.0 | 204 | 1.0331 | 0.8 |
| 0.0002 | 35.0 | 210 | 1.0311 | 0.8 |
| 0.0002 | 36.0 | 216 | 1.0289 | 0.8 |
| 0.0002 | 37.0 | 222 | 1.0264 | 0.8 |
| 0.0002 | 38.0 | 228 | 1.0203 | 0.8 |
| 0.0003 | 39.0 | 234 | 1.0167 | 0.8 |
| 0.0002 | 40.0 | 240 | 1.0146 | 0.8 |
| 0.0002 | 41.0 | 246 | 1.0152 | 0.8 |
| 0.0002 | 42.0 | 252 | 1.0153 | 0.8 |
| 0.0002 | 43.0 | 258 | 1.0153 | 0.8 |
| 0.0002 | 44.0 | 264 | 1.0153 | 0.8 |
| 0.0002 | 45.0 | 270 | 1.0153 | 0.8 |
| 0.0002 | 46.0 | 276 | 1.0153 | 0.8 |
| 0.002 | 47.0 | 282 | 1.0153 | 0.8 |
| 0.002 | 48.0 | 288 | 1.0153 | 0.8 |
| 0.0006 | 49.0 | 294 | 1.0153 | 0.8 |
| 0.0001 | 50.0 | 300 | 1.0153 | 0.8 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_1x_beit_base_adamax_0001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_1x_beit_base_adamax_0001_fold3
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5855
- Accuracy: 0.8605
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 1.0849 | 0.5349 |
| 1.2852 | 2.0 | 12 | 0.7460 | 0.7907 |
| 1.2852 | 3.0 | 18 | 0.5699 | 0.8140 |
| 0.4305 | 4.0 | 24 | 0.3649 | 0.8605 |
| 0.1805 | 5.0 | 30 | 0.2406 | 0.9535 |
| 0.1805 | 6.0 | 36 | 0.4656 | 0.8837 |
| 0.0211 | 7.0 | 42 | 0.4915 | 0.8605 |
| 0.0211 | 8.0 | 48 | 0.5042 | 0.8372 |
| 0.0066 | 9.0 | 54 | 0.6760 | 0.7907 |
| 0.0025 | 10.0 | 60 | 0.6098 | 0.8605 |
| 0.0025 | 11.0 | 66 | 0.6353 | 0.9070 |
| 0.0011 | 12.0 | 72 | 0.6882 | 0.8837 |
| 0.0011 | 13.0 | 78 | 0.6437 | 0.8837 |
| 0.0022 | 14.0 | 84 | 0.5430 | 0.8605 |
| 0.0007 | 15.0 | 90 | 0.5436 | 0.8605 |
| 0.0007 | 16.0 | 96 | 0.5847 | 0.8605 |
| 0.0007 | 17.0 | 102 | 0.7054 | 0.8605 |
| 0.0007 | 18.0 | 108 | 0.7624 | 0.8372 |
| 0.0006 | 19.0 | 114 | 0.6619 | 0.8605 |
| 0.0007 | 20.0 | 120 | 0.6238 | 0.8372 |
| 0.0007 | 21.0 | 126 | 0.6086 | 0.8372 |
| 0.0003 | 22.0 | 132 | 0.6074 | 0.8372 |
| 0.0003 | 23.0 | 138 | 0.6228 | 0.8605 |
| 0.0003 | 24.0 | 144 | 0.6265 | 0.8605 |
| 0.0003 | 25.0 | 150 | 0.6139 | 0.8372 |
| 0.0003 | 26.0 | 156 | 0.6063 | 0.8372 |
| 0.0002 | 27.0 | 162 | 0.5981 | 0.8372 |
| 0.0002 | 28.0 | 168 | 0.5901 | 0.8372 |
| 0.0002 | 29.0 | 174 | 0.5785 | 0.8605 |
| 0.0001 | 30.0 | 180 | 0.5753 | 0.8605 |
| 0.0001 | 31.0 | 186 | 0.5775 | 0.8605 |
| 0.0002 | 32.0 | 192 | 0.5781 | 0.8605 |
| 0.0002 | 33.0 | 198 | 0.5782 | 0.8605 |
| 0.0002 | 34.0 | 204 | 0.5804 | 0.8605 |
| 0.0003 | 35.0 | 210 | 0.5817 | 0.8605 |
| 0.0003 | 36.0 | 216 | 0.5823 | 0.8605 |
| 0.0001 | 37.0 | 222 | 0.5831 | 0.8605 |
| 0.0001 | 38.0 | 228 | 0.5855 | 0.8605 |
| 0.0002 | 39.0 | 234 | 0.5859 | 0.8605 |
| 0.0002 | 40.0 | 240 | 0.5858 | 0.8605 |
| 0.0002 | 41.0 | 246 | 0.5855 | 0.8605 |
| 0.0002 | 42.0 | 252 | 0.5855 | 0.8605 |
| 0.0002 | 43.0 | 258 | 0.5855 | 0.8605 |
| 0.0002 | 44.0 | 264 | 0.5855 | 0.8605 |
| 0.0001 | 45.0 | 270 | 0.5855 | 0.8605 |
| 0.0001 | 46.0 | 276 | 0.5855 | 0.8605 |
| 0.0005 | 47.0 | 282 | 0.5855 | 0.8605 |
| 0.0005 | 48.0 | 288 | 0.5855 | 0.8605 |
| 0.0002 | 49.0 | 294 | 0.5855 | 0.8605 |
| 0.0001 | 50.0 | 300 | 0.5855 | 0.8605 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_1x_beit_base_adamax_0001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_1x_beit_base_adamax_0001_fold4
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2926
- Accuracy: 0.9048
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 1.1428 | 0.6905 |
| 1.3492 | 2.0 | 12 | 0.5681 | 0.7857 |
| 1.3492 | 3.0 | 18 | 0.2529 | 0.9286 |
| 0.3166 | 4.0 | 24 | 0.2221 | 0.9524 |
| 0.0428 | 5.0 | 30 | 0.2913 | 0.9048 |
| 0.0428 | 6.0 | 36 | 0.3814 | 0.8571 |
| 0.0093 | 7.0 | 42 | 0.2701 | 0.9524 |
| 0.0093 | 8.0 | 48 | 0.2796 | 0.9286 |
| 0.0019 | 9.0 | 54 | 0.3043 | 0.9048 |
| 0.0029 | 10.0 | 60 | 0.4551 | 0.8810 |
| 0.0029 | 11.0 | 66 | 0.3262 | 0.9286 |
| 0.001 | 12.0 | 72 | 0.2680 | 0.9524 |
| 0.001 | 13.0 | 78 | 0.2601 | 0.9524 |
| 0.0006 | 14.0 | 84 | 0.3353 | 0.9048 |
| 0.0008 | 15.0 | 90 | 0.3915 | 0.9048 |
| 0.0008 | 16.0 | 96 | 0.4398 | 0.8810 |
| 0.0004 | 17.0 | 102 | 0.3988 | 0.9048 |
| 0.0004 | 18.0 | 108 | 0.3416 | 0.9048 |
| 0.0053 | 19.0 | 114 | 0.2975 | 0.9286 |
| 0.0004 | 20.0 | 120 | 0.2890 | 0.9286 |
| 0.0004 | 21.0 | 126 | 0.2852 | 0.9286 |
| 0.0061 | 22.0 | 132 | 0.2652 | 0.9286 |
| 0.0061 | 23.0 | 138 | 0.2502 | 0.9286 |
| 0.0002 | 24.0 | 144 | 0.2495 | 0.9286 |
| 0.0003 | 25.0 | 150 | 0.2641 | 0.9286 |
| 0.0003 | 26.0 | 156 | 0.2771 | 0.9286 |
| 0.0002 | 27.0 | 162 | 0.2877 | 0.9286 |
| 0.0002 | 28.0 | 168 | 0.3003 | 0.9286 |
| 0.0002 | 29.0 | 174 | 0.3118 | 0.9286 |
| 0.0002 | 30.0 | 180 | 0.3215 | 0.9286 |
| 0.0002 | 31.0 | 186 | 0.3282 | 0.9286 |
| 0.0003 | 32.0 | 192 | 0.3381 | 0.9286 |
| 0.0003 | 33.0 | 198 | 0.3472 | 0.9048 |
| 0.0002 | 34.0 | 204 | 0.3491 | 0.9048 |
| 0.0049 | 35.0 | 210 | 0.3154 | 0.9048 |
| 0.0049 | 36.0 | 216 | 0.2965 | 0.9048 |
| 0.0002 | 37.0 | 222 | 0.2887 | 0.9048 |
| 0.0002 | 38.0 | 228 | 0.2886 | 0.9048 |
| 0.0002 | 39.0 | 234 | 0.2894 | 0.9048 |
| 0.0002 | 40.0 | 240 | 0.2903 | 0.9048 |
| 0.0002 | 41.0 | 246 | 0.2922 | 0.9048 |
| 0.0004 | 42.0 | 252 | 0.2926 | 0.9048 |
| 0.0004 | 43.0 | 258 | 0.2926 | 0.9048 |
| 0.0002 | 44.0 | 264 | 0.2926 | 0.9048 |
| 0.0002 | 45.0 | 270 | 0.2926 | 0.9048 |
| 0.0002 | 46.0 | 276 | 0.2926 | 0.9048 |
| 0.0009 | 47.0 | 282 | 0.2926 | 0.9048 |
| 0.0009 | 48.0 | 288 | 0.2926 | 0.9048 |
| 0.0004 | 49.0 | 294 | 0.2926 | 0.9048 |
| 0.0001 | 50.0 | 300 | 0.2926 | 0.9048 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |