model_id | model_card | model_labels
---|---|---
DazMashaly/swinv2_zindi
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2_zindi
This model is a fine-tuned version of [microsoft/swinv2-large-patch4-window12-192-22k](https://huggingface.co/microsoft/swinv2-large-patch4-window12-192-22k) on the zindi dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6328
- eval_accuracy: 0.7434
- eval_runtime: 234.8425
- eval_samples_per_second: 16.492
- eval_steps_per_second: 0.519
- epoch: 3.0
- step: 520
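The throughput figures above are internally consistent with one another; a quick sanity check in Python (assuming the eval_batch_size of 32 listed under the training hyperparameters):

```python
import math

eval_runtime = 234.8425        # seconds
samples_per_second = 16.492
steps_per_second = 0.519
eval_batch_size = 32           # from the training hyperparameters

# The reported throughput implies the evaluation-set size and step count.
n_samples = round(samples_per_second * eval_runtime)  # -> 3873 samples
n_steps = round(steps_per_second * eval_runtime)      # -> 122 steps

# Steps should equal the sample count divided by the batch size, rounded up.
print(n_samples, n_steps, math.ceil(n_samples / eval_batch_size))  # -> 3873 122 122
```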
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
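The total_train_batch_size above is simply the per-device batch size multiplied by the gradient-accumulation steps; a minimal check:

```python
train_batch_size = 32
gradient_accumulation_steps = 4

# Effective (total) train batch size per optimizer update.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # -> 128, matching the value listed above
```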
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"dr",
"g",
"nd",
"wd",
"other"
] |
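Image-classification configs on the Hub typically store a label list like the one above as id2label / label2id mappings; a sketch of that usual convention (the mapping below is illustrative, not confirmed by the card):

```python
# Class labels reported for DazMashaly/swinv2_zindi.
labels = ["dr", "g", "nd", "wd", "other"]

# Conventional id2label / label2id dictionaries built from the list order.
id2label = {i: label for i, label in enumerate(labels)}
label2id = {label: i for i, label in enumerate(labels)}

print(id2label[0], label2id["other"])  # -> dr 4
```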
hkivancoral/hushem_40x_deit_tiny_sgd_00001_fold1
|

# hushem_40x_deit_tiny_sgd_00001_fold1
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3702
- Accuracy: 0.2889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.5833 | 1.0 | 215 | 1.4002 | 0.1778 |
| 1.5381 | 2.0 | 430 | 1.3990 | 0.2 |
| 1.505 | 3.0 | 645 | 1.3978 | 0.2222 |
| 1.446 | 4.0 | 860 | 1.3967 | 0.2444 |
| 1.4742 | 5.0 | 1075 | 1.3956 | 0.2222 |
| 1.3991 | 6.0 | 1290 | 1.3945 | 0.2222 |
| 1.4142 | 7.0 | 1505 | 1.3933 | 0.2222 |
| 1.4895 | 8.0 | 1720 | 1.3923 | 0.2222 |
| 1.4297 | 9.0 | 1935 | 1.3912 | 0.2222 |
| 1.4803 | 10.0 | 2150 | 1.3901 | 0.2222 |
| 1.4253 | 11.0 | 2365 | 1.3890 | 0.2222 |
| 1.4151 | 12.0 | 2580 | 1.3880 | 0.2222 |
| 1.3649 | 13.0 | 2795 | 1.3870 | 0.2222 |
| 1.4058 | 14.0 | 3010 | 1.3860 | 0.2444 |
| 1.3858 | 15.0 | 3225 | 1.3850 | 0.2444 |
| 1.3985 | 16.0 | 3440 | 1.3841 | 0.2444 |
| 1.4078 | 17.0 | 3655 | 1.3832 | 0.2444 |
| 1.3916 | 18.0 | 3870 | 1.3823 | 0.2444 |
| 1.4138 | 19.0 | 4085 | 1.3814 | 0.2444 |
| 1.3697 | 20.0 | 4300 | 1.3807 | 0.2444 |
| 1.3976 | 21.0 | 4515 | 1.3799 | 0.2444 |
| 1.45 | 22.0 | 4730 | 1.3791 | 0.2444 |
| 1.3757 | 23.0 | 4945 | 1.3784 | 0.2444 |
| 1.4088 | 24.0 | 5160 | 1.3777 | 0.2667 |
| 1.3948 | 25.0 | 5375 | 1.3771 | 0.2667 |
| 1.3916 | 26.0 | 5590 | 1.3764 | 0.2667 |
| 1.3383 | 27.0 | 5805 | 1.3759 | 0.2667 |
| 1.3507 | 28.0 | 6020 | 1.3753 | 0.2889 |
| 1.3823 | 29.0 | 6235 | 1.3748 | 0.2889 |
| 1.3489 | 30.0 | 6450 | 1.3743 | 0.2889 |
| 1.3905 | 31.0 | 6665 | 1.3738 | 0.2889 |
| 1.3646 | 32.0 | 6880 | 1.3734 | 0.2889 |
| 1.394 | 33.0 | 7095 | 1.3730 | 0.2889 |
| 1.3256 | 34.0 | 7310 | 1.3726 | 0.2889 |
| 1.342 | 35.0 | 7525 | 1.3723 | 0.2889 |
| 1.3277 | 36.0 | 7740 | 1.3720 | 0.2889 |
| 1.3815 | 37.0 | 7955 | 1.3717 | 0.2889 |
| 1.3516 | 38.0 | 8170 | 1.3714 | 0.2889 |
| 1.3573 | 39.0 | 8385 | 1.3712 | 0.2889 |
| 1.3764 | 40.0 | 8600 | 1.3710 | 0.2889 |
| 1.3508 | 41.0 | 8815 | 1.3708 | 0.2889 |
| 1.4032 | 42.0 | 9030 | 1.3707 | 0.2889 |
| 1.3548 | 43.0 | 9245 | 1.3705 | 0.2889 |
| 1.3623 | 44.0 | 9460 | 1.3704 | 0.2889 |
| 1.3744 | 45.0 | 9675 | 1.3704 | 0.2889 |
| 1.3298 | 46.0 | 9890 | 1.3703 | 0.2889 |
| 1.352 | 47.0 | 10105 | 1.3703 | 0.2889 |
| 1.363 | 48.0 | 10320 | 1.3702 | 0.2889 |
| 1.3844 | 49.0 | 10535 | 1.3702 | 0.2889 |
| 1.3587 | 50.0 | 10750 | 1.3702 | 0.2889 |
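The step counts in the table above constrain the training-set size: 215 steps per epoch at batch size 32, with a possibly partial final batch, bounds the sample count as sketched below.

```python
import math

steps_per_epoch = 215   # from the table: step 215 at epoch 1.0
train_batch_size = 32

# With a possibly partial last batch, the training-set size n must satisfy
# ceil(n / batch) == steps_per_epoch, i.e. n lies in (214*32, 215*32].
lo = (steps_per_epoch - 1) * train_batch_size + 1   # 6849
hi = steps_per_epoch * train_batch_size             # 6880

assert math.ceil(lo / train_batch_size) == steps_per_epoch
assert math.ceil(hi / train_batch_size) == steps_per_epoch
print(lo, hi)  # -> 6849 6880
```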
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_adamax_001_fold4
|
# hushem_40x_deit_small_adamax_001_fold4
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4103
- Accuracy: 0.9286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
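The linear scheduler with a 0.1 warmup ratio ramps the learning rate up over the first 10% of steps, then decays it linearly to zero. A sketch of that behavior (this mirrors the usual transformers "linear" schedule but is not the exact library code; total_steps of 10950 is taken from the table below):

```python
def linear_lr_with_warmup(step, base_lr=0.001, total_steps=10950, warmup_ratio=0.1):
    """Linear warmup to base_lr, then linear decay to zero."""
    warmup_steps = int(total_steps * warmup_ratio)  # 1095 here
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)

print(linear_lr_with_warmup(0))      # -> 0.0
print(linear_lr_with_warmup(1095))   # -> 0.001 (peak, at the end of warmup)
print(linear_lr_with_warmup(10950))  # -> 0.0
```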
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2795 | 1.0 | 219 | 0.2659 | 0.9286 |
| 0.1364 | 2.0 | 438 | 0.4367 | 0.8810 |
| 0.0634 | 3.0 | 657 | 0.4289 | 0.8571 |
| 0.0114 | 4.0 | 876 | 1.3297 | 0.8095 |
| 0.0408 | 5.0 | 1095 | 0.4920 | 0.9286 |
| 0.0457 | 6.0 | 1314 | 0.6322 | 0.8810 |
| 0.0085 | 7.0 | 1533 | 0.2933 | 0.9286 |
| 0.0007 | 8.0 | 1752 | 0.2944 | 0.9286 |
| 0.0151 | 9.0 | 1971 | 0.7354 | 0.9048 |
| 0.012 | 10.0 | 2190 | 0.5037 | 0.9048 |
| 0.0001 | 11.0 | 2409 | 0.1997 | 0.9286 |
| 0.0025 | 12.0 | 2628 | 0.6953 | 0.8810 |
| 0.0005 | 13.0 | 2847 | 0.8257 | 0.8571 |
| 0.0385 | 14.0 | 3066 | 0.0513 | 0.9762 |
| 0.0 | 15.0 | 3285 | 0.4499 | 0.9286 |
| 0.0013 | 16.0 | 3504 | 0.7163 | 0.9048 |
| 0.0 | 17.0 | 3723 | 0.5066 | 0.8810 |
| 0.0 | 18.0 | 3942 | 0.3335 | 0.9524 |
| 0.0 | 19.0 | 4161 | 0.3359 | 0.9524 |
| 0.0 | 20.0 | 4380 | 0.3399 | 0.9524 |
| 0.0 | 21.0 | 4599 | 0.3444 | 0.9524 |
| 0.0 | 22.0 | 4818 | 0.3491 | 0.9524 |
| 0.0 | 23.0 | 5037 | 0.3534 | 0.9286 |
| 0.0 | 24.0 | 5256 | 0.3577 | 0.9286 |
| 0.0 | 25.0 | 5475 | 0.3622 | 0.9286 |
| 0.0 | 26.0 | 5694 | 0.3663 | 0.9286 |
| 0.0 | 27.0 | 5913 | 0.3700 | 0.9286 |
| 0.0 | 28.0 | 6132 | 0.3737 | 0.9286 |
| 0.0 | 29.0 | 6351 | 0.3774 | 0.9286 |
| 0.0 | 30.0 | 6570 | 0.3807 | 0.9286 |
| 0.0 | 31.0 | 6789 | 0.3834 | 0.9286 |
| 0.0 | 32.0 | 7008 | 0.3865 | 0.9286 |
| 0.0 | 33.0 | 7227 | 0.3891 | 0.9286 |
| 0.0 | 34.0 | 7446 | 0.3914 | 0.9286 |
| 0.0 | 35.0 | 7665 | 0.3939 | 0.9286 |
| 0.0 | 36.0 | 7884 | 0.3958 | 0.9286 |
| 0.0 | 37.0 | 8103 | 0.3979 | 0.9286 |
| 0.0 | 38.0 | 8322 | 0.3998 | 0.9286 |
| 0.0 | 39.0 | 8541 | 0.4013 | 0.9286 |
| 0.0 | 40.0 | 8760 | 0.4028 | 0.9286 |
| 0.0 | 41.0 | 8979 | 0.4041 | 0.9286 |
| 0.0 | 42.0 | 9198 | 0.4053 | 0.9286 |
| 0.0 | 43.0 | 9417 | 0.4061 | 0.9286 |
| 0.0 | 44.0 | 9636 | 0.4071 | 0.9286 |
| 0.0 | 45.0 | 9855 | 0.4081 | 0.9286 |
| 0.0 | 46.0 | 10074 | 0.4089 | 0.9286 |
| 0.0 | 47.0 | 10293 | 0.4095 | 0.9286 |
| 0.0 | 48.0 | 10512 | 0.4099 | 0.9286 |
| 0.0 | 49.0 | 10731 | 0.4102 | 0.9286 |
| 0.0 | 50.0 | 10950 | 0.4103 | 0.9286 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_base_rms_00001_fold3
|
# hushem_40x_deit_base_rms_00001_fold3
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8836
- Accuracy: 0.9070
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0092 | 1.0 | 217 | 0.3600 | 0.9070 |
| 0.0049 | 2.0 | 434 | 0.6644 | 0.8837 |
| 0.0002 | 3.0 | 651 | 0.6352 | 0.8605 |
| 0.0002 | 4.0 | 868 | 0.4194 | 0.8372 |
| 0.0 | 5.0 | 1085 | 0.4806 | 0.8837 |
| 0.0 | 6.0 | 1302 | 0.4943 | 0.8837 |
| 0.0 | 7.0 | 1519 | 0.5374 | 0.8837 |
| 0.0 | 8.0 | 1736 | 0.5739 | 0.8837 |
| 0.0 | 9.0 | 1953 | 0.6244 | 0.8837 |
| 0.0 | 10.0 | 2170 | 0.6958 | 0.8837 |
| 0.0 | 11.0 | 2387 | 0.7044 | 0.8837 |
| 0.0 | 12.0 | 2604 | 0.7420 | 0.8837 |
| 0.0 | 13.0 | 2821 | 0.7779 | 0.8837 |
| 0.0 | 14.0 | 3038 | 0.8260 | 0.8837 |
| 0.0 | 15.0 | 3255 | 0.8100 | 0.8837 |
| 0.0 | 16.0 | 3472 | 0.8334 | 0.8837 |
| 0.0 | 17.0 | 3689 | 0.8315 | 0.8837 |
| 0.0 | 18.0 | 3906 | 0.8407 | 0.8837 |
| 0.0 | 19.0 | 4123 | 0.8449 | 0.8837 |
| 0.0 | 20.0 | 4340 | 0.8517 | 0.9070 |
| 0.0 | 21.0 | 4557 | 0.8539 | 0.9070 |
| 0.0 | 22.0 | 4774 | 0.8566 | 0.9070 |
| 0.0 | 23.0 | 4991 | 0.8670 | 0.9070 |
| 0.0 | 24.0 | 5208 | 0.8582 | 0.9070 |
| 0.0 | 25.0 | 5425 | 0.8799 | 0.9070 |
| 0.0 | 26.0 | 5642 | 0.8723 | 0.9070 |
| 0.0 | 27.0 | 5859 | 0.8712 | 0.9070 |
| 0.0 | 28.0 | 6076 | 0.8721 | 0.9070 |
| 0.0 | 29.0 | 6293 | 0.8741 | 0.9070 |
| 0.0 | 30.0 | 6510 | 0.8806 | 0.9070 |
| 0.0 | 31.0 | 6727 | 0.8875 | 0.9070 |
| 0.0 | 32.0 | 6944 | 0.8790 | 0.9070 |
| 0.0 | 33.0 | 7161 | 0.8844 | 0.9070 |
| 0.0 | 34.0 | 7378 | 0.8840 | 0.9070 |
| 0.0 | 35.0 | 7595 | 0.8876 | 0.9070 |
| 0.0 | 36.0 | 7812 | 0.8874 | 0.9070 |
| 0.0 | 37.0 | 8029 | 0.8892 | 0.9070 |
| 0.0 | 38.0 | 8246 | 0.8786 | 0.9070 |
| 0.0 | 39.0 | 8463 | 0.8835 | 0.9070 |
| 0.0 | 40.0 | 8680 | 0.8858 | 0.9070 |
| 0.0 | 41.0 | 8897 | 0.8804 | 0.9070 |
| 0.0 | 42.0 | 9114 | 0.8847 | 0.9070 |
| 0.0 | 43.0 | 9331 | 0.8839 | 0.9070 |
| 0.0 | 44.0 | 9548 | 0.8847 | 0.9070 |
| 0.0 | 45.0 | 9765 | 0.8817 | 0.9070 |
| 0.0 | 46.0 | 9982 | 0.8847 | 0.9070 |
| 0.0 | 47.0 | 10199 | 0.8836 | 0.9070 |
| 0.0 | 48.0 | 10416 | 0.8831 | 0.9070 |
| 0.0 | 49.0 | 10633 | 0.8834 | 0.9070 |
| 0.0 | 50.0 | 10850 | 0.8836 | 0.9070 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_tiny_sgd_00001_fold2
|
# hushem_40x_deit_tiny_sgd_00001_fold2
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4218
- Accuracy: 0.2444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.4676 | 1.0 | 215 | 1.4979 | 0.2444 |
| 1.453 | 2.0 | 430 | 1.4941 | 0.2444 |
| 1.4514 | 3.0 | 645 | 1.4903 | 0.2444 |
| 1.4264 | 4.0 | 860 | 1.4866 | 0.2222 |
| 1.4845 | 5.0 | 1075 | 1.4831 | 0.2222 |
| 1.4049 | 6.0 | 1290 | 1.4797 | 0.2444 |
| 1.408 | 7.0 | 1505 | 1.4764 | 0.2444 |
| 1.4075 | 8.0 | 1720 | 1.4733 | 0.2444 |
| 1.4274 | 9.0 | 1935 | 1.4702 | 0.2444 |
| 1.4165 | 10.0 | 2150 | 1.4673 | 0.2444 |
| 1.3408 | 11.0 | 2365 | 1.4645 | 0.2444 |
| 1.387 | 12.0 | 2580 | 1.4617 | 0.2444 |
| 1.3966 | 13.0 | 2795 | 1.4591 | 0.2444 |
| 1.3631 | 14.0 | 3010 | 1.4566 | 0.2444 |
| 1.3966 | 15.0 | 3225 | 1.4542 | 0.2444 |
| 1.4085 | 16.0 | 3440 | 1.4520 | 0.2444 |
| 1.3593 | 17.0 | 3655 | 1.4498 | 0.2444 |
| 1.3872 | 18.0 | 3870 | 1.4477 | 0.2444 |
| 1.3857 | 19.0 | 4085 | 1.4457 | 0.2222 |
| 1.3961 | 20.0 | 4300 | 1.4439 | 0.2222 |
| 1.3725 | 21.0 | 4515 | 1.4421 | 0.2222 |
| 1.3634 | 22.0 | 4730 | 1.4405 | 0.2222 |
| 1.3404 | 23.0 | 4945 | 1.4389 | 0.2222 |
| 1.2947 | 24.0 | 5160 | 1.4374 | 0.2222 |
| 1.3286 | 25.0 | 5375 | 1.4360 | 0.2222 |
| 1.3597 | 26.0 | 5590 | 1.4346 | 0.2222 |
| 1.3935 | 27.0 | 5805 | 1.4334 | 0.2222 |
| 1.3126 | 28.0 | 6020 | 1.4322 | 0.2222 |
| 1.3862 | 29.0 | 6235 | 1.4311 | 0.2222 |
| 1.345 | 30.0 | 6450 | 1.4301 | 0.2222 |
| 1.3332 | 31.0 | 6665 | 1.4291 | 0.2222 |
| 1.3215 | 32.0 | 6880 | 1.4283 | 0.2222 |
| 1.3753 | 33.0 | 7095 | 1.4274 | 0.2222 |
| 1.3397 | 34.0 | 7310 | 1.4267 | 0.2222 |
| 1.3085 | 35.0 | 7525 | 1.4260 | 0.2222 |
| 1.3414 | 36.0 | 7740 | 1.4254 | 0.2222 |
| 1.3773 | 37.0 | 7955 | 1.4248 | 0.2222 |
| 1.2916 | 38.0 | 8170 | 1.4243 | 0.2222 |
| 1.2953 | 39.0 | 8385 | 1.4238 | 0.2222 |
| 1.3053 | 40.0 | 8600 | 1.4234 | 0.2222 |
| 1.3127 | 41.0 | 8815 | 1.4230 | 0.2222 |
| 1.2816 | 42.0 | 9030 | 1.4227 | 0.2222 |
| 1.3493 | 43.0 | 9245 | 1.4225 | 0.2222 |
| 1.3258 | 44.0 | 9460 | 1.4223 | 0.2222 |
| 1.3441 | 45.0 | 9675 | 1.4221 | 0.2444 |
| 1.2959 | 46.0 | 9890 | 1.4220 | 0.2444 |
| 1.3609 | 47.0 | 10105 | 1.4219 | 0.2444 |
| 1.276 | 48.0 | 10320 | 1.4219 | 0.2444 |
| 1.2931 | 49.0 | 10535 | 1.4218 | 0.2444 |
| 1.3383 | 50.0 | 10750 | 1.4218 | 0.2444 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_adamax_001_fold5
|
# hushem_40x_deit_small_adamax_001_fold5
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6357
- Accuracy: 0.9268
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2206 | 1.0 | 220 | 1.1411 | 0.6829 |
| 0.1873 | 2.0 | 440 | 0.5940 | 0.8780 |
| 0.0203 | 3.0 | 660 | 0.9936 | 0.7805 |
| 0.0624 | 4.0 | 880 | 0.3597 | 0.9024 |
| 0.0108 | 5.0 | 1100 | 1.3539 | 0.7805 |
| 0.0858 | 6.0 | 1320 | 0.8241 | 0.8049 |
| 0.0246 | 7.0 | 1540 | 1.0359 | 0.8049 |
| 0.0131 | 8.0 | 1760 | 0.7509 | 0.8049 |
| 0.0013 | 9.0 | 1980 | 1.4351 | 0.7805 |
| 0.0095 | 10.0 | 2200 | 1.1916 | 0.7561 |
| 0.0002 | 11.0 | 2420 | 0.7203 | 0.8293 |
| 0.0011 | 12.0 | 2640 | 1.0391 | 0.8293 |
| 0.007 | 13.0 | 2860 | 1.8906 | 0.7317 |
| 0.0002 | 14.0 | 3080 | 0.4058 | 0.9512 |
| 0.0 | 15.0 | 3300 | 0.3547 | 0.9268 |
| 0.0 | 16.0 | 3520 | 0.3764 | 0.9268 |
| 0.0 | 17.0 | 3740 | 0.3894 | 0.9268 |
| 0.0 | 18.0 | 3960 | 0.4031 | 0.9268 |
| 0.0 | 19.0 | 4180 | 0.4138 | 0.9268 |
| 0.0 | 20.0 | 4400 | 0.4231 | 0.9268 |
| 0.0 | 21.0 | 4620 | 0.4326 | 0.9268 |
| 0.0 | 22.0 | 4840 | 0.4413 | 0.9268 |
| 0.0 | 23.0 | 5060 | 0.4490 | 0.9268 |
| 0.0 | 24.0 | 5280 | 0.4564 | 0.9268 |
| 0.0 | 25.0 | 5500 | 0.4638 | 0.9268 |
| 0.0 | 26.0 | 5720 | 0.4710 | 0.9268 |
| 0.0 | 27.0 | 5940 | 0.4779 | 0.9268 |
| 0.0 | 28.0 | 6160 | 0.4851 | 0.9268 |
| 0.0 | 29.0 | 6380 | 0.4923 | 0.9268 |
| 0.0 | 30.0 | 6600 | 0.4998 | 0.9268 |
| 0.0 | 31.0 | 6820 | 0.5069 | 0.9268 |
| 0.0 | 32.0 | 7040 | 0.5143 | 0.9268 |
| 0.0 | 33.0 | 7260 | 0.5224 | 0.9268 |
| 0.0 | 34.0 | 7480 | 0.5303 | 0.9268 |
| 0.0 | 35.0 | 7700 | 0.5381 | 0.9268 |
| 0.0 | 36.0 | 7920 | 0.5458 | 0.9268 |
| 0.0 | 37.0 | 8140 | 0.5543 | 0.9268 |
| 0.0 | 38.0 | 8360 | 0.5622 | 0.9268 |
| 0.0 | 39.0 | 8580 | 0.5706 | 0.9268 |
| 0.0 | 40.0 | 8800 | 0.5791 | 0.9268 |
| 0.0 | 41.0 | 9020 | 0.5871 | 0.9268 |
| 0.0 | 42.0 | 9240 | 0.5951 | 0.9268 |
| 0.0 | 43.0 | 9460 | 0.6028 | 0.9268 |
| 0.0 | 44.0 | 9680 | 0.6101 | 0.9268 |
| 0.0 | 45.0 | 9900 | 0.6166 | 0.9268 |
| 0.0 | 46.0 | 10120 | 0.6227 | 0.9268 |
| 0.0 | 47.0 | 10340 | 0.6281 | 0.9268 |
| 0.0 | 48.0 | 10560 | 0.6322 | 0.9268 |
| 0.0 | 49.0 | 10780 | 0.6350 | 0.9268 |
| 0.0 | 50.0 | 11000 | 0.6357 | 0.9268 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_base_rms_00001_fold4
|
# hushem_40x_deit_base_rms_00001_fold4
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1007
- Accuracy: 0.9762
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0198 | 1.0 | 219 | 0.1507 | 0.9762 |
| 0.0011 | 2.0 | 438 | 0.0970 | 0.9524 |
| 0.0001 | 3.0 | 657 | 0.1076 | 0.9762 |
| 0.0 | 4.0 | 876 | 0.1639 | 0.9762 |
| 0.0 | 5.0 | 1095 | 0.1725 | 0.9762 |
| 0.0 | 6.0 | 1314 | 0.1878 | 0.9762 |
| 0.0 | 7.0 | 1533 | 0.1938 | 0.9762 |
| 0.0 | 8.0 | 1752 | 0.2324 | 0.9762 |
| 0.0 | 9.0 | 1971 | 0.2291 | 0.9762 |
| 0.0 | 10.0 | 2190 | 0.2439 | 0.9762 |
| 0.0 | 11.0 | 2409 | 0.2561 | 0.9762 |
| 0.0 | 12.0 | 2628 | 0.2485 | 0.9762 |
| 0.0 | 13.0 | 2847 | 0.2405 | 0.9762 |
| 0.0 | 14.0 | 3066 | 0.2304 | 0.9762 |
| 0.0 | 15.0 | 3285 | 0.2413 | 0.9762 |
| 0.0 | 16.0 | 3504 | 0.2115 | 0.9762 |
| 0.0 | 17.0 | 3723 | 0.1910 | 0.9762 |
| 0.0 | 18.0 | 3942 | 0.1870 | 0.9762 |
| 0.0 | 19.0 | 4161 | 0.1739 | 0.9762 |
| 0.0 | 20.0 | 4380 | 0.1578 | 0.9762 |
| 0.0 | 21.0 | 4599 | 0.1547 | 0.9762 |
| 0.0 | 22.0 | 4818 | 0.1669 | 0.9762 |
| 0.0 | 23.0 | 5037 | 0.1432 | 0.9762 |
| 0.0 | 24.0 | 5256 | 0.1575 | 0.9762 |
| 0.0 | 25.0 | 5475 | 0.1416 | 0.9762 |
| 0.0 | 26.0 | 5694 | 0.1393 | 0.9762 |
| 0.0 | 27.0 | 5913 | 0.1475 | 0.9762 |
| 0.0 | 28.0 | 6132 | 0.1366 | 0.9762 |
| 0.0 | 29.0 | 6351 | 0.1260 | 0.9762 |
| 0.0 | 30.0 | 6570 | 0.1297 | 0.9762 |
| 0.0 | 31.0 | 6789 | 0.1297 | 0.9762 |
| 0.0 | 32.0 | 7008 | 0.1203 | 0.9762 |
| 0.0 | 33.0 | 7227 | 0.1255 | 0.9762 |
| 0.0 | 34.0 | 7446 | 0.1098 | 0.9762 |
| 0.0 | 35.0 | 7665 | 0.1089 | 0.9762 |
| 0.0 | 36.0 | 7884 | 0.1102 | 0.9762 |
| 0.0 | 37.0 | 8103 | 0.1098 | 0.9762 |
| 0.0 | 38.0 | 8322 | 0.1034 | 0.9762 |
| 0.0 | 39.0 | 8541 | 0.1028 | 0.9762 |
| 0.0 | 40.0 | 8760 | 0.1065 | 0.9762 |
| 0.0 | 41.0 | 8979 | 0.1129 | 0.9762 |
| 0.0 | 42.0 | 9198 | 0.1039 | 0.9762 |
| 0.0 | 43.0 | 9417 | 0.1041 | 0.9762 |
| 0.0 | 44.0 | 9636 | 0.1021 | 0.9762 |
| 0.0 | 45.0 | 9855 | 0.0980 | 0.9762 |
| 0.0 | 46.0 | 10074 | 0.1036 | 0.9762 |
| 0.0 | 47.0 | 10293 | 0.1002 | 0.9762 |
| 0.0 | 48.0 | 10512 | 0.1011 | 0.9762 |
| 0.0 | 49.0 | 10731 | 0.1008 | 0.9762 |
| 0.0 | 50.0 | 10950 | 0.1007 | 0.9762 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_tiny_sgd_00001_fold3
|
# hushem_40x_deit_tiny_sgd_00001_fold3
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4228
- Accuracy: 0.2791
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.4808 | 1.0 | 217 | 1.4816 | 0.2791 |
| 1.4823 | 2.0 | 434 | 1.4791 | 0.2791 |
| 1.4134 | 3.0 | 651 | 1.4766 | 0.2791 |
| 1.4759 | 4.0 | 868 | 1.4742 | 0.2791 |
| 1.4883 | 5.0 | 1085 | 1.4718 | 0.2791 |
| 1.4518 | 6.0 | 1302 | 1.4695 | 0.3023 |
| 1.4499 | 7.0 | 1519 | 1.4671 | 0.2791 |
| 1.4363 | 8.0 | 1736 | 1.4648 | 0.2791 |
| 1.4639 | 9.0 | 1953 | 1.4626 | 0.2791 |
| 1.447 | 10.0 | 2170 | 1.4604 | 0.2791 |
| 1.4636 | 11.0 | 2387 | 1.4583 | 0.3023 |
| 1.4249 | 12.0 | 2604 | 1.4562 | 0.3023 |
| 1.4551 | 13.0 | 2821 | 1.4542 | 0.3023 |
| 1.4299 | 14.0 | 3038 | 1.4523 | 0.2791 |
| 1.4254 | 15.0 | 3255 | 1.4505 | 0.2791 |
| 1.3712 | 16.0 | 3472 | 1.4487 | 0.2791 |
| 1.4294 | 17.0 | 3689 | 1.4469 | 0.2791 |
| 1.3982 | 18.0 | 3906 | 1.4452 | 0.2791 |
| 1.39 | 19.0 | 4123 | 1.4437 | 0.2791 |
| 1.3895 | 20.0 | 4340 | 1.4422 | 0.2791 |
| 1.3897 | 21.0 | 4557 | 1.4407 | 0.2791 |
| 1.381 | 22.0 | 4774 | 1.4393 | 0.2791 |
| 1.3878 | 23.0 | 4991 | 1.4380 | 0.2791 |
| 1.4255 | 24.0 | 5208 | 1.4367 | 0.2791 |
| 1.397 | 25.0 | 5425 | 1.4355 | 0.2791 |
| 1.3946 | 26.0 | 5642 | 1.4344 | 0.2791 |
| 1.4141 | 27.0 | 5859 | 1.4334 | 0.2791 |
| 1.391 | 28.0 | 6076 | 1.4324 | 0.2791 |
| 1.3772 | 29.0 | 6293 | 1.4314 | 0.2791 |
| 1.4053 | 30.0 | 6510 | 1.4305 | 0.2791 |
| 1.3414 | 31.0 | 6727 | 1.4297 | 0.2791 |
| 1.368 | 32.0 | 6944 | 1.4288 | 0.2791 |
| 1.3993 | 33.0 | 7161 | 1.4281 | 0.2791 |
| 1.3039 | 34.0 | 7378 | 1.4274 | 0.2791 |
| 1.3467 | 35.0 | 7595 | 1.4268 | 0.2791 |
| 1.3754 | 36.0 | 7812 | 1.4262 | 0.2791 |
| 1.3681 | 37.0 | 8029 | 1.4257 | 0.2791 |
| 1.3927 | 38.0 | 8246 | 1.4252 | 0.2791 |
| 1.3307 | 39.0 | 8463 | 1.4248 | 0.2791 |
| 1.3625 | 40.0 | 8680 | 1.4244 | 0.2791 |
| 1.3775 | 41.0 | 8897 | 1.4240 | 0.2791 |
| 1.3411 | 42.0 | 9114 | 1.4237 | 0.2791 |
| 1.3645 | 43.0 | 9331 | 1.4235 | 0.2791 |
| 1.3775 | 44.0 | 9548 | 1.4233 | 0.2791 |
| 1.3259 | 45.0 | 9765 | 1.4231 | 0.2791 |
| 1.3653 | 46.0 | 9982 | 1.4230 | 0.2791 |
| 1.3629 | 47.0 | 10199 | 1.4229 | 0.2791 |
| 1.3538 | 48.0 | 10416 | 1.4229 | 0.2791 |
| 1.3676 | 49.0 | 10633 | 1.4228 | 0.2791 |
| 1.357 | 50.0 | 10850 | 1.4228 | 0.2791 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_base_rms_00001_fold5
|
# hushem_40x_deit_base_rms_00001_fold5
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0771
- Accuracy: 0.9024
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0052 | 1.0 | 220 | 0.4130 | 0.8293 |
| 0.001 | 2.0 | 440 | 0.7408 | 0.7561 |
| 0.0001 | 3.0 | 660 | 0.5404 | 0.8293 |
| 0.0 | 4.0 | 880 | 0.6551 | 0.8537 |
| 0.0 | 5.0 | 1100 | 0.6935 | 0.8537 |
| 0.0 | 6.0 | 1320 | 0.7841 | 0.8537 |
| 0.0 | 7.0 | 1540 | 0.8406 | 0.8780 |
| 0.0 | 8.0 | 1760 | 0.9535 | 0.8537 |
| 0.0 | 9.0 | 1980 | 1.0718 | 0.8537 |
| 0.0 | 10.0 | 2200 | 1.0880 | 0.8780 |
| 0.0 | 11.0 | 2420 | 1.2633 | 0.8537 |
| 0.0 | 12.0 | 2640 | 1.2651 | 0.8537 |
| 0.0 | 13.0 | 2860 | 1.2350 | 0.8537 |
| 0.0 | 14.0 | 3080 | 1.2663 | 0.8537 |
| 0.0 | 15.0 | 3300 | 1.2417 | 0.9024 |
| 0.0 | 16.0 | 3520 | 1.2096 | 0.9024 |
| 0.0 | 17.0 | 3740 | 1.2005 | 0.9024 |
| 0.0 | 18.0 | 3960 | 1.2265 | 0.9024 |
| 0.0 | 19.0 | 4180 | 1.2150 | 0.9024 |
| 0.0 | 20.0 | 4400 | 1.1897 | 0.9024 |
| 0.0 | 21.0 | 4620 | 1.1885 | 0.9024 |
| 0.0 | 22.0 | 4840 | 1.1639 | 0.9024 |
| 0.0 | 23.0 | 5060 | 1.1923 | 0.9024 |
| 0.0 | 24.0 | 5280 | 1.1525 | 0.9024 |
| 0.0 | 25.0 | 5500 | 1.1552 | 0.9024 |
| 0.0 | 26.0 | 5720 | 1.1279 | 0.9024 |
| 0.0 | 27.0 | 5940 | 1.1258 | 0.9024 |
| 0.0 | 28.0 | 6160 | 1.1169 | 0.9024 |
| 0.0 | 29.0 | 6380 | 1.1315 | 0.9024 |
| 0.0 | 30.0 | 6600 | 1.1119 | 0.9024 |
| 0.0 | 31.0 | 6820 | 1.1225 | 0.9024 |
| 0.0 | 32.0 | 7040 | 1.1189 | 0.9024 |
| 0.0 | 33.0 | 7260 | 1.0917 | 0.9024 |
| 0.0 | 34.0 | 7480 | 1.0931 | 0.9024 |
| 0.0 | 35.0 | 7700 | 1.0932 | 0.9024 |
| 0.0 | 36.0 | 7920 | 1.0876 | 0.9024 |
| 0.0 | 37.0 | 8140 | 1.0945 | 0.9024 |
| 0.0 | 38.0 | 8360 | 1.0978 | 0.9024 |
| 0.0 | 39.0 | 8580 | 1.0919 | 0.9024 |
| 0.0 | 40.0 | 8800 | 1.0814 | 0.9024 |
| 0.0 | 41.0 | 9020 | 1.0857 | 0.9024 |
| 0.0 | 42.0 | 9240 | 1.0857 | 0.9024 |
| 0.0 | 43.0 | 9460 | 1.0769 | 0.9024 |
| 0.0 | 44.0 | 9680 | 1.0814 | 0.9024 |
| 0.0 | 45.0 | 9900 | 1.0783 | 0.9024 |
| 0.0 | 46.0 | 10120 | 1.0792 | 0.9024 |
| 0.0 | 47.0 | 10340 | 1.0787 | 0.9024 |
| 0.0 | 48.0 | 10560 | 1.0779 | 0.9024 |
| 0.0 | 49.0 | 10780 | 1.0759 | 0.9024 |
| 0.0 | 50.0 | 11000 | 1.0771 | 0.9024 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_tiny_sgd_00001_fold4
|
# hushem_40x_deit_tiny_sgd_00001_fold4
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5700
- Accuracy: 0.1429
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.5195 | 1.0 | 219 | 1.6573 | 0.1429 |
| 1.4299 | 2.0 | 438 | 1.6538 | 0.1429 |
| 1.5002 | 3.0 | 657 | 1.6502 | 0.1429 |
| 1.4923 | 4.0 | 876 | 1.6467 | 0.1429 |
| 1.4507 | 5.0 | 1095 | 1.6433 | 0.1667 |
| 1.4843 | 6.0 | 1314 | 1.6398 | 0.1667 |
| 1.4439 | 7.0 | 1533 | 1.6365 | 0.1667 |
| 1.461 | 8.0 | 1752 | 1.6332 | 0.1667 |
| 1.4438 | 9.0 | 1971 | 1.6299 | 0.1667 |
| 1.421 | 10.0 | 2190 | 1.6268 | 0.1667 |
| 1.3797 | 11.0 | 2409 | 1.6238 | 0.1667 |
| 1.4481 | 12.0 | 2628 | 1.6208 | 0.1429 |
| 1.37 | 13.0 | 2847 | 1.6179 | 0.1429 |
| 1.4257 | 14.0 | 3066 | 1.6150 | 0.1429 |
| 1.3565 | 15.0 | 3285 | 1.6123 | 0.1429 |
| 1.3893 | 16.0 | 3504 | 1.6097 | 0.1429 |
| 1.4087 | 17.0 | 3723 | 1.6071 | 0.1429 |
| 1.3822 | 18.0 | 3942 | 1.6047 | 0.1429 |
| 1.3943 | 19.0 | 4161 | 1.6023 | 0.1429 |
| 1.4156 | 20.0 | 4380 | 1.6000 | 0.1429 |
| 1.397 | 21.0 | 4599 | 1.5977 | 0.1429 |
| 1.3921 | 22.0 | 4818 | 1.5956 | 0.1429 |
| 1.345 | 23.0 | 5037 | 1.5936 | 0.1429 |
| 1.3941 | 24.0 | 5256 | 1.5916 | 0.1429 |
| 1.3428 | 25.0 | 5475 | 1.5898 | 0.1429 |
| 1.3959 | 26.0 | 5694 | 1.5880 | 0.1429 |
| 1.3527 | 27.0 | 5913 | 1.5863 | 0.1429 |
| 1.3622 | 28.0 | 6132 | 1.5847 | 0.1429 |
| 1.3233 | 29.0 | 6351 | 1.5833 | 0.1429 |
| 1.3602 | 30.0 | 6570 | 1.5819 | 0.1429 |
| 1.3369 | 31.0 | 6789 | 1.5805 | 0.1429 |
| 1.3891 | 32.0 | 7008 | 1.5793 | 0.1429 |
| 1.3567 | 33.0 | 7227 | 1.5782 | 0.1429 |
| 1.3341 | 34.0 | 7446 | 1.5771 | 0.1429 |
| 1.3827 | 35.0 | 7665 | 1.5761 | 0.1429 |
| 1.3365 | 36.0 | 7884 | 1.5752 | 0.1429 |
| 1.3545 | 37.0 | 8103 | 1.5744 | 0.1429 |
| 1.3874 | 38.0 | 8322 | 1.5736 | 0.1429 |
| 1.3749 | 39.0 | 8541 | 1.5729 | 0.1429 |
| 1.3026 | 40.0 | 8760 | 1.5724 | 0.1429 |
| 1.3806 | 41.0 | 8979 | 1.5718 | 0.1429 |
| 1.3467 | 42.0 | 9198 | 1.5714 | 0.1429 |
| 1.3584 | 43.0 | 9417 | 1.5710 | 0.1429 |
| 1.3511 | 44.0 | 9636 | 1.5707 | 0.1429 |
| 1.3397 | 45.0 | 9855 | 1.5704 | 0.1429 |
| 1.35 | 46.0 | 10074 | 1.5703 | 0.1429 |
| 1.328 | 47.0 | 10293 | 1.5701 | 0.1429 |
| 1.3922 | 48.0 | 10512 | 1.5700 | 0.1429 |
| 1.3345 | 49.0 | 10731 | 1.5700 | 0.1429 |
| 1.3302 | 50.0 | 10950 | 1.5700 | 0.1429 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_tiny_sgd_00001_fold5
|
# hushem_40x_deit_tiny_sgd_00001_fold5
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5613
- Accuracy: 0.1951
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
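The hyperparameters above map roughly onto a `transformers.TrainingArguments` configuration. The training script itself is not published, so this is only a sketch reconstructed from the list (the `output_dir` name is a placeholder):

```python
from transformers import TrainingArguments

# Sketch only: reconstructed from the hyperparameter list above.
args = TrainingArguments(
    output_dir="hushem_40x_deit_tiny_sgd_00001_fold5",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
)
```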
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.4842 | 1.0 | 220 | 1.6446 | 0.1951 |
| 1.5035 | 2.0 | 440 | 1.6414 | 0.1951 |
| 1.4995 | 3.0 | 660 | 1.6381 | 0.1951 |
| 1.5021 | 4.0 | 880 | 1.6349 | 0.1707 |
| 1.454 | 5.0 | 1100 | 1.6317 | 0.1707 |
| 1.4629 | 6.0 | 1320 | 1.6285 | 0.1707 |
| 1.4161 | 7.0 | 1540 | 1.6253 | 0.1707 |
| 1.4101 | 8.0 | 1760 | 1.6223 | 0.1707 |
| 1.4149 | 9.0 | 1980 | 1.6192 | 0.1707 |
| 1.4443 | 10.0 | 2200 | 1.6162 | 0.1707 |
| 1.4163 | 11.0 | 2420 | 1.6133 | 0.1707 |
| 1.4351 | 12.0 | 2640 | 1.6104 | 0.1707 |
| 1.4104 | 13.0 | 2860 | 1.6076 | 0.1707 |
| 1.3915 | 14.0 | 3080 | 1.6048 | 0.1707 |
| 1.4251 | 15.0 | 3300 | 1.6022 | 0.1707 |
| 1.4091 | 16.0 | 3520 | 1.5996 | 0.1951 |
| 1.384 | 17.0 | 3740 | 1.5971 | 0.1951 |
| 1.3979 | 18.0 | 3960 | 1.5947 | 0.1951 |
| 1.3842 | 19.0 | 4180 | 1.5923 | 0.1951 |
| 1.3555 | 20.0 | 4400 | 1.5900 | 0.1951 |
| 1.3519 | 21.0 | 4620 | 1.5879 | 0.1951 |
| 1.3873 | 22.0 | 4840 | 1.5859 | 0.1951 |
| 1.3791 | 23.0 | 5060 | 1.5839 | 0.1951 |
| 1.3799 | 24.0 | 5280 | 1.5820 | 0.1951 |
| 1.3568 | 25.0 | 5500 | 1.5802 | 0.1951 |
| 1.369 | 26.0 | 5720 | 1.5786 | 0.1951 |
| 1.3732 | 27.0 | 5940 | 1.5770 | 0.1951 |
| 1.3491 | 28.0 | 6160 | 1.5754 | 0.1951 |
| 1.3457 | 29.0 | 6380 | 1.5740 | 0.1951 |
| 1.3169 | 30.0 | 6600 | 1.5726 | 0.1951 |
| 1.3748 | 31.0 | 6820 | 1.5714 | 0.1951 |
| 1.3384 | 32.0 | 7040 | 1.5702 | 0.1951 |
| 1.3281 | 33.0 | 7260 | 1.5691 | 0.1951 |
| 1.3359 | 34.0 | 7480 | 1.5681 | 0.1951 |
| 1.3414 | 35.0 | 7700 | 1.5671 | 0.1951 |
| 1.3339 | 36.0 | 7920 | 1.5662 | 0.1951 |
| 1.3778 | 37.0 | 8140 | 1.5654 | 0.1951 |
| 1.3669 | 38.0 | 8360 | 1.5647 | 0.1951 |
| 1.3509 | 39.0 | 8580 | 1.5641 | 0.1951 |
| 1.3269 | 40.0 | 8800 | 1.5635 | 0.1951 |
| 1.3717 | 41.0 | 9020 | 1.5630 | 0.1951 |
| 1.3455 | 42.0 | 9240 | 1.5626 | 0.1951 |
| 1.3737 | 43.0 | 9460 | 1.5622 | 0.1951 |
| 1.3166 | 44.0 | 9680 | 1.5619 | 0.1951 |
| 1.3504 | 45.0 | 9900 | 1.5617 | 0.1951 |
| 1.3509 | 46.0 | 10120 | 1.5615 | 0.1951 |
| 1.3526 | 47.0 | 10340 | 1.5614 | 0.1951 |
| 1.3222 | 48.0 | 10560 | 1.5613 | 0.1951 |
| 1.3165 | 49.0 | 10780 | 1.5613 | 0.1951 |
| 1.3501 | 50.0 | 11000 | 1.5613 | 0.1951 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_tiny_rms_001_fold1
|
# hushem_40x_deit_tiny_rms_001_fold1
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 7.3853
- Accuracy: 0.5556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1884 | 1.0 | 215 | 1.1880 | 0.4444 |
| 0.7323 | 2.0 | 430 | 1.1545 | 0.5111 |
| 0.6559 | 3.0 | 645 | 3.0305 | 0.3556 |
| 0.5702 | 4.0 | 860 | 1.2302 | 0.5111 |
| 0.5375 | 5.0 | 1075 | 2.2528 | 0.4222 |
| 0.4472 | 6.0 | 1290 | 1.9208 | 0.5111 |
| 0.4382 | 7.0 | 1505 | 1.8095 | 0.4889 |
| 0.3809 | 8.0 | 1720 | 2.0821 | 0.4222 |
| 0.3012 | 9.0 | 1935 | 1.8136 | 0.4 |
| 0.2478 | 10.0 | 2150 | 2.1397 | 0.4889 |
| 0.2029 | 11.0 | 2365 | 1.9762 | 0.5556 |
| 0.1971 | 12.0 | 2580 | 2.3756 | 0.5778 |
| 0.213 | 13.0 | 2795 | 1.6329 | 0.6444 |
| 0.1476 | 14.0 | 3010 | 2.7699 | 0.5333 |
| 0.098 | 15.0 | 3225 | 2.9763 | 0.5556 |
| 0.1482 | 16.0 | 3440 | 3.4825 | 0.5111 |
| 0.1197 | 17.0 | 3655 | 2.4388 | 0.6444 |
| 0.0935 | 18.0 | 3870 | 2.6931 | 0.5778 |
| 0.0698 | 19.0 | 4085 | 3.8147 | 0.5333 |
| 0.1713 | 20.0 | 4300 | 3.1091 | 0.5556 |
| 0.0331 | 21.0 | 4515 | 3.7485 | 0.5778 |
| 0.0687 | 22.0 | 4730 | 3.9845 | 0.5778 |
| 0.0351 | 23.0 | 4945 | 3.2773 | 0.6 |
| 0.0341 | 24.0 | 5160 | 4.2021 | 0.5333 |
| 0.0324 | 25.0 | 5375 | 6.0388 | 0.4889 |
| 0.022 | 26.0 | 5590 | 4.1761 | 0.6 |
| 0.1048 | 27.0 | 5805 | 2.9470 | 0.6 |
| 0.0202 | 28.0 | 6020 | 3.8209 | 0.5778 |
| 0.0085 | 29.0 | 6235 | 4.1758 | 0.5111 |
| 0.0013 | 30.0 | 6450 | 4.2128 | 0.5556 |
| 0.0026 | 31.0 | 6665 | 4.4304 | 0.4667 |
| 0.0003 | 32.0 | 6880 | 4.6210 | 0.5111 |
| 0.015 | 33.0 | 7095 | 3.7643 | 0.5556 |
| 0.0066 | 34.0 | 7310 | 4.8748 | 0.5778 |
| 0.0 | 35.0 | 7525 | 4.7438 | 0.5556 |
| 0.0 | 36.0 | 7740 | 5.0565 | 0.5111 |
| 0.0 | 37.0 | 7955 | 5.3178 | 0.5333 |
| 0.0 | 38.0 | 8170 | 5.6008 | 0.5333 |
| 0.0 | 39.0 | 8385 | 5.8863 | 0.5333 |
| 0.0 | 40.0 | 8600 | 6.1779 | 0.5333 |
| 0.0 | 41.0 | 8815 | 6.4282 | 0.5333 |
| 0.0 | 42.0 | 9030 | 6.6702 | 0.5556 |
| 0.0 | 43.0 | 9245 | 6.8800 | 0.5556 |
| 0.0 | 44.0 | 9460 | 7.0514 | 0.5556 |
| 0.0 | 45.0 | 9675 | 7.1938 | 0.5556 |
| 0.0 | 46.0 | 9890 | 7.2836 | 0.5556 |
| 0.0 | 47.0 | 10105 | 7.3402 | 0.5556 |
| 0.0 | 48.0 | 10320 | 7.3740 | 0.5556 |
| 0.0 | 49.0 | 10535 | 7.3831 | 0.5556 |
| 0.0 | 50.0 | 10750 | 7.3853 | 0.5556 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_tiny_rms_001_fold2
|
# hushem_40x_deit_tiny_rms_001_fold2
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 8.4415
- Accuracy: 0.4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2259 | 1.0 | 215 | 1.1893 | 0.3778 |
| 0.7887 | 2.0 | 430 | 1.7800 | 0.3778 |
| 0.6967 | 3.0 | 645 | 1.8757 | 0.4667 |
| 0.6862 | 4.0 | 860 | 1.9414 | 0.4444 |
| 0.4924 | 5.0 | 1075 | 1.8919 | 0.4444 |
| 0.5712 | 6.0 | 1290 | 1.8316 | 0.5111 |
| 0.4554 | 7.0 | 1505 | 2.6959 | 0.5111 |
| 0.3791 | 8.0 | 1720 | 3.2703 | 0.4889 |
| 0.2815 | 9.0 | 1935 | 2.8206 | 0.4222 |
| 0.3229 | 10.0 | 2150 | 2.1796 | 0.3778 |
| 0.2732 | 11.0 | 2365 | 2.6937 | 0.4222 |
| 0.2161 | 12.0 | 2580 | 2.7085 | 0.4 |
| 0.2247 | 13.0 | 2795 | 2.7907 | 0.5556 |
| 0.1656 | 14.0 | 3010 | 3.5588 | 0.5111 |
| 0.2252 | 15.0 | 3225 | 3.4710 | 0.4667 |
| 0.1912 | 16.0 | 3440 | 4.0799 | 0.4667 |
| 0.2296 | 17.0 | 3655 | 3.3917 | 0.5778 |
| 0.0717 | 18.0 | 3870 | 5.2253 | 0.4222 |
| 0.0776 | 19.0 | 4085 | 4.4474 | 0.4667 |
| 0.0565 | 20.0 | 4300 | 5.1424 | 0.4222 |
| 0.0848 | 21.0 | 4515 | 4.9397 | 0.4667 |
| 0.0607 | 22.0 | 4730 | 4.6748 | 0.4667 |
| 0.0626 | 23.0 | 4945 | 5.0805 | 0.4889 |
| 0.0796 | 24.0 | 5160 | 5.0210 | 0.4444 |
| 0.0786 | 25.0 | 5375 | 5.7741 | 0.4667 |
| 0.011 | 26.0 | 5590 | 4.7102 | 0.5333 |
| 0.0247 | 27.0 | 5805 | 5.9220 | 0.4444 |
| 0.0051 | 28.0 | 6020 | 6.4658 | 0.4222 |
| 0.0246 | 29.0 | 6235 | 5.3041 | 0.5111 |
| 0.025 | 30.0 | 6450 | 5.4166 | 0.5333 |
| 0.0321 | 31.0 | 6665 | 5.7245 | 0.4444 |
| 0.0467 | 32.0 | 6880 | 5.9082 | 0.5111 |
| 0.0354 | 33.0 | 7095 | 5.7199 | 0.4667 |
| 0.0267 | 34.0 | 7310 | 6.9737 | 0.4444 |
| 0.0012 | 35.0 | 7525 | 6.7506 | 0.4222 |
| 0.0014 | 36.0 | 7740 | 7.0113 | 0.4222 |
| 0.0151 | 37.0 | 7955 | 6.8314 | 0.4444 |
| 0.0325 | 38.0 | 8170 | 6.8690 | 0.4444 |
| 0.0 | 39.0 | 8385 | 6.9350 | 0.4667 |
| 0.0006 | 40.0 | 8600 | 7.6894 | 0.4444 |
| 0.0001 | 41.0 | 8815 | 7.8369 | 0.4222 |
| 0.0001 | 42.0 | 9030 | 7.3604 | 0.4 |
| 0.0 | 43.0 | 9245 | 7.8724 | 0.3778 |
| 0.0 | 44.0 | 9460 | 7.8044 | 0.3333 |
| 0.0 | 45.0 | 9675 | 8.3094 | 0.4 |
| 0.001 | 46.0 | 9890 | 8.3688 | 0.4 |
| 0.0018 | 47.0 | 10105 | 8.4135 | 0.4 |
| 0.0 | 48.0 | 10320 | 8.3955 | 0.4 |
| 0.0 | 49.0 | 10535 | 8.4293 | 0.4 |
| 0.0 | 50.0 | 10750 | 8.4415 | 0.4 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_tiny_rms_001_fold3
|
# hushem_40x_deit_tiny_rms_001_fold3
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2392
- Accuracy: 0.7907
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.246 | 1.0 | 217 | 1.4100 | 0.2558 |
| 0.9627 | 2.0 | 434 | 1.1709 | 0.5116 |
| 0.8333 | 3.0 | 651 | 1.1793 | 0.4186 |
| 0.7844 | 4.0 | 868 | 0.8077 | 0.6279 |
| 0.7035 | 5.0 | 1085 | 1.3327 | 0.5116 |
| 0.7976 | 6.0 | 1302 | 0.7941 | 0.6977 |
| 0.7352 | 7.0 | 1519 | 0.9909 | 0.6279 |
| 0.6247 | 8.0 | 1736 | 0.7281 | 0.6977 |
| 0.6212 | 9.0 | 1953 | 1.1902 | 0.6279 |
| 0.6647 | 10.0 | 2170 | 1.0897 | 0.5581 |
| 0.4763 | 11.0 | 2387 | 0.9383 | 0.6047 |
| 0.5076 | 12.0 | 2604 | 0.5861 | 0.7907 |
| 0.4573 | 13.0 | 2821 | 0.9438 | 0.5349 |
| 0.3857 | 14.0 | 3038 | 0.7991 | 0.6977 |
| 0.3919 | 15.0 | 3255 | 0.9377 | 0.6047 |
| 0.352 | 16.0 | 3472 | 1.0859 | 0.5814 |
| 0.3551 | 17.0 | 3689 | 1.2113 | 0.6744 |
| 0.3196 | 18.0 | 3906 | 1.2889 | 0.6279 |
| 0.2405 | 19.0 | 4123 | 0.9915 | 0.6512 |
| 0.2367 | 20.0 | 4340 | 1.6136 | 0.6279 |
| 0.2222 | 21.0 | 4557 | 1.4836 | 0.5814 |
| 0.1901 | 22.0 | 4774 | 1.0739 | 0.7209 |
| 0.173 | 23.0 | 4991 | 1.3956 | 0.6512 |
| 0.1711 | 24.0 | 5208 | 1.7072 | 0.6279 |
| 0.1027 | 25.0 | 5425 | 1.4657 | 0.6512 |
| 0.0952 | 26.0 | 5642 | 1.6372 | 0.6744 |
| 0.1462 | 27.0 | 5859 | 2.2566 | 0.5814 |
| 0.1003 | 28.0 | 6076 | 1.5093 | 0.6512 |
| 0.0764 | 29.0 | 6293 | 1.9318 | 0.6512 |
| 0.1025 | 30.0 | 6510 | 1.9630 | 0.6047 |
| 0.0702 | 31.0 | 6727 | 2.1273 | 0.6512 |
| 0.0313 | 32.0 | 6944 | 1.6171 | 0.7209 |
| 0.0732 | 33.0 | 7161 | 1.2147 | 0.7209 |
| 0.0384 | 34.0 | 7378 | 1.9804 | 0.7209 |
| 0.0177 | 35.0 | 7595 | 1.8221 | 0.6512 |
| 0.0098 | 36.0 | 7812 | 2.4941 | 0.6744 |
| 0.0407 | 37.0 | 8029 | 2.6063 | 0.6512 |
| 0.0798 | 38.0 | 8246 | 3.5391 | 0.5581 |
| 0.0022 | 39.0 | 8463 | 2.7971 | 0.6512 |
| 0.0004 | 40.0 | 8680 | 1.8602 | 0.7209 |
| 0.0547 | 41.0 | 8897 | 2.4427 | 0.6744 |
| 0.0003 | 42.0 | 9114 | 2.1061 | 0.6977 |
| 0.0003 | 43.0 | 9331 | 2.0283 | 0.5814 |
| 0.0017 | 44.0 | 9548 | 2.1926 | 0.6744 |
| 0.0001 | 45.0 | 9765 | 1.9704 | 0.7674 |
| 0.0 | 46.0 | 9982 | 2.2645 | 0.7442 |
| 0.0001 | 47.0 | 10199 | 2.3408 | 0.7674 |
| 0.0 | 48.0 | 10416 | 2.2312 | 0.7674 |
| 0.0 | 49.0 | 10633 | 2.2302 | 0.7907 |
| 0.0 | 50.0 | 10850 | 2.2392 | 0.7907 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_tiny_rms_001_fold4
|
# hushem_40x_deit_tiny_rms_001_fold4
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6233
- Accuracy: 0.6905
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1473 | 1.0 | 219 | 0.8756 | 0.6190 |
| 0.9719 | 2.0 | 438 | 0.9893 | 0.5714 |
| 0.7611 | 3.0 | 657 | 0.7217 | 0.7619 |
| 0.6995 | 4.0 | 876 | 0.7516 | 0.6429 |
| 0.6928 | 5.0 | 1095 | 1.0447 | 0.5952 |
| 0.6114 | 6.0 | 1314 | 0.9410 | 0.6667 |
| 0.4906 | 7.0 | 1533 | 1.4457 | 0.5238 |
| 0.4956 | 8.0 | 1752 | 1.1229 | 0.6429 |
| 0.3708 | 9.0 | 1971 | 0.5610 | 0.7381 |
| 0.3213 | 10.0 | 2190 | 1.1632 | 0.6667 |
| 0.279 | 11.0 | 2409 | 0.8853 | 0.7381 |
| 0.26 | 12.0 | 2628 | 1.0316 | 0.6905 |
| 0.2004 | 13.0 | 2847 | 0.8001 | 0.7619 |
| 0.2396 | 14.0 | 3066 | 1.0495 | 0.7381 |
| 0.1937 | 15.0 | 3285 | 1.2736 | 0.7381 |
| 0.1386 | 16.0 | 3504 | 0.9949 | 0.7381 |
| 0.1459 | 17.0 | 3723 | 1.2302 | 0.6905 |
| 0.0754 | 18.0 | 3942 | 1.9238 | 0.6667 |
| 0.0996 | 19.0 | 4161 | 1.4396 | 0.6905 |
| 0.0438 | 20.0 | 4380 | 1.1891 | 0.7143 |
| 0.1349 | 21.0 | 4599 | 1.4228 | 0.7381 |
| 0.0058 | 22.0 | 4818 | 1.2340 | 0.7619 |
| 0.0345 | 23.0 | 5037 | 1.1630 | 0.6667 |
| 0.0461 | 24.0 | 5256 | 2.1318 | 0.6429 |
| 0.0595 | 25.0 | 5475 | 1.7499 | 0.6905 |
| 0.004 | 26.0 | 5694 | 1.6488 | 0.6905 |
| 0.0014 | 27.0 | 5913 | 1.8134 | 0.6905 |
| 0.0335 | 28.0 | 6132 | 2.3351 | 0.6905 |
| 0.0071 | 29.0 | 6351 | 2.4170 | 0.5714 |
| 0.0006 | 30.0 | 6570 | 1.5965 | 0.7381 |
| 0.0014 | 31.0 | 6789 | 2.0937 | 0.7381 |
| 0.0419 | 32.0 | 7008 | 1.8845 | 0.6905 |
| 0.042 | 33.0 | 7227 | 3.6234 | 0.6190 |
| 0.0018 | 34.0 | 7446 | 1.5177 | 0.6905 |
| 0.027 | 35.0 | 7665 | 1.3824 | 0.7857 |
| 0.0005 | 36.0 | 7884 | 2.3915 | 0.7619 |
| 0.0226 | 37.0 | 8103 | 1.6001 | 0.7381 |
| 0.001 | 38.0 | 8322 | 2.3141 | 0.6905 |
| 0.0 | 39.0 | 8541 | 2.5460 | 0.7143 |
| 0.0 | 40.0 | 8760 | 2.6724 | 0.6905 |
| 0.0 | 41.0 | 8979 | 2.7005 | 0.6905 |
| 0.0 | 42.0 | 9198 | 2.8171 | 0.7143 |
| 0.0 | 43.0 | 9417 | 2.9876 | 0.7143 |
| 0.0 | 44.0 | 9636 | 3.1125 | 0.7143 |
| 0.0 | 45.0 | 9855 | 3.2479 | 0.7143 |
| 0.0 | 46.0 | 10074 | 3.4344 | 0.7143 |
| 0.0 | 47.0 | 10293 | 3.4573 | 0.7143 |
| 0.0 | 48.0 | 10512 | 3.5752 | 0.6905 |
| 0.0 | 49.0 | 10731 | 3.5910 | 0.6905 |
| 0.0 | 50.0 | 10950 | 3.6233 | 0.6905 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_tiny_rms_001_fold5
|
# hushem_40x_deit_tiny_rms_001_fold5
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1758
- Accuracy: 0.7317
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1377 | 1.0 | 220 | 1.4706 | 0.3171 |
| 0.8838 | 2.0 | 440 | 1.1169 | 0.4634 |
| 0.7297 | 3.0 | 660 | 0.9759 | 0.6098 |
| 0.6075 | 4.0 | 880 | 0.6205 | 0.7317 |
| 0.4922 | 5.0 | 1100 | 1.0787 | 0.6098 |
| 0.4263 | 6.0 | 1320 | 0.7206 | 0.7561 |
| 0.1851 | 7.0 | 1540 | 0.8644 | 0.7317 |
| 0.1792 | 8.0 | 1760 | 0.9149 | 0.7073 |
| 0.1954 | 9.0 | 1980 | 0.9202 | 0.7561 |
| 0.1313 | 10.0 | 2200 | 0.8113 | 0.8049 |
| 0.0886 | 11.0 | 2420 | 1.2174 | 0.7073 |
| 0.0848 | 12.0 | 2640 | 1.1156 | 0.7805 |
| 0.1044 | 13.0 | 2860 | 1.2101 | 0.7073 |
| 0.1348 | 14.0 | 3080 | 1.1458 | 0.7317 |
| 0.0626 | 15.0 | 3300 | 1.0690 | 0.7317 |
| 0.0309 | 16.0 | 3520 | 1.3430 | 0.7073 |
| 0.0473 | 17.0 | 3740 | 1.3747 | 0.7317 |
| 0.1091 | 18.0 | 3960 | 1.0321 | 0.7805 |
| 0.1069 | 19.0 | 4180 | 1.7755 | 0.7073 |
| 0.0244 | 20.0 | 4400 | 1.5213 | 0.7317 |
| 0.0331 | 21.0 | 4620 | 1.3935 | 0.7805 |
| 0.1167 | 22.0 | 4840 | 1.2913 | 0.7561 |
| 0.0052 | 23.0 | 5060 | 0.9792 | 0.8049 |
| 0.0198 | 24.0 | 5280 | 0.8211 | 0.8780 |
| 0.0358 | 25.0 | 5500 | 1.1277 | 0.8049 |
| 0.0008 | 26.0 | 5720 | 1.4574 | 0.7073 |
| 0.0001 | 27.0 | 5940 | 1.0665 | 0.7561 |
| 0.0001 | 28.0 | 6160 | 1.1272 | 0.7805 |
| 0.0006 | 29.0 | 6380 | 1.2564 | 0.7073 |
| 0.0011 | 30.0 | 6600 | 1.4453 | 0.7073 |
| 0.0002 | 31.0 | 6820 | 1.9210 | 0.7805 |
| 0.0108 | 32.0 | 7040 | 2.0987 | 0.7317 |
| 0.0 | 33.0 | 7260 | 2.5706 | 0.7317 |
| 0.0 | 34.0 | 7480 | 2.5154 | 0.7317 |
| 0.0 | 35.0 | 7700 | 2.7600 | 0.7317 |
| 0.0 | 36.0 | 7920 | 2.9535 | 0.7317 |
| 0.0 | 37.0 | 8140 | 3.1263 | 0.7317 |
| 0.0 | 38.0 | 8360 | 3.3175 | 0.7317 |
| 0.0 | 39.0 | 8580 | 3.4937 | 0.7317 |
| 0.0 | 40.0 | 8800 | 3.6752 | 0.7317 |
| 0.0 | 41.0 | 9020 | 3.8325 | 0.7317 |
| 0.0 | 42.0 | 9240 | 3.9616 | 0.7317 |
| 0.0 | 43.0 | 9460 | 4.0429 | 0.7317 |
| 0.0 | 44.0 | 9680 | 4.1055 | 0.7317 |
| 0.0 | 45.0 | 9900 | 4.1340 | 0.7317 |
| 0.0 | 46.0 | 10120 | 4.1489 | 0.7317 |
| 0.0 | 47.0 | 10340 | 4.1566 | 0.7317 |
| 0.0 | 48.0 | 10560 | 4.1688 | 0.7317 |
| 0.0 | 49.0 | 10780 | 4.1743 | 0.7317 |
| 0.0 | 50.0 | 11000 | 4.1758 | 0.7317 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
tiennguyenbnbk/teacher-status-van-tiny-256
|
# teacher-status-van-tiny-256
This model is a fine-tuned version of [Visual-Attention-Network/van-tiny](https://huggingface.co/Visual-Attention-Network/van-tiny) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0988
- Accuracy: 0.9831
- F1 Score: 0.9841
- Recall: 0.9789
- Precision: 0.9894
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
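Note how the listed batch sizes relate: the total train batch size is the per-device batch size multiplied by the gradient accumulation steps.

```python
# Effective (total) train batch size as reported in the hyperparameter list.
per_device_train_batch_size = 32
gradient_accumulation_steps = 4
total_train_batch_size = per_device_train_batch_size * gradient_accumulation_steps  # 128
```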
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Score | Recall | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:------:|:---------:|
| 0.6928 | 0.96 | 12 | 0.6904 | 0.6685 | 0.7631 | 1.0 | 0.6169 |
| 0.6893 | 2.0 | 25 | 0.6683 | 0.5393 | 0.6985 | 1.0 | 0.5367 |
| 0.6726 | 2.96 | 37 | 0.5704 | 0.5843 | 0.7197 | 1.0 | 0.5621 |
| 0.5295 | 4.0 | 50 | 0.4148 | 0.9213 | 0.9263 | 0.9263 | 0.9263 |
| 0.4745 | 4.96 | 62 | 0.3108 | 0.9382 | 0.9430 | 0.9579 | 0.9286 |
| 0.4206 | 6.0 | 75 | 0.2301 | 0.9438 | 0.9474 | 0.9474 | 0.9474 |
| 0.3898 | 6.96 | 87 | 0.1820 | 0.9494 | 0.9519 | 0.9368 | 0.9674 |
| 0.3153 | 8.0 | 100 | 0.1545 | 0.9494 | 0.9538 | 0.9789 | 0.93 |
| 0.3077 | 8.96 | 112 | 0.1521 | 0.9607 | 0.9622 | 0.9368 | 0.9889 |
| 0.3048 | 10.0 | 125 | 0.1331 | 0.9607 | 0.9626 | 0.9474 | 0.9783 |
| 0.3004 | 10.96 | 137 | 0.1314 | 0.9607 | 0.9634 | 0.9684 | 0.9583 |
| 0.2839 | 12.0 | 150 | 0.1272 | 0.9607 | 0.9622 | 0.9368 | 0.9889 |
| 0.286 | 12.96 | 162 | 0.1189 | 0.9607 | 0.9622 | 0.9368 | 0.9889 |
| 0.2473 | 14.0 | 175 | 0.0977 | 0.9719 | 0.9733 | 0.9579 | 0.9891 |
| 0.2774 | 14.96 | 187 | 0.0988 | 0.9831 | 0.9841 | 0.9789 | 0.9894 |
| 0.2541 | 16.0 | 200 | 0.0969 | 0.9719 | 0.9733 | 0.9579 | 0.9891 |
| 0.2383 | 16.96 | 212 | 0.1042 | 0.9719 | 0.9733 | 0.9579 | 0.9891 |
| 0.2552 | 18.0 | 225 | 0.1081 | 0.9719 | 0.9733 | 0.9579 | 0.9891 |
| 0.2223 | 18.96 | 237 | 0.1150 | 0.9663 | 0.9681 | 0.9579 | 0.9785 |
| 0.2561 | 20.0 | 250 | 0.1234 | 0.9551 | 0.9574 | 0.9474 | 0.9677 |
| 0.2462 | 20.96 | 262 | 0.1178 | 0.9607 | 0.9630 | 0.9579 | 0.9681 |
| 0.2294 | 22.0 | 275 | 0.1262 | 0.9382 | 0.9430 | 0.9579 | 0.9286 |
| 0.2296 | 22.96 | 287 | 0.1290 | 0.9438 | 0.9479 | 0.9579 | 0.9381 |
| 0.2224 | 24.0 | 300 | 0.1153 | 0.9494 | 0.9529 | 0.9579 | 0.9479 |
| 0.2205 | 24.96 | 312 | 0.1150 | 0.9494 | 0.9529 | 0.9579 | 0.9479 |
| 0.2169 | 26.0 | 325 | 0.1121 | 0.9551 | 0.9574 | 0.9474 | 0.9677 |
| 0.2212 | 26.96 | 337 | 0.1145 | 0.9494 | 0.9529 | 0.9579 | 0.9479 |
| 0.2188 | 28.0 | 350 | 0.1131 | 0.9494 | 0.9524 | 0.9474 | 0.9574 |
| 0.2015 | 28.8 | 360 | 0.1130 | 0.9494 | 0.9524 | 0.9474 | 0.9574 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"abnormal",
"normal"
] |
yuanhuaisen/autotrain-dnfus-10aod
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.337873637676239
f1_macro: 0.9183673469387754
f1_micro: 0.9367088607594937
f1_weighted: 0.9354172048566262
precision_macro: 0.9382716049382717
precision_micro: 0.9367088607594937
precision_weighted: 0.9484294421003282
recall_macro: 0.9166666666666666
recall_micro: 0.9367088607594937
recall_weighted: 0.9367088607594937
accuracy: 0.9367088607594937
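In these single-label multiclass reports, `f1_micro`, `precision_micro`, `recall_micro`, and `accuracy` all coincide. That is expected: every misclassification counts as one false positive (for the predicted class) and one false negative (for the true class), so the micro-averaged precision and recall both equal accuracy. A small sketch with illustrative labels:

```python
def micro_f1(y_true, y_pred):
    # Single-label multiclass: TP_total = number correct, and each error is
    # simultaneously one FP and one FN, so micro-P == micro-R == accuracy.
    tp = sum(t == p for t, p in zip(y_true, y_pred))
    fp = sum(t != p for t, p in zip(y_true, y_pred))
    fn = fp
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative labels (not the actual validation data).
y_true = [0, 1, 2, 2, 1]
y_pred = [0, 2, 2, 2, 1]
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```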
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body",
"13has_nothing_to_do_with_11_and_12_above"
] |
saileshaman/vit-base-patch16-224-in21k-finetuned-cxr
|
# vit-base-patch16-224-in21k-finetuned-cxr
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1758
- Accuracy: 0.9357
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2994 | 0.99 | 85 | 0.3337 | 0.8854 |
| 0.2806 | 2.0 | 171 | 0.2670 | 0.9101 |
| 0.2519 | 2.99 | 256 | 0.2495 | 0.9134 |
| 0.2456 | 4.0 | 342 | 0.2450 | 0.9143 |
| 0.2094 | 4.99 | 427 | 0.2105 | 0.9258 |
| 0.1808 | 6.0 | 513 | 0.1984 | 0.9308 |
| 0.1959 | 6.99 | 598 | 0.2022 | 0.9258 |
| 0.179 | 8.0 | 684 | 0.1980 | 0.9299 |
| 0.1915 | 8.99 | 769 | 0.1889 | 0.9308 |
| 0.1735 | 10.0 | 855 | 0.1931 | 0.9324 |
| 0.174 | 10.99 | 940 | 0.1872 | 0.9324 |
| 0.167 | 12.0 | 1026 | 0.1758 | 0.9357 |
| 0.1408 | 12.99 | 1111 | 0.1890 | 0.9349 |
| 0.1442 | 14.0 | 1197 | 0.1849 | 0.9324 |
| 0.1661 | 14.91 | 1275 | 0.1879 | 0.9266 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
[
"0-normal",
"1-pneumonia",
"2-covid19"
] |
sunhaozhepy/tropical_cyclone_classify_2022
|
# tropical_cyclone_classify_2022
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0074
- Accuracy: 0.6667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9765 | 1.0 | 133 | 1.1528 | 0.6042 |
| 0.5649 | 2.0 | 266 | 0.9589 | 0.625 |
| 0.2572 | 3.0 | 399 | 1.0074 | 0.6667 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
[
"td",
"ts",
"sts",
"ty",
"sty",
"superty"
] |
fftx0907/autotrain-a7953-y2qmi
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: nan
f1_macro: 0.031825795644891124
f1_micro: 0.10555555555555556
f1_weighted: 0.020156337241764376
precision_macro: 0.017592592592592594
precision_micro: 0.10555555555555556
precision_weighted: 0.011141975308641975
recall_macro: 0.16666666666666666
recall_micro: 0.10555555555555556
recall_weighted: 0.10555555555555556
accuracy: 0.10555555555555556
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body",
"13has_nothing_to_do_with_11_and_12_above",
"21covered_with_a_quilt_only_the_head_and_shoulders_exposed",
"22covered_with_a_quilt_exposed_head_and_shoulders_except_for_other_organs",
"23has_nothing_to_do_with_21_and_22_above"
] |
yuanhuaisen/autotrain-icqp0-g21fb
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: nan
f1_macro: 0.11805555555555557
f1_micro: 0.21518987341772153
f1_weighted: 0.07621308016877638
precision_macro: 0.07172995780590717
precision_micro: 0.21518987341772153
precision_weighted: 0.04630668162153501
recall_macro: 0.3333333333333333
recall_micro: 0.21518987341772153
recall_weighted: 0.21518987341772153
accuracy: 0.21518987341772153
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body",
"13has_nothing_to_do_with_11_and_12_above"
] |
hkivancoral/hushem_40x_deit_tiny_rms_0001_fold1
|
# hushem_40x_deit_tiny_rms_0001_fold1
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8879
- Accuracy: 0.7778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0915 | 1.0 | 215 | 0.6993 | 0.7556 |
| 0.0445 | 2.0 | 430 | 1.0894 | 0.8 |
| 0.0994 | 3.0 | 645 | 1.3749 | 0.7333 |
| 0.0599 | 4.0 | 860 | 1.2197 | 0.8222 |
| 0.123 | 5.0 | 1075 | 1.0909 | 0.7333 |
| 0.0208 | 6.0 | 1290 | 1.5815 | 0.7778 |
| 0.0152 | 7.0 | 1505 | 1.3429 | 0.7556 |
| 0.1165 | 8.0 | 1720 | 1.4059 | 0.8222 |
| 0.0038 | 9.0 | 1935 | 2.1318 | 0.6889 |
| 0.0001 | 10.0 | 2150 | 1.8677 | 0.7556 |
| 0.0 | 11.0 | 2365 | 1.9514 | 0.7556 |
| 0.0 | 12.0 | 2580 | 2.1201 | 0.7556 |
| 0.0 | 13.0 | 2795 | 2.2375 | 0.7556 |
| 0.0 | 14.0 | 3010 | 2.3713 | 0.7556 |
| 0.0 | 15.0 | 3225 | 2.4919 | 0.7556 |
| 0.0 | 16.0 | 3440 | 2.5759 | 0.7556 |
| 0.0 | 17.0 | 3655 | 2.6720 | 0.7556 |
| 0.0 | 18.0 | 3870 | 2.7211 | 0.7556 |
| 0.0 | 19.0 | 4085 | 2.7243 | 0.7778 |
| 0.0 | 20.0 | 4300 | 2.7162 | 0.7778 |
| 0.0 | 21.0 | 4515 | 2.7396 | 0.7778 |
| 0.0 | 22.0 | 4730 | 2.8636 | 0.7556 |
| 0.0 | 23.0 | 4945 | 2.7180 | 0.7778 |
| 0.0 | 24.0 | 5160 | 2.5977 | 0.7778 |
| 0.0 | 25.0 | 5375 | 2.4267 | 0.7778 |
| 0.0 | 26.0 | 5590 | 2.3791 | 0.7778 |
| 0.0 | 27.0 | 5805 | 2.3560 | 0.7778 |
| 0.0 | 28.0 | 6020 | 2.2693 | 0.7778 |
| 0.0 | 29.0 | 6235 | 2.3818 | 0.7778 |
| 0.0 | 30.0 | 6450 | 2.1093 | 0.7778 |
| 0.0 | 31.0 | 6665 | 2.1403 | 0.7778 |
| 0.0 | 32.0 | 6880 | 2.0697 | 0.7778 |
| 0.0 | 33.0 | 7095 | 2.1077 | 0.7778 |
| 0.0 | 34.0 | 7310 | 1.9838 | 0.7778 |
| 0.0 | 35.0 | 7525 | 2.0013 | 0.7778 |
| 0.0 | 36.0 | 7740 | 1.9512 | 0.7778 |
| 0.0 | 37.0 | 7955 | 1.9785 | 0.7778 |
| 0.0 | 38.0 | 8170 | 1.9833 | 0.7778 |
| 0.0 | 39.0 | 8385 | 1.9247 | 0.7778 |
| 0.0 | 40.0 | 8600 | 1.9583 | 0.7778 |
| 0.0 | 41.0 | 8815 | 1.9257 | 0.7778 |
| 0.0 | 42.0 | 9030 | 1.9718 | 0.7778 |
| 0.0 | 43.0 | 9245 | 1.9220 | 0.7778 |
| 0.0 | 44.0 | 9460 | 1.9083 | 0.7778 |
| 0.0 | 45.0 | 9675 | 1.9217 | 0.7778 |
| 0.0 | 46.0 | 9890 | 1.8800 | 0.7778 |
| 0.0 | 47.0 | 10105 | 1.8880 | 0.7778 |
| 0.0 | 48.0 | 10320 | 1.8890 | 0.7778 |
| 0.0 | 49.0 | 10535 | 1.8815 | 0.7778 |
| 0.0 | 50.0 | 10750 | 1.8879 | 0.7778 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
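The scheduler settings above (`lr_scheduler_type: linear` with `lr_scheduler_warmup_ratio: 0.1`) can be sketched in plain Python. This is an illustrative reimplementation of the schedule shape, not the Trainer's actual code; the step counts are taken from the results table above.

```python
def linear_warmup_lr(step, total_steps, base_lr=1e-4, warmup_ratio=0.1):
    """Linear warmup to base_lr, then linear decay to 0 (mirrors the
    lr_scheduler_type: linear + warmup_ratio: 0.1 settings above)."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * (total_steps - step) / max(1, total_steps - warmup_steps)

total = 10750  # 50 epochs x 215 steps per epoch, as in the results table
print(linear_warmup_lr(0, total))      # 0.0 at the first step
print(linear_warmup_lr(1075, total))   # peak learning rate right after warmup
print(linear_warmup_lr(total, total))  # decays back to 0.0 by the final step
```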
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
clewiston/autotrain-98vyi-233rf
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
- loss: 1.347914457321167
- f1_macro: 0.196969696969697
- f1_micro: 0.65
- f1_weighted: 0.5121212121212122
- precision_macro: 0.1625
- precision_micro: 0.65
- precision_weighted: 0.42250000000000004
- recall_macro: 0.25
- recall_micro: 0.65
- recall_weighted: 0.65
- accuracy: 0.65
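The macro/micro gap above (f1_macro ≈ 0.20 vs f1_micro 0.65) is the signature of a classifier collapsing to one class. A small scikit-learn sketch with toy labels (not the card's actual data, though the class counts are chosen to be consistent with the numbers above) shows how the two averages diverge when only the majority class is ever predicted.

```python
from sklearn.metrics import f1_score

# Toy labels (not the card's data): the classifier predicts the majority
# class "whole" for everything, so micro-F1 equals accuracy while macro-F1
# is dragged down by the three classes it never predicts.
y_true = ["whole"] * 13 + ["partial"] * 3 + ["none"] * 2 + ["multiple"] * 2
y_pred = ["whole"] * 20

print(f1_score(y_true, y_pred, average="micro"))  # 0.65, same as accuracy
print(f1_score(y_true, y_pred, average="macro"))  # ~0.197, matching f1_macro above
```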
|
[
"multiple",
"none",
"partial",
"whole"
] |
yuanhuaisen/autotrain-48ada-ps8zf
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
- loss: nan
- f1_macro: 0.11805555555555557
- f1_micro: 0.21518987341772153
- f1_weighted: 0.07621308016877638
- precision_macro: 0.07172995780590717
- precision_micro: 0.21518987341772153
- precision_weighted: 0.04630668162153501
- recall_macro: 0.3333333333333333
- recall_micro: 0.21518987341772153
- recall_weighted: 0.21518987341772153
- accuracy: 0.21518987341772153
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body",
"13has_nothing_to_do_with_11_and_12_above"
] |
hkivancoral/hushem_40x_deit_tiny_rms_0001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_tiny_rms_0001_fold2
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3313
- Accuracy: 0.6667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1374 | 1.0 | 215 | 1.3718 | 0.7111 |
| 0.0248 | 2.0 | 430 | 1.5033 | 0.7778 |
| 0.0726 | 3.0 | 645 | 1.7295 | 0.7556 |
| 0.0372 | 4.0 | 860 | 1.5869 | 0.7778 |
| 0.0411 | 5.0 | 1075 | 1.1809 | 0.7778 |
| 0.0235 | 6.0 | 1290 | 2.1699 | 0.6889 |
| 0.0004 | 7.0 | 1505 | 1.8564 | 0.7333 |
| 0.0351 | 8.0 | 1720 | 2.6913 | 0.5556 |
| 0.0436 | 9.0 | 1935 | 1.7899 | 0.6889 |
| 0.0311 | 10.0 | 2150 | 2.2763 | 0.7333 |
| 0.0318 | 11.0 | 2365 | 2.1440 | 0.7111 |
| 0.0601 | 12.0 | 2580 | 1.3738 | 0.8 |
| 0.0036 | 13.0 | 2795 | 1.9492 | 0.7556 |
| 0.0024 | 14.0 | 3010 | 2.0010 | 0.7778 |
| 0.0119 | 15.0 | 3225 | 2.9477 | 0.7111 |
| 0.0001 | 16.0 | 3440 | 2.0050 | 0.8222 |
| 0.0 | 17.0 | 3655 | 2.2043 | 0.7778 |
| 0.0045 | 18.0 | 3870 | 2.9253 | 0.6889 |
| 0.0002 | 19.0 | 4085 | 2.4235 | 0.7333 |
| 0.0 | 20.0 | 4300 | 3.4852 | 0.6 |
| 0.0276 | 21.0 | 4515 | 3.0762 | 0.6667 |
| 0.0098 | 22.0 | 4730 | 3.3340 | 0.6222 |
| 0.0328 | 23.0 | 4945 | 1.8687 | 0.8 |
| 0.0 | 24.0 | 5160 | 1.6806 | 0.8 |
| 0.0 | 25.0 | 5375 | 2.3408 | 0.7333 |
| 0.0208 | 26.0 | 5590 | 2.3251 | 0.7778 |
| 0.0 | 27.0 | 5805 | 2.8347 | 0.7111 |
| 0.0 | 28.0 | 6020 | 2.2742 | 0.7333 |
| 0.0 | 29.0 | 6235 | 2.4267 | 0.7111 |
| 0.0 | 30.0 | 6450 | 2.5951 | 0.7111 |
| 0.0 | 31.0 | 6665 | 2.7772 | 0.6889 |
| 0.0 | 32.0 | 6880 | 2.9769 | 0.6889 |
| 0.0 | 33.0 | 7095 | 3.1694 | 0.6889 |
| 0.0 | 34.0 | 7310 | 3.3770 | 0.6889 |
| 0.0 | 35.0 | 7525 | 3.5369 | 0.6889 |
| 0.0 | 36.0 | 7740 | 3.6892 | 0.7111 |
| 0.0 | 37.0 | 7955 | 3.8241 | 0.6889 |
| 0.0 | 38.0 | 8170 | 3.9473 | 0.6889 |
| 0.0 | 39.0 | 8385 | 4.0424 | 0.6889 |
| 0.0 | 40.0 | 8600 | 4.1157 | 0.6889 |
| 0.0 | 41.0 | 8815 | 4.1738 | 0.6667 |
| 0.0 | 42.0 | 9030 | 4.2155 | 0.6667 |
| 0.0 | 43.0 | 9245 | 4.2470 | 0.6667 |
| 0.0 | 44.0 | 9460 | 4.2729 | 0.6667 |
| 0.0 | 45.0 | 9675 | 4.2929 | 0.6667 |
| 0.0 | 46.0 | 9890 | 4.3080 | 0.6667 |
| 0.0 | 47.0 | 10105 | 4.3190 | 0.6667 |
| 0.0 | 48.0 | 10320 | 4.3263 | 0.6667 |
| 0.0 | 49.0 | 10535 | 4.3304 | 0.6667 |
| 0.0 | 50.0 | 10750 | 4.3313 | 0.6667 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
yuanhuaisen/autotrain-wyc4a-xh08c
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
- loss: nan
- f1_macro: 0.11805555555555557
- f1_micro: 0.21518987341772153
- f1_weighted: 0.07621308016877638
- precision_macro: 0.07172995780590717
- precision_micro: 0.21518987341772153
- precision_weighted: 0.04630668162153501
- recall_macro: 0.3333333333333333
- recall_micro: 0.21518987341772153
- recall_weighted: 0.21518987341772153
- accuracy: 0.21518987341772153
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body",
"13has_nothing_to_do_with_11_and_12_above"
] |
hkivancoral/hushem_40x_deit_tiny_rms_0001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_tiny_rms_0001_fold3
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8398
- Accuracy: 0.8140
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1282 | 1.0 | 217 | 0.8975 | 0.8372 |
| 0.0209 | 2.0 | 434 | 0.8245 | 0.7674 |
| 0.0218 | 3.0 | 651 | 0.3670 | 0.9302 |
| 0.0205 | 4.0 | 868 | 1.2586 | 0.8372 |
| 0.0008 | 5.0 | 1085 | 0.8797 | 0.7907 |
| 0.0606 | 6.0 | 1302 | 1.1984 | 0.8605 |
| 0.1313 | 7.0 | 1519 | 1.3827 | 0.8372 |
| 0.0283 | 8.0 | 1736 | 0.8068 | 0.8605 |
| 0.0037 | 9.0 | 1953 | 1.0055 | 0.8837 |
| 0.0058 | 10.0 | 2170 | 1.7904 | 0.8140 |
| 0.0074 | 11.0 | 2387 | 1.3591 | 0.8140 |
| 0.0197 | 12.0 | 2604 | 1.3843 | 0.8605 |
| 0.0 | 13.0 | 2821 | 1.1075 | 0.8837 |
| 0.0155 | 14.0 | 3038 | 1.0442 | 0.8837 |
| 0.0002 | 15.0 | 3255 | 1.5088 | 0.8605 |
| 0.0288 | 16.0 | 3472 | 0.6806 | 0.8605 |
| 0.0057 | 17.0 | 3689 | 0.9450 | 0.8837 |
| 0.0 | 18.0 | 3906 | 1.1935 | 0.8372 |
| 0.0 | 19.0 | 4123 | 1.2605 | 0.8605 |
| 0.0 | 20.0 | 4340 | 1.0286 | 0.8140 |
| 0.0001 | 21.0 | 4557 | 0.9245 | 0.8605 |
| 0.0039 | 22.0 | 4774 | 1.3627 | 0.8372 |
| 0.0 | 23.0 | 4991 | 1.4994 | 0.8605 |
| 0.0001 | 24.0 | 5208 | 1.2134 | 0.7907 |
| 0.0001 | 25.0 | 5425 | 1.0301 | 0.8372 |
| 0.0 | 26.0 | 5642 | 1.0457 | 0.8837 |
| 0.0 | 27.0 | 5859 | 1.2728 | 0.8140 |
| 0.0 | 28.0 | 6076 | 1.0821 | 0.8837 |
| 0.0 | 29.0 | 6293 | 1.1243 | 0.8837 |
| 0.0 | 30.0 | 6510 | 1.1728 | 0.8837 |
| 0.0 | 31.0 | 6727 | 1.2386 | 0.8605 |
| 0.0 | 32.0 | 6944 | 1.3089 | 0.8605 |
| 0.0 | 33.0 | 7161 | 1.3713 | 0.8605 |
| 0.0 | 34.0 | 7378 | 1.4458 | 0.8605 |
| 0.0 | 35.0 | 7595 | 1.5096 | 0.8605 |
| 0.0 | 36.0 | 7812 | 1.5439 | 0.8605 |
| 0.0 | 37.0 | 8029 | 1.5992 | 0.8605 |
| 0.0 | 38.0 | 8246 | 1.6228 | 0.8605 |
| 0.0 | 39.0 | 8463 | 1.6686 | 0.8372 |
| 0.0 | 40.0 | 8680 | 1.7133 | 0.8372 |
| 0.0 | 41.0 | 8897 | 1.7502 | 0.8372 |
| 0.0 | 42.0 | 9114 | 1.7750 | 0.8372 |
| 0.0 | 43.0 | 9331 | 1.7947 | 0.8372 |
| 0.0 | 44.0 | 9548 | 1.8093 | 0.8372 |
| 0.0 | 45.0 | 9765 | 1.8201 | 0.8372 |
| 0.0 | 46.0 | 9982 | 1.8280 | 0.8372 |
| 0.0 | 47.0 | 10199 | 1.8337 | 0.8372 |
| 0.0 | 48.0 | 10416 | 1.8373 | 0.8372 |
| 0.0 | 49.0 | 10633 | 1.8394 | 0.8372 |
| 0.0 | 50.0 | 10850 | 1.8398 | 0.8140 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
yuanhuaisen/autotrain-14tj1-1lmxo
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
- loss: nan
- f1_macro: 0.11805555555555557
- f1_micro: 0.21518987341772153
- f1_weighted: 0.07621308016877638
- precision_macro: 0.07172995780590717
- precision_micro: 0.21518987341772153
- precision_weighted: 0.04630668162153501
- recall_macro: 0.3333333333333333
- recall_micro: 0.21518987341772153
- recall_weighted: 0.21518987341772153
- accuracy: 0.21518987341772153
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body",
"13has_nothing_to_do_with_11_and_12_above"
] |
hkivancoral/hushem_40x_deit_tiny_rms_0001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_tiny_rms_0001_fold4
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4037
- Accuracy: 0.9524
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1838 | 1.0 | 219 | 0.4926 | 0.8571 |
| 0.0446 | 2.0 | 438 | 0.2754 | 0.9286 |
| 0.0295 | 3.0 | 657 | 0.9751 | 0.8810 |
| 0.0096 | 4.0 | 876 | 0.1123 | 0.9762 |
| 0.0055 | 5.0 | 1095 | 0.3687 | 0.9048 |
| 0.0033 | 6.0 | 1314 | 0.3076 | 0.9524 |
| 0.0283 | 7.0 | 1533 | 0.8089 | 0.8571 |
| 0.0044 | 8.0 | 1752 | 0.2435 | 0.9286 |
| 0.0018 | 9.0 | 1971 | 0.7038 | 0.8571 |
| 0.0191 | 10.0 | 2190 | 0.5242 | 0.9048 |
| 0.0001 | 11.0 | 2409 | 0.8130 | 0.9286 |
| 0.0007 | 12.0 | 2628 | 0.6030 | 0.9048 |
| 0.0189 | 13.0 | 2847 | 0.5406 | 0.9048 |
| 0.0002 | 14.0 | 3066 | 0.6774 | 0.8571 |
| 0.0018 | 15.0 | 3285 | 0.6982 | 0.9286 |
| 0.0001 | 16.0 | 3504 | 0.3877 | 0.9524 |
| 0.0008 | 17.0 | 3723 | 0.6996 | 0.8810 |
| 0.0 | 18.0 | 3942 | 0.5507 | 0.9286 |
| 0.0 | 19.0 | 4161 | 0.3796 | 0.9524 |
| 0.0001 | 20.0 | 4380 | 0.3967 | 0.9286 |
| 0.0 | 21.0 | 4599 | 0.4081 | 0.9286 |
| 0.0 | 22.0 | 4818 | 0.3898 | 0.9286 |
| 0.0 | 23.0 | 5037 | 0.3709 | 0.9286 |
| 0.0 | 24.0 | 5256 | 0.3640 | 0.9524 |
| 0.0 | 25.0 | 5475 | 0.3789 | 0.9524 |
| 0.0 | 26.0 | 5694 | 0.3987 | 0.9286 |
| 0.0 | 27.0 | 5913 | 0.4326 | 0.9286 |
| 0.0 | 28.0 | 6132 | 0.4566 | 0.9286 |
| 0.0 | 29.0 | 6351 | 0.4673 | 0.9286 |
| 0.0 | 30.0 | 6570 | 0.4642 | 0.9286 |
| 0.0 | 31.0 | 6789 | 0.4534 | 0.9286 |
| 0.0 | 32.0 | 7008 | 0.4388 | 0.9286 |
| 0.0 | 33.0 | 7227 | 0.4268 | 0.9286 |
| 0.0 | 34.0 | 7446 | 0.4182 | 0.9286 |
| 0.0 | 35.0 | 7665 | 0.4134 | 0.9286 |
| 0.0 | 36.0 | 7884 | 0.4102 | 0.9286 |
| 0.0 | 37.0 | 8103 | 0.4079 | 0.9286 |
| 0.0 | 38.0 | 8322 | 0.4066 | 0.9286 |
| 0.0 | 39.0 | 8541 | 0.4041 | 0.9286 |
| 0.0 | 40.0 | 8760 | 0.4048 | 0.9286 |
| 0.0 | 41.0 | 8979 | 0.4034 | 0.9524 |
| 0.0 | 42.0 | 9198 | 0.4032 | 0.9524 |
| 0.0 | 43.0 | 9417 | 0.4038 | 0.9524 |
| 0.0 | 44.0 | 9636 | 0.4040 | 0.9524 |
| 0.0 | 45.0 | 9855 | 0.4040 | 0.9524 |
| 0.0 | 46.0 | 10074 | 0.4038 | 0.9524 |
| 0.0 | 47.0 | 10293 | 0.4038 | 0.9524 |
| 0.0 | 48.0 | 10512 | 0.4039 | 0.9524 |
| 0.0 | 49.0 | 10731 | 0.4037 | 0.9524 |
| 0.0 | 50.0 | 10950 | 0.4037 | 0.9524 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
yuanhuaisen/autotrain-ap1fu-6mtjs
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
- loss: 0.3793713450431824
- f1_macro: 0.8554966830360989
- f1_micro: 0.8860759493670886
- f1_weighted: 0.8860489577982732
- precision_macro: 0.861904761904762
- precision_micro: 0.8860759493670886
- precision_weighted: 0.8891500904159133
- recall_macro: 0.8529517753047164
- recall_micro: 0.8860759493670886
- recall_weighted: 0.8860759493670886
- accuracy: 0.8860759493670886
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body",
"13has_nothing_to_do_with_11_and_12_above"
] |
yuanhuaisen/autotrain-c9zbz-0tb92
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
- loss: nan
- f1_macro: 0.11805555555555557
- f1_micro: 0.21518987341772153
- f1_weighted: 0.07621308016877638
- precision_macro: 0.07172995780590717
- precision_micro: 0.21518987341772153
- precision_weighted: 0.04630668162153501
- recall_macro: 0.3333333333333333
- recall_micro: 0.21518987341772153
- recall_weighted: 0.21518987341772153
- accuracy: 0.21518987341772153
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body",
"13has_nothing_to_do_with_11_and_12_above"
] |
hkivancoral/hushem_40x_deit_tiny_rms_0001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_tiny_rms_0001_fold5
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8832
- Accuracy: 0.8780
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1509 | 1.0 | 220 | 0.5608 | 0.8537 |
| 0.0292 | 2.0 | 440 | 0.1504 | 0.9512 |
| 0.1009 | 3.0 | 660 | 0.7468 | 0.8537 |
| 0.011 | 4.0 | 880 | 0.6340 | 0.7805 |
| 0.0031 | 5.0 | 1100 | 0.8446 | 0.8293 |
| 0.0646 | 6.0 | 1320 | 1.0420 | 0.8537 |
| 0.0678 | 7.0 | 1540 | 0.6521 | 0.8293 |
| 0.0002 | 8.0 | 1760 | 1.1011 | 0.8537 |
| 0.0677 | 9.0 | 1980 | 1.2605 | 0.8049 |
| 0.0002 | 10.0 | 2200 | 0.4029 | 0.9024 |
| 0.0011 | 11.0 | 2420 | 0.5279 | 0.9512 |
| 0.0002 | 12.0 | 2640 | 0.5883 | 0.9268 |
| 0.0801 | 13.0 | 2860 | 1.0161 | 0.8293 |
| 0.0 | 14.0 | 3080 | 0.7618 | 0.9024 |
| 0.0 | 15.0 | 3300 | 0.7876 | 0.8293 |
| 0.0144 | 16.0 | 3520 | 0.6802 | 0.8780 |
| 0.0032 | 17.0 | 3740 | 0.2440 | 0.9268 |
| 0.0 | 18.0 | 3960 | 0.4384 | 0.8293 |
| 0.0 | 19.0 | 4180 | 0.6787 | 0.8537 |
| 0.0 | 20.0 | 4400 | 0.6527 | 0.8293 |
| 0.0 | 21.0 | 4620 | 0.6512 | 0.8537 |
| 0.0 | 22.0 | 4840 | 0.6749 | 0.8537 |
| 0.0 | 23.0 | 5060 | 0.6838 | 0.8537 |
| 0.0 | 24.0 | 5280 | 0.7554 | 0.8537 |
| 0.0 | 25.0 | 5500 | 0.8097 | 0.8780 |
| 0.0 | 26.0 | 5720 | 0.8183 | 0.8780 |
| 0.0 | 27.0 | 5940 | 0.8490 | 0.8780 |
| 0.0 | 28.0 | 6160 | 0.9053 | 0.8537 |
| 0.0 | 29.0 | 6380 | 0.9213 | 0.8537 |
| 0.0 | 30.0 | 6600 | 0.9237 | 0.8780 |
| 0.0 | 31.0 | 6820 | 0.9293 | 0.8537 |
| 0.0 | 32.0 | 7040 | 0.9309 | 0.8780 |
| 0.0 | 33.0 | 7260 | 0.9345 | 0.8780 |
| 0.0 | 34.0 | 7480 | 0.9273 | 0.8780 |
| 0.0 | 35.0 | 7700 | 0.9432 | 0.8780 |
| 0.0 | 36.0 | 7920 | 0.9371 | 0.8780 |
| 0.0 | 37.0 | 8140 | 0.9224 | 0.9024 |
| 0.0 | 38.0 | 8360 | 0.9410 | 0.8780 |
| 0.0 | 39.0 | 8580 | 0.9241 | 0.8780 |
| 0.0 | 40.0 | 8800 | 0.9144 | 0.8780 |
| 0.0 | 41.0 | 9020 | 0.9167 | 0.8780 |
| 0.0 | 42.0 | 9240 | 0.8992 | 0.8780 |
| 0.0 | 43.0 | 9460 | 0.9050 | 0.8780 |
| 0.0 | 44.0 | 9680 | 0.8956 | 0.8780 |
| 0.0 | 45.0 | 9900 | 0.8902 | 0.8780 |
| 0.0 | 46.0 | 10120 | 0.8925 | 0.8780 |
| 0.0 | 47.0 | 10340 | 0.8847 | 0.8780 |
| 0.0 | 48.0 | 10560 | 0.8839 | 0.8780 |
| 0.0 | 49.0 | 10780 | 0.8833 | 0.8780 |
| 0.0 | 50.0 | 11000 | 0.8832 | 0.8780 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
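The fold-5 table above shows the familiar overfitting pattern: training loss pinned at 0.0 while validation loss climbs well past its epoch-2 minimum. A tiny helper (illustrative, not part of the Trainer) picks the best-validation-loss epoch from such a log.

```python
# Illustrative helper (not part of the Trainer): given (epoch, val_loss)
# pairs from a log like the table above, return the best checkpoint epoch.
def best_epoch(history):
    return min(history, key=lambda row: row[1])[0]

# First few rows of the fold-5 results table above:
history = [(1, 0.5608), (2, 0.1504), (3, 0.7468), (4, 0.6340), (5, 0.8446)]
print(best_epoch(history))  # 2 -- the epoch with validation loss 0.1504
```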
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_tiny_rms_00001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_tiny_rms_00001_fold1
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4542
- Accuracy: 0.8444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0572 | 1.0 | 215 | 0.5452 | 0.8222 |
| 0.0213 | 2.0 | 430 | 0.8514 | 0.8222 |
| 0.0002 | 3.0 | 645 | 1.1716 | 0.7778 |
| 0.0001 | 4.0 | 860 | 1.1956 | 0.8 |
| 0.0 | 5.0 | 1075 | 1.3312 | 0.7778 |
| 0.0 | 6.0 | 1290 | 1.3747 | 0.8 |
| 0.0 | 7.0 | 1505 | 1.5420 | 0.7778 |
| 0.0 | 8.0 | 1720 | 1.5431 | 0.7778 |
| 0.0 | 9.0 | 1935 | 1.6767 | 0.7778 |
| 0.0 | 10.0 | 2150 | 1.6620 | 0.8 |
| 0.0141 | 11.0 | 2365 | 0.7752 | 0.8667 |
| 0.0 | 12.0 | 2580 | 1.2616 | 0.7556 |
| 0.0 | 13.0 | 2795 | 1.1161 | 0.8667 |
| 0.0 | 14.0 | 3010 | 1.1254 | 0.8444 |
| 0.0 | 15.0 | 3225 | 1.1188 | 0.8889 |
| 0.0 | 16.0 | 3440 | 1.1820 | 0.8889 |
| 0.0 | 17.0 | 3655 | 1.2564 | 0.8889 |
| 0.0 | 18.0 | 3870 | 1.3559 | 0.8889 |
| 0.0 | 19.0 | 4085 | 1.4292 | 0.8667 |
| 0.0 | 20.0 | 4300 | 1.5164 | 0.8667 |
| 0.0 | 21.0 | 4515 | 1.5191 | 0.8667 |
| 0.0 | 22.0 | 4730 | 1.4544 | 0.8667 |
| 0.0 | 23.0 | 4945 | 1.4836 | 0.8667 |
| 0.0 | 24.0 | 5160 | 1.5747 | 0.8222 |
| 0.0 | 25.0 | 5375 | 1.5707 | 0.8222 |
| 0.0 | 26.0 | 5590 | 1.5222 | 0.8222 |
| 0.0 | 27.0 | 5805 | 1.4844 | 0.8667 |
| 0.0 | 28.0 | 6020 | 1.4898 | 0.8667 |
| 0.0 | 29.0 | 6235 | 1.5381 | 0.8444 |
| 0.0 | 30.0 | 6450 | 1.5320 | 0.8222 |
| 0.0 | 31.0 | 6665 | 1.5518 | 0.8222 |
| 0.0 | 32.0 | 6880 | 1.4681 | 0.8667 |
| 0.0 | 33.0 | 7095 | 1.5245 | 0.8444 |
| 0.0 | 34.0 | 7310 | 1.4517 | 0.8667 |
| 0.0 | 35.0 | 7525 | 1.4519 | 0.8667 |
| 0.0 | 36.0 | 7740 | 1.4734 | 0.8667 |
| 0.0 | 37.0 | 7955 | 1.5324 | 0.8444 |
| 0.0 | 38.0 | 8170 | 1.4772 | 0.8444 |
| 0.0 | 39.0 | 8385 | 1.4506 | 0.8444 |
| 0.0 | 40.0 | 8600 | 1.4509 | 0.8444 |
| 0.0 | 41.0 | 8815 | 1.5306 | 0.8444 |
| 0.0 | 42.0 | 9030 | 1.4735 | 0.8444 |
| 0.0 | 43.0 | 9245 | 1.4585 | 0.8444 |
| 0.0 | 44.0 | 9460 | 1.4843 | 0.8444 |
| 0.0 | 45.0 | 9675 | 1.4519 | 0.8444 |
| 0.0 | 46.0 | 9890 | 1.4772 | 0.8444 |
| 0.0 | 47.0 | 10105 | 1.4373 | 0.8444 |
| 0.0 | 48.0 | 10320 | 1.4662 | 0.8444 |
| 0.0 | 49.0 | 10535 | 1.4530 | 0.8444 |
| 0.0 | 50.0 | 10750 | 1.4542 | 0.8444 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
yuanhuaisen/autotrain-1pboy-weoon
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
- loss: 1.07177734375
- f1_macro: 0.5555555555555555
- f1_micro: 0.6666666666666666
- f1_weighted: 0.5555555555555555
- precision_macro: 0.5
- precision_micro: 0.6666666666666666
- precision_weighted: 0.5
- recall_macro: 0.6666666666666666
- recall_micro: 0.6666666666666666
- recall_weighted: 0.6666666666666666
- accuracy: 0.6666666666666666
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body",
"13has_nothing_to_do_with_11_and_12_above"
] |
hkivancoral/hushem_40x_deit_tiny_rms_00001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_tiny_rms_00001_fold2
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3474
- Accuracy: 0.6889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0591 | 1.0 | 215 | 0.9478 | 0.7111 |
| 0.0016 | 2.0 | 430 | 1.0737 | 0.6889 |
| 0.0003 | 3.0 | 645 | 1.0732 | 0.7111 |
| 0.0001 | 4.0 | 860 | 1.2338 | 0.7111 |
| 0.0 | 5.0 | 1075 | 1.3886 | 0.7111 |
| 0.0 | 6.0 | 1290 | 1.5328 | 0.6889 |
| 0.0 | 7.0 | 1505 | 1.6761 | 0.6889 |
| 0.0 | 8.0 | 1720 | 1.8802 | 0.6889 |
| 0.0 | 9.0 | 1935 | 2.1375 | 0.6889 |
| 0.0 | 10.0 | 2150 | 2.2804 | 0.6889 |
| 0.0 | 11.0 | 2365 | 2.5018 | 0.6667 |
| 0.0 | 12.0 | 2580 | 2.6034 | 0.7111 |
| 0.0 | 13.0 | 2795 | 2.1119 | 0.7556 |
| 0.0 | 14.0 | 3010 | 2.5118 | 0.7111 |
| 0.0 | 15.0 | 3225 | 2.4215 | 0.6889 |
| 0.0 | 16.0 | 3440 | 2.4416 | 0.6889 |
| 0.0 | 17.0 | 3655 | 2.4789 | 0.6889 |
| 0.0 | 18.0 | 3870 | 2.5530 | 0.6889 |
| 0.0 | 19.0 | 4085 | 2.6223 | 0.6889 |
| 0.0 | 20.0 | 4300 | 2.7198 | 0.6889 |
| 0.0 | 21.0 | 4515 | 2.8171 | 0.7111 |
| 0.0 | 22.0 | 4730 | 2.8585 | 0.7111 |
| 0.0 | 23.0 | 4945 | 2.8584 | 0.7111 |
| 0.0 | 24.0 | 5160 | 2.7240 | 0.7111 |
| 0.0 | 25.0 | 5375 | 2.6522 | 0.7111 |
| 0.0 | 26.0 | 5590 | 2.6766 | 0.7111 |
| 0.0 | 27.0 | 5805 | 2.6051 | 0.7333 |
| 0.0 | 28.0 | 6020 | 2.4780 | 0.7333 |
| 0.0 | 29.0 | 6235 | 2.4371 | 0.7333 |
| 0.0 | 30.0 | 6450 | 2.3680 | 0.7333 |
| 0.0 | 31.0 | 6665 | 2.3696 | 0.7111 |
| 0.0 | 32.0 | 6880 | 2.3638 | 0.7333 |
| 0.0 | 33.0 | 7095 | 2.3261 | 0.7333 |
| 0.0 | 34.0 | 7310 | 2.3611 | 0.7333 |
| 0.0 | 35.0 | 7525 | 2.3737 | 0.7333 |
| 0.0 | 36.0 | 7740 | 2.3371 | 0.6889 |
| 0.0 | 37.0 | 7955 | 2.3450 | 0.7111 |
| 0.0 | 38.0 | 8170 | 2.3727 | 0.6889 |
| 0.0 | 39.0 | 8385 | 2.3620 | 0.6889 |
| 0.0 | 40.0 | 8600 | 2.3928 | 0.6889 |
| 0.0 | 41.0 | 8815 | 2.3547 | 0.6889 |
| 0.0 | 42.0 | 9030 | 2.3935 | 0.6889 |
| 0.0 | 43.0 | 9245 | 2.3835 | 0.6889 |
| 0.0 | 44.0 | 9460 | 2.3407 | 0.6889 |
| 0.0 | 45.0 | 9675 | 2.3628 | 0.6889 |
| 0.0 | 46.0 | 9890 | 2.3464 | 0.6889 |
| 0.0 | 47.0 | 10105 | 2.3571 | 0.6889 |
| 0.0 | 48.0 | 10320 | 2.3604 | 0.6889 |
| 0.0 | 49.0 | 10535 | 2.3495 | 0.6889 |
| 0.0 | 50.0 | 10750 | 2.3474 | 0.6889 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
yuanhuaisen/autotrain-yg0zn-6y4s5
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
- loss: 1.0810546875
- f1_macro: 0.26666666666666666
- f1_micro: 0.5
- f1_weighted: 0.4
- precision_macro: 0.2222222222222222
- precision_micro: 0.5
- precision_weighted: 0.3333333333333333
- recall_macro: 0.3333333333333333
- recall_micro: 0.5
- recall_weighted: 0.5
- accuracy: 0.5
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body",
"13has_nothing_to_do_with_11_and_12_above"
] |
tfyxj/autotrain-mkp2u-20ss0
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
- loss: nan
- f1_macro: 0.03333333333333333
- f1_micro: 0.1111111111111111
- f1_weighted: 0.02222222222222222
- precision_macro: 0.018518518518518517
- precision_micro: 0.1111111111111111
- precision_weighted: 0.012345679012345678
- recall_macro: 0.16666666666666666
- recall_micro: 0.1111111111111111
- recall_weighted: 0.1111111111111111
- accuracy: 0.1111111111111111
|
[
"a",
"b",
"c",
"d",
"e",
"f"
] |
hkivancoral/hushem_40x_deit_tiny_rms_00001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_tiny_rms_00001_fold3
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4488
- Accuracy: 0.9302
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0843 | 1.0 | 217 | 0.4280 | 0.8837 |
| 0.0143 | 2.0 | 434 | 0.2889 | 0.9302 |
| 0.0172 | 3.0 | 651 | 0.5423 | 0.9070 |
| 0.0189 | 4.0 | 868 | 1.1419 | 0.7907 |
| 0.0003 | 5.0 | 1085 | 0.4120 | 0.9302 |
| 0.0 | 6.0 | 1302 | 0.4870 | 0.9302 |
| 0.0 | 7.0 | 1519 | 0.5568 | 0.9070 |
| 0.0 | 8.0 | 1736 | 0.5757 | 0.8837 |
| 0.0 | 9.0 | 1953 | 0.6076 | 0.8837 |
| 0.0 | 10.0 | 2170 | 0.6516 | 0.8837 |
| 0.0 | 11.0 | 2387 | 0.6056 | 0.8837 |
| 0.0 | 12.0 | 2604 | 0.6691 | 0.8837 |
| 0.0 | 13.0 | 2821 | 0.6559 | 0.8837 |
| 0.0 | 14.0 | 3038 | 0.7098 | 0.9070 |
| 0.0 | 15.0 | 3255 | 0.6515 | 0.9070 |
| 0.0157 | 16.0 | 3472 | 0.6215 | 0.8837 |
| 0.0 | 17.0 | 3689 | 0.6307 | 0.8837 |
| 0.0 | 18.0 | 3906 | 0.7467 | 0.8837 |
| 0.0 | 19.0 | 4123 | 0.7677 | 0.8837 |
| 0.0 | 20.0 | 4340 | 0.7998 | 0.8605 |
| 0.0 | 21.0 | 4557 | 0.8197 | 0.8605 |
| 0.0 | 22.0 | 4774 | 0.8507 | 0.8605 |
| 0.0 | 23.0 | 4991 | 0.8634 | 0.8605 |
| 0.0 | 24.0 | 5208 | 0.8853 | 0.8605 |
| 0.0 | 25.0 | 5425 | 0.7783 | 0.9070 |
| 0.0 | 26.0 | 5642 | 0.7092 | 0.9302 |
| 0.0 | 27.0 | 5859 | 0.6309 | 0.9302 |
| 0.0 | 28.0 | 6076 | 0.6509 | 0.9302 |
| 0.0 | 29.0 | 6293 | 0.5569 | 0.9070 |
| 0.0 | 30.0 | 6510 | 0.5554 | 0.9302 |
| 0.0 | 31.0 | 6727 | 0.5595 | 0.9070 |
| 0.0 | 32.0 | 6944 | 0.5154 | 0.9302 |
| 0.0 | 33.0 | 7161 | 0.5043 | 0.9070 |
| 0.0 | 34.0 | 7378 | 0.5110 | 0.9535 |
| 0.0 | 35.0 | 7595 | 0.4416 | 0.9302 |
| 0.0 | 36.0 | 7812 | 0.4610 | 0.9535 |
| 0.0 | 37.0 | 8029 | 0.5159 | 0.9302 |
| 0.0 | 38.0 | 8246 | 0.5232 | 0.9302 |
| 0.0 | 39.0 | 8463 | 0.5109 | 0.9302 |
| 0.0 | 40.0 | 8680 | 0.4511 | 0.9535 |
| 0.0 | 41.0 | 8897 | 0.4620 | 0.9302 |
| 0.0 | 42.0 | 9114 | 0.4370 | 0.9302 |
| 0.0 | 43.0 | 9331 | 0.4660 | 0.9302 |
| 0.0 | 44.0 | 9548 | 0.4561 | 0.9302 |
| 0.0 | 45.0 | 9765 | 0.4386 | 0.9302 |
| 0.0 | 46.0 | 9982 | 0.4625 | 0.9302 |
| 0.0 | 47.0 | 10199 | 0.4505 | 0.9302 |
| 0.0 | 48.0 | 10416 | 0.4377 | 0.9302 |
| 0.0 | 49.0 | 10633 | 0.4484 | 0.9302 |
| 0.0 | 50.0 | 10850 | 0.4488 | 0.9302 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
tangfei/autotrain-fo40u-s5iv3
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
- loss: nan
- f1_macro: 0.03333333333333333
- f1_micro: 0.1111111111111111
- f1_weighted: 0.02222222222222222
- precision_macro: 0.018518518518518517
- precision_micro: 0.1111111111111111
- precision_weighted: 0.012345679012345678
- recall_macro: 0.16666666666666666
- recall_micro: 0.1111111111111111
- recall_weighted: 0.1111111111111111
- accuracy: 0.1111111111111111
|
[
"a",
"b",
"c",
"d",
"e",
"f"
] |
yuanhuaisen/autotrain-mrgp6-glp79
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
- loss: inf
- f1_macro: 0.13333333333333333
- f1_micro: 0.25
- f1_weighted: 0.1
- precision_macro: 0.08333333333333333
- precision_micro: 0.25
- precision_weighted: 0.0625
- recall_macro: 0.3333333333333333
- recall_micro: 0.25
- recall_weighted: 0.25
- accuracy: 0.25
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body",
"13has_nothing_to_do_with_11_and_12_above"
] |
tangfei/autotrain-3u170-rx6d7
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
- loss: 1.0810903310775757
- f1_macro: 0.26666666666666666
- f1_micro: 0.5
- f1_weighted: 0.4
- precision_macro: 0.2222222222222222
- precision_micro: 0.5
- precision_weighted: 0.3333333333333333
- recall_macro: 0.3333333333333333
- recall_micro: 0.5
- recall_weighted: 0.5
- accuracy: 0.5
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body",
"13has_nothing_to_do_with_11_and_12_above"
] |
hkivancoral/hushem_40x_deit_small_adamax_0001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_adamax_0001_fold1
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3810
- Accuracy: 0.8667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0123 | 1.0 | 215 | 0.5689 | 0.8444 |
| 0.0004 | 2.0 | 430 | 0.8953 | 0.8 |
| 0.0001 | 3.0 | 645 | 0.8009 | 0.8444 |
| 0.0 | 4.0 | 860 | 0.8720 | 0.8444 |
| 0.0 | 5.0 | 1075 | 0.8921 | 0.8444 |
| 0.0 | 6.0 | 1290 | 0.9099 | 0.8667 |
| 0.0 | 7.0 | 1505 | 0.9275 | 0.8667 |
| 0.0 | 8.0 | 1720 | 0.9408 | 0.8667 |
| 0.0 | 9.0 | 1935 | 0.9502 | 0.8667 |
| 0.0 | 10.0 | 2150 | 0.9643 | 0.8667 |
| 0.0 | 11.0 | 2365 | 0.9768 | 0.8667 |
| 0.0 | 12.0 | 2580 | 0.9909 | 0.8667 |
| 0.0 | 13.0 | 2795 | 0.9967 | 0.8667 |
| 0.0 | 14.0 | 3010 | 1.0112 | 0.8667 |
| 0.0 | 15.0 | 3225 | 1.0262 | 0.8667 |
| 0.0 | 16.0 | 3440 | 1.0419 | 0.8667 |
| 0.0 | 17.0 | 3655 | 1.0545 | 0.8667 |
| 0.0 | 18.0 | 3870 | 1.0694 | 0.8667 |
| 0.0 | 19.0 | 4085 | 1.0814 | 0.8667 |
| 0.0 | 20.0 | 4300 | 1.1005 | 0.8667 |
| 0.0 | 21.0 | 4515 | 1.1143 | 0.8667 |
| 0.0 | 22.0 | 4730 | 1.1229 | 0.8667 |
| 0.0 | 23.0 | 4945 | 1.1410 | 0.8667 |
| 0.0 | 24.0 | 5160 | 1.1551 | 0.8667 |
| 0.0 | 25.0 | 5375 | 1.1684 | 0.8667 |
| 0.0 | 26.0 | 5590 | 1.1783 | 0.8667 |
| 0.0 | 27.0 | 5805 | 1.1949 | 0.8667 |
| 0.0 | 28.0 | 6020 | 1.2105 | 0.8667 |
| 0.0 | 29.0 | 6235 | 1.2222 | 0.8667 |
| 0.0 | 30.0 | 6450 | 1.2326 | 0.8667 |
| 0.0 | 31.0 | 6665 | 1.2508 | 0.8667 |
| 0.0 | 32.0 | 6880 | 1.2642 | 0.8667 |
| 0.0 | 33.0 | 7095 | 1.2738 | 0.8667 |
| 0.0 | 34.0 | 7310 | 1.2845 | 0.8667 |
| 0.0 | 35.0 | 7525 | 1.3026 | 0.8667 |
| 0.0 | 36.0 | 7740 | 1.3061 | 0.8667 |
| 0.0 | 37.0 | 7955 | 1.3174 | 0.8667 |
| 0.0 | 38.0 | 8170 | 1.3303 | 0.8667 |
| 0.0 | 39.0 | 8385 | 1.3454 | 0.8667 |
| 0.0 | 40.0 | 8600 | 1.3589 | 0.8667 |
| 0.0 | 41.0 | 8815 | 1.3623 | 0.8667 |
| 0.0 | 42.0 | 9030 | 1.3744 | 0.8667 |
| 0.0 | 43.0 | 9245 | 1.3726 | 0.8667 |
| 0.0 | 44.0 | 9460 | 1.3791 | 0.8667 |
| 0.0 | 45.0 | 9675 | 1.3831 | 0.8667 |
| 0.0 | 46.0 | 9890 | 1.3792 | 0.8667 |
| 0.0 | 47.0 | 10105 | 1.3827 | 0.8667 |
| 0.0 | 48.0 | 10320 | 1.3841 | 0.8667 |
| 0.0 | 49.0 | 10535 | 1.3828 | 0.8667 |
| 0.0 | 50.0 | 10750 | 1.3810 | 0.8667 |
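The step counts in the table above also bound the size of the training split. A quick pure-Python check, assuming one optimizer step per batch (no gradient accumulation is listed in the hyperparameters):

```python
import math

def steps_per_epoch(n_samples, batch_size):
    # One optimizer step per batch; the last batch may be smaller.
    return math.ceil(n_samples / batch_size)

# 215 steps per epoch at batch size 32 means the training split holds
# between 214*32 + 1 and 215*32 images:
lo, hi = 214 * 32 + 1, 215 * 32
print(lo, hi)  # 6849 6880
assert steps_per_epoch(lo, 32) == 215 and steps_per_epoch(hi, 32) == 215
```

The same arithmetic explains the slightly different step counts across folds (217, 219, 220 steps per epoch) in the sibling cards below: each fold leaves a slightly different number of training images.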
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_adamax_00001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_adamax_00001_fold1
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3015
- Accuracy: 0.8444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.238 | 1.0 | 215 | 0.7567 | 0.6667 |
| 0.0267 | 2.0 | 430 | 0.5445 | 0.7778 |
| 0.0051 | 3.0 | 645 | 0.6144 | 0.8 |
| 0.0012 | 4.0 | 860 | 0.6615 | 0.8 |
| 0.0005 | 5.0 | 1075 | 0.6553 | 0.8 |
| 0.0003 | 6.0 | 1290 | 0.6621 | 0.8222 |
| 0.0003 | 7.0 | 1505 | 0.6958 | 0.8222 |
| 0.0002 | 8.0 | 1720 | 0.7076 | 0.8222 |
| 0.0001 | 9.0 | 1935 | 0.7375 | 0.8222 |
| 0.0001 | 10.0 | 2150 | 0.7327 | 0.8222 |
| 0.0001 | 11.0 | 2365 | 0.7423 | 0.8222 |
| 0.0001 | 12.0 | 2580 | 0.7689 | 0.8 |
| 0.0001 | 13.0 | 2795 | 0.7876 | 0.8 |
| 0.0 | 14.0 | 3010 | 0.7990 | 0.8 |
| 0.0 | 15.0 | 3225 | 0.8203 | 0.8 |
| 0.0 | 16.0 | 3440 | 0.8447 | 0.8 |
| 0.0 | 17.0 | 3655 | 0.8558 | 0.8 |
| 0.0 | 18.0 | 3870 | 0.8774 | 0.8 |
| 0.0 | 19.0 | 4085 | 0.8896 | 0.8 |
| 0.0 | 20.0 | 4300 | 0.8965 | 0.8 |
| 0.0 | 21.0 | 4515 | 0.9254 | 0.8 |
| 0.0 | 22.0 | 4730 | 0.9318 | 0.8 |
| 0.0 | 23.0 | 4945 | 0.9571 | 0.8 |
| 0.0 | 24.0 | 5160 | 0.9711 | 0.8222 |
| 0.0 | 25.0 | 5375 | 0.9833 | 0.8222 |
| 0.0 | 26.0 | 5590 | 0.9915 | 0.8222 |
| 0.0 | 27.0 | 5805 | 1.0134 | 0.8222 |
| 0.0 | 28.0 | 6020 | 1.0327 | 0.8222 |
| 0.0 | 29.0 | 6235 | 1.0249 | 0.8222 |
| 0.0 | 30.0 | 6450 | 1.0679 | 0.8222 |
| 0.0 | 31.0 | 6665 | 1.0896 | 0.8222 |
| 0.0 | 32.0 | 6880 | 1.0990 | 0.8222 |
| 0.0 | 33.0 | 7095 | 1.1103 | 0.8222 |
| 0.0 | 34.0 | 7310 | 1.1167 | 0.8222 |
| 0.0 | 35.0 | 7525 | 1.1494 | 0.8222 |
| 0.0 | 36.0 | 7740 | 1.1474 | 0.8444 |
| 0.0 | 37.0 | 7955 | 1.1611 | 0.8444 |
| 0.0 | 38.0 | 8170 | 1.2104 | 0.8222 |
| 0.0 | 39.0 | 8385 | 1.1969 | 0.8444 |
| 0.0 | 40.0 | 8600 | 1.2127 | 0.8222 |
| 0.0 | 41.0 | 8815 | 1.2186 | 0.8444 |
| 0.0 | 42.0 | 9030 | 1.2356 | 0.8444 |
| 0.0 | 43.0 | 9245 | 1.2578 | 0.8444 |
| 0.0 | 44.0 | 9460 | 1.2543 | 0.8444 |
| 0.0 | 45.0 | 9675 | 1.2707 | 0.8222 |
| 0.0 | 46.0 | 9890 | 1.2807 | 0.8444 |
| 0.0 | 47.0 | 10105 | 1.2891 | 0.8444 |
| 0.0 | 48.0 | 10320 | 1.3057 | 0.8222 |
| 0.0 | 49.0 | 10535 | 1.3045 | 0.8444 |
| 0.0 | 50.0 | 10750 | 1.3015 | 0.8444 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
yuanhuaisen/autotrain-zbr27-krbt1
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 6600.52001953125
f1_macro: 0.2222222222222222
f1_micro: 0.5
f1_weighted: 0.3333333333333333
precision_macro: 0.16666666666666666
precision_micro: 0.5
precision_weighted: 0.25
recall_macro: 0.3333333333333333
recall_micro: 0.5
recall_weighted: 0.5
accuracy: 0.5
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body",
"13has_nothing_to_do_with_11_and_12_above"
] |
ZhaoYoujia/autotrain-vit-base
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 789.2529296875
f1_macro: 0.26666666666666666
f1_micro: 0.5
f1_weighted: 0.4
precision_macro: 0.2222222222222222
precision_micro: 0.5
precision_weighted: 0.3333333333333333
recall_macro: 0.3333333333333333
recall_micro: 0.5
recall_weighted: 0.5
accuracy: 0.5
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body",
"13has_nothing_to_do_with_11_and_12_above"
] |
tangfei/autotrain-j6d02-1kjo6
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: nan
f1_macro: 0.13333333333333333
f1_micro: 0.25
f1_weighted: 0.1
precision_macro: 0.08333333333333333
precision_micro: 0.25
precision_weighted: 0.0625
recall_macro: 0.3333333333333333
recall_micro: 0.25
recall_weighted: 0.25
accuracy: 0.25
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body",
"13has_nothing_to_do_with_11_and_12_above"
] |
hkivancoral/hushem_40x_deit_tiny_rms_00001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_tiny_rms_00001_fold4
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4229
- Accuracy: 0.9286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1412 | 1.0 | 219 | 0.1610 | 0.9286 |
| 0.013 | 2.0 | 438 | 0.1553 | 0.9524 |
| 0.0005 | 3.0 | 657 | 0.1135 | 0.9762 |
| 0.0002 | 4.0 | 876 | 0.2956 | 0.9286 |
| 0.0001 | 5.0 | 1095 | 0.1278 | 0.9762 |
| 0.0 | 6.0 | 1314 | 0.2416 | 0.9286 |
| 0.0031 | 7.0 | 1533 | 0.2692 | 0.9286 |
| 0.0 | 8.0 | 1752 | 0.1088 | 0.9524 |
| 0.0 | 9.0 | 1971 | 0.1134 | 0.9524 |
| 0.0 | 10.0 | 2190 | 0.1607 | 0.9524 |
| 0.0 | 11.0 | 2409 | 0.2098 | 0.9524 |
| 0.0 | 12.0 | 2628 | 0.2244 | 0.9524 |
| 0.0 | 13.0 | 2847 | 0.2259 | 0.9524 |
| 0.0 | 14.0 | 3066 | 0.2811 | 0.9524 |
| 0.0 | 15.0 | 3285 | 0.3300 | 0.9524 |
| 0.0 | 16.0 | 3504 | 0.3199 | 0.9524 |
| 0.0 | 17.0 | 3723 | 0.3615 | 0.9524 |
| 0.0 | 18.0 | 3942 | 0.4872 | 0.9524 |
| 0.0 | 19.0 | 4161 | 0.4327 | 0.9524 |
| 0.0 | 20.0 | 4380 | 0.4099 | 0.9524 |
| 0.0 | 21.0 | 4599 | 0.4211 | 0.9524 |
| 0.0 | 22.0 | 4818 | 0.3019 | 0.9524 |
| 0.0 | 23.0 | 5037 | 0.3473 | 0.9524 |
| 0.0 | 24.0 | 5256 | 0.3822 | 0.9524 |
| 0.0 | 25.0 | 5475 | 0.4512 | 0.9524 |
| 0.0 | 26.0 | 5694 | 0.3963 | 0.9524 |
| 0.0 | 27.0 | 5913 | 0.5056 | 0.9524 |
| 0.0 | 28.0 | 6132 | 0.4587 | 0.9524 |
| 0.0 | 29.0 | 6351 | 0.4379 | 0.9524 |
| 0.0 | 30.0 | 6570 | 0.4500 | 0.9524 |
| 0.0 | 31.0 | 6789 | 0.4166 | 0.9524 |
| 0.0 | 32.0 | 7008 | 0.3798 | 0.9524 |
| 0.0 | 33.0 | 7227 | 0.4566 | 0.9524 |
| 0.0 | 34.0 | 7446 | 0.3959 | 0.9524 |
| 0.0 | 35.0 | 7665 | 0.3429 | 0.9524 |
| 0.0 | 36.0 | 7884 | 0.3690 | 0.9524 |
| 0.0 | 37.0 | 8103 | 0.4056 | 0.9524 |
| 0.0 | 38.0 | 8322 | 0.4315 | 0.9286 |
| 0.0 | 39.0 | 8541 | 0.4336 | 0.9286 |
| 0.0 | 40.0 | 8760 | 0.4561 | 0.9524 |
| 0.0 | 41.0 | 8979 | 0.4723 | 0.9286 |
| 0.0 | 42.0 | 9198 | 0.3818 | 0.9286 |
| 0.0 | 43.0 | 9417 | 0.4220 | 0.9286 |
| 0.0 | 44.0 | 9636 | 0.4298 | 0.9286 |
| 0.0 | 45.0 | 9855 | 0.4315 | 0.9286 |
| 0.0 | 46.0 | 10074 | 0.4212 | 0.9286 |
| 0.0 | 47.0 | 10293 | 0.4170 | 0.9286 |
| 0.0 | 48.0 | 10512 | 0.4294 | 0.9286 |
| 0.0 | 49.0 | 10731 | 0.4253 | 0.9286 |
| 0.0 | 50.0 | 10950 | 0.4229 | 0.9286 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
yuanhuaisen/autotrain-7fwz1-ioqdv
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: nan
f1_macro: 0.2222222222222222
f1_micro: 0.5
f1_weighted: 0.3333333333333333
precision_macro: 0.16666666666666666
precision_micro: 0.5
precision_weighted: 0.25
recall_macro: 0.3333333333333333
recall_micro: 0.5
recall_weighted: 0.5
accuracy: 0.5
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body",
"13has_nothing_to_do_with_11_and_12_above"
] |
ZhaoYoujia/autotrain-vit-base-2
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 1.019775390625
f1_macro: 0.6
f1_micro: 0.75
f1_weighted: 0.65
precision_macro: 0.5555555555555555
precision_micro: 0.75
precision_weighted: 0.5833333333333333
recall_macro: 0.6666666666666666
recall_micro: 0.75
recall_weighted: 0.75
accuracy: 0.75
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body",
"13has_nothing_to_do_with_11_and_12_above"
] |
hkivancoral/hushem_40x_deit_small_adamax_0001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_adamax_0001_fold2
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3727
- Accuracy: 0.7778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0173 | 1.0 | 215 | 1.0013 | 0.8222 |
| 0.0269 | 2.0 | 430 | 1.2649 | 0.8 |
| 0.0002 | 3.0 | 645 | 1.2781 | 0.7778 |
| 0.0001 | 4.0 | 860 | 1.3529 | 0.8444 |
| 0.0 | 5.0 | 1075 | 1.4404 | 0.7778 |
| 0.0 | 6.0 | 1290 | 1.4900 | 0.7778 |
| 0.0 | 7.0 | 1505 | 1.5042 | 0.7778 |
| 0.0 | 8.0 | 1720 | 1.5235 | 0.8 |
| 0.0 | 9.0 | 1935 | 1.5456 | 0.8 |
| 0.0 | 10.0 | 2150 | 1.5663 | 0.8 |
| 0.0 | 11.0 | 2365 | 1.5868 | 0.8 |
| 0.0 | 12.0 | 2580 | 1.6070 | 0.8 |
| 0.0 | 13.0 | 2795 | 1.6276 | 0.8 |
| 0.0 | 14.0 | 3010 | 1.6480 | 0.8 |
| 0.0 | 15.0 | 3225 | 1.6681 | 0.8 |
| 0.0 | 16.0 | 3440 | 1.6904 | 0.8 |
| 0.0 | 17.0 | 3655 | 1.7113 | 0.8 |
| 0.0 | 18.0 | 3870 | 1.7306 | 0.8 |
| 0.0 | 19.0 | 4085 | 1.7537 | 0.8 |
| 0.0 | 20.0 | 4300 | 1.7734 | 0.8 |
| 0.0 | 21.0 | 4515 | 1.7960 | 0.8 |
| 0.0 | 22.0 | 4730 | 1.8166 | 0.8 |
| 0.0 | 23.0 | 4945 | 1.8372 | 0.7778 |
| 0.0 | 24.0 | 5160 | 1.8608 | 0.7778 |
| 0.0 | 25.0 | 5375 | 1.8855 | 0.7778 |
| 0.0 | 26.0 | 5590 | 1.9093 | 0.7778 |
| 0.0 | 27.0 | 5805 | 1.9324 | 0.7778 |
| 0.0 | 28.0 | 6020 | 1.9540 | 0.7778 |
| 0.0 | 29.0 | 6235 | 1.9776 | 0.7778 |
| 0.0 | 30.0 | 6450 | 2.0038 | 0.7778 |
| 0.0 | 31.0 | 6665 | 2.0284 | 0.7778 |
| 0.0 | 32.0 | 6880 | 2.0516 | 0.7778 |
| 0.0 | 33.0 | 7095 | 2.0714 | 0.7778 |
| 0.0 | 34.0 | 7310 | 2.0980 | 0.7778 |
| 0.0 | 35.0 | 7525 | 2.1240 | 0.7778 |
| 0.0 | 36.0 | 7740 | 2.1547 | 0.7778 |
| 0.0 | 37.0 | 7955 | 2.1788 | 0.7778 |
| 0.0 | 38.0 | 8170 | 2.2033 | 0.7778 |
| 0.0 | 39.0 | 8385 | 2.2233 | 0.7778 |
| 0.0 | 40.0 | 8600 | 2.2485 | 0.7778 |
| 0.0 | 41.0 | 8815 | 2.2722 | 0.7778 |
| 0.0 | 42.0 | 9030 | 2.2942 | 0.7778 |
| 0.0 | 43.0 | 9245 | 2.3070 | 0.7778 |
| 0.0 | 44.0 | 9460 | 2.3255 | 0.7778 |
| 0.0 | 45.0 | 9675 | 2.3404 | 0.7778 |
| 0.0 | 46.0 | 9890 | 2.3482 | 0.7778 |
| 0.0 | 47.0 | 10105 | 2.3577 | 0.7778 |
| 0.0 | 48.0 | 10320 | 2.3666 | 0.7778 |
| 0.0 | 49.0 | 10535 | 2.3708 | 0.7778 |
| 0.0 | 50.0 | 10750 | 2.3727 | 0.7778 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_adamax_00001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_adamax_00001_fold2
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9649
- Accuracy: 0.7778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2083 | 1.0 | 215 | 0.9013 | 0.7111 |
| 0.0287 | 2.0 | 430 | 0.8511 | 0.6889 |
| 0.0032 | 3.0 | 645 | 0.9572 | 0.7556 |
| 0.0007 | 4.0 | 860 | 1.0000 | 0.7778 |
| 0.0005 | 5.0 | 1075 | 1.0506 | 0.7778 |
| 0.0003 | 6.0 | 1290 | 1.0611 | 0.8 |
| 0.0002 | 7.0 | 1505 | 1.1165 | 0.8 |
| 0.0002 | 8.0 | 1720 | 1.1228 | 0.8 |
| 0.0001 | 9.0 | 1935 | 1.1485 | 0.8 |
| 0.0001 | 10.0 | 2150 | 1.1778 | 0.8 |
| 0.0001 | 11.0 | 2365 | 1.2118 | 0.8 |
| 0.0001 | 12.0 | 2580 | 1.2131 | 0.8 |
| 0.0001 | 13.0 | 2795 | 1.2712 | 0.8 |
| 0.0 | 14.0 | 3010 | 1.2931 | 0.8 |
| 0.0 | 15.0 | 3225 | 1.3148 | 0.8 |
| 0.0 | 16.0 | 3440 | 1.3419 | 0.8 |
| 0.0 | 17.0 | 3655 | 1.3678 | 0.8 |
| 0.0 | 18.0 | 3870 | 1.3854 | 0.8 |
| 0.0 | 19.0 | 4085 | 1.4044 | 0.8 |
| 0.0 | 20.0 | 4300 | 1.4309 | 0.8 |
| 0.0 | 21.0 | 4515 | 1.4555 | 0.8 |
| 0.0 | 22.0 | 4730 | 1.4779 | 0.8 |
| 0.0 | 23.0 | 4945 | 1.4898 | 0.8 |
| 0.0 | 24.0 | 5160 | 1.5327 | 0.8 |
| 0.0 | 25.0 | 5375 | 1.5448 | 0.8 |
| 0.0 | 26.0 | 5590 | 1.5564 | 0.8 |
| 0.0 | 27.0 | 5805 | 1.5873 | 0.8 |
| 0.0 | 28.0 | 6020 | 1.5904 | 0.8 |
| 0.0 | 29.0 | 6235 | 1.6335 | 0.7778 |
| 0.0 | 30.0 | 6450 | 1.6414 | 0.8 |
| 0.0 | 31.0 | 6665 | 1.6621 | 0.8 |
| 0.0 | 32.0 | 6880 | 1.6886 | 0.7778 |
| 0.0 | 33.0 | 7095 | 1.7082 | 0.7778 |
| 0.0 | 34.0 | 7310 | 1.7346 | 0.7778 |
| 0.0 | 35.0 | 7525 | 1.7374 | 0.7778 |
| 0.0 | 36.0 | 7740 | 1.7722 | 0.7778 |
| 0.0 | 37.0 | 7955 | 1.7793 | 0.7778 |
| 0.0 | 38.0 | 8170 | 1.8303 | 0.7778 |
| 0.0 | 39.0 | 8385 | 1.8111 | 0.7778 |
| 0.0 | 40.0 | 8600 | 1.8625 | 0.7778 |
| 0.0 | 41.0 | 8815 | 1.8394 | 0.7778 |
| 0.0 | 42.0 | 9030 | 1.9233 | 0.7778 |
| 0.0 | 43.0 | 9245 | 1.8877 | 0.7778 |
| 0.0 | 44.0 | 9460 | 1.8931 | 0.7778 |
| 0.0 | 45.0 | 9675 | 1.9380 | 0.7778 |
| 0.0 | 46.0 | 9890 | 1.9422 | 0.7778 |
| 0.0 | 47.0 | 10105 | 1.9560 | 0.7778 |
| 0.0 | 48.0 | 10320 | 1.9635 | 0.7778 |
| 0.0 | 49.0 | 10535 | 1.9545 | 0.7778 |
| 0.0 | 50.0 | 10750 | 1.9649 | 0.7778 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
fftx0907/autotrain-s4m14-yyyit
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: nan
f1_macro: 0.031825795644891124
f1_micro: 0.10555555555555556
f1_weighted: 0.020156337241764376
precision_macro: 0.017592592592592594
precision_micro: 0.10555555555555556
precision_weighted: 0.011141975308641975
recall_macro: 0.16666666666666666
recall_micro: 0.10555555555555556
recall_weighted: 0.10555555555555556
accuracy: 0.10555555555555556
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body",
"13has_nothing_to_do_with_11_and_12_above",
"21covered_with_a_quilt_only_the_head_and_shoulders_exposed",
"22covered_with_a_quilt_exposed_head_and_shoulders_except_for_other_organs",
"23has_nothing_to_do_with_21_and_22_above"
] |
ZhaoYoujia/autotrain-vit-base-v3
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: nan
f1_macro: 0.11111111111111112
f1_micro: 0.19480519480519481
f1_weighted: 0.06493506493506494
precision_macro: 0.06666666666666667
precision_micro: 0.19480519480519481
precision_weighted: 0.03896103896103896
recall_macro: 0.3333333333333333
recall_micro: 0.19480519480519481
recall_weighted: 0.19480519480519481
accuracy: 0.19480519480519481
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body",
"13has_nothing_to_do_with_11_and_12_above"
] |
yuanhuaisen/autotrain-wo0g3-9eb7w
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 1.087890625
f1_macro: 0.26666666666666666
f1_micro: 0.5
f1_weighted: 0.4
precision_macro: 0.2222222222222222
precision_micro: 0.5
precision_weighted: 0.3333333333333333
recall_macro: 0.3333333333333333
recall_micro: 0.5
recall_weighted: 0.5
accuracy: 0.5
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body",
"13has_nothing_to_do_with_11_and_12_above"
] |
hkivancoral/hushem_40x_deit_small_adamax_0001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_adamax_0001_fold3
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7579
- Accuracy: 0.9302
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0315 | 1.0 | 217 | 0.2572 | 0.9070 |
| 0.0049 | 2.0 | 434 | 0.4551 | 0.8837 |
| 0.0004 | 3.0 | 651 | 0.3965 | 0.8837 |
| 0.0001 | 4.0 | 868 | 0.4995 | 0.9070 |
| 0.0 | 5.0 | 1085 | 0.3370 | 0.9535 |
| 0.0 | 6.0 | 1302 | 0.4294 | 0.9302 |
| 0.0 | 7.0 | 1519 | 0.4525 | 0.9302 |
| 0.0 | 8.0 | 1736 | 0.4672 | 0.9302 |
| 0.0 | 9.0 | 1953 | 0.4797 | 0.9302 |
| 0.0 | 10.0 | 2170 | 0.4904 | 0.9302 |
| 0.0 | 11.0 | 2387 | 0.4947 | 0.9302 |
| 0.0 | 12.0 | 2604 | 0.5020 | 0.9302 |
| 0.0 | 13.0 | 2821 | 0.5084 | 0.9302 |
| 0.0 | 14.0 | 3038 | 0.5153 | 0.9302 |
| 0.0 | 15.0 | 3255 | 0.5246 | 0.9302 |
| 0.0 | 16.0 | 3472 | 0.5296 | 0.9302 |
| 0.0 | 17.0 | 3689 | 0.5346 | 0.9302 |
| 0.0 | 18.0 | 3906 | 0.5408 | 0.9302 |
| 0.0 | 19.0 | 4123 | 0.5469 | 0.9302 |
| 0.0 | 20.0 | 4340 | 0.5538 | 0.9302 |
| 0.0 | 21.0 | 4557 | 0.5570 | 0.9302 |
| 0.0 | 22.0 | 4774 | 0.5610 | 0.9302 |
| 0.0 | 23.0 | 4991 | 0.5712 | 0.9302 |
| 0.0 | 24.0 | 5208 | 0.5753 | 0.9302 |
| 0.0 | 25.0 | 5425 | 0.5846 | 0.9302 |
| 0.0 | 26.0 | 5642 | 0.5887 | 0.9302 |
| 0.0 | 27.0 | 5859 | 0.5949 | 0.9302 |
| 0.0 | 28.0 | 6076 | 0.6007 | 0.9302 |
| 0.0 | 29.0 | 6293 | 0.6068 | 0.9302 |
| 0.0 | 30.0 | 6510 | 0.6184 | 0.9302 |
| 0.0 | 31.0 | 6727 | 0.6280 | 0.9302 |
| 0.0 | 32.0 | 6944 | 0.6394 | 0.9302 |
| 0.0 | 33.0 | 7161 | 0.6407 | 0.9302 |
| 0.0 | 34.0 | 7378 | 0.6480 | 0.9302 |
| 0.0 | 35.0 | 7595 | 0.6588 | 0.9302 |
| 0.0 | 36.0 | 7812 | 0.6700 | 0.9302 |
| 0.0 | 37.0 | 8029 | 0.6709 | 0.9302 |
| 0.0 | 38.0 | 8246 | 0.6850 | 0.9302 |
| 0.0 | 39.0 | 8463 | 0.6933 | 0.9302 |
| 0.0 | 40.0 | 8680 | 0.7079 | 0.9302 |
| 0.0 | 41.0 | 8897 | 0.7123 | 0.9302 |
| 0.0 | 42.0 | 9114 | 0.7231 | 0.9302 |
| 0.0 | 43.0 | 9331 | 0.7313 | 0.9302 |
| 0.0 | 44.0 | 9548 | 0.7417 | 0.9302 |
| 0.0 | 45.0 | 9765 | 0.7473 | 0.9302 |
| 0.0 | 46.0 | 9982 | 0.7513 | 0.9302 |
| 0.0 | 47.0 | 10199 | 0.7551 | 0.9302 |
| 0.0 | 48.0 | 10416 | 0.7564 | 0.9302 |
| 0.0 | 49.0 | 10633 | 0.7578 | 0.9302 |
| 0.0 | 50.0 | 10850 | 0.7579 | 0.9302 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_adamax_00001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_adamax_00001_fold3
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4888
- Accuracy: 0.9302
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3152 | 1.0 | 217 | 0.4303 | 0.8140 |
| 0.0395 | 2.0 | 434 | 0.3110 | 0.8605 |
| 0.0052 | 3.0 | 651 | 0.1960 | 0.8837 |
| 0.0014 | 4.0 | 868 | 0.1973 | 0.9070 |
| 0.0007 | 5.0 | 1085 | 0.1799 | 0.9070 |
| 0.0004 | 6.0 | 1302 | 0.1913 | 0.9070 |
| 0.0003 | 7.0 | 1519 | 0.2030 | 0.9070 |
| 0.0002 | 8.0 | 1736 | 0.1949 | 0.9302 |
| 0.0002 | 9.0 | 1953 | 0.2095 | 0.9302 |
| 0.0001 | 10.0 | 2170 | 0.2248 | 0.9302 |
| 0.0001 | 11.0 | 2387 | 0.1957 | 0.9070 |
| 0.0001 | 12.0 | 2604 | 0.2287 | 0.9302 |
| 0.0001 | 13.0 | 2821 | 0.2292 | 0.9302 |
| 0.0 | 14.0 | 3038 | 0.2168 | 0.9302 |
| 0.0 | 15.0 | 3255 | 0.2321 | 0.9302 |
| 0.0 | 16.0 | 3472 | 0.2331 | 0.9302 |
| 0.0 | 17.0 | 3689 | 0.2639 | 0.9302 |
| 0.0 | 18.0 | 3906 | 0.2552 | 0.9302 |
| 0.0 | 19.0 | 4123 | 0.2773 | 0.9302 |
| 0.0 | 20.0 | 4340 | 0.2788 | 0.9302 |
| 0.0 | 21.0 | 4557 | 0.3072 | 0.9302 |
| 0.0 | 22.0 | 4774 | 0.2995 | 0.9302 |
| 0.0 | 23.0 | 4991 | 0.3235 | 0.9302 |
| 0.0 | 24.0 | 5208 | 0.3152 | 0.9302 |
| 0.0 | 25.0 | 5425 | 0.3196 | 0.9302 |
| 0.0 | 26.0 | 5642 | 0.3244 | 0.9302 |
| 0.0 | 27.0 | 5859 | 0.3243 | 0.9302 |
| 0.0 | 28.0 | 6076 | 0.3343 | 0.9302 |
| 0.0 | 29.0 | 6293 | 0.3666 | 0.9302 |
| 0.0 | 30.0 | 6510 | 0.3811 | 0.9302 |
| 0.0 | 31.0 | 6727 | 0.3978 | 0.9302 |
| 0.0 | 32.0 | 6944 | 0.3769 | 0.9302 |
| 0.0 | 33.0 | 7161 | 0.4052 | 0.9302 |
| 0.0 | 34.0 | 7378 | 0.4150 | 0.9302 |
| 0.0 | 35.0 | 7595 | 0.4227 | 0.9302 |
| 0.0 | 36.0 | 7812 | 0.4100 | 0.9302 |
| 0.0 | 37.0 | 8029 | 0.3974 | 0.9302 |
| 0.0 | 38.0 | 8246 | 0.4427 | 0.9302 |
| 0.0 | 39.0 | 8463 | 0.4150 | 0.9302 |
| 0.0 | 40.0 | 8680 | 0.4448 | 0.9302 |
| 0.0 | 41.0 | 8897 | 0.4616 | 0.9302 |
| 0.0 | 42.0 | 9114 | 0.4839 | 0.9302 |
| 0.0 | 43.0 | 9331 | 0.4831 | 0.9302 |
| 0.0 | 44.0 | 9548 | 0.4641 | 0.9302 |
| 0.0 | 45.0 | 9765 | 0.4680 | 0.9302 |
| 0.0 | 46.0 | 9982 | 0.4903 | 0.9302 |
| 0.0 | 47.0 | 10199 | 0.4721 | 0.9302 |
| 0.0 | 48.0 | 10416 | 0.4832 | 0.9302 |
| 0.0 | 49.0 | 10633 | 0.4900 | 0.9302 |
| 0.0 | 50.0 | 10850 | 0.4888 | 0.9302 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_tiny_rms_00001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_tiny_rms_00001_fold5
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8811
- Accuracy: 0.8780
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0782 | 1.0 | 220 | 0.5815 | 0.8049 |
| 0.0225 | 2.0 | 440 | 0.3978 | 0.9024 |
| 0.0004 | 3.0 | 660 | 0.7541 | 0.8537 |
| 0.0039 | 4.0 | 880 | 0.7210 | 0.8293 |
| 0.0001 | 5.0 | 1100 | 0.5527 | 0.8780 |
| 0.0 | 6.0 | 1320 | 0.6823 | 0.9024 |
| 0.0001 | 7.0 | 1540 | 1.0039 | 0.8537 |
| 0.0 | 8.0 | 1760 | 0.6347 | 0.9024 |
| 0.0 | 9.0 | 1980 | 0.7021 | 0.8780 |
| 0.0 | 10.0 | 2200 | 0.6472 | 0.9024 |
| 0.0 | 11.0 | 2420 | 0.6252 | 0.9024 |
| 0.0 | 12.0 | 2640 | 0.5139 | 0.9268 |
| 0.0 | 13.0 | 2860 | 0.5354 | 0.9268 |
| 0.0 | 14.0 | 3080 | 0.5375 | 0.9268 |
| 0.0 | 15.0 | 3300 | 0.5909 | 0.9268 |
| 0.0 | 16.0 | 3520 | 0.6027 | 0.9268 |
| 0.0 | 17.0 | 3740 | 0.6214 | 0.9024 |
| 0.0 | 18.0 | 3960 | 0.7047 | 0.9024 |
| 0.0 | 19.0 | 4180 | 0.6477 | 0.9024 |
| 0.0 | 20.0 | 4400 | 0.6743 | 0.9024 |
| 0.0 | 21.0 | 4620 | 0.8503 | 0.9024 |
| 0.0 | 22.0 | 4840 | 0.7510 | 0.9024 |
| 0.0 | 23.0 | 5060 | 0.7888 | 0.9024 |
| 0.0 | 24.0 | 5280 | 0.7941 | 0.9024 |
| 0.0 | 25.0 | 5500 | 0.7357 | 0.9024 |
| 0.0 | 26.0 | 5720 | 0.7919 | 0.9024 |
| 0.0 | 27.0 | 5940 | 0.8554 | 0.9024 |
| 0.0 | 28.0 | 6160 | 0.8483 | 0.9024 |
| 0.0 | 29.0 | 6380 | 0.8353 | 0.9024 |
| 0.0 | 30.0 | 6600 | 0.8426 | 0.9024 |
| 0.0 | 31.0 | 6820 | 0.8345 | 0.9024 |
| 0.0 | 32.0 | 7040 | 0.8722 | 0.8780 |
| 0.0 | 33.0 | 7260 | 0.8796 | 0.9024 |
| 0.0 | 34.0 | 7480 | 0.8619 | 0.9024 |
| 0.0 | 35.0 | 7700 | 0.8170 | 0.8780 |
| 0.0 | 36.0 | 7920 | 0.8507 | 0.8780 |
| 0.0 | 37.0 | 8140 | 0.8419 | 0.8780 |
| 0.0 | 38.0 | 8360 | 0.8654 | 0.8780 |
| 0.0 | 39.0 | 8580 | 0.8111 | 0.8780 |
| 0.0 | 40.0 | 8800 | 0.8294 | 0.8780 |
| 0.0 | 41.0 | 9020 | 0.8633 | 0.8780 |
| 0.0 | 42.0 | 9240 | 0.8863 | 0.8780 |
| 0.0 | 43.0 | 9460 | 0.9061 | 0.8780 |
| 0.0 | 44.0 | 9680 | 0.9039 | 0.8780 |
| 0.0 | 45.0 | 9900 | 0.9053 | 0.8780 |
| 0.0 | 46.0 | 10120 | 0.8784 | 0.8780 |
| 0.0 | 47.0 | 10340 | 0.8824 | 0.8780 |
| 0.0 | 48.0 | 10560 | 0.8924 | 0.8780 |
| 0.0 | 49.0 | 10780 | 0.8784 | 0.8780 |
| 0.0 | 50.0 | 11000 | 0.8811 | 0.8780 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
ZhaoYoujia/autotrain-vit-base-v5
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: nan
f1_macro: 0.16666666666666666
f1_micro: 0.3333333333333333
f1_weighted: 0.16666666666666666
precision_macro: 0.1111111111111111
precision_micro: 0.3333333333333333
precision_weighted: 0.1111111111111111
recall_macro: 0.3333333333333333
recall_micro: 0.3333333333333333
recall_weighted: 0.3333333333333333
accuracy: 0.3333333333333333
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body",
"13has_nothing_to_do_with_11_and_12_above"
] |
hkivancoral/hushem_40x_deit_small_adamax_0001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_adamax_0001_fold4
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2200
- Accuracy: 0.9762
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0294 | 1.0 | 219 | 0.5356 | 0.8571 |
| 0.0128 | 2.0 | 438 | 0.3893 | 0.9286 |
| 0.0002 | 3.0 | 657 | 0.2025 | 0.9762 |
| 0.0012 | 4.0 | 876 | 0.0996 | 0.9524 |
| 0.0042 | 5.0 | 1095 | 0.3099 | 0.8810 |
| 0.0 | 6.0 | 1314 | 0.3304 | 0.9524 |
| 0.0 | 7.0 | 1533 | 0.0291 | 0.9762 |
| 0.0 | 8.0 | 1752 | 0.3258 | 0.9524 |
| 0.0 | 9.0 | 1971 | 0.2200 | 0.9762 |
| 0.0 | 10.0 | 2190 | 0.2242 | 0.9762 |
| 0.0 | 11.0 | 2409 | 0.2270 | 0.9762 |
| 0.0 | 12.0 | 2628 | 0.2293 | 0.9762 |
| 0.0 | 13.0 | 2847 | 0.2306 | 0.9762 |
| 0.0 | 14.0 | 3066 | 0.2313 | 0.9762 |
| 0.0 | 15.0 | 3285 | 0.2317 | 0.9762 |
| 0.0 | 16.0 | 3504 | 0.2327 | 0.9762 |
| 0.0 | 17.0 | 3723 | 0.2330 | 0.9762 |
| 0.0 | 18.0 | 3942 | 0.2343 | 0.9762 |
| 0.0 | 19.0 | 4161 | 0.2344 | 0.9762 |
| 0.0 | 20.0 | 4380 | 0.2350 | 0.9762 |
| 0.0 | 21.0 | 4599 | 0.2360 | 0.9762 |
| 0.0 | 22.0 | 4818 | 0.2352 | 0.9762 |
| 0.0 | 23.0 | 5037 | 0.2356 | 0.9762 |
| 0.0 | 24.0 | 5256 | 0.2355 | 0.9762 |
| 0.0 | 25.0 | 5475 | 0.2362 | 0.9762 |
| 0.0 | 26.0 | 5694 | 0.2364 | 0.9762 |
| 0.0 | 27.0 | 5913 | 0.2365 | 0.9762 |
| 0.0 | 28.0 | 6132 | 0.2373 | 0.9762 |
| 0.0 | 29.0 | 6351 | 0.2369 | 0.9762 |
| 0.0 | 30.0 | 6570 | 0.2371 | 0.9762 |
| 0.0 | 31.0 | 6789 | 0.2360 | 0.9762 |
| 0.0 | 32.0 | 7008 | 0.2375 | 0.9762 |
| 0.0 | 33.0 | 7227 | 0.2373 | 0.9762 |
| 0.0 | 34.0 | 7446 | 0.2372 | 0.9762 |
| 0.0 | 35.0 | 7665 | 0.2377 | 0.9762 |
| 0.0 | 36.0 | 7884 | 0.2367 | 0.9762 |
| 0.0 | 37.0 | 8103 | 0.2369 | 0.9762 |
| 0.0 | 38.0 | 8322 | 0.2356 | 0.9762 |
| 0.0 | 39.0 | 8541 | 0.2350 | 0.9762 |
| 0.0 | 40.0 | 8760 | 0.2356 | 0.9762 |
| 0.0 | 41.0 | 8979 | 0.2346 | 0.9762 |
| 0.0 | 42.0 | 9198 | 0.2341 | 0.9762 |
| 0.0 | 43.0 | 9417 | 0.2328 | 0.9762 |
| 0.0 | 44.0 | 9636 | 0.2314 | 0.9762 |
| 0.0 | 45.0 | 9855 | 0.2283 | 0.9762 |
| 0.0 | 46.0 | 10074 | 0.2261 | 0.9762 |
| 0.0 | 47.0 | 10293 | 0.2239 | 0.9762 |
| 0.0 | 48.0 | 10512 | 0.2219 | 0.9762 |
| 0.0 | 49.0 | 10731 | 0.2199 | 0.9762 |
| 0.0 | 50.0 | 10950 | 0.2200 | 0.9762 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_adamax_00001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_adamax_00001_fold4
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4871
- Accuracy: 0.9286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
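The results table below reports 219 optimizer steps per epoch at `train_batch_size: 32`. Assuming the Trainer's default behaviour of not dropping the last partial batch (ceiling division), that step count bounds the size of the training split:

```python
import math

steps_per_epoch = 219  # the first epoch below ends at step 219
batch_size = 32

# steps = ceil(n_examples / batch_size)  =>  n_examples lies in [lo, hi]
lo = (steps_per_epoch - 1) * batch_size + 1
hi = steps_per_epoch * batch_size
print(lo, hi)  # 6977 7008

# sanity check: both bounds reproduce the observed step count
assert math.ceil(lo / batch_size) == steps_per_epoch
assert math.ceil(hi / batch_size) == steps_per_epoch
```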
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3231 | 1.0 | 219 | 0.5052 | 0.8333 |
| 0.0379 | 2.0 | 438 | 0.1942 | 0.9524 |
| 0.0035 | 3.0 | 657 | 0.1985 | 0.9524 |
| 0.0013 | 4.0 | 876 | 0.1951 | 0.9524 |
| 0.0011 | 5.0 | 1095 | 0.2264 | 0.9524 |
| 0.0004 | 6.0 | 1314 | 0.2512 | 0.9286 |
| 0.0002 | 7.0 | 1533 | 0.2460 | 0.9286 |
| 0.0002 | 8.0 | 1752 | 0.2596 | 0.9286 |
| 0.0001 | 9.0 | 1971 | 0.2767 | 0.9286 |
| 0.0001 | 10.0 | 2190 | 0.2928 | 0.9286 |
| 0.0001 | 11.0 | 2409 | 0.2970 | 0.9286 |
| 0.0001 | 12.0 | 2628 | 0.2820 | 0.9286 |
| 0.0001 | 13.0 | 2847 | 0.3043 | 0.9286 |
| 0.0 | 14.0 | 3066 | 0.3118 | 0.9286 |
| 0.0 | 15.0 | 3285 | 0.3261 | 0.9286 |
| 0.0 | 16.0 | 3504 | 0.3432 | 0.9286 |
| 0.0 | 17.0 | 3723 | 0.3644 | 0.9286 |
| 0.0 | 18.0 | 3942 | 0.3650 | 0.9286 |
| 0.0 | 19.0 | 4161 | 0.3445 | 0.9286 |
| 0.0 | 20.0 | 4380 | 0.3724 | 0.9286 |
| 0.0 | 21.0 | 4599 | 0.3804 | 0.9286 |
| 0.0 | 22.0 | 4818 | 0.3614 | 0.9286 |
| 0.0 | 23.0 | 5037 | 0.3623 | 0.9286 |
| 0.0 | 24.0 | 5256 | 0.3788 | 0.9286 |
| 0.0 | 25.0 | 5475 | 0.3915 | 0.9286 |
| 0.0 | 26.0 | 5694 | 0.3845 | 0.9286 |
| 0.0 | 27.0 | 5913 | 0.4164 | 0.9286 |
| 0.0 | 28.0 | 6132 | 0.4005 | 0.9286 |
| 0.0 | 29.0 | 6351 | 0.4219 | 0.9286 |
| 0.0 | 30.0 | 6570 | 0.4046 | 0.9286 |
| 0.0 | 31.0 | 6789 | 0.4132 | 0.9286 |
| 0.0 | 32.0 | 7008 | 0.4320 | 0.9286 |
| 0.0 | 33.0 | 7227 | 0.4247 | 0.9286 |
| 0.0 | 34.0 | 7446 | 0.4486 | 0.9286 |
| 0.0 | 35.0 | 7665 | 0.4281 | 0.9286 |
| 0.0 | 36.0 | 7884 | 0.4371 | 0.9286 |
| 0.0 | 37.0 | 8103 | 0.4603 | 0.9286 |
| 0.0 | 38.0 | 8322 | 0.4397 | 0.9286 |
| 0.0 | 39.0 | 8541 | 0.4445 | 0.9286 |
| 0.0 | 40.0 | 8760 | 0.4589 | 0.9286 |
| 0.0 | 41.0 | 8979 | 0.4480 | 0.9286 |
| 0.0 | 42.0 | 9198 | 0.4530 | 0.9286 |
| 0.0 | 43.0 | 9417 | 0.4708 | 0.9286 |
| 0.0 | 44.0 | 9636 | 0.4852 | 0.9286 |
| 0.0 | 45.0 | 9855 | 0.4662 | 0.9286 |
| 0.0 | 46.0 | 10074 | 0.4885 | 0.9286 |
| 0.0 | 47.0 | 10293 | 0.4916 | 0.9286 |
| 0.0 | 48.0 | 10512 | 0.4924 | 0.9286 |
| 0.0 | 49.0 | 10731 | 0.4929 | 0.9286 |
| 0.0 | 50.0 | 10950 | 0.4871 | 0.9286 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_adamax_0001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_adamax_0001_fold5
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4597
- Accuracy: 0.9268
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0516 | 1.0 | 220 | 0.1392 | 0.9512 |
| 0.0011 | 2.0 | 440 | 0.2522 | 0.9268 |
| 0.0002 | 3.0 | 660 | 0.1314 | 0.9268 |
| 0.0006 | 4.0 | 880 | 0.4185 | 0.9024 |
| 0.0 | 5.0 | 1100 | 0.4569 | 0.9024 |
| 0.0 | 6.0 | 1320 | 0.4037 | 0.9024 |
| 0.0 | 7.0 | 1540 | 0.3953 | 0.9024 |
| 0.0 | 8.0 | 1760 | 0.3986 | 0.9024 |
| 0.0 | 9.0 | 1980 | 0.4084 | 0.9024 |
| 0.0 | 10.0 | 2200 | 0.4117 | 0.9024 |
| 0.0 | 11.0 | 2420 | 0.4151 | 0.9024 |
| 0.0 | 12.0 | 2640 | 0.4165 | 0.9024 |
| 0.0 | 13.0 | 2860 | 0.4199 | 0.9024 |
| 0.0 | 14.0 | 3080 | 0.4254 | 0.9024 |
| 0.0 | 15.0 | 3300 | 0.4180 | 0.9024 |
| 0.0 | 16.0 | 3520 | 0.4254 | 0.9024 |
| 0.0 | 17.0 | 3740 | 0.4273 | 0.9024 |
| 0.0 | 18.0 | 3960 | 0.4239 | 0.9024 |
| 0.0 | 19.0 | 4180 | 0.4240 | 0.9024 |
| 0.0 | 20.0 | 4400 | 0.4255 | 0.9024 |
| 0.0 | 21.0 | 4620 | 0.4197 | 0.9024 |
| 0.0 | 22.0 | 4840 | 0.4256 | 0.9024 |
| 0.0 | 23.0 | 5060 | 0.4276 | 0.9024 |
| 0.0 | 24.0 | 5280 | 0.4178 | 0.9024 |
| 0.0 | 25.0 | 5500 | 0.4247 | 0.9024 |
| 0.0 | 26.0 | 5720 | 0.4224 | 0.9024 |
| 0.0 | 27.0 | 5940 | 0.4294 | 0.9024 |
| 0.0 | 28.0 | 6160 | 0.4224 | 0.9268 |
| 0.0 | 29.0 | 6380 | 0.4213 | 0.9268 |
| 0.0 | 30.0 | 6600 | 0.4256 | 0.9268 |
| 0.0 | 31.0 | 6820 | 0.4281 | 0.9268 |
| 0.0 | 32.0 | 7040 | 0.4157 | 0.9268 |
| 0.0 | 33.0 | 7260 | 0.4223 | 0.9268 |
| 0.0 | 34.0 | 7480 | 0.4175 | 0.9268 |
| 0.0 | 35.0 | 7700 | 0.4230 | 0.9268 |
| 0.0 | 36.0 | 7920 | 0.4204 | 0.9268 |
| 0.0 | 37.0 | 8140 | 0.4311 | 0.9268 |
| 0.0 | 38.0 | 8360 | 0.4343 | 0.9268 |
| 0.0 | 39.0 | 8580 | 0.4379 | 0.9268 |
| 0.0 | 40.0 | 8800 | 0.4426 | 0.9268 |
| 0.0 | 41.0 | 9020 | 0.4413 | 0.9268 |
| 0.0 | 42.0 | 9240 | 0.4428 | 0.9268 |
| 0.0 | 43.0 | 9460 | 0.4470 | 0.9268 |
| 0.0 | 44.0 | 9680 | 0.4517 | 0.9268 |
| 0.0 | 45.0 | 9900 | 0.4526 | 0.9268 |
| 0.0 | 46.0 | 10120 | 0.4472 | 0.9268 |
| 0.0 | 47.0 | 10340 | 0.4509 | 0.9268 |
| 0.0 | 48.0 | 10560 | 0.4588 | 0.9268 |
| 0.0 | 49.0 | 10780 | 0.4589 | 0.9268 |
| 0.0 | 50.0 | 11000 | 0.4597 | 0.9268 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_adamax_00001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_adamax_00001_fold5
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4380
- Accuracy: 0.9024
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2763 | 1.0 | 220 | 0.4688 | 0.7561 |
| 0.0369 | 2.0 | 440 | 0.2296 | 0.9024 |
| 0.0054 | 3.0 | 660 | 0.3595 | 0.8537 |
| 0.0018 | 4.0 | 880 | 0.2500 | 0.9024 |
| 0.0006 | 5.0 | 1100 | 0.2205 | 0.9268 |
| 0.0003 | 6.0 | 1320 | 0.2552 | 0.9024 |
| 0.0003 | 7.0 | 1540 | 0.2520 | 0.9024 |
| 0.0002 | 8.0 | 1760 | 0.2923 | 0.8780 |
| 0.0001 | 9.0 | 1980 | 0.2754 | 0.8780 |
| 0.0001 | 10.0 | 2200 | 0.2945 | 0.8780 |
| 0.0001 | 11.0 | 2420 | 0.2841 | 0.9024 |
| 0.0001 | 12.0 | 2640 | 0.3077 | 0.9024 |
| 0.0001 | 13.0 | 2860 | 0.3076 | 0.8780 |
| 0.0 | 14.0 | 3080 | 0.3160 | 0.8780 |
| 0.0 | 15.0 | 3300 | 0.2918 | 0.9024 |
| 0.0 | 16.0 | 3520 | 0.3305 | 0.8780 |
| 0.0 | 17.0 | 3740 | 0.3206 | 0.8780 |
| 0.0 | 18.0 | 3960 | 0.3174 | 0.8780 |
| 0.0 | 19.0 | 4180 | 0.3189 | 0.8780 |
| 0.0 | 20.0 | 4400 | 0.3130 | 0.9024 |
| 0.0 | 21.0 | 4620 | 0.3383 | 0.8780 |
| 0.0 | 22.0 | 4840 | 0.3473 | 0.8780 |
| 0.0 | 23.0 | 5060 | 0.3548 | 0.8780 |
| 0.0 | 24.0 | 5280 | 0.3221 | 0.8780 |
| 0.0 | 25.0 | 5500 | 0.3554 | 0.8780 |
| 0.0 | 26.0 | 5720 | 0.3715 | 0.8780 |
| 0.0 | 27.0 | 5940 | 0.3690 | 0.8780 |
| 0.0 | 28.0 | 6160 | 0.3648 | 0.8780 |
| 0.0 | 29.0 | 6380 | 0.3806 | 0.8780 |
| 0.0 | 30.0 | 6600 | 0.3725 | 0.8780 |
| 0.0 | 31.0 | 6820 | 0.4022 | 0.8780 |
| 0.0 | 32.0 | 7040 | 0.3871 | 0.8780 |
| 0.0 | 33.0 | 7260 | 0.4133 | 0.8780 |
| 0.0 | 34.0 | 7480 | 0.4117 | 0.8780 |
| 0.0 | 35.0 | 7700 | 0.3832 | 0.8780 |
| 0.0 | 36.0 | 7920 | 0.3977 | 0.8780 |
| 0.0 | 37.0 | 8140 | 0.3959 | 0.8780 |
| 0.0 | 38.0 | 8360 | 0.4687 | 0.8780 |
| 0.0 | 39.0 | 8580 | 0.4404 | 0.8780 |
| 0.0 | 40.0 | 8800 | 0.3819 | 0.9024 |
| 0.0 | 41.0 | 9020 | 0.4514 | 0.8780 |
| 0.0 | 42.0 | 9240 | 0.4623 | 0.8780 |
| 0.0 | 43.0 | 9460 | 0.4136 | 0.9024 |
| 0.0 | 44.0 | 9680 | 0.4401 | 0.9024 |
| 0.0 | 45.0 | 9900 | 0.4714 | 0.9024 |
| 0.0 | 46.0 | 10120 | 0.4588 | 0.9024 |
| 0.0 | 47.0 | 10340 | 0.4584 | 0.9024 |
| 0.0 | 48.0 | 10560 | 0.4588 | 0.9024 |
| 0.0 | 49.0 | 10780 | 0.4430 | 0.9024 |
| 0.0 | 50.0 | 11000 | 0.4380 | 0.9024 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_sgd_001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_sgd_001_fold1
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6398
- Accuracy: 0.7778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1952 | 1.0 | 215 | 1.4276 | 0.3778 |
| 1.0042 | 2.0 | 430 | 1.4162 | 0.2889 |
| 0.8488 | 3.0 | 645 | 1.3544 | 0.3778 |
| 0.7465 | 4.0 | 860 | 1.2674 | 0.4889 |
| 0.6036 | 5.0 | 1075 | 1.1735 | 0.5111 |
| 0.5355 | 6.0 | 1290 | 1.0934 | 0.5556 |
| 0.4712 | 7.0 | 1505 | 1.0208 | 0.6 |
| 0.3805 | 8.0 | 1720 | 0.9372 | 0.6222 |
| 0.3422 | 9.0 | 1935 | 0.8901 | 0.6444 |
| 0.2964 | 10.0 | 2150 | 0.8433 | 0.6667 |
| 0.2485 | 11.0 | 2365 | 0.7909 | 0.6889 |
| 0.2255 | 12.0 | 2580 | 0.7693 | 0.7111 |
| 0.1717 | 13.0 | 2795 | 0.7309 | 0.7556 |
| 0.1588 | 14.0 | 3010 | 0.7252 | 0.7556 |
| 0.1672 | 15.0 | 3225 | 0.6986 | 0.7333 |
| 0.1097 | 16.0 | 3440 | 0.6863 | 0.7333 |
| 0.1167 | 17.0 | 3655 | 0.6753 | 0.7556 |
| 0.0952 | 18.0 | 3870 | 0.6754 | 0.7556 |
| 0.0806 | 19.0 | 4085 | 0.6768 | 0.7556 |
| 0.0794 | 20.0 | 4300 | 0.6533 | 0.7556 |
| 0.0649 | 21.0 | 4515 | 0.6553 | 0.7556 |
| 0.0639 | 22.0 | 4730 | 0.6451 | 0.7556 |
| 0.0578 | 23.0 | 4945 | 0.6498 | 0.7556 |
| 0.0439 | 24.0 | 5160 | 0.6457 | 0.7556 |
| 0.0437 | 25.0 | 5375 | 0.6423 | 0.7556 |
| 0.038 | 26.0 | 5590 | 0.6342 | 0.7556 |
| 0.0346 | 27.0 | 5805 | 0.6184 | 0.7556 |
| 0.0278 | 28.0 | 6020 | 0.6299 | 0.7556 |
| 0.035 | 29.0 | 6235 | 0.6381 | 0.7556 |
| 0.0226 | 30.0 | 6450 | 0.6272 | 0.7556 |
| 0.0178 | 31.0 | 6665 | 0.6325 | 0.7556 |
| 0.019 | 32.0 | 6880 | 0.6409 | 0.7556 |
| 0.0184 | 33.0 | 7095 | 0.6323 | 0.7778 |
| 0.0238 | 34.0 | 7310 | 0.6091 | 0.7556 |
| 0.0126 | 35.0 | 7525 | 0.6363 | 0.7778 |
| 0.0156 | 36.0 | 7740 | 0.6253 | 0.7556 |
| 0.0165 | 37.0 | 7955 | 0.6280 | 0.7556 |
| 0.0106 | 38.0 | 8170 | 0.6294 | 0.7778 |
| 0.0189 | 39.0 | 8385 | 0.6262 | 0.7778 |
| 0.0098 | 40.0 | 8600 | 0.6454 | 0.7556 |
| 0.0098 | 41.0 | 8815 | 0.6342 | 0.7778 |
| 0.0112 | 42.0 | 9030 | 0.6356 | 0.7778 |
| 0.0128 | 43.0 | 9245 | 0.6416 | 0.7778 |
| 0.0115 | 44.0 | 9460 | 0.6374 | 0.7778 |
| 0.0087 | 45.0 | 9675 | 0.6423 | 0.7778 |
| 0.0077 | 46.0 | 9890 | 0.6446 | 0.7778 |
| 0.0087 | 47.0 | 10105 | 0.6388 | 0.7778 |
| 0.0071 | 48.0 | 10320 | 0.6394 | 0.7778 |
| 0.0087 | 49.0 | 10535 | 0.6404 | 0.7778 |
| 0.0108 | 50.0 | 10750 | 0.6398 | 0.7778 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_sgd_0001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_sgd_0001_fold1
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3807
- Accuracy: 0.3556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.7636 | 1.0 | 215 | 1.5271 | 0.2222 |
| 1.5603 | 2.0 | 430 | 1.4695 | 0.2222 |
| 1.4189 | 3.0 | 645 | 1.4485 | 0.2444 |
| 1.4216 | 4.0 | 860 | 1.4404 | 0.3333 |
| 1.3718 | 5.0 | 1075 | 1.4361 | 0.3333 |
| 1.3271 | 6.0 | 1290 | 1.4331 | 0.3556 |
| 1.3291 | 7.0 | 1505 | 1.4309 | 0.3556 |
| 1.2611 | 8.0 | 1720 | 1.4294 | 0.3333 |
| 1.2392 | 9.0 | 1935 | 1.4281 | 0.3333 |
| 1.2352 | 10.0 | 2150 | 1.4273 | 0.3778 |
| 1.2132 | 11.0 | 2365 | 1.4268 | 0.3778 |
| 1.17 | 12.0 | 2580 | 1.4262 | 0.3778 |
| 1.1599 | 13.0 | 2795 | 1.4258 | 0.3778 |
| 1.1465 | 14.0 | 3010 | 1.4259 | 0.3778 |
| 1.1384 | 15.0 | 3225 | 1.4258 | 0.3556 |
| 1.1196 | 16.0 | 3440 | 1.4258 | 0.3333 |
| 1.1235 | 17.0 | 3655 | 1.4254 | 0.3333 |
| 1.092 | 18.0 | 3870 | 1.4252 | 0.3333 |
| 1.0493 | 19.0 | 4085 | 1.4248 | 0.3333 |
| 1.0602 | 20.0 | 4300 | 1.4241 | 0.2889 |
| 1.0537 | 21.0 | 4515 | 1.4232 | 0.2889 |
| 1.0424 | 22.0 | 4730 | 1.4223 | 0.2889 |
| 1.0373 | 23.0 | 4945 | 1.4208 | 0.2889 |
| 1.0255 | 24.0 | 5160 | 1.4191 | 0.3111 |
| 0.9946 | 25.0 | 5375 | 1.4173 | 0.3111 |
| 0.9526 | 26.0 | 5590 | 1.4155 | 0.3111 |
| 0.961 | 27.0 | 5805 | 1.4133 | 0.3111 |
| 0.9603 | 28.0 | 6020 | 1.4115 | 0.3111 |
| 0.9689 | 29.0 | 6235 | 1.4091 | 0.3111 |
| 0.9155 | 30.0 | 6450 | 1.4068 | 0.3111 |
| 0.9244 | 31.0 | 6665 | 1.4046 | 0.3111 |
| 0.9454 | 32.0 | 6880 | 1.4024 | 0.3111 |
| 0.9669 | 33.0 | 7095 | 1.4003 | 0.3111 |
| 0.935 | 34.0 | 7310 | 1.3982 | 0.3333 |
| 0.887 | 35.0 | 7525 | 1.3962 | 0.3333 |
| 0.9142 | 36.0 | 7740 | 1.3943 | 0.3333 |
| 0.9282 | 37.0 | 7955 | 1.3924 | 0.3333 |
| 0.8935 | 38.0 | 8170 | 1.3908 | 0.3333 |
| 0.9345 | 39.0 | 8385 | 1.3890 | 0.3333 |
| 0.8406 | 40.0 | 8600 | 1.3876 | 0.3333 |
| 0.8885 | 41.0 | 8815 | 1.3862 | 0.3333 |
| 0.9974 | 42.0 | 9030 | 1.3851 | 0.3333 |
| 0.9464 | 43.0 | 9245 | 1.3840 | 0.3333 |
| 0.9071 | 44.0 | 9460 | 1.3830 | 0.3333 |
| 0.9277 | 45.0 | 9675 | 1.3823 | 0.3333 |
| 0.8844 | 46.0 | 9890 | 1.3817 | 0.3333 |
| 0.8843 | 47.0 | 10105 | 1.3812 | 0.3333 |
| 0.9119 | 48.0 | 10320 | 1.3809 | 0.3556 |
| 0.9448 | 49.0 | 10535 | 1.3808 | 0.3556 |
| 0.8919 | 50.0 | 10750 | 1.3807 | 0.3556 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_rms_00001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_rms_00001_fold1
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9061
- Accuracy: 0.8667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0132 | 1.0 | 215 | 0.4686 | 0.8444 |
| 0.0004 | 2.0 | 430 | 0.6106 | 0.8222 |
| 0.0016 | 3.0 | 645 | 0.7608 | 0.8 |
| 0.0 | 4.0 | 860 | 0.5588 | 0.8667 |
| 0.0 | 5.0 | 1075 | 0.5395 | 0.8667 |
| 0.0 | 6.0 | 1290 | 0.5368 | 0.8889 |
| 0.0 | 7.0 | 1505 | 0.5575 | 0.8889 |
| 0.0 | 8.0 | 1720 | 0.5516 | 0.9111 |
| 0.0 | 9.0 | 1935 | 0.5817 | 0.9111 |
| 0.0 | 10.0 | 2150 | 0.5914 | 0.8667 |
| 0.0 | 11.0 | 2365 | 0.6168 | 0.8667 |
| 0.0 | 12.0 | 2580 | 0.7197 | 0.8667 |
| 0.0 | 13.0 | 2795 | 0.7066 | 0.8667 |
| 0.0 | 14.0 | 3010 | 0.7905 | 0.8667 |
| 0.0 | 15.0 | 3225 | 0.8099 | 0.8667 |
| 0.0 | 16.0 | 3440 | 0.9402 | 0.8444 |
| 0.0 | 17.0 | 3655 | 0.9239 | 0.8667 |
| 0.0 | 18.0 | 3870 | 0.9014 | 0.8444 |
| 0.0 | 19.0 | 4085 | 0.9346 | 0.8667 |
| 0.0 | 20.0 | 4300 | 0.8551 | 0.8667 |
| 0.0 | 21.0 | 4515 | 0.8933 | 0.8667 |
| 0.0 | 22.0 | 4730 | 0.9137 | 0.8667 |
| 0.0 | 23.0 | 4945 | 0.9179 | 0.8667 |
| 0.0 | 24.0 | 5160 | 0.8411 | 0.8667 |
| 0.0 | 25.0 | 5375 | 0.9276 | 0.8667 |
| 0.0 | 26.0 | 5590 | 0.9081 | 0.8667 |
| 0.0 | 27.0 | 5805 | 0.9378 | 0.8667 |
| 0.0 | 28.0 | 6020 | 0.9015 | 0.8667 |
| 0.0 | 29.0 | 6235 | 0.8989 | 0.8667 |
| 0.0 | 30.0 | 6450 | 0.9223 | 0.8667 |
| 0.0 | 31.0 | 6665 | 0.9424 | 0.8667 |
| 0.0 | 32.0 | 6880 | 0.9057 | 0.8667 |
| 0.0 | 33.0 | 7095 | 0.8894 | 0.8667 |
| 0.0 | 34.0 | 7310 | 0.9300 | 0.8667 |
| 0.0 | 35.0 | 7525 | 0.9491 | 0.8667 |
| 0.0 | 36.0 | 7740 | 0.8980 | 0.8667 |
| 0.0 | 37.0 | 7955 | 0.8706 | 0.8667 |
| 0.0 | 38.0 | 8170 | 0.8943 | 0.8667 |
| 0.0 | 39.0 | 8385 | 0.9073 | 0.8667 |
| 0.0 | 40.0 | 8600 | 0.9075 | 0.8667 |
| 0.0 | 41.0 | 8815 | 0.9113 | 0.8667 |
| 0.0 | 42.0 | 9030 | 0.9138 | 0.8667 |
| 0.0 | 43.0 | 9245 | 0.9218 | 0.8667 |
| 0.0 | 44.0 | 9460 | 0.9089 | 0.8667 |
| 0.0 | 45.0 | 9675 | 0.9120 | 0.8667 |
| 0.0 | 46.0 | 9890 | 0.9019 | 0.8667 |
| 0.0 | 47.0 | 10105 | 0.9058 | 0.8667 |
| 0.0 | 48.0 | 10320 | 0.9063 | 0.8667 |
| 0.0 | 49.0 | 10535 | 0.9035 | 0.8667 |
| 0.0 | 50.0 | 10750 | 0.9061 | 0.8667 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_sgd_001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_sgd_001_fold2
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9318
- Accuracy: 0.7333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1891 | 1.0 | 215 | 1.3300 | 0.3778 |
| 0.9647 | 2.0 | 430 | 1.2794 | 0.4444 |
| 0.8581 | 3.0 | 645 | 1.2244 | 0.5111 |
| 0.699 | 4.0 | 860 | 1.1784 | 0.5333 |
| 0.6158 | 5.0 | 1075 | 1.1498 | 0.5111 |
| 0.5391 | 6.0 | 1290 | 1.1059 | 0.5556 |
| 0.4953 | 7.0 | 1505 | 1.0650 | 0.5333 |
| 0.4016 | 8.0 | 1720 | 1.0249 | 0.5556 |
| 0.3397 | 9.0 | 1935 | 0.9796 | 0.6222 |
| 0.3003 | 10.0 | 2150 | 0.9463 | 0.7111 |
| 0.246 | 11.0 | 2365 | 0.9270 | 0.7111 |
| 0.1949 | 12.0 | 2580 | 0.9025 | 0.7111 |
| 0.1895 | 13.0 | 2795 | 0.8872 | 0.7111 |
| 0.1659 | 14.0 | 3010 | 0.8723 | 0.7111 |
| 0.1576 | 15.0 | 3225 | 0.8544 | 0.7111 |
| 0.1305 | 16.0 | 3440 | 0.8521 | 0.7111 |
| 0.1123 | 17.0 | 3655 | 0.8414 | 0.7111 |
| 0.1025 | 18.0 | 3870 | 0.8453 | 0.7111 |
| 0.0749 | 19.0 | 4085 | 0.8597 | 0.7111 |
| 0.0854 | 20.0 | 4300 | 0.8467 | 0.7111 |
| 0.0788 | 21.0 | 4515 | 0.8314 | 0.7111 |
| 0.0675 | 22.0 | 4730 | 0.8392 | 0.7111 |
| 0.0523 | 23.0 | 4945 | 0.8293 | 0.7111 |
| 0.0556 | 24.0 | 5160 | 0.8555 | 0.7111 |
| 0.0483 | 25.0 | 5375 | 0.8566 | 0.7111 |
| 0.0417 | 26.0 | 5590 | 0.8533 | 0.7111 |
| 0.0397 | 27.0 | 5805 | 0.8560 | 0.7333 |
| 0.0302 | 28.0 | 6020 | 0.8587 | 0.7333 |
| 0.0286 | 29.0 | 6235 | 0.8633 | 0.7333 |
| 0.0386 | 30.0 | 6450 | 0.8691 | 0.7333 |
| 0.0212 | 31.0 | 6665 | 0.8693 | 0.7333 |
| 0.0221 | 32.0 | 6880 | 0.8714 | 0.7333 |
| 0.0198 | 33.0 | 7095 | 0.8818 | 0.7333 |
| 0.0189 | 34.0 | 7310 | 0.8880 | 0.7333 |
| 0.0167 | 35.0 | 7525 | 0.8939 | 0.7333 |
| 0.0198 | 36.0 | 7740 | 0.9010 | 0.7333 |
| 0.0157 | 37.0 | 7955 | 0.8988 | 0.7333 |
| 0.0177 | 38.0 | 8170 | 0.9154 | 0.7333 |
| 0.0136 | 39.0 | 8385 | 0.9094 | 0.7333 |
| 0.0108 | 40.0 | 8600 | 0.9213 | 0.7333 |
| 0.0119 | 41.0 | 8815 | 0.9173 | 0.7333 |
| 0.0127 | 42.0 | 9030 | 0.9219 | 0.7333 |
| 0.0095 | 43.0 | 9245 | 0.9256 | 0.7333 |
| 0.0124 | 44.0 | 9460 | 0.9223 | 0.7333 |
| 0.0112 | 45.0 | 9675 | 0.9246 | 0.7333 |
| 0.0112 | 46.0 | 9890 | 0.9266 | 0.7333 |
| 0.0102 | 47.0 | 10105 | 0.9301 | 0.7333 |
| 0.0105 | 48.0 | 10320 | 0.9338 | 0.7333 |
| 0.0119 | 49.0 | 10535 | 0.9314 | 0.7333 |
| 0.0144 | 50.0 | 10750 | 0.9318 | 0.7333 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_sgd_0001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_sgd_0001_fold2
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2548
- Accuracy: 0.5111
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.7252 | 1.0 | 215 | 1.5329 | 0.2667 |
| 1.5501 | 2.0 | 430 | 1.4609 | 0.3111 |
| 1.4663 | 3.0 | 645 | 1.4250 | 0.3111 |
| 1.4117 | 4.0 | 860 | 1.4032 | 0.2889 |
| 1.3533 | 5.0 | 1075 | 1.3870 | 0.2667 |
| 1.3221 | 6.0 | 1290 | 1.3733 | 0.2889 |
| 1.3111 | 7.0 | 1505 | 1.3613 | 0.3111 |
| 1.2698 | 8.0 | 1720 | 1.3509 | 0.3111 |
| 1.2425 | 9.0 | 1935 | 1.3418 | 0.3111 |
| 1.2243 | 10.0 | 2150 | 1.3338 | 0.3778 |
| 1.2016 | 11.0 | 2365 | 1.3268 | 0.3778 |
| 1.1128 | 12.0 | 2580 | 1.3203 | 0.3556 |
| 1.174 | 13.0 | 2795 | 1.3136 | 0.3556 |
| 1.1731 | 14.0 | 3010 | 1.3081 | 0.4 |
| 1.141 | 15.0 | 3225 | 1.3031 | 0.4 |
| 1.1163 | 16.0 | 3440 | 1.2979 | 0.4 |
| 1.1128 | 17.0 | 3655 | 1.2946 | 0.4222 |
| 1.0806 | 18.0 | 3870 | 1.2916 | 0.4222 |
| 1.0332 | 19.0 | 4085 | 1.2893 | 0.3778 |
| 1.0358 | 20.0 | 4300 | 1.2875 | 0.4 |
| 1.0352 | 21.0 | 4515 | 1.2855 | 0.4 |
| 1.0257 | 22.0 | 4730 | 1.2838 | 0.4 |
| 1.0362 | 23.0 | 4945 | 1.2822 | 0.4 |
| 1.0137 | 24.0 | 5160 | 1.2805 | 0.4 |
| 1.0067 | 25.0 | 5375 | 1.2787 | 0.4222 |
| 0.9834 | 26.0 | 5590 | 1.2771 | 0.4667 |
| 0.9889 | 27.0 | 5805 | 1.2753 | 0.4667 |
| 0.9291 | 28.0 | 6020 | 1.2744 | 0.4667 |
| 0.9563 | 29.0 | 6235 | 1.2728 | 0.4667 |
| 0.9949 | 30.0 | 6450 | 1.2710 | 0.4667 |
| 0.9331 | 31.0 | 6665 | 1.2698 | 0.4667 |
| 0.9189 | 32.0 | 6880 | 1.2683 | 0.4889 |
| 0.8977 | 33.0 | 7095 | 1.2667 | 0.4889 |
| 0.9506 | 34.0 | 7310 | 1.2657 | 0.4889 |
| 0.9018 | 35.0 | 7525 | 1.2644 | 0.4889 |
| 0.9085 | 36.0 | 7740 | 1.2632 | 0.4889 |
| 0.9525 | 37.0 | 7955 | 1.2617 | 0.4889 |
| 0.9147 | 38.0 | 8170 | 1.2608 | 0.4889 |
| 0.8837 | 39.0 | 8385 | 1.2597 | 0.5111 |
| 0.9228 | 40.0 | 8600 | 1.2588 | 0.5111 |
| 0.8773 | 41.0 | 8815 | 1.2582 | 0.5111 |
| 0.8964 | 42.0 | 9030 | 1.2574 | 0.5111 |
| 0.8892 | 43.0 | 9245 | 1.2568 | 0.5111 |
| 0.8986 | 44.0 | 9460 | 1.2562 | 0.5111 |
| 0.9114 | 45.0 | 9675 | 1.2557 | 0.5111 |
| 0.8745 | 46.0 | 9890 | 1.2553 | 0.5111 |
| 0.9224 | 47.0 | 10105 | 1.2551 | 0.5111 |
| 0.9229 | 48.0 | 10320 | 1.2549 | 0.5111 |
| 0.9087 | 49.0 | 10535 | 1.2549 | 0.5111 |
| 0.9371 | 50.0 | 10750 | 1.2548 | 0.5111 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_sgd_001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_sgd_001_fold3
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3304
- Accuracy: 0.8837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2372 | 1.0 | 217 | 1.2798 | 0.3488 |
| 1.0621 | 2.0 | 434 | 1.1335 | 0.5814 |
| 0.8881 | 3.0 | 651 | 1.0243 | 0.5814 |
| 0.7868 | 4.0 | 868 | 0.9174 | 0.6279 |
| 0.6948 | 5.0 | 1085 | 0.8587 | 0.6279 |
| 0.5714 | 6.0 | 1302 | 0.7810 | 0.7209 |
| 0.4585 | 7.0 | 1519 | 0.7011 | 0.8140 |
| 0.4277 | 8.0 | 1736 | 0.6580 | 0.7907 |
| 0.3688 | 9.0 | 1953 | 0.6164 | 0.8140 |
| 0.2836 | 10.0 | 2170 | 0.5578 | 0.8140 |
| 0.2148 | 11.0 | 2387 | 0.5322 | 0.8140 |
| 0.2211 | 12.0 | 2604 | 0.5199 | 0.8140 |
| 0.2014 | 13.0 | 2821 | 0.4865 | 0.8140 |
| 0.1799 | 14.0 | 3038 | 0.4815 | 0.8140 |
| 0.1565 | 15.0 | 3255 | 0.4749 | 0.7907 |
| 0.1129 | 16.0 | 3472 | 0.4440 | 0.8372 |
| 0.0992 | 17.0 | 3689 | 0.4542 | 0.7907 |
| 0.1 | 18.0 | 3906 | 0.4290 | 0.8140 |
| 0.0944 | 19.0 | 4123 | 0.4149 | 0.8140 |
| 0.0856 | 20.0 | 4340 | 0.4111 | 0.8372 |
| 0.0816 | 21.0 | 4557 | 0.4115 | 0.8140 |
| 0.0563 | 22.0 | 4774 | 0.3956 | 0.7907 |
| 0.0625 | 23.0 | 4991 | 0.3834 | 0.7907 |
| 0.0683 | 24.0 | 5208 | 0.3893 | 0.7907 |
| 0.0454 | 25.0 | 5425 | 0.3773 | 0.8140 |
| 0.0571 | 26.0 | 5642 | 0.3874 | 0.7907 |
| 0.0322 | 27.0 | 5859 | 0.3743 | 0.8140 |
| 0.0339 | 28.0 | 6076 | 0.3713 | 0.8372 |
| 0.0345 | 29.0 | 6293 | 0.3616 | 0.8372 |
| 0.0434 | 30.0 | 6510 | 0.3686 | 0.8372 |
| 0.0377 | 31.0 | 6727 | 0.3495 | 0.8605 |
| 0.0295 | 32.0 | 6944 | 0.3476 | 0.8372 |
| 0.0279 | 33.0 | 7161 | 0.3534 | 0.8605 |
| 0.0232 | 34.0 | 7378 | 0.3489 | 0.8372 |
| 0.0275 | 35.0 | 7595 | 0.3346 | 0.8837 |
| 0.0214 | 36.0 | 7812 | 0.3309 | 0.8605 |
| 0.018 | 37.0 | 8029 | 0.3342 | 0.8605 |
| 0.0167 | 38.0 | 8246 | 0.3289 | 0.8837 |
| 0.0196 | 39.0 | 8463 | 0.3389 | 0.8605 |
| 0.0269 | 40.0 | 8680 | 0.3388 | 0.8605 |
| 0.0126 | 41.0 | 8897 | 0.3309 | 0.8605 |
| 0.0119 | 42.0 | 9114 | 0.3316 | 0.8837 |
| 0.0174 | 43.0 | 9331 | 0.3268 | 0.8837 |
| 0.0199 | 44.0 | 9548 | 0.3304 | 0.8837 |
| 0.0115 | 45.0 | 9765 | 0.3378 | 0.8605 |
| 0.0138 | 46.0 | 9982 | 0.3301 | 0.8837 |
| 0.0107 | 47.0 | 10199 | 0.3312 | 0.8605 |
| 0.0108 | 48.0 | 10416 | 0.3294 | 0.9070 |
| 0.0125 | 49.0 | 10633 | 0.3301 | 0.8837 |
| 0.0148 | 50.0 | 10850 | 0.3304 | 0.8837 |
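The Step column advances by 217 per epoch. With a train batch size of 32 and ceil-division batching (a final partial batch still counts as one step — an assumption about the Trainer setup), that bounds the training-set size at 6913 to 6944 images:

```python
import math

def steps_per_epoch(num_samples, batch_size):
    # One optimizer step per batch; the final partial batch is included.
    return math.ceil(num_samples / batch_size)

# Any n with 216*32 < n <= 217*32 yields 217 steps per epoch.
assert steps_per_epoch(216 * 32 + 1, 32) == 217  # n = 6913
assert steps_per_epoch(217 * 32, 32) == 217      # n = 6944
assert steps_per_epoch(217 * 32 + 1, 32) == 218
```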
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
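The `model_labels` array above maps directly onto the `id2label`/`label2id` fields of a standard transformers image-classification config, with list order defining the class ids. A minimal sketch (field names assumed from the common config convention):

```python
labels = ["01_normal", "02_tapered", "03_pyriform", "04_amorphous"]

# Index order in the list defines the class ids the classifier head emits.
id2label = {i: name for i, name in enumerate(labels)}
label2id = {name: i for i, name in enumerate(labels)}

print(id2label[2])               # 03_pyriform
print(label2id["04_amorphous"])  # 3
```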
hkivancoral/hushem_40x_deit_small_sgd_0001_fold3
|
# hushem_40x_deit_small_sgd_0001_fold3
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0466
- Accuracy: 0.5814
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.6887 | 1.0 | 217 | 1.4610 | 0.2558 |
| 1.5962 | 2.0 | 434 | 1.3845 | 0.3488 |
| 1.5124 | 3.0 | 651 | 1.3560 | 0.3721 |
| 1.442 | 4.0 | 868 | 1.3419 | 0.3721 |
| 1.41 | 5.0 | 1085 | 1.3313 | 0.3488 |
| 1.3709 | 6.0 | 1302 | 1.3218 | 0.3721 |
| 1.3157 | 7.0 | 1519 | 1.3125 | 0.3721 |
| 1.3328 | 8.0 | 1736 | 1.3039 | 0.3488 |
| 1.3107 | 9.0 | 1953 | 1.2950 | 0.3488 |
| 1.2568 | 10.0 | 2170 | 1.2861 | 0.3488 |
| 1.2226 | 11.0 | 2387 | 1.2769 | 0.3256 |
| 1.198 | 12.0 | 2604 | 1.2671 | 0.3256 |
| 1.232 | 13.0 | 2821 | 1.2570 | 0.3488 |
| 1.1803 | 14.0 | 3038 | 1.2472 | 0.3488 |
| 1.214 | 15.0 | 3255 | 1.2376 | 0.3488 |
| 1.208 | 16.0 | 3472 | 1.2274 | 0.3953 |
| 1.1406 | 17.0 | 3689 | 1.2176 | 0.3953 |
| 1.1243 | 18.0 | 3906 | 1.2072 | 0.3953 |
| 1.1316 | 19.0 | 4123 | 1.1970 | 0.4884 |
| 1.1119 | 20.0 | 4340 | 1.1873 | 0.4884 |
| 1.117 | 21.0 | 4557 | 1.1775 | 0.5116 |
| 1.0609 | 22.0 | 4774 | 1.1681 | 0.5116 |
| 1.0751 | 23.0 | 4991 | 1.1588 | 0.5581 |
| 1.058 | 24.0 | 5208 | 1.1499 | 0.5581 |
| 1.0301 | 25.0 | 5425 | 1.1417 | 0.5581 |
| 1.089 | 26.0 | 5642 | 1.1338 | 0.5581 |
| 0.9909 | 27.0 | 5859 | 1.1255 | 0.5814 |
| 0.9932 | 28.0 | 6076 | 1.1180 | 0.5814 |
| 1.026 | 29.0 | 6293 | 1.1110 | 0.5814 |
| 1.0236 | 30.0 | 6510 | 1.1044 | 0.5814 |
| 1.0169 | 31.0 | 6727 | 1.0980 | 0.5814 |
| 1.0049 | 32.0 | 6944 | 1.0921 | 0.5814 |
| 1.0261 | 33.0 | 7161 | 1.0868 | 0.5814 |
| 0.994 | 34.0 | 7378 | 1.0819 | 0.5814 |
| 0.9887 | 35.0 | 7595 | 1.0769 | 0.5581 |
| 1.0137 | 36.0 | 7812 | 1.0725 | 0.5581 |
| 0.9359 | 37.0 | 8029 | 1.0687 | 0.5581 |
| 0.9531 | 38.0 | 8246 | 1.0651 | 0.5581 |
| 0.9682 | 39.0 | 8463 | 1.0620 | 0.5581 |
| 0.9947 | 40.0 | 8680 | 1.0590 | 0.5581 |
| 0.9063 | 41.0 | 8897 | 1.0565 | 0.5581 |
| 1.0195 | 42.0 | 9114 | 1.0543 | 0.5581 |
| 0.966 | 43.0 | 9331 | 1.0523 | 0.5581 |
| 0.9409 | 44.0 | 9548 | 1.0506 | 0.5581 |
| 0.9327 | 45.0 | 9765 | 1.0492 | 0.5581 |
| 0.9575 | 46.0 | 9982 | 1.0481 | 0.5814 |
| 0.9627 | 47.0 | 10199 | 1.0474 | 0.5814 |
| 0.9553 | 48.0 | 10416 | 1.0469 | 0.5814 |
| 0.9631 | 49.0 | 10633 | 1.0467 | 0.5814 |
| 0.944 | 50.0 | 10850 | 1.0466 | 0.5814 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_sgd_001_fold4
|
# hushem_40x_deit_small_sgd_001_fold4
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2365
- Accuracy: 0.8810
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2757 | 1.0 | 219 | 1.3298 | 0.2619 |
| 1.0766 | 2.0 | 438 | 1.1919 | 0.4048 |
| 0.9095 | 3.0 | 657 | 1.0786 | 0.5476 |
| 0.7507 | 4.0 | 876 | 0.9821 | 0.5476 |
| 0.6994 | 5.0 | 1095 | 0.8850 | 0.5952 |
| 0.5864 | 6.0 | 1314 | 0.8204 | 0.6429 |
| 0.4328 | 7.0 | 1533 | 0.7576 | 0.6905 |
| 0.4293 | 8.0 | 1752 | 0.6999 | 0.7143 |
| 0.3464 | 9.0 | 1971 | 0.6320 | 0.7143 |
| 0.3175 | 10.0 | 2190 | 0.5956 | 0.7381 |
| 0.2382 | 11.0 | 2409 | 0.5588 | 0.7381 |
| 0.2672 | 12.0 | 2628 | 0.5195 | 0.7381 |
| 0.2016 | 13.0 | 2847 | 0.4850 | 0.8095 |
| 0.1832 | 14.0 | 3066 | 0.4528 | 0.8095 |
| 0.1406 | 15.0 | 3285 | 0.4338 | 0.8333 |
| 0.1305 | 16.0 | 3504 | 0.3948 | 0.8571 |
| 0.1504 | 17.0 | 3723 | 0.3785 | 0.8571 |
| 0.1139 | 18.0 | 3942 | 0.3689 | 0.8571 |
| 0.096 | 19.0 | 4161 | 0.3548 | 0.8571 |
| 0.0869 | 20.0 | 4380 | 0.3393 | 0.8571 |
| 0.0874 | 21.0 | 4599 | 0.3057 | 0.8571 |
| 0.0797 | 22.0 | 4818 | 0.2990 | 0.8571 |
| 0.0596 | 23.0 | 5037 | 0.2862 | 0.8571 |
| 0.053 | 24.0 | 5256 | 0.3012 | 0.8810 |
| 0.0562 | 25.0 | 5475 | 0.2885 | 0.8810 |
| 0.0463 | 26.0 | 5694 | 0.2676 | 0.8810 |
| 0.0374 | 27.0 | 5913 | 0.2870 | 0.8810 |
| 0.037 | 28.0 | 6132 | 0.2638 | 0.8810 |
| 0.0341 | 29.0 | 6351 | 0.2690 | 0.8810 |
| 0.0327 | 30.0 | 6570 | 0.2566 | 0.8810 |
| 0.0238 | 31.0 | 6789 | 0.2611 | 0.8810 |
| 0.0256 | 32.0 | 7008 | 0.2643 | 0.8810 |
| 0.0284 | 33.0 | 7227 | 0.2717 | 0.8810 |
| 0.0213 | 34.0 | 7446 | 0.2627 | 0.8810 |
| 0.0191 | 35.0 | 7665 | 0.2395 | 0.8810 |
| 0.0246 | 36.0 | 7884 | 0.2517 | 0.8810 |
| 0.0207 | 37.0 | 8103 | 0.2515 | 0.8810 |
| 0.0134 | 38.0 | 8322 | 0.2484 | 0.8810 |
| 0.0162 | 39.0 | 8541 | 0.2279 | 0.8810 |
| 0.0165 | 40.0 | 8760 | 0.2516 | 0.8810 |
| 0.0146 | 41.0 | 8979 | 0.2253 | 0.8810 |
| 0.0168 | 42.0 | 9198 | 0.2425 | 0.8810 |
| 0.0155 | 43.0 | 9417 | 0.2370 | 0.8810 |
| 0.0145 | 44.0 | 9636 | 0.2352 | 0.8810 |
| 0.0118 | 45.0 | 9855 | 0.2414 | 0.8810 |
| 0.0107 | 46.0 | 10074 | 0.2338 | 0.8810 |
| 0.0124 | 47.0 | 10293 | 0.2350 | 0.8810 |
| 0.0125 | 48.0 | 10512 | 0.2352 | 0.8810 |
| 0.0138 | 49.0 | 10731 | 0.2367 | 0.8810 |
| 0.0183 | 50.0 | 10950 | 0.2365 | 0.8810 |
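Note that the reported eval numbers come from the final epoch, which is not always the best checkpoint (validation loss above bottoms out at 0.2253 in epoch 41). A small helper for picking the best row from such a table, assuming rows as `(epoch, val_loss, accuracy)` tuples:

```python
def best_epoch(rows, by="accuracy"):
    """Pick the best row: highest accuracy, or lowest validation loss."""
    if by == "accuracy":
        return max(rows, key=lambda r: r[2])
    return min(rows, key=lambda r: r[1])

rows = [(41, 0.2253, 0.8810), (49, 0.2367, 0.8810), (50, 0.2365, 0.8810)]
print(best_epoch(rows, by="val_loss"))  # epoch 41 wins on loss
```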
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_sgd_0001_fold4
|
# hushem_40x_deit_small_sgd_0001_fold4
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1108
- Accuracy: 0.5238
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.763 | 1.0 | 219 | 1.5785 | 0.2857 |
| 1.5719 | 2.0 | 438 | 1.5003 | 0.2857 |
| 1.452 | 3.0 | 657 | 1.4620 | 0.2143 |
| 1.4006 | 4.0 | 876 | 1.4368 | 0.2143 |
| 1.3854 | 5.0 | 1095 | 1.4164 | 0.2143 |
| 1.3041 | 6.0 | 1314 | 1.3988 | 0.2381 |
| 1.296 | 7.0 | 1533 | 1.3830 | 0.2619 |
| 1.276 | 8.0 | 1752 | 1.3685 | 0.2381 |
| 1.2474 | 9.0 | 1971 | 1.3546 | 0.2381 |
| 1.2128 | 10.0 | 2190 | 1.3420 | 0.2381 |
| 1.2113 | 11.0 | 2409 | 1.3297 | 0.2381 |
| 1.2121 | 12.0 | 2628 | 1.3176 | 0.2619 |
| 1.1861 | 13.0 | 2847 | 1.3062 | 0.2619 |
| 1.1756 | 14.0 | 3066 | 1.2946 | 0.3095 |
| 1.1431 | 15.0 | 3285 | 1.2837 | 0.3571 |
| 1.1487 | 16.0 | 3504 | 1.2730 | 0.3095 |
| 1.1705 | 17.0 | 3723 | 1.2625 | 0.3095 |
| 1.1482 | 18.0 | 3942 | 1.2522 | 0.2857 |
| 1.1037 | 19.0 | 4161 | 1.2421 | 0.3095 |
| 1.0872 | 20.0 | 4380 | 1.2325 | 0.3810 |
| 1.1026 | 21.0 | 4599 | 1.2229 | 0.4048 |
| 1.0517 | 22.0 | 4818 | 1.2135 | 0.4048 |
| 1.0226 | 23.0 | 5037 | 1.2052 | 0.4286 |
| 1.0485 | 24.0 | 5256 | 1.1974 | 0.4286 |
| 1.0319 | 25.0 | 5475 | 1.1896 | 0.4286 |
| 0.9983 | 26.0 | 5694 | 1.1821 | 0.4286 |
| 1.0014 | 27.0 | 5913 | 1.1755 | 0.4048 |
| 1.0162 | 28.0 | 6132 | 1.1694 | 0.4048 |
| 0.986 | 29.0 | 6351 | 1.1635 | 0.4048 |
| 0.9747 | 30.0 | 6570 | 1.1582 | 0.4286 |
| 0.9811 | 31.0 | 6789 | 1.1532 | 0.4286 |
| 0.9907 | 32.0 | 7008 | 1.1482 | 0.4286 |
| 0.9904 | 33.0 | 7227 | 1.1437 | 0.4286 |
| 0.9293 | 34.0 | 7446 | 1.1399 | 0.4524 |
| 0.9752 | 35.0 | 7665 | 1.1362 | 0.4524 |
| 0.9789 | 36.0 | 7884 | 1.1326 | 0.4762 |
| 0.9516 | 37.0 | 8103 | 1.1293 | 0.5 |
| 0.9703 | 38.0 | 8322 | 1.1262 | 0.5 |
| 0.8944 | 39.0 | 8541 | 1.1236 | 0.5238 |
| 0.9388 | 40.0 | 8760 | 1.1213 | 0.5238 |
| 0.9573 | 41.0 | 8979 | 1.1191 | 0.5238 |
| 0.9441 | 42.0 | 9198 | 1.1172 | 0.5238 |
| 0.9438 | 43.0 | 9417 | 1.1156 | 0.5238 |
| 0.9221 | 44.0 | 9636 | 1.1141 | 0.5238 |
| 0.9079 | 45.0 | 9855 | 1.1130 | 0.5238 |
| 0.962 | 46.0 | 10074 | 1.1121 | 0.5238 |
| 0.9464 | 47.0 | 10293 | 1.1114 | 0.5238 |
| 0.9323 | 48.0 | 10512 | 1.1110 | 0.5238 |
| 0.9581 | 49.0 | 10731 | 1.1108 | 0.5238 |
| 0.942 | 50.0 | 10950 | 1.1108 | 0.5238 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_sgd_001_fold5
|
# hushem_40x_deit_small_sgd_001_fold5
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3138
- Accuracy: 0.8537
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2269 | 1.0 | 220 | 1.3889 | 0.3171 |
| 0.9985 | 2.0 | 440 | 1.1915 | 0.3902 |
| 0.8593 | 3.0 | 660 | 1.0700 | 0.5366 |
| 0.7571 | 4.0 | 880 | 0.9669 | 0.5854 |
| 0.6777 | 5.0 | 1100 | 0.8606 | 0.6341 |
| 0.5274 | 6.0 | 1320 | 0.7627 | 0.6585 |
| 0.4414 | 7.0 | 1540 | 0.6648 | 0.7317 |
| 0.4175 | 8.0 | 1760 | 0.6018 | 0.7805 |
| 0.316 | 9.0 | 1980 | 0.5359 | 0.7805 |
| 0.2588 | 10.0 | 2200 | 0.4774 | 0.8049 |
| 0.2327 | 11.0 | 2420 | 0.4274 | 0.8537 |
| 0.1962 | 12.0 | 2640 | 0.3959 | 0.8293 |
| 0.1844 | 13.0 | 2860 | 0.3504 | 0.8293 |
| 0.1541 | 14.0 | 3080 | 0.3380 | 0.8293 |
| 0.1252 | 15.0 | 3300 | 0.3134 | 0.8293 |
| 0.1083 | 16.0 | 3520 | 0.2915 | 0.8537 |
| 0.0934 | 17.0 | 3740 | 0.2801 | 0.8537 |
| 0.092 | 18.0 | 3960 | 0.2741 | 0.8780 |
| 0.083 | 19.0 | 4180 | 0.2729 | 0.8780 |
| 0.0654 | 20.0 | 4400 | 0.2600 | 0.8780 |
| 0.0561 | 21.0 | 4620 | 0.2597 | 0.8780 |
| 0.0451 | 22.0 | 4840 | 0.2669 | 0.8780 |
| 0.0472 | 23.0 | 5060 | 0.2513 | 0.8780 |
| 0.0354 | 24.0 | 5280 | 0.2691 | 0.8780 |
| 0.0367 | 25.0 | 5500 | 0.2575 | 0.8780 |
| 0.0263 | 26.0 | 5720 | 0.2498 | 0.8780 |
| 0.0333 | 27.0 | 5940 | 0.2592 | 0.8780 |
| 0.0309 | 28.0 | 6160 | 0.2598 | 0.8780 |
| 0.0217 | 29.0 | 6380 | 0.2806 | 0.8780 |
| 0.0156 | 30.0 | 6600 | 0.2646 | 0.8780 |
| 0.0138 | 31.0 | 6820 | 0.2633 | 0.8780 |
| 0.0242 | 32.0 | 7040 | 0.2880 | 0.8780 |
| 0.0128 | 33.0 | 7260 | 0.2733 | 0.8780 |
| 0.0109 | 34.0 | 7480 | 0.2902 | 0.8780 |
| 0.011 | 35.0 | 7700 | 0.2855 | 0.8780 |
| 0.0099 | 36.0 | 7920 | 0.3131 | 0.8537 |
| 0.0084 | 37.0 | 8140 | 0.2898 | 0.8780 |
| 0.0211 | 38.0 | 8360 | 0.3199 | 0.8537 |
| 0.014 | 39.0 | 8580 | 0.3071 | 0.8537 |
| 0.0076 | 40.0 | 8800 | 0.2872 | 0.8780 |
| 0.0086 | 41.0 | 9020 | 0.3095 | 0.8537 |
| 0.0068 | 42.0 | 9240 | 0.3068 | 0.8537 |
| 0.0085 | 43.0 | 9460 | 0.3042 | 0.8537 |
| 0.0079 | 44.0 | 9680 | 0.3170 | 0.8537 |
| 0.0089 | 45.0 | 9900 | 0.3144 | 0.8537 |
| 0.005 | 46.0 | 10120 | 0.3132 | 0.8537 |
| 0.0073 | 47.0 | 10340 | 0.3173 | 0.8537 |
| 0.01 | 48.0 | 10560 | 0.3150 | 0.8537 |
| 0.0068 | 49.0 | 10780 | 0.3144 | 0.8537 |
| 0.0068 | 50.0 | 11000 | 0.3138 | 0.8537 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_sgd_0001_fold5
|
# hushem_40x_deit_small_sgd_0001_fold5
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1092
- Accuracy: 0.4634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.7467 | 1.0 | 220 | 1.6098 | 0.2683 |
| 1.5306 | 2.0 | 440 | 1.5314 | 0.2683 |
| 1.3989 | 3.0 | 660 | 1.5004 | 0.2439 |
| 1.3588 | 4.0 | 880 | 1.4811 | 0.2195 |
| 1.3953 | 5.0 | 1100 | 1.4639 | 0.2683 |
| 1.3096 | 6.0 | 1320 | 1.4476 | 0.2439 |
| 1.2743 | 7.0 | 1540 | 1.4329 | 0.2683 |
| 1.2405 | 8.0 | 1760 | 1.4190 | 0.2927 |
| 1.253 | 9.0 | 1980 | 1.4052 | 0.3171 |
| 1.2253 | 10.0 | 2200 | 1.3912 | 0.3171 |
| 1.1663 | 11.0 | 2420 | 1.3767 | 0.3659 |
| 1.1699 | 12.0 | 2640 | 1.3616 | 0.3659 |
| 1.1615 | 13.0 | 2860 | 1.3463 | 0.3659 |
| 1.0999 | 14.0 | 3080 | 1.3303 | 0.3902 |
| 1.1286 | 15.0 | 3300 | 1.3148 | 0.3659 |
| 1.1333 | 16.0 | 3520 | 1.2990 | 0.3659 |
| 1.075 | 17.0 | 3740 | 1.2842 | 0.3659 |
| 1.0779 | 18.0 | 3960 | 1.2709 | 0.3659 |
| 1.0652 | 19.0 | 4180 | 1.2579 | 0.3659 |
| 1.0475 | 20.0 | 4400 | 1.2462 | 0.3659 |
| 1.0095 | 21.0 | 4620 | 1.2350 | 0.3902 |
| 1.0607 | 22.0 | 4840 | 1.2247 | 0.3902 |
| 1.0243 | 23.0 | 5060 | 1.2151 | 0.4146 |
| 1.0174 | 24.0 | 5280 | 1.2064 | 0.4146 |
| 0.9654 | 25.0 | 5500 | 1.1977 | 0.3902 |
| 1.017 | 26.0 | 5720 | 1.1899 | 0.4146 |
| 1.0002 | 27.0 | 5940 | 1.1820 | 0.3902 |
| 1.0191 | 28.0 | 6160 | 1.1750 | 0.3902 |
| 0.9876 | 29.0 | 6380 | 1.1683 | 0.3902 |
| 0.9526 | 30.0 | 6600 | 1.1623 | 0.4146 |
| 0.9957 | 31.0 | 6820 | 1.1566 | 0.4390 |
| 0.9778 | 32.0 | 7040 | 1.1513 | 0.4390 |
| 0.9223 | 33.0 | 7260 | 1.1464 | 0.4634 |
| 0.9281 | 34.0 | 7480 | 1.1418 | 0.4634 |
| 0.9107 | 35.0 | 7700 | 1.1376 | 0.4634 |
| 0.9485 | 36.0 | 7920 | 1.1336 | 0.4634 |
| 0.9035 | 37.0 | 8140 | 1.1298 | 0.4634 |
| 0.9223 | 38.0 | 8360 | 1.1266 | 0.4634 |
| 0.9312 | 39.0 | 8580 | 1.1235 | 0.4634 |
| 0.8782 | 40.0 | 8800 | 1.1209 | 0.4634 |
| 0.9252 | 41.0 | 9020 | 1.1184 | 0.4634 |
| 0.8989 | 42.0 | 9240 | 1.1164 | 0.4634 |
| 0.8959 | 43.0 | 9460 | 1.1145 | 0.4634 |
| 0.8589 | 44.0 | 9680 | 1.1130 | 0.4634 |
| 0.8899 | 45.0 | 9900 | 1.1117 | 0.4634 |
| 0.8915 | 46.0 | 10120 | 1.1107 | 0.4634 |
| 0.9043 | 47.0 | 10340 | 1.1100 | 0.4634 |
| 0.8309 | 48.0 | 10560 | 1.1095 | 0.4634 |
| 0.8724 | 49.0 | 10780 | 1.1093 | 0.4634 |
| 0.9011 | 50.0 | 11000 | 1.1092 | 0.4634 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_sgd_00001_fold1
|
# hushem_40x_deit_small_sgd_00001_fold1
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4582
- Accuracy: 0.2444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.0145 | 1.0 | 215 | 1.6167 | 0.2444 |
| 1.9528 | 2.0 | 430 | 1.6051 | 0.2444 |
| 1.8549 | 3.0 | 645 | 1.5941 | 0.2444 |
| 1.9735 | 4.0 | 860 | 1.5837 | 0.2444 |
| 1.8613 | 5.0 | 1075 | 1.5740 | 0.2444 |
| 1.8998 | 6.0 | 1290 | 1.5649 | 0.2667 |
| 1.9805 | 7.0 | 1505 | 1.5563 | 0.2667 |
| 1.7761 | 8.0 | 1720 | 1.5483 | 0.2444 |
| 1.8452 | 9.0 | 1935 | 1.5408 | 0.2222 |
| 1.7743 | 10.0 | 2150 | 1.5338 | 0.2222 |
| 1.7542 | 11.0 | 2365 | 1.5272 | 0.2222 |
| 1.7225 | 12.0 | 2580 | 1.5212 | 0.2222 |
| 1.7844 | 13.0 | 2795 | 1.5156 | 0.2222 |
| 1.7312 | 14.0 | 3010 | 1.5104 | 0.2444 |
| 1.766 | 15.0 | 3225 | 1.5056 | 0.2222 |
| 1.6922 | 16.0 | 3440 | 1.5012 | 0.2222 |
| 1.6889 | 17.0 | 3655 | 1.4971 | 0.2222 |
| 1.6643 | 18.0 | 3870 | 1.4934 | 0.2 |
| 1.6328 | 19.0 | 4085 | 1.4899 | 0.2222 |
| 1.6408 | 20.0 | 4300 | 1.4868 | 0.2222 |
| 1.5927 | 21.0 | 4515 | 1.4839 | 0.2222 |
| 1.6137 | 22.0 | 4730 | 1.4813 | 0.2 |
| 1.623 | 23.0 | 4945 | 1.4789 | 0.1778 |
| 1.5692 | 24.0 | 5160 | 1.4767 | 0.1778 |
| 1.5514 | 25.0 | 5375 | 1.4747 | 0.1778 |
| 1.5483 | 26.0 | 5590 | 1.4729 | 0.2222 |
| 1.571 | 27.0 | 5805 | 1.4713 | 0.2222 |
| 1.5882 | 28.0 | 6020 | 1.4698 | 0.2222 |
| 1.524 | 29.0 | 6235 | 1.4684 | 0.2222 |
| 1.5611 | 30.0 | 6450 | 1.4672 | 0.2222 |
| 1.5511 | 31.0 | 6665 | 1.4661 | 0.2222 |
| 1.5655 | 32.0 | 6880 | 1.4650 | 0.2222 |
| 1.5736 | 33.0 | 7095 | 1.4641 | 0.2222 |
| 1.5317 | 34.0 | 7310 | 1.4633 | 0.2222 |
| 1.5555 | 35.0 | 7525 | 1.4625 | 0.2222 |
| 1.5608 | 36.0 | 7740 | 1.4619 | 0.2222 |
| 1.5011 | 37.0 | 7955 | 1.4612 | 0.2222 |
| 1.5571 | 38.0 | 8170 | 1.4607 | 0.2444 |
| 1.4975 | 39.0 | 8385 | 1.4602 | 0.2444 |
| 1.4908 | 40.0 | 8600 | 1.4598 | 0.2444 |
| 1.5291 | 41.0 | 8815 | 1.4595 | 0.2444 |
| 1.52 | 42.0 | 9030 | 1.4592 | 0.2444 |
| 1.5041 | 43.0 | 9245 | 1.4589 | 0.2444 |
| 1.5102 | 44.0 | 9460 | 1.4587 | 0.2444 |
| 1.5245 | 45.0 | 9675 | 1.4585 | 0.2444 |
| 1.4992 | 46.0 | 9890 | 1.4584 | 0.2444 |
| 1.4976 | 47.0 | 10105 | 1.4583 | 0.2444 |
| 1.5255 | 48.0 | 10320 | 1.4582 | 0.2444 |
| 1.4826 | 49.0 | 10535 | 1.4582 | 0.2444 |
| 1.5224 | 50.0 | 10750 | 1.4582 | 0.2444 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_rms_001_fold1
|
# hushem_40x_deit_small_rms_001_fold1
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 7.2597
- Accuracy: 0.4222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.3825 | 1.0 | 215 | 1.4688 | 0.2667 |
| 1.3534 | 2.0 | 430 | 1.4553 | 0.3778 |
| 0.9498 | 3.0 | 645 | 1.8460 | 0.3333 |
| 0.7874 | 4.0 | 860 | 1.0992 | 0.4444 |
| 0.6519 | 5.0 | 1075 | 1.5864 | 0.4222 |
| 0.6238 | 6.0 | 1290 | 1.5678 | 0.4444 |
| 0.6712 | 7.0 | 1505 | 1.5837 | 0.3778 |
| 0.6234 | 8.0 | 1720 | 1.4844 | 0.3778 |
| 0.6842 | 9.0 | 1935 | 1.4360 | 0.4 |
| 0.5244 | 10.0 | 2150 | 1.9225 | 0.3778 |
| 0.5422 | 11.0 | 2365 | 1.4512 | 0.4667 |
| 0.4482 | 12.0 | 2580 | 2.2789 | 0.3556 |
| 0.5899 | 13.0 | 2795 | 1.6124 | 0.4222 |
| 0.4227 | 14.0 | 3010 | 1.8210 | 0.4444 |
| 0.4862 | 15.0 | 3225 | 1.4215 | 0.4667 |
| 0.4615 | 16.0 | 3440 | 2.1496 | 0.3778 |
| 0.6895 | 17.0 | 3655 | 1.7698 | 0.4667 |
| 0.3741 | 18.0 | 3870 | 2.6905 | 0.3556 |
| 0.3762 | 19.0 | 4085 | 2.4546 | 0.4222 |
| 0.3383 | 20.0 | 4300 | 2.0176 | 0.3778 |
| 0.3622 | 21.0 | 4515 | 2.9706 | 0.4 |
| 0.3284 | 22.0 | 4730 | 2.9396 | 0.4 |
| 0.2403 | 23.0 | 4945 | 2.3459 | 0.4889 |
| 0.345 | 24.0 | 5160 | 3.1195 | 0.4222 |
| 0.3045 | 25.0 | 5375 | 2.4187 | 0.4667 |
| 0.2936 | 26.0 | 5590 | 2.9167 | 0.3556 |
| 0.249 | 27.0 | 5805 | 2.5521 | 0.4667 |
| 0.2161 | 28.0 | 6020 | 3.7842 | 0.3778 |
| 0.2382 | 29.0 | 6235 | 3.0584 | 0.4 |
| 0.1225 | 30.0 | 6450 | 4.4557 | 0.4 |
| 0.2075 | 31.0 | 6665 | 4.7131 | 0.3111 |
| 0.1575 | 32.0 | 6880 | 3.8714 | 0.3556 |
| 0.1516 | 33.0 | 7095 | 4.5510 | 0.4 |
| 0.1231 | 34.0 | 7310 | 5.0636 | 0.3778 |
| 0.0943 | 35.0 | 7525 | 4.2212 | 0.4 |
| 0.0741 | 36.0 | 7740 | 4.4947 | 0.4 |
| 0.0582 | 37.0 | 7955 | 4.8808 | 0.4222 |
| 0.0412 | 38.0 | 8170 | 5.2254 | 0.3778 |
| 0.0508 | 39.0 | 8385 | 5.2558 | 0.3556 |
| 0.0566 | 40.0 | 8600 | 5.9529 | 0.3556 |
| 0.0397 | 41.0 | 8815 | 5.9087 | 0.3333 |
| 0.0462 | 42.0 | 9030 | 6.2634 | 0.4444 |
| 0.0245 | 43.0 | 9245 | 6.0294 | 0.4222 |
| 0.0398 | 44.0 | 9460 | 6.9015 | 0.4222 |
| 0.0182 | 45.0 | 9675 | 5.5112 | 0.4667 |
| 0.0162 | 46.0 | 9890 | 6.0476 | 0.4889 |
| 0.0028 | 47.0 | 10105 | 6.5416 | 0.4667 |
| 0.0087 | 48.0 | 10320 | 6.8964 | 0.4444 |
| 0.0011 | 49.0 | 10535 | 7.0908 | 0.4222 |
| 0.0007 | 50.0 | 10750 | 7.2597 | 0.4222 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_sgd_00001_fold2
|
# hushem_40x_deit_small_sgd_00001_fold2
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4445
- Accuracy: 0.2889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.9592 | 1.0 | 215 | 1.6375 | 0.2444 |
| 1.9589 | 2.0 | 430 | 1.6244 | 0.2444 |
| 1.9405 | 3.0 | 645 | 1.6119 | 0.2444 |
| 1.8111 | 4.0 | 860 | 1.6000 | 0.2444 |
| 1.7854 | 5.0 | 1075 | 1.5888 | 0.2444 |
| 1.7837 | 6.0 | 1290 | 1.5782 | 0.2444 |
| 1.8329 | 7.0 | 1505 | 1.5681 | 0.2444 |
| 1.808 | 8.0 | 1720 | 1.5587 | 0.2444 |
| 1.8135 | 9.0 | 1935 | 1.5499 | 0.2444 |
| 1.7532 | 10.0 | 2150 | 1.5416 | 0.2444 |
| 1.7679 | 11.0 | 2365 | 1.5338 | 0.2667 |
| 1.6698 | 12.0 | 2580 | 1.5266 | 0.2667 |
| 1.7071 | 13.0 | 2795 | 1.5198 | 0.2667 |
| 1.7064 | 14.0 | 3010 | 1.5135 | 0.2667 |
| 1.6474 | 15.0 | 3225 | 1.5077 | 0.2667 |
| 1.6795 | 16.0 | 3440 | 1.5023 | 0.2667 |
| 1.6509 | 17.0 | 3655 | 1.4973 | 0.2444 |
| 1.6571 | 18.0 | 3870 | 1.4927 | 0.2444 |
| 1.6903 | 19.0 | 4085 | 1.4885 | 0.2444 |
| 1.594 | 20.0 | 4300 | 1.4845 | 0.2444 |
| 1.6043 | 21.0 | 4515 | 1.4809 | 0.2444 |
| 1.5781 | 22.0 | 4730 | 1.4775 | 0.2667 |
| 1.6042 | 23.0 | 4945 | 1.4744 | 0.2667 |
| 1.6585 | 24.0 | 5160 | 1.4716 | 0.2667 |
| 1.6133 | 25.0 | 5375 | 1.4689 | 0.2667 |
| 1.5503 | 26.0 | 5590 | 1.4665 | 0.2667 |
| 1.559 | 27.0 | 5805 | 1.4642 | 0.2889 |
| 1.5271 | 28.0 | 6020 | 1.4621 | 0.3111 |
| 1.5368 | 29.0 | 6235 | 1.4602 | 0.3111 |
| 1.5411 | 30.0 | 6450 | 1.4584 | 0.3111 |
| 1.6163 | 31.0 | 6665 | 1.4568 | 0.3111 |
| 1.5496 | 32.0 | 6880 | 1.4553 | 0.3111 |
| 1.5517 | 33.0 | 7095 | 1.4539 | 0.3111 |
| 1.5789 | 34.0 | 7310 | 1.4526 | 0.3111 |
| 1.5768 | 35.0 | 7525 | 1.4515 | 0.3111 |
| 1.5496 | 36.0 | 7740 | 1.4505 | 0.3111 |
| 1.5074 | 37.0 | 7955 | 1.4495 | 0.3111 |
| 1.5918 | 38.0 | 8170 | 1.4487 | 0.3111 |
| 1.5751 | 39.0 | 8385 | 1.4479 | 0.3111 |
| 1.5533 | 40.0 | 8600 | 1.4472 | 0.3111 |
| 1.5217 | 41.0 | 8815 | 1.4467 | 0.3111 |
| 1.5477 | 42.0 | 9030 | 1.4461 | 0.3111 |
| 1.5219 | 43.0 | 9245 | 1.4457 | 0.3111 |
| 1.5414 | 44.0 | 9460 | 1.4453 | 0.3111 |
| 1.5697 | 45.0 | 9675 | 1.4450 | 0.3111 |
| 1.5331 | 46.0 | 9890 | 1.4448 | 0.3111 |
| 1.5195 | 47.0 | 10105 | 1.4446 | 0.3111 |
| 1.5072 | 48.0 | 10320 | 1.4445 | 0.2889 |
| 1.5469 | 49.0 | 10535 | 1.4445 | 0.2889 |
| 1.5351 | 50.0 | 10750 | 1.4445 | 0.2889 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_rms_001_fold2
|
# hushem_40x_deit_small_rms_001_fold2
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 7.1718
- Accuracy: 0.6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2965 | 1.0 | 215 | 1.2091 | 0.4667 |
| 0.8177 | 2.0 | 430 | 1.2899 | 0.5333 |
| 0.7013 | 3.0 | 645 | 1.2461 | 0.5111 |
| 0.6618 | 4.0 | 860 | 1.7839 | 0.4444 |
| 0.462 | 5.0 | 1075 | 1.6189 | 0.4889 |
| 0.4201 | 6.0 | 1290 | 2.6718 | 0.4889 |
| 0.3471 | 7.0 | 1505 | 2.6469 | 0.5556 |
| 0.3247 | 8.0 | 1720 | 2.5316 | 0.5778 |
| 0.3947 | 9.0 | 1935 | 2.1794 | 0.5556 |
| 0.3688 | 10.0 | 2150 | 2.4409 | 0.4889 |
| 0.3554 | 11.0 | 2365 | 2.7892 | 0.5556 |
| 0.3048 | 12.0 | 2580 | 3.7146 | 0.5333 |
| 0.3088 | 13.0 | 2795 | 2.9395 | 0.6 |
| 0.2595 | 14.0 | 3010 | 2.6649 | 0.6222 |
| 0.2467 | 15.0 | 3225 | 3.0874 | 0.5778 |
| 0.2539 | 16.0 | 3440 | 3.4419 | 0.6 |
| 0.2637 | 17.0 | 3655 | 3.2496 | 0.6 |
| 0.2157 | 18.0 | 3870 | 3.3961 | 0.6 |
| 0.1038 | 19.0 | 4085 | 4.4011 | 0.6 |
| 0.1844 | 20.0 | 4300 | 3.9340 | 0.5111 |
| 0.1977 | 21.0 | 4515 | 4.0238 | 0.5556 |
| 0.1597 | 22.0 | 4730 | 3.8533 | 0.5778 |
| 0.1177 | 23.0 | 4945 | 4.1555 | 0.5556 |
| 0.0976 | 24.0 | 5160 | 3.8653 | 0.6222 |
| 0.106 | 25.0 | 5375 | 3.5209 | 0.5778 |
| 0.0483 | 26.0 | 5590 | 4.5365 | 0.6222 |
| 0.0977 | 27.0 | 5805 | 3.6449 | 0.6 |
| 0.115 | 28.0 | 6020 | 4.3353 | 0.6 |
| 0.0948 | 29.0 | 6235 | 4.1300 | 0.6222 |
| 0.0615 | 30.0 | 6450 | 4.7232 | 0.6 |
| 0.1033 | 31.0 | 6665 | 4.3508 | 0.5778 |
| 0.0812 | 32.0 | 6880 | 4.5803 | 0.5778 |
| 0.056 | 33.0 | 7095 | 4.4372 | 0.6222 |
| 0.0361 | 34.0 | 7310 | 4.9845 | 0.6444 |
| 0.0141 | 35.0 | 7525 | 6.0367 | 0.5778 |
| 0.0009 | 36.0 | 7740 | 5.8383 | 0.6222 |
| 0.0009 | 37.0 | 7955 | 5.7637 | 0.6222 |
| 0.0203 | 38.0 | 8170 | 5.3901 | 0.6 |
| 0.049 | 39.0 | 8385 | 5.5891 | 0.5778 |
| 0.0029 | 40.0 | 8600 | 5.9302 | 0.5778 |
| 0.0002 | 41.0 | 8815 | 5.9565 | 0.5778 |
| 0.0046 | 42.0 | 9030 | 6.4259 | 0.5778 |
| 0.0003 | 43.0 | 9245 | 6.2149 | 0.6222 |
| 0.0 | 44.0 | 9460 | 6.6058 | 0.6 |
| 0.0 | 45.0 | 9675 | 6.5823 | 0.6 |
| 0.0 | 46.0 | 9890 | 6.7170 | 0.6 |
| 0.0 | 47.0 | 10105 | 6.8829 | 0.6 |
| 0.0 | 48.0 | 10320 | 7.0021 | 0.6 |
| 0.0 | 49.0 | 10535 | 7.1301 | 0.6 |
| 0.0 | 50.0 | 10750 | 7.1718 | 0.6 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_sgd_00001_fold3
|
# hushem_40x_deit_small_sgd_00001_fold3
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3703
- Accuracy: 0.3256
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
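The linear scheduler with warmup listed above can be sketched in plain Python. This is an illustrative reimplementation, not the 🤗 Transformers internals; the function name `lr_at_step` and its argument names are hypothetical:

```python
def lr_at_step(step, base_lr, total_steps, warmup_ratio=0.1):
    """Linear LR schedule with warmup: ramp up linearly for the first
    warmup_ratio * total_steps steps, then decay linearly to zero."""
    warmup_steps = int(warmup_ratio * total_steps)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0, (total_steps - step) / max(1, total_steps - warmup_steps))

# With this card's settings: base_lr=1e-05, 50 epochs x 217 steps/epoch = 10850 steps.
print(lr_at_step(1085, 1e-05, 10850))   # end of warmup: full base_lr
print(lr_at_step(10850, 1e-05, 10850))  # final step: decayed to 0.0
```

With `warmup_ratio: 0.1` the learning rate peaks at step 1085 (10% of training) and reaches zero at the last step, which is why late-epoch validation numbers in the table above barely move.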
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.8914 | 1.0 | 217 | 1.5805 | 0.2558 |
| 2.0178 | 2.0 | 434 | 1.5654 | 0.2558 |
| 2.0179 | 3.0 | 651 | 1.5510 | 0.2558 |
| 1.8888 | 4.0 | 868 | 1.5374 | 0.2558 |
| 1.872 | 5.0 | 1085 | 1.5245 | 0.2558 |
| 1.7831 | 6.0 | 1302 | 1.5124 | 0.2558 |
| 1.836 | 7.0 | 1519 | 1.5009 | 0.2558 |
| 1.8178 | 8.0 | 1736 | 1.4901 | 0.2558 |
| 1.7694 | 9.0 | 1953 | 1.4801 | 0.2326 |
| 1.7678 | 10.0 | 2170 | 1.4706 | 0.2326 |
| 1.659 | 11.0 | 2387 | 1.4618 | 0.2326 |
| 1.6239 | 12.0 | 2604 | 1.4536 | 0.2558 |
| 1.6882 | 13.0 | 2821 | 1.4460 | 0.2558 |
| 1.6748 | 14.0 | 3038 | 1.4391 | 0.2558 |
| 1.6892 | 15.0 | 3255 | 1.4327 | 0.2791 |
| 1.725 | 16.0 | 3472 | 1.4268 | 0.2791 |
| 1.6371 | 17.0 | 3689 | 1.4214 | 0.2791 |
| 1.6193 | 18.0 | 3906 | 1.4164 | 0.3256 |
| 1.6512 | 19.0 | 4123 | 1.4119 | 0.3256 |
| 1.6188 | 20.0 | 4340 | 1.4078 | 0.3256 |
| 1.643 | 21.0 | 4557 | 1.4041 | 0.3256 |
| 1.5803 | 22.0 | 4774 | 1.4006 | 0.3256 |
| 1.592 | 23.0 | 4991 | 1.3975 | 0.3256 |
| 1.5987 | 24.0 | 5208 | 1.3946 | 0.3256 |
| 1.566 | 25.0 | 5425 | 1.3921 | 0.3488 |
| 1.5574 | 26.0 | 5642 | 1.3897 | 0.3488 |
| 1.4978 | 27.0 | 5859 | 1.3876 | 0.3488 |
| 1.524 | 28.0 | 6076 | 1.3857 | 0.3488 |
| 1.5682 | 29.0 | 6293 | 1.3839 | 0.3488 |
| 1.5042 | 30.0 | 6510 | 1.3823 | 0.3488 |
| 1.5589 | 31.0 | 6727 | 1.3808 | 0.3023 |
| 1.5347 | 32.0 | 6944 | 1.3795 | 0.3023 |
| 1.5403 | 33.0 | 7161 | 1.3783 | 0.3023 |
| 1.5548 | 34.0 | 7378 | 1.3772 | 0.3023 |
| 1.5321 | 35.0 | 7595 | 1.3762 | 0.3023 |
| 1.5015 | 36.0 | 7812 | 1.3753 | 0.3023 |
| 1.4993 | 37.0 | 8029 | 1.3745 | 0.3023 |
| 1.4844 | 38.0 | 8246 | 1.3738 | 0.3023 |
| 1.5191 | 39.0 | 8463 | 1.3732 | 0.3023 |
| 1.515 | 40.0 | 8680 | 1.3726 | 0.3256 |
| 1.4957 | 41.0 | 8897 | 1.3721 | 0.3256 |
| 1.5585 | 42.0 | 9114 | 1.3717 | 0.3256 |
| 1.5037 | 43.0 | 9331 | 1.3713 | 0.3256 |
| 1.4828 | 44.0 | 9548 | 1.3710 | 0.3256 |
| 1.4967 | 45.0 | 9765 | 1.3708 | 0.3256 |
| 1.5387 | 46.0 | 9982 | 1.3706 | 0.3256 |
| 1.5118 | 47.0 | 10199 | 1.3705 | 0.3256 |
| 1.5073 | 48.0 | 10416 | 1.3704 | 0.3256 |
| 1.5166 | 49.0 | 10633 | 1.3703 | 0.3256 |
| 1.4994 | 50.0 | 10850 | 1.3703 | 0.3256 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_rms_001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_rms_001_fold3
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1579
- Accuracy: 0.7442
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
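A single update of the Adam optimizer configured above (betas=(0.9, 0.999), epsilon=1e-08) can be sketched for one scalar parameter. This is an illustrative sketch, not PyTorch's implementation; `adam_step` and its signature are hypothetical names:

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-08):
    """One Adam update for a scalar parameter theta.

    m, v are the running first/second moment estimates; t is the
    1-based step count used for bias correction."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    m_hat = m / (1 - beta1 ** t)   # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)   # bias-corrected second moment
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# First step with gradient 1.0: the step size is ~lr regardless of gradient scale.
theta, m, v = adam_step(0.0, 1.0, 0.0, 0.0, t=1)
print(round(theta, 6))  # -> -0.001
```

Because Adam normalizes by the second-moment estimate, the effective step size stays near `lr` early in training, which is one reason the `rms_001` runs (lr=0.001) diverge in validation loss while the `sgd_00001` runs barely move.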
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2419 | 1.0 | 217 | 1.3981 | 0.2791 |
| 1.0235 | 2.0 | 434 | 1.3169 | 0.3953 |
| 0.8369 | 3.0 | 651 | 1.0743 | 0.4884 |
| 0.7963 | 4.0 | 868 | 0.6563 | 0.6977 |
| 0.7399 | 5.0 | 1085 | 1.1403 | 0.4651 |
| 0.591 | 6.0 | 1302 | 0.6390 | 0.7209 |
| 0.4772 | 7.0 | 1519 | 0.8818 | 0.6047 |
| 0.4582 | 8.0 | 1736 | 0.8295 | 0.6744 |
| 0.4273 | 9.0 | 1953 | 1.1233 | 0.4884 |
| 0.3402 | 10.0 | 2170 | 0.8028 | 0.7442 |
| 0.3174 | 11.0 | 2387 | 1.2880 | 0.5581 |
| 0.2909 | 12.0 | 2604 | 1.5844 | 0.6512 |
| 0.2204 | 13.0 | 2821 | 1.1940 | 0.6977 |
| 0.2639 | 14.0 | 3038 | 1.0276 | 0.6279 |
| 0.2085 | 15.0 | 3255 | 1.7122 | 0.6512 |
| 0.1551 | 16.0 | 3472 | 1.0876 | 0.7209 |
| 0.2066 | 17.0 | 3689 | 1.4826 | 0.6279 |
| 0.1259 | 18.0 | 3906 | 1.7194 | 0.6279 |
| 0.1381 | 19.0 | 4123 | 1.1881 | 0.7442 |
| 0.0864 | 20.0 | 4340 | 2.4912 | 0.7209 |
| 0.1059 | 21.0 | 4557 | 1.6650 | 0.6977 |
| 0.0958 | 22.0 | 4774 | 1.6843 | 0.6977 |
| 0.0803 | 23.0 | 4991 | 2.0214 | 0.6279 |
| 0.0716 | 24.0 | 5208 | 2.3668 | 0.6977 |
| 0.0335 | 25.0 | 5425 | 1.8384 | 0.6279 |
| 0.0722 | 26.0 | 5642 | 1.9563 | 0.6744 |
| 0.0543 | 27.0 | 5859 | 2.2739 | 0.6744 |
| 0.024 | 28.0 | 6076 | 1.7616 | 0.6977 |
| 0.0588 | 29.0 | 6293 | 1.9807 | 0.6977 |
| 0.0731 | 30.0 | 6510 | 2.0008 | 0.6279 |
| 0.0315 | 31.0 | 6727 | 2.2264 | 0.7209 |
| 0.0084 | 32.0 | 6944 | 2.2231 | 0.7674 |
| 0.0194 | 33.0 | 7161 | 2.3580 | 0.6977 |
| 0.0559 | 34.0 | 7378 | 2.5423 | 0.7209 |
| 0.0002 | 35.0 | 7595 | 2.6899 | 0.7674 |
| 0.0092 | 36.0 | 7812 | 2.7843 | 0.6744 |
| 0.0002 | 37.0 | 8029 | 2.7034 | 0.7442 |
| 0.016 | 38.0 | 8246 | 2.9844 | 0.7674 |
| 0.0006 | 39.0 | 8463 | 1.9924 | 0.8140 |
| 0.006 | 40.0 | 8680 | 2.8801 | 0.6977 |
| 0.0001 | 41.0 | 8897 | 2.7323 | 0.7674 |
| 0.0001 | 42.0 | 9114 | 3.2030 | 0.6977 |
| 0.0002 | 43.0 | 9331 | 3.6553 | 0.7674 |
| 0.0001 | 44.0 | 9548 | 2.9080 | 0.7209 |
| 0.0001 | 45.0 | 9765 | 2.8393 | 0.7442 |
| 0.0 | 46.0 | 9982 | 2.9525 | 0.7442 |
| 0.0 | 47.0 | 10199 | 3.0057 | 0.7442 |
| 0.0 | 48.0 | 10416 | 3.0880 | 0.7442 |
| 0.0 | 49.0 | 10633 | 3.1339 | 0.7442 |
| 0.0 | 50.0 | 10850 | 3.1579 | 0.7442 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_sgd_00001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_sgd_00001_fold4
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4814
- Accuracy: 0.2619
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.9934 | 1.0 | 219 | 1.6922 | 0.2619 |
| 1.9676 | 2.0 | 438 | 1.6780 | 0.2619 |
| 1.8623 | 3.0 | 657 | 1.6644 | 0.2619 |
| 1.8471 | 4.0 | 876 | 1.6514 | 0.2619 |
| 1.8032 | 5.0 | 1095 | 1.6391 | 0.2619 |
| 1.7347 | 6.0 | 1314 | 1.6275 | 0.2619 |
| 1.7421 | 7.0 | 1533 | 1.6165 | 0.2619 |
| 1.7487 | 8.0 | 1752 | 1.6060 | 0.2857 |
| 1.7488 | 9.0 | 1971 | 1.5962 | 0.2857 |
| 1.6907 | 10.0 | 2190 | 1.5869 | 0.2857 |
| 1.7571 | 11.0 | 2409 | 1.5783 | 0.2857 |
| 1.7123 | 12.0 | 2628 | 1.5703 | 0.3095 |
| 1.6705 | 13.0 | 2847 | 1.5628 | 0.3095 |
| 1.6677 | 14.0 | 3066 | 1.5558 | 0.3095 |
| 1.7011 | 15.0 | 3285 | 1.5494 | 0.3095 |
| 1.6459 | 16.0 | 3504 | 1.5434 | 0.3095 |
| 1.6424 | 17.0 | 3723 | 1.5379 | 0.2857 |
| 1.666 | 18.0 | 3942 | 1.5328 | 0.2857 |
| 1.5793 | 19.0 | 4161 | 1.5282 | 0.2857 |
| 1.612 | 20.0 | 4380 | 1.5239 | 0.2857 |
| 1.6378 | 21.0 | 4599 | 1.5199 | 0.2857 |
| 1.603 | 22.0 | 4818 | 1.5163 | 0.2857 |
| 1.5587 | 23.0 | 5037 | 1.5130 | 0.2619 |
| 1.5451 | 24.0 | 5256 | 1.5099 | 0.2619 |
| 1.6036 | 25.0 | 5475 | 1.5071 | 0.2619 |
| 1.5234 | 26.0 | 5694 | 1.5045 | 0.2857 |
| 1.5523 | 27.0 | 5913 | 1.5021 | 0.2857 |
| 1.5056 | 28.0 | 6132 | 1.4999 | 0.2857 |
| 1.5455 | 29.0 | 6351 | 1.4979 | 0.2857 |
| 1.5427 | 30.0 | 6570 | 1.4960 | 0.2857 |
| 1.566 | 31.0 | 6789 | 1.4943 | 0.2857 |
| 1.5088 | 32.0 | 7008 | 1.4927 | 0.2857 |
| 1.524 | 33.0 | 7227 | 1.4913 | 0.2857 |
| 1.5159 | 34.0 | 7446 | 1.4900 | 0.2857 |
| 1.4946 | 35.0 | 7665 | 1.4888 | 0.2619 |
| 1.5331 | 36.0 | 7884 | 1.4877 | 0.2619 |
| 1.5143 | 37.0 | 8103 | 1.4867 | 0.2619 |
| 1.5074 | 38.0 | 8322 | 1.4858 | 0.2619 |
| 1.4853 | 39.0 | 8541 | 1.4850 | 0.2619 |
| 1.5395 | 40.0 | 8760 | 1.4843 | 0.2619 |
| 1.4854 | 41.0 | 8979 | 1.4837 | 0.2619 |
| 1.4835 | 42.0 | 9198 | 1.4832 | 0.2619 |
| 1.5069 | 43.0 | 9417 | 1.4827 | 0.2619 |
| 1.4921 | 44.0 | 9636 | 1.4823 | 0.2619 |
| 1.4639 | 45.0 | 9855 | 1.4820 | 0.2619 |
| 1.511 | 46.0 | 10074 | 1.4818 | 0.2619 |
| 1.4986 | 47.0 | 10293 | 1.4816 | 0.2619 |
| 1.4648 | 48.0 | 10512 | 1.4815 | 0.2619 |
| 1.5244 | 49.0 | 10731 | 1.4814 | 0.2619 |
| 1.5102 | 50.0 | 10950 | 1.4814 | 0.2619 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_rms_001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_rms_001_fold4
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0611
- Accuracy: 0.8571
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2113 | 1.0 | 219 | 1.1711 | 0.4286 |
| 1.0563 | 2.0 | 438 | 1.0188 | 0.5476 |
| 1.1492 | 3.0 | 657 | 1.0982 | 0.5 |
| 0.8355 | 4.0 | 876 | 0.9249 | 0.4524 |
| 0.9277 | 5.0 | 1095 | 0.9715 | 0.4762 |
| 0.9059 | 6.0 | 1314 | 0.8917 | 0.5476 |
| 0.697 | 7.0 | 1533 | 1.5323 | 0.5 |
| 0.7359 | 8.0 | 1752 | 0.7730 | 0.6190 |
| 0.6186 | 9.0 | 1971 | 0.7734 | 0.6190 |
| 0.5814 | 10.0 | 2190 | 0.7874 | 0.7143 |
| 0.5808 | 11.0 | 2409 | 0.5974 | 0.7619 |
| 0.5808 | 12.0 | 2628 | 0.7519 | 0.7381 |
| 0.5465 | 13.0 | 2847 | 0.4863 | 0.8095 |
| 0.4671 | 14.0 | 3066 | 0.6575 | 0.6905 |
| 0.5228 | 15.0 | 3285 | 0.6495 | 0.7143 |
| 0.4327 | 16.0 | 3504 | 0.7075 | 0.7619 |
| 0.3283 | 17.0 | 3723 | 0.6356 | 0.7381 |
| 0.3853 | 18.0 | 3942 | 0.5432 | 0.7857 |
| 0.3803 | 19.0 | 4161 | 0.9396 | 0.7857 |
| 0.3493 | 20.0 | 4380 | 0.8015 | 0.7143 |
| 0.3953 | 21.0 | 4599 | 0.7074 | 0.7619 |
| 0.3223 | 22.0 | 4818 | 1.0523 | 0.6667 |
| 0.2414 | 23.0 | 5037 | 1.0911 | 0.6667 |
| 0.2219 | 24.0 | 5256 | 1.1394 | 0.6905 |
| 0.2892 | 25.0 | 5475 | 0.7116 | 0.7619 |
| 0.2739 | 26.0 | 5694 | 1.1234 | 0.7143 |
| 0.2207 | 27.0 | 5913 | 0.8565 | 0.7857 |
| 0.1354 | 28.0 | 6132 | 1.1975 | 0.7381 |
| 0.2042 | 29.0 | 6351 | 0.8634 | 0.7619 |
| 0.152 | 30.0 | 6570 | 0.8119 | 0.7857 |
| 0.1453 | 31.0 | 6789 | 0.8364 | 0.7381 |
| 0.1714 | 32.0 | 7008 | 1.2193 | 0.8095 |
| 0.1106 | 33.0 | 7227 | 1.0792 | 0.7619 |
| 0.1157 | 34.0 | 7446 | 0.9831 | 0.7619 |
| 0.096 | 35.0 | 7665 | 1.1093 | 0.8095 |
| 0.0452 | 36.0 | 7884 | 0.9133 | 0.8095 |
| 0.0552 | 37.0 | 8103 | 1.3044 | 0.8095 |
| 0.0539 | 38.0 | 8322 | 0.9892 | 0.8095 |
| 0.041 | 39.0 | 8541 | 1.1780 | 0.8571 |
| 0.0165 | 40.0 | 8760 | 1.3517 | 0.8333 |
| 0.0361 | 41.0 | 8979 | 1.5071 | 0.8333 |
| 0.046 | 42.0 | 9198 | 1.2679 | 0.8571 |
| 0.0477 | 43.0 | 9417 | 1.7256 | 0.8333 |
| 0.0088 | 44.0 | 9636 | 1.2515 | 0.8333 |
| 0.0208 | 45.0 | 9855 | 1.8769 | 0.8571 |
| 0.0012 | 46.0 | 10074 | 1.9828 | 0.8333 |
| 0.0014 | 47.0 | 10293 | 1.9685 | 0.8571 |
| 0.0001 | 48.0 | 10512 | 1.7583 | 0.8571 |
| 0.0001 | 49.0 | 10731 | 2.0944 | 0.8571 |
| 0.0001 | 50.0 | 10950 | 2.0611 | 0.8571 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_sgd_00001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_sgd_00001_fold5
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5176
- Accuracy: 0.3171
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.9948 | 1.0 | 220 | 1.7324 | 0.2683 |
| 1.9668 | 2.0 | 440 | 1.7170 | 0.2683 |
| 1.7569 | 3.0 | 660 | 1.7024 | 0.2683 |
| 1.8204 | 4.0 | 880 | 1.6885 | 0.2683 |
| 1.8992 | 5.0 | 1100 | 1.6754 | 0.2683 |
| 1.8203 | 6.0 | 1320 | 1.6629 | 0.2683 |
| 1.8006 | 7.0 | 1540 | 1.6512 | 0.2683 |
| 1.746 | 8.0 | 1760 | 1.6401 | 0.2683 |
| 1.7509 | 9.0 | 1980 | 1.6297 | 0.2683 |
| 1.7973 | 10.0 | 2200 | 1.6200 | 0.2683 |
| 1.7248 | 11.0 | 2420 | 1.6109 | 0.2683 |
| 1.5895 | 12.0 | 2640 | 1.6025 | 0.2683 |
| 1.6708 | 13.0 | 2860 | 1.5947 | 0.2683 |
| 1.5672 | 14.0 | 3080 | 1.5875 | 0.2683 |
| 1.6734 | 15.0 | 3300 | 1.5810 | 0.2683 |
| 1.6377 | 16.0 | 3520 | 1.5749 | 0.2683 |
| 1.5807 | 17.0 | 3740 | 1.5693 | 0.2683 |
| 1.6065 | 18.0 | 3960 | 1.5643 | 0.2439 |
| 1.5952 | 19.0 | 4180 | 1.5597 | 0.2439 |
| 1.6236 | 20.0 | 4400 | 1.5555 | 0.2439 |
| 1.6357 | 21.0 | 4620 | 1.5517 | 0.2439 |
| 1.5866 | 22.0 | 4840 | 1.5483 | 0.2439 |
| 1.546 | 23.0 | 5060 | 1.5451 | 0.2439 |
| 1.5341 | 24.0 | 5280 | 1.5423 | 0.2683 |
| 1.5615 | 25.0 | 5500 | 1.5397 | 0.2683 |
| 1.5768 | 26.0 | 5720 | 1.5373 | 0.2683 |
| 1.5024 | 27.0 | 5940 | 1.5352 | 0.2683 |
| 1.5377 | 28.0 | 6160 | 1.5332 | 0.2683 |
| 1.5225 | 29.0 | 6380 | 1.5314 | 0.2683 |
| 1.5464 | 30.0 | 6600 | 1.5298 | 0.2683 |
| 1.5869 | 31.0 | 6820 | 1.5284 | 0.2683 |
| 1.5384 | 32.0 | 7040 | 1.5270 | 0.2683 |
| 1.5241 | 33.0 | 7260 | 1.5258 | 0.2683 |
| 1.5029 | 34.0 | 7480 | 1.5247 | 0.2683 |
| 1.4813 | 35.0 | 7700 | 1.5237 | 0.2927 |
| 1.4892 | 36.0 | 7920 | 1.5227 | 0.2927 |
| 1.5014 | 37.0 | 8140 | 1.5219 | 0.2927 |
| 1.5037 | 38.0 | 8360 | 1.5212 | 0.2927 |
| 1.4775 | 39.0 | 8580 | 1.5205 | 0.2927 |
| 1.4967 | 40.0 | 8800 | 1.5200 | 0.2927 |
| 1.4438 | 41.0 | 9020 | 1.5195 | 0.2927 |
| 1.4692 | 42.0 | 9240 | 1.5190 | 0.2927 |
| 1.5023 | 43.0 | 9460 | 1.5187 | 0.2927 |
| 1.4883 | 44.0 | 9680 | 1.5184 | 0.2927 |
| 1.4515 | 45.0 | 9900 | 1.5181 | 0.2927 |
| 1.4741 | 46.0 | 10120 | 1.5179 | 0.3171 |
| 1.4857 | 47.0 | 10340 | 1.5178 | 0.3171 |
| 1.4547 | 48.0 | 10560 | 1.5177 | 0.3171 |
| 1.45 | 49.0 | 10780 | 1.5176 | 0.3171 |
| 1.5056 | 50.0 | 11000 | 1.5176 | 0.3171 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_rms_001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_rms_001_fold5
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6080
- Accuracy: 0.7805
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.4202 | 1.0 | 220 | 1.3858 | 0.2439 |
| 1.4846 | 2.0 | 440 | 1.3050 | 0.3659 |
| 1.213 | 3.0 | 660 | 1.2527 | 0.3659 |
| 1.2436 | 4.0 | 880 | 1.2166 | 0.3659 |
| 1.1042 | 5.0 | 1100 | 1.0859 | 0.4878 |
| 1.1247 | 6.0 | 1320 | 1.0109 | 0.5854 |
| 1.1005 | 7.0 | 1540 | 1.0680 | 0.4634 |
| 1.0292 | 8.0 | 1760 | 0.9693 | 0.5366 |
| 0.9154 | 9.0 | 1980 | 1.0620 | 0.6829 |
| 0.8944 | 10.0 | 2200 | 0.7963 | 0.6585 |
| 0.9136 | 11.0 | 2420 | 1.0510 | 0.5854 |
| 0.8882 | 12.0 | 2640 | 0.9455 | 0.6098 |
| 0.8527 | 13.0 | 2860 | 0.7251 | 0.6098 |
| 0.7429 | 14.0 | 3080 | 0.7653 | 0.7805 |
| 0.6936 | 15.0 | 3300 | 0.7806 | 0.7561 |
| 0.7049 | 16.0 | 3520 | 1.3071 | 0.6585 |
| 0.6294 | 17.0 | 3740 | 0.5950 | 0.7561 |
| 0.6472 | 18.0 | 3960 | 0.4686 | 0.7561 |
| 0.6452 | 19.0 | 4180 | 0.5722 | 0.7561 |
| 0.5301 | 20.0 | 4400 | 0.8370 | 0.6341 |
| 0.5967 | 21.0 | 4620 | 0.5708 | 0.8537 |
| 0.4666 | 22.0 | 4840 | 0.3950 | 0.8293 |
| 0.5239 | 23.0 | 5060 | 0.5273 | 0.8049 |
| 0.5888 | 24.0 | 5280 | 0.4686 | 0.7561 |
| 0.5034 | 25.0 | 5500 | 0.5275 | 0.7561 |
| 0.5755 | 26.0 | 5720 | 0.4866 | 0.7805 |
| 0.5913 | 27.0 | 5940 | 0.8503 | 0.7805 |
| 0.405 | 28.0 | 6160 | 0.5132 | 0.8049 |
| 0.4508 | 29.0 | 6380 | 0.7683 | 0.7805 |
| 0.4334 | 30.0 | 6600 | 0.7092 | 0.8293 |
| 0.4219 | 31.0 | 6820 | 0.5130 | 0.7561 |
| 0.4182 | 32.0 | 7040 | 0.5153 | 0.8293 |
| 0.308 | 33.0 | 7260 | 0.5501 | 0.7561 |
| 0.3445 | 34.0 | 7480 | 0.6037 | 0.8049 |
| 0.2565 | 35.0 | 7700 | 0.5862 | 0.7805 |
| 0.2803 | 36.0 | 7920 | 0.7253 | 0.7805 |
| 0.3045 | 37.0 | 8140 | 0.9133 | 0.6829 |
| 0.2202 | 38.0 | 8360 | 0.8750 | 0.7073 |
| 0.222 | 39.0 | 8580 | 0.5361 | 0.7805 |
| 0.1816 | 40.0 | 8800 | 1.1395 | 0.7317 |
| 0.2213 | 41.0 | 9020 | 0.9746 | 0.7561 |
| 0.16 | 42.0 | 9240 | 1.0585 | 0.7317 |
| 0.1084 | 43.0 | 9460 | 0.8736 | 0.7073 |
| 0.1243 | 44.0 | 9680 | 1.1686 | 0.8049 |
| 0.1009 | 45.0 | 9900 | 0.9862 | 0.7561 |
| 0.096 | 46.0 | 10120 | 1.4994 | 0.6829 |
| 0.089 | 47.0 | 10340 | 1.4415 | 0.7561 |
| 0.0473 | 48.0 | 10560 | 1.6197 | 0.7805 |
| 0.0266 | 49.0 | 10780 | 1.5765 | 0.7805 |
| 0.0418 | 50.0 | 11000 | 1.6080 | 0.7805 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_rms_0001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_rms_0001_fold1
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2959
- Accuracy: 0.7556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1164 | 1.0 | 215 | 0.6806 | 0.8444 |
| 0.1473 | 2.0 | 430 | 1.6512 | 0.6889 |
| 0.0323 | 3.0 | 645 | 2.5012 | 0.5556 |
| 0.0026 | 4.0 | 860 | 1.3440 | 0.7333 |
| 0.0644 | 5.0 | 1075 | 1.9037 | 0.6667 |
| 0.0218 | 6.0 | 1290 | 1.1429 | 0.7778 |
| 0.0001 | 7.0 | 1505 | 1.3004 | 0.7778 |
| 0.0 | 8.0 | 1720 | 1.5783 | 0.8 |
| 0.0 | 9.0 | 1935 | 1.6151 | 0.8 |
| 0.0 | 10.0 | 2150 | 1.7171 | 0.7778 |
| 0.0 | 11.0 | 2365 | 1.8524 | 0.7778 |
| 0.0 | 12.0 | 2580 | 2.0103 | 0.7778 |
| 0.0 | 13.0 | 2795 | 2.1601 | 0.7778 |
| 0.0 | 14.0 | 3010 | 2.3193 | 0.7778 |
| 0.0 | 15.0 | 3225 | 2.4911 | 0.7556 |
| 0.0 | 16.0 | 3440 | 2.6216 | 0.7556 |
| 0.0 | 17.0 | 3655 | 2.7129 | 0.7556 |
| 0.0 | 18.0 | 3870 | 2.8038 | 0.7556 |
| 0.0 | 19.0 | 4085 | 2.8933 | 0.7556 |
| 0.0 | 20.0 | 4300 | 2.9673 | 0.7556 |
| 0.0 | 21.0 | 4515 | 3.0230 | 0.7556 |
| 0.0 | 22.0 | 4730 | 3.0642 | 0.7556 |
| 0.0 | 23.0 | 4945 | 3.0970 | 0.7556 |
| 0.0 | 24.0 | 5160 | 3.1238 | 0.7556 |
| 0.0 | 25.0 | 5375 | 3.1458 | 0.7556 |
| 0.0 | 26.0 | 5590 | 3.1648 | 0.7556 |
| 0.0 | 27.0 | 5805 | 3.1810 | 0.7556 |
| 0.0 | 28.0 | 6020 | 3.1953 | 0.7556 |
| 0.0 | 29.0 | 6235 | 3.2081 | 0.7556 |
| 0.0 | 30.0 | 6450 | 3.2189 | 0.7556 |
| 0.0 | 31.0 | 6665 | 3.2288 | 0.7556 |
| 0.0 | 32.0 | 6880 | 3.2374 | 0.7556 |
| 0.0 | 33.0 | 7095 | 3.2451 | 0.7556 |
| 0.0 | 34.0 | 7310 | 3.2520 | 0.7556 |
| 0.0 | 35.0 | 7525 | 3.2584 | 0.7556 |
| 0.0 | 36.0 | 7740 | 3.2638 | 0.7556 |
| 0.0 | 37.0 | 7955 | 3.2687 | 0.7556 |
| 0.0 | 38.0 | 8170 | 3.2732 | 0.7556 |
| 0.0 | 39.0 | 8385 | 3.2771 | 0.7556 |
| 0.0 | 40.0 | 8600 | 3.2806 | 0.7556 |
| 0.0 | 41.0 | 8815 | 3.2837 | 0.7556 |
| 0.0 | 42.0 | 9030 | 3.2863 | 0.7556 |
| 0.0 | 43.0 | 9245 | 3.2887 | 0.7556 |
| 0.0 | 44.0 | 9460 | 3.2906 | 0.7556 |
| 0.0 | 45.0 | 9675 | 3.2923 | 0.7556 |
| 0.0 | 46.0 | 9890 | 3.2937 | 0.7556 |
| 0.0 | 47.0 | 10105 | 3.2947 | 0.7556 |
| 0.0 | 48.0 | 10320 | 3.2954 | 0.7556 |
| 0.0 | 49.0 | 10535 | 3.2958 | 0.7556 |
| 0.0 | 50.0 | 10750 | 3.2959 | 0.7556 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_rms_0001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_rms_0001_fold2
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5928
- Accuracy: 0.8
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0578 | 1.0 | 215 | 1.1579 | 0.7333 |
| 0.1017 | 2.0 | 430 | 1.7859 | 0.7556 |
| 0.022 | 3.0 | 645 | 1.6749 | 0.8 |
| 0.0643 | 4.0 | 860 | 2.1460 | 0.6889 |
| 0.0005 | 5.0 | 1075 | 1.2973 | 0.7778 |
| 0.0002 | 6.0 | 1290 | 1.6108 | 0.7778 |
| 0.0002 | 7.0 | 1505 | 1.9441 | 0.7556 |
| 0.0 | 8.0 | 1720 | 2.1424 | 0.7778 |
| 0.0 | 9.0 | 1935 | 2.2105 | 0.8 |
| 0.0 | 10.0 | 2150 | 2.3105 | 0.8 |
| 0.0 | 11.0 | 2365 | 2.4406 | 0.8 |
| 0.0 | 12.0 | 2580 | 2.5849 | 0.8 |
| 0.0 | 13.0 | 2795 | 2.7379 | 0.8 |
| 0.0 | 14.0 | 3010 | 2.8751 | 0.8 |
| 0.0 | 15.0 | 3225 | 2.9942 | 0.8 |
| 0.0 | 16.0 | 3440 | 3.0983 | 0.8 |
| 0.0 | 17.0 | 3655 | 3.1877 | 0.8 |
| 0.0 | 18.0 | 3870 | 3.2698 | 0.8 |
| 0.0 | 19.0 | 4085 | 3.3376 | 0.8 |
| 0.0 | 20.0 | 4300 | 3.3925 | 0.8 |
| 0.0 | 21.0 | 4515 | 3.4335 | 0.8 |
| 0.0 | 22.0 | 4730 | 3.4638 | 0.8 |
| 0.0 | 23.0 | 4945 | 3.4866 | 0.8 |
| 0.0 | 24.0 | 5160 | 3.5041 | 0.8 |
| 0.0 | 25.0 | 5375 | 3.5181 | 0.8 |
| 0.0 | 26.0 | 5590 | 3.5294 | 0.8 |
| 0.0 | 27.0 | 5805 | 3.5388 | 0.8 |
| 0.0 | 28.0 | 6020 | 3.5464 | 0.8 |
| 0.0 | 29.0 | 6235 | 3.5531 | 0.8 |
| 0.0 | 30.0 | 6450 | 3.5587 | 0.8 |
| 0.0 | 31.0 | 6665 | 3.5636 | 0.8 |
| 0.0 | 32.0 | 6880 | 3.5677 | 0.8 |
| 0.0 | 33.0 | 7095 | 3.5714 | 0.8 |
| 0.0 | 34.0 | 7310 | 3.5745 | 0.8 |
| 0.0 | 35.0 | 7525 | 3.5772 | 0.8 |
| 0.0 | 36.0 | 7740 | 3.5795 | 0.8 |
| 0.0 | 37.0 | 7955 | 3.5816 | 0.8 |
| 0.0 | 38.0 | 8170 | 3.5833 | 0.8 |
| 0.0 | 39.0 | 8385 | 3.5849 | 0.8 |
| 0.0 | 40.0 | 8600 | 3.5863 | 0.8 |
| 0.0 | 41.0 | 8815 | 3.5875 | 0.8 |
| 0.0 | 42.0 | 9030 | 3.5885 | 0.8 |
| 0.0 | 43.0 | 9245 | 3.5895 | 0.8 |
| 0.0 | 44.0 | 9460 | 3.5903 | 0.8 |
| 0.0 | 45.0 | 9675 | 3.5910 | 0.8 |
| 0.0 | 46.0 | 9890 | 3.5915 | 0.8 |
| 0.0 | 47.0 | 10105 | 3.5920 | 0.8 |
| 0.0 | 48.0 | 10320 | 3.5924 | 0.8 |
| 0.0 | 49.0 | 10535 | 3.5927 | 0.8 |
| 0.0 | 50.0 | 10750 | 3.5928 | 0.8 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_rms_00001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_rms_00001_fold2
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8515
- Accuracy: 0.8222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0268 | 1.0 | 215 | 0.7986 | 0.8 |
| 0.0002 | 2.0 | 430 | 1.0382 | 0.7556 |
| 0.0001 | 3.0 | 645 | 1.1402 | 0.7778 |
| 0.0 | 4.0 | 860 | 1.2476 | 0.7556 |
| 0.0 | 5.0 | 1075 | 1.3476 | 0.7556 |
| 0.0 | 6.0 | 1290 | 1.4725 | 0.7556 |
| 0.0 | 7.0 | 1505 | 1.6233 | 0.7778 |
| 0.0 | 8.0 | 1720 | 1.7734 | 0.7778 |
| 0.0 | 9.0 | 1935 | 1.8805 | 0.7778 |
| 0.0 | 10.0 | 2150 | 1.8889 | 0.8 |
| 0.0 | 11.0 | 2365 | 2.1587 | 0.7778 |
| 0.0 | 12.0 | 2580 | 2.0588 | 0.8 |
| 0.0 | 13.0 | 2795 | 2.1202 | 0.7778 |
| 0.0 | 14.0 | 3010 | 2.1555 | 0.7778 |
| 0.0 | 15.0 | 3225 | 1.9136 | 0.8 |
| 0.0 | 16.0 | 3440 | 1.9929 | 0.7778 |
| 0.0 | 17.0 | 3655 | 1.9161 | 0.8 |
| 0.0 | 18.0 | 3870 | 1.9718 | 0.7778 |
| 0.0 | 19.0 | 4085 | 1.9351 | 0.7778 |
| 0.0 | 20.0 | 4300 | 1.8731 | 0.8 |
| 0.0 | 21.0 | 4515 | 2.0003 | 0.7778 |
| 0.0 | 22.0 | 4730 | 1.9341 | 0.8222 |
| 0.0 | 23.0 | 4945 | 1.8619 | 0.8222 |
| 0.0 | 24.0 | 5160 | 1.9436 | 0.7778 |
| 0.0 | 25.0 | 5375 | 1.8959 | 0.8 |
| 0.0 | 26.0 | 5590 | 1.9309 | 0.8 |
| 0.0 | 27.0 | 5805 | 1.9142 | 0.8222 |
| 0.0 | 28.0 | 6020 | 1.8863 | 0.8222 |
| 0.0 | 29.0 | 6235 | 1.8613 | 0.8222 |
| 0.0 | 30.0 | 6450 | 1.9273 | 0.8222 |
| 0.0 | 31.0 | 6665 | 1.8653 | 0.8222 |
| 0.0 | 32.0 | 6880 | 1.8521 | 0.8 |
| 0.0 | 33.0 | 7095 | 1.8442 | 0.8222 |
| 0.0 | 34.0 | 7310 | 1.8633 | 0.8222 |
| 0.0 | 35.0 | 7525 | 1.8741 | 0.8222 |
| 0.0 | 36.0 | 7740 | 1.8375 | 0.8222 |
| 0.0 | 37.0 | 7955 | 1.8547 | 0.8222 |
| 0.0 | 38.0 | 8170 | 1.8764 | 0.8 |
| 0.0 | 39.0 | 8385 | 1.8572 | 0.8222 |
| 0.0 | 40.0 | 8600 | 1.8485 | 0.8222 |
| 0.0 | 41.0 | 8815 | 1.8477 | 0.8222 |
| 0.0 | 42.0 | 9030 | 1.8438 | 0.8222 |
| 0.0 | 43.0 | 9245 | 1.8448 | 0.8222 |
| 0.0 | 44.0 | 9460 | 1.8731 | 0.8222 |
| 0.0 | 45.0 | 9675 | 1.8515 | 0.8222 |
| 0.0 | 46.0 | 9890 | 1.8522 | 0.8222 |
| 0.0 | 47.0 | 10105 | 1.8514 | 0.8222 |
| 0.0 | 48.0 | 10320 | 1.8557 | 0.8222 |
| 0.0 | 49.0 | 10535 | 1.8518 | 0.8222 |
| 0.0 | 50.0 | 10750 | 1.8515 | 0.8222 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_rms_0001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_rms_0001_fold3
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3623
- Accuracy: 0.8605
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.162 | 1.0 | 217 | 0.7397 | 0.8140 |
| 0.0554 | 2.0 | 434 | 0.5902 | 0.8605 |
| 0.0178 | 3.0 | 651 | 1.1734 | 0.8605 |
| 0.0009 | 4.0 | 868 | 1.2319 | 0.8372 |
| 0.0013 | 5.0 | 1085 | 1.7982 | 0.7442 |
| 0.0274 | 6.0 | 1302 | 1.0518 | 0.8140 |
| 0.0022 | 7.0 | 1519 | 1.2789 | 0.7907 |
| 0.0002 | 8.0 | 1736 | 1.6091 | 0.7907 |
| 0.0002 | 9.0 | 1953 | 1.3608 | 0.7907 |
| 0.0001 | 10.0 | 2170 | 1.7662 | 0.7674 |
| 0.0001 | 11.0 | 2387 | 1.4719 | 0.8372 |
| 0.0001 | 12.0 | 2604 | 0.9802 | 0.8837 |
| 0.0537 | 13.0 | 2821 | 1.7727 | 0.8140 |
| 0.0 | 14.0 | 3038 | 1.4355 | 0.8372 |
| 0.0002 | 15.0 | 3255 | 1.2526 | 0.8140 |
| 0.0071 | 16.0 | 3472 | 1.9556 | 0.7674 |
| 0.0 | 17.0 | 3689 | 1.8517 | 0.7907 |
| 0.0016 | 18.0 | 3906 | 1.4335 | 0.8372 |
| 0.0124 | 19.0 | 4123 | 1.3513 | 0.7907 |
| 0.0235 | 20.0 | 4340 | 2.0239 | 0.7907 |
| 0.0 | 21.0 | 4557 | 1.2893 | 0.8605 |
| 0.0 | 22.0 | 4774 | 1.3114 | 0.8605 |
| 0.0 | 23.0 | 4991 | 1.3523 | 0.8605 |
| 0.0 | 24.0 | 5208 | 1.4204 | 0.8372 |
| 0.0 | 25.0 | 5425 | 1.5136 | 0.8372 |
| 0.0 | 26.0 | 5642 | 1.6287 | 0.8605 |
| 0.0 | 27.0 | 5859 | 1.7481 | 0.8605 |
| 0.0 | 28.0 | 6076 | 1.8569 | 0.8605 |
| 0.0 | 29.0 | 6293 | 1.9482 | 0.8605 |
| 0.0 | 30.0 | 6510 | 2.0219 | 0.8605 |
| 0.0 | 31.0 | 6727 | 2.0881 | 0.8605 |
| 0.0 | 32.0 | 6944 | 2.1406 | 0.8605 |
| 0.0 | 33.0 | 7161 | 2.1867 | 0.8605 |
| 0.0 | 34.0 | 7378 | 2.2231 | 0.8605 |
| 0.0 | 35.0 | 7595 | 2.2508 | 0.8605 |
| 0.0 | 36.0 | 7812 | 2.2725 | 0.8605 |
| 0.0 | 37.0 | 8029 | 2.2899 | 0.8605 |
| 0.0 | 38.0 | 8246 | 2.3039 | 0.8605 |
| 0.0 | 39.0 | 8463 | 2.3156 | 0.8605 |
| 0.0 | 40.0 | 8680 | 2.3253 | 0.8605 |
| 0.0 | 41.0 | 8897 | 2.3335 | 0.8605 |
| 0.0 | 42.0 | 9114 | 2.3403 | 0.8605 |
| 0.0 | 43.0 | 9331 | 2.3460 | 0.8605 |
| 0.0 | 44.0 | 9548 | 2.3507 | 0.8605 |
| 0.0 | 45.0 | 9765 | 2.3545 | 0.8605 |
| 0.0 | 46.0 | 9982 | 2.3575 | 0.8605 |
| 0.0 | 47.0 | 10199 | 2.3597 | 0.8605 |
| 0.0 | 48.0 | 10416 | 2.3612 | 0.8605 |
| 0.0 | 49.0 | 10633 | 2.3621 | 0.8605 |
| 0.0 | 50.0 | 10850 | 2.3623 | 0.8605 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_rms_00001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_rms_00001_fold3
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4522
- Accuracy: 0.9535
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0295 | 1.0 | 217 | 0.4206 | 0.8837 |
| 0.002 | 2.0 | 434 | 0.1981 | 0.9070 |
| 0.0001 | 3.0 | 651 | 0.1728 | 0.8837 |
| 0.0 | 4.0 | 868 | 0.2047 | 0.9070 |
| 0.0 | 5.0 | 1085 | 0.2712 | 0.9070 |
| 0.0 | 6.0 | 1302 | 0.2693 | 0.9302 |
| 0.0 | 7.0 | 1519 | 0.3423 | 0.9070 |
| 0.0 | 8.0 | 1736 | 0.3994 | 0.9302 |
| 0.0 | 9.0 | 1953 | 0.3966 | 0.9302 |
| 0.0 | 10.0 | 2170 | 0.5128 | 0.9302 |
| 0.0 | 11.0 | 2387 | 0.4001 | 0.9302 |
| 0.0 | 12.0 | 2604 | 0.5315 | 0.9302 |
| 0.0 | 13.0 | 2821 | 0.5591 | 0.9302 |
| 0.0 | 14.0 | 3038 | 0.5631 | 0.9302 |
| 0.0 | 15.0 | 3255 | 0.4505 | 0.9302 |
| 0.0 | 16.0 | 3472 | 0.4593 | 0.9302 |
| 0.0 | 17.0 | 3689 | 0.5550 | 0.9302 |
| 0.0 | 18.0 | 3906 | 0.4957 | 0.9302 |
| 0.0 | 19.0 | 4123 | 0.4675 | 0.9302 |
| 0.0 | 20.0 | 4340 | 0.4440 | 0.9302 |
| 0.0 | 21.0 | 4557 | 0.5276 | 0.9302 |
| 0.0 | 22.0 | 4774 | 0.5480 | 0.9302 |
| 0.0 | 23.0 | 4991 | 0.4757 | 0.9302 |
| 0.0 | 24.0 | 5208 | 0.4990 | 0.9302 |
| 0.0 | 25.0 | 5425 | 0.4333 | 0.9535 |
| 0.0 | 26.0 | 5642 | 0.4085 | 0.9535 |
| 0.0 | 27.0 | 5859 | 0.4460 | 0.9535 |
| 0.0 | 28.0 | 6076 | 0.4859 | 0.9302 |
| 0.0 | 29.0 | 6293 | 0.4427 | 0.9302 |
| 0.0 | 30.0 | 6510 | 0.4450 | 0.9535 |
| 0.0 | 31.0 | 6727 | 0.4625 | 0.9535 |
| 0.0 | 32.0 | 6944 | 0.4421 | 0.9535 |
| 0.0 | 33.0 | 7161 | 0.4667 | 0.9302 |
| 0.0 | 34.0 | 7378 | 0.4474 | 0.9535 |
| 0.0 | 35.0 | 7595 | 0.4297 | 0.9535 |
| 0.0 | 36.0 | 7812 | 0.4695 | 0.9535 |
| 0.0 | 37.0 | 8029 | 0.4421 | 0.9535 |
| 0.0 | 38.0 | 8246 | 0.4281 | 0.9535 |
| 0.0 | 39.0 | 8463 | 0.4143 | 0.9535 |
| 0.0 | 40.0 | 8680 | 0.4529 | 0.9535 |
| 0.0 | 41.0 | 8897 | 0.4584 | 0.9535 |
| 0.0 | 42.0 | 9114 | 0.4637 | 0.9535 |
| 0.0 | 43.0 | 9331 | 0.4580 | 0.9535 |
| 0.0 | 44.0 | 9548 | 0.4337 | 0.9535 |
| 0.0 | 45.0 | 9765 | 0.4550 | 0.9535 |
| 0.0 | 46.0 | 9982 | 0.4612 | 0.9535 |
| 0.0 | 47.0 | 10199 | 0.4403 | 0.9535 |
| 0.0 | 48.0 | 10416 | 0.4548 | 0.9535 |
| 0.0 | 49.0 | 10633 | 0.4544 | 0.9535 |
| 0.0 | 50.0 | 10850 | 0.4522 | 0.9535 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_rms_00001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_rms_00001_fold4
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0414
- Accuracy: 0.9762
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0261 | 1.0 | 219 | 0.2813 | 0.9048 |
| 0.0004 | 2.0 | 438 | 0.4646 | 0.9048 |
| 0.0318 | 3.0 | 657 | 0.2739 | 0.9524 |
| 0.0001 | 4.0 | 876 | 0.2467 | 0.9524 |
| 0.0001 | 5.0 | 1095 | 0.3508 | 0.9524 |
| 0.0 | 6.0 | 1314 | 0.3119 | 0.9524 |
| 0.0006 | 7.0 | 1533 | 0.2283 | 0.9286 |
| 0.0 | 8.0 | 1752 | 0.4350 | 0.9048 |
| 0.0 | 9.0 | 1971 | 0.4640 | 0.9048 |
| 0.0 | 10.0 | 2190 | 0.4527 | 0.9048 |
| 0.0 | 11.0 | 2409 | 0.4193 | 0.9286 |
| 0.0 | 12.0 | 2628 | 0.3715 | 0.9286 |
| 0.0 | 13.0 | 2847 | 0.3628 | 0.9286 |
| 0.0 | 14.0 | 3066 | 0.3061 | 0.9524 |
| 0.0 | 15.0 | 3285 | 0.2734 | 0.9524 |
| 0.0 | 16.0 | 3504 | 0.2564 | 0.9762 |
| 0.0 | 17.0 | 3723 | 0.2341 | 0.9762 |
| 0.0 | 18.0 | 3942 | 0.1999 | 0.9762 |
| 0.0 | 19.0 | 4161 | 0.1825 | 0.9762 |
| 0.0 | 20.0 | 4380 | 0.1638 | 0.9762 |
| 0.0 | 21.0 | 4599 | 0.1534 | 0.9762 |
| 0.0 | 22.0 | 4818 | 0.1387 | 0.9762 |
| 0.0 | 23.0 | 5037 | 0.1091 | 0.9762 |
| 0.0 | 24.0 | 5256 | 0.1221 | 0.9762 |
| 0.0 | 25.0 | 5475 | 0.1159 | 0.9762 |
| 0.0 | 26.0 | 5694 | 0.1135 | 0.9762 |
| 0.0 | 27.0 | 5913 | 0.1212 | 0.9762 |
| 0.0 | 28.0 | 6132 | 0.1127 | 0.9762 |
| 0.0 | 29.0 | 6351 | 0.0979 | 0.9762 |
| 0.0 | 30.0 | 6570 | 0.0810 | 0.9762 |
| 0.0 | 31.0 | 6789 | 0.0741 | 0.9762 |
| 0.0 | 32.0 | 7008 | 0.0839 | 0.9762 |
| 0.0 | 33.0 | 7227 | 0.0751 | 0.9762 |
| 0.0 | 34.0 | 7446 | 0.0611 | 0.9762 |
| 0.0 | 35.0 | 7665 | 0.0643 | 0.9762 |
| 0.0 | 36.0 | 7884 | 0.0533 | 0.9762 |
| 0.0 | 37.0 | 8103 | 0.0608 | 0.9762 |
| 0.0 | 38.0 | 8322 | 0.0466 | 0.9762 |
| 0.0 | 39.0 | 8541 | 0.0483 | 0.9762 |
| 0.0 | 40.0 | 8760 | 0.0457 | 0.9762 |
| 0.0 | 41.0 | 8979 | 0.0380 | 0.9762 |
| 0.0 | 42.0 | 9198 | 0.0427 | 0.9762 |
| 0.0 | 43.0 | 9417 | 0.0480 | 0.9762 |
| 0.0 | 44.0 | 9636 | 0.0456 | 0.9762 |
| 0.0 | 45.0 | 9855 | 0.0409 | 0.9762 |
| 0.0 | 46.0 | 10074 | 0.0400 | 0.9762 |
| 0.0 | 47.0 | 10293 | 0.0425 | 0.9762 |
| 0.0 | 48.0 | 10512 | 0.0391 | 0.9762 |
| 0.0 | 49.0 | 10731 | 0.0420 | 0.9762 |
| 0.0 | 50.0 | 10950 | 0.0414 | 0.9762 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_rms_0001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_rms_0001_fold4
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2918
- Accuracy: 0.9762
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5087 | 1.0 | 219 | 0.4037 | 0.7619 |
| 0.1898 | 2.0 | 438 | 0.1339 | 0.9762 |
| 0.0553 | 3.0 | 657 | 0.0324 | 0.9762 |
| 0.0797 | 4.0 | 876 | 0.1848 | 0.9762 |
| 0.0341 | 5.0 | 1095 | 0.2228 | 0.9762 |
| 0.0296 | 6.0 | 1314 | 0.2257 | 0.9286 |
| 0.0744 | 7.0 | 1533 | 0.1717 | 0.9524 |
| 0.0049 | 8.0 | 1752 | 0.3696 | 0.9048 |
| 0.0089 | 9.0 | 1971 | 0.3392 | 0.9286 |
| 0.0001 | 10.0 | 2190 | 0.4146 | 0.9286 |
| 0.0322 | 11.0 | 2409 | 0.3832 | 0.9524 |
| 0.0165 | 12.0 | 2628 | 0.7717 | 0.9048 |
| 0.0 | 13.0 | 2847 | 0.2462 | 0.9762 |
| 0.0339 | 14.0 | 3066 | 0.0004 | 1.0 |
| 0.0335 | 15.0 | 3285 | 0.0062 | 1.0 |
| 0.0205 | 16.0 | 3504 | 0.2197 | 0.9524 |
| 0.0 | 17.0 | 3723 | 0.1117 | 0.9762 |
| 0.0 | 18.0 | 3942 | 0.1233 | 0.9762 |
| 0.0 | 19.0 | 4161 | 0.1357 | 0.9762 |
| 0.0 | 20.0 | 4380 | 0.1491 | 0.9762 |
| 0.0 | 21.0 | 4599 | 0.1602 | 0.9762 |
| 0.0 | 22.0 | 4818 | 0.1668 | 0.9762 |
| 0.0 | 23.0 | 5037 | 0.1701 | 0.9762 |
| 0.0 | 24.0 | 5256 | 0.1738 | 0.9762 |
| 0.0 | 25.0 | 5475 | 0.1788 | 0.9762 |
| 0.0 | 26.0 | 5694 | 0.1882 | 0.9762 |
| 0.0 | 27.0 | 5913 | 0.2002 | 0.9762 |
| 0.0 | 28.0 | 6132 | 0.2109 | 0.9762 |
| 0.0 | 29.0 | 6351 | 0.2232 | 0.9762 |
| 0.0 | 30.0 | 6570 | 0.2349 | 0.9762 |
| 0.0 | 31.0 | 6789 | 0.2441 | 0.9762 |
| 0.0 | 32.0 | 7008 | 0.2518 | 0.9762 |
| 0.0 | 33.0 | 7227 | 0.2582 | 0.9762 |
| 0.0 | 34.0 | 7446 | 0.2637 | 0.9762 |
| 0.0 | 35.0 | 7665 | 0.2684 | 0.9762 |
| 0.0 | 36.0 | 7884 | 0.2722 | 0.9762 |
| 0.0 | 37.0 | 8103 | 0.2755 | 0.9762 |
| 0.0 | 38.0 | 8322 | 0.2784 | 0.9762 |
| 0.0 | 39.0 | 8541 | 0.2809 | 0.9762 |
| 0.0 | 40.0 | 8760 | 0.2832 | 0.9762 |
| 0.0 | 41.0 | 8979 | 0.2850 | 0.9762 |
| 0.0 | 42.0 | 9198 | 0.2865 | 0.9762 |
| 0.0 | 43.0 | 9417 | 0.2879 | 0.9762 |
| 0.0 | 44.0 | 9636 | 0.2889 | 0.9762 |
| 0.0 | 45.0 | 9855 | 0.2898 | 0.9762 |
| 0.0 | 46.0 | 10074 | 0.2906 | 0.9762 |
| 0.0 | 47.0 | 10293 | 0.2911 | 0.9762 |
| 0.0 | 48.0 | 10512 | 0.2915 | 0.9762 |
| 0.0 | 49.0 | 10731 | 0.2917 | 0.9762 |
| 0.0 | 50.0 | 10950 | 0.2918 | 0.9762 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_rms_00001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_rms_00001_fold5
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4515
- Accuracy: 0.9024
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0144 | 1.0 | 220 | 0.3466 | 0.8537 |
| 0.0049 | 2.0 | 440 | 0.2883 | 0.9268 |
| 0.0001 | 3.0 | 660 | 0.2605 | 0.9268 |
| 0.0 | 4.0 | 880 | 0.2900 | 0.9268 |
| 0.0 | 5.0 | 1100 | 0.2493 | 0.9756 |
| 0.0 | 6.0 | 1320 | 0.2569 | 0.9512 |
| 0.0 | 7.0 | 1540 | 0.2394 | 0.9512 |
| 0.0 | 8.0 | 1760 | 0.2786 | 0.9512 |
| 0.0 | 9.0 | 1980 | 0.2666 | 0.9512 |
| 0.0 | 10.0 | 2200 | 0.3124 | 0.9512 |
| 0.0 | 11.0 | 2420 | 0.2854 | 0.9512 |
| 0.0 | 12.0 | 2640 | 0.3639 | 0.9512 |
| 0.0 | 13.0 | 2860 | 0.3605 | 0.9512 |
| 0.0 | 14.0 | 3080 | 0.2651 | 0.9512 |
| 0.0 | 15.0 | 3300 | 0.3080 | 0.9268 |
| 0.0 | 16.0 | 3520 | 0.3418 | 0.9268 |
| 0.0 | 17.0 | 3740 | 0.2718 | 0.9268 |
| 0.0 | 18.0 | 3960 | 0.3121 | 0.9268 |
| 0.0 | 19.0 | 4180 | 0.4157 | 0.9024 |
| 0.0 | 20.0 | 4400 | 0.3977 | 0.9024 |
| 0.0 | 21.0 | 4620 | 0.4932 | 0.9024 |
| 0.0 | 22.0 | 4840 | 0.4660 | 0.9024 |
| 0.0 | 23.0 | 5060 | 0.3734 | 0.9024 |
| 0.0 | 24.0 | 5280 | 0.4701 | 0.9024 |
| 0.0 | 25.0 | 5500 | 0.3205 | 0.9268 |
| 0.0 | 26.0 | 5720 | 0.4371 | 0.9024 |
| 0.0 | 27.0 | 5940 | 0.3973 | 0.9024 |
| 0.0 | 28.0 | 6160 | 0.5710 | 0.9024 |
| 0.0 | 29.0 | 6380 | 0.4409 | 0.9024 |
| 0.0 | 30.0 | 6600 | 0.4151 | 0.9024 |
| 0.0 | 31.0 | 6820 | 0.4511 | 0.9024 |
| 0.0 | 32.0 | 7040 | 0.4474 | 0.9024 |
| 0.0 | 33.0 | 7260 | 0.4280 | 0.9024 |
| 0.0 | 34.0 | 7480 | 0.4279 | 0.9024 |
| 0.0 | 35.0 | 7700 | 0.4240 | 0.9024 |
| 0.0 | 36.0 | 7920 | 0.4599 | 0.9024 |
| 0.0 | 37.0 | 8140 | 0.4436 | 0.9024 |
| 0.0 | 38.0 | 8360 | 0.4580 | 0.9024 |
| 0.0 | 39.0 | 8580 | 0.4591 | 0.9024 |
| 0.0 | 40.0 | 8800 | 0.4659 | 0.9024 |
| 0.0 | 41.0 | 9020 | 0.4697 | 0.9024 |
| 0.0 | 42.0 | 9240 | 0.4218 | 0.9024 |
| 0.0 | 43.0 | 9460 | 0.4390 | 0.9024 |
| 0.0 | 44.0 | 9680 | 0.4679 | 0.9024 |
| 0.0 | 45.0 | 9900 | 0.4475 | 0.9024 |
| 0.0 | 46.0 | 10120 | 0.4486 | 0.9024 |
| 0.0 | 47.0 | 10340 | 0.4470 | 0.9024 |
| 0.0 | 48.0 | 10560 | 0.4530 | 0.9024 |
| 0.0 | 49.0 | 10780 | 0.4470 | 0.9024 |
| 0.0 | 50.0 | 11000 | 0.4515 | 0.9024 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_deit_small_rms_0001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_rms_0001_fold5
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9788
- Accuracy: 0.8049
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1371 | 1.0 | 220 | 0.3701 | 0.8293 |
| 0.0445 | 2.0 | 440 | 1.9924 | 0.7073 |
| 0.0133 | 3.0 | 660 | 1.1496 | 0.8049 |
| 0.0131 | 4.0 | 880 | 0.3434 | 0.9024 |
| 0.0354 | 5.0 | 1100 | 0.4117 | 0.8537 |
| 0.0497 | 6.0 | 1320 | 0.2267 | 0.9268 |
| 0.0845 | 7.0 | 1540 | 1.0625 | 0.8293 |
| 0.0001 | 8.0 | 1760 | 1.4387 | 0.7317 |
| 0.0648 | 9.0 | 1980 | 0.2862 | 0.9756 |
| 0.0159 | 10.0 | 2200 | 0.5399 | 0.8780 |
| 0.0001 | 11.0 | 2420 | 0.6240 | 0.8293 |
| 0.0069 | 12.0 | 2640 | 0.9226 | 0.8049 |
| 0.071 | 13.0 | 2860 | 1.0657 | 0.8293 |
| 0.0001 | 14.0 | 3080 | 1.2561 | 0.7805 |
| 0.0 | 15.0 | 3300 | 1.2385 | 0.7805 |
| 0.0 | 16.0 | 3520 | 1.2648 | 0.7805 |
| 0.0 | 17.0 | 3740 | 1.3089 | 0.7805 |
| 0.0 | 18.0 | 3960 | 1.3750 | 0.7805 |
| 0.0 | 19.0 | 4180 | 1.4566 | 0.7805 |
| 0.0 | 20.0 | 4400 | 1.5453 | 0.8049 |
| 0.0 | 21.0 | 4620 | 1.6338 | 0.8049 |
| 0.0 | 22.0 | 4840 | 1.6896 | 0.8049 |
| 0.0 | 23.0 | 5060 | 1.7347 | 0.8049 |
| 0.0 | 24.0 | 5280 | 1.7835 | 0.8049 |
| 0.0 | 25.0 | 5500 | 1.8255 | 0.8049 |
| 0.0 | 26.0 | 5720 | 1.8621 | 0.8049 |
| 0.0 | 27.0 | 5940 | 1.8887 | 0.8049 |
| 0.0 | 28.0 | 6160 | 1.9074 | 0.8049 |
| 0.0 | 29.0 | 6380 | 1.9212 | 0.8049 |
| 0.0 | 30.0 | 6600 | 1.9317 | 0.8049 |
| 0.0 | 31.0 | 6820 | 1.9398 | 0.8049 |
| 0.0 | 32.0 | 7040 | 1.9465 | 0.8049 |
| 0.0 | 33.0 | 7260 | 1.9519 | 0.8049 |
| 0.0 | 34.0 | 7480 | 1.9563 | 0.8049 |
| 0.0 | 35.0 | 7700 | 1.9601 | 0.8049 |
| 0.0 | 36.0 | 7920 | 1.9632 | 0.8049 |
| 0.0 | 37.0 | 8140 | 1.9659 | 0.8049 |
| 0.0 | 38.0 | 8360 | 1.9682 | 0.8049 |
| 0.0 | 39.0 | 8580 | 1.9702 | 0.8049 |
| 0.0 | 40.0 | 8800 | 1.9718 | 0.8049 |
| 0.0 | 41.0 | 9020 | 1.9733 | 0.8049 |
| 0.0 | 42.0 | 9240 | 1.9745 | 0.8049 |
| 0.0 | 43.0 | 9460 | 1.9756 | 0.8049 |
| 0.0 | 44.0 | 9680 | 1.9764 | 0.8049 |
| 0.0 | 45.0 | 9900 | 1.9772 | 0.8049 |
| 0.0 | 46.0 | 10120 | 1.9777 | 0.8049 |
| 0.0 | 47.0 | 10340 | 1.9782 | 0.8049 |
| 0.0 | 48.0 | 10560 | 1.9785 | 0.8049 |
| 0.0 | 49.0 | 10780 | 1.9787 | 0.8049 |
| 0.0 | 50.0 | 11000 | 1.9788 | 0.8049 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_beit_large_adamax_0001_fold1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_beit_large_adamax_0001_fold1
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6987
- Accuracy: 0.8889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0285 | 1.0 | 215 | 0.5849 | 0.8 |
| 0.0006 | 2.0 | 430 | 0.7781 | 0.8222 |
| 0.0 | 3.0 | 645 | 0.5158 | 0.8 |
| 0.0 | 4.0 | 860 | 0.4099 | 0.8444 |
| 0.0 | 5.0 | 1075 | 0.4040 | 0.8889 |
| 0.0 | 6.0 | 1290 | 0.4087 | 0.8889 |
| 0.0029 | 7.0 | 1505 | 0.2585 | 0.8889 |
| 0.0159 | 8.0 | 1720 | 0.6738 | 0.9111 |
| 0.0 | 9.0 | 1935 | 0.7387 | 0.8889 |
| 0.0 | 10.0 | 2150 | 0.3266 | 0.9111 |
| 0.0001 | 11.0 | 2365 | 0.5064 | 0.8667 |
| 0.0 | 12.0 | 2580 | 0.3031 | 0.9111 |
| 0.0 | 13.0 | 2795 | 0.3143 | 0.9111 |
| 0.0 | 14.0 | 3010 | 0.3219 | 0.9111 |
| 0.0 | 15.0 | 3225 | 0.3481 | 0.9111 |
| 0.0 | 16.0 | 3440 | 0.3485 | 0.9111 |
| 0.0 | 17.0 | 3655 | 0.3724 | 0.9111 |
| 0.0 | 18.0 | 3870 | 0.3706 | 0.8889 |
| 0.0 | 19.0 | 4085 | 0.3603 | 0.9111 |
| 0.0 | 20.0 | 4300 | 0.3742 | 0.9111 |
| 0.0 | 21.0 | 4515 | 0.5745 | 0.8444 |
| 0.0 | 22.0 | 4730 | 0.4247 | 0.8444 |
| 0.0 | 23.0 | 4945 | 0.4328 | 0.8667 |
| 0.0 | 24.0 | 5160 | 0.3958 | 0.8889 |
| 0.0 | 25.0 | 5375 | 0.4106 | 0.9111 |
| 0.0 | 26.0 | 5590 | 0.4237 | 0.8667 |
| 0.0 | 27.0 | 5805 | 0.4907 | 0.8667 |
| 0.0 | 28.0 | 6020 | 0.5123 | 0.8667 |
| 0.0 | 29.0 | 6235 | 0.4509 | 0.8889 |
| 0.0 | 30.0 | 6450 | 0.5376 | 0.8889 |
| 0.0 | 31.0 | 6665 | 0.5524 | 0.8889 |
| 0.0 | 32.0 | 6880 | 0.6004 | 0.8889 |
| 0.0 | 33.0 | 7095 | 0.5947 | 0.8889 |
| 0.0 | 34.0 | 7310 | 0.6506 | 0.8889 |
| 0.0 | 35.0 | 7525 | 0.8615 | 0.8889 |
| 0.0 | 36.0 | 7740 | 0.6453 | 0.8889 |
| 0.0 | 37.0 | 7955 | 0.6879 | 0.8889 |
| 0.0 | 38.0 | 8170 | 0.6869 | 0.8889 |
| 0.0 | 39.0 | 8385 | 0.7122 | 0.8889 |
| 0.0 | 40.0 | 8600 | 0.7111 | 0.8889 |
| 0.0 | 41.0 | 8815 | 0.7028 | 0.8889 |
| 0.0 | 42.0 | 9030 | 0.7091 | 0.8889 |
| 0.0 | 43.0 | 9245 | 0.7217 | 0.8889 |
| 0.0 | 44.0 | 9460 | 0.7018 | 0.8889 |
| 0.0 | 45.0 | 9675 | 0.7281 | 0.8889 |
| 0.0 | 46.0 | 9890 | 0.7227 | 0.8889 |
| 0.0 | 47.0 | 10105 | 0.7233 | 0.8889 |
| 0.0 | 48.0 | 10320 | 0.7063 | 0.8889 |
| 0.0 | 49.0 | 10535 | 0.6973 | 0.8889 |
| 0.0 | 50.0 | 10750 | 0.6987 | 0.8889 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_beit_large_adamax_0001_fold2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_beit_large_adamax_0001_fold2
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5515
- Accuracy: 0.8444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0224 | 1.0 | 215 | 0.7690 | 0.8222 |
| 0.0 | 2.0 | 430 | 0.9419 | 0.8222 |
| 0.0 | 3.0 | 645 | 0.9930 | 0.8667 |
| 0.0 | 4.0 | 860 | 0.8917 | 0.8444 |
| 0.0 | 5.0 | 1075 | 0.9011 | 0.8667 |
| 0.0 | 6.0 | 1290 | 0.8682 | 0.8667 |
| 0.0016 | 7.0 | 1505 | 1.2238 | 0.8444 |
| 0.0197 | 8.0 | 1720 | 1.2274 | 0.8667 |
| 0.0027 | 9.0 | 1935 | 1.0944 | 0.8444 |
| 0.0058 | 10.0 | 2150 | 1.9516 | 0.7778 |
| 0.0 | 11.0 | 2365 | 1.8577 | 0.7556 |
| 0.0 | 12.0 | 2580 | 1.7768 | 0.8 |
| 0.0 | 13.0 | 2795 | 1.1199 | 0.7778 |
| 0.0 | 14.0 | 3010 | 1.2644 | 0.8222 |
| 0.0 | 15.0 | 3225 | 0.9150 | 0.8889 |
| 0.0 | 16.0 | 3440 | 0.8728 | 0.8889 |
| 0.0 | 17.0 | 3655 | 0.8904 | 0.8889 |
| 0.0 | 18.0 | 3870 | 0.8975 | 0.8889 |
| 0.0 | 19.0 | 4085 | 0.9193 | 0.8889 |
| 0.0 | 20.0 | 4300 | 0.9261 | 0.8889 |
| 0.0 | 21.0 | 4515 | 1.6757 | 0.8 |
| 0.0 | 22.0 | 4730 | 1.3218 | 0.8444 |
| 0.0 | 23.0 | 4945 | 1.3867 | 0.8222 |
| 0.0 | 24.0 | 5160 | 1.3833 | 0.8444 |
| 0.0 | 25.0 | 5375 | 1.2895 | 0.8444 |
| 0.0 | 26.0 | 5590 | 1.2783 | 0.8667 |
| 0.0 | 27.0 | 5805 | 1.2770 | 0.8667 |
| 0.0 | 28.0 | 6020 | 1.2426 | 0.8667 |
| 0.0 | 29.0 | 6235 | 1.2537 | 0.8667 |
| 0.0 | 30.0 | 6450 | 1.2475 | 0.8667 |
| 0.0 | 31.0 | 6665 | 1.2602 | 0.8667 |
| 0.0 | 32.0 | 6880 | 1.2779 | 0.8667 |
| 0.0 | 33.0 | 7095 | 1.2891 | 0.8667 |
| 0.0 | 34.0 | 7310 | 1.3447 | 0.8444 |
| 0.0 | 35.0 | 7525 | 1.3109 | 0.8667 |
| 0.0 | 36.0 | 7740 | 1.3704 | 0.8667 |
| 0.0 | 37.0 | 7955 | 1.5945 | 0.8 |
| 0.0 | 38.0 | 8170 | 1.5665 | 0.8444 |
| 0.0 | 39.0 | 8385 | 1.4945 | 0.8444 |
| 0.0 | 40.0 | 8600 | 1.4921 | 0.8444 |
| 0.0 | 41.0 | 8815 | 1.5103 | 0.8444 |
| 0.0 | 42.0 | 9030 | 1.5661 | 0.8444 |
| 0.0 | 43.0 | 9245 | 1.5778 | 0.8444 |
| 0.0 | 44.0 | 9460 | 1.5715 | 0.8444 |
| 0.0 | 45.0 | 9675 | 1.5931 | 0.8444 |
| 0.0 | 46.0 | 9890 | 1.5813 | 0.8444 |
| 0.0 | 47.0 | 10105 | 1.5501 | 0.8444 |
| 0.0 | 48.0 | 10320 | 1.5512 | 0.8444 |
| 0.0 | 49.0 | 10535 | 1.5477 | 0.8444 |
| 0.0 | 50.0 | 10750 | 1.5515 | 0.8444 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_beit_large_adamax_0001_fold3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_beit_large_adamax_0001_fold3
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4238
- Accuracy: 0.9070
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0397 | 1.0 | 217 | 0.4585 | 0.8605 |
| 0.0001 | 2.0 | 434 | 1.0180 | 0.8837 |
| 0.0 | 3.0 | 651 | 0.9542 | 0.9070 |
| 0.0 | 4.0 | 868 | 1.0472 | 0.9070 |
| 0.011 | 5.0 | 1085 | 0.8152 | 0.8837 |
| 0.0 | 6.0 | 1302 | 0.8047 | 0.9070 |
| 0.0001 | 7.0 | 1519 | 1.1339 | 0.8837 |
| 0.0 | 8.0 | 1736 | 0.6894 | 0.9070 |
| 0.0 | 9.0 | 1953 | 0.9352 | 0.8837 |
| 0.0015 | 10.0 | 2170 | 0.8497 | 0.8372 |
| 0.0 | 11.0 | 2387 | 0.8859 | 0.8837 |
| 0.0 | 12.0 | 2604 | 1.0189 | 0.8837 |
| 0.001 | 13.0 | 2821 | 0.9729 | 0.8605 |
| 0.0 | 14.0 | 3038 | 0.9152 | 0.8837 |
| 0.0 | 15.0 | 3255 | 0.8697 | 0.8605 |
| 0.0 | 16.0 | 3472 | 0.9016 | 0.8605 |
| 0.0 | 17.0 | 3689 | 0.8964 | 0.8837 |
| 0.0 | 18.0 | 3906 | 1.0277 | 0.8837 |
| 0.0 | 19.0 | 4123 | 0.8584 | 0.8837 |
| 0.0 | 20.0 | 4340 | 0.8132 | 0.9070 |
| 0.0 | 21.0 | 4557 | 0.8453 | 0.9070 |
| 0.0 | 22.0 | 4774 | 0.8777 | 0.9070 |
| 0.0 | 23.0 | 4991 | 0.8912 | 0.9070 |
| 0.0 | 24.0 | 5208 | 0.9167 | 0.8837 |
| 0.0 | 25.0 | 5425 | 0.9234 | 0.8837 |
| 0.0 | 26.0 | 5642 | 0.9407 | 0.8837 |
| 0.0 | 27.0 | 5859 | 1.0058 | 0.9070 |
| 0.0 | 28.0 | 6076 | 1.1055 | 0.8837 |
| 0.0 | 29.0 | 6293 | 1.1155 | 0.8837 |
| 0.0 | 30.0 | 6510 | 1.1212 | 0.8837 |
| 0.0 | 31.0 | 6727 | 1.4063 | 0.9070 |
| 0.0 | 32.0 | 6944 | 1.3993 | 0.9070 |
| 0.0 | 33.0 | 7161 | 1.4033 | 0.9070 |
| 0.0 | 34.0 | 7378 | 1.4032 | 0.9070 |
| 0.0 | 35.0 | 7595 | 1.4070 | 0.9070 |
| 0.0 | 36.0 | 7812 | 1.4100 | 0.9070 |
| 0.0 | 37.0 | 8029 | 1.4111 | 0.9070 |
| 0.0 | 38.0 | 8246 | 1.4234 | 0.9070 |
| 0.0 | 39.0 | 8463 | 1.4283 | 0.8837 |
| 0.0 | 40.0 | 8680 | 1.4259 | 0.8837 |
| 0.0 | 41.0 | 8897 | 1.4283 | 0.8837 |
| 0.0 | 42.0 | 9114 | 1.4459 | 0.8837 |
| 0.0 | 43.0 | 9331 | 1.4466 | 0.8837 |
| 0.0 | 44.0 | 9548 | 1.4349 | 0.8837 |
| 0.0 | 45.0 | 9765 | 1.4277 | 0.8837 |
| 0.0 | 46.0 | 9982 | 1.4129 | 0.9070 |
| 0.0 | 47.0 | 10199 | 1.4175 | 0.9070 |
| 0.0 | 48.0 | 10416 | 1.4184 | 0.9070 |
| 0.0 | 49.0 | 10633 | 1.4243 | 0.9070 |
| 0.0 | 50.0 | 10850 | 1.4238 | 0.9070 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
tangfei/autotrain-sinm4-3x59p
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 2.1737219307624096e+36
f1_macro: 0.13333333333333333
f1_micro: 0.25
f1_weighted: 0.1
precision_macro: 0.08333333333333333
precision_micro: 0.25
precision_weighted: 0.0625
recall_macro: 0.3333333333333333
recall_micro: 0.25
recall_weighted: 0.25
accuracy: 0.25
|
[
"11covered_with_a_quilt_and_only_the_head_exposed",
"12covered_with_a_quilt_and_exposed_other_parts_of_the_body",
"13has_nothing_to_do_with_11_and_12_above"
] |
KumarGyanam/vit-base-patch16-224-finetuned-flower
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-flower
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 2.1.0+cu121
- Datasets 2.7.1
- Tokenizers 0.13.3
|
[
"daisy",
"dandelion",
"roses",
"sunflowers",
"tulips"
] |
hkivancoral/hushem_40x_beit_large_adamax_0001_fold4
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_beit_large_adamax_0001_fold4
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1076
- Accuracy: 0.9762
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0165 | 1.0 | 219 | 0.5362 | 0.8810 |
| 0.0002 | 2.0 | 438 | 0.2899 | 0.9048 |
| 0.002 | 3.0 | 657 | 0.2264 | 0.9286 |
| 0.0 | 4.0 | 876 | 0.0134 | 1.0 |
| 0.0 | 5.0 | 1095 | 0.0221 | 0.9762 |
| 0.0 | 6.0 | 1314 | 0.0312 | 0.9762 |
| 0.0 | 7.0 | 1533 | 0.0455 | 0.9762 |
| 0.0 | 8.0 | 1752 | 0.1418 | 0.9524 |
| 0.0 | 9.0 | 1971 | 0.1481 | 0.9762 |
| 0.0 | 10.0 | 2190 | 0.0104 | 1.0 |
| 0.0 | 11.0 | 2409 | 0.0643 | 0.9762 |
| 0.0 | 12.0 | 2628 | 0.0455 | 0.9762 |
| 0.0 | 13.0 | 2847 | 0.0444 | 0.9762 |
| 0.0 | 14.0 | 3066 | 0.0410 | 0.9762 |
| 0.0 | 15.0 | 3285 | 0.0550 | 0.9762 |
| 0.0 | 16.0 | 3504 | 0.0281 | 0.9762 |
| 0.0 | 17.0 | 3723 | 0.0303 | 0.9762 |
| 0.0 | 18.0 | 3942 | 0.0305 | 0.9762 |
| 0.0 | 19.0 | 4161 | 0.0952 | 0.9762 |
| 0.0 | 20.0 | 4380 | 0.0860 | 0.9762 |
| 0.0 | 21.0 | 4599 | 0.0315 | 0.9762 |
| 0.0 | 22.0 | 4818 | 0.0334 | 0.9762 |
| 0.0 | 23.0 | 5037 | 0.0409 | 0.9762 |
| 0.0004 | 24.0 | 5256 | 0.3332 | 0.9524 |
| 0.0 | 25.0 | 5475 | 0.1274 | 0.9762 |
| 0.0071 | 26.0 | 5694 | 0.1341 | 0.9762 |
| 0.0 | 27.0 | 5913 | 0.1590 | 0.9762 |
| 0.0 | 28.0 | 6132 | 0.1155 | 0.9762 |
| 0.0 | 29.0 | 6351 | 0.1162 | 0.9762 |
| 0.0 | 30.0 | 6570 | 0.1374 | 0.9762 |
| 0.0 | 31.0 | 6789 | 0.1350 | 0.9762 |
| 0.0 | 32.0 | 7008 | 0.1260 | 0.9762 |
| 0.0 | 33.0 | 7227 | 0.1236 | 0.9762 |
| 0.0 | 34.0 | 7446 | 0.1361 | 0.9762 |
| 0.0 | 35.0 | 7665 | 0.1318 | 0.9762 |
| 0.0 | 36.0 | 7884 | 0.1308 | 0.9762 |
| 0.0 | 37.0 | 8103 | 0.1168 | 0.9762 |
| 0.0 | 38.0 | 8322 | 0.1190 | 0.9762 |
| 0.0 | 39.0 | 8541 | 0.0898 | 0.9762 |
| 0.0 | 40.0 | 8760 | 0.0926 | 0.9762 |
| 0.0 | 41.0 | 8979 | 0.0919 | 0.9762 |
| 0.0 | 42.0 | 9198 | 0.0987 | 0.9762 |
| 0.0 | 43.0 | 9417 | 0.0991 | 0.9762 |
| 0.0 | 44.0 | 9636 | 0.1047 | 0.9762 |
| 0.0 | 45.0 | 9855 | 0.1049 | 0.9762 |
| 0.0 | 46.0 | 10074 | 0.1056 | 0.9762 |
| 0.0 | 47.0 | 10293 | 0.1068 | 0.9762 |
| 0.0 | 48.0 | 10512 | 0.1039 | 0.9762 |
| 0.0 | 49.0 | 10731 | 0.1062 | 0.9762 |
| 0.0 | 50.0 | 10950 | 0.1076 | 0.9762 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |
hkivancoral/hushem_40x_beit_large_adamax_0001_fold5
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_beit_large_adamax_0001_fold5
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9014
- Accuracy: 0.8537
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
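A quick sanity check on these numbers (values read from the results table below; the dataset-size bound assumes a possibly partial final batch, which is an assumption, not stated in the card):

```python
# Illustrative arithmetic check, not code from the training run.
steps_per_epoch = 220          # steps logged after epoch 1 in the results table
num_epochs = 50
total_steps = steps_per_epoch * num_epochs   # matches the final step, 11000

# With train_batch_size=32 and 220 steps per epoch, the training split holds
# roughly between 32 * 219 + 1 and 32 * 220 images (7009 to 7040).
min_images = 32 * 219 + 1
max_images = 32 * 220
```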
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0137 | 1.0 | 220 | 0.6252 | 0.8049 |
| 0.0039 | 2.0 | 440 | 0.3651 | 0.9268 |
| 0.0 | 3.0 | 660 | 0.2079 | 0.9512 |
| 0.0 | 4.0 | 880 | 0.2782 | 0.8780 |
| 0.0015 | 5.0 | 1100 | 0.3966 | 0.8780 |
| 0.0006 | 6.0 | 1320 | 0.9179 | 0.8049 |
| 0.0 | 7.0 | 1540 | 0.6543 | 0.8780 |
| 0.0 | 8.0 | 1760 | 0.6721 | 0.8537 |
| 0.0 | 9.0 | 1980 | 0.6667 | 0.8537 |
| 0.0 | 10.0 | 2200 | 0.6892 | 0.8293 |
| 0.0 | 11.0 | 2420 | 0.6788 | 0.8293 |
| 0.0187 | 12.0 | 2640 | 0.6872 | 0.8537 |
| 0.0 | 13.0 | 2860 | 1.1812 | 0.8049 |
| 0.0 | 14.0 | 3080 | 0.6787 | 0.8537 |
| 0.0 | 15.0 | 3300 | 0.7294 | 0.8293 |
| 0.0 | 16.0 | 3520 | 1.0136 | 0.8293 |
| 0.0 | 17.0 | 3740 | 0.9479 | 0.8293 |
| 0.0 | 18.0 | 3960 | 0.9308 | 0.8293 |
| 0.0 | 19.0 | 4180 | 0.8944 | 0.8293 |
| 0.0 | 20.0 | 4400 | 0.8979 | 0.8293 |
| 0.0 | 21.0 | 4620 | 0.8942 | 0.8293 |
| 0.0 | 22.0 | 4840 | 0.9123 | 0.8293 |
| 0.0 | 23.0 | 5060 | 0.7263 | 0.8537 |
| 0.0 | 24.0 | 5280 | 0.7426 | 0.8537 |
| 0.0 | 25.0 | 5500 | 0.7599 | 0.8293 |
| 0.0 | 26.0 | 5720 | 0.7693 | 0.8293 |
| 0.0 | 27.0 | 5940 | 0.8044 | 0.8293 |
| 0.0 | 28.0 | 6160 | 0.8028 | 0.8293 |
| 0.0 | 29.0 | 6380 | 0.6542 | 0.8293 |
| 0.0 | 30.0 | 6600 | 0.6934 | 0.8293 |
| 0.0 | 31.0 | 6820 | 0.6814 | 0.8293 |
| 0.0 | 32.0 | 7040 | 0.6666 | 0.8537 |
| 0.0 | 33.0 | 7260 | 0.7695 | 0.8293 |
| 0.0 | 34.0 | 7480 | 1.0033 | 0.8293 |
| 0.0 | 35.0 | 7700 | 0.9558 | 0.8537 |
| 0.0 | 36.0 | 7920 | 0.8444 | 0.8537 |
| 0.0 | 37.0 | 8140 | 0.9196 | 0.8537 |
| 0.0 | 38.0 | 8360 | 0.8784 | 0.8537 |
| 0.0 | 39.0 | 8580 | 0.8306 | 0.8780 |
| 0.0 | 40.0 | 8800 | 0.9373 | 0.8537 |
| 0.0 | 41.0 | 9020 | 0.9235 | 0.8537 |
| 0.0 | 42.0 | 9240 | 0.9473 | 0.8537 |
| 0.0 | 43.0 | 9460 | 0.9424 | 0.8537 |
| 0.0 | 44.0 | 9680 | 0.9102 | 0.8537 |
| 0.0 | 45.0 | 9900 | 0.9576 | 0.8537 |
| 0.0 | 46.0 | 10120 | 0.9639 | 0.8537 |
| 0.0 | 47.0 | 10340 | 0.9689 | 0.8537 |
| 0.0 | 48.0 | 10560 | 0.8859 | 0.8537 |
| 0.0 | 49.0 | 10780 | 0.9011 | 0.8537 |
| 0.0 | 50.0 | 11000 | 0.9014 | 0.8537 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
[
"01_normal",
"02_tapered",
"03_pyriform",
"04_amorphous"
] |