model_id (string, length 7–105)
model_card (string, length 1–130k)
model_labels (list, length 2–80k)
dima806/headgear_image_detection
Returns headgear type given an image. See https://www.kaggle.com/code/dima806/headgear-image-detection-vit for more details. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6449300e3adf50d864095b90/Emppcn7eVbOL7r2YShjNX.png)

```
Classification report:
                 precision    recall  f1-score   support

          BERET     1.0000    0.9565    0.9778       115
         FEDORA     0.9913    1.0000    0.9956       114
        SOMBERO     1.0000    1.0000    1.0000       115
       HARD HAT     1.0000    1.0000    1.0000       115
            FEZ     1.0000    0.9912    0.9956       114
      ZUCCHETTO     1.0000    0.9912    0.9956       114
        TOP HAT     1.0000    1.0000    1.0000       115
    DEERSTALKER     0.9913    1.0000    0.9956       114
      ASCOT CAP     0.9500    1.0000    0.9744       114
       PORK PIE     0.9739    0.9825    0.9782       114
MILITARY HELMET     1.0000    1.0000    1.0000       115
        BICORNE     1.0000    0.9912    0.9956       114
FOOTBALL HELMET     1.0000    1.0000    1.0000       115
     MOTARBOARD     0.9913    1.0000    0.9956       114
         BOATER     1.0000    1.0000    1.0000       115
    PITH HELMET     0.9913    1.0000    0.9956       114
    SOUTHWESTER     1.0000    0.9912    0.9956       114
         BOWLER     0.9912    0.9825    0.9868       114
   GARRISON CAP     1.0000    0.9912    0.9956       114
   BASEBALL CAP     1.0000    1.0000    1.0000       115

       accuracy                         0.9939      2288
      macro avg     0.9940    0.9939    0.9939      2288
   weighted avg     0.9940    0.9939    0.9939      2288
```
[ "beret", "fedora", "sombero", "hard hat", "fez", "zucchetto", "top hat", "deerstalker", "ascot cap", "pork pie", "military helmet", "bicorne", "football helmet", "motarboard", "boater", "pith helmet", "southwester", "bowler", "garrison cap", "baseball cap" ]
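The macro averages in the classification report above are unweighted means of the per-class scores. A minimal sketch of that arithmetic (pure Python; the precision values are transcribed from the report):

```python
# Per-class precision values from the headgear classification report,
# in the same order as the report rows.
precisions = [
    1.0000, 0.9913, 1.0000, 1.0000, 1.0000,  # BERET .. FEZ
    1.0000, 1.0000, 0.9913, 0.9500, 0.9739,  # ZUCCHETTO .. PORK PIE
    1.0000, 1.0000, 1.0000, 0.9913, 1.0000,  # MILITARY HELMET .. BOATER
    0.9913, 1.0000, 0.9912, 1.0000, 1.0000,  # PITH HELMET .. BASEBALL CAP
]

# Macro average: the unweighted mean over classes, regardless of support.
macro_precision = sum(precisions) / len(precisions)
print(f"{macro_precision:.4f}")  # 0.9940, matching the report's macro avg precision
```

The weighted average would instead weight each class's score by its support; here the supports are nearly equal (114 or 115), so the two averages coincide to four decimals.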
galbitang/autotrain-jin0_sofa-94923146231
# Model Trained Using AutoTrain

- Problem type: Multi-class Classification
- Model ID: 94923146231
- CO2 Emissions (in grams): 0.0667

## Validation Metrics

- Loss: 0.897
- Accuracy: 0.693
- Macro F1: 0.633
- Micro F1: 0.693
- Weighted F1: 0.686
- Macro Precision: 0.663
- Micro Precision: 0.693
- Weighted Precision: 0.693
- Macro Recall: 0.628
- Micro Recall: 0.693
- Weighted Recall: 0.693
[ "classicantique", "frenchprovence", "vintageretro", "industrial", "koreaaisa", "lovelyromantic", "minimalsimple", "modern", "natural", "notherneurope", "unique" ]
galbitang/autotrain-jeongmi_bedframe-94918146232
# Model Trained Using AutoTrain

- Problem type: Multi-class Classification
- Model ID: 94918146232
- CO2 Emissions (in grams): 0.0779

## Validation Metrics

- Loss: 0.544
- Accuracy: 0.824
- Macro F1: 0.829
- Micro F1: 0.824
- Weighted F1: 0.819
- Macro Precision: 0.859
- Micro Precision: 0.824
- Weighted Precision: 0.835
- Macro Recall: 0.816
- Micro Recall: 0.824
- Weighted Recall: 0.824
[ "classicantique", "frenchprovence", "vintageretro", "industrial", "koreaaisa", "lovelyromantic", "minimalsimple", "modern", "natural", "notherneurope", "unique" ]
galbitang/autotrain-lamp_train_dataset-94947146236
# Model Trained Using AutoTrain

- Problem type: Multi-class Classification
- Model ID: 94947146236
- CO2 Emissions (in grams): 0.0384

## Validation Metrics

- Loss: 1.766
- Accuracy: 0.450
- Macro F1: 0.318
- Micro F1: 0.450
- Weighted F1: 0.392
- Macro Precision: 0.321
- Micro Precision: 0.450
- Weighted Precision: 0.373
- Macro Recall: 0.355
- Micro Recall: 0.450
- Weighted Recall: 0.450
[ "classicantique", "frenchprovence", "vintageretro", "industrial", "koreaaisa", "lovelyromantic", "minimalsimple", "modern", "natural", "notherneurope", "unique" ]
dima806/wild_cats_image_detection
Returns wild cat species given an image. See https://www.kaggle.com/code/dima806/wild-cats-image-detection-vit for more details. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6449300e3adf50d864095b90/OLnWwhtPz-WG4sybUm4BQ.png)

```
Classification report:
                 precision    recall  f1-score   support

          LIONS     1.0000    1.0000    1.0000        99
        CARACAL     1.0000    1.0000    1.0000        99
AFRICAN LEOPARD     0.9897    0.9697    0.9796        99
        CHEETAH     0.9899    0.9899    0.9899        99
   SNOW LEOPARD     0.9900    0.9900    0.9900       100
          TIGER     1.0000    1.0000    1.0000        99
         OCELOT     0.9899    0.9899    0.9899        99
         JAGUAR     0.9802    1.0000    0.9900        99
           PUMA     1.0000    1.0000    1.0000       100
CLOUDED LEOPARD     0.9899    0.9899    0.9899        99

       accuracy                         0.9929       992
      macro avg     0.9930    0.9929    0.9929       992
   weighted avg     0.9930    0.9929    0.9929       992
```
[ "lions", "caracal", "african leopard", "cheetah", "snow leopard", "tiger", "ocelot", "jaguar", "puma", "clouded leopard" ]
zkdeng/convnextv2-tiny-22k-384-finetuned-Spiders
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# convnextv2-tiny-22k-384-finetuned-Spiders

This model is a fine-tuned version of [facebook/convnextv2-tiny-22k-384](https://huggingface.co/facebook/convnextv2-tiny-22k-384) on an unknown dataset. It achieves the following results on the evaluation set:
- eval_loss: 0.1945
- eval_accuracy: 0.915
- eval_precision: 0.8899
- eval_recall: 0.9510
- eval_f1: 0.9194
- eval_runtime: 9.0512
- eval_samples_per_second: 22.097
- eval_steps_per_second: 1.436
- step: 0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2

### Framework versions

- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
[ "lactrodectus_hesperus", "parasteatoda_tepidariorum" ]
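The hyperparameters and eval statistics in the card above are internally consistent: the effective batch size is the per-device batch size times the gradient-accumulation steps, and runtime times throughput recovers the evaluation-set size. A quick check (values taken from the card):

```python
# Effective (total) train batch size under gradient accumulation.
train_batch_size = 16
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 64, as listed in the card

# eval_runtime * eval_samples_per_second approximates the eval set size.
eval_runtime = 9.0512            # seconds
samples_per_second = 22.097
print(round(eval_runtime * samples_per_second))  # ~200 evaluation samples
```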
galbitang/autotrain-jinvit_sofa_base-94978146242
# Model Trained Using AutoTrain

- Problem type: Multi-class Classification
- Model ID: 94978146242
- CO2 Emissions (in grams): 0.0588

## Validation Metrics

- Loss: 0.725
- Accuracy: 0.750
- Macro F1: 0.678
- Micro F1: 0.750
- Weighted F1: 0.737
- Macro Precision: 0.746
- Micro Precision: 0.750
- Weighted Precision: 0.752
- Macro Recall: 0.654
- Micro Recall: 0.750
- Weighted Recall: 0.750
[ "classicantique", "frenchprovence", "industrial", "koreaaisa", "lovelyromantic", "modern", "natural", "simple", "unique", "vintageretro" ]
Leeyuyu/swin-tiny-patch4-window7-224-finetunedo
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# swin-tiny-patch4-window7-224-finetunedo

This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set:
- Loss: 0.3710
- Roc Auc: 0.8606

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 8

### Training results

| Training Loss | Epoch | Step | Validation Loss | Roc Auc |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log        | 1.0   | 3    | 0.5066          | 0.7647  |
| No log        | 2.0   | 6    | 0.4204          | 0.7941  |
| No log        | 3.0   | 9    | 0.4298          | 0.7353  |
| 0.4868        | 4.0   | 12   | 0.4040          | 0.8018  |
| 0.4868        | 5.0   | 15   | 0.3925          | 0.7724  |
| 0.4868        | 6.0   | 18   | 0.3674          | 0.8235  |
| 0.4096        | 7.0   | 21   | 0.3673          | 0.8606  |
| 0.4096        | 8.0   | 24   | 0.3710          | 0.8606  |

### Framework versions

- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
[ "m", "nonm" ]
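ROC AUC, the metric reported for the card above, equals the probability that a randomly chosen positive example is scored higher than a randomly chosen negative one. A minimal sketch of computing it from predicted scores (pure Python; the labels and scores below are made-up illustrations, not this model's outputs):

```python
def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs ranked correctly, counting ties as half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return wins / (len(pos) * len(neg))

# Hypothetical scores for a binary task like "m" vs "nonm":
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.2]
print(roc_auc(labels, scores))  # 8/9 ~ 0.8889
```

This pairwise definition is what makes ROC AUC threshold-free, unlike accuracy or F1.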
FoamoftheSea/pvt_v2_b0
# PVTv2

This is the Hugging Face PyTorch implementation of the [PVTv2](https://arxiv.org/abs/2106.13797) model.

## Model Description

The Pyramid Vision Transformer v2 (PVTv2) is a powerful, lightweight hierarchical transformer backbone for vision tasks. PVTv2 integrates convolution operations into its transformer layers, giving the model CNN-like properties that help it learn image data efficiently. This mixed transformer architecture requires no added positional embeddings and produces multi-scale feature maps, which are known to be beneficial for dense and fine-grained prediction tasks.

Vision models using PVTv2 as a backbone:

1. [Segformer](https://arxiv.org/abs/2105.15203) for Semantic Segmentation.
2. [GLPN](https://arxiv.org/abs/2201.07436) for Monocular Depth.
3. [Deformable DETR](https://arxiv.org/abs/2010.04159) for 2D Object Detection.
4. [Panoptic Segformer](https://arxiv.org/abs/2109.03814) for Panoptic Segmentation.
[ "label_0", "label_1", "label_2", "label_3", "label_4", "label_5", "label_6", "label_7", "label_8", "label_9", "label_10", "label_11", "label_12", "label_13", "label_14", "label_15", "label_16", "label_17", "label_18", "label_19", "label_20", "label_21", "label_22", "label_23", "label_24", "label_25", "label_26", "label_27", "label_28", "label_29", "label_30", "label_31", "label_32", "label_33", "label_34", "label_35", "label_36", "label_37", "label_38", "label_39", "label_40", "label_41", "label_42", "label_43", "label_44", "label_45", "label_46", "label_47", "label_48", "label_49", "label_50", "label_51", "label_52", "label_53", "label_54", "label_55", "label_56", "label_57", "label_58", "label_59", "label_60", "label_61", "label_62", "label_63", "label_64", "label_65", "label_66", "label_67", "label_68", "label_69", "label_70", "label_71", "label_72", "label_73", "label_74", "label_75", "label_76", "label_77", "label_78", "label_79", "label_80", "label_81", "label_82", "label_83", "label_84", "label_85", "label_86", "label_87", "label_88", "label_89", "label_90", "label_91", "label_92", "label_93", "label_94", "label_95", "label_96", "label_97", "label_98", "label_99", "label_100", "label_101", "label_102", "label_103", "label_104", "label_105", "label_106", "label_107", "label_108", "label_109", "label_110", "label_111", "label_112", "label_113", "label_114", "label_115", "label_116", "label_117", "label_118", "label_119", "label_120", "label_121", "label_122", "label_123", "label_124", "label_125", "label_126", "label_127", "label_128", "label_129", "label_130", "label_131", "label_132", "label_133", "label_134", "label_135", "label_136", "label_137", "label_138", "label_139", "label_140", "label_141", "label_142", "label_143", "label_144", "label_145", "label_146", "label_147", "label_148", "label_149", "label_150", "label_151", "label_152", "label_153", "label_154", "label_155", "label_156", "label_157", "label_158", "label_159", "label_160", "label_161", 
"label_162", "label_163", "label_164", "label_165", "label_166", "label_167", "label_168", "label_169", "label_170", "label_171", "label_172", "label_173", "label_174", "label_175", "label_176", "label_177", "label_178", "label_179", "label_180", "label_181", "label_182", "label_183", "label_184", "label_185", "label_186", "label_187", "label_188", "label_189", "label_190", "label_191", "label_192", "label_193", "label_194", "label_195", "label_196", "label_197", "label_198", "label_199", "label_200", "label_201", "label_202", "label_203", "label_204", "label_205", "label_206", "label_207", "label_208", "label_209", "label_210", "label_211", "label_212", "label_213", "label_214", "label_215", "label_216", "label_217", "label_218", "label_219", "label_220", "label_221", "label_222", "label_223", "label_224", "label_225", "label_226", "label_227", "label_228", "label_229", "label_230", "label_231", "label_232", "label_233", "label_234", "label_235", "label_236", "label_237", "label_238", "label_239", "label_240", "label_241", "label_242", "label_243", "label_244", "label_245", "label_246", "label_247", "label_248", "label_249", "label_250", "label_251", "label_252", "label_253", "label_254", "label_255", "label_256", "label_257", "label_258", "label_259", "label_260", "label_261", "label_262", "label_263", "label_264", "label_265", "label_266", "label_267", "label_268", "label_269", "label_270", "label_271", "label_272", "label_273", "label_274", "label_275", "label_276", "label_277", "label_278", "label_279", "label_280", "label_281", "label_282", "label_283", "label_284", "label_285", "label_286", "label_287", "label_288", "label_289", "label_290", "label_291", "label_292", "label_293", "label_294", "label_295", "label_296", "label_297", "label_298", "label_299", "label_300", "label_301", "label_302", "label_303", "label_304", "label_305", "label_306", "label_307", "label_308", "label_309", "label_310", "label_311", "label_312", "label_313", "label_314", 
"label_315", "label_316", "label_317", "label_318", "label_319", "label_320", "label_321", "label_322", "label_323", "label_324", "label_325", "label_326", "label_327", "label_328", "label_329", "label_330", "label_331", "label_332", "label_333", "label_334", "label_335", "label_336", "label_337", "label_338", "label_339", "label_340", "label_341", "label_342", "label_343", "label_344", "label_345", "label_346", "label_347", "label_348", "label_349", "label_350", "label_351", "label_352", "label_353", "label_354", "label_355", "label_356", "label_357", "label_358", "label_359", "label_360", "label_361", "label_362", "label_363", "label_364", "label_365", "label_366", "label_367", "label_368", "label_369", "label_370", "label_371", "label_372", "label_373", "label_374", "label_375", "label_376", "label_377", "label_378", "label_379", "label_380", "label_381", "label_382", "label_383", "label_384", "label_385", "label_386", "label_387", "label_388", "label_389", "label_390", "label_391", "label_392", "label_393", "label_394", "label_395", "label_396", "label_397", "label_398", "label_399", "label_400", "label_401", "label_402", "label_403", "label_404", "label_405", "label_406", "label_407", "label_408", "label_409", "label_410", "label_411", "label_412", "label_413", "label_414", "label_415", "label_416", "label_417", "label_418", "label_419", "label_420", "label_421", "label_422", "label_423", "label_424", "label_425", "label_426", "label_427", "label_428", "label_429", "label_430", "label_431", "label_432", "label_433", "label_434", "label_435", "label_436", "label_437", "label_438", "label_439", "label_440", "label_441", "label_442", "label_443", "label_444", "label_445", "label_446", "label_447", "label_448", "label_449", "label_450", "label_451", "label_452", "label_453", "label_454", "label_455", "label_456", "label_457", "label_458", "label_459", "label_460", "label_461", "label_462", "label_463", "label_464", "label_465", "label_466", "label_467", 
"label_468", "label_469", "label_470", "label_471", "label_472", "label_473", "label_474", "label_475", "label_476", "label_477", "label_478", "label_479", "label_480", "label_481", "label_482", "label_483", "label_484", "label_485", "label_486", "label_487", "label_488", "label_489", "label_490", "label_491", "label_492", "label_493", "label_494", "label_495", "label_496", "label_497", "label_498", "label_499", "label_500", "label_501", "label_502", "label_503", "label_504", "label_505", "label_506", "label_507", "label_508", "label_509", "label_510", "label_511", "label_512", "label_513", "label_514", "label_515", "label_516", "label_517", "label_518", "label_519", "label_520", "label_521", "label_522", "label_523", "label_524", "label_525", "label_526", "label_527", "label_528", "label_529", "label_530", "label_531", "label_532", "label_533", "label_534", "label_535", "label_536", "label_537", "label_538", "label_539", "label_540", "label_541", "label_542", "label_543", "label_544", "label_545", "label_546", "label_547", "label_548", "label_549", "label_550", "label_551", "label_552", "label_553", "label_554", "label_555", "label_556", "label_557", "label_558", "label_559", "label_560", "label_561", "label_562", "label_563", "label_564", "label_565", "label_566", "label_567", "label_568", "label_569", "label_570", "label_571", "label_572", "label_573", "label_574", "label_575", "label_576", "label_577", "label_578", "label_579", "label_580", "label_581", "label_582", "label_583", "label_584", "label_585", "label_586", "label_587", "label_588", "label_589", "label_590", "label_591", "label_592", "label_593", "label_594", "label_595", "label_596", "label_597", "label_598", "label_599", "label_600", "label_601", "label_602", "label_603", "label_604", "label_605", "label_606", "label_607", "label_608", "label_609", "label_610", "label_611", "label_612", "label_613", "label_614", "label_615", "label_616", "label_617", "label_618", "label_619", "label_620", 
"label_621", "label_622", "label_623", "label_624", "label_625", "label_626", "label_627", "label_628", "label_629", "label_630", "label_631", "label_632", "label_633", "label_634", "label_635", "label_636", "label_637", "label_638", "label_639", "label_640", "label_641", "label_642", "label_643", "label_644", "label_645", "label_646", "label_647", "label_648", "label_649", "label_650", "label_651", "label_652", "label_653", "label_654", "label_655", "label_656", "label_657", "label_658", "label_659", "label_660", "label_661", "label_662", "label_663", "label_664", "label_665", "label_666", "label_667", "label_668", "label_669", "label_670", "label_671", "label_672", "label_673", "label_674", "label_675", "label_676", "label_677", "label_678", "label_679", "label_680", "label_681", "label_682", "label_683", "label_684", "label_685", "label_686", "label_687", "label_688", "label_689", "label_690", "label_691", "label_692", "label_693", "label_694", "label_695", "label_696", "label_697", "label_698", "label_699", "label_700", "label_701", "label_702", "label_703", "label_704", "label_705", "label_706", "label_707", "label_708", "label_709", "label_710", "label_711", "label_712", "label_713", "label_714", "label_715", "label_716", "label_717", "label_718", "label_719", "label_720", "label_721", "label_722", "label_723", "label_724", "label_725", "label_726", "label_727", "label_728", "label_729", "label_730", "label_731", "label_732", "label_733", "label_734", "label_735", "label_736", "label_737", "label_738", "label_739", "label_740", "label_741", "label_742", "label_743", "label_744", "label_745", "label_746", "label_747", "label_748", "label_749", "label_750", "label_751", "label_752", "label_753", "label_754", "label_755", "label_756", "label_757", "label_758", "label_759", "label_760", "label_761", "label_762", "label_763", "label_764", "label_765", "label_766", "label_767", "label_768", "label_769", "label_770", "label_771", "label_772", "label_773", 
"label_774", "label_775", "label_776", "label_777", "label_778", "label_779", "label_780", "label_781", "label_782", "label_783", "label_784", "label_785", "label_786", "label_787", "label_788", "label_789", "label_790", "label_791", "label_792", "label_793", "label_794", "label_795", "label_796", "label_797", "label_798", "label_799", "label_800", "label_801", "label_802", "label_803", "label_804", "label_805", "label_806", "label_807", "label_808", "label_809", "label_810", "label_811", "label_812", "label_813", "label_814", "label_815", "label_816", "label_817", "label_818", "label_819", "label_820", "label_821", "label_822", "label_823", "label_824", "label_825", "label_826", "label_827", "label_828", "label_829", "label_830", "label_831", "label_832", "label_833", "label_834", "label_835", "label_836", "label_837", "label_838", "label_839", "label_840", "label_841", "label_842", "label_843", "label_844", "label_845", "label_846", "label_847", "label_848", "label_849", "label_850", "label_851", "label_852", "label_853", "label_854", "label_855", "label_856", "label_857", "label_858", "label_859", "label_860", "label_861", "label_862", "label_863", "label_864", "label_865", "label_866", "label_867", "label_868", "label_869", "label_870", "label_871", "label_872", "label_873", "label_874", "label_875", "label_876", "label_877", "label_878", "label_879", "label_880", "label_881", "label_882", "label_883", "label_884", "label_885", "label_886", "label_887", "label_888", "label_889", "label_890", "label_891", "label_892", "label_893", "label_894", "label_895", "label_896", "label_897", "label_898", "label_899", "label_900", "label_901", "label_902", "label_903", "label_904", "label_905", "label_906", "label_907", "label_908", "label_909", "label_910", "label_911", "label_912", "label_913", "label_914", "label_915", "label_916", "label_917", "label_918", "label_919", "label_920", "label_921", "label_922", "label_923", "label_924", "label_925", "label_926", 
"label_927", "label_928", "label_929", "label_930", "label_931", "label_932", "label_933", "label_934", "label_935", "label_936", "label_937", "label_938", "label_939", "label_940", "label_941", "label_942", "label_943", "label_944", "label_945", "label_946", "label_947", "label_948", "label_949", "label_950", "label_951", "label_952", "label_953", "label_954", "label_955", "label_956", "label_957", "label_958", "label_959", "label_960", "label_961", "label_962", "label_963", "label_964", "label_965", "label_966", "label_967", "label_968", "label_969", "label_970", "label_971", "label_972", "label_973", "label_974", "label_975", "label_976", "label_977", "label_978", "label_979", "label_980", "label_981", "label_982", "label_983", "label_984", "label_985", "label_986", "label_987", "label_988", "label_989", "label_990", "label_991", "label_992", "label_993", "label_994", "label_995", "label_996", "label_997", "label_998", "label_999" ]
hilmansw/resnet18-food-classifier
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Model description

This model is a fine-tuned version of [microsoft/resnet-18](https://huggingface.co/microsoft/resnet-18) on a [custom](https://www.kaggle.com/datasets/faldoae/padangfood) dataset: the "Padang Cuisine (Indonesian Food Image Classification)" dataset obtained from Kaggle. The model was built with the PyTorch framework by fine-tuning a pre-trained ResNet-18 on this dataset.

## Training results

| Epoch | Accuracy |
|:-----:|:--------:|
| 1.0   | 0.6030   |
| 2.0   | 0.8342   |
| 3.0   | 0.8442   |
| 4.0   | 0.8191   |
| 5.0   | 0.8693   |
| 6.0   | 0.8643   |
| 7.0   | 0.8744   |
| 8.0   | 0.8643   |
| 9.0   | 0.8744   |
| 10.0  | 0.8744   |
| 11.0  | 0.8794   |
| 12.0  | 0.8744   |
| 13.0  | 0.8894   |
| 14.0  | 0.8794   |
| 15.0  | 0.8945   |

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- loss_function = CrossEntropyLoss
- optimizer = AdamW
- learning_rate: 0.00001
- batch_size: 16
- num_epochs: 15

### Framework versions

- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
[ "ayam_goreng", "ayam_pop", "daging_rendang", "dendeng_batokok", "gulai_ikan", "gulai_tambusu", "gulai_tunjang", "telur_balado", "telur_dadar" ]
dima806/deepfake_vs_real_image_detection
Checks whether an image is real or fake (AI-generated).

**Note to users who want to use this model in production**

Beware that this model was trained on a dataset collected about three years ago. Since then, common AI tools have made remarkable progress in generating deepfake images, resulting in significant concept drift. To mitigate this, I urge you to retrain the model on the latest available labeled data. As a quick fix, simply reducing the threshold for labelling an image as fake (say, from the default 0.5 to 0.1 or even 0.01) may suffice. However, you do so at your own risk, and retraining the model is the better way to handle the concept drift.

See https://www.kaggle.com/code/dima806/deepfake-vs-real-faces-detection-vit for more details.

```
Classification report:
              precision    recall  f1-score   support

        Real     0.9921    0.9933    0.9927     38080
        Fake     0.9933    0.9921    0.9927     38081

    accuracy                         0.9927     76161
   macro avg     0.9927    0.9927    0.9927     76161
weighted avg     0.9927    0.9927    0.9927     76161
```
[ "real", "fake" ]
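The threshold adjustment suggested in the card above can be sketched as follows (pure Python; the probability value is a hypothetical model output used only for illustration):

```python
def label_image(fake_prob, threshold=0.5):
    """Label an image 'fake' when the model's fake-class probability
    reaches the threshold. Lowering the threshold catches more drifted
    deepfakes at the cost of more false alarms on real images."""
    return "fake" if fake_prob >= threshold else "real"

fake_prob = 0.2  # hypothetical fake-class probability from the classifier
print(label_image(fake_prob))                 # real, at the default 0.5
print(label_image(fake_prob, threshold=0.1))  # fake, at the stricter 0.1
```

This is the usual precision/recall trade-off: under concept drift, recall on new fakes degrades first, which is why the card suggests moving the operating point rather than trusting the default.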
galbitang/autotrain-sofa_1015-95167146296
# Model Trained Using AutoTrain

- Problem type: Multi-class Classification
- Model ID: 95167146296
- CO2 Emissions (in grams): 3.2484

## Validation Metrics

- Loss: 0.860
- Accuracy: 0.698
- Macro F1: 0.628
- Micro F1: 0.698
- Weighted F1: 0.694
- Macro Precision: 0.646
- Micro Precision: 0.698
- Weighted Precision: 0.699
- Macro Recall: 0.625
- Micro Recall: 0.698
- Weighted Recall: 0.698
[ "classicantique", "frenchprovence", "vintageretro", "industrial", "koreaaisa", "lovelyromantic", "minimalsimple", "modern", "natural", "notherneurope", "unique" ]
Akshay0706/Plant-Diseases-Classification-Training-Arguments
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Plant-Diseases-Classification-Training-Arguments

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 26   | 0.4907          | 0.9524   |

### Framework versions

- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
[ "0", "1" ]
galbitang/autotrain-ijeongmi_lamp_final-95169146297
# Model Trained Using AutoTrain

- Problem type: Multi-class Classification
- Model ID: 95169146297
- CO2 Emissions (in grams): 2.3970

## Validation Metrics

- Loss: 1.042
- Accuracy: 0.655
- Macro F1: 0.563
- Micro F1: 0.655
- Weighted F1: 0.646
- Macro Precision: 0.602
- Micro Precision: 0.655
- Weighted Precision: 0.652
- Macro Recall: 0.552
- Micro Recall: 0.655
- Weighted Recall: 0.655
[ "frenchprovence", "industrial", "koreaaisa", "lovelyromantic", "modern", "natural", "notherneurope", "unique", "vintageretro" ]
galbitang/autotrain-jeongmi_lamp_fffinal-95171146298
# Model Trained Using AutoTrain

- Problem type: Multi-class Classification
- Model ID: 95171146298
- CO2 Emissions (in grams): 2.2590

## Validation Metrics

- Loss: 1.174
- Accuracy: 0.632
- Macro F1: 0.485
- Micro F1: 0.632
- Weighted F1: 0.606
- Macro Precision: 0.617
- Micro Precision: 0.632
- Weighted Precision: 0.630
- Macro Recall: 0.482
- Micro Recall: 0.632
- Weighted Recall: 0.632
[ "classicantique", "frenchprovence", "vintageretro", "industrial", "koreaaisa", "lovelyromantic", "minimalsimple", "modern", "natural", "notherneurope", "unique" ]
galbitang/autotrain-table_1015-95170146299
# Model Trained Using AutoTrain

- Problem type: Multi-class Classification
- Model ID: 95170146299
- CO2 Emissions (in grams): 0.0626

## Validation Metrics

- Loss: 0.851
- Accuracy: 0.751
- Macro F1: 0.694
- Micro F1: 0.751
- Weighted F1: 0.744
- Macro Precision: 0.728
- Micro Precision: 0.751
- Weighted Precision: 0.747
- Macro Recall: 0.679
- Micro Recall: 0.751
- Weighted Recall: 0.751
[ "classicantique", "frenchprovence", "vintageretro", "industrial", "koreaaisa", "lovelyromantic", "minimalsimple", "modern", "natural", "notherneurope", "unique" ]
fahmindra/padang_cuisine_classification
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# padang_cuisine_classification

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set:
- Loss: 0.8549
- Accuracy: 0.9509

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1256        | 0.98  | 10   | 2.0189          | 0.6012   |
| 1.839         | 1.95  | 20   | 1.6867          | 0.8834   |
| 1.5149        | 2.93  | 30   | 1.3800          | 0.9080   |
| 1.2405        | 4.0   | 41   | 1.1324          | 0.9141   |
| 1.0359        | 4.98  | 51   | 0.9649          | 0.9387   |
| 0.874         | 5.95  | 61   | 0.8402          | 0.9448   |
| 0.766         | 6.93  | 71   | 0.7901          | 0.9387   |
| 0.7065        | 8.0   | 82   | 0.7175          | 0.9448   |
| 0.6558        | 8.98  | 92   | 0.7112          | 0.9387   |
| 0.6537        | 9.76  | 100  | 0.7114          | 0.9325   |

### Framework versions

- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
[ "ayam_goreng", "ayam_pop", "daging_rendang", "dendeng_batokok", "gulai_ikan", "gulai_tambusu", "gulai_tunjang", "telur_balado", "telur_dadar" ]
dima806/133_dog_breeds_image_detection
Returns dog breed given an image. See https://www.kaggle.com/code/dima806/133-dog-breed-image-detection-vit for more details. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6449300e3adf50d864095b90/9oA9yV4Rnd5p1zIQgcPbG.png)

```
Classification report:
(class, precision, recall, f1-score, support)
Norwich_terrier 0.8750 0.8974 0.8861 39
Bichon_frise 0.8125 1.0000 0.8966 39
Entlebucher_mountain_dog 0.8889 0.6316 0.7385 38
Briard 1.0000 1.0000 1.0000 39
Norwegian_elkhound 0.9487 0.9487 0.9487 39
Field_spaniel 0.6731 0.9211 0.7778 38
Gordon_setter 0.9500 1.0000 0.9744 38
Cocker_spaniel 0.8378 0.8158 0.8267 38
Irish_setter 1.0000 0.9231 0.9600 39
Wirehaired_pointing_griffon 0.7600 0.9744 0.8539 39
Giant_schnauzer 1.0000 0.9737 0.9867 38
Maltese 0.7755 1.0000 0.8736 38
English_springer_spaniel 0.8571 0.9474 0.9000 38
Bernese_mountain_dog 1.0000 0.9231 0.9600 39
Alaskan_malamute 1.0000 1.0000 1.0000 38
American_eskimo_dog 0.9500 1.0000 0.9744 38
Havanese 0.0000 0.0000 0.0000 38
Icelandic_sheepdog 0.9412 0.8421 0.8889 38
Manchester_terrier 0.8298 1.0000 0.9070 39
Dogue_de_bordeaux 0.9048 0.9744 0.9383 39
Cardigan_welsh_corgi 0.9231 0.6154 0.7385 39
Norfolk_terrier 0.9487 0.9487 0.9487 39
Canaan_dog 0.8800 0.5789 0.6984 38
Clumber_spaniel 0.9737 0.9737 0.9737 38
Black_russian_terrier 0.9286 1.0000 0.9630 39
German_shepherd_dog 0.8780 0.9474 0.9114 38
Affenpinscher 0.8837 0.9744 0.9268 39
Bearded_collie 0.9697 0.8421 0.9014 38
Chinese_shar-pei 0.9677 0.7692 0.8571 39
Labrador_retriever 0.9333 0.3684 0.5283 38
Irish_terrier 0.9714 0.8947 0.9315 38
Chinese_crested 1.0000 0.8421 0.9143 38
Anatolian_shepherd_dog 1.0000 0.8947 0.9444 38
Brittany 1.0000 0.8947 0.9444 38
Norwegian_buhund 0.8372 0.9474 0.8889 38
Miniature_schnauzer 0.9512 1.0000 0.9750 39
Xoloitzcuintli 0.9750 1.0000 0.9873 39
Dalmatian 0.8667 1.0000 0.9286 39
Greyhound 0.8750 0.9211 0.8974 38
Leonberger 1.0000 1.0000 1.0000 39
Ibizan_hound 1.0000 0.9487 0.9737 39
Bloodhound 1.0000 1.0000 1.0000 38
Bluetick_coonhound 1.0000 1.0000 1.0000 39
English_setter 1.0000 1.0000 1.0000 38
Neapolitan_mastiff 0.8864 1.0000 0.9398 39
Parson_russell_terrier 0.9167 0.8462 0.8800 39
Brussels_griffon 0.9714 0.8947 0.9315 38
Bulldog 0.9268 1.0000 0.9620 38
Bullmastiff 0.7857 0.5641 0.6567 39
Borzoi 1.0000 1.0000 1.0000 38
Poodle 1.0000 0.8421 0.9143 38
Kuvasz 0.8500 0.8947 0.8718 38
Plott 0.8810 0.9737 0.9250 38
Belgian_malinois 0.9722 0.9211 0.9459 38
Japanese_chin 0.9286 1.0000 0.9630 39
Smooth_fox_terrier 0.9024 0.9737 0.9367 38
Flat-coated_retriever 0.8298 1.0000 0.9070 39
Pointer 1.0000 0.6316 0.7742 38
Otterhound 0.9487 0.9737 0.9610 38
Pomeranian 0.9167 0.8684 0.8919 38
Lhasa_apso 0.8444 0.9744 0.9048 39
Bouvier_des_flandres 0.9737 0.9737 0.9737 38
Irish_water_spaniel 0.9730 0.9474 0.9600 38
Old_english_sheepdog 0.8837 0.9744 0.9268 39
Basset_hound 1.0000 0.9744 0.9870 39
American_water_spaniel 0.8571 0.9474 0.9000 38
Airedale_terrier 0.7308 1.0000 0.8444 38
Border_terrier 0.9730 0.9474 0.9600 38
Irish_wolfhound 1.0000 1.0000 1.0000 39
Yorkshire_terrier 0.7037 1.0000 0.8261 38
Papillon 0.9048 1.0000 0.9500 38
Dachshund 1.0000 0.7895 0.8824 38
Cavalier_king_charles_spaniel 0.8140 0.9211 0.8642 38
Tibetan_mastiff 1.0000 0.9487 0.9737 39
Pekingese 1.0000 0.9211 0.9589 38
German_wirehaired_pointer 1.0000 0.6316 0.7742 38
Doberman_pinscher 0.6102 0.9474 0.7423 38
Keeshond 1.0000 1.0000 1.0000 39
Dandie_dinmont_terrier 1.0000 0.9737 0.9867 38
American_staffordshire_terrier 0.8718 0.8947 0.8831 38
Cairn_terrier 1.0000 0.9744 0.9870 39
Portuguese_water_dog 0.9722 0.8974 0.9333 39
Golden_retriever 0.9000 0.9474 0.9231 38
Basenji 0.8125 1.0000 0.8966 39
Bedlington_terrier 1.0000 0.9737 0.9867 38
Newfoundland 0.9737 0.9737 0.9737 38
Boxer 0.8444 0.9744 0.9048 39
Pembroke_welsh_corgi 0.6923 0.9474 0.8000 38
German_pinscher 1.0000 0.3846 0.5556 39
Chesapeake_bay_retriever 1.0000 0.9474 0.9730 38
Chow_chow 1.0000 1.0000 1.0000 38
Collie 0.9500 1.0000 0.9744 38
Komondor 1.0000 1.0000 1.0000 38
Boston_terrier 1.0000 1.0000 1.0000 39
Glen_of_imaal_terrier 0.9231 0.9231 0.9231 39
Beauceron 0.9429 0.8462 0.8919 39
Belgian_sheepdog 1.0000 1.0000 1.0000 38
Bull_terrier 1.0000 0.9737 0.9867 38
German_shorthaired_pointer 0.7917 1.0000 0.8837 38
Silky_terrier 0.9545 0.5526 0.7000 38
Great_dane 0.9630 0.6667 0.7879 39
French_bulldog 1.0000 0.9474 0.9730 38
Welsh_springer_spaniel 0.7600 1.0000 0.8636 38
Curly-coated_retriever 0.8810 0.9487 0.9136 39
Cane_corso 0.8250 0.8462 0.8354 39
Italian_greyhound 0.8780 0.9231 0.9000 39
Australian_terrier 0.9487 0.9487 0.9487 39
Australian_shepherd 0.9722 0.9211 0.9459 38
Belgian_tervuren 0.9500 0.9744 0.9620 39
Lakeland_terrier 1.0000 0.5263 0.6897 38
Finnish_spitz 0.9000 0.9474 0.9231 38
English_toy_spaniel 0.9375 0.7895 0.8571 38
Boykin_spaniel 0.8750 0.5526 0.6774 38
Pharaoh_hound 0.9024 0.9737 0.9367 38
Afghan_hound 0.9250 0.9487 0.9367 39
American_foxhound 0.9355 0.7436 0.8286 39
Lowchen 0.5965 0.8718 0.7083 39
Mastiff 0.7500 0.9474 0.8372 38
Petit_basset_griffon_vendeen 0.9070 1.0000 0.9512 39
Kerry_blue_terrier 0.8478 1.0000 0.9176 39
Irish_red_and_white_setter 0.8919 0.8462 0.8684 39
Australian_cattle_dog 1.0000 0.9474 0.9730 38
Beagle 0.7551 0.9737 0.8506 38
Great_pyrenees 0.7805 0.8421 0.8101 38
Border_collie 0.9744 1.0000 0.9870 38
Saint_bernard 1.0000 1.0000 1.0000 38
Akita 0.8182 0.7105 0.7606 38
Norwegian_lundehund 0.8261 1.0000 0.9048 38
Nova_scotia_duck_tolling_retriever 0.9211 0.9211 0.9211 38
Greater_swiss_mountain_dog 0.6667 0.9231 0.7742 39
Chihuahua 1.0000 0.9487 0.9737 39
Black_and_tan_coonhound 0.8667 1.0000 0.9286 39
English_cocker_spaniel 0.8710 0.7105 0.7826 38

accuracy 0.9017 5108
macro avg 0.9061 0.9015 0.8955 5108
weighted avg 0.9061 0.9017 0.8957 5108
```
[ "norwich_terrier", "bichon_frise", "entlebucher_mountain_dog", "briard", "norwegian_elkhound", "field_spaniel", "gordon_setter", "cocker_spaniel", "irish_setter", "wirehaired_pointing_griffon", "giant_schnauzer", "maltese", "english_springer_spaniel", "bernese_mountain_dog", "alaskan_malamute", "american_eskimo_dog", "havanese", "icelandic_sheepdog", "manchester_terrier", "dogue_de_bordeaux", "cardigan_welsh_corgi", "norfolk_terrier", "canaan_dog", "clumber_spaniel", "black_russian_terrier", "german_shepherd_dog", "affenpinscher", "bearded_collie", "chinese_shar-pei", "labrador_retriever", "irish_terrier", "chinese_crested", "anatolian_shepherd_dog", "brittany", "norwegian_buhund", "miniature_schnauzer", "xoloitzcuintli", "dalmatian", "greyhound", "leonberger", "ibizan_hound", "bloodhound", "bluetick_coonhound", "english_setter", "neapolitan_mastiff", "parson_russell_terrier", "brussels_griffon", "bulldog", "bullmastiff", "borzoi", "poodle", "kuvasz", "plott", "belgian_malinois", "japanese_chin", "smooth_fox_terrier", "flat-coated_retriever", "pointer", "otterhound", "pomeranian", "lhasa_apso", "bouvier_des_flandres", "irish_water_spaniel", "old_english_sheepdog", "basset_hound", "american_water_spaniel", "airedale_terrier", "border_terrier", "irish_wolfhound", "yorkshire_terrier", "papillon", "dachshund", "cavalier_king_charles_spaniel", "tibetan_mastiff", "pekingese", "german_wirehaired_pointer", "doberman_pinscher", "keeshond", "dandie_dinmont_terrier", "american_staffordshire_terrier", "cairn_terrier", "portuguese_water_dog", "golden_retriever", "basenji", "bedlington_terrier", "newfoundland", "boxer", "pembroke_welsh_corgi", "german_pinscher", "chesapeake_bay_retriever", "chow_chow", "collie", "komondor", "boston_terrier", "glen_of_imaal_terrier", "beauceron", "belgian_sheepdog", "bull_terrier", "german_shorthaired_pointer", "silky_terrier", "great_dane", "french_bulldog", "welsh_springer_spaniel", "curly-coated_retriever", "cane_corso", "italian_greyhound", 
"australian_terrier", "australian_shepherd", "belgian_tervuren", "lakeland_terrier", "finnish_spitz", "english_toy_spaniel", "boykin_spaniel", "pharaoh_hound", "afghan_hound", "american_foxhound", "lowchen", "mastiff", "petit_basset_griffon_vendeen", "kerry_blue_terrier", "irish_red_and_white_setter", "australian_cattle_dog", "beagle", "great_pyrenees", "border_collie", "saint_bernard", "akita", "norwegian_lundehund", "nova_scotia_duck_tolling_retriever", "greater_swiss_mountain_dog", "chihuahua", "black_and_tan_coonhound", "english_cocker_spaniel" ]
galbitang/autotrain-lamp_1015-95249146314
# Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 95249146314 - CO2 Emissions (in grams): 0.0513 ## Validation Metrics - Loss: 1.035 - Accuracy: 0.660 - Macro F1: 0.478 - Micro F1: 0.660 - Weighted F1: 0.624 - Macro Precision: 0.525 - Micro Precision: 0.660 - Weighted Precision: 0.614 - Macro Recall: 0.490 - Micro Recall: 0.660 - Weighted Recall: 0.660
[ "classicantique", "frenchprovence", "vintageretro", "industrial", "koreaasia", "lovelyromantic", "minimalsimple", "modern", "natural", "notherneurope", "unique" ]
galbitang/autotrain-bed_frame_merge_vit-95266146325
# Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 95266146325 - CO2 Emissions (in grams): 0.0627 ## Validation Metrics - Loss: 0.409 - Accuracy: 0.872 - Macro F1: 0.868 - Micro F1: 0.872 - Weighted F1: 0.872 - Macro Precision: 0.879 - Micro Precision: 0.872 - Weighted Precision: 0.873 - Macro Recall: 0.860 - Micro Recall: 0.872 - Weighted Recall: 0.872
[ "classicantique", "frenchprovence", "industrial", "koreaasia", "lovelyromantic", "modern", "natural", "simple", "unique", "vintageretro" ]
galbitang/autotrain-chair_merge_vit-95268146326
# Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 95268146326 - CO2 Emissions (in grams): 2.8301 ## Validation Metrics - Loss: 0.607 - Accuracy: 0.814 - Macro F1: 0.674 - Micro F1: 0.814 - Weighted F1: 0.801 - Macro Precision: 0.682 - Micro Precision: 0.814 - Weighted Precision: 0.797 - Macro Recall: 0.676 - Micro Recall: 0.814 - Weighted Recall: 0.814
[ "classsicantique", "frenchprovence", "industrial", "koreaasia", "lovelyromantic", "modern", "natural", "simple", "unique", "vintageretro" ]
galbitang/autotrain-sofa_merge_vit-95267146327
# Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 95267146327 - CO2 Emissions (in grams): 3.5112 ## Validation Metrics - Loss: 0.678 - Accuracy: 0.784 - Macro F1: 0.740 - Micro F1: 0.784 - Weighted F1: 0.778 - Macro Precision: 0.767 - Micro Precision: 0.784 - Weighted Precision: 0.786 - Macro Recall: 0.739 - Micro Recall: 0.784 - Weighted Recall: 0.784
[ "classicantique", "frenchprovence", "industrial", "koreaasia", "lovelyromantic", "modern", "natural", "simple", "unique", "vintageretro" ]
galbitang/autotrain-table_merge_vit_2-95271146330
# Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 95271146330 - CO2 Emissions (in grams): 0.0864 ## Validation Metrics - Loss: 0.690 - Accuracy: 0.810 - Macro F1: 0.788 - Micro F1: 0.810 - Weighted F1: 0.807 - Macro Precision: 0.815 - Micro Precision: 0.810 - Weighted Precision: 0.813 - Macro Recall: 0.776 - Micro Recall: 0.810 - Weighted Recall: 0.810
[ "classicantique", "frenchprovence", "industrial", "koreaasia", "lovelyromantic", "modern", "natural", "simple", "unique", "vintageretro" ]
dima806/ai_vs_real_image_detection
Checks whether the image is real or fake (AI-generated). **Note to users who want to use this model in production:** Beware that this model was trained on a dataset collected about 2 years ago. Since then, there has been remarkable progress in generating deepfake images with common AI tools, resulting in significant concept drift. To mitigate that, I urge you to retrain the model using the latest available labeled data. As a quick fix, simply reducing the threshold for labelling an image as fake (say, from the default 0.5 to 0.1 or even 0.01) may suffice. However, you do that at your own risk, and retraining the model is the better way of handling the concept drift. See https://www.kaggle.com/code/dima806/cifake-ai-generated-image-detection-vit for more details. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6449300e3adf50d864095b90/bbtmz7duMA6o4HfEp_vjz.png) ``` Classification report: precision recall f1-score support REAL 0.9868 0.9780 0.9824 24000 FAKE 0.9782 0.9870 0.9826 24000 accuracy 0.9825 48000 macro avg 0.9825 0.9825 0.9825 48000 weighted avg 0.9825 0.9825 0.9825 48000 ```
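The threshold quick fix described above can be sketched in a few lines. This is a minimal illustration with made-up logits, not this model's actual outputs, and the class index order is an assumption:

```python
import math

def label_image(logits, fake_index=1, threshold=0.5):
    """Softmax the two class logits and flag the image as fake
    whenever its fake probability clears the chosen threshold."""
    exps = [math.exp(x) for x in logits]
    p_fake = exps[fake_index] / sum(exps)
    return ("fake" if p_fake >= threshold else "real"), p_fake

# Hypothetical logits where the model is only mildly suspicious:
label_default, p = label_image([2.0, 0.5], threshold=0.5)  # argmax-style default
label_strict, _ = label_image([2.0, 0.5], threshold=0.1)   # lowered threshold
```

Lowering the threshold trades FAKE-class precision for recall: borderline images that the default decision rule would pass as real get flagged instead.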
[ "real", "fake" ]
hchcsuim/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0670 - Accuracy: 0.9748 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2374 | 1.0 | 190 | 0.1074 | 0.9615 | | 0.1797 | 2.0 | 380 | 0.0838 | 0.9674 | | 0.111 | 3.0 | 570 | 0.0670 | 0.9748 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
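The `total_train_batch_size` listed above is just the per-device batch size times the gradient-accumulation steps; together with the 190 steps per epoch from the results table it also gives a rough upper bound on the training-set size (assuming no dropped partial batch):

```python
train_batch_size = 32
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps  # 128

steps_per_epoch = 190  # from the training-results table above
approx_train_samples = steps_per_epoch * total_train_batch_size  # ~24,320 images
```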
[ "annualcrop", "forest", "herbaceousvegetation", "highway", "industrial", "pasture", "permanentcrop", "residential", "river", "sealake" ]
Abhiram4/AnimeCharacterClassifierMark1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # AnimeCharacterClassifierMark1 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the image_folder dataset. It achieves the following results on the evaluation set: - Loss: 0.6720 - Accuracy: 0.8655 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 42 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 5.0145 | 0.99 | 17 | 4.9303 | 0.0092 | | 4.8416 | 1.97 | 34 | 4.7487 | 0.0287 | | 4.4383 | 2.96 | 51 | 4.3597 | 0.1170 | | 4.0762 | 4.0 | 69 | 3.6419 | 0.3224 | | 3.108 | 4.99 | 86 | 2.8574 | 0.5246 | | 2.1571 | 5.97 | 103 | 2.2129 | 0.6653 | | 1.4685 | 6.96 | 120 | 1.7290 | 0.7495 | | 1.1649 | 8.0 | 138 | 1.3862 | 0.7977 | | 0.7905 | 8.99 | 155 | 1.1589 | 0.8214 | | 0.5549 | 9.97 | 172 | 1.0263 | 0.8296 | | 0.4577 | 10.96 | 189 | 0.8994 | 0.8368 | | 0.2964 | 12.0 | 207 | 0.8086 | 0.8552 | | 0.194 | 12.99 | 224 | 0.7446 | 0.8583 | | 0.1358 | 13.97 | 241 | 0.7064 | 0.8573 | | 0.1116 | 14.96 | 258 | 0.6720 | 0.8655 | | 0.0811 | 16.0 | 276 | 0.6515 | 0.8645 | ### Framework versions - Transformers 4.33.0 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
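The `lr_scheduler_type: linear` with `lr_scheduler_warmup_ratio: 0.1` means the learning rate ramps up over the first 10% of steps and then decays linearly to zero. A rough sketch of that schedule follows; the 276 total steps are inferred from the results table, not taken from the training code:

```python
def linear_lr(step, total_steps=276, warmup_ratio=0.1, peak_lr=5e-5):
    """Linear warmup for the first warmup_ratio of training,
    then linear decay to zero (as in a linear schedule with warmup)."""
    warmup_steps = int(total_steps * warmup_ratio)  # 27 steps here
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * (total_steps - step) / (total_steps - warmup_steps)
```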
[ "abigail_williams_(fate)", "aegis_(persona)", "aisaka_taiga", "albedo", "anastasia_(idolmaster)", "aqua_(konosuba)", "arcueid_brunestud", "asia_argento", "astolfo_(fate)", "asuna_(sao)", "atago_(azur_lane)", "ayanami_rei", "belfast_(azur_lane)", "bremerton_(azur_lane)", "c.c", "chitanda_eru", "chloe_von_einzbern", "cleveland_(azur_lane)", "d.va_(overwatch)", "dido_(azur_lane)", "emilia_rezero", "enterprise_(azur_lane)", "formidable_(azur_lane)", "fubuki_(one-punch_man)", "fujibayashi_kyou", "fujiwara_chika", "furukawa_nagisa", "gawr_gura", "gilgamesh", "giorno_giovanna", "hanekawa_tsubasa", "hatsune_miku", "hayasaka_ai", "hirasawa_yui", "hyuuga_hinata", "ichigo_(darling_in_the_franxx)", "illyasviel_von_einzbern", "irisviel_von_einzbern", "ishtar_(fate_grand_order)", "isshiki_iroha", "jonathan_joestar", "kamado_nezuko", "kaname_madoka", "kanbaru_suruga", "karin_(blue_archive)", "karna_(fate)", "katsuragi_misato", "keqing_(genshin_impact)", "kirito", "kiryu_coco", "kizuna_ai", "kochou_shinobu", "komi_shouko", "laffey_(azur_lane)", "lancer", "makise_kurisu", "mash_kyrielight", "matou_sakura", "megumin", "mei_(pokemon)", "meltlilith", "minato_aqua", "misaka_mikoto", "miyazono_kawori", "mori_calliope", "nagato_yuki", "nakano_azusa", "nakano_itsuki", "nakano_miku", "nakano_nino", "nakano_yotsuba", "nami_(one_piece)", "nekomata_okayu", "nico_robin", "ninomae_ina'nis", "nishikino_maki", "okita_souji_(fate)", "ookami_mio", "oshino_ougi", "oshino_shinobu", "ouro_kronii", "paimon_(genshin_impact)", "platelet_(hataraku_saibou)", "ram_rezero", "raphtalia", "rem_rezero", "rias_gremory", "rider", "ryougi_shiki", "sakura_futaba", "sakurajima_mai", "sakurauchi_riko", "satonaka_chie", "semiramis_(fate)", "sengoku_nadeko", "senjougahara_hitagi", "shidare_hotaru", "shinomiya_kaguya", "shirakami_fubuki", "shirogane_naoto", "shirogane_noel", "shishiro_botan", "shuten_douji_(fate)", "sinon", "souryuu_asuka_langley", "st_ar-15_(girls_frontline)", "super_sonico", "suzuhara_lulu", 
"suzumiya_haruhi", "taihou_(azur_lane)", "takagi-san", "takamaki_anne", "takanashi_rikka", "takao_(azur_lane)", "takarada_rikka", "takimoto_hifumi", "tokoyami_towa", "toosaka_rin", "toujou_nozomi", "tsushima_yoshiko", "unicorn_(azur_lane)", "usada_pekora", "utsumi_erise", "watson_amelia", "waver_velvet", "xenovia_(high_school_dxd)", "yui_(angel_beats!)", "yuigahama_yui", "yukinoshita_yukino", "zero_two_(darling_in_the_franxx)" ]
LucyintheSky/model-prediction
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Fashion Model Prediction ## Model description This model predicts the name of the fashion model in the image. It is trained on [Lucy in the Sky](https://www.lucyinthesky.com/shop) images. This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k). ## Training and evaluation data It achieves the following results on the evaluation set: - Loss: 0.4297 - Accuracy: 0.9435 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
[ "anna", "bianca", "mila", "natasha", "tailine", "cat", "ellie", "gabby", "genevive", "jessica", "kiele", "lisa", "melanie" ]
seige-ml/my_awesome_food_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_food_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.0961 - Accuracy: 0.3333 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.84 | 4 | 1.1132 | 0.32 | | No log | 1.89 | 9 | 1.0985 | 0.3267 | | 1.1116 | 2.53 | 12 | 1.0961 | 0.3333 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.1.0 - Datasets 2.14.5 - Tokenizers 0.14.1
[ "0", "1", "2" ]
dima806/faces_age_detection
Returns the age group, with about 91% accuracy, based on a facial image. See https://www.kaggle.com/code/dima806/age-group-image-detection-vit for more details. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6449300e3adf50d864095b90/Fp88lO_Z8KNt1JNzHyg1s.png) ``` Classification report: precision recall f1-score support MIDDLE 0.8316 0.9278 0.8771 4321 YOUNG 0.9598 0.8563 0.9051 4322 OLD 0.9552 0.9477 0.9515 4322 accuracy 0.9106 12965 macro avg 0.9155 0.9106 0.9112 12965 weighted avg 0.9155 0.9106 0.9112 12965 ```
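The macro average in the report above is simply the unweighted mean of the three per-class scores, which can be checked directly:

```python
f1_per_class = {"MIDDLE": 0.8771, "YOUNG": 0.9051, "OLD": 0.9515}
macro_f1 = sum(f1_per_class.values()) / len(f1_per_class)  # ~0.9112
# The class supports are nearly equal (4321/4322/4322), which is why
# the weighted average comes out essentially identical to the macro one.
```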
[ "middle", "young", "old" ]
dima806/farm_insects_image_detection
Given an image, returns the farm insect type with about 91% accuracy. See https://www.kaggle.com/code/dima806/farm-insects-image-detection-vit for more details. ``` Classification report: precision recall f1-score support Fall Armyworms 0.7895 0.3191 0.4545 47 Western Corn Rootworms 0.9787 0.9787 0.9787 47 Colorado Potato Beetles 1.0000 0.9792 0.9895 48 Thrips 0.9762 0.8723 0.9213 47 Corn Earworms 0.9070 0.8125 0.8571 48 Cabbage Loopers 0.9388 0.9583 0.9485 48 Armyworms 0.6143 0.9149 0.7350 47 Brown Marmorated Stink Bugs 1.0000 1.0000 1.0000 48 Tomato Hornworms 0.9792 1.0000 0.9895 47 Citrus Canker 0.9038 1.0000 0.9495 47 Aphids 0.9020 0.9583 0.9293 48 Corn Borers 0.8148 0.9167 0.8627 48 Fruit Flies 1.0000 1.0000 1.0000 48 Africanized Honey Bees (Killer Bees) 1.0000 1.0000 1.0000 48 Spider Mites 0.9167 0.9167 0.9167 48 accuracy 0.9090 714 macro avg 0.9147 0.9085 0.9022 714 weighted avg 0.9151 0.9090 0.9027 714 ```
[ "fall armyworms", "western corn rootworms", "colorado potato beetles", "thrips", "corn earworms", "cabbage loopers", "armyworms", "brown marmorated stink bugs", "tomato hornworms", "citrus canker", "aphids", "corn borers", "fruit flies", "africanized honey bees (killer bees)", "spider mites" ]
abelkrw/beans_image_classification
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # beans_image_classification This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.1072 - Accuracy: 0.96 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 12 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 48 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.94 | 8 | 1.3666 | 0.66 | | 0.3651 | 2.0 | 17 | 0.3823 | 0.84 | | 0.5622 | 2.94 | 25 | 0.3333 | 0.86 | | 0.3373 | 4.0 | 34 | 0.1274 | 0.97 | | 0.2055 | 4.94 | 42 | 0.1882 | 0.93 | | 0.1819 | 6.0 | 51 | 0.2265 | 0.9 | | 0.1819 | 6.94 | 59 | 0.2395 | 0.91 | | 0.2428 | 8.0 | 68 | 0.1451 | 0.97 | | 0.1305 | 8.94 | 76 | 0.1554 | 0.94 | | 0.1203 | 9.41 | 80 | 0.1705 | 0.92 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
abhirajeshbhai/weather_vit_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # weather_vit_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1100 - Accuracy: 0.9735 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 127 | 0.1199 | 0.9735 | | No log | 2.0 | 254 | 0.1290 | 0.9646 | | No log | 3.0 | 381 | 0.1100 | 0.9735 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
[ "cloudy", "rain", "shine", "sunrise" ]
bryandts/garbage_classification
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # garbage_classification This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0790 - Accuracy: 0.9707 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1259 | 1.0 | 1254 | 0.0790 | 0.9707 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
[ "o", "r" ]
gianlab/swin-tiny-patch4-window7-224-finetuned-ecg-classification
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-ecg-classification This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0000 - Accuracy: 1.0 ## Model description This model was created by importing the ECG image dataset from Kaggle (https://www.kaggle.com/datasets/erhmrai/ecg-image-data/data) into Google Colab. I then followed the image classification tutorial here: https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb obtaining the following notebook: https://colab.research.google.com/drive/1KC6twirtsc7N1kmlwY3IQKVUmSuK7zlh?usp=sharing The possible classes are: <ul> <li>N: Normal beat</li> <li>S: Supraventricular premature beat</li> <li>V: Premature ventricular contraction</li> <li>F: Fusion of ventricular and normal beat</li> <li>Q: Unclassifiable beat</li> <li>M: Myocardial infarction</li> </ul> ### ECG example: ![Screenshot](N1.png) ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0476 | 1.0 | 697 | 0.0000 | 1.0 | ### Framework versions - 
Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
[ "f", "m", "n", "q", "s", "v" ]
khleeloo/vit-base-skin
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-skin This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6917 - Accuracy: 0.8549 - F1: 0.8552 - Precision: 0.8560 - Recall: 0.8549 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.9322 | 0.16 | 100 | 0.8109 | 0.6943 | 0.6290 | 0.5939 | 0.6943 | | 0.7518 | 0.32 | 200 | 0.6722 | 0.7409 | 0.6832 | 0.6945 | 0.7409 | | 0.6616 | 0.48 | 300 | 0.7126 | 0.7358 | 0.7077 | 0.7039 | 0.7358 | | 0.8264 | 0.64 | 400 | 0.6001 | 0.8135 | 0.8092 | 0.8178 | 0.8135 | | 0.5767 | 0.8 | 500 | 0.6306 | 0.7772 | 0.7619 | 0.7945 | 0.7772 | | 0.5939 | 0.96 | 600 | 0.4621 | 0.8290 | 0.7988 | 0.8397 | 0.8290 | | 0.4351 | 1.12 | 700 | 0.5544 | 0.7979 | 0.7894 | 0.8410 | 0.7979 | | 0.4737 | 1.28 | 800 | 0.5151 | 0.8238 | 0.8334 | 0.8708 | 0.8238 | | 0.428 | 1.44 | 900 | 0.4980 | 0.8238 | 0.8170 | 0.8299 | 0.8238 | | 0.4596 | 1.6 | 1000 | 0.5650 | 0.7927 | 0.8032 | 0.8428 | 0.7927 | | 0.4096 | 1.76 | 1100 | 0.4544 | 0.8342 | 0.8178 | 0.8567 | 0.8342 | | 0.4328 | 1.92 | 1200 | 0.4524 | 0.8290 | 0.8294 | 
0.8482 | 0.8290 | | 0.2272 | 2.08 | 1300 | 0.4808 | 0.8290 | 0.8304 | 0.8409 | 0.8290 | | 0.2415 | 2.24 | 1400 | 0.5585 | 0.7927 | 0.7916 | 0.8057 | 0.7927 | | 0.2743 | 2.4 | 1500 | 0.4144 | 0.8497 | 0.8484 | 0.8497 | 0.8497 | | 0.1943 | 2.56 | 1600 | 0.3977 | 0.8705 | 0.8722 | 0.8761 | 0.8705 | | 0.1839 | 2.72 | 1700 | 0.4784 | 0.8394 | 0.8382 | 0.8517 | 0.8394 | | 0.1905 | 2.88 | 1800 | 0.4314 | 0.8653 | 0.8669 | 0.8724 | 0.8653 | | 0.111 | 3.04 | 1900 | 0.5080 | 0.8290 | 0.8309 | 0.8348 | 0.8290 | | 0.0872 | 3.19 | 2000 | 0.5320 | 0.8549 | 0.8520 | 0.8649 | 0.8549 | | 0.1169 | 3.35 | 2100 | 0.5110 | 0.8342 | 0.8386 | 0.8477 | 0.8342 | | 0.1181 | 3.51 | 2200 | 0.4916 | 0.8446 | 0.8482 | 0.8563 | 0.8446 | | 0.0879 | 3.67 | 2300 | 0.5428 | 0.8601 | 0.8657 | 0.8829 | 0.8601 | | 0.1896 | 3.83 | 2400 | 0.5534 | 0.8497 | 0.8484 | 0.8536 | 0.8497 | | 0.0794 | 3.99 | 2500 | 0.6542 | 0.8342 | 0.8259 | 0.8270 | 0.8342 | | 0.0398 | 4.15 | 2600 | 0.5962 | 0.8187 | 0.8243 | 0.8338 | 0.8187 | | 0.0512 | 4.31 | 2700 | 0.6286 | 0.8497 | 0.8447 | 0.8457 | 0.8497 | | 0.0106 | 4.47 | 2800 | 0.6446 | 0.8394 | 0.8372 | 0.8377 | 0.8394 | | 0.0058 | 4.63 | 2900 | 0.5754 | 0.8653 | 0.8616 | 0.8618 | 0.8653 | | 0.0268 | 4.79 | 3000 | 0.5966 | 0.8653 | 0.8651 | 0.8658 | 0.8653 | | 0.0146 | 4.95 | 3100 | 0.6707 | 0.8601 | 0.8535 | 0.8577 | 0.8601 | | 0.0325 | 5.11 | 3200 | 0.6543 | 0.8549 | 0.8518 | 0.8511 | 0.8549 | | 0.0063 | 5.27 | 3300 | 0.6780 | 0.8497 | 0.8519 | 0.8583 | 0.8497 | | 0.003 | 5.43 | 3400 | 0.6675 | 0.8601 | 0.8577 | 0.8562 | 0.8601 | | 0.0143 | 5.59 | 3500 | 0.6967 | 0.8601 | 0.8554 | 0.8539 | 0.8601 | | 0.004 | 5.75 | 3600 | 0.6992 | 0.8601 | 0.8573 | 0.8552 | 0.8601 | | 0.003 | 5.91 | 3700 | 0.6917 | 0.8549 | 0.8552 | 0.8560 | 0.8549 | ### Framework versions - Transformers 4.29.2 - Pytorch 1.13.1 - Datasets 2.14.5 - Tokenizers 0.13.3
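As a reminder of how the headline scores above relate: F1 is the harmonic mean of precision and recall. The reported 0.8552 is a weighted average of per-class F1 scores, so it differs slightly from the harmonic mean of the aggregate precision and recall computed here:

```python
precision, recall = 0.8560, 0.8549  # evaluation-set aggregates from above
f1 = 2 * precision * recall / (precision + recall)  # ~0.8554
```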
[ "mel", "nv", "bcc", "akiec", "bkl", "df", "vasc" ]
regis-funke/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0587 - Accuracy: 0.9804 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.235 | 1.0 | 190 | 0.1109 | 0.9611 | | 0.1616 | 2.0 | 380 | 0.0706 | 0.9774 | | 0.1309 | 3.0 | 570 | 0.0587 | 0.9804 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.2.0.dev20231018 - Datasets 2.14.5 - Tokenizers 0.14.1
[ "annualcrop", "forest", "herbaceousvegetation", "highway", "industrial", "pasture", "permanentcrop", "residential", "river", "sealake" ]
yusuf802/Leaf-Disease-Predictor
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # working This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the leaf-images dataset. It achieves the following results on the evaluation set: - Loss: 0.0857 - Accuracy: 0.9801 ## Model description Model fine-tuned on 66,000+ images of leaves from different species, together with their diseases ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 48 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.9728 | 0.08 | 100 | 0.9026 | 0.8922 | | 0.4538 | 0.17 | 200 | 0.4412 | 0.9270 | | 0.2368 | 0.25 | 300 | 0.2870 | 0.9399 | | 0.2388 | 0.34 | 400 | 0.2208 | 0.9504 | | 0.1422 | 0.42 | 500 | 0.2046 | 0.9508 | | 0.1663 | 0.51 | 600 | 0.1538 | 0.9625 | | 0.1535 | 0.59 | 700 | 0.1427 | 0.9653 | | 0.1233 | 0.68 | 800 | 0.1133 | 0.9724 | | 0.1079 | 0.76 | 900 | 0.1005 | 0.9759 | | 0.1154 | 0.84 | 1000 | 0.0989 | 0.9748 | | 0.08 | 0.93 | 1100 | 0.0857 | 0.9801 | ### Framework versions - Transformers 4.33.0 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
[ "apple_black_rot", "apple_cedar_apple_rust", "corn_(maize)_healthy", "cotton_leaf_diseased", "cotton_leaf_fresh", "grape_black_rot", "grape___esca_(black_measles)", "grape___leaf_blight_(isariopsis_leaf_spot)", "grape___healthy", "orange_haunglongbing_(citrus_greening)", "orange__black_rot", "orange__canker", "apple_powdery_mildew", "orange__healthy", "peach_bacterial_spot", "peach_healthy", "pepper,_bell_bacterial_spot", "pepper,_bell_healthy", "potato_early_blight", "potato_late_blight", "potato_healthy", "squash_powdery_mildew", "strawberry_leaf_scorch", "apple_healthy", "strawberry_healthy", "tomato_bacterial_spot", "tomato_early_blight", "tomato_late_blight", "tomato_leaf_mold", "tomato_septoria_leaf_spot", "tomato_spider_mites_two_spotted_spider_mite", "tomato_target_spot", "tomato_tomato_yellow_leaf_curl_virus", "tomato_tomato_mosaic_virus", "apple_scab", "tomato_healthy", "wheat_healthy", "wheat_leaf_rust", "wheat_nitrogen_deficiency", "cherry_(including_sour)_powdery_mildew", "cherry_(including_sour)_healthy", "corn_(maize)_cercospora_leaf_spot gray_leaf_spot", "corn_(maize)_common_rust", "corn_(maize)_northern_leaf_blight" ]
platzi/platzi-vit-model-gio-testing
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # platzi-vit-model-gio-testing This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0153 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1508 | 3.85 | 500 | 0.0153 | 1.0 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
[ "angular_leaf_spot", "bean_rust", "healthy" ]
Abhiram4/SwinMark2
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SwinMark2 This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the image_folder dataset. It achieves the following results on the evaluation set: - Loss: 0.0952 - Accuracy: 0.9666 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1407 | 1.0 | 231 | 0.1230 | 0.9586 | | 0.1209 | 2.0 | 462 | 0.1066 | 0.9630 | | 0.0987 | 3.0 | 693 | 0.0952 | 0.9666 | ### Framework versions - Transformers 4.33.0 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
[ "cnv", "dme", "drusen", "normal" ]
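The `total_train_batch_size: 512` reported above is simply the per-device batch size times the gradient-accumulation steps; a quick sanity check, together with the approximate training-set size it implies (an estimate derived from the 231 steps per epoch in the table above, not a documented figure):

```python
per_device_batch = 128   # train_batch_size from the hyperparameters above
grad_accum = 4           # gradient_accumulation_steps from the hyperparameters above
steps_per_epoch = 231    # steps per epoch from the training-results table above

effective_batch = per_device_batch * grad_accum
approx_train_samples = effective_batch * steps_per_epoch
print(effective_batch, approx_train_samples)
```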
zkdeng/resnet-50-finetuned-dangerousSpiders
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # resnet-50-finetuned-dangerousSpiders This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 1.8733 - eval_accuracy: 0.5635 - eval_precision: 0.1112 - eval_recall: 0.0821 - eval_f1: 0.0750 - eval_runtime: 120.0747 - eval_samples_per_second: 224.177 - eval_steps_per_second: 14.016 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 2 ### Framework versions - Transformers 4.33.2 - Pytorch 2.2.0.dev20230921 - Datasets 2.14.5 - Tokenizers 0.13.3
[ "acantholycosa_lignaria", "aculepeira_ceropegia", "agalenatea_redii", "agelena_labyrinthica", "aglaoctenus_castaneus", "aglaoctenus_lagotis", "allocosa_funerea", "allotrochosina_schauinslandi", "alopecosa_albofasciata", "alopecosa_barbipes", "alopecosa_cuneata", "alopecosa_inquilina", "alopecosa_kochi", "alopecosa_pulverulenta", "anahita_punctulata", "anasaitis_canosa", "ancylometes_bogotensis", "ancylometes_concolor", "ancylometes_rufus", "anoteropsis_hilaris", "anoteropsis_litoralis", "anyphaena_accentuata", "aphonopelma_hentzi", "araneus_diadematus", "araneus_marmoreus", "araneus_quadratus", "araneus_trifolium", "araniella_displicata", "arctosa_cinerea", "arctosa_leopardus", "arctosa_littoralis", "arctosa_perita", "arctosa_personata", "argiope_argentata", "argiope_aurantia", "argiope_bruennichi", "argiope_keyserlingi", "argiope_lobata", "argiope_trifasciata", "asthenoctenus_borellii", "attulus_fasciger", "aulonia_albimana", "austracantha_minax", "badumna_longinqua", "carrhotus_xanthogramma", "centroctenus_brevipes", "cheiracanthium_erraticum", "cheiracanthium_gracile", "cheiracanthium_inclusum", "cheiracanthium_mildei", "cheiracanthium_punctorium", "colonus_hesperus", "colonus_sylvanus", "ctenus_amphora", "ctenus_hibernalis", "ctenus_medius", "ctenus_ornatus", "cupiennius_coccineus", "cupiennius_getazi", "cupiennius_salei", "cyclosa_conica", "cyrtophora_citricola", "diapontia_uruguayensis", "dolomedes_albineus", "dolomedes_minor", "dolomedes_scriptus", "dolomedes_tenebrosus", "dolomedes_triton", "dysdera_crocata", "ebrechtella_tricuspidata", "enoplognatha_ovata", "eratigena_agrestis", "eratigena_duellica", "eriophora_ravilla", "eris_militaris", "evarcha_arcuata", "gasteracantha_cancriformis", "geolycosa_vultuosa", "gladicosa_gulosa", "gladicosa_pulchra", "habronattus_pyrrithrix", "hasarius_adansoni", "helpis_minitabunda", "hentzia_mitrata", "hentzia_palmarum", "herpyllus_ecclesiasticus", "heteropoda_venatoria", "hippasa_holmerae", "hogna_antelucana", 
"hogna_baltimoriana", "hogna_bivittata", "hogna_carolinensis", "hogna_crispipes", "hogna_frondicola", "hogna_gumia", "hogna_radiata", "holocnemus_pluchei", "kukulcania_hibernalis", "lampona_cylindrata", "larinioides_cornutus", "larinioides_sclopetarius", "latrodectus_bishopi", "latrodectus_curacaviensis", "latrodectus_geometricus", "latrodectus_hasselti", "latrodectus_hesperus", "latrodectus_katipo", "latrodectus_mactans", "latrodectus_mirabilis", "latrodectus_renivulvatus", "latrodectus_tredecimguttatus", "latrodectus_variolus", "leucauge_argyra", "leucauge_argyrobapta", "leucauge_dromedaria", "leucauge_venusta", "loxosceles_amazonica", "loxosceles_deserta", "loxosceles_laeta", "loxosceles_reclusa", "loxosceles_rufescens", "loxosceles_tenochtitlan", "loxosceles_yucatana", "lycosa_erythrognatha", "lycosa_hispanica", "lycosa_pampeana", "lycosa_praegrandis", "lycosa_singoriensis", "lycosa_tarantula", "lyssomanes_viridis", "maevia_inclemens", "mangora_acalypha", "maratus_griseus", "marpissa_muscosa", "mecynogea_lemniscata", "menemerus_bivittatus", "menemerus_semilimbatus", "micrathena_gracilis", "micrathena_sagittata", "micrommata_virescens", "missulena_bradleyi", "missulena_occatoria", "misumena_vatia", "misumenoides_formosipes", "misumessus_oblongus", "naphrys_pulex", "neoscona_arabesca", "neoscona_crucifera", "neoscona_oaxacensis", "nephila_pilipes", "neriene_radiata", "nesticodes_rufipes", "nuctenea_umbratica", "oxyopes_salticus", "oxyopes_scalaris", "paraphidippus_aurantius", "parasteatoda_tepidariorum", "paratrochosina_amica", "pardosa_amentata", "pardosa_lapidicina", "pardosa_mercurialis", "pardosa_moesta", "pardosa_wagleri", "peucetia_viridans", "phidippus_audax", "phidippus_clarus", "phidippus_johnsoni", "phidippus_putnami", "philaeus_chrysops", "philodromus_dispar", "pholcus_phalangioides", "phoneutria_boliviensis", "phoneutria_depilata", "phoneutria_fera", "phoneutria_nigriventer", "phoneutria_pertyi", "phoneutria_reidyi", "pirata_piraticus", 
"pisaura_mirabilis", "pisaurina_mira", "platycryptus_californicus", "platycryptus_undatus", "plebs_eburnus", "plexippus_paykulli", "portacosa_cinerea", "rabidosa_hentzi", "rabidosa_punctulata", "rabidosa_rabida", "salticus_scenicus", "sassacus_vitis", "schizocosa_avida", "schizocosa_malitiosa", "schizocosa_mccooki", "scytodes_thoracica", "sicarius_thomisoides", "socca_pustulosa", "sosippus_californicus", "steatoda_grossa", "steatoda_nobilis", "steatoda_triangulosa", "synema_globosum", "thomisus_onustus", "tigrosa_annexa", "tigrosa_aspersa", "tigrosa_georgicola", "tigrosa_helluo", "trichonephila_clavata", "trichonephila_clavipes", "trichonephila_edulis", "trichonephila_plumipes", "trochosa_ruricola", "trochosa_sepulchralis", "trochosa_terricola", "tropicosa_moesta", "venator_immansuetus", "venator_spenceri", "venatrix_furcillata", "verrucosa_arenata", "wadicosa_fidelis", "xerolycosa_miniata", "xerolycosa_nemoralis", "zoropsis_spinimana", "zygiella_x-notata" ]
diana9m/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.5666 - Accuracy: 0.7778 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.92 | 6 | 4.5666 | 0.7778 | | 5.077 | 2.0 | 13 | 1.7078 | 0.7778 | | 5.077 | 2.77 | 18 | 1.4156 | 0.7778 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.13.3
[ "0", "0", "1", "1", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "1", "0", "1", "1", "1", "0", "0", "1", "0", "0", "0", "0", "0", "0", "1", "0", "0", "1", "0", "0", "1", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "1", "0", "0", "1", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "1", "0", "1", "0", "0", "0", "0", "0", "1", "1", "1", "1", "0", "0", "0", "0", "0", "0", "0", "0", "1", "0", "0", "0", "0", "1", "1", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "1", "0", "0", "0", "1", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "1", "0", "1", "1", "1", "1", "1", "1", "0", "0", "0", "0", "0", "1", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "1", "1", "1", "1", "1", "0", "0", "0", "0", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", 
"0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "1", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "1", "0", "0", "0", "1", "0", "1", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "1", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "1", "0", "0", "1", "0", "0", "0", "0", "0", "0", "0", "0", "1", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "1", "1", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "1", "0", "1", "0", "0", "1", "0", "0", "0", "0", "0", "0", "0", "0", "1", "1", "0", "0", "0", "0", "0", "0", "1", "0", "0", "0", "0", "0", "0", "0", "0", "1", "1", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "1", "0", "0", "0", "0", "0", "0", "0", "1", "0", "1", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "1", "0", "0", "0", "0", "0", "0", "1", "0", "0", "0", "1", "1", "1", "0", "0", "0", "0", "0", "1", "0", "0", "0", "0", "0", "1", "1", "1", "0", "0", "1", "0", "1", "1", "0", "0", "0", "0", "1", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "1", "1", "0", "0", "0", "0", "1", "0", "0", "1", "0", "0", "0", "0", "1", "1", "0", "0", "0", "0", "0", "0", "1", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "1", "0", "1", "0", "0", "0", "0", "1", "0", "0", "0", "0", "0", "0", "0", "0", "0", "1", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "1", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "1", "0", "0", "0", "1", "0", "1", "0", "0", "0", "0", "0", "0", "1", "0", "1", "0", "0", "0", "0", "0", "0", "0", "0", "1", "0", "0", "1", "0", 
"0", "1", "1", "0", "1", "1", "0", "1", "1", "1", "1", "1", "1", "0", "0", "0", "0", "0", "1", "0", "1", "1", "0", "0", "1", "1", "0", "0", "0", "0", "0", "0", "0", "0", "1", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "1", "0", "0", "0", "0", "0", "0", "0", "1", "0", "0", "0", "0", "0", "0", "0", "1", "0", "0", "0", "1", "0", "0", "0", "0", "0", "0", "1", "1", "0", "0", "0", "1", "0", "0", "0", "0", "0", "1", "0", "0", "0", "0", "0", "0", "0", "1", "1", "0", "0", "0", "0", "0" ]
bdpc/vit-base_rvl_cdip_aurc
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base_rvl_cdip_aurc This model is a fine-tuned version of [jordyvl/vit-base_rvl-cdip](https://huggingface.co/jordyvl/vit-base_rvl-cdip) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2759 - Accuracy: 0.893 - Brier Loss: 0.1798 - Nll: 0.8614 - F1 Micro: 0.893 - F1 Macro: 0.8928 - Ece: 0.0750 - Aurc: 0.0215 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:| | 0.0303 | 1.0 | 500 | 0.1865 | 0.8795 | 0.1840 | 1.2087 | 0.8795 | 0.8791 | 0.0495 | 0.0241 | | 0.0262 | 2.0 | 1000 | 0.2146 | 0.8788 | 0.1909 | 1.1956 | 0.8788 | 0.8789 | 0.0603 | 0.0257 | | 0.0121 | 3.0 | 1500 | 0.2117 | 0.886 | 0.1799 | 1.0878 | 0.886 | 0.8865 | 0.0611 | 0.0230 | | 0.0057 | 4.0 | 2000 | 0.2279 | 0.8878 | 0.1803 | 1.0108 | 0.8878 | 0.8879 | 0.0678 | 0.0228 | | 0.0038 | 5.0 | 2500 | 0.2491 | 0.8872 | 0.1827 | 0.9661 | 0.8872 | 0.8877 | 0.0725 | 0.0234 | | 0.0028 | 6.0 | 3000 | 0.2398 | 0.89 | 0.1806 | 0.9378 | 0.89 | 0.8901 | 0.0725 | 0.0215 | | 0.0016 | 7.0 | 3500 | 0.2736 | 0.891 | 0.1792 | 0.8975 | 0.891 | 0.8914 | 0.0744 | 
0.0221 | | 0.0014 | 8.0 | 4000 | 0.2357 | 0.8905 | 0.1811 | 0.8993 | 0.8905 | 0.8910 | 0.0764 | 0.0210 | | 0.001 | 9.0 | 4500 | 0.2714 | 0.8898 | 0.1807 | 0.8650 | 0.8898 | 0.8897 | 0.0783 | 0.0213 | | 0.0009 | 10.0 | 5000 | 0.2759 | 0.893 | 0.1798 | 0.8614 | 0.893 | 0.8928 | 0.0750 | 0.0215 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.2.0.dev20231002 - Datasets 2.7.1 - Tokenizers 0.13.3
[ "letter", "form", "email", "handwritten", "advertisement", "scientific_report", "scientific_publication", "specification", "file_folder", "news_article", "budget", "invoice", "presentation", "questionnaire", "resume", "memo" ]
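The evaluation table above reports a Brier Loss column. A minimal sketch of the multiclass Brier score for a single prediction (squared error between the predicted distribution and the one-hot target; the exact evaluation code for this card may differ, e.g. in how it averages over samples):

```python
def brier_loss(probs, label):
    """Multiclass Brier score for one prediction: sum of squared errors
    between the predicted probabilities and the one-hot target."""
    return sum((p - (1.0 if i == label else 0.0)) ** 2 for i, p in enumerate(probs))

# A confident, correct prediction scores near 0; a flatter one scores higher.
print(brier_loss([0.9, 0.05, 0.05], 0))
```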
bdpc/vit-base_rvl_cdip_ce
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base_rvl_cdip_ce This model is a fine-tuned version of [jordyvl/vit-base_rvl-cdip](https://huggingface.co/jordyvl/vit-base_rvl-cdip) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5626 - Accuracy: 0.8932 - Brier Loss: 0.1854 - Nll: 0.8898 - F1 Micro: 0.8932 - F1 Macro: 0.8934 - Ece: 0.0831 - Aurc: 0.0199 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:| | 0.1771 | 1.0 | 500 | 0.4123 | 0.887 | 0.1720 | 1.2003 | 0.887 | 0.8872 | 0.0534 | 0.0204 | | 0.1349 | 2.0 | 1000 | 0.4344 | 0.8895 | 0.1754 | 1.1219 | 0.8895 | 0.8900 | 0.0614 | 0.0207 | | 0.0656 | 3.0 | 1500 | 0.4602 | 0.8852 | 0.1836 | 1.0477 | 0.8852 | 0.8856 | 0.0734 | 0.0197 | | 0.0314 | 4.0 | 2000 | 0.5044 | 0.889 | 0.1851 | 1.0124 | 0.889 | 0.8888 | 0.0729 | 0.0230 | | 0.0134 | 5.0 | 2500 | 0.5193 | 0.8895 | 0.1861 | 0.9779 | 0.8895 | 0.8905 | 0.0803 | 0.0207 | | 0.0075 | 6.0 | 3000 | 0.5300 | 0.8915 | 0.1848 | 0.9515 | 0.8915 | 0.8922 | 0.0793 | 0.0203 | | 0.0057 | 7.0 | 3500 | 0.5552 | 0.89 | 0.1893 | 0.9200 | 0.89 | 0.8897 | 0.0852 | 
0.0205 | | 0.0047 | 8.0 | 4000 | 0.5589 | 0.892 | 0.1871 | 0.9245 | 0.892 | 0.8923 | 0.0826 | 0.0198 | | 0.0046 | 9.0 | 4500 | 0.5620 | 0.8935 | 0.1854 | 0.8987 | 0.8935 | 0.8937 | 0.0828 | 0.0199 | | 0.0042 | 10.0 | 5000 | 0.5626 | 0.8932 | 0.1854 | 0.8898 | 0.8932 | 0.8934 | 0.0831 | 0.0199 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.2.0.dev20231002 - Datasets 2.7.1 - Tokenizers 0.13.3
[ "letter", "form", "email", "handwritten", "advertisement", "scientific_report", "scientific_publication", "specification", "file_folder", "news_article", "budget", "invoice", "presentation", "questionnaire", "resume", "memo" ]
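The `Ece` column in the tables above is an expected calibration error. A sketch of the standard equal-width-binned version (the bin count and binning scheme here are assumptions; the card does not state which variant was used):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Equal-width binned ECE: bin predictions by confidence, then take the
    bin-size-weighted gap between mean confidence and accuracy per bin."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        bins[min(int(conf * n_bins), n_bins - 1)].append((conf, ok))
    total = len(confidences)
    return sum(
        len(b) / total
        * abs(sum(c for c, _ in b) / len(b) - sum(o for _, o in b) / len(b))
        for b in bins if b
    )

# Two 95%-confident predictions, only one correct: large calibration gap.
print(expected_calibration_error([0.95, 0.95], [1, 0]))
```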
bdpc/vit-base_rvl_cdip-N1K_AURC_64
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base_rvl_cdip-N1K_AURC_64 This model is a fine-tuned version of [jordyvl/vit-base_rvl-cdip](https://huggingface.co/jordyvl/vit-base_rvl-cdip) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3118 - Accuracy: 0.8952 - Brier Loss: 0.1766 - Nll: 0.8835 - F1 Micro: 0.8952 - F1 Macro: 0.8951 - Ece: 0.0747 - Aurc: 0.0206 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:| | No log | 1.0 | 250 | 0.1770 | 0.8875 | 0.1709 | 1.2031 | 0.8875 | 0.8885 | 0.0519 | 0.0208 | | 0.0228 | 2.0 | 500 | 0.2135 | 0.8852 | 0.1813 | 1.1542 | 0.8852 | 0.8853 | 0.0557 | 0.0228 | | 0.0228 | 3.0 | 750 | 0.1750 | 0.8918 | 0.1729 | 1.0088 | 0.8918 | 0.8917 | 0.0628 | 0.0192 | | 0.0066 | 4.0 | 1000 | 0.2117 | 0.8955 | 0.1697 | 0.9611 | 0.8955 | 0.8954 | 0.0655 | 0.0189 | | 0.0066 | 5.0 | 1250 | 0.2578 | 0.8958 | 0.1714 | 0.9234 | 0.8958 | 0.8958 | 0.0690 | 0.0194 | | 0.0021 | 6.0 | 1500 | 0.2752 | 0.8962 | 0.1730 | 0.9093 | 0.8962 | 0.8964 | 0.0709 | 0.0197 | | 0.0021 | 7.0 | 1750 | 0.2949 | 0.8972 | 0.1748 | 0.8841 | 0.8972 | 0.8972 | 0.0708 | 0.0200 | | 0.0014 | 8.0 | 2000 | 0.3037 | 0.8955 
| 0.1755 | 0.8842 | 0.8955 | 0.8954 | 0.0739 | 0.0204 | | 0.0014 | 9.0 | 2250 | 0.3045 | 0.8952 | 0.1764 | 0.8839 | 0.8952 | 0.8951 | 0.0741 | 0.0206 | | 0.0013 | 10.0 | 2500 | 0.3118 | 0.8952 | 0.1766 | 0.8835 | 0.8952 | 0.8951 | 0.0747 | 0.0206 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.2.0.dev20231002 - Datasets 2.7.1 - Tokenizers 0.13.3
[ "letter", "form", "email", "handwritten", "advertisement", "scientific_report", "scientific_publication", "specification", "file_folder", "news_article", "budget", "invoice", "presentation", "questionnaire", "resume", "memo" ]
bdpc/vit-base_rvl_cdip-N1K_ce_64
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base_rvl_cdip-N1K_ce_64 This model is a fine-tuned version of [jordyvl/vit-base_rvl-cdip](https://huggingface.co/jordyvl/vit-base_rvl-cdip) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5145 - Accuracy: 0.8908 - Brier Loss: 0.1847 - Nll: 0.9466 - F1 Micro: 0.8907 - F1 Macro: 0.8910 - Ece: 0.0829 - Aurc: 0.0191 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:| | No log | 1.0 | 250 | 0.4009 | 0.8892 | 0.1695 | 1.1791 | 0.8892 | 0.8896 | 0.0538 | 0.0185 | | 0.1472 | 2.0 | 500 | 0.4214 | 0.8938 | 0.1688 | 1.1365 | 0.8938 | 0.8948 | 0.0527 | 0.0199 | | 0.1472 | 3.0 | 750 | 0.4245 | 0.8898 | 0.1722 | 1.0919 | 0.8898 | 0.8900 | 0.0633 | 0.0185 | | 0.0462 | 4.0 | 1000 | 0.4571 | 0.891 | 0.1776 | 1.0386 | 0.891 | 0.8914 | 0.0699 | 0.0198 | | 0.0462 | 5.0 | 1250 | 0.4775 | 0.8922 | 0.1797 | 1.0236 | 0.8922 | 0.8926 | 0.0745 | 0.0196 | | 0.0118 | 6.0 | 1500 | 0.4953 | 0.8878 | 0.1845 | 0.9920 | 0.8878 | 0.8882 | 0.0823 | 0.0190 | | 0.0118 | 7.0 | 1750 | 0.5052 | 0.89 | 0.1847 | 0.9631 | 0.89 | 0.8903 | 0.0820 | 0.0193 | | 0.0065 | 8.0 | 2000 | 0.5068 | 0.8905 | 0.1832 
| 0.9653 | 0.8905 | 0.8910 | 0.0816 | 0.0190 | | 0.0065 | 9.0 | 2250 | 0.5143 | 0.8905 | 0.1850 | 0.9551 | 0.8905 | 0.8908 | 0.0833 | 0.0191 | | 0.0053 | 10.0 | 2500 | 0.5145 | 0.8908 | 0.1847 | 0.9466 | 0.8907 | 0.8910 | 0.0829 | 0.0191 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.2.0.dev20231002 - Datasets 2.7.1 - Tokenizers 0.13.3
[ "letter", "form", "email", "handwritten", "advertisement", "scientific_report", "scientific_publication", "specification", "file_folder", "news_article", "budget", "invoice", "presentation", "questionnaire", "resume", "memo" ]
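The tables above report both F1 Micro and F1 Macro (for single-label multiclass evaluation, micro F1 equals accuracy, which is why those columns track each other). A minimal macro-F1 sketch, one unweighted per-class average, as a reference for how the macro column differs:

```python
def macro_f1(y_true, y_pred, n_classes):
    """Unweighted mean of per-class F1 scores."""
    f1s = []
    for c in range(n_classes):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return sum(f1s) / n_classes

print(macro_f1([0, 0, 1], [0, 1, 1], 2))
```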
bdpc/vit-base_rvl_cdip-N1K_ce_32
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base_rvl_cdip-N1K_ce_32 This model is a fine-tuned version of [jordyvl/vit-base_rvl-cdip](https://huggingface.co/jordyvl/vit-base_rvl-cdip) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5671 - Accuracy: 0.8915 - Brier Loss: 0.1895 - Nll: 0.9175 - F1 Micro: 0.8915 - F1 Macro: 0.8919 - Ece: 0.0850 - Aurc: 0.0200 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:| | 0.1771 | 1.0 | 500 | 0.4121 | 0.8885 | 0.1719 | 1.2085 | 0.8885 | 0.8888 | 0.0509 | 0.0203 | | 0.134 | 2.0 | 1000 | 0.4415 | 0.8882 | 0.1782 | 1.1210 | 0.8882 | 0.8886 | 0.0626 | 0.0212 | | 0.0682 | 3.0 | 1500 | 0.4722 | 0.8855 | 0.1847 | 1.0778 | 0.8855 | 0.8858 | 0.0740 | 0.0213 | | 0.0325 | 4.0 | 2000 | 0.4851 | 0.8905 | 0.1796 | 1.0195 | 0.8905 | 0.8911 | 0.0712 | 0.0213 | | 0.0145 | 5.0 | 2500 | 0.5409 | 0.8842 | 0.1946 | 1.0096 | 0.8842 | 0.8850 | 0.0860 | 0.0217 | | 0.0082 | 6.0 | 3000 | 0.5378 | 0.8872 | 0.1886 | 0.9573 | 0.8872 | 0.8879 | 0.0858 | 0.0206 | | 0.0059 | 7.0 | 3500 | 0.5446 | 0.8895 | 0.1870 | 0.9288 | 0.8895 | 0.8897 | 0.0844 | 0.0206 | | 0.0046 | 8.0 | 4000 | 0.5580 | 0.8885 | 
0.1874 | 0.9153 | 0.8885 | 0.8889 | 0.0859 | 0.0203 | | 0.0043 | 9.0 | 4500 | 0.5675 | 0.8905 | 0.1903 | 0.9313 | 0.8905 | 0.8910 | 0.0864 | 0.0201 | | 0.004 | 10.0 | 5000 | 0.5671 | 0.8915 | 0.1895 | 0.9175 | 0.8915 | 0.8919 | 0.0850 | 0.0200 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.2.0.dev20231002 - Datasets 2.7.1 - Tokenizers 0.13.3
[ "letter", "form", "email", "handwritten", "advertisement", "scientific_report", "scientific_publication", "specification", "file_folder", "news_article", "budget", "invoice", "presentation", "questionnaire", "resume", "memo" ]
bdpc/vit-base_rvl_cdip-N1K_AURC_32
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base_rvl_cdip-N1K_AURC_32 This model is a fine-tuned version of [jordyvl/vit-base_rvl-cdip](https://huggingface.co/jordyvl/vit-base_rvl-cdip) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3439 - Accuracy: 0.8962 - Brier Loss: 0.1805 - Nll: 0.8184 - F1 Micro: 0.8962 - F1 Macro: 0.8963 - Ece: 0.0767 - Aurc: 0.0220 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:| | 0.0301 | 1.0 | 500 | 0.1897 | 0.8808 | 0.1804 | 1.1636 | 0.8808 | 0.8807 | 0.0528 | 0.0227 | | 0.0229 | 2.0 | 1000 | 0.2504 | 0.883 | 0.1834 | 1.1357 | 0.883 | 0.8832 | 0.0573 | 0.0248 | | 0.0081 | 3.0 | 1500 | 0.2251 | 0.8858 | 0.1787 | 1.0242 | 0.8858 | 0.8858 | 0.0653 | 0.0221 | | 0.004 | 4.0 | 2000 | 0.3075 | 0.886 | 0.1831 | 0.9279 | 0.886 | 0.8850 | 0.0744 | 0.0227 | | 0.0023 | 5.0 | 2500 | 0.2491 | 0.8908 | 0.1791 | 0.9302 | 0.8907 | 0.8916 | 0.0728 | 0.0212 | | 0.0014 | 6.0 | 3000 | 0.3067 | 0.8925 | 0.1795 | 0.8631 | 0.8925 | 0.8929 | 0.0752 | 0.0215 | | 0.0012 | 7.0 | 3500 | 0.3277 | 0.8925 | 0.1812 | 0.8729 | 0.8925 | 0.8922 | 0.0764 | 0.0218 | | 0.0009 | 8.0 | 4000 | 0.3386 | 0.895 | 
0.1797 | 0.8406 | 0.895 | 0.8951 | 0.0760 | 0.0219 | | 0.0007 | 9.0 | 4500 | 0.3383 | 0.8968 | 0.1808 | 0.8293 | 0.8968 | 0.8969 | 0.0747 | 0.0220 | | 0.0006 | 10.0 | 5000 | 0.3439 | 0.8962 | 0.1805 | 0.8184 | 0.8962 | 0.8963 | 0.0767 | 0.0220 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.2.0.dev20231002 - Datasets 2.7.1 - Tokenizers 0.13.3
[ "letter", "form", "email", "handwritten", "advertisement", "scientific_report", "scientific_publication", "specification", "file_folder", "news_article", "budget", "invoice", "presentation", "questionnaire", "resume", "memo" ]
bdpc/vit-base_rvl_cdip-N1K_AURC_16
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base_rvl_cdip-N1K_AURC_16 This model is a fine-tuned version of [jordyvl/vit-base_rvl-cdip](https://huggingface.co/jordyvl/vit-base_rvl-cdip) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2895 - Accuracy: 0.8925 - Brier Loss: 0.1833 - Nll: 0.8632 - F1 Micro: 0.8925 - F1 Macro: 0.8927 - Ece: 0.0768 - Aurc: 0.0218 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:| | 0.0448 | 1.0 | 1000 | 0.1956 | 0.8758 | 0.1900 | 1.1701 | 0.8758 | 0.8769 | 0.0566 | 0.0252 | | 0.0381 | 2.0 | 2000 | 0.2463 | 0.8715 | 0.1989 | 1.1688 | 0.8715 | 0.8716 | 0.0715 | 0.0261 | | 0.0136 | 3.0 | 3000 | 0.2947 | 0.87 | 0.2081 | 1.0890 | 0.87 | 0.8693 | 0.0752 | 0.0271 | | 0.0092 | 4.0 | 4000 | 0.2718 | 0.881 | 0.1901 | 1.0230 | 0.881 | 0.8811 | 0.0759 | 0.0253 | | 0.0048 | 5.0 | 5000 | 0.2823 | 0.8812 | 0.1934 | 0.9914 | 0.8812 | 0.8814 | 0.0777 | 0.0238 | | 0.0045 | 6.0 | 6000 | 0.2555 | 0.8855 | 0.1889 | 0.9305 | 0.8855 | 0.8861 | 0.0768 | 0.0223 | | 0.0022 | 7.0 | 7000 | 0.2754 | 0.886 | 0.1873 | 0.8958 | 0.886 | 0.8860 | 0.0804 | 0.0221 | | 0.0019 | 8.0 | 8000 | 0.2784 | 0.8858 | 
0.1914 | 0.9248 | 0.8858 | 0.8866 | 0.0796 | 0.0229 | | 0.0008 | 9.0 | 9000 | 0.2855 | 0.8878 | 0.1885 | 0.8671 | 0.8878 | 0.8876 | 0.0809 | 0.0226 | | 0.0005 | 10.0 | 10000 | 0.2895 | 0.8925 | 0.1833 | 0.8632 | 0.8925 | 0.8927 | 0.0768 | 0.0218 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.2.0.dev20231002 - Datasets 2.7.1 - Tokenizers 0.13.3
[ "letter", "form", "email", "handwritten", "advertisement", "scientific_report", "scientific_publication", "specification", "file_folder", "news_article", "budget", "invoice", "presentation", "questionnaire", "resume", "memo" ]
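The cards in this collection report Brier loss and ECE alongside accuracy. As a minimal self-contained sketch (not the Trainer's own implementation), Brier loss is the mean squared error between the predicted probability vector and the one-hot label, and ECE bins predictions by confidence and averages the gap between mean confidence and accuracy, weighted by bin size:

```python
def brier_loss(probs, labels):
    """Mean squared error between predicted probability vectors and one-hot labels."""
    total = 0.0
    for p, y in zip(probs, labels):
        total += sum((pi - (1.0 if i == y else 0.0)) ** 2 for i, pi in enumerate(p))
    return total / len(probs)

def ece(probs, labels, n_bins=10):
    """Expected Calibration Error over equal-width confidence bins."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        conf = max(p)                      # confidence = top predicted probability
        pred = p.index(conf)               # predicted class
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, 1.0 if pred == y else 0.0))
    n = len(probs)
    total = 0.0
    for b in bins:
        if b:
            avg_conf = sum(c for c, _ in b) / len(b)
            acc = sum(a for _, a in b) / len(b)
            total += len(b) / n * abs(avg_conf - acc)
    return total
```

Both metrics are computed over the full evaluation set, which is why a well-calibrated model can have low Brier loss even at modest accuracy.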
bdpc/vit-base_rvl_cdip-N1K_ce_16
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base_rvl_cdip-N1K_ce_16 This model is a fine-tuned version of [jordyvl/vit-base_rvl-cdip](https://huggingface.co/jordyvl/vit-base_rvl-cdip) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6681 - Accuracy: 0.89 - Brier Loss: 0.2001 - Nll: 0.9073 - F1 Micro: 0.89 - F1 Macro: 0.8905 - Ece: 0.0923 - Aurc: 0.0219 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:| | 0.209 | 1.0 | 1000 | 0.4595 | 0.8775 | 0.1885 | 1.1949 | 0.8775 | 0.8784 | 0.0616 | 0.0237 | | 0.1707 | 2.0 | 2000 | 0.4835 | 0.881 | 0.1887 | 1.1366 | 0.881 | 0.8803 | 0.0720 | 0.0237 | | 0.0893 | 3.0 | 3000 | 0.5434 | 0.8808 | 0.1991 | 1.0313 | 0.8808 | 0.8805 | 0.0830 | 0.0237 | | 0.0442 | 4.0 | 4000 | 0.5746 | 0.8845 | 0.1964 | 0.9971 | 0.8845 | 0.8850 | 0.0858 | 0.0234 | | 0.0176 | 5.0 | 5000 | 0.6168 | 0.8802 | 0.2062 | 1.0035 | 0.8802 | 0.8799 | 0.0935 | 0.0241 | | 0.0098 | 6.0 | 6000 | 0.6533 | 0.882 | 0.2074 | 0.9667 | 0.882 | 0.8829 | 0.0953 | 0.0237 | | 0.0066 | 7.0 | 7000 | 0.6557 | 0.8838 | 0.2041 | 0.9568 | 0.8838 | 0.8833 | 0.0942 | 0.0235 | | 0.0049 | 8.0 | 8000 | 0.6557 | 0.8878 | 
0.1995 | 0.9076 | 0.8878 | 0.8883 | 0.0934 | 0.0220 | | 0.0027 | 9.0 | 9000 | 0.6693 | 0.8882 | 0.2024 | 0.9127 | 0.8882 | 0.8888 | 0.0939 | 0.0222 | | 0.0031 | 10.0 | 10000 | 0.6681 | 0.89 | 0.2001 | 0.9073 | 0.89 | 0.8905 | 0.0923 | 0.0219 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.2.0.dev20231002 - Datasets 2.7.1 - Tokenizers 0.13.3
[ "letter", "form", "email", "handwritten", "advertisement", "scientific_report", "scientific_publication", "specification", "file_folder", "news_article", "budget", "invoice", "presentation", "questionnaire", "resume", "memo" ]
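Each run above uses `lr_scheduler_type: linear` with `lr_scheduler_warmup_ratio: 0.1`. A hedged sketch of that schedule, assuming the common linear-warmup-then-linear-decay shape (the `linear_schedule_lr` helper is hypothetical, not a library API):

```python
def linear_schedule_lr(base_lr, step, total_steps, warmup_ratio=0.1):
    """Linear warmup from 0 to base_lr over the first warmup_ratio of training,
    then linear decay back to 0 by the final step."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
```

With `learning_rate: 2e-05` and 10000 steps, the peak rate is reached at step 1000 and decays to zero by step 10000.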
bdpc/vit-base_rvl_cdip-N1K_AURC_8
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base_rvl_cdip-N1K_AURC_8 This model is a fine-tuned version of [jordyvl/vit-base_rvl-cdip](https://huggingface.co/jordyvl/vit-base_rvl-cdip) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3096 - Accuracy: 0.883 - Brier Loss: 0.2014 - Nll: 0.9150 - F1 Micro: 0.883 - F1 Macro: 0.8832 - Ece: 0.0891 - Aurc: 0.0256 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:| | 0.0736 | 1.0 | 2000 | 0.2385 | 0.8592 | 0.2172 | 1.2230 | 0.8592 | 0.8608 | 0.0734 | 0.0313 | | 0.0594 | 2.0 | 4000 | 0.2561 | 0.8712 | 0.2047 | 1.2297 | 0.8713 | 0.8716 | 0.0678 | 0.0283 | | 0.0421 | 3.0 | 6000 | 0.2432 | 0.867 | 0.2104 | 1.1813 | 0.867 | 0.8679 | 0.0749 | 0.0303 | | 0.0256 | 4.0 | 8000 | 0.2882 | 0.8632 | 0.2199 | 1.1103 | 0.8632 | 0.8635 | 0.0847 | 0.0310 | | 0.0147 | 5.0 | 10000 | 0.4246 | 0.8515 | 0.2466 | 1.1118 | 0.8515 | 0.8489 | 0.1059 | 0.0360 | | 0.0105 | 6.0 | 12000 | 0.2747 | 0.8668 | 0.2220 | 1.0335 | 0.8668 | 0.8691 | 0.0986 | 0.0278 | | 0.004 | 7.0 | 14000 | 0.2954 | 0.878 | 0.2034 | 0.9467 | 0.878 | 0.8783 | 0.0865 | 0.0264 | | 0.0034 | 8.0 | 16000 | 0.3339 | 0.8708 | 
0.2185 | 0.9551 | 0.8708 | 0.8713 | 0.0969 | 0.0286 | | 0.0017 | 9.0 | 18000 | 0.3125 | 0.8748 | 0.2099 | 0.9454 | 0.8748 | 0.8761 | 0.0953 | 0.0265 | | 0.0009 | 10.0 | 20000 | 0.3096 | 0.883 | 0.2014 | 0.9150 | 0.883 | 0.8832 | 0.0891 | 0.0256 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.2.0.dev20231002 - Datasets 2.7.1 - Tokenizers 0.13.3
[ "letter", "form", "email", "handwritten", "advertisement", "scientific_report", "scientific_publication", "specification", "file_folder", "news_article", "budget", "invoice", "presentation", "questionnaire", "resume", "memo" ]
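The `Aurc` column is the area under the risk-coverage curve: rank predictions by confidence, then average the error rate of the top-k most-confident predictions over all coverage levels k. A minimal sketch under that assumption (a simple discrete average, not necessarily the exact estimator used in these runs):

```python
def aurc(confidences, correct):
    """Area under the risk-coverage curve: mean error rate of the
    top-k most-confident predictions, averaged over all k."""
    order = sorted(range(len(confidences)), key=lambda i: -confidences[i])
    errors = 0
    risks = []
    for k, i in enumerate(order, start=1):
        errors += 0 if correct[i] else 1   # cumulative errors among top-k
        risks.append(errors / k)           # risk at coverage k / n
    return sum(risks) / len(risks)
```

Lower is better: a model whose mistakes sit at the low-confidence end of the ranking gets a small AURC even at the same accuracy.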
bdpc/vit-base_rvl_cdip-N1K_ce_8
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base_rvl_cdip-N1K_ce_8 This model is a fine-tuned version of [jordyvl/vit-base_rvl-cdip](https://huggingface.co/jordyvl/vit-base_rvl-cdip) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8308 - Accuracy: 0.8822 - Brier Loss: 0.2169 - Nll: 0.9246 - F1 Micro: 0.8822 - F1 Macro: 0.8823 - Ece: 0.1044 - Aurc: 0.0265 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:| | 0.2906 | 1.0 | 2000 | 0.5302 | 0.867 | 0.2106 | 1.2043 | 0.867 | 0.8685 | 0.0746 | 0.0284 | | 0.236 | 2.0 | 4000 | 0.5819 | 0.8695 | 0.2142 | 1.1215 | 0.8695 | 0.8688 | 0.0909 | 0.0267 | | 0.1236 | 3.0 | 6000 | 0.7115 | 0.8605 | 0.2390 | 1.1453 | 0.8605 | 0.8604 | 0.1069 | 0.0295 | | 0.0703 | 4.0 | 8000 | 0.6965 | 0.8715 | 0.2265 | 1.0124 | 0.8715 | 0.8720 | 0.1015 | 0.0290 | | 0.0307 | 5.0 | 10000 | 0.7503 | 0.8742 | 0.2229 | 0.9824 | 0.8742 | 0.8746 | 0.1052 | 0.0257 | | 0.0229 | 6.0 | 12000 | 0.8042 | 0.874 | 0.2304 | 1.0125 | 0.874 | 0.8742 | 0.1091 | 0.0269 | | 0.0114 | 7.0 | 14000 | 0.8335 | 0.8715 | 0.2283 | 1.0146 | 0.8715 | 0.8709 | 0.1103 | 0.0267 | | 0.0082 | 8.0 | 16000 | 0.8655 | 0.873 | 
0.2297 | 1.0222 | 0.8730 | 0.8735 | 0.1112 | 0.0279 | | 0.002 | 9.0 | 18000 | 0.8350 | 0.8808 | 0.2180 | 0.9519 | 0.8808 | 0.8812 | 0.1067 | 0.0266 | | 0.0041 | 10.0 | 20000 | 0.8308 | 0.8822 | 0.2169 | 0.9246 | 0.8822 | 0.8823 | 0.1044 | 0.0265 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.2.0.dev20231002 - Datasets 2.7.1 - Tokenizers 0.13.3
[ "letter", "form", "email", "handwritten", "advertisement", "scientific_report", "scientific_publication", "specification", "file_folder", "news_article", "budget", "invoice", "presentation", "questionnaire", "resume", "memo" ]
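The cards report both F1 Micro and F1 Macro. A sketch of the difference: micro-F1 aggregates true positives globally (and equals plain accuracy for single-label multiclass tasks), while macro-F1 averages per-class F1 so that rare classes count equally:

```python
def f1_scores(y_true, y_pred, n_classes):
    """Return (micro_f1, macro_f1) for single-label multiclass predictions."""
    per_class = []
    tp_all = 0
    for c in range(n_classes):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        tp_all += tp
        denom = 2 * tp + fp + fn
        per_class.append(2 * tp / denom if denom else 0.0)
    micro = tp_all / len(y_true)  # micro-F1 == accuracy for single-label tasks
    macro = sum(per_class) / n_classes
    return micro, macro
```

On the balanced 1K-per-class RVL-CDIP subsets used here the two values are nearly identical, which matches the tables above.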
galbitang/autotrain-bed_frame_1021-96393146649
# Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 96393146649 - CO2 Emissions (in grams): 0.3812 ## Validation Metrics - Loss: 0.224 - Accuracy: 0.926 - Macro F1: 0.925 - Micro F1: 0.926 - Weighted F1: 0.927 - Macro Precision: 0.917 - Micro Precision: 0.926 - Weighted Precision: 0.928 - Macro Recall: 0.934 - Micro Recall: 0.926 - Weighted Recall: 0.926
[ "casual", "classic", "modern", "natural", "romantic" ]
galbitang/autotrain-lamp_1021-96396146650
# Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 96396146650 - CO2 Emissions (in grams): 2.4774 ## Validation Metrics - Loss: 0.402 - Accuracy: 0.881 - Macro F1: 0.805 - Micro F1: 0.881 - Weighted F1: 0.873 - Macro Precision: 0.884 - Micro Precision: 0.881 - Weighted Precision: 0.881 - Macro Recall: 0.764 - Micro Recall: 0.881 - Weighted Recall: 0.881
[ "casual", "classic", "modern", "natural", "romantic" ]
galbitang/autotrain-chair_1021-96395146651
# Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 96395146651 - CO2 Emissions (in grams): 2.3369 ## Validation Metrics - Loss: 0.364 - Accuracy: 0.857 - Macro F1: 0.839 - Micro F1: 0.857 - Weighted F1: 0.855 - Macro Precision: 0.876 - Micro Precision: 0.857 - Weighted Precision: 0.860 - Macro Recall: 0.810 - Micro Recall: 0.857 - Weighted Recall: 0.857
[ "casual", "classic", "modern", "natural", "romantic" ]
galbitang/autotrain-sofa_1021-96392146654
# Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 96392146654 - CO2 Emissions (in grams): 2.8405 ## Validation Metrics - Loss: 0.290 - Accuracy: 0.905 - Macro F1: 0.892 - Micro F1: 0.905 - Weighted F1: 0.905 - Macro Precision: 0.905 - Micro Precision: 0.905 - Weighted Precision: 0.906 - Macro Recall: 0.881 - Micro Recall: 0.905 - Weighted Recall: 0.905
[ "casual", "classic", "modern", "natural", "romantic" ]
galbitang/autotrain-table_1021_2-96399146655
# Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 96399146655 - CO2 Emissions (in grams): 0.3221 ## Validation Metrics - Loss: 0.552 - Accuracy: 0.827 - Macro F1: 0.789 - Micro F1: 0.827 - Weighted F1: 0.823 - Macro Precision: 0.866 - Micro Precision: 0.827 - Weighted Precision: 0.833 - Macro Recall: 0.750 - Micro Recall: 0.827 - Weighted Recall: 0.827
[ "casual", "classic", "modern", "natural", "romantic" ]
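The AutoTrain cards above list macro, micro, and weighted variants of each validation metric. A sketch of the three averaging modes, using recall as the example (plain Python, not AutoTrain's own evaluation code):

```python
def recall_averages(y_true, y_pred, n_classes):
    """Macro, micro, and support-weighted recall for multiclass predictions."""
    per_class, support = [], []
    for c in range(n_classes):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        n_c = sum(1 for t in y_true if t == c)     # support of class c
        per_class.append(tp / n_c if n_c else 0.0)
        support.append(n_c)
    macro = sum(per_class) / n_classes             # every class weighted equally
    micro = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    weighted = sum(r * s for r, s in zip(per_class, support)) / len(y_true)
    return macro, micro, weighted
```

A macro value well below the micro value, as in several cards above, indicates the model does worse on the rarer style classes.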
bdpc/vit-base_rvl_cdip-N1K_AURC_4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base_rvl_cdip-N1K_AURC_4 This model is a fine-tuned version of [jordyvl/vit-base_rvl-cdip](https://huggingface.co/jordyvl/vit-base_rvl-cdip) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2768 - Accuracy: 0.8738 - Brier Loss: 0.2167 - Nll: 0.9821 - F1 Micro: 0.8738 - F1 Macro: 0.8749 - Ece: 0.0970 - Aurc: 0.0292 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:| | 0.1764 | 1.0 | 4000 | 0.3808 | 0.8217 | 0.2750 | 1.2675 | 0.8217 | 0.8194 | 0.1016 | 0.0461 | | 0.1131 | 2.0 | 8000 | 0.3321 | 0.8413 | 0.2583 | 1.3120 | 0.8413 | 0.8421 | 0.0949 | 0.0418 | | 0.113 | 3.0 | 12000 | 0.3781 | 0.8207 | 0.2910 | 1.4889 | 0.8207 | 0.8213 | 0.1162 | 0.0496 | | 0.0814 | 4.0 | 16000 | 0.4793 | 0.8157 | 0.3036 | 1.4208 | 0.8157 | 0.8151 | 0.1302 | 0.0552 | | 0.0542 | 5.0 | 20000 | 0.2914 | 0.8658 | 0.2279 | 1.1541 | 0.8658 | 0.8657 | 0.0955 | 0.0320 | | 0.0238 | 6.0 | 24000 | 0.3059 | 0.8568 | 0.2401 | 1.1686 | 0.8568 | 0.8581 | 0.1012 | 0.0354 | | 0.0197 | 7.0 | 28000 | 0.3077 | 0.8545 | 0.2390 | 1.1659 | 0.8545 | 0.8553 | 0.1059 | 0.0354 | | 0.0116 | 8.0 | 32000 | 0.3169 | 
0.8705 | 0.2172 | 1.0323 | 0.8705 | 0.8704 | 0.0918 | 0.0314 | | 0.0054 | 9.0 | 36000 | 0.2850 | 0.8738 | 0.2199 | 1.0171 | 0.8738 | 0.8747 | 0.0960 | 0.0302 | | 0.0128 | 10.0 | 40000 | 0.2768 | 0.8738 | 0.2167 | 0.9821 | 0.8738 | 0.8749 | 0.0970 | 0.0292 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.2.0.dev20231002 - Datasets 2.7.1 - Tokenizers 0.13.3
[ "letter", "form", "email", "handwritten", "advertisement", "scientific_report", "scientific_publication", "specification", "file_folder", "news_article", "budget", "invoice", "presentation", "questionnaire", "resume", "memo" ]
bdpc/vit-base_rvl_cdip-N1K_ce_4
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base_rvl_cdip-N1K_ce_4 This model is a fine-tuned version of [jordyvl/vit-base_rvl-cdip](https://huggingface.co/jordyvl/vit-base_rvl-cdip) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9480 - Accuracy: 0.8792 - Brier Loss: 0.2240 - Nll: 1.0075 - F1 Micro: 0.8793 - F1 Macro: 0.8794 - Ece: 0.1101 - Aurc: 0.0274 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:| | 0.4172 | 1.0 | 4000 | 0.6321 | 0.8475 | 0.2427 | 1.1862 | 0.8475 | 0.8484 | 0.0957 | 0.0352 | | 0.3421 | 2.0 | 8000 | 0.6729 | 0.8645 | 0.2301 | 1.1766 | 0.8645 | 0.8642 | 0.1020 | 0.0295 | | 0.2091 | 3.0 | 12000 | 0.7988 | 0.854 | 0.2563 | 1.1608 | 0.854 | 0.8555 | 0.1183 | 0.0352 | | 0.1319 | 4.0 | 16000 | 0.8683 | 0.861 | 0.2503 | 1.1575 | 0.861 | 0.8617 | 0.1188 | 0.0354 | | 0.0673 | 5.0 | 20000 | 0.9057 | 0.8642 | 0.2479 | 1.1524 | 0.8643 | 0.8635 | 0.1195 | 0.0314 | | 0.0333 | 6.0 | 24000 | 0.9553 | 0.8605 | 0.2524 | 1.1006 | 0.8605 | 0.8600 | 0.1226 | 0.0366 | | 0.0223 | 7.0 | 28000 | 0.9393 | 0.8708 | 0.2350 | 1.1027 | 0.8708 | 0.8713 | 0.1159 | 0.0274 | | 0.0194 | 8.0 | 32000 | 1.0108 | 0.8705 
| 0.2407 | 1.0850 | 0.8705 | 0.8704 | 0.1169 | 0.0309 | | 0.0015 | 9.0 | 36000 | 0.9412 | 0.876 | 0.2291 | 1.0136 | 0.8760 | 0.8763 | 0.1123 | 0.0270 | | 0.004 | 10.0 | 40000 | 0.9480 | 0.8792 | 0.2240 | 1.0075 | 0.8793 | 0.8794 | 0.1101 | 0.0274 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.2.0.dev20231002 - Datasets 2.7.1 - Tokenizers 0.13.3
[ "letter", "form", "email", "handwritten", "advertisement", "scientific_report", "scientific_publication", "specification", "file_folder", "news_article", "budget", "invoice", "presentation", "questionnaire", "resume", "memo" ]
khleeloo/vit-focal-skin
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-focal-skin This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5830 - Accuracy: 0.8497 - F1: 0.8472 - Precision: 0.8527 - Recall: 0.8497 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.1586 | 1.0 | 626 | 0.3295 | 0.8808 | 0.8764 | 0.9007 | 0.8808 | | 0.096 | 2.0 | 1252 | 0.4315 | 0.8601 | 0.8562 | 0.8600 | 0.8601 | | 0.0181 | 3.0 | 1878 | 0.4395 | 0.8756 | 0.8685 | 0.8799 | 0.8756 | | 0.0058 | 4.0 | 2504 | 0.5563 | 0.8549 | 0.8571 | 0.8653 | 0.8549 | | 0.0004 | 5.0 | 3130 | 0.6044 | 0.8653 | 0.8619 | 0.8688 | 0.8653 | | 0.0003 | 6.0 | 3756 | 0.5830 | 0.8497 | 0.8472 | 0.8527 | 0.8497 | ### Framework versions - Transformers 4.29.2 - Pytorch 1.13.1 - Datasets 2.14.5 - Tokenizers 0.13.3
[ "mel", "nv", "bcc", "akiec", "bkl", "df", "vasc" ]
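The model name suggests fine-tuning with a focal loss, though the card itself does not state the loss function — treat this as an assumption. A minimal sketch of focal loss, which down-weights easy (high-probability) examples and reduces to ordinary cross-entropy at `gamma=0`:

```python
import math

def focal_loss(probs, labels, gamma=2.0):
    """Mean focal loss -(1 - p_t)**gamma * log(p_t) over a batch.
    gamma=0 recovers plain cross-entropy."""
    total = 0.0
    for p, y in zip(probs, labels):
        pt = p[y]                               # probability of the true class
        total += -((1.0 - pt) ** gamma) * math.log(pt)
    return total / len(probs)
```

Down-weighting easy examples is a common choice for skin-lesion datasets like HAM10000, whose classes (the seven labels below) are heavily imbalanced.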
SeyedAli/Remote-Sensing-UAV-image-classification
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Remote-Sensing-UAV-image-classification This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the [jonathan-roberts1/RSSCN7](https://huggingface.co/datasets/jonathan-roberts1/RSSCN7) dataset. It achieves the following results on the evaluation set: - Loss: 0.0593 - Accuracy: 0.9907 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3922 | 0.71 | 100 | 0.4227 | 0.8821 | | 0.2986 | 1.43 | 200 | 0.3142 | 0.9089 | | 0.1109 | 2.14 | 300 | 0.2056 | 0.9518 | | 0.0864 | 2.86 | 400 | 0.2472 | 0.9375 | | 0.0193 | 3.57 | 500 | 0.0593 | 0.9907 | ### Framework versions - Transformers 4.34.1 - Pytorch 2.1.0+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
[ "field", "forest", "grass", "industry", "parking", "resident", "river or lake" ]
JLB-JLB/Model_folder
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Model_folder This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0171 - Matthews Correlation: 0.9888 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.0488 | 0.91 | 30 | 0.1366 | 0.9449 | | 0.0077 | 1.82 | 60 | 0.0508 | 0.9775 | | 0.0057 | 2.73 | 90 | 0.0366 | 0.9888 | | 0.0042 | 3.64 | 120 | 0.0171 | 0.9888 | ### Framework versions - Transformers 4.34.1 - Pytorch 2.1.0+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
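This card tracks Matthews correlation, which stays informative under class imbalance: +1 is perfect prediction, 0 is chance level, -1 is total disagreement. A sketch for the binary case (the beans task itself is three-class; the multiclass generalization follows the same idea):

```python
import math

def mcc_binary(y_true, y_pred):
    """Matthews correlation coefficient for binary 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

An MCC of 0.9888, as reported above, therefore implies near-perfect agreement across all classes, not just the majority one.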
arslanafzal/birds_transform_full
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # birds_transform_full This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Accuracy: 0.7303 - Loss: 1.4588 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:------:|:--------:|:---------------:| | 5.6427 | 1.0 | 1984 | 0.4519 | 5.2504 | | 4.6563 | 2.0 | 3968 | 0.5068 | 4.2749 | | 3.6656 | 3.0 | 5952 | 0.5454 | 3.3311 | | 2.7653 | 4.0 | 7936 | 0.5748 | 2.5181 | | 2.0465 | 5.0 | 9920 | 0.6300 | 1.9205 | | 1.5876 | 6.0 | 11904 | 0.6593 | 1.5696 | | 1.3174 | 7.0 | 13888 | 0.6870 | 1.3831 | | 1.1279 | 8.0 | 15872 | 0.7064 | 1.2516 | | 1.0051 | 9.0 | 17856 | 0.7067 | 1.1999 | | 0.9318 | 10.0 | 19840 | 0.7077 | 1.1631 | | 0.8294 | 11.0 | 21824 | 0.7089 | 1.1444 | | 0.7976 | 12.0 | 23808 | 0.7175 | 1.1156 | | 0.7084 | 13.0 | 25792 | 0.7218 | 1.1209 | | 0.6752 | 14.0 | 27776 | 0.7198 | 1.1032 | | 0.6641 | 15.0 | 29760 | 0.7198 | 1.1192 | | 0.6083 | 16.0 | 31744 | 0.7268 | 1.1044 | | 0.5703 | 17.0 | 33728 | 0.7248 | 1.1287 | | 0.5376 | 18.0 | 35712 | 0.7286 | 1.1115 | | 0.5073 | 19.0 | 37696 | 0.7218 | 1.1429 | | 0.5072 | 20.0 | 39680 | 0.7208 | 1.1519 | | 0.4945 | 21.0 | 41664 | 0.7228 | 1.1636 | | 0.4651 | 22.0 | 43648 | 0.7213 | 
1.1771 | | 0.4408 | 23.0 | 45632 | 0.7233 | 1.1650 | | 0.4222 | 24.0 | 47616 | 0.7157 | 1.1841 | | 0.409 | 25.0 | 49600 | 0.7145 | 1.2150 | | 0.403 | 26.0 | 51584 | 0.7152 | 1.2203 | | 0.3813 | 27.0 | 53568 | 0.7238 | 1.2064 | | 0.3756 | 28.0 | 55552 | 0.7177 | 1.2526 | | 0.365 | 29.0 | 57536 | 0.7208 | 1.2670 | | 0.3729 | 30.0 | 59520 | 0.7180 | 1.2659 | | 0.36 | 31.0 | 61504 | 0.7127 | 1.2545 | | 0.3596 | 32.0 | 63488 | 0.7182 | 1.2728 | | 0.3606 | 33.0 | 65472 | 0.7180 | 1.2886 | | 0.325 | 34.0 | 67456 | 0.7157 | 1.2929 | | 0.329 | 35.0 | 69440 | 0.7205 | 1.3074 | | 0.3431 | 36.0 | 71424 | 0.7185 | 1.3122 | | 0.3206 | 37.0 | 73408 | 0.7233 | 1.2993 | | 0.3137 | 38.0 | 75392 | 0.7220 | 1.3206 | | 0.3265 | 39.0 | 77376 | 0.7180 | 1.3246 | | 0.3332 | 40.0 | 79360 | 0.7240 | 1.3163 | | 0.3193 | 41.0 | 81344 | 0.7288 | 1.3259 | | 0.3242 | 42.0 | 83328 | 0.7215 | 1.3320 | | 0.2976 | 43.0 | 85312 | 0.7213 | 1.3283 | | 0.3191 | 44.0 | 87296 | 0.7195 | 1.3453 | | 0.3067 | 45.0 | 89280 | 0.7243 | 1.3550 | | 0.2994 | 46.0 | 91264 | 0.7240 | 1.3324 | | 0.3072 | 47.0 | 93248 | 0.7263 | 1.3412 | | 0.2932 | 48.0 | 95232 | 0.7245 | 1.3345 | | 0.2919 | 49.0 | 97216 | 0.7266 | 1.3759 | | 0.2922 | 50.0 | 99200 | 0.7225 | 1.3873 | | 0.304 | 51.0 | 101184 | 0.7235 | 1.3631 | | 0.2898 | 52.0 | 103168 | 0.7205 | 1.3819 | | 0.2773 | 53.0 | 105152 | 0.7251 | 1.3827 | | 0.2756 | 54.0 | 107136 | 0.7228 | 1.3770 | | 0.2789 | 55.0 | 109120 | 0.7248 | 1.3822 | | 0.261 | 56.0 | 111104 | 0.7263 | 1.3878 | | 0.2593 | 57.0 | 113088 | 0.7240 | 1.3955 | | 0.2801 | 58.0 | 115072 | 0.7256 | 1.3659 | | 0.2632 | 59.0 | 117056 | 0.7301 | 1.3719 | | 0.2811 | 60.0 | 119040 | 0.7321 | 1.3775 | | 0.2267 | 61.0 | 121024 | 0.7256 | 1.3689 | | 0.2676 | 62.0 | 123008 | 0.7245 | 1.4069 | | 0.2523 | 63.0 | 124992 | 0.7230 | 1.4166 | | 0.2622 | 64.0 | 126976 | 0.7296 | 1.4018 | | 0.2467 | 65.0 | 128960 | 0.7256 | 1.4287 | | 0.2504 | 66.0 | 130944 | 0.7314 | 1.4019 | | 0.2468 | 67.0 | 132928 | 0.7303 | 1.4058 | | 
0.2098 | 68.0 | 134912 | 0.7308 | 1.4093 | | 0.2382 | 69.0 | 136896 | 0.7293 | 1.4206 | | 0.2304 | 70.0 | 138880 | 0.7301 | 1.4078 | | 0.251 | 71.0 | 140864 | 0.7251 | 1.4275 | | 0.237 | 72.0 | 142848 | 0.7288 | 1.4283 | | 0.2485 | 73.0 | 144832 | 0.7281 | 1.4338 | | 0.2229 | 74.0 | 146816 | 0.7253 | 1.4386 | | 0.2472 | 75.0 | 148800 | 0.7210 | 1.4440 | | 0.2149 | 76.0 | 150784 | 0.7230 | 1.4319 | | 0.2337 | 77.0 | 152768 | 0.7261 | 1.4422 | | 0.2063 | 78.0 | 154752 | 0.7268 | 1.4456 | | 0.216 | 79.0 | 156736 | 0.7218 | 1.4426 | | 0.2249 | 80.0 | 158720 | 0.7198 | 1.4533 | | 0.2148 | 81.0 | 160704 | 0.7230 | 1.4480 | | 0.2321 | 82.0 | 162688 | 0.7273 | 1.4416 | | 0.2306 | 83.0 | 164672 | 0.7286 | 1.4392 | | 0.213 | 84.0 | 166656 | 0.7263 | 1.4609 | | 0.2202 | 85.0 | 168640 | 0.7266 | 1.4590 | | 0.206 | 86.0 | 170624 | 0.7245 | 1.4638 | | 0.1987 | 87.0 | 172608 | 0.7251 | 1.4626 | | 0.2181 | 88.0 | 174592 | 0.7261 | 1.4615 | | 0.2076 | 89.0 | 176576 | 0.7253 | 1.4665 | | 0.1999 | 90.0 | 178560 | 0.7251 | 1.4569 | | 0.2287 | 91.0 | 180544 | 0.7266 | 1.4591 | | 0.1985 | 92.0 | 182528 | 0.7263 | 1.4508 | | 0.2166 | 93.0 | 184512 | 0.7266 | 1.4621 | | 0.1943 | 94.0 | 186496 | 0.7276 | 1.4649 | | 0.2189 | 95.0 | 188480 | 0.7293 | 1.4555 | | 0.1911 | 96.0 | 190464 | 0.7306 | 1.4565 | | 0.1954 | 97.0 | 192448 | 0.7271 | 1.4624 | | 0.2053 | 98.0 | 194432 | 0.7286 | 1.4603 | | 0.2067 | 99.0 | 196416 | 0.7306 | 1.4589 | | 0.1917 | 100.0 | 198400 | 0.7303 | 1.4588 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.1.0+cu121 - Datasets 2.14.5 - Tokenizers 0.14.1
[ "acadian_flycatcher", "acorn_woodpecker", "alder_flycatcher", "allens_hummingbird", "altamira_oriole", "american_avocet", "american_bittern", "american_black_duck", "american_coot", "american_crow", "american_dipper", "american_golden_plover", "american_goldfinch", "american_kestrel", "american_oystercatcher", "american_pipit", "american_redstart", "american_robin", "american_three_toed_woodpecker", "american_tree_sparrow", "american_white_pelican", "american_wigeon", "american_woodcock", "anhinga", "annas_hummingbird", "arctic_tern", "ash_throated_flycatcher", "audubons_oriole", "bairds_sandpiper", "bald_eagle", "baltimore_oriole", "band_tailed_pigeon", "barn_swallow", "barred_owl", "barrows_goldeneye", "bay_breasted_warbler", "bells_vireo", "belted_kingfisher", "bewicks_wren", "black_guillemot", "black_oystercatcher", "black_phoebe", "black_rosy_finch", "black_scoter", "black_skimmer", "black_tern", "black_turnstone", "black_vulture", "black_and_white_warbler", "black_backed_woodpecker", "black_bellied_plover", "black_billed_cuckoo", "black_billed_magpie", "black_capped_chickadee", "black_chinned_hummingbird", "black_chinned_sparrow", "black_crested_titmouse", "black_crowned_night_heron", "black_headed_grosbeak", "black_legged_kittiwake", "black_necked_stilt", "black_throated_blue_warbler", "black_throated_gray_warbler", "black_throated_green_warbler", "black_throated_sparrow", "blackburnian_warbler", "blackpoll_warbler", "blue_grosbeak", "blue_jay", "blue_gray_gnatcatcher", "blue_headed_vireo", "blue_winged_teal", "blue_winged_warbler", "boat_tailed_grackle", "bobolink", "bohemian_waxwing", "bonapartes_gull", "boreal_chickadee", "brandts_cormorant", "brant", "brewers_blackbird", "brewers_sparrow", "bridled_titmouse", "broad_billed_hummingbird", "broad_tailed_hummingbird", "broad_winged_hawk", "bronzed_cowbird", "brown_creeper", "brown_pelican", "brown_thrasher", "brown_capped_rosy_finch", "brown_crested_flycatcher", "brown_headed_cowbird", 
"brown_headed_nuthatch", "bufflehead", "bullocks_oriole", "burrowing_owl", "bushtit", "cackling_goose", "cactus_wren", "california_gull", "california_quail", "california_thrasher", "california_towhee", "calliope_hummingbird", "canada_goose", "canada_warbler", "canvasback", "canyon_towhee", "canyon_wren", "cape_may_warbler", "carolina_chickadee", "carolina_wren", "caspian_tern", "cassins_finch", "cassins_kingbird", "cassins_sparrow", "cassins_vireo", "cattle_egret", "cave_swallow", "cedar_waxwing", "cerulean_warbler", "chestnut_backed_chickadee", "chestnut_collared_longspur", "chestnut_sided_warbler", "chihuahuan_raven", "chimney_swift", "chipping_sparrow", "cinnamon_teal", "clapper_rail", "clarks_grebe", "clarks_nutcracker", "clay_colored_sparrow", "cliff_swallow", "common_black_hawk", "common_eider", "common_gallinule", "common_goldeneye", "common_grackle", "common_ground_dove", "common_loon", "common_merganser", "common_murre", "common_nighthawk", "common_raven", "common_redpoll", "common_tern", "common_yellowthroat", "connecticut_warbler", "coopers_hawk", "cordilleran_flycatcher", "costas_hummingbird", "couchs_kingbird", "crested_caracara", "curve_billed_thrasher", "dark_eyed_junco", "dickcissel", "double_crested_cormorant", "downy_woodpecker", "dunlin", "dusky_flycatcher", "dusky_grouse", "eared_grebe", "eastern_bluebird", "eastern_kingbird", "eastern_meadowlark", "eastern_phoebe", "eastern_screech_owl", "eastern_towhee", "eastern_wood_pewee", "elegant_trogon", "elf_owl", "eurasian_collared_dove", "eurasian_wigeon", "european_starling", "evening_grosbeak", "ferruginous_hawk", "ferruginous_pygmy_owl", "field_sparrow", "fish_crow", "florida_scrub_jay", "forsters_tern", "fox_sparrow", "franklins_gull", "fulvous_whistling_duck", "gadwall", "gambels_quail", "gila_woodpecker", "glaucous_gull", "glaucous_winged_gull", "glossy_ibis", "golden_eagle", "golden_crowned_kinglet", "golden_crowned_sparrow", "golden_fronted_woodpecker", "golden_winged_warbler", 
"grasshopper_sparrow", "gray_catbird", "gray_flycatcher", "gray_jay", "gray_kingbird", "gray_cheeked_thrush", "gray_crowned_rosy_finch", "great_black_backed_gull", "great_blue_heron", "great_cormorant", "great_crested_flycatcher", "great_egret", "great_gray_owl", "great_horned_owl", "great_kiskadee", "great_tailed_grackle", "greater_prairie_chicken", "greater_roadrunner", "greater_sage_grouse", "greater_scaup", "greater_white_fronted_goose", "greater_yellowlegs", "green_jay", "green_tailed_towhee", "green_winged_teal", "groove_billed_ani", "gull_billed_tern", "hairy_woodpecker", "hammonds_flycatcher", "harlequin_duck", "harriss_hawk", "harriss_sparrow", "heermanns_gull", "henslows_sparrow", "hepatic_tanager", "hermit_thrush", "herring_gull", "hoary_redpoll", "hooded_merganser", "hooded_oriole", "hooded_warbler", "horned_grebe", "horned_lark", "house_finch", "house_sparrow", "house_wren", "huttons_vireo", "iceland_gull", "inca_dove", "indigo_bunting", "killdeer", "king_rail", "ladder_backed_woodpecker", "lapland_longspur", "lark_bunting", "lark_sparrow", "laughing_gull", "lazuli_bunting", "le_contes_sparrow", "least_bittern", "least_flycatcher", "least_grebe", "least_sandpiper", "least_tern", "lesser_goldfinch", "lesser_nighthawk", "lesser_scaup", "lesser_yellowlegs", "lewiss_woodpecker", "limpkin", "lincolns_sparrow", "little_blue_heron", "loggerhead_shrike", "long_billed_curlew", "long_billed_dowitcher", "long_billed_thrasher", "long_eared_owl", "long_tailed_duck", "louisiana_waterthrush", "magnificent_frigatebird", "magnolia_warbler", "mallard", "marbled_godwit", "marsh_wren", "merlin", "mew_gull", "mexican_jay", "mississippi_kite", "monk_parakeet", "mottled_duck", "mountain_bluebird", "mountain_chickadee", "mountain_plover", "mourning_dove", "mourning_warbler", "muscovy_duck", "mute_swan", "nashville_warbler", "nelsons_sparrow", "neotropic_cormorant", "northern_bobwhite", "northern_cardinal", "northern_flicker", "northern_gannet", "northern_goshawk", 
"northern_harrier", "northern_hawk_owl", "northern_mockingbird", "northern_parula", "northern_pintail", "northern_rough_winged_swallow", "northern_saw_whet_owl", "northern_shrike", "northern_waterthrush", "nuttalls_woodpecker", "oak_titmouse", "olive_sparrow", "olive_sided_flycatcher", "orange_crowned_warbler", "orchard_oriole", "osprey", "ovenbird", "pacific_golden_plover", "pacific_loon", "pacific_wren", "pacific_slope_flycatcher", "painted_bunting", "painted_redstart", "palm_warbler", "pectoral_sandpiper", "peregrine_falcon", "phainopepla", "philadelphia_vireo", "pied_billed_grebe", "pigeon_guillemot", "pileated_woodpecker", "pine_grosbeak", "pine_siskin", "pine_warbler", "piping_plover", "plumbeous_vireo", "prairie_falcon", "prairie_warbler", "prothonotary_warbler", "purple_finch", "purple_gallinule", "purple_martin", "purple_sandpiper", "pygmy_nuthatch", "pyrrhuloxia", "red_crossbill", "red_knot", "red_phalarope", "red_bellied_woodpecker", "red_breasted_merganser", "red_breasted_nuthatch", "red_breasted_sapsucker", "red_cockaded_woodpecker", "red_eyed_vireo", "red_headed_woodpecker", "red_naped_sapsucker", "red_necked_grebe", "red_necked_phalarope", "red_shouldered_hawk", "red_tailed_hawk", "red_throated_loon", "red_winged_blackbird", "reddish_egret", "redhead", "ring_billed_gull", "ring_necked_duck", "ring_necked_pheasant", "rock_pigeon", "rock_ptarmigan", "rock_sandpiper", "rock_wren", "rose_breasted_grosbeak", "roseate_tern", "rosss_goose", "rough_legged_hawk", "royal_tern", "ruby_crowned_kinglet", "ruby_throated_hummingbird", "ruddy_duck", "ruddy_turnstone", "ruffed_grouse", "rufous_hummingbird", "rufous_crowned_sparrow", "rusty_blackbird", "sage_thrasher", "saltmarsh_sparrow", "sanderling", "sandhill_crane", "sandwich_tern", "says_phoebe", "scaled_quail", "scarlet_tanager", "scissor_tailed_flycatcher", "scotts_oriole", "seaside_sparrow", "sedge_wren", "semipalmated_plover", "semipalmated_sandpiper", "sharp_shinned_hawk", "sharp_tailed_grouse", 
"short_billed_dowitcher", "short_eared_owl", "snail_kite", "snow_bunting", "snow_goose", "snowy_egret", "snowy_owl", "snowy_plover", "solitary_sandpiper", "song_sparrow", "sooty_grouse", "sora", "spotted_owl", "spotted_sandpiper", "spotted_towhee", "spruce_grouse", "stellers_jay", "stilt_sandpiper", "summer_tanager", "surf_scoter", "surfbird", "swainsons_hawk", "swainsons_thrush", "swallow_tailed_kite", "swamp_sparrow", "tennessee_warbler", "thayers_gull", "townsends_solitaire", "townsends_warbler", "tree_swallow", "tricolored_heron", "tropical_kingbird", "trumpeter_swan", "tufted_titmouse", "tundra_swan", "turkey_vulture", "upland_sandpiper", "varied_thrush", "veery", "verdin", "vermilion_flycatcher", "vesper_sparrow", "violet_green_swallow", "virginia_rail", "wandering_tattler", "warbling_vireo", "western_bluebird", "western_grebe", "western_gull", "western_kingbird", "western_meadowlark", "western_sandpiper", "western_screech_owl", "western_scrub_jay", "western_tanager", "western_wood_pewee", "whimbrel", "white_ibis", "white_breasted_nuthatch", "white_crowned_sparrow", "white_eyed_vireo", "white_faced_ibis", "white_headed_woodpecker", "white_rumped_sandpiper", "white_tailed_hawk", "white_tailed_kite", "white_tailed_ptarmigan", "white_throated_sparrow", "white_throated_swift", "white_winged_crossbill", "white_winged_dove", "white_winged_scoter", "wild_turkey", "willet", "williamsons_sapsucker", "willow_flycatcher", "willow_ptarmigan", "wilsons_phalarope", "wilsons_plover", "wilsons_snipe", "wilsons_warbler", "winter_wren", "wood_stork", "wood_thrush", "worm_eating_warbler", "wrentit", "yellow_warbler", "yellow_bellied_flycatcher", "yellow_bellied_sapsucker", "yellow_billed_cuckoo", "yellow_billed_magpie", "yellow_breasted_chat", "yellow_crowned_night_heron", "yellow_eyed_junco", "yellow_headed_blackbird", "yellow_rumped_warbler", "yellow_throated_vireo", "yellow_throated_warbler", "zone_tailed_hawk" ]
aryap2/UBC-resnet-50-3eph-224
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # UBC-resnet-50-3eph-224 This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7999 - Recall: 0.6061 - Specificity: 0.8937 - Precision: 0.7089 - Npv: 0.9097 - Accuracy: 0.6860 - F1: 0.6373 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Recall | Specificity | Precision | Npv | Accuracy | F1 | |:-------------:|:-----:|:-----:|:---------------:|:------:|:-----------:|:---------:|:------:|:--------:|:------:| | 0.9759 | 1.0 | 6080 | 0.9368 | 0.5185 | 0.8740 | 0.6852 | 0.8972 | 0.6337 | 0.5423 | | 0.8617 | 2.0 | 12160 | 0.8285 | 0.5921 | 0.8910 | 0.6964 | 0.9062 | 0.6757 | 0.6221 | | 0.8362 | 3.0 | 18240 | 0.7999 | 0.6061 | 0.8937 | 0.7089 | 0.9097 | 0.6860 | 0.6373 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
[ "cc", "ec", "hgsc", "lgsc", "mc" ]
Mahendra42/swin-tiny-patch4-window7-224-finetunedRCC_Classifier
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetunedRCC_Classifier This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 6.0707 - F1: 0.0140 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.0016 | 1.0 | 155 | 5.7392 | 0.0080 | | 0.0008 | 2.0 | 310 | 5.3965 | 0.0218 | | 0.0 | 3.0 | 465 | 6.0707 | 0.0140 | ### Framework versions - Transformers 4.34.1 - Pytorch 1.12.1 - Datasets 2.14.5 - Tokenizers 0.14.1
[ "clear cell rcc", "non clear cell" ]
barten/vit-base-patch16-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-finetuned-eurosat This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.5725 - Accuracy: 0.8394 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1364 | 0.99 | 53 | 0.5924 | 0.8217 | | 0.0876 | 2.0 | 107 | 0.5917 | 0.8252 | | 0.0874 | 2.99 | 160 | 0.6156 | 0.8239 | | 0.0779 | 4.0 | 214 | 0.5792 | 0.8363 | | 0.0747 | 4.95 | 265 | 0.5725 | 0.8394 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "louis vuitton", "burberry", "chanel", "diesel", "dolce gabanna", "fendi", "gucci", "guess", "nike", "prada" ]
dima806/closed_eyes_image_detection
Returns whether an eye is open or closed, given an image of the surrounding eye region, with about 99% accuracy. See https://www.kaggle.com/code/dima806/closed-eye-image-detection-vit for more details. ``` Classification report: precision recall f1-score support closeEye 0.9921 0.9888 0.9904 4296 openEye 0.9889 0.9921 0.9905 4295 accuracy 0.9905 8591 macro avg 0.9905 0.9905 0.9905 8591 weighted avg 0.9905 0.9905 0.9905 8591 ```
[ "closeeye", "openeye" ]
barten/vit-base-patch16-224-type
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-type This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7249 - Accuracy: 0.7583 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.4991 | 0.99 | 78 | 1.2167 | 0.6019 | | 1.0157 | 1.99 | 157 | 0.8529 | 0.7083 | | 0.8163 | 3.0 | 236 | 0.7725 | 0.7287 | | 0.7916 | 4.0 | 315 | 0.7622 | 0.7343 | | 0.6525 | 4.99 | 393 | 0.7374 | 0.7361 | | 0.6159 | 5.99 | 472 | 0.7188 | 0.75 | | 0.5413 | 7.0 | 551 | 0.7029 | 0.7463 | | 0.4838 | 8.0 | 630 | 0.7254 | 0.7352 | | 0.4587 | 8.99 | 708 | 0.7219 | 0.7565 | | 0.4332 | 9.99 | 787 | 0.7077 | 0.7528 | | 0.379 | 11.0 | 866 | 0.7106 | 0.7583 | | 0.4181 | 12.0 | 945 | 0.7158 | 0.7556 | | 0.3798 | 12.99 | 1023 | 0.7234 | 0.7537 | | 0.3841 | 13.99 | 1102 | 0.7211 | 0.7556 | | 0.3464 | 14.86 | 1170 | 0.7249 | 0.7583 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "брюки", "джинсы", "пиджак", "платье", "рубашка", "свитер", "футболка", "шорты", "юбка" ]
JLB-JLB/ViT_Seizure_Detection
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT_Seizure_Detection This model is a fine-tuned version of [/content/drive/MyDrive/Seizure_EEG_Research/ViT_Seizure_Detection](https://huggingface.co//content/drive/MyDrive/Seizure_EEG_Research/ViT_Seizure_Detection) on the JLB-JLB/seizure_eeg_greyscale_224x224_6secWindow dataset. It achieves the following results on the evaluation set: - Loss: 0.1622 - Matthews Correlation: 0.4110 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:-----:|:---------------:|:--------------------:| | 0.0742 | 0.79 | 10000 | 0.2080 | 0.4431 | | 0.0409 | 1.57 | 20000 | 0.2175 | 0.4470 | | 0.0345 | 2.36 | 30000 | 0.2514 | 0.4717 | | 0.0184 | 3.14 | 40000 | 0.3040 | 0.4261 | | 0.0092 | 3.93 | 50000 | 0.3495 | 0.4389 | ### Framework versions - Transformers 4.34.1 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "seiz", "bckg" ]
Pollathorn/food_classifier
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Pollathorn/food_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.9782 - Validation Loss: 1.2511 - Train Accuracy: 0.849 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 1.9782 | 1.2511 | 0.849 | 0 | ### Framework versions - Transformers 4.34.1 - TensorFlow 2.13.0 - Datasets 2.14.5 - Tokenizers 0.14.1
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
mimunto/food_classifier
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # mimunto/food_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.9400 - Validation Loss: 1.2381 - Train Accuracy: 0.86 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 1.9400 | 1.2381 | 0.86 | 0 | ### Framework versions - Transformers 4.34.1 - TensorFlow 2.13.0 - Datasets 2.14.5 - Tokenizers 0.14.1
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
gojonumbertwo/food_classifier
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # gojonumbertwo/food_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.2045 - Validation Loss: 1.3878 - Train Accuracy: 0.839 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 2.2045 | 1.3878 | 0.839 | 0 | ### Framework versions - Transformers 4.34.1 - TensorFlow 2.13.0 - Datasets 2.14.5 - Tokenizers 0.14.1
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
KeeApichai6103/food_classifier
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # KeeApichai6103/food_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.7449 - Validation Loss: 1.6355 - Train Accuracy: 0.81 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 4000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 2.7449 | 1.6355 | 0.81 | 0 | ### Framework versions - Transformers 4.34.1 - TensorFlow 2.13.0 - Datasets 2.14.5 - Tokenizers 0.14.1
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
aikidoaikido115/food_classifier
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # aikidoaikido115/food_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.7880 - Validation Loss: 1.6485 - Train Accuracy: 0.826 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 2.7880 | 1.6485 | 0.826 | 0 | ### Framework versions - Transformers 4.34.1 - TensorFlow 2.13.0 - Datasets 2.14.5 - Tokenizers 0.14.1
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
jovanlopez32/vit_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0261 - Accuracy: 0.9925 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1441 | 3.85 | 500 | 0.0261 | 0.9925 | ### Framework versions - Transformers 4.34.1 - Pytorch 2.1.0+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "angular_leaf_spot", "bean_rust", "healthy" ]
dima806/jellyfish_types_image_detection
Returns the jellyfish type given an image. See https://www.kaggle.com/code/dima806/jellyfish-types-image-detection-vit for more details. ``` Classification report: precision recall f1-score support blue_jellyfish 1.0000 1.0000 1.0000 30 barrel_jellyfish 1.0000 1.0000 1.0000 30 mauve_stinger_jellyfish 1.0000 1.0000 1.0000 30 Moon_jellyfish 1.0000 1.0000 1.0000 30 compass_jellyfish 1.0000 1.0000 1.0000 30 lions_mane_jellyfish 1.0000 1.0000 1.0000 30 accuracy 1.0000 180 macro avg 1.0000 1.0000 1.0000 180 weighted avg 1.0000 1.0000 1.0000 180 ```
[ "blue_jellyfish", "barrel_jellyfish", "mauve_stinger_jellyfish", "moon_jellyfish", "compass_jellyfish", "lions_mane_jellyfish" ]
justinsiow/vit_101
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit_101 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset. It achieves the following results on the evaluation set: - Loss: 1.6267 - Accuracy: 0.88 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7266 | 0.99 | 62 | 2.5317 | 0.814 | | 1.8315 | 2.0 | 125 | 1.7931 | 0.864 | | 1.5845 | 2.98 | 186 | 1.6267 | 0.88 | ### Framework versions - Transformers 4.27.2 - Pytorch 2.1.0.dev20230428 - Datasets 2.10.1 - Tokenizers 0.13.2
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
aspends/coco_multiclass_classification
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # aspends/assignment_part_3 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the COCO dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0932 - Validation Loss: 0.2218 - Train Accuracy: 0.9313 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 8000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.8768 | 0.4404 | 0.9387 | 0 | | 0.3198 | 0.2664 | 0.9475 | 1 | | 0.1919 | 0.2303 | 0.9425 | 2 | | 0.1357 | 0.1959 | 0.9463 | 3 | | 0.0932 | 0.2218 | 0.9313 | 4 | ### Framework versions - Transformers 4.34.1 - TensorFlow 2.13.0 - Datasets 2.14.5 - Tokenizers 0.14.1
[ "cat", "horse", "train", "zebra" ]
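The optimizer config in the card above uses `PolynomialDecay` with `power=1.0`, which is just a straight linear ramp from the initial learning rate down to `end_learning_rate` over `decay_steps`. A small sketch of that schedule with the card's values (initial 3e-05, 8000 steps, end 0.0):

```python
# Linear learning-rate decay, mirroring the PolynomialDecay config above
# (power=1.0 makes the polynomial term a plain linear fraction).
def polynomial_decay(step: int, initial_lr: float = 3e-05,
                     decay_steps: int = 8000, end_lr: float = 0.0,
                     power: float = 1.0) -> float:
    step = min(step, decay_steps)          # schedule is flat after decay_steps
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * frac ** power + end_lr

print(polynomial_decay(0))      # 3e-05
print(polynomial_decay(4000))   # halfway: 1.5e-05
print(polynomial_decay(8000))   # 0.0
```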
ahmadmooktaree/food_classifier
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ahmadmooktaree/food_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.8192 - Validation Loss: 1.6728 - Train Accuracy: 0.825 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 4000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 2.8192 | 1.6728 | 0.825 | 0 | ### Framework versions - Transformers 4.34.1 - TensorFlow 2.14.0 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
dima806/215_mushroom_types_image_detection
Returns mushroom type given an image. See https://www.kaggle.com/code/dima806/mushroom-types-image-detection-vit for more details. ``` Classification report: precision recall f1-score support mosaic_puffball 1.0000 1.0000 1.0000 7 scarlet_elfcup 1.0000 1.0000 1.0000 7 splendid_waxcap 1.0000 0.4286 0.6000 7 tawny_grisette 0.8750 1.0000 0.9333 7 jubilee_waxcap 1.0000 1.0000 1.0000 6 king_alfreds_cakes 1.0000 0.8333 0.9091 6 heath_waxcap 0.7500 1.0000 0.8571 6 silky_rosegill 1.0000 1.0000 1.0000 6 golden_waxcap 0.4286 1.0000 0.6000 6 macro_mushroom 1.0000 0.8571 0.9231 7 spectacular_rustgill 0.7500 0.8571 0.8000 7 pink_waxcap 1.0000 1.0000 1.0000 6 brown_birch_bolete 0.8333 0.8333 0.8333 6 scaly_wood_mushroom 1.0000 1.0000 1.0000 6 stinkhorn 0.8571 1.0000 0.9231 6 blackening_brittlegill 1.0000 0.7143 0.8333 7 penny_bun 0.8571 1.0000 0.9231 6 chicken_of_the_woods 1.0000 1.0000 1.0000 7 common_bonnet 1.0000 0.7143 0.8333 7 common_rustgill 1.0000 0.8333 0.9091 6 hedgehog_fungus 1.0000 0.8333 0.9091 6 shaggy_scalycap 1.0000 0.8333 0.9091 6 dyers_mazegill 0.8571 1.0000 0.9231 6 earthballs 1.0000 1.0000 1.0000 7 purple_brittlegill 1.0000 0.8333 0.9091 6 smoky_bracket 0.7143 0.7143 0.7143 7 elfin_saddle 1.0000 1.0000 1.0000 6 shaggy_bracket 0.7778 1.0000 0.8750 7 greencracked_brittlegill 1.0000 0.6667 0.8000 6 sulphur_tuft 1.0000 1.0000 1.0000 6 warted_amanita 1.0000 0.7143 0.8333 7 white_domecap 0.7778 1.0000 0.8750 7 winter_chanterelle 1.0000 1.0000 1.0000 7 grey_knight 1.0000 0.8571 0.9231 7 pale_oyster 1.0000 0.5714 0.7273 7 medusa_mushroom 0.6667 0.8571 0.7500 7 spotted_toughshank 1.0000 1.0000 1.0000 7 dog_stinkhorn 1.0000 0.8333 0.9091 6 stubble_rosegill 1.0000 0.6667 0.8000 6 truffles 1.0000 1.0000 1.0000 6 panthercap 0.8000 0.6667 0.7273 6 vermillion_waxcap 1.0000 1.0000 1.0000 7 ascot_hat 0.8571 1.0000 0.9231 6 birch_polypore 1.0000 0.5000 0.6667 6 common_morel 0.7778 1.0000 0.8750 7 shaggy_parasol 1.0000 0.6667 0.8000 6 turkey_tail 0.6667 1.0000 0.8000 6 
the_blusher 0.6250 0.8333 0.7143 6 deathcap 0.3333 1.0000 0.5000 7 chestnut_bolete 1.0000 0.7143 0.8333 7 grey_spotted_amanita 1.0000 0.8571 0.9231 7 slender_parasol 1.0000 0.8571 0.9231 7 horn_of_plenty 1.0000 1.0000 1.0000 7 magpie_inkcap 1.0000 0.8333 0.9091 6 fools_funnel 0.8333 0.8333 0.8333 6 orange_birch_bolete 1.0000 1.0000 1.0000 6 scarlet_waxcap 0.5714 0.6667 0.6154 6 yellow_stainer 1.0000 0.6667 0.8000 6 field_mushroom 1.0000 0.8333 0.9091 6 fragrant_funnel 0.8333 0.8333 0.8333 6 spring_fieldcap 0.8333 0.7143 0.7692 7 bronze_bolete 1.0000 0.4286 0.6000 7 orange_grisette 1.0000 0.8571 0.9231 7 parasol 0.8333 0.7143 0.7692 7 trooping_funnel 1.0000 0.7143 0.8333 7 beechwood_sickener 1.0000 0.6667 0.8000 6 rosy_bonnet 0.8333 0.8333 0.8333 6 dusky_puffball 1.0000 1.0000 1.0000 7 the_miller 0.7000 1.0000 0.8235 7 white_saddle 1.0000 1.0000 1.0000 7 old_man_of_the_woods 1.0000 1.0000 1.0000 6 crimped_gill 1.0000 0.8333 0.9091 6 blushing_rosette 1.0000 1.0000 1.0000 6 pine_bolete 1.0000 1.0000 1.0000 6 brown_rollrim 1.0000 0.8333 0.9091 6 deadly_webcap 1.0000 1.0000 1.0000 7 devils_bolete 1.0000 1.0000 1.0000 6 scarlet_caterpillarclub 1.0000 1.0000 1.0000 7 red_cracking_bolete 1.0000 1.0000 1.0000 6 false_chanterelle 1.0000 0.8333 0.9091 6 woodland_inkcap 0.6667 0.8571 0.7500 7 cucumber_cap 1.0000 0.8571 0.9231 7 leccinum_albostipitatum 1.0000 1.0000 1.0000 6 fairy_ring_champignons 0.8333 0.8333 0.8333 6 rooting_bolete 0.7500 1.0000 0.8571 6 wood_blewit 0.7500 1.0000 0.8571 6 lilac_bonnet 0.8333 0.8333 0.8333 6 butter_cap 1.0000 1.0000 1.0000 7 black_bulgar 1.0000 1.0000 1.0000 7 giant_puffball 0.8571 1.0000 0.9231 6 false_deathcap 0.0000 0.0000 0.0000 6 white_fibrecap 1.0000 1.0000 1.0000 6 velvet_shank 1.0000 0.8571 0.9231 7 slippery_jack 0.5556 0.8333 0.6667 6 white_dapperling 0.6667 0.8571 0.7500 7 parrot_waxcap 1.0000 0.8333 0.9091 6 wrinkled_peach 0.8571 1.0000 0.9231 6 silverleaf_fungus 1.0000 1.0000 1.0000 7 amanita_gemmata 1.0000 1.0000 1.0000 6 
stinking_dapperling 1.0000 0.8333 0.9091 6 plums_and_custard 1.0000 0.6667 0.8000 6 peppery_bolete 0.8000 0.6667 0.7273 6 terracotta_hedgehog 0.8333 0.8333 0.8333 6 egghead_mottlegill 1.0000 1.0000 1.0000 6 bearded_milkcap 1.0000 0.8333 0.9091 6 inky_mushroom 1.0000 0.5000 0.6667 6 larch_bolete 0.8571 0.8571 0.8571 7 porcelain_fungus 0.8571 1.0000 0.9231 6 jelly_tooth 1.0000 1.0000 1.0000 6 scarletina_bolete 0.5000 1.0000 0.6667 6 yellow_foot_waxcap 1.0000 1.0000 1.0000 6 the_prince 1.0000 0.5000 0.6667 6 aniseed_funnel 1.0000 0.8333 0.9091 6 white_false_death_cap 0.5000 0.8333 0.6250 6 false_saffron_milkcap 1.0000 0.8333 0.9091 6 yellow_swamp_brittlegill 1.0000 0.8333 0.9091 6 semifree_morel 1.0000 1.0000 1.0000 7 bitter_bolete 1.0000 0.7143 0.8333 7 almond_mushroom 1.0000 1.0000 1.0000 6 shaggy_inkcap 0.8750 1.0000 0.9333 7 blushing_wood_mushroom 1.0000 0.6667 0.8000 6 common_puffball 1.0000 1.0000 1.0000 6 funeral_bell 0.7500 1.0000 0.8571 6 bay_bolete 1.0000 0.8333 0.9091 6 blackening_waxcap 1.0000 0.5714 0.7273 7 liberty_cap 0.6000 1.0000 0.7500 6 snowy_waxcap 0.6667 1.0000 0.8000 6 the_goblet 1.0000 1.0000 1.0000 7 deer_shield 1.0000 1.0000 1.0000 7 freckled_dapperling 0.6667 1.0000 0.8000 6 slimy_waxcap 0.6667 1.0000 0.8000 6 common_inkcap 0.7778 1.0000 0.8750 7 amethyst_chanterelle 0.8750 1.0000 0.9333 7 cedarwood_waxcap 0.7143 0.8333 0.7692 6 honey_fungus 1.0000 0.8571 0.9231 7 bruising_webcap 1.0000 0.4286 0.6000 7 stump_puffball 0.8571 1.0000 0.9231 6 giant_funnel 0.8333 0.8333 0.8333 6 tuberous_polypore 1.0000 0.6667 0.8000 6 poison_pie 0.8571 0.8571 0.8571 7 curry_milkcap 1.0000 1.0000 1.0000 6 amethyst_deceiver 1.0000 1.0000 1.0000 7 golden_bootleg 1.0000 0.7143 0.8333 7 clustered_domecap 1.0000 0.6667 0.8000 6 ochre_brittlegill 0.7143 0.7143 0.7143 7 blackening_polypore 1.0000 0.8333 0.9091 6 suede_bolete 1.0000 1.0000 1.0000 7 horse_mushroom 0.5455 1.0000 0.7059 6 geranium_brittlegill 0.6667 1.0000 0.8000 6 st_georges_mushroom 1.0000 0.8333 0.9091 6 
destroying_angel 0.0000 0.0000 0.0000 6 field_blewit 1.0000 0.5714 0.7273 7 cinnamon_bracket 1.0000 1.0000 1.0000 6 lions_mane 1.0000 0.8333 0.9091 6 orange_peel_fungus 1.0000 1.0000 1.0000 6 chanterelle 0.8750 1.0000 0.9333 7 the_sickener 0.8571 1.0000 0.9231 6 birch_woodwart 0.8571 1.0000 0.9231 6 pavement_mushroom 0.7500 1.0000 0.8571 6 false_morel 1.0000 1.0000 1.0000 7 oak_bolete 1.0000 0.8333 0.9091 6 poplar_fieldcap 1.0000 0.5000 0.6667 6 jelly_ears 1.0000 1.0000 1.0000 6 summer_bolete 0.6250 0.8333 0.7143 6 frosted_chanterelle 0.5714 0.6667 0.6154 6 morel 1.0000 0.8333 0.9091 6 the_deceiver 1.0000 0.8571 0.9231 7 splitgill 0.8571 1.0000 0.9231 6 ruby_bolete 0.8571 0.8571 0.8571 7 sepia_bolete 1.0000 0.5714 0.7273 7 bovine_bolete 0.8750 1.0000 0.9333 7 fly_agaric 1.0000 1.0000 1.0000 7 thimble_morel 0.8571 1.0000 0.9231 6 black_morel 0.8333 0.8333 0.8333 6 poplar_bell 1.0000 1.0000 1.0000 6 fleecy_milkcap 0.7778 1.0000 0.8750 7 golden_scalycap 0.7500 1.0000 0.8571 6 yellow_stagshorn 1.0000 1.0000 1.0000 6 oak_polypore 1.0000 0.8333 0.9091 6 weeping_widow 0.7500 0.8571 0.8000 7 meadow_waxcap 0.8750 1.0000 0.9333 7 clouded_agaric 0.7500 0.8571 0.8000 7 woolly_milkcap 0.8750 1.0000 0.9333 7 snakeskin_grisette 1.0000 0.8333 0.9091 6 hairy_curtain_crust 0.8750 1.0000 0.9333 7 lurid_bolete 1.0000 0.6667 0.8000 6 wood_mushroom 0.8571 0.8571 0.8571 7 dryads_saddle 0.8750 1.0000 0.9333 7 sheathed_woodtuft 1.0000 0.8571 0.9231 7 orange_bolete 0.6667 1.0000 0.8000 6 lilac_fibrecap 1.0000 0.8571 0.9231 7 cauliflower_fungus 1.0000 1.0000 1.0000 7 saffron_milkcap 0.7500 0.5000 0.6000 6 pestle_puffball 1.0000 0.8571 0.9231 7 red_belted_bracket 1.0000 1.0000 1.0000 6 beefsteak_fungus 1.0000 1.0000 1.0000 7 oak_mazegill 1.0000 0.4286 0.6000 7 glistening_inkcap 0.8571 0.8571 0.8571 7 tripe_fungus 1.0000 0.6667 0.8000 6 blushing_bracket 0.7143 0.7143 0.7143 7 deadly_fibrecap 0.8571 1.0000 0.9231 6 root_rot 0.5556 0.8333 0.6667 6 powdery_brittlegill 1.0000 1.0000 1.0000 6 
grisettes 0.6667 0.6667 0.6667 6 charcoal_burner 0.8333 0.7143 0.7692 7 rooting_shank 1.0000 1.0000 1.0000 6 hen_of_the_woods 0.8571 1.0000 0.9231 6 crimson_waxcap 1.0000 1.0000 1.0000 6 fenugreek_milkcap 1.0000 1.0000 1.0000 7 oyster_mushroom 0.6667 1.0000 0.8000 6 blue_roundhead 0.8571 1.0000 0.9231 6 hoof_fungus 0.7500 1.0000 0.8571 6 bitter_beech_bolete 1.0000 0.5714 0.7273 7 tawny_funnel 1.0000 1.0000 1.0000 6 yellow_false_truffle 1.0000 1.0000 1.0000 6 accuracy 0.8699 1376 macro avg 0.8933 0.8701 0.8670 1376 weighted avg 0.8949 0.8699 0.8676 1376 ```
[ "mosaic_puffball", "scarlet_elfcup", "splendid_waxcap", "tawny_grisette", "jubilee_waxcap", "king_alfreds_cakes", "heath_waxcap", "silky_rosegill", "golden_waxcap", "macro_mushroom", "spectacular_rustgill", "pink_waxcap", "brown_birch_bolete", "scaly_wood_mushroom", "stinkhorn", "blackening_brittlegill", "penny_bun", "chicken_of_the_woods", "common_bonnet", "common_rustgill", "hedgehog_fungus", "shaggy_scalycap", "dyers_mazegill", "earthballs", "purple_brittlegill", "smoky_bracket", "elfin_saddle", "shaggy_bracket", "greencracked_brittlegill", "sulphur_tuft", "warted_amanita", "white_domecap", "winter_chanterelle", "grey_knight", "pale_oyster", "medusa_mushroom", "spotted_toughshank", "dog_stinkhorn", "stubble_rosegill", "truffles", "panthercap", "vermillion_waxcap", "ascot_hat", "birch_polypore", "common_morel", "shaggy_parasol", "turkey_tail", "the_blusher", "deathcap", "chestnut_bolete", "grey_spotted_amanita", "slender_parasol", "horn_of_plenty", "magpie_inkcap", "fools_funnel", "orange_birch_bolete", "scarlet_waxcap", "yellow_stainer", "field_mushroom", "fragrant_funnel", "spring_fieldcap", "bronze_bolete", "orange_grisette", "parasol", "trooping_funnel", "beechwood_sickener", "rosy_bonnet", "dusky_puffball", "the_miller", "white_saddle", "old_man_of_the_woods", "crimped_gill", "blushing_rosette", "pine_bolete", "brown_rollrim", "deadly_webcap", "devils_bolete", "scarlet_caterpillarclub", "red_cracking_bolete", "false_chanterelle", "woodland_inkcap", "cucumber_cap", "leccinum_albostipitatum", "fairy_ring_champignons", "rooting_bolete", "wood_blewit", "lilac_bonnet", "butter_cap", "black_bulgar", "giant_puffball", "false_deathcap", "white_fibrecap", "velvet_shank", "slippery_jack", "white_dapperling", "parrot_waxcap", "wrinkled_peach", "silverleaf_fungus", "amanita_gemmata", "stinking_dapperling", "plums_and_custard", "peppery_bolete", "terracotta_hedgehog", "egghead_mottlegill", "bearded_milkcap", "inky_mushroom", "larch_bolete", "porcelain_fungus", 
"jelly_tooth", "scarletina_bolete", "yellow_foot_waxcap", "the_prince", "aniseed_funnel", "white_false_death_cap", "false_saffron_milkcap", "yellow_swamp_brittlegill", "semifree_morel", "bitter_bolete", "almond_mushroom", "shaggy_inkcap", "blushing_wood_mushroom", "common_puffball", "funeral_bell", "bay_bolete", "blackening_waxcap", "liberty_cap", "snowy_waxcap", "the_goblet", "deer_shield", "freckled_dapperling", "slimy_waxcap", "common_inkcap", "amethyst_chanterelle", "cedarwood_waxcap", "honey_fungus", "bruising_webcap", "stump_puffball", "giant_funnel", "tuberous_polypore", "poison_pie", "curry_milkcap", "amethyst_deceiver", "golden_bootleg", "clustered_domecap", "ochre_brittlegill", "blackening_polypore", "suede_bolete", "horse_mushroom", "geranium_brittlegill", "st_georges_mushroom", "destroying_angel", "field_blewit", "cinnamon_bracket", "lions_mane", "orange_peel_fungus", "chanterelle", "the_sickener", "birch_woodwart", "pavement_mushroom", "false_morel", "oak_bolete", "poplar_fieldcap", "jelly_ears", "summer_bolete", "frosted_chanterelle", "morel", "the_deceiver", "splitgill", "ruby_bolete", "sepia_bolete", "bovine_bolete", "fly_agaric", "thimble_morel", "black_morel", "poplar_bell", "fleecy_milkcap", "golden_scalycap", "yellow_stagshorn", "oak_polypore", "weeping_widow", "meadow_waxcap", "clouded_agaric", "woolly_milkcap", "snakeskin_grisette", "hairy_curtain_crust", "lurid_bolete", "wood_mushroom", "dryads_saddle", "sheathed_woodtuft", "orange_bolete", "lilac_fibrecap", "cauliflower_fungus", "saffron_milkcap", "pestle_puffball", "red_belted_bracket", "beefsteak_fungus", "oak_mazegill", "glistening_inkcap", "tripe_fungus", "blushing_bracket", "deadly_fibrecap", "root_rot", "powdery_brittlegill", "grisettes", "charcoal_burner", "rooting_shank", "hen_of_the_woods", "crimson_waxcap", "fenugreek_milkcap", "oyster_mushroom", "blue_roundhead", "hoof_fungus", "bitter_beech_bolete", "tawny_funnel", "yellow_false_truffle" ]
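Each per-class F1 in the report above is the harmonic mean of that row's precision and recall. A quick sanity check against the `penny_bun` row (precision 0.8571, recall 1.0000):

```python
# F1 as the harmonic mean of precision and recall; the inputs below are
# copied from the penny_bun row of the classification report above.
def f1_score(precision: float, recall: float) -> float:
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.8571, 1.0), 4))  # 0.9231, matching the report
```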
sakethbngr/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the cifar10 dataset. It achieves the following results on the evaluation set: - Loss: 0.0952 - Accuracy: 0.9696 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4921 | 1.0 | 351 | 0.1464 | 0.955 | | 0.4008 | 2.0 | 703 | 0.1049 | 0.9668 | | 0.3386 | 2.99 | 1053 | 0.0952 | 0.9696 | ### Framework versions - Transformers 4.34.1 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck" ]
arieg/my_awesome_food_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_food_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset. It achieves the following results on the evaluation set: - Loss: 0.7792 - Accuracy: 0.99 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.918 | 0.96 | 12 | 0.8973 | 0.97 | | 0.8361 | 2.0 | 25 | 0.7851 | 0.995 | | 0.7704 | 2.88 | 36 | 0.7792 | 0.99 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
arieg/food_classifier
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # arieg/food_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2049 - Validation Loss: 0.2772 - Train Accuracy: 0.917 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 4000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.3304 | 0.3024 | 0.93 | 0 | | 0.3047 | 0.3004 | 0.928 | 1 | | 0.2481 | 0.2744 | 0.935 | 2 | | 0.2262 | 0.2737 | 0.919 | 3 | | 0.2049 | 0.2772 | 0.917 | 4 | ### Framework versions - Transformers 4.34.1 - TensorFlow 2.14.0 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
arieg/food_classifier_noaug
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # arieg/food_classifier_noaug This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1400 - Validation Loss: 0.1328 - Train Accuracy: 0.969 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 4000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.1614 | 0.1377 | 0.971 | 0 | | 0.1519 | 0.1422 | 0.968 | 1 | | 0.1429 | 0.1329 | 0.968 | 2 | | 0.1340 | 0.1328 | 0.969 | 3 | | 0.1400 | 0.1328 | 0.969 | 4 | ### Framework versions - Transformers 4.34.1 - TensorFlow 2.14.0 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
dima806/pneumonia_chest_xray_image_detection
See https://www.kaggle.com/code/dima806/pneumonia-chest-x-ray-image-detection-vit for more details.
[ "normal", "pneumonia" ]
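A two-label classifier like the one above emits one logit per label; a sketch of turning a raw logit pair into (normal, pneumonia) probabilities — the logit values here are illustrative, not taken from the model:

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

labels = ["normal", "pneumonia"]
logits = [-1.2, 2.3]  # hypothetical model output for one chest X-ray
probs = softmax(logits)
prediction = labels[probs.index(max(probs))]
print(prediction)
```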
100rab25/swin-tiny-patch4-window7-224-fraud_number_classification
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-fraud_number_classification This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0107 - Accuracy: 0.9963 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0229 | 1.0 | 19 | 0.0516 | 0.9851 | | 0.0193 | 2.0 | 38 | 0.0107 | 0.9963 | | 0.0062 | 3.0 | 57 | 0.0275 | 0.9963 | | 0.0172 | 4.0 | 76 | 0.0313 | 0.9963 | | 0.028 | 5.0 | 95 | 0.0431 | 0.9926 | ### Framework versions - Transformers 4.34.1 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "fraud_number", "fraud_number_not_found" ]
02shanky/vit-finetuned-cifar10
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test-cifar-10 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cifar10 dataset. It achieves the following results on the evaluation set: - eval_loss: 0.0831 - eval_accuracy: 0.9802 - eval_runtime: 75.4306 - eval_samples_per_second: 66.286 - eval_steps_per_second: 16.572 - epoch: 1.0 - step: 4500 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 10 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Framework versions - Transformers 4.34.1 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck" ]
Mahendra42/vit-base-patch16-224-in21k-finetunedRCC_Classifier
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-in21k-finetunedRCC_Classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 2.5623 - Accuracy: 0.6074 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0019 | 1.0 | 155 | 2.0291 | 0.6532 | | 0.0013 | 2.0 | 310 | 2.4863 | 0.6074 | | 0.001 | 3.0 | 465 | 2.5623 | 0.6074 | ### Framework versions - Transformers 4.34.1 - Pytorch 1.12.1 - Datasets 2.14.5 - Tokenizers 0.14.1
[ "clear cell rcc", "non clear cell" ]
emaeon/vit-base-patch16-224-in21k-finetuned-gecko
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-in21k-finetuned-gecko This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1890 - Accuracy: 0.9885 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.97 | 21 | 3.2699 | 0.6210 | | No log | 1.98 | 43 | 2.0011 | 0.8468 | | 3.1155 | 2.99 | 65 | 1.2851 | 0.8641 | | 3.1155 | 4.0 | 87 | 0.7751 | 0.9389 | | 1.1003 | 4.97 | 108 | 0.6060 | 0.9274 | | 1.1003 | 5.98 | 130 | 0.4584 | 0.9378 | | 0.5229 | 6.99 | 152 | 0.3417 | 0.9585 | | 0.5229 | 8.0 | 174 | 0.2415 | 0.9816 | | 0.5229 | 8.97 | 195 | 0.2014 | 0.9873 | | 0.3249 | 9.66 | 210 | 0.1890 | 0.9885 | ### Framework versions - Transformers 4.34.1 - Pytorch 1.14.0a0+410ce96 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "10_sd22-0010", "11_sd22-0011", "12_sd22-0012", "13_sd21-0013", "14_sd21-0014", "15_sd21-0015", "16_sd21-0016", "17_sd22-0017", "18_sd22-0018", "19_sd22-0019", "1_sd18-0001", "20_sd22-0020", "21_sd22-0021", "22_sd22-0022", "23_sd22-0023", "24_sd22-0024", "25_sd22-0025", "26_sd22-0026", "27_sd22-0027", "28_sd22-0028", "29_sd22-0029", "2_sd22-0002", "30_sd22-0030", "31_sd22-0031", "32_sd22-0032", "33_sd22-0033", "34_sd21-0034", "35_sd21-0035", "36_sd22-0036", "37_ax21-0037", "38_ax22-0038", "39_hax22-0039", "3_sd22-0003", "40_hlax22-0020", "41_ax22-0041", "42_ax22-0042", "43_lax22-0043", "44_hax22-0044", "45_sc22-0045", "46_cal22-0046", "47_cal22-0047", "48_sc22-0048", "49_sc23-0049", "4_sd22-0004", "50_sc23-0050", "51_sab22-0051", "52_lw21-0052", "53_nor22-0053", "54_nor22-0054", "55_nor22-0055", "5_sd22-0005", "6_sd22-0006", "7_sd22-0007", "80_lw", "81_lw", "82_nor", "83_nor", "84_nor", "85_nor", "86_nor", "87_nor", "88_nor", "89_nor(hax)", "8_sd22-0008", "90_lw_b", "91_lw_b", "92_nor_b", "93_nor_b", "9_sd22-0009" ]
KevinTao511/pets_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pets_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9289 - Accuracy: 0.8621 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.8 | 3 | 1.0377 | 0.6897 | | No log | 1.87 | 7 | 0.9472 | 0.8276 | | No log | 2.4 | 9 | 0.9289 | 0.8621 | ### Framework versions - Transformers 4.34.1 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "abyssinian", "basset", "beagle" ]
elucidator8918/VIT-MUSH
# Transfer Learning Vision Transformer (ViT) - Google 224 ViT Base Patch ## Description This model is a Transfer Learning Vision Transformer (ViT) based on Google's 224 ViT Base Patch architecture. It has been fine-tuned on a dataset consisting of fungal images from Russia, with a specific focus on various fungi and lichen species. ## Model Information - Model Name: Transfer Learning ViT - Google 224 ViT Base Patch - Model Architecture: Vision Transformer (ViT) - Base Architecture: Google's 224 ViT Base Patch - Pre-trained on General ImageNet dataset - Fine-tuned on: Fungal image dataset from Russia ## Performance - Accuracy: 90.31% - F1 Score: 86.33% ## Training Details - Training Loss: - Initial: 1.043200 - Final: 0.116200 - Validation Loss: - Initial: 0.822428 - Final: 0.335994 - Training Epochs: 10 - Training Runtime: 18575.04 seconds - Training Samples per Second: 33.327 - Training Steps per Second: 1.042 - Total FLOPs: 4.801 x 10^19 ## Recommended Use Cases - Species classification of various fungi and lichen in Russia. - Fungal biodiversity studies. - Image recognition tasks related to fungi and lichen species. ## Limitations - The model's performance is optimized for fungal species and may not generalize well to other domains. - The model may not perform well on images of fungi and lichen species from regions other than Russia. ## Model Author Siddhant Dutta
[ "boletus reticulatus", "coprinopsis atramentaria", "pleurotus pulmonarius", "gyromitra infula", "lactarius turpis", "nectria cinnabarina", "laetiporus sulphureus", "phellinus tremulae", "pholiota aurivella", "peltigera aphthosa", "lactarius torminosus", "armillaria borealis", "pseudevernia furfuracea", "vulpicida pinastri", "hericium coralloides", "hypogymnia physodes", "fomitopsis betulina", "amanita muscaria", "pleurotus ostreatus", "verpa bohemica", "coprinellus micaceus", "xanthoria parietina", "suillus luteus", "sarcosoma globosum", "coprinellus disseminatus", "rhytisma acerinum", "fomes fomentarius", "stropharia aeruginosa", "lycoperdon perlatum", "suillus grevillei", "sarcoscypha austriaca", "cerioporus squamosus", "coltricia perennis", "paxillus involutus", "kuehneromyces mutabilis", "chondrostereum purpureum", "trichaptum biforme", "daedaleopsis tricolor", "gyromitra gigas", "cantharellus cibarius", "macrolepiota procera", "hygrophoropsis aurantiaca", "hypholoma lateritium", "coprinus comatus", "peltigera praetextata", "lepista nuda", "phellinus igniarius", "tremella mesenterica", "apioperdon pyriforme", "cladonia stellaris", "flammulina velutipes", "parmelia sulcata", "leccinum aurantiacum", "merulius tremellosus", "daedaleopsis confragosa", "pholiota squarrosa", "lobaria pulmonaria", "phaeophyscia orbicularis", "calycina citrina", "sarcomyxa serotina", "fomitopsis pinicola", "urnula craterium", "cladonia rangiferina", "leccinum versipelle", "leccinum albostipitatum", "boletus edulis", "phallus impudicus", "imleria badia", "cladonia fimbriata", "chlorociboria aeruginascens", "amanita pantherina", "trametes ochracea", "mutinus ravenelii", "schizophyllum commune", "artomyces pyxidatus", "graphis scripta", "amanita citrina", "crucibulum laeve", "clitocybe nebularis", "stereum hirsutum", "cetraria islandica", "bjerkandera adusta", "suillus granulatus", "hypholoma fasciculare", "physcia adscendens", "trametes hirsuta", "gyromitra esculenta", "tricholomopsis rutilans", "panellus stipticus", "lactarius deliciosus", "inonotus obliquus", "evernia mesomorpha", "ganoderma applanatum", "phlebia radiata", "trametes versicolor", "calocera viscosa", "evernia prunastri", "platismatia glauca", "leccinum scabrum", "amanita rubescens" ]
bdpc/vit-base_rvl_cdip-N1K_ce_256
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base_rvl_cdip-N1K_ce_256 This model is a fine-tuned version of [jordyvl/vit-base_rvl-cdip](https://huggingface.co/jordyvl/vit-base_rvl-cdip) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4495 - Accuracy: 0.8935 - Brier Loss: 0.1753 - Nll: 1.0235 - F1 Micro: 0.8935 - F1 Macro: 0.8937 - Ece: 0.0696 - Aurc: 0.0181 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:| | No log | 1.0 | 63 | 0.3678 | 0.8972 | 0.1554 | 1.1865 | 0.8972 | 0.8975 | 0.0427 | 0.0165 | | No log | 2.0 | 126 | 0.3774 | 0.896 | 0.1584 | 1.1527 | 0.8960 | 0.8962 | 0.0470 | 0.0170 | | No log | 3.0 | 189 | 0.4050 | 0.892 | 0.1688 | 1.1092 | 0.892 | 0.8924 | 0.0578 | 0.0177 | | No log | 4.0 | 252 | 0.4089 | 0.8945 | 0.1675 | 1.0874 | 0.8945 | 0.8948 | 0.0582 | 0.0177 | | No log | 5.0 | 315 | 0.4255 | 0.8935 | 0.1704 | 1.0678 | 0.8935 | 0.8936 | 0.0640 | 0.0179 | | No log | 6.0 | 378 | 0.4324 | 0.8945 | 0.1715 | 1.0540 | 0.8945 | 0.8948 | 0.0648 | 0.0179 | | No log | 7.0 | 441 | 0.4404 | 0.894 | 0.1728 | 1.0302 | 0.894 | 0.8941 | 0.0672 | 0.0181 | | 0.0579 | 8.0 | 504 | 0.4452 | 0.8932 | 0.1747 | 1.0316 | 0.8932 | 0.8934 | 0.0685 | 0.0180 | | 0.0579 | 9.0 | 567 | 0.4479 | 0.8935 | 0.1749 | 1.0256 | 0.8935 | 0.8937 | 0.0693 | 0.0181 | | 0.0579 | 10.0 | 630 | 0.4495 | 0.8935 | 0.1753 | 1.0235 | 0.8935 | 0.8937 | 0.0696 | 0.0181 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.2.0.dev20231002 - Datasets 2.7.1 - Tokenizers 0.13.3
[ "letter", "form", "email", "handwritten", "advertisement", "scientific_report", "scientific_publication", "specification", "file_folder", "news_article", "budget", "invoice", "presentation", "questionnaire", "resume", "memo" ]
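The `vit-base_rvl_cdip` cards above report a multi-class Brier loss alongside accuracy. As a minimal sketch of what that column measures — the mean squared distance between the predicted probability vector and the one-hot true label (pure Python; the inputs below are illustrative, not from the model):

```python
def brier_score(prob_rows, true_indices):
    """Multi-class Brier score: mean over samples of the squared
    distance between the probability vector and the one-hot target."""
    total = 0.0
    for probs, true_idx in zip(prob_rows, true_indices):
        for k, p in enumerate(probs):
            target = 1.0 if k == true_idx else 0.0
            total += (p - target) ** 2
    return total / len(prob_rows)

# A confident correct prediction scores near 0; a confident wrong one near 2.
good = brier_score([[0.9, 0.05, 0.05]], [0])
bad = brier_score([[0.9, 0.05, 0.05]], [1])
```

Unlike accuracy, the score is sensitive to how much probability mass lands on the right class, which is why the cards track it alongside calibration metrics.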
bdpc/vit-base_rvl_cdip-N1K_AURC_256
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base_rvl_cdip-N1K_AURC_256 This model is a fine-tuned version of [jordyvl/vit-base_rvl-cdip](https://huggingface.co/jordyvl/vit-base_rvl-cdip) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2459 - Accuracy: 0.8968 - Brier Loss: 0.1720 - Nll: 0.9246 - F1 Micro: 0.8968 - F1 Macro: 0.8967 - Ece: 0.0709 - Aurc: 0.0191 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:| | No log | 1.0 | 63 | 0.1138 | 0.8922 | 0.1604 | 1.1695 | 0.8922 | 0.8926 | 0.0478 | 0.0170 | | No log | 2.0 | 126 | 0.1565 | 0.8952 | 0.1607 | 1.1000 | 0.8952 | 0.8952 | 0.0532 | 0.0176 | | No log | 3.0 | 189 | 0.1722 | 0.8972 | 0.1620 | 1.0250 | 0.8972 | 0.8973 | 0.0584 | 0.0175 | | No log | 4.0 | 252 | 0.2006 | 0.897 | 0.1642 | 0.9921 | 0.897 | 0.8969 | 0.0615 | 0.0181 | | No log | 5.0 | 315 | 0.2142 | 0.8988 | 0.1668 | 0.9670 | 0.8988 | 0.8986 | 0.0640 | 0.0183 | | No log | 6.0 | 378 | 0.2207 | 0.8975 | 0.1688 | 0.9482 | 0.8975 | 0.8975 | 0.0674 | 0.0186 | | No log | 7.0 | 441 | 0.2310 | 0.897 | 0.1700 | 0.9397 | 0.897 | 0.8969 | 0.0697 | 0.0188 | | 0.008 | 8.0 | 504 | 0.2401 | 0.8968 | 0.1714 | 0.9268 | 0.8968 | 0.8966 | 0.0705 | 0.0190 | | 0.008 | 9.0 | 567 | 0.2441 | 0.8975 | 0.1719 | 0.9262 | 0.8975 | 0.8974 | 0.0709 | 0.0191 | | 0.008 | 10.0 | 630 | 0.2459 | 0.8968 | 0.1720 | 0.9246 | 0.8968 | 0.8967 | 0.0709 | 0.0191 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.2.0.dev20231002 - Datasets 2.7.1 - Tokenizers 0.13.3
[ "letter", "form", "email", "handwritten", "advertisement", "scientific_report", "scientific_publication", "specification", "file_folder", "news_article", "budget", "invoice", "presentation", "questionnaire", "resume", "memo" ]
bdpc/vit-base_rvl_cdip-N1K_ce_128
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base_rvl_cdip-N1K_ce_128 This model is a fine-tuned version of [jordyvl/vit-base_rvl-cdip](https://huggingface.co/jordyvl/vit-base_rvl-cdip) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4776 - Accuracy: 0.8912 - Brier Loss: 0.1798 - Nll: 0.9844 - F1 Micro: 0.8912 - F1 Macro: 0.8915 - Ece: 0.0768 - Aurc: 0.0189 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:| | No log | 1.0 | 125 | 0.3896 | 0.893 | 0.1649 | 1.1887 | 0.893 | 0.8933 | 0.0484 | 0.0175 | | No log | 2.0 | 250 | 0.3908 | 0.8948 | 0.1606 | 1.1433 | 0.8948 | 0.8950 | 0.0499 | 0.0179 | | No log | 3.0 | 375 | 0.4188 | 0.892 | 0.1708 | 1.0860 | 0.892 | 0.8923 | 0.0607 | 0.0184 | | 0.0953 | 4.0 | 500 | 0.4268 | 0.892 | 0.1707 | 1.0788 | 0.892 | 0.8924 | 0.0654 | 0.0184 | | 0.0953 | 5.0 | 625 | 0.4414 | 0.8938 | 0.1719 | 1.0502 | 0.8938 | 0.8941 | 0.0664 | 0.0187 | | 0.0953 | 6.0 | 750 | 0.4570 | 0.8932 | 0.1754 | 1.0253 | 0.8932 | 0.8936 | 0.0714 | 0.0187 | | 0.0953 | 7.0 | 875 | 0.4681 | 0.891 | 0.1779 | 1.0018 | 0.891 | 0.8912 | 0.0752 | 0.0191 | | 0.0128 | 8.0 | 1000 | 0.4720 | 0.8902 | 0.1792 | 0.9789 | 0.8902 | 0.8905 | 0.0771 | 0.0188 | | 0.0128 | 9.0 | 1125 | 0.4757 | 0.8918 | 0.1794 | 0.9865 | 0.8918 | 0.8920 | 0.0760 | 0.0188 | | 0.0128 | 10.0 | 1250 | 0.4776 | 0.8912 | 0.1798 | 0.9844 | 0.8912 | 0.8915 | 0.0768 | 0.0189 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.2.0.dev20231002 - Datasets 2.7.1 - Tokenizers 0.13.3
[ "letter", "form", "email", "handwritten", "advertisement", "scientific_report", "scientific_publication", "specification", "file_folder", "news_article", "budget", "invoice", "presentation", "questionnaire", "resume", "memo" ]
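The `Ece` column in these cards is the expected calibration error: predictions are bucketed by their top-1 confidence, and the gap between each bucket's accuracy and its mean confidence is averaged, weighted by bucket size. A small sketch under the common equal-width-bin convention (the bin count and inputs are assumptions for illustration):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE with equal-width confidence bins: weighted average of
    |bin accuracy - bin mean confidence| over the non-empty bins."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0
        bins[idx].append((conf, ok))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        acc = sum(1 for _, ok in b if ok) / len(b)
        ece += (len(b) / n) * abs(acc - avg_conf)
    return ece
```

A model that is 95% confident but only right half the time contributes a large gap; a perfectly calibrated model scores 0.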
bdpc/vit-base_rvl_cdip-N1K_AURC_128
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base_rvl_cdip-N1K_AURC_128 This model is a fine-tuned version of [jordyvl/vit-base_rvl-cdip](https://huggingface.co/jordyvl/vit-base_rvl-cdip) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2754 - Accuracy: 0.8962 - Brier Loss: 0.1742 - Nll: 0.8794 - F1 Micro: 0.8962 - F1 Macro: 0.8963 - Ece: 0.0736 - Aurc: 0.0200 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:| | No log | 1.0 | 125 | 0.1357 | 0.8898 | 0.1657 | 1.2064 | 0.8898 | 0.8907 | 0.0492 | 0.0181 | | No log | 2.0 | 250 | 0.1615 | 0.898 | 0.1602 | 1.0955 | 0.898 | 0.8986 | 0.0473 | 0.0181 | | No log | 3.0 | 375 | 0.1795 | 0.896 | 0.1630 | 1.0031 | 0.8960 | 0.8959 | 0.0599 | 0.0180 | | 0.0132 | 4.0 | 500 | 0.2094 | 0.8978 | 0.1662 | 0.9561 | 0.8978 | 0.8977 | 0.0633 | 0.0187 | | 0.0132 | 5.0 | 625 | 0.2290 | 0.898 | 0.1692 | 0.9249 | 0.898 | 0.8979 | 0.0665 | 0.0190 | | 0.0132 | 6.0 | 750 | 0.2430 | 0.898 | 0.1714 | 0.9150 | 0.898 | 0.8981 | 0.0690 | 0.0194 | | 0.0132 | 7.0 | 875 | 0.2567 | 0.898 | 0.1718 | 0.8888 | 0.898 | 0.8979 | 0.0702 | 0.0196 | | 0.0022 | 8.0 | 1000 | 0.2740 | 0.8975 | 0.1734 | 0.8800 | 0.8975 | 0.8975 | 0.0718 | 0.0199 | | 0.0022 | 9.0 | 1125 | 0.2715 | 0.896 | 0.1743 | 0.8824 | 0.8960 | 0.8960 | 0.0737 | 0.0199 | | 0.0022 | 10.0 | 1250 | 0.2754 | 0.8962 | 0.1742 | 0.8794 | 0.8962 | 0.8963 | 0.0736 | 0.0200 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.2.0.dev20231002 - Datasets 2.7.1 - Tokenizers 0.13.3
[ "letter", "form", "email", "handwritten", "advertisement", "scientific_report", "scientific_publication", "specification", "file_folder", "news_article", "budget", "invoice", "presentation", "questionnaire", "resume", "memo" ]
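The `Aurc` column (and the AURC models' training objective) is the area under the risk-coverage curve: samples are ranked by confidence and the error rate ("risk") is averaged over all coverage levels. A minimal sketch of that selective-prediction metric — a plain mean over the n prefix risks; actual implementations may integrate slightly differently:

```python
def aurc(confidences, correct):
    """Area under the risk-coverage curve: mean error rate over the
    prefixes obtained by keeping samples in confidence order."""
    order = sorted(range(len(confidences)),
                   key=lambda i: confidences[i], reverse=True)
    errors = 0
    risks = []
    for rank, i in enumerate(order, start=1):
        errors += 0 if correct[i] else 1
        risks.append(errors / rank)  # risk at coverage rank/n
    return sum(risks) / len(risks)
```

Lower is better: a model whose mistakes all sit at low confidence can reject them first, so its risk stays near zero at most coverage levels.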
everycoffee/autotrain-coffee-bean-quality-97496146930
# Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 97496146930 - CO2 Emissions (in grams): 2.6219 ## Validation Metrics - Loss: 0.097 - Accuracy: 0.990 - Precision: 0.980 - Recall: 1.000 - AUC: 0.998 - F1: 0.990
[ "defect", "good" ]
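The AutoTrain card above reports precision 0.980, recall 1.000, F1 0.990 and accuracy 0.990 for the defect/good classifier. Those figures are mutually consistent, as a quick sketch from confusion-matrix counts shows — the counts below are hypothetical, chosen only to reproduce the card's numbers:

```python
def binary_metrics(tp, fp, fn, tn):
    """Precision, recall, F1, and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"precision": precision, "recall": recall,
            "f1": f1, "accuracy": accuracy}

# recall = 1.0 implies fn = 0; one false positive gives precision 0.98.
m = binary_metrics(tp=49, fp=1, fn=0, tn=50)
```

With perfect recall, F1 collapses to 2p/(p+1), so precision 0.98 yields F1 ≈ 0.990 as reported.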
02shanky/vit-finetuned-vanilla-cifar10-0
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-finetuned-vanilla-cifar10-0 This model is a fine-tuned version of [02shanky/vit-finetuned-cifar10](https://huggingface.co/02shanky/vit-finetuned-cifar10) on the cifar10 dataset. It achieves the following results on the evaluation set: - Loss: 0.0306 - Accuracy: 0.992 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 316 | 0.0619 | 0.9836 | | 0.2651 | 2.0 | 633 | 0.0460 | 0.9867 | | 0.2651 | 3.0 | 949 | 0.0415 | 0.9878 | | 0.1967 | 4.0 | 1266 | 0.0326 | 0.9916 | | 0.1552 | 4.99 | 1580 | 0.0306 | 0.992 | ### Framework versions - Transformers 4.34.1 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck" ]
arieg/food_classifier_noaug_streaming
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # arieg/food_classifier_noaug_streaming This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4578 - Validation Loss: 1.3138 - Train Accuracy: 0.801 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 3.1605 | 2.7599 | 0.602 | 0 | | 1.6013 | 1.9823 | 0.67 | 1 | | 0.9193 | 1.5901 | 0.699 | 2 | | 0.6189 | 1.3822 | 0.712 | 3 | | 0.4578 | 1.3138 | 0.801 | 4 | ### Framework versions - Transformers 4.34.1 - TensorFlow 2.14.0 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
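The optimizer config in the Keras card above serializes a `PolynomialDecay` schedule with `power=1.0`, which is just a straight linear ramp of the learning rate. A sketch of that rule using the card's own `initial_learning_rate=3e-05` and `decay_steps=20000` (the function name is ours, not a Keras API):

```python
def polynomial_decay_lr(step, initial_lr=3e-05, decay_steps=20000,
                        end_lr=0.0, power=1.0):
    """Keras-style PolynomialDecay with cycle=False; power=1.0 makes it
    a linear ramp from initial_lr down to end_lr over decay_steps."""
    step = min(step, decay_steps)  # cycle=False: clamp past the end
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * (frac ** power) + end_lr
```

Halfway through (step 10000) the rate is half the initial value, and beyond `decay_steps` it stays pinned at `end_learning_rate`.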
JLB-JLB/seizure_vit_jlb_231027
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # seizure_vit_jlb_231027 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the JLB-JLB/seizure_eeg_greyscale_224x224_6secWindow_adjusted dataset. It achieves the following results on the evaluation set: - Loss: 0.4759 - Roc Auc: 0.7822 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Roc Auc | |:-------------:|:-----:|:-----:|:---------------:|:-------:| | 0.4787 | 0.17 | 1000 | 0.5094 | 0.7706 | | 0.3695 | 0.34 | 2000 | 0.5111 | 0.7359 | | 0.337 | 0.51 | 3000 | 0.4734 | 0.7829 | | 0.3604 | 0.68 | 4000 | 0.5508 | 0.7457 | | 0.3222 | 0.85 | 5000 | 0.5817 | 0.7687 | | 0.2315 | 1.02 | 6000 | 0.6515 | 0.7679 | | 0.2388 | 1.19 | 7000 | 0.5681 | 0.7543 | | 0.2691 | 1.36 | 8000 | 0.5307 | 0.7691 | | 0.268 | 1.53 | 9000 | 0.5643 | 0.7610 | | 0.131 | 1.7 | 10000 | 0.7293 | 0.7451 | | 0.2303 | 1.87 | 11000 | 0.6291 | 0.7704 | | 0.1442 | 2.04 | 12000 | 0.6372 | 0.7871 | | 0.1325 | 2.21 | 13000 | 0.8672 | 0.7319 | | 0.1986 | 2.38 | 14000 | 0.7352 | 0.7532 | | 0.1669 | 2.55 | 15000 | 0.8195 | 0.7562 | | 0.1228 | 2.72 | 16000 | 1.0106 | 0.7239 | | 0.1071 | 2.89 | 17000 | 0.8957 | 0.7463 | | 0.1322 | 3.06 | 18000 | 1.0871 | 0.7408 | | 0.1676 | 3.24 | 19000 | 0.9173 | 0.7683 | | 0.1105 | 3.41 | 20000 | 1.0175 | 0.7700 | | 0.1451 | 3.58 | 21000 | 0.9357 | 0.7404 | | 0.082 | 3.75 | 22000 | 1.1246 | 0.7404 | | 0.1457 | 3.92 | 23000 | 1.0082 | 0.7502 | | 0.0336 | 4.09 | 24000 | 1.3685 | 0.7443 | | 0.0742 | 4.26 | 25000 | 1.5080 | 0.7227 | | 0.0353 | 4.43 | 26000 | 1.3573 | 0.7421 | | 0.0557 | 4.6 | 27000 | 1.2484 | 0.7472 | | 0.075 | 4.77 | 28000 | 1.2750 | 0.7462 | | 0.0569 | 4.94 | 29000 | 1.3954 | 0.7355 | ### Framework versions - Transformers 4.34.1 - Pytorch 2.1.0 - Datasets 2.14.6 - Tokenizers 0.14.1
[ "seiz", "bckg" ]
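The seizure card above evaluates with ROC AUC rather than accuracy. That score has a simple rank interpretation — the probability that a randomly chosen positive ("seiz") outscores a randomly chosen negative ("bckg"), with ties counting half. This O(n·m) sketch computes it directly; fine for intuition, though real libraries sort instead of comparing every pair:

```python
def roc_auc(scores_pos, scores_neg):
    """ROC AUC as the Mann-Whitney U statistic: fraction of
    positive/negative pairs where the positive scores higher."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

A score of 0.5 means the classifier ranks no better than chance; the card's 0.7822 means a seizure window outranks a background window about 78% of the time.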
bdpc/vit-base_rvl_cdip-N1K_aAURC_128
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base_rvl_cdip-N1K_aAURC_128 This model is a fine-tuned version of [jordyvl/vit-base_rvl-cdip](https://huggingface.co/jordyvl/vit-base_rvl-cdip) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4634 - Accuracy: 0.8915 - Brier Loss: 0.1791 - Nll: 0.9824 - F1 Micro: 0.8915 - F1 Macro: 0.8918 - Ece: 0.0767 - Aurc: 0.0184 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:| | No log | 1.0 | 125 | 0.3790 | 0.8935 | 0.1649 | 1.1886 | 0.8935 | 0.8937 | 0.0488 | 0.0175 | | No log | 2.0 | 250 | 0.3783 | 0.8958 | 0.1605 | 1.1495 | 0.8958 | 0.8959 | 0.0497 | 0.0178 | | No log | 3.0 | 375 | 0.4065 | 0.8915 | 0.1700 | 1.0956 | 0.8915 | 0.8918 | 0.0617 | 0.0183 | | 0.0928 | 4.0 | 500 | 0.4158 | 0.8932 | 0.1705 | 1.0843 | 0.8932 | 0.8936 | 0.0635 | 0.0183 | | 0.0928 | 5.0 | 625 | 0.4328 | 0.8932 | 0.1721 | 1.0369 | 0.8932 | 0.8935 | 0.0673 | 0.0186 | | 0.0928 | 6.0 | 750 | 0.4442 | 0.891 | 0.1764 | 1.0214 | 0.891 | 0.8913 | 0.0737 | 0.0183 | | 0.0928 | 7.0 | 875 | 0.4542 | 0.8935 | 0.1770 | 1.0053 | 0.8935 | 0.8938 | 0.0722 | 0.0187 | | 0.0125 | 8.0 | 1000 | 0.4587 | 0.891 | 0.1790 | 0.9941 | 0.891 | 0.8913 | 0.0767 | 0.0183 | | 0.0125 | 9.0 | 1125 | 0.4616 | 0.891 | 0.1786 | 0.9847 | 0.891 | 0.8912 | 0.0767 | 0.0185 | | 0.0125 | 10.0 | 1250 | 0.4634 | 0.8915 | 0.1791 | 0.9824 | 0.8915 | 0.8918 | 0.0767 | 0.0184 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.2.0.dev20231002 - Datasets 2.7.1 - Tokenizers 0.13.3
[ "letter", "form", "email", "handwritten", "advertisement", "scientific_report", "scientific_publication", "specification", "file_folder", "news_article", "budget", "invoice", "presentation", "questionnaire", "resume", "memo" ]
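The remaining evaluation column in these cards, `Nll`, is the mean negative log-likelihood the model assigns to the true class. A sketch with a small epsilon clamp as a guard against log(0) — the clamp value is an assumption of this sketch, not taken from the cards:

```python
import math

def nll(prob_rows, true_indices, eps=1e-12):
    """Mean negative log-likelihood of the true class over a batch."""
    total = 0.0
    for probs, idx in zip(prob_rows, true_indices):
        # clamp so a zero probability yields a large finite penalty
        total += -math.log(max(probs[idx], eps))
    return total / len(prob_rows)
```

Unlike accuracy, NLL keeps penalizing overconfident mistakes after the argmax is already wrong, which is why it can rise while accuracy holds steady in the tables above.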
bdpc/vit-base_rvl_cdip-N1K_aAURC_64
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base_rvl_cdip-N1K_aAURC_64 This model is a fine-tuned version of [jordyvl/vit-base_rvl-cdip](https://huggingface.co/jordyvl/vit-base_rvl-cdip) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4857 - Accuracy: 0.8892 - Brier Loss: 0.1843 - Nll: 0.9506 - F1 Micro: 0.8892 - F1 Macro: 0.8895 - Ece: 0.0837 - Aurc: 0.0193 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:| | No log | 1.0 | 250 | 0.3824 | 0.888 | 0.1700 | 1.1756 | 0.888 | 0.8884 | 0.0548 | 0.0185 | | 0.1403 | 2.0 | 500 | 0.3988 | 0.8925 | 0.1681 | 1.1230 | 0.8925 | 0.8936 | 0.0549 | 0.0199 | | 0.1403 | 3.0 | 750 | 0.4099 | 0.8865 | 0.1756 | 1.0948 | 0.8865 | 0.8868 | 0.0672 | 0.0187 | | 0.0442 | 4.0 | 1000 | 0.4297 | 0.8925 | 0.1747 | 1.0568 | 0.8925 | 0.8931 | 0.0685 | 0.0191 | | 0.0442 | 5.0 | 1250 | 0.4467 | 0.8925 | 0.1775 | 1.0202 | 0.8925 | 0.8928 | 0.0734 | 0.0194 | | 0.0119 | 6.0 | 1500 | 0.4612 | 0.8908 | 0.1808 | 0.9834 | 0.8907 | 0.8914 | 0.0772 | 0.0191 | | 0.0119 | 7.0 | 1750 | 0.4762 | 0.8882 | 0.1845 | 0.9761 | 0.8882 | 0.8885 | 0.0827 | 0.0197 | | 0.0062 | 8.0 | 2000 | 0.4763 | 0.892 | 0.1824 | 0.9652 | 0.892 | 0.8923 | 0.0789 | 0.0192 | | 0.0062 | 9.0 | 2250 | 0.4854 | 0.8892 | 0.1844 | 0.9509 | 0.8892 | 0.8895 | 0.0834 | 0.0193 | | 0.0051 | 10.0 | 2500 | 0.4857 | 0.8892 | 0.1843 | 0.9506 | 0.8892 | 0.8895 | 0.0837 | 0.0193 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.2.0.dev20231002 - Datasets 2.7.1 - Tokenizers 0.13.3
[ "letter", "form", "email", "handwritten", "advertisement", "scientific_report", "scientific_publication", "specification", "file_folder", "news_article", "budget", "invoice", "presentation", "questionnaire", "resume", "memo" ]