| column | type | length |
|---|---|---|
| model_id | string | 7–105 chars |
| model_card | string | 1–130k chars |
| model_labels | list | 2–80k items |

**model_id:** DBD-research-group/ConvNeXT-Base-BirdSet-XCL

**model_card:**
# ConvNext (trained on XCL from BirdSet)
ConvNext trained on the XCL dataset from BirdSet, covering 9736 bird species from Xeno-Canto. Please refer to the [BirdSet Paper](https://arxiv.org/pdf/2403.10380) and the
[BirdSet Repository](https://github.com/DBD-research-group/BirdSet/tree/main) for further information.
## Model Details
ConvNeXT is a pure convolutional model (ConvNet) inspired by the design of Vision Transformers; its authors report that it outperforms them on standard vision benchmarks.
## How to use
The BirdSet data requires a custom processor that is available in the BirdSet repository; no Hugging Face processor ships with this model.
The model accepts a single-channel (mono) spectrogram as input, e.g. a batch of shape `torch.Size([16, 1, 128, 334])`.
- The model is trained on 5-second clips of bird vocalizations.
- num_channels: 1
- pretrained checkpoint: facebook/convnext-base-224-22k
- sampling_rate: 32_000
- normalize spectrogram: mean: -4.268, std: 4.569 (from AudioSet)
- spectrogram: n_fft: 1024, hop_length: 320, power: 2.0
- melscale: n_mels: 128, n_stft: 513
- dbscale: top_db: 80
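As a quick sanity check of the time axis implied by these settings (a minimal sketch; the frame formula assumes torchaudio's default `center=True` padding in `Spectrogram`):

```python
# One 5-second clip at 32 kHz maps to 501 spectrogram frames:
num_samples = 5 * 32_000           # 160,000 samples
n_frames = num_samples // 320 + 1  # hop_length = 320, centered STFT
print(n_frames)                    # 501 -> one clip becomes a [1, 128, 501] model input
```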
See [example inference notebook](https://github.com/DBD-research-group/BirdSet/blob/main/notebooks/tutorials/model_inference.ipynb).
Run in [Google Colab](https://colab.research.google.com/drive/1pp_RCJEjSR4gPBGFtxDdgnr4Uk1_KimU?usp=sharing):
```python
from transformers import ConvNextForImageClassification
import torch
import torchaudio
from torchvision import transforms
import requests
import io

# download an example bird sound from Xeno-Canto (recording XC704485)
url = "https://xeno-canto.org/704485/download"
response = requests.get(url)
audio, sample_rate = torchaudio.load(io.BytesIO(response.content))
print("Original shape and sample rate: ", audio.shape, sample_rate)

# crop to 5 seconds
audio = audio[:, : 5 * sample_rate]
# resample to 32 kHz
resample = torchaudio.transforms.Resample(orig_freq=sample_rate, new_freq=32_000)
audio = resample(audio)
print("Resampled shape and sample rate: ", audio.shape, 32_000)

CACHE_DIR = "../../data_birdset"  # Change this to your own cache directory

# Load the model
model = ConvNextForImageClassification.from_pretrained(
    "DBD-research-group/ConvNeXT-Base-BirdSet-XCL",
    cache_dir=CACHE_DIR,
    ignore_mismatched_sizes=True,
)
model.eval()


class PowerToDB(torch.nn.Module):
    """
    A power-spectrogram-to-decibel conversion layer.
    See birdset.datamodule.components.augmentations.
    """

    def __init__(self, ref=1.0, amin=1e-10, top_db=80.0):
        super().__init__()
        self.ref = ref
        self.amin = amin
        self.top_db = top_db

    def forward(self, S):
        # Convert S to a float tensor if it is not one already
        S = torch.as_tensor(S, dtype=torch.float32)
        if self.amin <= 0:
            raise ValueError("amin must be strictly positive")
        if torch.is_complex(S):
            magnitude = S.abs()
        else:
            magnitude = S
        # ref may be a callable (computed from the input) or a scalar
        if callable(self.ref):
            ref_value = self.ref(magnitude)
        else:
            ref_value = torch.abs(torch.tensor(self.ref, dtype=S.dtype))
        # Compute the log spectrogram relative to ref
        log_spec = 10.0 * torch.log10(
            torch.maximum(magnitude, torch.tensor(self.amin, device=magnitude.device))
        )
        log_spec -= 10.0 * torch.log10(
            torch.maximum(ref_value, torch.tensor(self.amin, device=magnitude.device))
        )
        # Clamp the dynamic range to top_db if requested
        if self.top_db is not None:
            if self.top_db < 0:
                raise ValueError("top_db must be non-negative")
            log_spec = torch.maximum(log_spec, log_spec.max() - self.top_db)
        return log_spec


# Initialize the transformations
spectrogram_converter = torchaudio.transforms.Spectrogram(
    n_fft=1024, hop_length=320, power=2.0
)
mel_converter = torchaudio.transforms.MelScale(
    n_mels=128, n_stft=513, sample_rate=32_000
)
normalizer = transforms.Normalize((-4.268,), (4.569,))
powerToDB = PowerToDB(top_db=80)


def preprocess(audio):
    """
    Preprocess audio (already resampled to 32 kHz) into the format the model expects:
    - power spectrogram (n_fft: 1024, hop_length: 320, power: 2.0)
    - mel scale (n_mels: 128, n_stft: 513)
    - dB scale (top_db: 80)
    - normalization with mean -4.268 and std 4.569 (from AudioSet)
    """
    spectrogram = spectrogram_converter(audio)
    spectrogram = spectrogram.to(torch.float32)
    melspec = mel_converter(spectrogram)
    dbscale = powerToDB(melspec)
    normalized_dbscale = normalizer(dbscale)
    # Insert the channel dimension the model expects
    normalized_dbscale = normalized_dbscale.unsqueeze(-3)
    return normalized_dbscale


preprocessed_audio = preprocess(audio)
print("Preprocessed audio shape:", preprocessed_audio.shape)

with torch.no_grad():
    logits = model(preprocessed_audio).logits
print("Logits shape: ", logits.shape)

top5 = torch.topk(logits, 5)
print("Top 5 logits:", top5.values)
print("Top 5 predicted classes:")
print([model.config.id2label[i] for i in top5.indices.squeeze().tolist()])
```
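For recordings longer than 5 seconds, one possible approach is to split the resampled waveform into 5-second windows and score each window as a batch. This is a minimal sketch, not the official BirdSet pipeline: the non-overlapping windows, the `predict_long_recording` helper, and the sigmoid scoring are assumptions (BirdSet poses recognition as multi-label classification). It reuses `model` and `preprocess` from the example above.

```python
import torch

CLIP_SAMPLES = 5 * 32_000  # 160,000 samples per window at 32 kHz


def predict_long_recording(waveform: torch.Tensor, top_k: int = 5):
    """waveform: [1, num_samples] mono audio, already resampled to 32 kHz."""
    # Zero-pad the tail so the last partial window is kept
    pad = (-waveform.shape[-1]) % CLIP_SAMPLES
    padded = torch.nn.functional.pad(waveform, (0, pad))
    # [1, n_windows, CLIP_SAMPLES] -> preprocess each window, stack into a batch
    windows = padded.unfold(-1, CLIP_SAMPLES, CLIP_SAMPLES)
    batch = torch.cat([preprocess(w.unsqueeze(0)) for w in windows[0]])
    with torch.no_grad():
        logits = model(batch).logits  # [n_windows, 9736]
    probs = logits.sigmoid()          # per-class scores in [0, 1] (multi-label assumption)
    for i, window_probs in enumerate(probs):
        top = torch.topk(window_probs, top_k)
        labels = [model.config.id2label[j] for j in top.indices.tolist()]
        scores = [round(v, 3) for v in top.values.tolist()]
        print(f"{i * 5}-{(i + 1) * 5}s:", list(zip(labels, scores)))


predict_long_recording(audio)
```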
## Model Source
- **Repository:** [BirdSet Repository](https://github.com/DBD-research-group/BirdSet/tree/main)
- **Paper:** [BirdSet Paper](https://arxiv.org/pdf/2403.10380)
## Citation
**model_labels:**
[
"ostric2",
"grerhe1",
"norcas1",
"whhstd1",
"whcsap1",
"vibhum1",
"bucsap1",
"grbtur1",
"bfgbir1",
"grygab1",
"wbgbir1",
"wesple1",
"easple1",
"puctur2",
"torduc1",
"ruwtur2",
"prrtur1",
"whctur1",
"viotur1",
"rostur1",
"yebtur1",
"whctur2",
"rectur1",
"guitur1",
"livtur1",
"spwgoo1",
"schtur1",
"knytur1",
"blbtur1",
"fistur1",
"hartur1",
"grebus1",
"arabus1",
"korbus1",
"houbus1",
"ludbus1",
"comduc3",
"whbbus2",
"blubus1",
"karbus1",
"ruebus1",
"savbus1",
"bucbus1",
"recbus1",
"blabus3",
"whqbus1",
"bkbbus1",
"comduc2",
"lesflo2",
"litbus1",
"guicuc1",
"greani1",
"smbani",
"grbani",
"strcuc1",
"phecuc1",
"pavcuc1",
"legcuc1",
"buwgoo1",
"greroa",
"lesroa1",
"rvgcuc1",
"scgcuc1",
"bagcuc1",
"rwgcuc1",
"rbgcuc1",
"buhcou1",
"piecou1",
"grbcou1",
"egygoo",
"biacou1",
"rufcou1",
"grbcou2",
"blfcou1",
"blhcou1",
"shtcou1",
"baycou1",
"gabcou1",
"bltcou1",
"sencou1",
"origoo1",
"blhcou2",
"cotcou1",
"whbcou1",
"whbcou3",
"grecou1",
"madcou1",
"golcou1",
"blacou1",
"phicou1",
"lescou1",
"andgoo1",
"viocou1",
"lebcou1",
"phecou2",
"andcou1",
"bogcuc1",
"sugcuc1",
"cbgcuc1",
"crecou1",
"blucou1",
"reccou1",
"uplgoo1",
"refcou1",
"coqcou1",
"runcou1",
"giacou1",
"rebcou1",
"rafmal1",
"yellow5",
"yellow6",
"sirmal1",
"rebmal2",
"emu1",
"kelgoo1",
"yebmal1",
"chbmal2",
"chbmal1",
"blfmal1",
"blbmal1",
"grbmal1",
"recmal1",
"scfmal1",
"chwcuc1",
"grscuc1",
"ashgoo1",
"levcuc1",
"piecuc1",
"litcuc2",
"dwacuc1",
"asccuc1",
"squcuc1",
"blbcuc1",
"dabcuc1",
"yebcuc",
"pebcuc1",
"ruhgoo1",
"mancuc",
"bkbcuc",
"gyccuc",
"chbcuc4",
"babcuc4",
"grelic1",
"purlic1",
"hislic1",
"thbcuc1",
"dwakoe1",
"radshe1",
"asikoe2",
"bkbkoe1",
"asikoe3",
"lotkoe1",
"chbcuc2",
"asecuc1",
"viocuc1",
"didcuc1",
"klacuc1",
"yetcuc1",
"comshe",
"afecuc1",
"lobcuc1",
"hobcuc1",
"blecuc1",
"rtbcuc1",
"shbcuc1",
"webcuc1",
"libcuc1",
"palcuc1",
"whckoe1",
"rudshe",
"chbcuc3",
"fatcuc1",
"babcuc2",
"placuc1",
"placuc3",
"brucuc1",
"brucuc2",
"molcuc1",
"dltcuc1",
"oltcuc1",
"soashe1",
"bltcuc1",
"phidrc1",
"asidrc3",
"asidrc2",
"asidrc4",
"mohcuc1",
"larhac2",
"larhac1",
"cohcuc1",
"nohcuc1",
"ausshe1",
"phhcuc1",
"malhac1",
"hodhac1",
"blacuc1",
"reccuc1",
"lescuc1",
"suhcuc1",
"indcuc1",
"madcuc1",
"afrcuc1",
"parshe1",
"himcuc1",
"oricuc2",
"suncuc2",
"comcuc",
"whbmes2",
"bromes1",
"tibsan1",
"palsan1",
"pitsan1",
"namsan1",
"pieduc1",
"chbsan",
"sposan1",
"blbsan1",
"yetsan1",
"crosan1",
"blfsan1",
"madsan1",
"licsan1",
"paisan1",
"fobsan1",
"grytin1",
"musduc",
"dobsan1",
"bursan1",
"rocpig",
"hilpig1",
"snopig1",
"spepig1",
"whcpig1",
"stodov1",
"pabpig1",
"cowpig1",
"whwduc1",
"tropig1",
"bolpig1",
"laupig1",
"afepig1",
"rampig1",
"compig1",
"spwpig1",
"aswpig1",
"niwpig1",
"siwpig1",
"harduc1",
"jawpig1",
"metpig1",
"whhpig1",
"yelpig1",
"delpig1",
"brnpig1",
"satpig1",
"lemdov2",
"whcpig2",
"scnpig1",
"wooduc",
"scapig2",
"picpig2",
"baepig2",
"spwpig3",
"batpig1",
"pavpig2",
"rebpig1",
"perpig2",
"plapig",
"plupig2",
"manduc",
"rudpig",
"shbpig",
"duspig2",
"matdov1",
"pinpig2",
"eutdov",
"dutdov1",
"adtdov1",
"ortdov",
"eucdov",
"manduc1",
"eurcod2",
"afcdov1",
"wwcdov1",
"afmdov1",
"reedov1",
"rindov",
"vindov1",
"recdov1",
"spodov",
"laudov1",
"afrpyg1",
"bacdov1",
"sbcdov1",
"sulcud1",
"rucdov1",
"engcud1",
"barcud1",
"timcud1",
"tancud1",
"ducdov1",
"phcdov1",
"copgoo1",
"brcdov1",
"ancdov1",
"bbcdov1",
"macdov1",
"licdov1",
"grcdov2",
"picdov1",
"crcdov1",
"wfcdov1",
"slacud1",
"grnpyg1",
"eswdov1",
"bbwdov1",
"bswdov1",
"tamdov1",
"bhwdov1",
"namdov1",
"emedov2",
"emedov3",
"stedov1",
"negbro1",
"bratea1",
"combro1",
"brubro1",
"crepig1",
"spipig2",
"squpig1",
"parpig1",
"tbgpig2",
"wonpig1",
"diadov1",
"zebdov",
"soltin1",
"rintea1",
"peadov1",
"bardov2",
"basdov1",
"incdov",
"scadov1",
"cogdov",
"pbgdov1",
"ecgdov1",
"rugdov",
"pigdov1",
"creduc1",
"crgdov1",
"blgdov1",
"mcgdov1",
"begdov1",
"bwgdov1",
"gsgdov1",
"ltgdov1",
"bhqdov1",
"sapqud1",
"sapqud2",
"speduc2",
"ruqdov",
"viqdov1",
"wfqdov",
"kwqdov",
"brqdov1",
"obqdov1",
"whtdov",
"latdov1",
"grfdov1",
"gyhdov1",
"baitea",
"paldov1",
"gredov1",
"cardov1",
"grcdov1",
"ocbdov1",
"toldov1",
"tuqdov1",
"bfqdov1",
"pbqdov1",
"wfqdov1",
"gargan",
"wtqdov1",
"liqdov1",
"chqdov1",
"rcqdov1",
"moudov",
"eardov1",
"zendov",
"whwdov",
"wepdov1",
"nicpig1",
"hottea1",
"sugdov1",
"mantho1",
"minblh1",
"nebhea1",
"wbgdov1",
"wtgdov1",
"frgdov1",
"scgdov1",
"phepig1",
"vicpig1",
"puntea1",
"whedov1",
"amedov1",
"daedov2",
"cihpig1",
"ligpig1",
"pinpig3",
"orbpig1",
"pomgrp2",
"pomgrp5",
"pomgrp4",
"siltea1",
"pomgrp1",
"thbpig1",
"gycpig1",
"sugpig2",
"flgpig1",
"timgrp1",
"lagpig1",
"yefpig1",
"brgpig1",
"madgrp1",
"redsho1",
"madgrp2",
"afrgrp1",
"gnspig1",
"yevpig1",
"wetpig1",
"whbpig1",
"whigrp1",
"whgpig1",
"bbfdov1",
"shbtre1",
"cintea",
"rnfdov1",
"phfdov1",
"fbfdov1",
"ybfdov2",
"refdov1",
"jafdov1",
"macfrd3",
"bcfdov1",
"sbfdov1",
"wofdov1",
"blatin1",
"buwtea",
"psfdov1",
"tafdov1",
"offdov1",
"wafdov1",
"sufdov1",
"mcfdov1",
"pucfrd1",
"kosfrd1",
"pafdov1",
"cifdov1",
"capsho1",
"mafdov2",
"rcfdov1",
"gygfrd1",
"mafdov1",
"atfdov1",
"rbfdov1",
"wcfdov1",
"cofdov1",
"befdov1",
"bcfdov2",
"norsho",
"whbfrd1",
"yebfrd1",
"yebfrd2",
"cbfdov1",
"whfdov2",
"obfdov1",
"gyhfrd1",
"cafdov1",
"bknfrd1",
"dwafrd1",
"gadwal",
"oradov1",
"goldov1",
"veldov1",
"mabpig1",
"cobpig1",
"sebpig1",
"pbipig1",
"wbipig1",
"gripig1",
"grnimp2",
"falduc",
"wheimp2",
"wheimp1",
"elipig1",
"paipig1",
"miipig1",
"marimp1",
"rkipig1",
"spipig1",
"spiimp2",
"ptipig1",
"eurwig",
"cbipig2",
"fiipig1",
"isipig1",
"phipig1",
"ciipig1",
"gryimp1",
"peipig1",
"cbipig1",
"baipig1",
"ncipig1",
"chiwig1",
"piipig2",
"zoeimp1",
"mouimp1",
"moipig1",
"dbipig1",
"tiipig1",
"piipig1",
"whiimp1",
"torimp1",
"torimp2",
"amewig",
"nezpig2",
"nezpig3",
"sompig2",
"afrfin1",
"masfin3",
"sungre1",
"madwor1",
"tsiwor1",
"whsflu1",
"busflu1",
"afbduc1",
"recflu1",
"chhflu1",
"stbflu1",
"strflu1",
"madflu1",
"slbflu1",
"wsfrai1",
"chfrai1",
"forrai1",
"astcra1",
"yebduc1",
"pabcra",
"colcra2",
"sporai",
"blarai1",
"plurai1",
"unicra1",
"rnwrai1",
"liwrai1",
"runwor1",
"gycwor1",
"gretin1",
"melduc1",
"brwrai1",
"giwrai1",
"rwwrai1",
"sbwrai1",
"ridrai1",
"clarai11",
"kinrai2",
"manrai1",
"kinrai4",
"virrai",
"pabduc1",
"bograi1",
"virrai1",
"ausrai1",
"watrai1",
"bncrai1",
"afrrai1",
"madrai1",
"afrcra1",
"whtrai1",
"corcra",
"hawduc",
"slbrai1",
"lewrai1",
"invrai1",
"weka1",
"cherai1",
"okirai1",
"barrai1",
"bubrai1",
"lohrai1",
"spfgal1",
"phiduc1",
"sora",
"spocra1",
"auscra1",
"tanhen1",
"lesmoo1",
"dusmoo1",
"comgal1",
"commoo3",
"trimoo3",
"refcoo1",
"isbduc1",
"giacoo1",
"regcoo1",
"eurcoo",
"rekcoo1",
"hawcoo",
"y00475",
"slccoo1",
"whwcoo1",
"allgal1",
"purgal2",
"spbduc",
"fusfly1",
"azugal1",
"purswa1",
"purswa2",
"purswa3",
"purswa6",
"takahe3",
"ocecra1",
"ruccra1",
"chhcra1",
"mallar3",
"swirai1",
"yelrai",
"yebcra1",
"blkrai",
"mappyt1",
"galrai1",
"dowcra1",
"rudcra1",
"ruscra1",
"rufcra2",
"motduc",
"rufcra1",
"rawcra1",
"grbcra1",
"whtcra1",
"blbcra1",
"blacra1",
"sakrai1",
"rubcra1",
"babcra1",
"bltcra1",
"ambduc",
"brocra1",
"baicra1",
"litcra1",
"spocra2",
"sllcra1",
"andcra1",
"relcra1",
"rencra1",
"nkurai1",
"whbcra1",
"mexduc",
"strcra1",
"waterc1",
"whbwat1",
"plabuh1",
"isabuh1",
"rutbuh1",
"gywtru1",
"pawtru2",
"dawtru1",
"grccra1",
"whttin1",
"captea1",
"blccra1",
"sibcra1",
"sancra",
"whncra1",
"sarcra1",
"brolga1",
"watcra2",
"blucra2",
"demcra1",
"reccra1",
"whcpin",
"whocra",
"comcra",
"hoocra1",
"blncra1",
"limpki",
"litgre1",
"litgre4",
"ausgre1",
"madgre1",
"leagre",
"rebduc1",
"pibgre",
"whtgre3",
"titgre1",
"gregre1",
"rengre",
"grcgre1",
"horgre",
"eargre",
"silgre1",
"jungre1",
"yebpin1",
"hoogre1",
"wesgre",
"clagre",
"grefla3",
"grefla2",
"chifla1",
"lesfla1",
"andfla2",
"jamfla1",
"smabut2",
"eatpin1",
"rebbut2",
"hotbut1",
"hotbut3",
"yelbut1",
"barbut1",
"chbbut2",
"paibut",
"eutkne1",
"indthk1",
"setkne1",
"norpin",
"watkne1",
"sptkne1",
"dstkne",
"petkne1",
"butkne1",
"grtkne1",
"beathk1",
"blfshe1",
"magplo1",
"magoys1",
"gnwtea",
"blaoys1",
"blkoys",
"ameoys",
"afroys1",
"euroys1",
"soioys1",
"pieoys1",
"varoys1",
"chaoys1",
"soooys1",
"yebtea1",
"ibisbi1",
"bkwsti",
"piesti1",
"bknsti",
"bknsti2",
"blasti1",
"pieavo1",
"ameavo",
"renavo1",
"andavo1",
"spetea3",
"norlap",
"lotlap1",
"blaplo1",
"spwlap1",
"rivlap1",
"blhlap1",
"yewlap2",
"whhlap1",
"senlap1",
"blwlap1",
"suntea1",
"crolap1",
"watlap1",
"spblap1",
"brclap1",
"gyhlap1",
"rewlap1",
"banlap1",
"maslap1",
"soclap1",
"whtlap1",
"higtin1",
"andtea1",
"soulap1",
"andlap1",
"rekdot1",
"inldot2",
"wrybil1",
"eugplo",
"pagplo",
"amgplo",
"bkbplo",
"rebdot1",
"gretea1",
"corplo",
"semplo",
"lobplo1",
"lirplo",
"wilplo",
"killde",
"pipplo",
"madplo1",
"kitplo1",
"sthplo1",
"chetea1",
"thbplo1",
"forplo1",
"whfplo1",
"kenplo1",
"whfplo2",
"snoplo5",
"javplo1",
"recplo1",
"chbplo1",
"colplo1",
"bertea1",
"punplo1",
"twbplo1",
"dobplo1",
"lesplo",
"grsplo",
"casplo1",
"oriplo1",
"eurdot",
"rucdot1",
"mouplo",
"brotea1",
"shoplo1",
"blfdot1",
"tatdot1",
"diaplo1",
"pielap1",
"egyplo1",
"grpsni1",
"soapas1",
"lesjac1",
"afrjac1",
"caitea1",
"madjac1",
"cocjac1",
"phtjac1",
"brwjac1",
"norjac",
"watjac1",
"rubsee2",
"whbsee2",
"gybsee1",
"leasee1",
"martea1",
"uplsan",
"brtcur",
"whimbr",
"whimbr3",
"litcur",
"lobcur",
"faecur",
"slbcur",
"eurcur",
"batgod",
"recpoc",
"bktgod",
"hudgod",
"margod",
"rudtur",
"blktur",
"tuasan1",
"grekno",
"redkno",
"surfbi",
"ruff",
"robpoc1",
"brbsan",
"shtsan",
"stisan",
"cursan",
"temsti",
"lotsti",
"spbsan1",
"rensti",
"sander",
"dunlin",
"canvas",
"rocsan",
"pursan",
"baisan",
"litsti",
"leasan",
"whrsan",
"bubsan",
"pecsan",
"semsan",
"wessan",
"tabtin1",
"redhea",
"asidow1",
"lobdow",
"shbdow",
"eurwoo",
"amawoo1",
"duswoo4",
"duswoo3",
"bukwoo1",
"molwoo1",
"amewoo",
"compoc",
"chisni1",
"snisni1",
"jacsni",
"solsni1",
"latsni1",
"woosni1",
"pitsni",
"swisni1",
"afrsni1",
"madsni1",
"ferduc",
"gresni1",
"comsni",
"wilsni1",
"soasni2",
"soasni3",
"punsni1",
"nobsni1",
"giasni1",
"fuesni1",
"andsni1",
"nezsca1",
"impsni1",
"tersan",
"wilpha",
"renpha",
"redpha1",
"comsan",
"sposan",
"grnsan",
"solsan",
"wantat1",
"rinduc",
"gyttat1",
"lesyel",
"willet1",
"comred1",
"marsan",
"woosan",
"spored",
"comgre",
"norgre1",
"greyel",
"tufduc",
"craplo1",
"crccou1",
"somcou1",
"burcou2",
"temcou1",
"dobcou2",
"thbcou1",
"brwcou1",
"jercou1",
"auspra1",
"gresca",
"colpra",
"oripra",
"blwpra1",
"madpra1",
"rocpra1",
"smapra1",
"brnnod",
"lesnod1",
"blknod",
"bugnod",
"lessca",
"whiter",
"blkski",
"afrski1",
"indski1",
"swtgul1",
"bklkit",
"relkit",
"ivogul",
"sabgul",
"slbgul1",
"steeid",
"bongul",
"silgul2",
"blbgul1",
"andgul1",
"bnhgul1",
"brhgul2",
"bkhgul",
"grhgul",
"hargul1",
"saugul2",
"speeid",
"litgul",
"rosgul",
"dolgul2",
"lavgul1",
"laugul",
"fragul",
"grygul",
"audgul1",
"medgul1",
"gbhgul2",
"hootin1",
"kineid",
"whegul2",
"soogul2",
"pacgul1",
"belgul",
"olrgul1",
"bktgul",
"heegul",
"mewgul",
"mewgul2",
"ribgul",
"comeid",
"calgul",
"gbbgul",
"kelgul",
"glwgul",
"wesgul",
"yefgul",
"glagul",
"y00478",
"hergul",
"amhgul1",
"harduc",
"veggul1",
"casgul2",
"yelgul1",
"armgul1",
"slbgul",
"lbbgul",
"gubter1",
"caster1",
"royter1",
"grcter1",
"sursco",
"lecter2",
"royter2",
"chcter2",
"santer1",
"santer2",
"eleter1",
"litter1",
"sauter2",
"leater1",
"yebter2",
"whwsco3",
"faiter2",
"damter2",
"aleter1",
"gybter1",
"briter1",
"sooter1",
"rivter1",
"roster",
"whfter1",
"blnter1",
"whwsco2",
"soater1",
"comter",
"whcter1",
"arcter",
"antter1",
"kerter1",
"forter",
"truter",
"blbter1",
"blfter1",
"whwsco1",
"whiter2",
"whwter",
"blkter",
"labter1",
"incter1",
"chisku1",
"brnsku3",
"gresku1",
"pomjae",
"parjae",
"blksco1",
"lotjae",
"doveki",
"thbmur",
"commur",
"razorb",
"blkgui",
"piggui",
"marmur",
"xanmur2",
"cramur",
"blksco2",
"ancmur",
"japmur1",
"casauk",
"leaauk",
"whiauk",
"atlpuf",
"horpuf",
"kagu1",
"sunbit1",
"rebtro",
"lotduc",
"rettro",
"whttro",
"retloo",
"arcloo",
"pacloo",
"comloo",
"yebloo",
"kinpen1",
"genpen1",
"litpen1",
"lesrhe2",
"bertin1",
"buffle",
"galpen1",
"humpen1",
"magpen1",
"jacpen1",
"roypen1",
"rocpen4",
"rocpen1",
"fiopen1",
"snapen1",
"wispet",
"comgol",
"wvspet1",
"wfspet",
"bbspet1",
"layalb",
"bkfalb",
"wavalb",
"wanalb",
"wanalb2",
"wanalb3",
"royalb1",
"bargol",
"royalb3",
"limalb1",
"bkbalb",
"whcalb1",
"salalb1",
"bulalb2",
"bripet",
"ftspet",
"rispet1",
"swspet",
"smew",
"lcspet",
"barpet",
"monstp1",
"cavstp1",
"maspet",
"trspet",
"norgip1",
"norful",
"cappet",
"blupet1",
"hoomer",
"brbpri1",
"dovpri1",
"slbpri1",
"faipri1",
"fulpri1",
"kerpet2",
"whhpet1",
"grwpet2",
"atlpet1",
"solpet1",
"bramer1",
"magpet1",
"soppet1",
"madpet",
"feapet1",
"feapet2",
"berpet",
"jufpet",
"kerpet",
"herpet2",
"tripet1",
"commer",
"phopet1",
"barpet1",
"hawpet1",
"galpet",
"motpet",
"bkwpet",
"chapet1",
"coopet",
"grapet",
"whcpet1",
"rebmer",
"parpet1",
"wespet1",
"strshe",
"corshe",
"cavshe1",
"wetshe",
"sooshe",
"shtshe",
"pifshe",
"flfshe",
"scsmer1",
"greshe",
"chrshe",
"manshe",
"levshe1",
"balshe1",
"bkvshe",
"towshe1",
"flushe1",
"hutshe1",
"audshe",
"blhduc1",
"pershe1",
"troshe5",
"audshe3",
"litshe8",
"litshe1",
"litshe2",
"sgdpet1",
"codpet1",
"bulpet",
"woosto",
"cintin1",
"masduc",
"milsto1",
"yebsto1",
"paisto1",
"asiope1",
"blasto1",
"whisto1",
"oristo1",
"jabiru",
"lesadj1",
"marsto1",
"rudduc",
"magfri",
"grefri",
"norgan",
"capgan1",
"ausgan1",
"abbboo2",
"bfoboo",
"perboo1",
"masboo",
"nazboo1",
"andduc1",
"refboo",
"brnboo",
"darter2",
"darter3",
"darter4",
"anhing",
"pygcor2",
"lotcor1",
"litcor1",
"lipcor1",
"lakduc1",
"relcor1",
"bracor",
"refcor",
"pelcor",
"soccor1",
"piisha1",
"sposha1",
"blfcor1",
"piecor1",
"libcor1",
"blbduc1",
"indcor1",
"grecor4",
"grecor",
"eursha1",
"flicor1",
"neocor",
"doccor",
"magcor1",
"rofsha1",
"chisha1",
"macduc1",
"impcor1",
"kersha1",
"sacibi2",
"blhibi1",
"ausibi1",
"stnibi1",
"renibi1",
"whsibi1",
"giaibi1",
"waldra1",
"whhduc1",
"balibi1",
"creibi1",
"oliibi2",
"spbibi1",
"hadibi1",
"watibi1",
"pluibi1",
"bunibi1",
"bkfibi1",
"bkfibi2",
"musduc1",
"shtibi1",
"greibi1",
"bafibi1",
"whiibi",
"scaibi",
"gloibi",
"whfibi",
"punibi1",
"madibi1",
"eurspo1",
"ausbrt1",
"blfspo1",
"afrspo1",
"royspo1",
"rosspo1",
"whcbit1",
"ruther1",
"father1",
"btther1",
"agaher1",
"bobher1",
"watbrt1",
"zigher1",
"grebit1",
"ausbit1",
"amebit",
"pinbit1",
"stbbit1",
"leabit",
"litbit1",
"bkbbit1",
"yelbit",
"littin1",
"rebbrt1",
"schbit1",
"cinbit1",
"dwabit1",
"blabit1",
"janher1",
"manher1",
"wbnher1",
"bcnher",
"runher1",
"ycnher",
"bkbbrt1",
"grnher",
"strher",
"squher1",
"inpher1",
"chpher1",
"rubher2",
"categr",
"categr2",
"graher1",
"grbher3",
"bncbrt1",
"cocher1",
"pacher1",
"blhher1",
"grbher2",
"golher1",
"purher1",
"greegr",
"capher1",
"whiher1",
"pieher2",
"micscr1",
"whfher1",
"redegr",
"blaher1",
"triher",
"libher",
"snoegr",
"litegr",
"werher",
"litegr2",
"pacreh1",
"tabscr1",
"hamerk1",
"grwpel1",
"spbpel1",
"dalpel1",
"auspel1",
"amwpel",
"brnpel",
"perpel1",
"hoatzi1",
"kinvul1",
"tanscr1",
"andcon1",
"blkvul",
"turvul",
"lyhvul1",
"gyhvul1",
"secret2",
"osprey",
"osprey4",
"bkskit1",
"auskit1",
"dusscr1",
"whtkit",
"peakit1",
"sctkit1",
"afhhaw1",
"mahhaw1",
"panvul1",
"lammer1",
"egyvul1",
"maseag1",
"grhkit1",
"dusscr3",
"whckit1",
"hobkit",
"euhbuz1",
"orihob2",
"barhob1",
"barhob2",
"swtkit",
"sqtkit1",
"bkbkit1",
"afrcuh1",
"melscr1",
"jerbaz1",
"pacbaz1",
"blabaz1",
"whbvul1",
"whrvul1",
"indvul1",
"himgri1",
"eurgri1",
"capgri1",
"cinvul1",
"vanscr1",
"crseag1",
"moseag1",
"suseag1",
"phseag1",
"anseag1",
"grpeag1",
"shteag1",
"brseag1",
"faseag1",
"coseag1",
"teptin1",
"negscr1",
"bathaw1",
"negeag1",
"creeag1",
"hareag1",
"y00839",
"flohae1",
"mouhae1",
"mouhae2",
"blyhae1",
"javhae1",
"orfscr1",
"sulhae1",
"pinhae1",
"walhae1",
"blheag1",
"bawhae1",
"orheag1",
"baceag2",
"crheag1",
"rubeag2",
"mareag1",
"placha",
"loceag1",
"blaeag1",
"leseag1",
"inseag1",
"grseag1",
"waheag3",
"booeag1",
"liteag1",
"ayheag1",
"taweag1",
"grhcha1",
"steeag1",
"spaeag1",
"impeag1",
"goleag",
"weteag1",
"vereag1",
"cashae1",
"boneag2",
"afrhae1",
"dotkit1",
"chwcha1",
"rutkit1",
"lizbuz1",
"gabgos2",
"dacgos1",
"eacgos1",
"pacgos1",
"lothaw1",
"redgos1",
"dorgos1",
"tinhaw1",
"ruvcha1",
"semhaw2",
"cregos1",
"gybhaw1",
"recgos3",
"afrgos1",
"shikra1",
"levspa1",
"grfhaw1",
"fragos2",
"sptgos1",
"ruhcha1",
"grygos1",
"vargos1",
"brogos1",
"blmgos1",
"piegos1",
"necgos1",
"fijgos1",
"molgos1",
"retspa1",
"litspa1",
"rubcha1",
"japspa1",
"besra1",
"smaspa1",
"colspa1",
"vibspa1",
"madspa1",
"ovaspa2",
"eurspa1",
"shshaw",
"shshaw3",
"wemcha1",
"shshaw4",
"shshaw5",
"coohaw",
"gunhaw1",
"bichaw1",
"bichaw4",
"blagos1",
"hengos1",
"norgos",
"wemhar1",
"chacha1",
"easmah1",
"easmah2",
"swahar1",
"afmhar1",
"reuhar2",
"reuhar3",
"lowhar1",
"blahar1",
"norhar1",
"norhar2",
"brotin1",
"whbcha1",
"cinhar1",
"palhar1",
"piehar1",
"monhar1",
"redkit1",
"blakit1",
"blkkit3",
"whikit1",
"brakit1",
"wbseag1",
"specha3",
"solsee1",
"affeag1",
"pafeag1",
"whteag",
"baleag",
"stseag",
"lefeag1",
"gyhfie1",
"grabuz1",
"whebuz1",
"specha2",
"ruwbuz1",
"gyfbuz1",
"miskit",
"plukit1",
"blchaw1",
"snakit",
"slbkit1",
"crahaw",
"pluhaw",
"slchaw2",
"specha4",
"comblh1",
"cubblh1",
"ruchaw1",
"savhaw1",
"whnhaw2",
"grbhaw1",
"soleag1",
"croeag1",
"barhaw1",
"roahaw",
"colcha1",
"hrshaw",
"whrhaw1",
"whthaw",
"rebhaw2",
"bcbeag1",
"manhaw2",
"whihaw1",
"gybhaw2",
"semhaw",
"blfhaw1",
"varcha1",
"whbhaw2",
"gryhaw2",
"gryhaw3",
"reshaw",
"ridhaw1",
"brwhaw",
"whthaw1",
"shthaw",
"hawhaw",
"swahaw",
"varcha3",
"galhaw1",
"zothaw",
"rethaw",
"ruthaw1",
"ferhaw",
"rolhaw",
"uplbuz1",
"combuz6",
"combuz9",
"lolbuz1",
"bubcha1",
"combuz4",
"combuz1",
"moubuz3",
"moubuz2",
"renbuz1",
"madbuz1",
"augbuz2",
"jacbuz1",
"sooowl1",
"lesowl1",
"batgua1",
"minowl1",
"talowl1",
"lemowl1",
"aumowl1",
"sulowl1",
"marowl1",
"brnowl",
"barowl28",
"barowl7",
"afgowl1",
"beagua1",
"ausgro1",
"orbowl1",
"srlbao1",
"paphao1",
"rufowl2",
"powowl1",
"barowl1",
"sumboo1",
"souboo8",
"souboo4",
"undtin1",
"baugua1",
"souboo5",
"souboo6",
"morepo2",
"norboo1",
"brnhao1",
"brnhao3",
"choboo1",
"andhao1",
"phihao1",
"minboo1",
"andgua1",
"sulboo1",
"cebboo1",
"romboo1",
"minboo2",
"lishao1",
"toghao1",
"ocbhao1",
"cinhao1",
"molhao3",
"hanboo2",
"margua1",
"molhao2",
"chihao1",
"junhao1",
"spehao1",
"nebhao1",
"balowl",
"colowl1",
"colowl3",
"elfowl",
"lowowl1",
"rumgua1",
"borowl",
"nswowl",
"uswowl1",
"bufowl1",
"burowl",
"spoowl1",
"litowl1",
"whbowl1",
"forowl1",
"solboo1",
"refgua1",
"solboo4",
"nohowl",
"eupowl1",
"pesowl1",
"recowl1",
"asbowl1",
"javowl1",
"junowl1",
"chbowl1",
"afbowl1",
"cregua1",
"albowl1",
"norpyo1",
"nopowl",
"norpyo3",
"norpyo4",
"crpowl",
"clopyo1",
"anpowl1",
"yupowl1",
"copowl1",
"caugua1",
"tapowl1",
"capowl1",
"supowl1",
"amapyo1",
"leapyo1",
"fepowl",
"pepowl1",
"aupowl1",
"cupowl1",
"mineao1",
"whwgua1",
"wfsowl2",
"resowl1",
"sasowl1",
"ansowl1",
"flsowl1",
"mosowl2",
"jasowl2",
"misowl1",
"lusowl1",
"misowl2",
"spigua1",
"torsco1",
"madsco1",
"maysco1",
"cosowl3",
"ansowl2",
"mohsco1",
"pesowl2",
"eursco1",
"eursco3",
"pasowl3",
"dulgua1",
"arasco1",
"afsowl1",
"afrsco3",
"afrsco2",
"orsowl",
"ryusco1",
"mosowl1",
"biasco1",
"susowl1",
"sulsco5",
"pabtin1",
"dulgua3",
"sansco1",
"masowl2",
"sesowl1",
"nicsco1",
"sisowl1",
"ensowl1",
"mesowl1",
"rasowl1",
"insowl1",
"cosowl1",
"whcgua1",
"jasowl1",
"susowl2",
"phsowl1",
"negsco1",
"evesco1",
"pasowl2",
"wasowl1",
"rinsco1",
"palowl2",
"nwfowl1",
"chbgua1",
"swfowl1",
"jamowl1",
"strowl1",
"loeowl",
"mleowl1",
"styowl1",
"sheowl",
"marowl2",
"feaowl1",
"snoowl1",
"whbgua1",
"grhowl",
"grhowl2",
"eueowl1",
"roeowl1",
"pheowl1",
"caeowl1",
"speowl2",
"graeao1",
"spoeao2",
"fraeao1",
"butpig1",
"useowl1",
"veeowl1",
"sheowl1",
"baeowl1",
"sbeowl1",
"dueowl1",
"akeowl1",
"pheowl2",
"blfowl1",
"pefowl1",
"rtpgua1",
"rufowl1",
"vefowl1",
"brfowl1",
"tafowl1",
"bufowl2",
"flaowl",
"prsowl",
"whsowl1",
"bssowl",
"whtsco1",
"bfpgua1",
"trsowl",
"besowl",
"pasowl4",
"wesowl1",
"easowl1",
"basowl",
"vesowl",
"versco5",
"koesco1",
"rufsco1",
"watgua1",
"cinsco1",
"clfsco1",
"mofsco1",
"versco2",
"foosco1",
"lotsco1",
"samsco1",
"persco1",
"tabsco1",
"bkcsco1",
"blagua1",
"speowl1",
"tabowl1",
"babowl1",
"creowl1",
"spwowl1",
"mowowl1",
"brwowl1",
"tawowl1",
"tawowl3",
"himowl1",
"siwgua1",
"humowl1",
"omaowl1",
"brdowl",
"barowl13",
"fulowl1",
"rubowl2",
"chaowl1",
"rulowl1",
"uraowl1",
"pedowl1",
"bratin1",
"higgua1",
"grgowl",
"afwowl1",
"motowl",
"bawowl1",
"bkbowl1",
"rubowl3",
"spemou2",
"rebmou1",
"whbmou1",
"blnmou1",
"horgua1",
"refmou1",
"cuckoo1",
"earque",
"pavque1",
"gohque1",
"whtque1",
"resque1",
"creque1",
"cubtro1",
"histro1",
"noccur1",
"lattro1",
"slttro1",
"buttro1",
"bkttro2",
"blttro1",
"blhtro1",
"cittro1",
"whttro1",
"baitro1",
"gnbtro1",
"crecur2",
"gartro1",
"viotro3",
"viotro2",
"blctro1",
"surtro1",
"blttro2",
"eletro",
"moutro1",
"coltro1",
"mastro1",
"salcur1",
"nartro1",
"bactro1",
"battro1",
"javtro1",
"sumtro1",
"maltro1",
"rentro1",
"diatro1",
"phitro1",
"whitro1",
"rabcur2",
"cirtro1",
"scrtro1",
"orbtro2",
"rehtro1",
"wartro1",
"hoopoe",
"eurhoo2",
"madhoo1",
"forwoo1",
"whhwoo1",
"helcur1",
"grewoo2",
"blbwoo2",
"viowoo1",
"viowoo3",
"blsbil1",
"cosbil1",
"absbil1",
"soghor1",
"trbhor1",
"wrbhor2",
"horcur2",
"drbhor1",
"srbhor1",
"rebhor1",
"vddhor1",
"jachor1",
"sybhor1",
"eybhor1",
"brahor1",
"crohor1",
"afphor1",
"grecur1",
"hemhor1",
"afghor1",
"rbdhor1",
"pabhor1",
"piphor1",
"truhor1",
"bnchor1",
"whthor1",
"bawhor2",
"sichor1",
"bubcur1",
"blchor1",
"yechor1",
"bldhor1",
"whchor3",
"whchor2",
"rhihor1",
"grehor1",
"rufhor1",
"palhor1",
"orphor1",
"gyltin1",
"yekcur1",
"maphor1",
"blahor1",
"maghor2",
"ceghor1",
"inghor2",
"ruchor1",
"brnhor1",
"buchor1",
"runhor1",
"blyhor1",
"blacur1",
"wrehor1",
"sumhor1",
"plphor1",
"knohor1",
"wrihor2",
"sulhor1",
"wrihor1",
"luzhor1",
"minhor2",
"minhor1",
"watcur1",
"samhor1",
"tarhor1",
"rucrol2",
"indrol2",
"indrol3",
"puwrol1",
"ratrol2",
"librol2",
"abyrol2",
"eurrol1",
"bafcur1",
"blbrol1",
"bltrol1",
"brbrol1",
"dollar1",
"purrol1",
"slgrol1",
"scagrr1",
"plgrol1",
"rhgrol1",
"ltgrol1",
"rebcur1",
"grbkin1",
"scakin1",
"spokin1",
"blckin2",
"ruckin1",
"hobkin1",
"bankin1",
"copkin1",
"bipkin1",
"nupkin1",
"whbgui1",
"lipkin1",
"bubpak1",
"bubpak2",
"rbpkin1",
"bhpkin1",
"lickin2",
"shbkoo1",
"laukoo1",
"blwkoo1",
"spakoo1",
"helgui",
"rubkoo1",
"whrkin1",
"stbkin1",
"bnwkin1",
"rudkin1",
"whtkin2",
"chbkin2",
"blckin1",
"gyhkin1",
"brhkin1",
"plugui1",
"strkin1",
"blbkin4",
"wookin1",
"mankin2",
"blbkin3",
"rulkin1",
"bawkin1",
"forkin1",
"nebkin1",
"ultkin1",
"cregui3",
"chbkin1",
"somkin1",
"colkin1",
"colkin9",
"colkin2",
"melkin1",
"packin1",
"talkin1",
"mickin5",
"beakin2",
"vulgui1",
"sackin1",
"cibkin1",
"chakin2",
"mankin1",
"tahkin1",
"rebkin2",
"yebkin1",
"moukin1",
"dwakin1",
"afpkin1",
"reltin1",
"stopar1",
"mapkin1",
"whbkin1",
"malkin1",
"malkin2",
"smbkin1",
"bubkin2",
"shbkin1",
"blekin1",
"comkin1",
"hackin1",
"nahfra2",
"bkbkin1",
"phikin1",
"sulkin1",
"varkin1",
"vardwk1",
"vardwk6",
"vardwk15",
"silkin1",
"azukin1",
"amakin1",
"bcwpar1",
"ampkin1",
"grnkin",
"garkin1",
"crekin1",
"giakin3",
"rinkin1",
"belkin1",
"piekin1",
"cubtod1",
"brbtod1",
"ltwpar1",
"nabtod1",
"jamtod1",
"purtod1",
"todmot1",
"bltmot1",
"rucmot1",
"bucmot1",
"bucmot2",
"bucmot3",
"bucmot4",
"bewpar1",
"higmot1",
"rufmot1",
"rucmot2",
"kebmot1",
"brbmot1",
"tubmot1",
"rbbeat1",
"bbbeat1",
"pbbeat1",
"bhbeat1",
"mouqua",
"bhbeat2",
"bumbee1",
"blbeat1",
"stbeat1",
"libeat1",
"bbbeat2",
"bubbee2",
"ccbeat1",
"rtbeat1",
"wfbeat1",
"scaqua",
"wtbeat1",
"bobeat1",
"grnbee1",
"grnbee2",
"grnbee3",
"bcbeat1",
"mabeat1",
"btbeat1",
"rabeat1",
"btbeat2",
"elequa",
"chbeat1",
"eubeat1",
"robeat1",
"ncbeat1",
"scbeat1",
"whejac1",
"purjac2",
"dubjac1",
"pahjac1",
"brojac2",
"calqua",
"whtjac1",
"thtjac1",
"yebjac1",
"bucjac1",
"rutjac1",
"grtjac1",
"cocjac2",
"whcjac1",
"blfjac1",
"purjac1",
"gamqua",
"brojac1",
"parjac1",
"grejac2",
"whnpuf2",
"guipuf1",
"bubpuf1",
"blbpuf1",
"brbpuf1",
"piepuf1",
"chcpuf1",
"sobkiw1",
"yeltin1",
"banqua1",
"spopuf1",
"socpuf1",
"colpuf1",
"barpuf1",
"whepuf1",
"strpuf1",
"wespuf1",
"spbpuf1",
"spbpuf3",
"rutpuf1",
"norbob",
"rutpuf3",
"crcpuf1",
"whcpuf1",
"sempuf1",
"blspuf1",
"runpuf1",
"whwpuf1",
"moupuf1",
"lanmon1",
"rubnun1",
"bltbob1",
"fucnun1",
"bronun1",
"gycnun1",
"rucnun1",
"whfnun2",
"blanun1",
"blfnun1",
"whfnun1",
"yebnun1",
"swwpuf1",
"crebob2",
"sccbar1",
"scbbar2",
"spcbar1",
"orfbar1",
"whmbar1",
"blgbar1",
"brcbar1",
"blsbar1",
"gilbar1",
"ficbar1",
"crebob1",
"letbar1",
"rehbar1",
"schbar1",
"verbar1",
"prbbar1",
"toubar1",
"emetou3",
"noremt1",
"emetou4",
"souemt1",
"mawqua1",
"emetou8",
"grbtou1",
"chttou3",
"chttou2",
"crrtou1",
"yebtou1",
"blbtou1",
"greara1",
"letara1",
"renara1",
"swwqua1",
"ivbara1",
"ivbara3",
"blnara1",
"cheara1",
"mabara1",
"colara1",
"colara4",
"colara5",
"fibara1",
"cucara1",
"bewqua1",
"saftou2",
"yeetou1",
"guitou1",
"goctou1",
"tattou1",
"goutou1",
"spbtou1",
"gybmot1",
"pbmtou1",
"homtou1",
"rfwqua1",
"bbmtou1",
"rebtou2",
"chbtou1",
"chbtou3",
"chotou1",
"kebtou1",
"toctou1",
"whttou1",
"bkmtou1",
"fitbar1",
"bfwqua1",
"grebar1",
"revbar1",
"brhbar1",
"linbar1",
"whcbar1",
"grebar3",
"brtbar1",
"gowbar2",
"recbar1",
"retbar1",
"blctin1",
"chwqua1",
"blbbar2",
"yefbar1",
"gotbar2",
"gotbar3",
"blbbar5",
"indbar1",
"chibar1",
"taibar2",
"bltbar2",
"tutbar1",
"dbwqua1",
"moubar1",
"moubar2",
"yecbar1",
"flfbar1",
"gonbar1",
"litbar1",
"blebar1",
"borbar1",
"crfbar3",
"crfbar1",
"rbwqua1",
"copbar1",
"brnbar2",
"soobar2",
"gytbar1",
"brnbar1",
"nafbar1",
"whebar1",
"whybar1",
"ancbar1",
"grebar2",
"tawqua1",
"spetin1",
"gretin2",
"moutin1",
"westin1",
"rertin1",
"yettin1",
"yertin1",
"reftin1",
"yeftin1",
"yesbar1",
"gowqua1",
"habbar1",
"refbar2",
"miobar1",
"piebar1",
"spfbar1",
"bltbar1",
"banbar1",
"viebar1",
"whhbar1",
"chabar1",
"venwoq1",
"refbar1",
"blbbar3",
"blcbar1",
"brbbar1",
"blbbar1",
"dotbar1",
"beabar1",
"yebbar1",
"crebar1",
"raybar1",
"bbwqua1",
"yebbar2",
"darbar1",
"darbar3",
"grbhon2",
"wahhon1",
"yefhon2",
"dwahon1",
"wilhon2",
"palhon1",
"leahon2",
"sfwqua1",
"thbhon1",
"y00400",
"spohon2",
"scthon1",
"yerhon1",
"grehon2",
"eurwry",
"runwry1",
"spepic1",
"babpic1",
"stwqua1",
"lafpic1",
"oripic1",
"gospic1",
"scapic1",
"ecupic1",
"whbpic2",
"arrpic1",
"spopic1",
"spcpic1",
"varpic1",
"spwqua1",
"whbpic1",
"ocepic2",
"occpic1",
"whwpic1",
"runpic1",
"rubpic1",
"ochpic1",
"motpic1",
"plbpic1",
"fibpic1",
"thitin1",
"sinqua1",
"olipic1",
"grapic1",
"chepic1",
"afrpic1",
"rufpic1",
"whbpic3",
"antpic1",
"gabwoo3",
"heswoo1",
"whiwoo1",
"monqua",
"lewwoo",
"guawoo1",
"purwoo1",
"rehwoo",
"acowoo",
"yetwoo2",
"yefwoo1",
"gonwoo1",
"beawoo2",
"blcwoo1",
"ocequa1",
"whfwoo1",
"hiswoo1",
"jamwoo1",
"gocwoo1",
"grbwoo1",
"yucwoo",
"recwoo1",
"gilwoo",
"hofwoo1",
"gofwoo",
"tafqua1",
"gofwoo2",
"rebwoo",
"weiwoo1",
"wilsap",
"yebsap",
"rensap",
"rebsap",
"cugwoo1",
"buswoo1",
"brewoo1",
"ferpar2",
"growoo1",
"fiswoo1",
"benwoo1",
"nubwoo1",
"gotwoo1",
"knywoo1",
"grbwoo2",
"ligwoo1",
"sulwoo2",
"bncwoo3",
"crepar1",
"gycwoo1",
"phiwoo1",
"bncwoo2",
"pygwoo1",
"ettwoo1",
"attwoo1",
"bkbwoo",
"arawoo1",
"brfwoo1",
"miswoo1",
"hilpar1",
"yecwoo1",
"beawoo1",
"gocwoo3",
"fibwoo1",
"ligwoo3",
"spbwoo2",
"abywoo1",
"carwoo1",
"gabwoo1",
"melwoo1",
"sicpar1",
"ellwoo1",
"grywoo1",
"gyhwoo1",
"oliwoo2",
"brbwoo1",
"nutwoo",
"labwoo",
"dowwoo",
"crbwoo3",
"leswoo1",
"chbpar2",
"litwoo2",
"dofwoo1",
"whswoo2",
"chewoo3",
"strwoo6",
"scbwoo3",
"yevwoo1",
"babwoo2",
"blcwoo3",
"rerwoo1",
"whnpar2",
"reswoo1",
"gocwoo2",
"yeewoo1",
"recwoo",
"smbwoo1",
"ariwoo",
"strwoo",
"haiwoo",
"whhwoo",
"rubwoo1",
"slbtin1",
"rutpar1",
"fubwoo2",
"frbwoo1",
"stbwoo4",
"darwoo1",
"himwoo1",
"sinwoo1",
"syrwoo1",
"whwwoo1",
"grswoo",
"okiwoo1",
"chhpar3",
"whbwoo1",
"ruwwoo1",
"stcwoo1",
"whtwoo2",
"litwoo1",
"yetwoo1",
"gogwoo1",
"whbwoo7",
"goowoo1",
"grcwoo1",
"haipar1",
"goowoo3",
"crmwoo2",
"blnwoo1",
"spbwoo1",
"grbwoo3",
"norfli",
"gilfli",
"ferfli1",
"chifli1",
"andfli1",
"taipar1",
"camfli1",
"cinwoo1",
"wavwoo1",
"scbwoo5",
"chcwoo1",
"chewoo2",
"pacwoo1",
"blcwoo4",
"blcwoo5",
"crcwoo2",
"whcpar1",
"ruhwoo1",
"caawoo1",
"rinwoo1",
"helwoo1",
"blbwoo3",
"linwoo1",
"pilwoo",
"whbwoo2",
"andwoo1",
"blawoo1",
"babpar1",
"powwoo1",
"crbwoo1",
"renwoo1",
"robwoo1",
"crcwoo1",
"pabwoo1",
"guawoo2",
"crbwoo2",
"magwoo1",
"ivbwoo",
"ornpar1",
"banwoo2",
"chtwoo1",
"greyel1",
"lesyel1",
"crwwoo1",
"stbwoo3",
"lacwoo1",
"sttwoo1",
"scbwoo1",
"japwoo1",
"rebpar5",
"eugwoo2",
"grnwoo3",
"levwoo1",
"recwoo2",
"blhwoo1",
"gyfwoo1",
"gyhwoo4",
"himfla1",
"comfla1",
"sptfla1",
"gybpar3",
"bkrfla1",
"bkrfla2",
"busfla1",
"luzfla1",
"yeffla1",
"rehfla1",
"javfla1",
"grefla1",
"crbfla1",
"whnwoo1",
"chbpar1",
"pahwoo1",
"bamwoo1",
"olbwoo2",
"marwoo1",
"baywoo1",
"orbwoo1",
"rufwoo2",
"burwoo1",
"babwoo3",
"bunwoo1",
"chotin1",
"snopar1",
"ashwoo1",
"soowoo1",
"sousow1",
"grswoo1",
"relser1",
"bllser1",
"blacar1",
"retcar2",
"carcar1",
"moucar1",
"blophe1",
"whtcar1",
"strcar1",
"y00678",
"yehcar1",
"chicar1",
"laufal1",
"baffal1",
"plffal1",
"liffal1",
"cryfof1",
"westra1",
"sbffal1",
"coffal1",
"buffal1",
"spwfal2",
"pygfal1",
"whrfal1",
"colfal1",
"bltfal1",
"whffal1",
"phifal1",
"sattra1",
"piefal2",
"leskes1",
"eurkes",
"eurkes1",
"madkes1",
"spokes1",
"auskes1",
"amekes",
"foxkes1",
"grykes1",
"blytra1",
"bankes1",
"renfal1",
"reffal1",
"amufal1",
"elefal1",
"soofal1",
"aplfal",
"merlin",
"batfal1",
"orbfal1",
"temtra1",
"eurhob",
"afrhob1",
"orihob1",
"aushob1",
"nezfal1",
"brofal1",
"gryfal1",
"lanfal1",
"lagfal1",
"sakfal1",
"verpar1",
"gyrfal",
"prafal",
"perfal",
"taifal1",
"kakapo2",
"kea1",
"nezkak1",
"cockat",
"rtbcoc1",
"glbcoc1",
"szepar1",
"ytbcoc1",
"whtblc1",
"slbblc1",
"palcoc1",
"gagcoc1",
"galah",
"pincoc1",
"lobcor1",
"wescor1",
"litcor2",
"himmon1",
"duccoc1",
"succoc",
"blecoc1",
"grepar",
"grypar1",
"refpar5",
"yefpar4",
"brnpar1",
"bnnpar2",
"meypar1",
"chimon1",
"ruepar1",
"brhpar2",
"senpar",
"rebpar1",
"litpar2",
"scspar1",
"bufpar1",
"sarpar2",
"brbpar1",
"gotpar2",
"vartin1",
"kokphe1",
"spwpar2",
"gyhpar1",
"moupar2",
"barpar1",
"rufpar1",
"andpar1",
"teppar1",
"monpar",
"monpar2",
"tuipar1",
"wiltur",
"plapar1",
"whwpar",
"yecpar",
"gycpar1",
"orcpar",
"cowpar1",
"gowpar2",
"recpar3",
"blbpar4",
"brhpar1",
"ocetur1",
"sahpar1",
"rofpar2",
"orcpar2",
"caipar2",
"balpar1",
"vulpar1",
"rufpar2",
"inwpar1",
"refpar2",
"blwpar1",
"rufgro",
"duspar1",
"rebpar2",
"schpar1",
"spfpar1",
"spfpar2",
"blhpar1",
"whcpar",
"brwpar1",
"shtpar2",
"yefpar5",
"hazgro1",
"fespar1",
"vinpar1",
"tucpar1",
"respar2",
"blbpar1",
"whfpar1",
"yebpar1",
"cubpar1",
"hispar1",
"licpar",
"saggro",
"relpar",
"relpar4",
"recpar",
"yelpar1",
"blcpar2",
"rebpar7",
"renpar1",
"yehpar",
"yenpar1",
"ywcpar",
"gusgro",
"bufpar",
"scnpar1",
"meapar1",
"meapar",
"kawpar1",
"imppar1",
"retpar1",
"orwpar",
"dubpar1",
"mexpar1",
"dusgro",
"buwpar2",
"buwpar1",
"buwpar3",
"grrpar1",
"spepar1",
"pacpar2",
"yefpar2",
"blhpar4",
"whbpar1",
"refpar3",
"soogro1",
"bltpar2",
"blwpar2",
"mabpar",
"peapar1",
"crbpar1",
"gncpar",
"pfrpar1",
"gybpar1",
"mafpar3",
"paipar1",
"shtgro",
"paipar6",
"sanpar2",
"bonpar1",
"rofpar3",
"sampar1",
"fispar1",
"matpar2",
"elopar1",
"whnpar1",
"blcpar1",
"rustin1",
"grpchi",
"brbpar2",
"reepar1",
"rohpar1",
"suwpar1",
"auspar1",
"slbpar1",
"burpar",
"hyamac1",
"indmac1",
"thbpar",
"lepchi",
"mafpar1",
"oltpar1",
"orfpar",
"pefpar1",
"brtpar1",
"cacpar1",
"duhpar",
"bkhpar",
"subpar1",
"janpar1",
"whtpta1",
"gocpar2",
"rebmac2",
"buhmac1",
"yecmac",
"buwmac1",
"baymac",
"chfmac1",
"milmac",
"scamac1",
"ragmac1",
"wilpta",
"goppar1",
"yeepar1",
"golpar3",
"resmac2",
"bucpar",
"grnpar",
"grnpar2",
"grnpar3",
"pacpar1",
"scfpar1",
"rocpta1",
"scfpar3",
"mitpar",
"rempar",
"crfpar",
"whepar2",
"cubpar2",
"hispar",
"pespar1",
"vaspar1",
"blapar1",
"sprgro",
"levpar1",
"seypar1",
"ycppar1",
"geppar1",
"bfppar1",
"fippar1",
"rbppar1",
"suppar1",
"regpar1",
"alepar1",
"wescap1",
"pakpar1",
"aukpar1",
"rewpar1",
"mirtai1",
"luzrat1",
"bhrtai1",
"bcrtai1",
"eclpar",
"recpar2",
"blcpar3",
"blbcap1",
"sinpar1",
"blrpar1",
"blnpar1",
"azrpar1",
"gyhpar2",
"slhpar1",
"blhpar3",
"plhpar1",
"rebpar4",
"lotpar2",
"blagro1",
"malpar1",
"laypar1",
"alepar2",
"rorpar",
"maupar1",
"brtpar2",
"patpar1",
"motpar1",
"rerpar1",
"bluebo1",
"caugro1",
"bluebo4",
"mulpar1",
"hoopar1",
"gospar1",
"recpar1",
"greros2",
"criros2",
"norros1",
"pahros1",
"easros1",
"bartin2",
"tibpar1",
"wesros1",
"polpar1",
"swipar1",
"crspar1",
"maspar2",
"respar1",
"horpar2",
"necpar1",
"chipar1",
"noipar1",
"grypar",
"yefpar3",
"malpar2",
"refpar4",
"gropar1",
"nigpar2",
"boupar2",
"blwpar3",
"elepar1",
"turpar1",
"sccpar1",
"daupar1",
"plflor1",
"reflor2",
"reflor1",
"failor1",
"joslor1",
"paplor1",
"paplor3",
"duclor1",
"meelor1",
"pallor1",
"reephe1",
"collor1",
"blclor2",
"ultlor1",
"kuhlor1",
"blulor1",
"yeblor2",
"orblor1",
"yeblor1",
"publor1",
"blclor1",
"mikphe1",
"varlor1",
"puclor1",
"litlor1",
"duslor1",
"carlor1",
"brolor1",
"blalor1",
"yeslor1",
"gollor1",
"muslor1",
"humphe1",
"yaglor2",
"ornlor1",
"pohlor1",
"scblor1",
"railor7",
"olhlor1",
"railor3",
"budger",
"lafpar1",
"edfpar1",
"golphe",
"safpar1",
"obfpar1",
"vehpar1",
"cehpar1",
"phihap1",
"bchpar1",
"suhpar1",
"mohpar1",
"sulhap1",
"sahpar2",
"laaphe1",
"paphap1",
"pyghap1",
"gyhlov1",
"rehlov1",
"blwlov1",
"peflov",
"fislov1",
"yeclov",
"lillov1",
"blclov1",
"rinphe1",
"riflem1",
"soiwre1",
"sapayo1",
"schasi1",
"velasi1",
"sunasi1",
"yebasi1",
"grabro1",
"lotbro1",
"dusbro1",
"rinphe2",
"visbro1",
"watbro1",
"sibbro1",
"barbro1",
"banbro1",
"baybro1",
"gyhbro1",
"rusbro1",
"afrbro1",
"grebro1",
"smbtin1",
"chephe1",
"whibro1",
"earpit1",
"giapit1",
"runpit1",
"schpit1",
"blnpit1",
"blrpit1",
"banpit3",
"banpit4",
"blhpit1",
"whieap2",
"blupit1",
"babpit1",
"rebpit1",
"sulpit3",
"siapit1",
"molpit1",
"sompit1",
"pappit1",
"loupit1",
"neipit1",
"whieap1",
"blcpit1",
"garpit1",
"bkhpit1",
"blbpit1",
"afrpit1",
"grbpit1",
"indpit1",
"blwpit1",
"manpit1",
"hoopit2",
"brephe1",
"faipit1",
"noipit1",
"ivbpit1",
"elepit2",
"elepit7",
"elepit6",
"blfpit1",
"azbpit1",
"suppit1",
"raipit1",
"blephe1",
"tatlea1",
"soalea1",
"shblea1",
"sctlea1",
"bltlea1",
"grtlea1",
"rublea1",
"coamin1",
"slbmin1",
"commin1",
"swiphe1",
"punmin1",
"cammin2",
"thbmin1",
"rubmin1",
"gramin1",
"shbmin1",
"dawmin1",
"sptwoo1",
"oliwoo1",
"lotwoo1",
"bulphe1",
"tyrwoo1",
"whcwoo1",
"rudwoo1",
"tawwoo1",
"plbwoo1",
"plwwoo1",
"webwoo1",
"citwoo1",
"lobwoo1",
"nobwoo1",
"kalphe",
"amabaw1",
"blbwoo1",
"hofwoo2",
"plawoo1",
"babwoo1",
"rebwoo1",
"uniwoo1",
"rebwoo4",
"stbwoo1",
"mouwoo1",
"silphe",
"whtwoo1",
"grrwoo1",
"strwoo2",
"leswoo2",
"leswoo4",
"chrwoo1",
"ocewoo1",
"ocewoo2",
"elewoo1",
"spiwoo1",
"compea",
"butwoo1",
"cocwoo1",
"ivbwoo1",
"blswoo1",
"spowoo1",
"olbwoo1",
"stbwoo2",
"zimwoo2",
"rebscy1",
"blbscy1",
"bartin1",
"grepea1",
"cubscy1",
"brbscy1",
"grescy1",
"scbwoo4",
"whswoo1",
"sthwoo1",
"nabwoo1",
"spcwoo1",
"monwoo1",
"scawoo1",
"scbpar1",
"scawoo2",
"linwoo3",
"linwoo4",
"inawoo1",
"ducwoo1",
"slbxen1",
"plaxen1",
"strxen1",
"potpal1",
"rutxen1",
"crhpar1",
"whttre2",
"stbear2",
"rocear1",
"batear1",
"crachi1",
"buftuf1",
"buftuf3",
"strtuf1",
"ruwbar1",
"bolear1",
"redspu1",
"chaear1",
"wibhor1",
"palhor2",
"palhor4",
"palhor5",
"pabhor2",
"leshor1",
"rufhor2",
"crehor1",
"shtstr1",
"paispu1",
"wrlrus1",
"cubree1",
"strear1",
"sctear1",
"pafear1",
"bubear2",
"lotcin1",
"blacin1",
"buwcin1",
"corcin1",
"ceyspu1",
"chwcin1",
"crwcin1",
"olrcin1",
"gyfcin1",
"stbcin1",
"roycin1",
"whbcin1",
"whwcin1",
"dabcin1",
"surcin1",
"palpep1",
"seacin1",
"ducfog1",
"wcfgle1",
"grexen1",
"pabtre1",
"crytre1",
"swfgle",
"rurfog1",
"alfgle1",
"bcfgle1",
"bopphe1",
"crfgle1",
"mofgle1",
"stfgle1",
"rutfog1",
"whbfog1",
"obfgle2",
"bbfgle1",
"rumfog1",
"wtfgle1",
"lifgle1",
"mapphe1",
"rnfgle1",
"gufgle1",
"perrec1",
"bolrec1",
"chwhoo1",
"bffgle",
"chwfog1",
"cangro1",
"ccfgle1",
"hhfgle1",
"gepphe1",
"rufgle1",
"samfog1",
"unitre1",
"flatre1",
"rubtre1",
"stbtre1",
"blbtre1",
"strtre1",
"stctre1",
"ccfgle2",
"nibkiw1",
"tattin1",
"grypep3",
"brfgle1",
"btfgle1",
"butfog4",
"strwoo1",
"strwoo5",
"obfgle3",
"parfog1",
"perfog1",
"wefgle1",
"spobar1",
"grypep2",
"whtbar1",
"rudtre1",
"fudtre1",
"peatre1",
"thtray1",
"demwir1",
"tatspi1",
"bctspi1",
"tutspi1",
"pmtspi1",
"mopphe1",
"sttspi2",
"rctspi1",
"wbtspi1",
"sttspi1",
"antspi1",
"artspi1",
"ruftho1",
"ruftho3",
"stftho1",
"littho1",
"btpphe1",
"chbtho1",
"spbtho1",
"frbtho1",
"gretho2",
"oretho1",
"orbtho1",
"whbspi2",
"firgat1",
"lalbru1",
"crbcan1",
"mobpar1",
"crbcan4",
"crbcan5",
"bercan1",
"shbcan1",
"cipcan1",
"hudcan1",
"auscan1",
"lifcan1",
"mascan1",
"juncan1",
"chbpar3",
"sctcan1",
"stbcan1",
"puncan1",
"sttcan1",
"corcan1",
"itaspi1",
"shbcan2",
"bltthi1",
"punthi1",
"vilthi2",
"taibap1",
"cancan1",
"rufcan1",
"maqcan1",
"eyrthi1",
"ocbthi1",
"perthi1",
"whcthi1",
"mocthi1",
"pilgra1",
"orfplu2",
"grejun1",
"dobgra1",
"equgra1",
"rorbar1",
"strsof1",
"orisof1",
"deasof1",
"plasof1",
"rumsof1",
"stbree2",
"sutspi1",
"redjun",
"marspi2",
"licspi1",
"rubspi4",
"rubspi5",
"parspi1",
"crespi1",
"stcspi2",
"bolspi1",
"olispi1",
"palspi1",
"grejun2",
"gyhspi1",
"crcspi1",
"refspi1",
"tepspi1",
"stcspi1",
"asbspi1",
"licspi5",
"spespi1",
"scaspi1",
"dutcan1",
"rewtin1",
"ceyjun1",
"patcan2",
"stecan1",
"caccan1",
"bcwspi1",
"caacac1",
"rufcac2",
"brncac1",
"whtcac2",
"yecspi2",
"rawspi2",
"forfra2",
"whbspi1",
"chospi2",
"occspi1",
"gybspi1",
"plcspi1",
"whlspi1",
"marspi3",
"grespi2",
"necspi3",
"necspi1",
"crefra2",
"rubspi3",
"slaspi1",
"sitspi1",
"resspi2",
"rucspi1",
"bahspi1",
"pinspi1",
"dusspi1",
"mccspi1",
"cabspi1",
"gryfra",
"cibspi1",
"spispi1",
"dabspi1",
"pabspi1",
"sofspi1",
"azaspi1",
"apuspi1",
"whwspi1",
"rubspi2",
"hotspi1",
"swafra1",
"blhspi1",
"ruhspi1",
"rufspi1",
"bltspi1",
"stbspi1",
"rudspi1",
"chtspi1",
"rurant1",
"chsant1",
"aswant1",
"chifra1",
"wibant1",
"spwant2",
"rusant1",
"rufant12",
"dowant1",
"blabus1",
"rebbus1",
"ronbus1",
"chtant1",
"brbant2",
"blkfra",
"wheant1",
"rubsti1",
"madant1",
"fooant1",
"ornant1",
"rutant3",
"stbant2",
"yapant1",
"gybant1",
"bltant2",
"paifra1",
"mouant",
"pygant1",
"guista1",
"amasta1",
"pacant",
"cheant1",
"klaant1",
"stcant4",
"yetant1",
"sclant1",
"coqfra2",
"whfant2",
"whfant6",
"slaant1",
"risant1",
"salant1",
"lowant1",
"batant3",
"iheant1",
"uniant1",
"alaant1",
"whtfra2",
"plwant2",
"gryant1",
"leaant1",
"stcant3",
"orbant1",
"bawant1",
"nabant1",
"blhant4",
"whfant1",
"whfant4",
"huatin1",
"rewfra2",
"serant1",
"blbant2",
"rubant4",
"sinant1",
"parant1",
"banant2",
"sttant1",
"pltant1",
"rubant3",
"dutant2",
"finfra2",
"satant1",
"cinant1",
"blsant1",
"peaant1",
"bahant1",
"caaant1",
"blcant2",
"ajpant1",
"mapant1",
"crbant1",
"moofra2",
"astant1",
"sptant1",
"dugant1",
"todant1",
"spbant4",
"rorant1",
"pecant1",
"labant1",
"ancant1",
"yebant2",
"gywfra1",
"ruwant3",
"ruwant4",
"spbant5",
"plaant1",
"stcant1",
"spcant1",
"rubant2",
"bicant4",
"pluant3",
"whsant4",
"orrfra2",
"colant1",
"blbant1",
"batant2",
"barant1",
"chaant1",
"bacant2",
"linant1",
"chbant2",
"blhant2",
"blaant1",
"shefra1",
"cocant1",
"blgant2",
"casant1",
"whsant2",
"uniant2",
"plwant1",
"mocant1",
"uplant1",
"wesant1",
"norsla1",
"tibsno1",
"natsla1",
"bolsla1",
"plasla1",
"soosla1",
"amaant2",
"acrant1",
"stbant1",
"varant1",
"ruwant2",
"rucant1",
"altsno1",
"blcant4",
"sicant1",
"gloant1",
"whbant2",
"fasant1",
"bamant1",
"greant1",
"latant1",
"tufant1",
"bltant3",
"causno1",
"undant2",
"fulant1",
"spbant3",
"giaant2",
"spfant1",
"whpant1",
"whmant2",
"oceant1",
"bicant2",
"whcant1",
"cassno1",
"rutant4",
"whtant1",
"lunant2",
"baeant1",
"harant1",
"whbant5",
"chcant1",
"hacant1",
"bsbeye1",
"rwbeye1",
"tactin1",
"himsno",
"pafant1",
"scbant3",
"scbant8",
"ferant1",
"berant1",
"rutant1",
"ocrant1",
"dutant1",
"scaant2",
"strant2",
"sespar1",
"samant2",
"klaant2",
"lotant1",
"sthant1",
"guiwaa1",
"imewaa1",
"perwaa1",
"yebwaa1",
"ronwaa1",
"spiwaa1",
"sanpar1",
"manwaa1",
"yebant3",
"chtant2",
"zimant1",
"wilant1",
"parant2",
"blaant4",
"blaant5",
"dusant1",
"blaant2",
"broqua1",
"manant1",
"rdjant1",
"gryant2",
"magant1",
"banant1",
"jetant1",
"ribant1",
"febant1",
"whiant1",
"scaant3",
"snmqua2",
"whbant4",
"squant1",
"blcant3",
"batant1",
"spoant1",
"spbant6",
"dobant2",
"silant1",
"pluant1",
"slcant3",
"blbqua1",
"spwant3",
"humant1",
"brhant1",
"rufant4",
"rorant2",
"cauant2",
"chbant1",
"gyhant1",
"sttant3",
"esmant1",
"comqua1",
"dumant3",
"dumant1",
"whbant1",
"bltant1",
"whlant1",
"blfant2",
"whbant6",
"asbant1",
"bacant1",
"wesfie1",
"japqua",
"whbfie9",
"eaafie1",
"fbfeye1",
"wsfeye1",
"sleant1",
"blhant3",
"allant1",
"whsant1",
"goeant1",
"sooant1",
"harqua1",
"immant1",
"zelant1",
"rucant2",
"blfant1",
"bkfant2",
"rufant3",
"blhant1",
"rubant1",
"shtant1",
"strant3",
"raiqua1",
"sucant1",
"rutant2",
"schant1",
"barant2",
"undant1",
"giaant1",
"greant2",
"varant2",
"mouant1",
"scaant1",
"orntin1",
"stuqua1",
"plbant1",
"ocsant1",
"eluant1",
"chcant2",
"watant1",
"samant1",
"cunant1",
"sthant2",
"gynant1",
"jocant1",
"barpar2",
"chnant1",
"pabant1",
"whtant2",
"yebant1",
"whbant3",
"rutant5",
"bayant1",
"rawant1",
"rufant5",
"rufant6",
"arapar1",
"rufant7",
"bicant3",
"chaant4",
"equant1",
"rufant8",
"cheant2",
"chaant5",
"panant1",
"rufant9",
"oxaant1",
"relpar1",
"rufant10",
"punant1",
"rufant11",
"tawant1",
"brbant1",
"antant1",
"rufant2",
"stcant2",
"spoant6",
"spoant5",
"chukar",
"alfant1",
"masant1",
"thiant1",
"whlant2",
"amaant1",
"whbant7",
"spbant1",
"thlant2",
"thlant3",
"tepant1",
"rocpar2",
"ocbant1",
"scbant2",
"hooant1",
"perant1",
"ocfant1",
"rubant5",
"rubant7",
"slcant2",
"slcant5",
"crfant1",
"phipar1",
"rufgna3",
"chbgna1",
"hoogna1",
"astgna1",
"rufgna2",
"slagna1",
"chcgna1",
"blcgna1",
"blbgna1",
"blcant1",
"przpar1",
"rucant3",
"ocetap1",
"cthhue1",
"bthhue1",
"moutur1",
"whttap1",
"chutap1",
"cregal1",
"sangal1",
"rubtap1",
"jubqua1",
"spobam1",
"slabri1",
"strbri1",
"ocftap1",
"asctap1",
"whbtap1",
"bahtap1",
"martap1",
"diatap2",
"bratap1",
"robqua1",
"roctap1",
"platap1",
"sertap1",
"moctap1",
"dustap1",
"magtap1",
"anctap1",
"whwtap1",
"partap4",
"partap2",
"chitin1",
"pabqua1",
"partap1",
"whbtap2",
"zimtap1",
"puntap1",
"diatap1",
"viltap1",
"amptap1",
"miltap1",
"nebtap1",
"tritap1",
"harfra3",
"boltap1",
"whctap1",
"samtap1",
"lottap1",
"ruvtap1",
"blatap2",
"laftap1",
"juntap1",
"unitap1",
"tsctap1",
"camfra2",
"blatap1",
"siftap1",
"nartap2",
"tactap1",
"chotap1",
"upmtap1",
"stitap1",
"alptap1",
"ecutap1",
"cartap1",
"hanfra2",
"mattap1",
"brrtap1",
"pertap1",
"mertap1",
"chutap2",
"spitap2",
"colcre1",
"colcre2",
"olccre1",
"marcre1",
"chnfra2",
"elecre1",
"grhpip1",
"wibpip1",
"bkcpip1",
"platyr2",
"yuntyr1",
"roltyr3",
"roltyr1",
"gretyr1",
"reityr1",
"chnfra3",
"scltyr1",
"gyctyr1",
"sohtyr1",
"plctyr1",
"blctyr1",
"ashtyr1",
"tartyr1",
"yectyr1",
"forela1",
"gryela3",
"ercfra",
"fooela1",
"pacela1",
"yecela1",
"greela",
"yebela1",
"carela1",
"larela1",
"norela1",
"whcela1",
"whcela4",
"djifra1",
"smbela1",
"oliela1",
"slaela1",
"mobela1",
"broela1",
"plcela1",
"lesela1",
"cooela1",
"rucela1",
"mouela1",
"swifra2",
"higela2",
"higela3",
"greela1",
"sieela3",
"sieela2",
"graela1",
"yebtyr1",
"brctyr",
"whltyr1",
"nobtyr",
"ahafra2",
"sobtyr1",
"suifly1",
"whttyr2",
"whttyr1",
"bubtyr1",
"ruwtyr1",
"subtyr1",
"whbtyr1",
"bkctit1",
"pcttyr1",
"brutin1",
"gysfra1",
"abttyr1",
"ybttyr1",
"tuttyr1",
"jfttyr1",
"agitit1",
"unstit1",
"tortyr1",
"rivtyr1",
"sootyr1",
"y01036",
"jacfra2",
"whbtyr2",
"gyctyr2",
"moctyr7",
"moctyr6",
"yeltyr1",
"beatac1",
"gybtac1",
"dindor2",
"credor1",
"subdor1",
"rebfra1",
"wardor1",
"ticdor1",
"boptyr1",
"hfptyr1",
"rhptyr1",
"rinant2",
"souant1",
"tacpyt1",
"rsptyr1",
"gawtyr2",
"capfra2",
"lewtyr1",
"grwtyr1",
"paltyr2",
"paltyr3",
"paltyr4",
"paltyr5",
"boltyr1",
"rebtyr2",
"mistyr1",
"chityr1",
"natfra2",
"slftyr1",
"guityr1",
"goftyr1",
"goftyr5",
"chotyr1",
"goftyr4",
"pertyr1",
"vabtyr1",
"chabrt1",
"mfbtyr1",
"hilfra2",
"spbtyr1",
"vebtyr2",
"anbtyr1",
"sobtyr2",
"moctyr2",
"alatyr1",
"restyr1",
"bahtyr1",
"yegtyr1",
"olgtyr1",
"dosfra2",
"ecutyr1",
"blftyr1",
"rubtyr1",
"rultyr1",
"ciftyr1",
"migtyr1",
"saptyr1",
"oustyr1",
"sdmtyr2",
"bartyr1",
"scafra2",
"stnfly1",
"olsfly2",
"ocbfly1",
"mccfly1",
"mccfly3",
"gyhfly1",
"secfly1",
"slcfly1",
"rubfly2",
"incfly1",
"heufra1",
"chafly3",
"nosfly1",
"amsfly1",
"sosfly1",
"slbtyr1",
"platyr1",
"amatyr1",
"pattyr3",
"flafly2",
"orcfly1",
"clafra1",
"unafly1",
"rorfly1",
"olcfly1",
"brcfly1",
"hanfly1",
"orbfly1",
"ornfly1",
"mcrtyr1",
"shttyr1",
"drbpyt1",
"andtin1",
"harfra4",
"bnbpyt1",
"flapyt1",
"snttyr1",
"yuttyr1",
"acrtot1",
"bbttyr1",
"wettyr1",
"whbtot1",
"zittyr1",
"erttyr1",
"swafra2",
"jottyr1",
"snttyr2",
"hattyr1",
"pvttyr1",
"pettyr1",
"btttyr1",
"bbttyr2",
"cbttyr1",
"kattyr1",
"btttyr2",
"yenspu1",
"fotpyt1",
"eaptyr1",
"wbptyr1",
"bcptyr1",
"stptyr1",
"norben1",
"souben1",
"scptyr1",
"lcptyr1",
"dbptyr1",
"gybfra1",
"heptyr1",
"peptyr1",
"rcttyr1",
"johtot1",
"wcttyr1",
"bawtyr1",
"buctof1",
"rudtof1",
"ocftof1",
"smftof1",
"renfra1",
"ruftof1",
"shtfly1",
"gowtof1",
"bkbtof1",
"blctyr2",
"sptfly1",
"gyhtof1",
"cotfly1",
"matfly1",
"patfly1",
"sponig1",
"ybtfly1",
"bhtfly1",
"brofly1",
"ruftwi1",
"eyrfla1",
"olifla1",
"pacfla1",
"fubfla1",
"yeofly1",
"orefly1",
"whtnig3",
"yemfly1",
"yemfly2",
"gycfly1",
"yebfly3",
"yebfly4",
"cicspa1",
"sttspa1",
"whtspa1",
"gocspa1",
"yetspa1",
"dianig1",
"whcspa1",
"ruwspa1",
"cinmat1",
"cinfly2",
"clifly1",
"eulfly1",
"gybfly1",
"tacfly1",
"blbfly1",
"belfly1",
"papnig1",
"pilfly1",
"easpho",
"blkpho",
"saypho",
"tuffly",
"olifly2",
"olsfly",
"grepew",
"darpew1",
"smcpew1",
"malnig1",
"ochpew1",
"wewpew",
"eawpew",
"tropew3",
"tropew2",
"whtpew1",
"blapew1",
"cubpew1",
"jampew1",
"leapew1",
"cubtin1",
"grenig1",
"yebfly",
"acafly",
"wilfly",
"aldfly",
"whtfly1",
"leafly",
"hamfly",
"dusfly",
"gryfly",
"pinfly1",
"nacnig1",
"pasfly",
"corfly",
"yelfly1",
"bubfly",
"blcfly1",
"verfly",
"brufly1",
"drwtyr1",
"yebtyr2",
"sbgtyr1",
"leanig1",
"wfgtyr1",
"ongtyr1",
"plcgrt1",
"rngtyr1",
"dafgrt1",
"wbgtyr1",
"cibgrt1",
"pugtyr1",
"andneg1",
"ausneg1",
"sacnig1",
"spetyr1",
"bbbtyr1",
"andtyr2",
"cintyr1",
"whwblt1",
"hubtyr1",
"ruttyr1",
"rivtyr2",
"ambtyr1",
"crbtyr1",
"lesnig",
"vebtyr1",
"rrbtyr1",
"rwbtyr1",
"whrmon2",
"whimon1",
"fiediu1",
"grymon1",
"bkcmon1",
"salmon1",
"chvtyr2",
"comnig",
"stbtyr1",
"rbbtyr1",
"smbtyr1",
"smbtyr2",
"bkbsht1",
"lessht1",
"wtstyr1",
"gresht1",
"stttyr2",
"shtgrt1",
"antnig",
"piwtyr1",
"bbwtyr1",
"mawtyr1",
"whmtyr1",
"bawmon3",
"cottyr1",
"stttyr1",
"tumtyr2",
"crocht1",
"crocht3",
"shtnig1",
"gobcht1",
"yebcht1",
"jelcht1",
"slbcht2",
"slbcht3",
"rbctyr1",
"bbctyr1",
"dorcht1",
"wbctyr1",
"pictyr1",
"rubnig1",
"pattyr2",
"lottyr1",
"stftyr1",
"cattyr",
"pirfly1",
"whbfly1",
"rumfly1",
"socfly1",
"grcfly1",
"ducfly2",
"batnig1",
"grekis",
"leskis1",
"whrfly",
"yetfly2",
"thsfly2",
"lebfly2",
"gobfly1",
"gocfly1",
"baifly1",
"subfly",
"whbnot1",
"bahnig1",
"strfly1",
"bobfly1",
"sulfly1",
"varfly",
"croslf1",
"sntkin1",
"whtkin1",
"trokin",
"coukin",
"caskin",
"blanig1",
"thbkin",
"weskin",
"sctfly",
"fotfly",
"easkin",
"grykin",
"giakin1",
"logkin",
"gramou1",
"pabmou1",
"pygnig1",
"rufmou1",
"sibsir1",
"whrsir1",
"todsir1",
"siryst3",
"rufcas2",
"astcas2",
"ruffly1",
"yucfly1",
"sadfly1",
"compau",
"ducfly",
"swafly1",
"venfly1",
"panfly1",
"shcfly1",
"apifly1",
"paefly1",
"socfly2",
"astfly",
"nutfly",
"scrnig1",
"grcfly",
"bncfly",
"galfly1",
"rutfly1",
"lasfly",
"stofly1",
"purfly1",
"lahfla2",
"flafly1",
"rutfla1",
"samnig1",
"dutfla1",
"rutatt1",
"cinatt1",
"ochatt1",
"cibatt1",
"ducatt1",
"gyhatt1",
"brratt1",
"scafru1",
"fitfru1",
"litnig1",
"scbfru1",
"hanfru1",
"rebfru1",
"blcfru1",
"orbfru1",
"masfru1",
"gobfru1",
"barfru1",
"batfru1",
"gabfru1",
"siwnig1",
"gytpih1",
"olipih2",
"hoober2",
"bkhber1",
"andcot1",
"gcoroc1",
"gurcot1",
"bnrcot1",
"whccot1",
"rutpla1",
"whwnig1",
"perpla1",
"whtpla1",
"swtcot1",
"bavcot1",
"chbcot1",
"reccot1",
"chccot1",
"crifru1",
"putfru1",
"rerfru1",
"bawnig1",
"banumb1",
"lowumb1",
"amaumb1",
"capuch1",
"rufpih1",
"rocpih1",
"scrpih1",
"civpih1",
"bagcot1",
"gywcot1",
"okbkiw1",
"lesnot1",
"bawnig3",
"chcpih1",
"duspih1",
"scwpih1",
"whibel2",
"thwbel",
"batbel1",
"beabel1",
"pltcot1",
"spacot1",
"lovcot1",
"swtnig1",
"blucot1",
"turcot1",
"pubcot1",
"putcot1",
"blfcot1",
"pomcot1",
"whtcot1",
"whwcot1",
"bltcot1",
"snocot1",
"lytnig1",
"dwtman1",
"titman1",
"sctman1",
"sbtman1",
"pbtman1",
"witman1",
"sdmman1",
"yehman2",
"jetman2",
"araman1",
"whtnig1",
"helman1",
"lotman1",
"latman1",
"blbman1",
"yunman1",
"swtman1",
"pitman1",
"gowman1",
"whtman1",
"whrman1",
"sptnig1",
"oliman2",
"blaman1",
"greman2",
"blcman1",
"sncman1",
"gocman2",
"opcman1",
"orbman1",
"whfman1",
"blrman1",
"latnig1",
"orcman3",
"yecman2",
"flcman2",
"whbman1",
"whcman1",
"gocman1",
"orcman1",
"crhman1",
"witman2",
"batman1",
"sctnig2",
"clwman1",
"strman2",
"strman5",
"paiman1",
"ficman1",
"whcman2",
"schman1",
"recman1",
"rotman1",
"gohman1",
"lotnig2",
"rehman1",
"sharpb1",
"royfly1",
"royfly2",
"royfly3",
"royfly5",
"tabfly1",
"surfly1",
"whifly1",
"bltfly1",
"leapau1",
"rutfly2",
"blctit1",
"blttit1",
"mastit1",
"varsch1",
"thlsch7",
"thlsch2",
"thlsch8",
"thlsch4",
"thlsch3",
"chopoo1",
"gresch2",
"spemou1",
"cinmou1",
"butpur1",
"duspur1",
"whbpur1",
"shlcot1",
"shlcot2",
"whnxen1",
"grbbec1",
"darnot1",
"earpoo1",
"gnbbec2",
"barbec1",
"slabec1",
"cinbec2",
"chcbec1",
"cinbec1",
"whwbec1",
"blcbec1",
"bawbec1",
"grcbec1",
"yucpoo1",
"glbbec1",
"oncbec1",
"pitbec1",
"crebec1",
"rotbec",
"jambec1",
"alblyr1",
"suplyr1",
"rusbir1",
"nosbir1",
"ocepoo1",
"ocbcat1",
"taccat1",
"grecat1",
"spocat2",
"huocat1",
"bkccat1",
"norcat1",
"arfcat1",
"spocat1",
"tobcat2",
"compoo",
"arcbow1",
"vogbow2",
"macbow2",
"strbow1",
"golbow1",
"flabow2",
"regbow1",
"satbow1",
"wesbow1",
"grebow1",
"chwwid",
"spobow1",
"fabbow1",
"whttre3",
"paptre1",
"whbtre2",
"ruftre3",
"brotre2",
"bkttre1",
"walfai1",
"empfai1",
"rufnig1",
"lovfai1",
"varfai1",
"varfai5",
"blbfai1",
"rewfai1",
"supfai1",
"splfai1",
"pucfai2",
"whsfai1",
"rebfai1",
"granig3",
"whwfai1",
"orcfai1",
"souemu1",
"malemu1",
"rucemu1",
"blagra1",
"whtgra1",
"shtgra1",
"rufgra1",
"rusgra1",
"tacnig1",
"strgra2",
"eyrgra1",
"thbgra1",
"thbgra4",
"dusgra1",
"daehon1",
"grshon1",
"easspi1",
"wesspi1",
"grbhon1",
"yucnig1",
"soomel1",
"olshon1",
"rushon1",
"rubhon1",
"blbhon1",
"cricha1",
"yelcha1",
"whfcha1",
"bouhon1",
"rubhon2",
"sitnig1",
"ruthon1",
"gryhon1",
"babhon1",
"brbhon1",
"lobhon2",
"tawstr1",
"arfhon1",
"smohon1",
"spahon1",
"barhon2",
"sponot1",
"bucnig",
"nehhon1",
"tachon1",
"plahon1",
"marhon1",
"sthhon1",
"piehon1",
"tui1",
"nezbel1",
"blahon1",
"whsfri1",
"easwpw1",
"sermyz1",
"whcmyz1",
"ashmyz1",
"dusmyz4",
"dusmyz1",
"redmyz1",
"blamyz1",
"crhmyz1",
"alomyz1",
"rehmyz1",
"souwpw1",
"summyz1",
"rotmyz2",
"moumyz1",
"banmyz1",
"sulmyz1",
"scamyz1",
"necmyz1",
"carmyz1",
"micmyz1",
"scbmyz1",
"purnig1",
"ebomyz1",
"scnmyz1",
"yevmyz1",
"soomyz1",
"orbmyz1",
"bkbmyz1",
"recmyz1",
"meyfri1",
"litfri1",
"gryfri1",
"dusnig1",
"timfri1",
"dusfri1",
"serfri1",
"bkffri1",
"bkffri2",
"helfri1",
"helfri3",
"helfri4",
"nebfri1",
"whnfri1",
"bronig1",
"sicfri1",
"noifri1",
"necfri1",
"spohon3",
"machon2",
"tabhon1",
"strhon1",
"paihon1",
"whshon1",
"crehon2",
"rennig1",
"nehhon2",
"whchon2",
"sunhon1",
"olihon1",
"brohon1",
"dabhon1",
"siehon1",
"whthon2",
"serhon1",
"yeehon1",
"grynig2",
"blchon1",
"banhon1",
"sacmel1",
"chagih1",
"duegih1",
"crohon1",
"easwah1",
"norwah1",
"weswah1",
"kanhon1",
"grynig1",
"whehon1",
"yethon1",
"blfhon1",
"blchon2",
"stbhon2",
"brhhon1",
"whthon1",
"whnhon2",
"whnhon3",
"blhhon1",
"palnig1",
"whghon1",
"yelhon1",
"pubhon1",
"yeshon1",
"lewhon1",
"whfhon1",
"yethon3",
"pughon1",
"forhon1",
"moumel1",
"dwatin1",
"eurnig1",
"scrhon1",
"mimhon1",
"taghon1",
"grahon2",
"grahon5",
"grahon3",
"yeghon1",
"whlhon1",
"stbhon3",
"varhon1",
"somnig1",
"manhon1",
"sinhon1",
"orchon1",
"yethon2",
"fushon1",
"gyhhon1",
"yephon1",
"whphon1",
"yefhon1",
"blthon1",
"rucnig1",
"obshon1",
"bruwat1",
"litwat1",
"redwat1",
"yelwat1",
"reghon1",
"spchon1",
"brihon1",
"eunhon1",
"cibmel1",
"egynig1",
"vogmel1",
"yebmel1",
"huomel1",
"belmel1",
"ornmel1",
"belmin1",
"noimin1",
"yetmin1",
"blemin1",
"easbri1",
"syknig1",
"wesbri1",
"rufbri1",
"spopar1",
"fospar1",
"rebpar6",
"strpar1",
"dwawhi1",
"fernwr1",
"scrubt2",
"weebil1",
"nubnig1",
"strfie1",
"ruffie3",
"ruffie2",
"chrhea1",
"shyhea1",
"pilotb1",
"redthr1",
"spewar3",
"rocwar1",
"rumwar1",
"golnig1",
"momwar1",
"yetscr1",
"pabscr1",
"vogscr1",
"bufscr1",
"papscr1",
"gygscr1",
"labscr2",
"becscr1",
"larscr1",
"jernig1",
"whbscr3",
"tasscr1",
"athscr1",
"whbscr1",
"bimwar1",
"broger1",
"gryger1",
"noiger1",
"chiger2",
"fatger1",
"latnig2",
"brbger1",
"gobger1",
"rusger1",
"manger1",
"plager1",
"wesger1",
"dusger1",
"labger1",
"biager1",
"yebger1",
"meenig1",
"gnbger1",
"whtger1",
"faiger1",
"moutho1",
"brotho1",
"inltho1",
"tastho1",
"chrtho1",
"burtho1",
"westho1",
"elctin1",
"andnig1",
"slbtho2",
"yertho1",
"yeltho1",
"mouger1",
"strtho1",
"slbtho1",
"souwhi1",
"chbwhi1",
"banwhi1",
"negbab1",
"phinig1",
"gycbab1",
"halbab1",
"whbbab3",
"chcbab2",
"norlog1",
"soulog1",
"chowch1",
"lorsat1",
"cresat1",
"obsber1",
"sulnig1",
"lebber1",
"blaber1",
"fatber1",
"satber1",
"spober1",
"dwahon2",
"pyghon1",
"yeblon1",
"slclon1",
"titber1",
"dosnig1",
"creber1",
"kokako3",
"kokako4",
"saddle2",
"saddle3",
"stitch1",
"easwhi1",
"weswhi1",
"weswhi4",
"chiwed2",
"bksnig1",
"chiwed1",
"spjbab1",
"blujeb1",
"blujeb2",
"cbjbab1",
"spqthr1",
"chequt1",
"copqut1",
"ciqthr1",
"nulqut1",
"finnig1",
"cbqthr1",
"chbqut1",
"paqthr1",
"ruwbat1",
"shtbat1",
"darbat1",
"capbat10",
"woobat1",
"chibat1",
"senbat1",
"monnig1",
"gyhbat1",
"palbat1",
"pribat1",
"bkhbat2",
"bkhbat1",
"pygbat1",
"angbat1",
"verbat1",
"itubat1",
"weabat1",
"indnig1",
"fepbat1",
"whtshr1",
"weawae1",
"chweye1",
"baweye1",
"btweye1",
"wfweye1",
"btweye2",
"ybweye1",
"rcweye1",
"madnig1",
"bnweye1",
"jaweye1",
"fibbus1",
"monbus1",
"gyhbus1",
"grbbus1",
"mtkbus1",
"macbus2",
"blfbus1",
"olibus1",
"swanig1",
"gygbus1",
"subbus1",
"focbus2",
"dohbus1",
"bokmak1",
"ropbus1",
"martch2",
"brctch1",
"thstch1",
"soutch1",
"quctin1",
"planig1",
"bkctch1",
"labpuf1",
"pifpuf1",
"reepuf1",
"blbpuf2",
"norpuf1",
"pripuf1",
"soobou1",
"mosbou1",
"mosbou4",
"stsnig1",
"wisbou1",
"fuebou1",
"slcbou1",
"luebus1",
"gabbus1",
"renbus1",
"trobou2",
"trobou1",
"zanbou1",
"soubou1",
"savnig1",
"gabbou1",
"turbou1",
"comgon1",
"papgon1",
"blhgon1",
"crbgon1",
"yebbou1",
"brubru1",
"yebboa1",
"blbboa1",
"frenig1",
"retvan1",
"resvan1",
"hobvan1",
"bervan1",
"lafvan1",
"vadvan1",
"polvan1",
"sibvan1",
"whhvan1",
"chavan2",
"bonnig1",
"bluvan3",
"rufvan1",
"helvan1",
"tylvan1",
"nuthat2",
"darnew1",
"comnew1",
"arcnew1",
"retnew1",
"warfly1",
"salnig1",
"crobab1",
"whihel1",
"chbhel1",
"rubhel1",
"rethel1",
"anghel1",
"chfhel1",
"bwfshr1",
"bwfshr2",
"larwoo1",
"batnig2",
"malwoo1",
"comwoo1",
"srlwoo1",
"ruwphi2",
"mabphi2",
"afrshf1",
"bawfly1",
"borbri1",
"ashwoo2",
"whbwoo4",
"lotnig1",
"fijwoo1",
"whbwoo8",
"grewoo1",
"maswoo1",
"whbwoo5",
"blfwoo1",
"duswoo1",
"lowpel1",
"moupel1",
"blabut1",
"sltnig1",
"ausmag2",
"grybut1",
"sibbut1",
"blbbut1",
"piebut1",
"hoobut1",
"tagbut1",
"piecur1",
"blacur2",
"grycur1",
"sqtnig1",
"motwhi1",
"comior1",
"whtior1",
"greior2",
"greior1",
"whbmin3",
"whbmin2",
"fiemin1",
"smamin1",
"gycmin1",
"puntin1",
"stwnig1",
"sunmin1",
"shbmin2",
"flomin1",
"lotmin1",
"scamin3",
"scamin1",
"ashmin1",
"ryumin1",
"brrmin1",
"rosmin1",
"pewnig1",
"ashcus2",
"ashcus3",
"whbcus2",
"grycus1",
"stbcus1",
"hoocus1",
"cercus1",
"piecus1",
"grocus1",
"yeecus1",
"oilbir1",
"bkfcus1",
"boycus1",
"burcus1",
"walcus1",
"melcus3",
"melcus1",
"babcus1",
"javcus1",
"larcus1",
"slacus1",
"grepot1",
"whrcus1",
"suncus1",
"whbcus1",
"molcus1",
"blkcus1",
"rescus1",
"petcus1",
"putcus1",
"golcus1",
"mcgcus1",
"lotpot1",
"neccus1",
"whwcus1",
"blacus1",
"bkbcus2",
"sumcus1",
"kaicus1",
"gyhcus1",
"bkbcus1",
"cicada7",
"soicus1",
"norpot1",
"sulcus2",
"papcus1",
"cicada1",
"mancic1",
"cicada4",
"cicada5",
"cicada3",
"negcus1",
"halcus1",
"pygcus1",
"compot1",
"blucus1",
"poltri1",
"whwtri2",
"lottri1",
"whwtri1",
"rubtri1",
"blbtri1",
"bkbtri2",
"whbtri1",
"vartri1",
"andpot1",
"bawtri1",
"pietri1",
"whrtri1",
"bkwcus1",
"bkhcus1",
"indcus1",
"lescus1",
"maucus1",
"reucus1",
"yellow3",
"whwpot1",
"whiteh1",
"pipipi1",
"varsit8",
"blksit1",
"watplo1",
"runwhi1",
"crepit1",
"crebel1",
"cresht1",
"mabwhi1",
"rufpot1",
"sansht1",
"blapit1",
"oliwhi1",
"relwhi1",
"gilwhi1",
"manwhi1",
"grbwhi1",
"whvwhi1",
"biawhi1",
"ruswhi1",
"pattin1",
"marfro1",
"brbwhi1",
"yebwhi1",
"subwhi1",
"borwhi1",
"vogwhi1",
"grywhi2",
"sclwhi1",
"rubwhi1",
"yetwhi1",
"golwhi2",
"papfro1",
"bkcwhi1",
"golwhi1",
"weswhi2",
"biswhi1",
"oriwhi1",
"louwhi1",
"renwhi1",
"necwhi2",
"necwhi3",
"fijwhi2",
"tawfro1",
"temwhi1",
"bltwhi1",
"bohwhi1",
"batwhi1",
"lorwhi1",
"regwhi1",
"gobwhi1",
"rufwhi1",
"blhwhi1",
"whbwhi1",
"soifro1",
"walwhi1",
"drawhi1",
"drawhi3",
"whbwhi2",
"mornin1",
"whbpit1",
"ruspit1",
"bowsht1",
"rufsht2",
"litshr5",
"larfro1",
"litshr6",
"litshr1",
"litshr4",
"litshr3",
"litshr2",
"grysht1",
"yebshr1",
"magshr1",
"whrshr1",
"whcshr1",
"dulfro1",
"tigshr1",
"soushr3",
"buhshr1",
"brnshr",
"rebshr1",
"isashr1",
"rutshr2",
"burshr1",
"babshr1",
"gybshr1",
"phifro1",
"gycshr1",
"macshr1",
"legshr2",
"logshr",
"norshr4",
"norshr1",
"ibgshr1",
"chgshr1",
"gybfis1",
"lotfis1",
"goufro1",
"taifis1",
"somfis1",
"norfis1",
"soufis1",
"wooshr1",
"masshr1",
"grsbab1",
"besbab1",
"bhsbab1",
"himshb1",
"ceyfro1",
"whbshb1",
"dalshb1",
"clishb1",
"whbyuh1",
"rubpep1",
"blbpep1",
"cssvir1",
"grsvir1",
"ybsvir1",
"scsvir1",
"hodfro1",
"gyegre1",
"rucgre1",
"oligre1",
"ashgre1",
"brhgre1",
"lecgre2",
"scrgre1",
"gycgre1",
"tacgre1",
"lesgre1",
"horscr1",
"shtfro3",
"ducgre1",
"bucgre1",
"gofgre1",
"rungre1",
"golvir1",
"yegvir",
"reevir1",
"yucvir",
"bkwvir",
"chivir1",
"shtfro2",
"norvir1",
"tepgre1",
"phivir",
"warvir",
"brcvir1",
"hutvir",
"gryvir",
"yetvir",
"yewvir1",
"chovir1",
"javfro3",
"buhvir",
"casvir",
"plsvir",
"flbvir1",
"manvir1",
"thbvir2",
"cozvir1",
"stavir1",
"whevir",
"thbvir",
"javfro2",
"jamvir1",
"cubvir1",
"belvir",
"purvir1",
"bkcvir1",
"dwavir1",
"slavir1",
"grnfig1",
"wetfig1",
"ausfig1",
"palfro1",
"varpit2",
"varpit4",
"varpit3",
"hoopit1",
"serori1",
"halori1",
"olbori1",
"timori1",
"timori3",
"burori3",
"sunfro1",
"broori1",
"greori1",
"bacori1",
"marori2",
"blhori1",
"datori1",
"phiori1",
"whlori1",
"grhori1",
"wbhori1",
"feonig1",
"abhori1",
"dahori1",
"bltori1",
"blwori1",
"afgori2",
"ingori1",
"eugori2",
"slbori1",
"brodro1",
"lrtdro1",
"spaown1",
"crbdro1",
"grtdro1",
"srldro1",
"anddro1",
"suldro1",
"spadro1",
"ritdro1",
"hacdro1",
"tabdro1",
"hacdro9",
"molown1",
"balica1",
"ashdro1",
"whbdro1",
"maydro1",
"credro1",
"vemdro6",
"fotdro5",
"bladro1",
"vemdro5",
"fotdro4",
"waonig1",
"cstdro1",
"shidro1",
"wstdro1",
"shadro1",
"blufan1",
"visblf1",
"blhfan1",
"visfan1",
"whtfan1",
"spbfan1",
"norscr1",
"moonig1",
"whbfan1",
"whbfan2",
"piefan1",
"phipif1",
"spofan1",
"wilwag1",
"brcfan1",
"citfan1",
"norfan1",
"whwfan1",
"barown1",
"sotfan1",
"bltfan1",
"wbtfan1",
"blafan1",
"chbfan1",
"frifan1",
"gryfan1",
"nezfan1",
"manfan1",
"brofan1",
"auonig1",
"dusfan1",
"renfan1",
"strfan1",
"kanfan1",
"rutfan1",
"bacfan1",
"dimfan1",
"palfan1",
"stbfan1",
"cibfan1",
"cretre1",
"rubfan2",
"pelfan1",
"lotfan1",
"rubfan1",
"matfan1",
"manfan2",
"ruffan1",
"arafan1",
"papdro1",
"silkta2",
"gyrtre1",
"blnmon1",
"pabmon1",
"shcmon1",
"celmon1",
"afcfly1",
"bhcfly1",
"rvpfly1",
"bhpfly1",
"batpaf1",
"afpfly1",
"whitre1",
"aspfly1",
"blypaf1",
"amupaf1",
"japfly1",
"blpfly1",
"rupfly1",
"mapfly1",
"sepfly1",
"mapfly2",
"elepai5",
"moutre1",
"elepai4",
"elepai",
"rarmon1",
"tahmon2",
"marmon2",
"iphmon2",
"fatmon1",
"vanmon1",
"slamon1",
"bubmon1",
"spfswi1",
"soushr2",
"fijshr1",
"bktshr1",
"renshr1",
"blamon1",
"spwmon1",
"blbmon2",
"flomon1",
"blcmon1",
"spemon1",
"whcswi1",
"whtmon1",
"whtmon2",
"biamon1",
"hoomon1",
"bltmon1",
"bawmon1",
"kulmon1",
"whcmon2",
"islmon1",
"blfmon1",
"whfswi1",
"blwmon1",
"boumon1",
"chbmon1",
"whcmon1",
"yapmon1",
"whemon1",
"whnmon1",
"loemon1",
"golmon1",
"rucmon1",
"liskiw1",
"souscr1",
"sooswi1",
"frimon1",
"frnmon1",
"piemon1",
"maglar1",
"torlar1",
"ocefly1",
"palfly3",
"pohfly1",
"molfly1",
"biafly1",
"rotswi1",
"leafly2",
"stbfly1",
"ochfly1",
"melfly1",
"vanfly1",
"chtfly1",
"brbfly1",
"satfly1",
"shifly1",
"papfly1",
"blkswi",
"resfly1",
"blamag1",
"blkmag2",
"sibjay1",
"sicjay1",
"gryjay",
"blcjay2",
"whcjay2",
"turjay1",
"beajay1",
"whcswi2",
"azhjay1",
"bltjay1",
"dwajay1",
"whtjay1",
"sitjay1",
"bucjay1",
"sabjay",
"yucjay1",
"pubjay1",
"viojay1",
"grdswi1",
"azujay1",
"purjay1",
"cucjay1",
"tufjay1",
"blcjay1",
"whtjay2",
"cayjay1",
"aznjay1",
"plcjay1",
"whnjay1",
"tepswi1",
"grnjay",
"brnjay",
"btmjay",
"wtmjay1",
"blujay",
"stejay",
"mexjay4",
"mexjay3",
"unijay1",
"cowscj1",
"chcswi1",
"wooscj2",
"issjay",
"flsjay",
"pinjay",
"eurjay1",
"blhjay1",
"lidjay1",
"azwmag2",
"azwmag3",
"ceymag1",
"whcswi",
"formag1",
"gobmag1",
"rbbmag",
"whwmag1",
"yebmag1",
"ruftre2",
"bortre1",
"grytre1",
"whbtre1",
"coltre1",
"bisswi1",
"andtre1",
"rattre1",
"hootre1",
"rattre2",
"eurmag1",
"eurmag3",
"eurmag5",
"eurmag6",
"orimag1",
"bkbmag1",
"whnswi1",
"yebmag",
"stbcro1",
"mogjay1",
"xigjay1",
"tugjay1",
"irgjay1",
"clanut",
"redcro",
"redcro9",
"whwcro",
"maggoo1",
"pltswi1",
"hiscro",
"mouser1",
"eurgol",
"citfin1",
"corfin1",
"fifser1",
"eurser1",
"syrser1",
"comcan",
"capcan1",
"tenswi1",
"yeccan1",
"bkhcan2",
"tibser1",
"lawgol",
"amegfi",
"lesgol",
"eursis",
"antsis1",
"pinsis",
"blhsis1",
"draswi1",
"blcsis2",
"yebsis1",
"olisis1",
"hoosis1",
"safsis1",
"yefsis1",
"blasis1",
"yersis1",
"thbsis1",
"andsis1",
"gloswi1",
"eleeup1",
"anteup1",
"goreup1",
"blnchl1",
"chbchl1",
"yecchl1",
"blcchl1",
"gobchl1",
"jameup1",
"orceup1",
"satswi1",
"plueup1",
"puteup1",
"fineup1",
"vefeup1",
"trieup1",
"screup3",
"screup1",
"yeceup1",
"gobeup1",
"whveup1",
"cavswi3",
"gnteup1",
"vioeup1",
"yeteup1",
"thbeup1",
"spceup1",
"olbeup1",
"fuveup1",
"taceup1",
"orbeup1",
"brgeup1",
"cavswi2",
"goseup1",
"rubeup1",
"chbeup1",
"mcclon",
"laplon",
"smilon",
"chclon",
"snobun",
"austhr1",
"lawthr1",
"pygswi2",
"slathr3",
"crbthr1",
"trithr1",
"marthr2",
"blbthr1",
"bkbthr3",
"yelthr1",
"whtrob1",
"whtthr2",
"whnrob1",
"seyswi1",
"rubrob",
"pavthr1",
"pabthr1",
"cocthr1",
"hauthr1",
"rubthr1",
"clcrob",
"baerob1",
"ecuthr1",
"hauthr3",
"masswi1",
"unithr1",
"ficale3",
"ficale2",
"kasrob2",
"fosrob1",
"besrob1",
"misrob1",
"blsrob1",
"rutscr1",
"kasrob1",
"wfwduc1",
"indswi1",
"bbsrob1",
"rbsrob1",
"brsrob1",
"indrob1",
"rutsha2",
"mamrob1",
"semrob1",
"phimar1",
"andsha1",
"whrsha2",
"phiswi1",
"whbsha1",
"vissha1",
"whvsha1",
"blasha1",
"afffly1",
"wbffly1",
"gyttif1",
"grytif1",
"angslf1",
"wheslf1",
"molswi3",
"abyslf1",
"yebfly2",
"nobfly1",
"sobfly1",
"palfly2",
"chafly2",
"grafly1",
"marfly1",
"fisfly1",
"silver1",
"mouswi2",
"spofly1",
"spofly3",
"gamfly1",
"gysfly1",
"dasfly",
"asbfly",
"bnsfly1",
"subfly2",
"brbfly2",
"ferfly1",
"whrswi2",
"ashfly1",
"swafly3",
"casfly1",
"olifly1",
"chafly1",
"afdfly1",
"ligfly2",
"yeffly1",
"dubfly2",
"tesfly1",
"ausswi1",
"ussfly1",
"whgfly1",
"rubfly3",
"habfly1",
"pabfly2",
"wbbfly1",
"pacblf1",
"hibfly1",
"larblf1",
"pabfly1",
"himswi2",
"tibfly3",
"tibfly4",
"lobblf1",
"bobfly2",
"butfly1",
"butfly2",
"mabfly1",
"mabfly2",
"subfly4",
"subfly1",
"monswi2",
"tibfly2",
"blffly1",
"matfly2",
"whtfly2",
"flojuf2",
"flojuf1",
"bncjuf1",
"fucjuf1",
"gycjuf1",
"chtjuf1",
"uniswi1",
"fujnil1",
"rubnil1",
"ruvnil1",
"vivnil3",
"larnil1",
"smanil1",
"bawfly2",
"zapfly1",
"dubfly3",
"verfly4",
"palswi2",
"islfly1",
"nilfly2",
"indfly1",
"eurrob1",
"bubwre1",
"supwre1",
"fabwre1",
"lobwre1",
"grywre1",
"rivwre1",
"bbwduc",
"palswi1",
"baywre1",
"stbwre1",
"sttwre1",
"carwre",
"winwre4",
"winwre3",
"pacwre1",
"clawre1",
"houwre",
"houwre5",
"caiswi1",
"socwre2",
"rubwre2",
"ochwre1",
"mouwre1",
"samwre1",
"tepwre1",
"timwre1",
"whbwre1",
"wbwwre1",
"gbwwre1",
"atiswi1",
"gybwow3",
"bwwwre1",
"munwow1",
"nigwre1",
"scbwre1",
"fluwre1",
"wibwre1",
"chbwre1",
"muswre2",
"sonwre1",
"marswi2",
"lobgna4",
"lobgna5",
"tafgna1",
"colgna1",
"guigna2",
"guigna3",
"sltgna1",
"guigna4",
"iqugna1",
"inagna1",
"ednswi1",
"trogna1",
"melbla1",
"cubbla",
"rusbla",
"brebla",
"comgra",
"nicgra1",
"cargra1",
"gragra1",
"botgra",
"gerswi1",
"grtgra",
"rebgra1",
"vefgra1",
"oribla1",
"mougra1",
"gotgra1",
"ausbla1",
"schbla1",
"forbla1",
"chobla1",
"scaswi1",
"bolbla1",
"bawcow4",
"bawcow3",
"yewbla2",
"paebla2",
"unibla2",
"chcbla2",
"yehbla2",
"sacbla2",
"baymar1",
"papnee1",
"yermar1",
"ovenbi1",
"woewar1",
"louwat",
"norwat",
"gowwar",
"buwwar",
"bawwar",
"prowar",
"swawar",
"whrnee1",
"crcwar",
"fltwar1",
"tenwar",
"orcwar",
"colwar",
"lucwar",
"naswar",
"virwar",
"conwar",
"gycyel",
"motspi1",
"masyel2",
"masyel3",
"masyel4",
"masyel5",
"macwar",
"mouwar",
"kenwar",
"olcyel1",
"blpyel1",
"belyel1",
"spwduc1",
"sirnee1",
"bahyel1",
"altyel1",
"comyel",
"hooyel1",
"elwwar1",
"arrwar1",
"hoowar",
"amered",
"kirwar",
"camwar",
"sabspi1",
"cerwar",
"norpar",
"tropar",
"magwar",
"babwar",
"bkbwar",
"yelwar1",
"yelwar",
"chswar",
"bkpwar",
"whtnee",
"btbwar",
"palwar",
"olcwar1",
"pinwar",
"yerwar",
"audwar",
"yerwar2",
"yetwar",
"yetwar3",
"vitwar1",
"sibnee1",
"prawar",
"adewar1",
"grawar",
"btywar",
"towwar",
"herwar",
"comchi1",
"ibechi2",
"eacwar1",
"ijlwar1",
"grrswi1",
"phlwar1",
"letwar1",
"yetwow1",
"brwwar1",
"rfwwar1",
"lauwow1",
"bcwwar1",
"ugawow1",
"whswar1",
"gycwar2",
"barswi",
"goswar1",
"gycwar1",
"whiwar2",
"biawar1",
"pltwar1",
"marwar4",
"grnwar1",
"grewar2",
"grewar3",
"emlwar1",
"leaswi1",
"lblwar1",
"salwar1",
"pllwar1",
"arcwar3",
"arcwar2",
"arcwar1",
"chcwar2",
"sunwar1",
"yebwar2",
"limlew1",
"corswi",
"subwar3",
"yevwar1",
"weclew1",
"blylew1",
"clalew1",
"harlew1",
"klolew1",
"halwar1",
"davlew1",
"gyhwar2",
"parswi1",
"mouwar2",
"mouwar4",
"tilwar2",
"rolwar1",
"sclwar1",
"sulwar1",
"sulwar3",
"kullew1",
"islwar1",
"isllew9",
"chiswi",
"isllew10",
"grawar1",
"malbrw1",
"subbrw1",
"anbwar1",
"gcbwar1",
"mohbrw1",
"barwar2",
"cvswar1",
"grswar2",
"wiwduc1",
"vauswi",
"leswar1",
"maswar1",
"sebwar1",
"grrwar1",
"orrwar1",
"clrwar1",
"aurwar1",
"miller",
"sairew1",
"narwar1",
"chaswi2",
"carrew1",
"chiwar1",
"marrew2",
"tahrew1",
"marwar2",
"turwar1",
"cirwar2",
"rimrew1",
"bbrwar1",
"mouwar1",
"sicswi1",
"aquwar1",
"sedwar1",
"blutit",
"azutit2",
"grotit1",
"gretit1",
"gretit4",
"gretit2",
"grbtit1",
"whwtit2",
"shtswi1",
"yeltit2",
"blltit1",
"indtit1",
"yectit1",
"whsblt1",
"whwblt3",
"soublt1",
"cartit2",
"whbtit5",
"whbblt1",
"whtswi",
"dustit2",
"rubtit3",
"rettit2",
"stbtit2",
"somtit4",
"miotit2",
"ashtit2",
"grytit1",
"euptit1",
"bhptit1",
"whtswi1",
"wcptit1",
"chptit1",
"yeptit1",
"mcptit1",
"afptit1",
"soptit1",
"verdin",
"yesnic1",
"easnic1",
"yetnic1",
"andswi1",
"bearee1",
"grhlar1",
"beelar1",
"sphlar12",
"gralar2",
"shclar1",
"klblar6",
"benlar1",
"elblar1",
"y00415",
"anpswi",
"agular1",
"thblar1",
"deslar1",
"batlar1",
"rutlar2",
"beslar1",
"madlar1",
"bcslar1",
"cbslar1",
"ascspl1",
"ftpswi1",
"chhspl1",
"gybspl1",
"fislar1",
"sablar2",
"piblar3",
"foxlar1",
"faclar8",
"karlar2",
"ferlar2",
"dunlar5",
"gstswi1",
"barlar4",
"rudlar1",
"liblar1",
"eaclar1",
"caclar1",
"rewlar1",
"runlar1",
"flalar1",
"retale1",
"brcale1",
"fuwduc",
"lstswi1",
"whcale1",
"wbrcha1",
"arrcha1",
"ofrcha1",
"carcha1",
"wtrcha1",
"gywroc1",
"bsrcha1",
"rurcha1",
"wbrcha2",
"afpswi1",
"rcrcha1",
"chrcha1",
"whrcha1",
"scrcha1",
"wcrcha1",
"swyrob1",
"whsrob1",
"obfrob1",
"shtaka2",
"bocaka11",
"malpas1",
"equaka1",
"shaaka1",
"rubaka1",
"usaaka1",
"iriaka1",
"copthr1",
"rtpthr1",
"spmthr1",
"gresho1",
"bagbab2",
"aspswi1",
"gousho1",
"rubsho1",
"lessho1",
"whbsho4",
"whbsho5",
"whbsho6",
"whbsho10",
"eyjfly1",
"minjuf1",
"inbrob1",
"alpswi1",
"sibrob",
"rutrob1",
"ryurob2",
"ryurob3",
"japrob2",
"blueth",
"whbred1",
"thrnig1",
"comnig1",
"whtrob3",
"motswi2",
"himrub1",
"chirub1",
"sibrub",
"fireth1",
"whtrob2",
"whbsho1",
"whbsho3",
"wbbrob1",
"rbbrob1",
"cobrob1",
"aleswi1",
"refblu",
"himblu1",
"gobrob1",
"litfor1",
"chnfor1",
"blbfor1",
"slbfor1",
"whcfor1",
"whcfor3",
"spofor1",
"comswi",
"ceywht1",
"shwthr1",
"borwht1",
"chwwht1",
"mawthr2",
"mawthr1",
"fowthr1",
"blwthr1",
"blfrob1",
"korfly1",
"plaswi1",
"narfly1",
"narfly2",
"narfly3",
"slbfly1",
"mugfly",
"pybfly1",
"rugfly1",
"sapfly1",
"ultfly1",
"lipfly1",
"nyaswi1",
"slbfly2",
"snbfly1",
"rutfly6",
"taifly1",
"rebfly",
"kasfly1",
"semfly1",
"eupfly1",
"colfly1",
"anglar1",
"plwduc1",
"palswi3",
"monlar2",
"latlar1",
"sinbus6",
"sinbus1",
"burbus1",
"benbus1",
"indbus3",
"indbus2",
"jerbus2",
"gillar1",
"afrswi1",
"frilar1",
"whtlar1",
"woolar1",
"stalar2",
"shtlar1",
"piblar1",
"whwlar1",
"razsky1",
"orisky1",
"skylar",
"madswi1",
"tawlar1",
"sunlar1",
"lablar1",
"thelar1",
"crelar1",
"mallar1",
"crelar3",
"horlar",
"temlar1",
"humlar1",
"fowswi1",
"sstlar4",
"blalar2",
"blalar4",
"reclar1",
"gstlar1",
"bimlar1",
"callar1",
"blalar1",
"tiblar1",
"duplar1",
"braswi1",
"dunlar1",
"dunlar4",
"lstlar2",
"sstlar1",
"mstlar1",
"tstlar1",
"sanlar1",
"somgre1",
"slbgre1",
"golgre1",
"fotswi",
"blcbul1",
"combri2",
"gntbri1",
"gyhbri1",
"lesbri2",
"lesbri3",
"yetgre1",
"spogre1",
"swagre1",
"joygre1",
"saaswi1",
"yengre1",
"yebgre1",
"simgre1",
"hongre1",
"sjogre1",
"camgre2",
"shegre1",
"easmog4",
"easmog3",
"easmog5",
"blyswi1",
"easmog1",
"stcgre3",
"stcgre4",
"stcgre1",
"wesbeg1",
"easbeg1",
"retgre1",
"whbgre1",
"yebgre3",
"litgre2",
"cooswi1",
"yewgre1",
"plagre2",
"grygre1",
"ansgre1",
"tingre1",
"whtgre2",
"xavgre1",
"ictgre1",
"terbro1",
"caogre1",
"darswi1",
"norbro1",
"gyogre1",
"fisgre1",
"cabgre1",
"cabgre3",
"leaflo1",
"yesbul1",
"gyhgre1",
"toogre1",
"baugre1",
"wawduc1",
"litswi1",
"paogre1",
"habbul1",
"hobbul1",
"yebbul2",
"gytbul1",
"ochbul3",
"whtbul1",
"ochbul2",
"putbul1",
"strbul2",
"houswi1",
"finbul1",
"olibul1",
"buvbul1",
"chabul1",
"cacbul1",
"gyebul1",
"crsbul1",
"ashbul1",
"cinbul1",
"chebul1",
"horswi1",
"yebbul3",
"sunbul1",
"sunbul2",
"strbul1",
"moubul2",
"phibul1",
"minbul1",
"stbbul1",
"golbul3",
"sulgob1",
"whrswi1",
"golbul4",
"visbul1",
"yelbul1",
"yelbul4",
"brebul1",
"reubul1",
"madbul1",
"maubul1",
"blabul1",
"sqtbul1",
"critop1",
"combul1",
"mohbul1",
"seybul1",
"pubbul1",
"bawbul2",
"yewbul1",
"gyhbul1",
"blhbul1",
"andbul1",
"spebul1",
"fietop1",
"gybbul1",
"scbbul1",
"blcbul2",
"bkcbul1",
"bkcbul4",
"bafbul1",
"crefin1",
"colfin1",
"crvbul1",
"olwbul1",
"whnjac1",
"reebul1",
"asfbul1",
"whbbul2",
"ayebul1",
"stebul2",
"sttbul1",
"flabul1",
"flabul3",
"yetbul1",
"yeebul1",
"blkjac1",
"brbbul1",
"livbul1",
"stybul1",
"rewbul",
"yevbul1",
"revbul",
"sohbul1",
"whebul1",
"gchwar",
"btnwar",
"whtsic1",
"citwar1",
"samwar1",
"whswar2",
"flawar1",
"whbwar2",
"palwar1",
"blcwar2",
"burwar1",
"rivwar1",
"twbwar1",
"butsic1",
"twbwar2",
"gobwar3",
"gobwar4",
"whlwar1",
"gytwar1",
"gagwar2",
"rucwar1",
"fatwar",
"rucwar",
"rucwar4",
"lewduc1",
"sabher1",
"blcwar1",
"pirwar1",
"gobwar1",
"gcrwar",
"thswar5",
"thswar9",
"thbwar2",
"thswar2",
"thswar1",
"canwar",
"hobher2",
"wlswar",
"refwar",
"redwar1",
"pihwar1",
"paired",
"sltred",
"brcred1",
"yecred1",
"whfred2",
"gofred1",
"broher",
"spered1",
"colred1",
"parred1",
"tepred1",
"duftan1",
"olbtan1",
"olgtan1",
"rbptan1",
"flctan",
"heptan2",
"rubher",
"heptan",
"sumtan",
"rottan1",
"scatan",
"westan",
"whwtan1",
"rehtan1",
"rehtan2",
"rcatan1",
"rtatan1",
"batbar1",
"bcatan1",
"soatan1",
"cratan1",
"olitan1",
"cartan2",
"lestan",
"ocbtan1",
"yelgro",
"bltgro1",
"gobgro1",
"patbar1",
"blbgro2",
"robgro",
"bkhgro",
"rebcha1",
"grtcha1",
"robcha1",
"norcar",
"vercar1",
"pyrrhu",
"blfgro1",
"brther2",
"yeggro1",
"crcgro",
"rabgro1",
"blusee1",
"blusee4",
"blbsee3",
"dickci",
"glbgro1",
"bubgro1",
"bubgro2",
"duther1",
"ultgro1",
"blubun",
"blugrb1",
"indbun",
"lazbun",
"varbun",
"paibun",
"robbun1",
"orbbun1",
"plushc1",
"stther1",
"cocfin2",
"brotan1",
"yesgro2",
"hootan1",
"chttan1",
"blbtan2",
"whctan1",
"scttan1",
"blmfin1",
"grpfin1",
"lither2",
"ptpfin1",
"lesgrf1",
"wtgfin1",
"grifin1",
"rbifin1",
"gywinf1",
"bbifin1",
"liifin1",
"mosfin1",
"blufin1",
"grskiw1",
"whbduc1",
"lither3",
"btsfin1",
"casfin1",
"grehon1",
"gochon2",
"sawtan1",
"baytan2",
"surtan1",
"scbtan2",
"yebtan1",
"guitan1",
"minher1",
"ruhtan1",
"swatan1",
"purhon1",
"relhon1",
"shbhon2",
"shihon1",
"scbdac1",
"sctdac1",
"bludac1",
"bagtan2",
"cither1",
"gycfin1",
"blcfin1",
"whbtan1",
"codfin1",
"yelcar1",
"diatan1",
"magtan2",
"blftan1",
"cintan1",
"reccar",
"bkther1",
"reccar2",
"crfcar1",
"yebcar",
"reccar3",
"reccar4",
"dottan1",
"ruttan1",
"spotan1",
"spetan1",
"yebtan2",
"stther2",
"gontan1",
"azrtan1",
"gagtan1",
"bugtan",
"saytan1",
"glatan1",
"azstan1",
"yewtan1",
"goctan2",
"paltan1",
"gycher1",
"blhtan1",
"siltan1",
"gnttan1",
"blctan1",
"gohtan1",
"blntan1",
"mastan1",
"blbtan1",
"chbtan1",
"scrtan1",
"redher1",
"bubtan2",
"babtan1",
"bestan1",
"spctan1",
"blbtan3",
"megtan1",
"bahtan1",
"ruwtan1",
"goetan1",
"sactan1",
"bubher1",
"flftan1",
"blwtan1",
"gagtan2",
"goltan1",
"emetan1",
"sittan1",
"sectan1",
"grhtan2",
"rentan1",
"brbtan1",
"socher1",
"gietan1",
"plctan1",
"turtan1",
"partan1",
"opctan1",
"oprtan1",
"ibesee1",
"piecro1",
"wiltit1",
"combul2",
"plaher1",
"scbcup3",
"slbtes1",
"pfbwar1",
"webwar1",
"humwar1",
"savwar1",
"blackc1",
"grewhi1",
"cacwre",
"sohwre1",
"cabgoo1",
"scther1",
"blbwre1",
"pltwre1",
"shttre1",
"woothr",
"spnthr1",
"sponit2",
"obnthr1",
"bhnthr1",
"sbnthr1",
"swathr",
"pabher1",
"bbnthr1",
"herthr",
"runthr1",
"rcnthr1",
"gycthr",
"bicthr",
"veery",
"lotthr1",
"alpthr1",
"himthr1",
"whbher1",
"sicthr1",
"lobthr1",
"dasthr1",
"evethr1",
"scathr2",
"scathr8",
"scathr4",
"scathr5",
"scathr6",
"sacthr2",
"whwher1",
"rutthr1",
"oltthr1",
"sibthr1",
"piethr1",
"grygrt1",
"spgthr1",
"spwthr1",
"crgthr1",
"abgthr1",
"orgthr1",
"greher1",
"orbthr1",
"rubthr2",
"grothr1",
"chithr2",
"sonthr1",
"fieldf",
"eacaka1",
"gabaka1",
"comcha",
"blucha2",
"tabher1",
"brambl",
"baygro1",
"colgro1",
"spwgro1",
"whwgro1",
"evegro",
"hoogro1",
"hawfin",
"pingro",
"brrbun1",
"koeher1",
"cabbun1",
"blhbun1",
"rehbun1",
"bkfbun1",
"leasal1",
"sibtan2",
"drasee1",
"crbgna1",
"masgna1",
"cubgna1",
"nebher1",
"trogna2",
"buggna",
"bktgna",
"calgna",
"bkcgna",
"whlgna2",
"whcnut1",
"prznut1",
"gianut1",
"whbnut",
"stbher1",
"beanut1",
"blunut1",
"vefnut1",
"yebnut1",
"subnut1",
"pygnut",
"bnhnut",
"bnhnut2",
"yunnut1",
"algnut1",
"mexher1",
"krunut1",
"rebnut",
"cornut1",
"snbnut1",
"rocnut1",
"pernut1",
"whbnut1",
"whtnut1",
"eurnut2",
"chvnut1",
"brant",
"lobher",
"chbnut2",
"chbnut3",
"chbnut4",
"wallcr1",
"eurtre1",
"eurtre3",
"brncre",
"battre1",
"ruftre4",
"bnttre1",
"lother1",
"bnttre2",
"sictre1",
"spocre3",
"spocre2",
"grycat",
"blacat1",
"normoc",
"tromoc",
"bahmoc",
"chimoc1",
"grbher1",
"lotmoc1",
"chbmoc1",
"patmoc1",
"whbmoc1",
"brbmoc1",
"galmoc1",
"hoomoc1",
"chamoc2",
"socmoc1",
"sagthr",
"grflan1",
"brnthr",
"lobthr",
"grathr1",
"benthr",
"ocethr1",
"cubthr",
"calthr",
"crithr",
"lecthr",
"blumoc",
"blflan1",
"bawmoc1",
"scbthr",
"peethr1",
"brotre1",
"metsta1",
"sinsta1",
"tansta1",
"atosta1",
"rensta1",
"lotsta1",
"webhum3",
"whesta2",
"brwsta1",
"sacsta1",
"ruwsta1",
"strsta1",
"asgsta1",
"molsta1",
"shtsta1",
"micsta1",
"polsta1",
"webhum1",
"samsta1",
"rarsta1",
"yefmyn1",
"lotmyn1",
"golmyn1",
"sulmyn1",
"apomyn2",
"coleto1",
"whnmyn1",
"baemyn1",
"hyavis1",
"fibmyn1",
"spwsta1",
"gocmyn1",
"ceymyn1",
"sohmyn1",
"whvmyn1",
"cremyn",
"junmyn1",
"colmyn1",
"banmyn1",
"hoovis2",
"commyn",
"vibsta1",
"rebsta1",
"whcsta1",
"bkcsta1",
"blwwar1",
"manrew1",
"labrew1",
"padwar1",
"blrwar1",
"brvear1",
"eurwar1",
"afrwar1",
"marwar3",
"thbwar1",
"afywar1",
"moywar1",
"boowar1",
"sykwar2",
"eaowar1",
"weowar1",
"rebgoo1",
"grnvie1",
"paywar1",
"upcwar1",
"oltwar1",
"melwar1",
"ictwar1",
"simgrw1",
"gybbab2",
"grgwar1",
"sakwar1",
"margra1",
"lesvio1",
"pagwar1",
"migwar",
"plewar1",
"lanwar",
"baswar1",
"eurwar2",
"brbwar2",
"cogwar1",
"chbwar1",
"frbwar1",
"spvear1",
"ltbwar1",
"cbbwar2",
"cbbwar4",
"talgrw1",
"cbbwar3",
"spobuw1",
"spobuw2",
"spobuw3",
"taibuw1",
"rubwar1",
"wvvear1",
"dabwar1",
"jabwar1",
"sicbuw1",
"benbuw1",
"flrgra1",
"spibir1",
"fernbi1",
"litgra1",
"malia1",
"broson1",
"tobhum1",
"bubbus1",
"rufson1",
"tawgra2",
"tawgra3",
"guathi2",
"necgra1",
"lolwar1",
"strgra1",
"ceybuw1",
"brtgra2",
"horsun2",
"brigra2",
"fatgra1",
"knswar1",
"banscw1",
"afswar1",
"camscw1",
"olbsun4",
"apbsun2",
"flbsun2",
"sousun2",
"pucfai1",
"madsun1",
"seysun2",
"humsun2",
"anjsun2",
"maysun2",
"lobsun2",
"gyhsun2",
"moasun1",
"linsun1",
"mansun1",
"bkefai1",
"mewsun2",
"mousun1",
"bohsun1",
"elesun1",
"lovsun1",
"hansun1",
"gousun1",
"grtsun1",
"fotsun1",
"bltsun1",
"whtgol1",
"eacsun1",
"magsun1",
"wecsun1",
"scasun1",
"temsun1",
"fitsun1",
"punsun1",
"litspi1",
"ortspi1",
"palspi2",
"tepgol1",
"thbspi1",
"lobspi1",
"spespi2",
"yeespi1",
"nafspi1",
"gybspi2",
"stbspi2",
"borspi1",
"strspi1",
"whispi1",
"hawgoo",
"grtgol1",
"cinwhe1",
"palroc1",
"rocpet1",
"whrsno1",
"tibsno2",
"whwsno1",
"blwsno1",
"runsno1",
"pedsno1",
"blasno1",
"fitawl1",
"yespet1",
"yetpet1",
"buspet1",
"chspet1",
"capspa1",
"chespa1",
"shrspa1",
"kerspa2",
"grrspa1",
"gyhspa1",
"ruthum1",
"swaspa2",
"swaspa1",
"pabspa1",
"sghspa2",
"sinspa1",
"russpa2",
"eutspa",
"saxspa1",
"plbspa1",
"socspa1",
"jamman1",
"spaspa1",
"itaspa1",
"houspa",
"somspa1",
"cavspa1",
"desspa3",
"argspa2",
"sugspa1",
"desspa1",
"wbbwea1",
"bltman1",
"rbbwea1",
"whbwea1",
"wbswea1",
"ccswea1",
"dsswea1",
"cbswea1",
"rutwea1",
"gyhsow1",
"bcswea1",
"socwea1",
"grtman1",
"scawea1",
"spfwea1",
"growea1",
"bagwea1",
"banwea1",
"berwea2",
"slbwea1",
"litwea1",
"spewea1",
"bknwea2",
"gnbman",
"strwea1",
"blbwea1",
"capwea1",
"bocwea1",
"afgwea1",
"hogwea1",
"orawea1",
"hemwea1",
"gopwea1",
"tagwea1",
"verman1",
"sbtwea1",
"kilwea1",
"ruewea1",
"lesmaw1",
"afmwea",
"kamwea1",
"vimwea1",
"spewea2",
"vilwea1",
"viewea3",
"antman2",
"blhwea1",
"gobwea1",
"cinwea1",
"chewea1",
"yemwea1",
"nelwea1",
"sakwea1",
"asgwea2",
"comwea1",
"strwea2",
"greman1",
"baywea1",
"forwea1",
"usawea1",
"brcwea1",
"bawwea1",
"recmal2",
"bltmal1",
"balmal2",
"revmal1",
"gramal1",
"cangoo",
"grtcar1",
"rehmal1",
"cremal1",
"rehwea1",
"carque1",
"rehque1",
"rebque1",
"redfod1",
"rehfod1",
"forfod1",
"maufod1",
"putcar1",
"seyfod1",
"rodfod1",
"yecbis",
"blabis1",
"zanbis1",
"blwbis1",
"redbis",
"orabis1",
"yelbis1",
"fatwid1",
"ortsun1",
"yeswid2",
"marwid1",
"whwwid1",
"recwid3",
"lotwid1",
"picmun1",
"moufir1",
"diafir1",
"reefir1",
"beafir1",
"amtsun3",
"crifin1",
"rebfir1",
"paifir1",
"stafin1",
"plhfin1",
"dobfin1",
"lotfin1",
"bltfin1",
"gyhsil1",
"broman1",
"amtsun2",
"magman1",
"bawman1",
"bawman3",
"madmun1",
"afrsil1",
"indsil",
"sthmun2",
"nutman",
"bltmun1",
"blfmun1",
"amtsun4",
"whrmun",
"dusmun1",
"javmun1",
"trimun",
"chemun",
"whhmun1",
"blbmun1",
"snmmun1",
"gybmun1",
"gycmun1",
"gorsun1",
"gyhmun1",
"hoomun1",
"neimun1",
"motmun1",
"yermun1",
"chbmun1",
"blamun1",
"bismun1",
"goufin3",
"tabpar1",
"tousun1",
"retpar3",
"fijpar1",
"reepar2",
"blfpar3",
"fepoli1",
"yebwax2",
"swewax1",
"swewax3",
"grbtwi1",
"ducwin1",
"litsun1",
"abcwin1",
"rfcwin1",
"refant1",
"whbneg2",
"chbneg1",
"gyhneg1",
"rerwax1",
"blcwax1",
"lavwax",
"bltwax1",
"putsun1",
"cinwax1",
"blcwax2",
"kanwax1",
"orcwax",
"anawax1",
"fabwax1",
"comwax",
"bkrwax",
"crrwax1",
"arawax1",
"bargoo",
"roysun1",
"quailf1",
"locust3",
"cutthr1",
"rehfin1",
"zebwax2",
"redava",
"purgre2",
"viewax1",
"bubcor1",
"reccor",
"grbfir1",
"blccor1",
"grablu1",
"wesblu1",
"rehblu1",
"lessee1",
"blbsee1",
"grwpyt1",
"orwpyt1",
"rewpyt1",
"dybtwi1",
"juffir1",
"dustwi1",
"pettwi1",
"rebfir2",
"afffin",
"jamfir1",
"malfir1",
"rocfir1",
"blbfir1",
"babfir1",
"brnfir1",
"gretho1",
"bkffir1",
"vilind",
"purind1",
"bakind1",
"varind1",
"greind1",
"pawind1",
"jopind1",
"camind1",
"pitwhy",
"wictho2",
"stbwhy1",
"sttwhy1",
"rottan2",
"crebun1",
"slabun1",
"corbun1",
"yellow2",
"pinbun",
"rocbun1",
"godbun1",
"bkbtho1",
"meabun1",
"chbbun1",
"gyhbun1",
"cinbun1",
"ortbun1",
"crebun2",
"cirbun1",
"houbun2",
"houbun3",
"lalbun1",
"ratcoq2",
"cibbun1",
"gosbun1",
"capbun1",
"tribun1",
"chebun2",
"litbun",
"yebbun1",
"rusbun",
"yetbun1",
"sombun1",
"tufcoq1",
"gobbun1",
"tibbun1",
"yelbun1",
"grybun",
"palbun",
"ocrbun1",
"reebun",
"garwar1",
"abycat1",
"busbla1",
"doecoq1",
"afhbab1",
"afhbab3",
"barwar1",
"laywar2",
"banwar2",
"ruvwar2",
"smawhi1",
"leswhi4",
"humwhi1",
"brnwar1",
"fricoq1",
"yemwar1",
"reswar1",
"weowar2",
"eaowar2",
"afdwar1",
"asdwar1",
"triwar1",
"menwar1",
"ruewar1",
"cypwar1",
"cacgoo1",
"ruccoq1",
"sarwar1",
"subwar6",
"subwar8",
"easwar1",
"spewar2",
"marwar1",
"darwar1",
"lottit1",
"lottit5",
"blttit2",
"fescoq3",
"whttit1",
"bkbtit3",
"bkbtit4",
"bkbtit6",
"sootit1",
"pygtit1",
"wbtwar1",
"bushti",
"woowar",
"eabwar1",
"blccoq1",
"bubwar1",
"astwar2",
"yebwar3",
"brlwar1",
"chilew1",
"parwar1",
"siclew1",
"ganlew1",
"palwar5",
"tylwar1",
"whccoq1",
"yeswar1",
"radwar1",
"subwar2",
"y00989",
"smowar1",
"duswar",
"pllwar2",
"butwar1",
"wlwwar",
"mouchi2",
"ecupie1",
"caichi1",
"gyhcaf1",
"citcaf1",
"afbfly1",
"wtbfly1",
"wbcfly1",
"wtcfly1",
"fictit1",
"yebtit3",
"sultit1",
"spehum1",
"bkbtit2",
"ruvtit2",
"coatit2",
"yebtit4",
"eletit2",
"paltit2",
"cretit2",
"gyctit1",
"britit",
"oaktit",
"lotsyl1",
"juntit1",
"tuftit",
"blctit4",
"vartit1",
"vartit4",
"vartit2",
"vartit3",
"whbtit4",
"somtit3",
"gyhchi",
"vitsyl1",
"chbchi",
"borchi2",
"mexchi",
"carchi",
"bkcchi",
"mouchi",
"pedtit1",
"bkbtit1",
"martit2",
"sictit1",
"vensyl1",
"castit2",
"afbtit2",
"redwin",
"eurbla",
"yemthr1",
"islthr24",
"gywbla1",
"eurbla2",
"ticthr1",
"blbthr2",
"retcom1",
"japthr1",
"gybthr1",
"eyethr",
"palthr1",
"gysthr1",
"brhthr1",
"izuthr1",
"islthr1",
"tibbla1",
"whbthr2",
"bahgoo",
"brtcom1",
"rinouz1",
"datthr1",
"retthr1",
"dusthr2",
"dusthr1",
"chethr1",
"whcbla1",
"amerob",
"blarob1",
"rucrob1",
"gybcom1",
"soorob1",
"relthr1",
"whcthr1",
"forthr1",
"mourob1",
"paethr1",
"lasthr1",
"chbthr2",
"plbthr2",
"chithr1",
"andhil3",
"slathr2",
"glbthr1",
"blhthr1",
"grethr1",
"trepip",
"olbpip",
"pecpip",
"rospip1",
"retpip",
"amepip",
"whshil1",
"watpip1",
"rocpip1",
"nilpip1",
"uplpip1",
"berpip1",
"strpip1",
"yetpip1",
"shtpip1",
"buspip1",
"sokpip1",
"ecuhil1",
"malpip1",
"yebpip2",
"alppip1",
"sprpip",
"yelpip2",
"yelpip3",
"shbpip1",
"shbpip3",
"chapip1",
"corpip1",
"buthil1",
"ocbpip1",
"helpip1",
"parpip1",
"przros1",
"brobul1",
"rehbul1",
"gyhbul2",
"gyhbul5",
"whcbul1",
"eurbul",
"andhil2",
"eurbul1",
"crwfin2",
"crwfin1",
"trufin2",
"monfin2",
"spefin1",
"gonfin1",
"dabros1",
"plmfin1",
"bhmfin1",
"blbhil1",
"asrfin1",
"gcrfin",
"bkrfin",
"bcrfin",
"comros",
"scafin1",
"strros1",
"greros1",
"blyros1",
"remros1",
"wethil1",
"bearos1",
"chbros1",
"pirros1",
"pibros2",
"darros1",
"spwros2",
"spwros3",
"vinros2",
"vinros3",
"sinros1",
"mouavo1",
"palros3",
"tibros1",
"lotros1",
"palros2",
"thbros1",
"whbros1",
"cwbros1",
"refros1",
"crbfin3",
"mauala",
"empgoo",
"blttra1",
"akikik",
"nihfin",
"palila",
"iiwi",
"crehon",
"apapan",
"akiapo",
"maupar",
"aniani",
"hawcre",
"grttra1",
"akekee",
"akepa1",
"hawama",
"oahama",
"kauama",
"purfin",
"casfin",
"houfin",
"rcgspa1",
"cantow",
"blbtho1",
"whttow1",
"abetow",
"caltow",
"wegspa1",
"pregrs1",
"pregrs2",
"russpa1",
"rucspa",
"oaxspa1",
"gnttow",
"pubtho1",
"spotow",
"eastow",
"coltow1",
"rcbfin1",
"wnbfin1",
"yetfin1",
"yegfin1",
"mobfin1",
"moubru2",
"tebfin1",
"brttho1",
"smbfin1",
"obbfin1",
"yehbrf1",
"dhbfin1",
"wrbfin1",
"whbfin1",
"rebfin1",
"tribrf1",
"trbfin1",
"slbfin2",
"rabtho1",
"pnbfin1",
"yebbrf1",
"yebbru1",
"wwbfin1",
"phbfin1",
"bcbfin1",
"rbbfin1",
"apubrf1",
"cuzbrf1",
"bkfbrf1",
"ructho1",
"rnbfin1",
"fhbfin1",
"ysbfin1",
"wectan1",
"eactan1",
"bcptan1",
"grtwar1",
"whwwar1",
"purtan1",
"wesspi",
"blmtho1",
"hisspi",
"purspi",
"wrenth1",
"yehwar1",
"oriwar1",
"yebcha",
"yehbla",
"boboli",
"wesmea",
"easmea",
"bufhel1",
"rebbla1",
"whbbla2",
"permea1",
"lotmea1",
"pammea1",
"yebcac1",
"yewcac1",
"chhoro1",
"ruboro1",
"dugoro1",
"bubhel1",
"creoro1",
"greoro1",
"olioro1",
"monoro1",
"blaoro1",
"bauoro2",
"sobcac1",
"gowcac1",
"selcac1",
"ecucac1",
"soucas1",
"rosgoo",
"gnbhel1",
"yercac1",
"scrcac1",
"moucac1",
"batoro1",
"casoro2",
"rercac1",
"scoori",
"yebori1",
"audori",
"jamori1",
"tyrmet1",
"oraori1",
"altori",
"yelori1",
"bulori",
"stbori",
"blbori1",
"balori",
"yetori1",
"spbori",
"wheori1",
"permet1",
"camtro1",
"ventro1",
"orbtro3",
"bawori1",
"bkvori",
"hooori",
"bkcori",
"orcori",
"orcori3",
"graori2",
"virmet1",
"graori3",
"marori1",
"graori4",
"graori1",
"orcori1",
"epaori4",
"epaori1",
"jambla1",
"yesbla1",
"tasbla",
"vitmet1",
"tribla",
"rewbla",
"resbla1",
"scrcow1",
"giacow",
"shicow",
"brocow",
"brocow2",
"bnhcow",
"scrbla1",
"nebmet1",
"armbab1",
"bacbab1",
"blabab2",
"dusbab2",
"sopbab1",
"harbab1",
"bklbab1",
"bkfbab1",
"norpib1",
"taihwa1",
"fitmet1",
"lenlau1",
"whclau2",
"blhlau1",
"whnlau1",
"grylau1",
"ruclau3",
"suklau1",
"ruclau1",
"spolau1",
"gialau1",
"scamet1",
"barlau1",
"wynlau1",
"ruvlau1",
"whclau1",
"chhlau1",
"runlau1",
"chblau1",
"whblau1",
"maslau1",
"gnlthr",
"blamet1",
"pedlau1",
"chibub1",
"chibab2",
"giabab1",
"tibbab1",
"whtlau1",
"ruclau2",
"gyslau",
"ruslau1",
"spothr1",
"grepuf1",
"dapthr1",
"gycill1",
"capsug1",
"gursug1",
"phifab1",
"ruckin",
"firecr1",
"firecr3",
"gockin",
"flamec1",
"snogoo",
"butpuf1",
"goldcr1",
"spwbab1",
"yebhyl1",
"souhyl1",
"whhwre1",
"babwre1",
"grbwre1",
"stbwre2",
"faswre1",
"giawre1",
"hoapuf1",
"bicwre1",
"runwre1",
"runwre3",
"runwre4",
"spowre1",
"bouwre1",
"yucwre1",
"thlwre1",
"gymwre1",
"tobwre1",
"blbpuf3",
"rocwre",
"canwre",
"sumwre1",
"navwre1",
"rufwre1",
"shawre1",
"perwre1",
"fulwre1",
"sedwre1",
"merwre1",
"glopuf2",
"apowre1",
"sedwre",
"marwre",
"bewwre",
"zapwre1",
"bltwre1",
"incwre1",
"mouwre2",
"whiwre1",
"corwre1",
"bltpuf1",
"hapwre1",
"spbwre1",
"rubwre1",
"spbwre2",
"banwre1",
"rawwre1",
"antwre2",
"nicwre1",
"sinwre1",
"plawre1",
"cobpuf1",
"plawre3",
"istwre1",
"cibwar1",
"gryemt1",
"bretai1",
"jrswar1",
"afbwar1",
"wwswar1",
"grswar1",
"hirwar2",
"savpuf1",
"bkcdon",
"whtoxy1",
"lobber1",
"crywar1",
"wetjer2",
"thamno2",
"spetet1",
"apptet1",
"dustet1",
"gyctet1",
"gobpuf1",
"yeboxy1",
"ranwar1",
"comjer1",
"grejer1",
"sttjer1",
"refcis1",
"sincis1",
"whicis1",
"tricis1",
"chacis1",
"blcpuf1",
"bubcis1",
"huncis1",
"chucis1",
"bllcis1",
"rolcis2",
"ratcis1",
"borcis1",
"chucis2",
"ashcis1",
"grycis1",
"embpuf1",
"rehcis2",
"waicis1",
"waicis2",
"wincis2",
"wincis3",
"wincis4",
"wincis6",
"wincis5",
"chicis1",
"carcis1",
"gragoo",
"marspa1",
"tincis1",
"stocis1",
"abecis1",
"crocis1",
"dorcis1",
"tincis3",
"sifcis1",
"rufcis1",
"foxcis1",
"pipcis2",
"shisun1",
"tabcis1",
"zitcis1",
"soccis1",
"madcis2",
"descis1",
"clocis1",
"clscis1",
"pepcis1",
"paccis1",
"wiscis1",
"pubsun1",
"gohcis1",
"socwar2",
"strpri2",
"strpri8",
"bropri1",
"brnpri2",
"brnpri3",
"hilpri1",
"hilpri2",
"gycpri1",
"blhsun1",
"rufpri2",
"rufpri1",
"gybpri1",
"grapri1",
"delpri1",
"junpri1",
"bawpri1",
"yebpri1",
"ashpri1",
"tafpri1",
"broinc1",
"plapri1",
"palpri1",
"rivpri1",
"blcpri1",
"karpri1",
"drapri1",
"banpri1",
"banpri3",
"rewpri1",
"refwar2",
"broinc2",
"whcpri2",
"silpri2",
"nampri1",
"robpri1",
"minmib1",
"grnlon1",
"blcapa2",
"ruwapa1",
"crilon1",
"bubwar2",
"blainc1",
"batapa2",
"batapa3",
"batapa4",
"rudapa1",
"yebapa2",
"yebapa1",
"masapa1",
"blfapa1",
"bltapa1",
"whwapa1",
"colinc1",
"blcapa1",
"blhapa1",
"chiapa1",
"chtapa3",
"chaapa1",
"shaapa2",
"butapa1",
"karapa1",
"gosapa1",
"gryapa1",
"vitsta1",
"brhapa1",
"ruewar2",
"oriwar2",
"gycwar3",
"grbcam1",
"gnbcam3",
"yebcam1",
"olgcam1",
"grywrw1",
"miowrw2",
"raista1",
"miowrw3",
"bawwar1",
"kopwar1",
"bkcruw1",
"bkfruw1",
"mrmwar1",
"mrmwar3",
"comtai1",
"dantai1",
"camtai1",
"swagoo1",
"whtsta1",
"phitai1",
"ruftai1",
"gybtai1",
"ruttai1",
"ashtai1",
"olbtai1",
"whetai1",
"whbtai1",
"yebtai1",
"lobtai1",
"dussta1",
"afrtai2",
"whtwar1",
"yebere1",
"salere1",
"yevere1",
"senere1",
"grbere1",
"greere1",
"yerere1",
"bunere1",
"buwsta1",
"rucere1",
"turere1",
"blnere1",
"shtwhy1",
"eapwhy1",
"nopwhy1",
"ltpwhy1",
"btpwhy1",
"oliwar",
"alpacc1",
"gobsta1",
"himacc1",
"robacc1",
"rubacc1",
"sibacc",
"broacc1",
"radacc2",
"bltacc1",
"monacc1",
"dunnoc1",
"japacc1",
"mouvel1",
"mabacc1",
"forwag1",
"eaywag1",
"eaywag",
"citwag",
"capwag1",
"madwag1",
"grywag",
"mouwag1",
"whiwag",
"swbhum1",
"afpwag1",
"mekwag1",
"japwag1",
"whbwag1",
"shalon1",
"abylon1",
"fuelon2",
"ortlon1",
"yetlon1",
"panlon1",
"gresap1",
"rotlon1",
"ricpip1",
"oripip1",
"auspip2",
"auspip3",
"afrpip1",
"moupip1",
"blypip1",
"tawpip1",
"lobpip1",
"butcor1",
"lobpip7",
"woopip1",
"bufpip1",
"plbpip1",
"lolpip1",
"meapip1",
"dausta1",
"chcsta1",
"whssta2",
"chtsta2",
"chbcor1",
"whhsta2",
"malsta1",
"brasta1",
"whfsta2",
"rossta2",
"eursta",
"sposta1",
"watsta1",
"bbgsta1",
"phgsta1",
"vepcor1",
"ctgsta1",
"capgls1",
"gbesta1",
"lbesta1",
"btgsta1",
"spgsta1",
"pugsta1",
"ruegls1",
"ltgsta1",
"gobsta5",
"taibeg1",
"boorat1",
"megsta1",
"bugsta1",
"stgsta1",
"supsta1",
"hilsta1",
"shesta1",
"chbsta1",
"ashsta2",
"fissta1",
"afpsta1",
"boorat2",
"whcsta3",
"madsta1",
"vibsta2",
"rewsta1",
"slbsta1",
"chwsta1",
"walsta1",
"somsta1",
"trista1",
"pawsta1",
"rubrat1",
"brcsta1",
"whbsta1",
"neusta1",
"stusta1",
"kensta1",
"natsta1",
"shasta2",
"abbsta2",
"whcsta2",
"magsta1",
"whthil2",
"babsta1",
"stsrha2",
"stbrha1",
"yeboxp1",
"reboxp1",
"moublu",
"wesblu",
"easblu",
"fifthr1",
"rufthr1",
"whthil3",
"wtathr1",
"rtathr1",
"boucha1",
"brbsol1",
"slcsol1",
"towsol",
"puaioh",
"omao",
"cubsol1",
"rutsol1",
"pubwhi1",
"blfsol1",
"varsol1",
"andsol1",
"sulthr1",
"fruith1",
"purcoc1",
"grecoc1",
"varthr",
"aztthr",
"rubsol1",
"ruvwhi1",
"blasol1",
"whesol1",
"misthr1",
"afrthr1",
"abythr1",
"olithr2",
"kurthr1",
"comthr1",
"abethr1",
"karthr1",
"vebbri1",
"chibla1",
"balwar1",
"fitmyz1",
"rutbab1",
"gobful1",
"yeebab1",
"jerbab1",
"beibab1",
"speful1",
"indful1",
"pitbri1",
"chiful1",
"whbful1",
"ludful1",
"sttful2",
"sttful1",
"taiful1",
"wrenti",
"reepar3",
"blbpar2",
"spbpar1",
"ruwbri1",
"grepar1",
"bropar1",
"thtpar1",
"gyhpar3",
"bkhpar1",
"ruhpar3",
"ruhpar2",
"shtpar3",
"fulpar1",
"bltpar1",
"pifgoo",
"bltbri1",
"golpar2",
"spepar2",
"gyhpar4",
"brwpar2",
"vitpar1",
"astpar1",
"rutpar2",
"whcyuh1",
"chcyuh1",
"stryuh1",
"goujew1",
"indyuh1",
"blcyuh1",
"taiyuh1",
"whiyuh1",
"buryuh1",
"whnyuh1",
"sttyuh1",
"ruvyuh1",
"fltbab1",
"vispyb1",
"fabbri1",
"pygbab1",
"rucbab3",
"bkcbab3",
"pasbab1",
"chfbab1",
"nesbab1",
"giweye1",
"gyhwhe1",
"pyweye1",
"minwhe1",
"grcbri1",
"sthwhe1",
"whbwhe1",
"dacwhe1",
"timwhe1",
"flowhe1",
"ysweye1",
"bonhon1",
"ceywhe1",
"yelwhe1",
"bkcwhe1",
"empbri1",
"wbweye1",
"whbwhe3",
"brrwhe9",
"cfweye1",
"swiwhe1",
"mouble1",
"warwhe1",
"loweye2",
"coweye1",
"reuwhe1",
"vifbri1",
"mauwhe1",
"maswhe2",
"maswhe3",
"afywhe1",
"afywhe3",
"brrwhe8",
"heuwhe2",
"brrwhe3",
"abywhe1",
"anweye1",
"brarub1",
"brrwhe4",
"afywhe2",
"capwhe6",
"capwhe2",
"afywhe4",
"peweye1",
"seywhe1",
"madwhe1",
"kirwhe1",
"maywhe1",
"giahum1",
"bcweye2",
"bkrwhe1",
"bfweye1",
"wtweye1",
"crtwhe2",
"crtwhe1",
"burwhe1",
"serwhe1",
"asbwhe1",
"ayweye3",
"maghum1",
"silver3",
"humwhe1",
"sanwhe2",
"evweye1",
"baweye2",
"sacwhe1",
"capwhe3",
"capwhe8",
"yfweye1",
"vanwhe1",
"maghum2",
"laweye1",
"bhweye1",
"biweye1",
"gytwhe2",
"gytwhe1",
"yapwhe1",
"duweye1",
"koswhe1",
"rotwhe1",
"ytweye1",
"tunbeg1",
"fithum1",
"ngweye1",
"ambwhe1",
"grkwhe1",
"spweye2",
"likwhe1",
"gaweye1",
"soiwhe3",
"brweye1",
"ciweye1",
"loweye1",
"lobsta1",
"kulwhe1",
"llweye1",
"sbweye1",
"slweye1",
"gnbwhe1",
"chcbab1",
"tabbab1",
"dafbab1",
"gyftib1",
"gyctib1",
"plcsta",
"sttbab1",
"bostib1",
"fbtbab1",
"brtbab1",
"golbab1",
"chwbab1",
"chwbab3",
"crcbab1",
"rufbab2",
"blcbab2",
"stbsta1",
"rucbab1",
"bucbab1",
"rtwbab1",
"miswrb1",
"bwwbab1",
"patwrb1",
"ltwbab1",
"chhwrb1",
"tbwbab1",
"gybwrb1",
"bltsta2",
"blalau1",
"bahlau1",
"cobscb1",
"rbsbab1",
"sbsbab1",
"sbsbab3",
"taiscb1",
"wbsbab1",
"insbab1",
"srlscb1",
"wbmgem1",
"chbscb2",
"chbscb1",
"lasbab1",
"rcsbab1",
"spbscb1",
"bksscb1",
"gysscb1",
"sbsbab2",
"bltbab1",
"whbbab2",
"buthum",
"chrbab1",
"gytbab1",
"gyhbab1",
"nonbab1",
"soobab1",
"wbwbab1",
"chbbab1",
"whnbab1",
"whbbab1",
"sntbab1",
"amthum1",
"spnbab1",
"rurgra1",
"chigra1",
"lawbab1",
"mawbab1",
"btwbab1",
"socbab1",
"gybbab1",
"sccbab1",
"rucbab2",
"gtmgem1",
"moubab1",
"palbab1",
"whhbab2",
"colbab1",
"yetful1",
"ruwful1",
"bkcful1",
"gofful2",
"rutful1",
"rucful1",
"gbmgem1",
"dusful1",
"putbab1",
"bncbab1",
"marbab2",
"bkcbab1",
"blcbab1",
"shtbab1",
"ashbab1",
"sptbab1",
"bubbab1",
"gwfgoo",
"ptmgem",
"sumbab1",
"tembab1",
"whcbab1",
"ferbab1",
"sulbab1",
"ruvpri1",
"swapri1",
"broill1",
"pabill1",
"pabill3",
"whtmog2",
"mouill1",
"blaill1",
"scbill1",
"thrbab1",
"puvill1",
"ruwill1",
"stwbab3",
"abbbab1",
"horbab2",
"mowbab1",
"gathum1",
"stwbab1",
"limwrb4",
"limwrb2",
"limwrb3",
"rbwbab1",
"stwbab2",
"bowbab1",
"fawbab1",
"eywbab1",
"lbwbab1",
"amewoo1",
"sumwrb1",
"whtwrb1",
"namscb1",
"stsbab1",
"brcful1",
"bkbful1",
"broful1",
"javful1",
"nepful1",
"gycful3",
"pucwoo1",
"gycful5",
"gycful1",
"gycful4",
"mouful1",
"strlau2",
"cutia1",
"viecut1",
"scalau1",
"brclau1",
"blwlau1",
"oashum1",
"strlau1",
"bhulau1",
"strlau3",
"varlau1",
"blflau1",
"whwlau1",
"prhlau1",
"elllau1",
"retlau1",
"chclau2",
"shtwoo1",
"asslau1",
"rewlau1",
"sielau1",
"mallau1",
"bkclau1",
"bkclau2",
"kerlau2",
"lotsib1",
"whesib1",
"rufsib1",
"pershe2",
"beasib1",
"grysib1",
"blbsib1",
"blhsib1",
"hotbar1",
"taibar1",
"sttbar1",
"strbar1",
"blwmin1",
"chtmin1",
"matwoo1",
"rufbar1",
"spebar1",
"bkcbar1",
"retmin1",
"rubsib1",
"stelio1",
"reflio2",
"reflio3",
"lagbab2",
"ashlau1",
"putwoo1",
"slbbab1",
"rufbab3",
"orbbab1",
"junbab2",
"yebbab1",
"rufcha2",
"scacha1",
"irabab1",
"combab1",
"combab3",
"lwfgoo",
"chiwoo1",
"fulcha1",
"arabab1",
"strbab1",
"whtbab1",
"spibab1",
"capbab1",
"wtmbab1",
"brobab1",
"whrbab2",
"hipbab1",
"sltwoo1",
"scabab2",
"tanfin1",
"ytbtan1",
"shbbut1",
"atbtan1",
"scbtan1",
"cobtan1",
"tabtan1",
"dubtan1",
"tumspa1",
"whbwoo6",
"stcspa2",
"stcspa3",
"ruwspa",
"citspa1",
"sthspa1",
"blcspa1",
"brispa1",
"botspa",
"casspa",
"bacspa",
"litwoo5",
"graspa",
"graspa1",
"yebspa1",
"olispa",
"grbspa1",
"blsspa1",
"tocspa1",
"fisspa",
"bktspa",
"larspa",
"gorwoo2",
"larbun",
"chispa",
"clcspa",
"bkcspa",
"fiespa",
"brespa",
"worspa",
"sthbrf4",
"sthbrf5",
"sthbrf3",
"samwoo2",
"sthbrf1",
"sthbrf8",
"sthbrf2",
"orbspa1",
"blcspa2",
"gowspa1",
"pecspa1",
"safspa1",
"hacspa1",
"sabspa4",
"spthum1",
"sabspa1",
"gsbfin1",
"ccbfin",
"soffin1",
"olifin1",
"foxsp2",
"foxsp3",
"foxsp4",
"foxspa",
"amtspa",
"sleshe1",
"voljun1",
"daejun",
"yeejun",
"yeejun2",
"rucspa1",
"whcspa",
"gocspa",
"harspa",
"whtspa",
"sagspa1",
"mexshe1",
"belspa2",
"strspa1",
"vesspa",
"lecspa",
"seaspa",
"nstspa",
"sstspa",
"baispa",
"henspa",
"savspa",
"luchum",
"simspa1",
"sonspa",
"linspa",
"swaspa",
"laffin1",
"zapspa1",
"whcbul2",
"whsbul1",
"blfbul1",
"combul4",
"cosswa1",
"beahum1",
"combul5",
"combul6",
"capbul1",
"afrmar2",
"sqtsaw1",
"blksaw1",
"whhsaw1",
"banmar1",
"bramar1",
"masmar1",
"bkchum",
"banswa",
"pasmar1",
"plamar1",
"gytmar1",
"treswa",
"vigswa",
"whrswa1",
"chiswa1",
"tumswa1",
"manswa1",
"rthhum",
"whwswa1",
"whbswa2",
"blcswa1",
"whtswa1",
"bawswa1",
"blcswa2",
"tahswa2",
"pafswa1",
"brbswa1",
"andswa2",
"verhum1",
"nrwswa",
"srwswa1",
"brcmar1",
"permar1",
"purmar",
"soumar",
"gybmar",
"cubmar",
"carmar1",
"gyrswa1",
"beehum1",
"whbswa3",
"eurcrm1",
"rocmar1",
"rocmar2",
"duscrm1",
"barswa1",
"piwswa1",
"pebswa1",
"pacswa1",
"pacswa3",
"annhum",
"welswa1",
"whtswa3",
"witswa1",
"wtbswa1",
"barswa",
"angswa1",
"recswa1",
"ethswa1",
"cohmar1",
"comhom2",
"coshum",
"nephom1",
"ashmar1",
"rucswa2",
"mosswa2",
"lessts1",
"grests1",
"rerswa1",
"srlswa1",
"strswa2",
"rubswa1",
"calhum",
"cliswa",
"cavswa",
"chcswa2",
"preswa2",
"retswa2",
"soaswa2",
"sttswa2",
"faimar2",
"tremar2",
"chicup1",
"rufhum",
"taiwrb1",
"immwrb1",
"pywbab1",
"mogwar1",
"capgra1",
"damroc1",
"yellon1",
"kemlon1",
"grylon1",
"pullon1",
"allhum",
"krelon1",
"norcro1",
"refcro1",
"capcro1",
"reccro1",
"grecro1",
"lebcro1",
"whbcro2",
"viswar1",
"yebwar1",
"dwacas1",
"blkswa",
"brthum",
"rufwar1",
"blfwar1",
"moutai2",
"ruhtai2",
"brbwar1",
"phbwar1",
"jabwar",
"manbuw1",
"pabwar1",
"tabwar1",
"bumhum",
"shawar1",
"odedi1",
"fibwar1",
"bfbwar1",
"yebbuw2",
"ybbwar1",
"subwar4",
"abbwar1",
"gybtes1",
"ructes1",
"withum1",
"javtes1",
"cetwar1",
"ccbwar1",
"gysbuw1",
"chhtes1",
"asistu1",
"borstu1",
"timstu1",
"neuwar1",
"stswar1",
"scihum1",
"yelfly2",
"chcfly1",
"livfly1",
"grehyl1",
"tithyl1",
"atlfly1",
"rucfly3",
"barfly1",
"rucfly1",
"furfly1",
"dushum1",
"palfly1",
"sumfly1",
"blbfly2",
"lisfly1",
"rutfly7",
"bunfly1",
"lomfly1",
"damfly1",
"alsred1",
"rubred2",
"cubeme1",
"bucred1",
"blared1",
"comred2",
"hodred1",
"whtred1",
"daured1",
"moured1",
"whwred2",
"blfred1",
"plured1",
"pureme1",
"whcred1",
"wwccha1",
"carthr1",
"serthr1",
"strthr1",
"mirthr1",
"rtrthr1",
"lirthr1",
"burthr",
"cbrthr1",
"blhhum1",
"bcrthr1",
"wtrthr1",
"litrot1",
"forrot2",
"whinch1",
"whbbus4",
"whtbus1",
"caisto1",
"stonec4",
"sibsto1",
"brbhum",
"stonec7",
"afrsto1",
"stonec6",
"reusto1",
"whtsto2",
"piebus1",
"jerbus1",
"grybus1",
"timbus1",
"busbus1",
"brbhum2",
"siccha1",
"karcha1",
"moocha1",
"moccha1",
"soocha1",
"noacha1",
"soacha1",
"ruecha1",
"mouwhe1",
"whbcha2",
"blnswa2",
"goceme1",
"ruacha1",
"norwhe",
"norwhe3",
"capwhe1",
"rebwhe2",
"heuwhe1",
"isawhe1",
"hoowhe1",
"deswhe1",
"bkewhe1",
"caneme1",
"bkewhe2",
"cypwhe1",
"piewhe1",
"wfbcha1",
"rerwhe1",
"blacks1",
"famcha1",
"brtcha1",
"somcha1",
"indcha1",
"weseme1",
"varwhe1",
"blawhe1",
"mouwhe4",
"whtwhe1",
"humwhe2",
"finwhe1",
"mouwhe6",
"mouwhe2",
"mouwhe7",
"mouwhe5",
"rebeme1",
"retwhe3",
"retwhe2",
"hercha1",
"grcfly3",
"whtdip1",
"brodip1",
"amedip",
"whcdip1",
"rutdip1",
"philea1",
"blteme1",
"yetlea1",
"leglea1",
"blwlea1",
"borlea1",
"jerlea1",
"goflea1",
"orblea1",
"orblea3",
"olbflo1",
"yebflo2",
"chieme1",
"crbflo1",
"palflo1",
"yerflo1",
"scbflo2",
"speflo1",
"gorflo1",
"thbflo1",
"thbflo3",
"whiflo1",
"yevflo1",
"glbeme1",
"yebflo1",
"whtflo1",
"yesflo1",
"olcflo1",
"bicflo1",
"resflo1",
"rekflo1",
"sccflo1",
"cebflo1",
"orbflo1",
"shteme1",
"whbflo1",
"pabflo1",
"plaflo1",
"plaflo2",
"andflo1",
"pygflo1",
"crcflo1",
"flbflo2",
"flbflo3",
"ashflo1",
"whehum",
"olcflo2",
"recflo1",
"louflo1",
"rebflo1",
"midflo1",
"motflo1",
"blfflo1",
"recflo2",
"mistle1",
"gysflo1",
"xanhum",
"blsflo1",
"fibflo2",
"blbflo1",
"scbflo1",
"schflo1",
"rucsun2",
"sctsun2",
"gyhsun1",
"plbsun1",
"ancsun1",
"mutswa",
"wetsab1",
"plasun1",
"pltsun2",
"retsun3",
"mobsun1",
"wvbsun1",
"kvbsun1",
"ligsun2",
"gresun1",
"grnsun2",
"bansun1",
"lotsab1",
"colsun2",
"pygsun2",
"nivsun2",
"amasun2",
"reisun2",
"orbsun2",
"gnhsun1",
"btbsun2",
"camsun2",
"buhsun1",
"rufsab1",
"eaosun1",
"mocsun2",
"butsun2",
"carsun2",
"gntsun1",
"amesun2",
"sccsun2",
"hunsun2",
"socsun2",
"pursun3",
"emchum1",
"crbsun2",
"putsun3",
"vahsun1",
"blksun1",
"cotsun2",
"bocsun2",
"tacsun1",
"brosun1",
"malsun1",
"retsun2",
"vihhum1",
"gowsun2",
"olbsun3",
"tinsun2",
"miosun3",
"miosun2",
"sdcsun3",
"neesun2",
"stusun1",
"mdcsun3",
"ndcsun2",
"anchum1",
"gdcsun2",
"regsun2",
"edcsun3",
"edcsun4",
"morsun2",
"lovsun3",
"beasun2",
"marsun2",
"shesun2",
"recsun2",
"samblo1",
"bkbsun1",
"pubsun4",
"tsasun1",
"pemsun2",
"ortsun3",
"palsun2",
"shisun3",
"splsun2",
"johsun2",
"supsun2",
"tolblo1",
"ruwsun2",
"oussun2",
"whbsun2",
"varsun2",
"dussun2",
"urssun2",
"batsun2",
"copsun2",
"pursun4",
"yebdac1",
"plover3",
"turdac1",
"blfdac1",
"bkfdac1",
"blldac1",
"whbdac1",
"mccfin1",
"bltsal1",
"orisal1",
"grwsal1",
"grasal2",
"plover4",
"grasal4",
"grasal3",
"strsal1",
"butsal1",
"blwsal1",
"blhsal1",
"blcsal1",
"bltgro2",
"slcgro1",
"massal1",
"truswa",
"gybsab1",
"thbsal1",
"gobsal1",
"banana",
"yefgra1",
"orange1",
"purbul1",
"grabul1",
"cubbul1",
"yesgra1",
"barbul1",
"gybsab4",
"leabul1",
"bkfgra",
"soogra2",
"ducgra2",
"warfin1",
"grywaf1",
"vegfin2",
"metfin1",
"woofin1",
"smtfin1",
"gybsab5",
"latfin1",
"smgfin1",
"larcaf2",
"lagfin1",
"cocfin3",
"megfin1",
"blbgra1",
"bawtan1",
"cobtan2",
"ructan4",
"rubsab1",
"grhtan1",
"blgtan1",
"flctan1",
"whstan1",
"yectan1",
"pilfin1",
"recfin1",
"fuctan1",
"tactan1",
"whltan1",
"whtsab1",
"restan1",
"ructan1",
"crbfin1",
"wwstan1",
"fustan1",
"btstan1",
"wtstan1",
"crctan1",
"flrtan1",
"flrtan3",
"lazsab1",
"y00599",
"bratan1",
"crbtan1",
"mactan1",
"bkbtan1",
"lessee2",
"linsee1",
"whcsee2",
"whcsee1",
"varsee3",
"viosab1",
"grysee1",
"wibsee1",
"whnsee1",
"caqsee1",
"bawsee1",
"docsee1",
"yebsee1",
"dubsee1",
"tbsfin1",
"cbsfin",
"bubsab1",
"nisfin1",
"gbsfin1",
"lbsfin1",
"bbsfin1",
"slcsee1",
"temsee1",
"bufsee1",
"plusee1",
"trosee1",
"rucsee1",
"napsab1",
"whtsee1",
"whbsee1",
"pabsee1",
"chtsee1",
"chbsee1",
"rubsee1",
"capsee1",
"batsee2",
"tabsee1",
"datsee1",
"brtplu1",
"pebsee1",
"rursee1",
"chesee1",
"marsee1",
"blbsee2",
"cinfin1",
"slbfin3",
"gyhbut1",
"blhhem1",
"bowfin1",
"tunswa",
"whvplu1",
"barwaf1",
"barwaf2",
"cowfin1",
"tumfin1",
"comfin1",
"rubhem1",
"gychem1",
"blchem1",
"bkchem2",
"orbhem1",
"crowoo1",
"olehem1",
"blehem1",
"bkehem3",
"bkehem1",
"fuhtan1",
"bubtan1",
"orhtan1",
"chhtan1",
"raytan1",
"suphem1",
"fotwoo1",
"ructan3",
"brftan1",
"bcwfin1",
"ltrfin1",
"whrtan1",
"rswfin1",
"cbmfin1",
"bbbtan1",
"pardus2",
"rrwfin1",
"lotwoo2",
"gytwaf1",
"rbwfin2",
"ptwfin1",
"riwfin1",
"thshem1",
"bcwfin2",
"ciwfin1",
"pebcon1",
"biccon1",
"chvcon1",
"vicwoo2",
"whecon1",
"capcon1",
"giacon1",
"blbcon1",
"whbcon1",
"tamcon1",
"rubcon1",
"cincon1",
"styfin1",
"sutfin1",
"snowca1",
"bryfin1",
"saffin",
"ofyfin1",
"gryfin1",
"chyfin1",
"payfin1",
"gryfin3",
"monyef1",
"gryfin2",
"rayfin1",
"whteme1",
"puyfin1",
"gyhsif1",
"pasfin1",
"bhsfin1",
"pesfin1",
"wilfin3",
"nigfin3",
"cawfin1",
"yebfin1",
"absfin1",
"mexwoo1",
"plsfin1",
"unifin1",
"slafin1",
"pebfin1",
"tildac1",
"rbsfin1",
"wtsfin1",
"wwdfin1",
"shtfin1",
"batsee1",
"stthum1",
"plcsee1",
"parsee1",
"debflo1",
"bluflo1",
"masflo1",
"indflo1",
"rusflo1",
"slaflo1",
"cibflo1",
"mouflo1",
"blbhum1",
"gloflo1",
"chbflo1",
"greflo1",
"whsflo1",
"gybflo1",
"bktflo1",
"merflo1",
"blkflo1",
"vertan1",
"pumtan2",
"whoswa",
"scbhum1",
"yettan1",
"goctan3",
"yestan1",
"goctan4",
"fabtan1",
"baytan3",
"rubsal1",
"bbmtan1",
"cbmtan1",
"homtan1",
"bufhum1",
"blctan2",
"mamtan1",
"grgtan1",
"bcmtan1",
"gbmtan1",
"bwmtan1",
"bcmtan2",
"bkcmot1",
"sbmtan1",
"lamtan1",
"tumhum1",
"glgtan1",
"multan1",
"oretan1",
"orttan1",
"ygbtan1",
"gortan1",
"mobtan1",
"goctan1",
"eurnut1",
"rebcho1",
"spthum2",
"yebcho1",
"piapia1",
"eurjac",
"daujac1",
"houcro1",
"neccro1",
"bancro1",
"slbcro1",
"slbcro4",
"slbcro3",
"mashum1",
"pipcro1",
"flocro1",
"marcro1",
"lobcro1",
"guacro1",
"boucro1",
"brhcro1",
"grycro1",
"capcro2",
"rook1",
"swthum1",
"amecro",
"tamcro",
"sincro1",
"fiscro",
"palcro2",
"cupcro1",
"jamcro1",
"cubcro1",
"whncro1",
"carcro1",
"somhum1",
"hoocro1",
"colcro1",
"labcro1",
"labcro4",
"labcro3",
"torcro2",
"torcro3",
"litcro1",
"forrav1",
"litrav1",
"olshum1",
"ausrav1",
"brnrav1",
"somcro2",
"comrav",
"chirav",
"fatrav1",
"whnrav1",
"thbrav1",
"whwcho1",
"apostl1",
"stream2",
"lesmel1",
"gremel1",
"bucifr1",
"parcro1",
"paradi3",
"glmman2",
"crcman2",
"cucman1",
"truman1",
"lotpar1",
"stream3",
"splast1",
"ritast1",
"prsast1",
"huoast1",
"wespar1",
"carpar1",
"lawpar1",
"wahpar1",
"kospar1",
"grsbop1",
"freduc1",
"vichum",
"vosbop1",
"parrif1",
"vicrif1",
"magrif3",
"magrif2",
"blasic1",
"brosic1",
"blbsic1",
"pabsic1",
"mbopar2",
"grfhum1",
"wbopar1",
"kbopar1",
"walsta2",
"twwbop1",
"gbopar2",
"rbopar1",
"lbopar1",
"rbopar2",
"ebopar1",
"bbopar1",
"azchum1",
"whfrob1",
"payrob1",
"whbrob1",
"yelrob1",
"gybrob1",
"olyrob1",
"hoorob1",
"dusrob1",
"whwrob2",
"smorob2",
"buvhum1",
"bugrob1",
"whrrob2",
"manrob1",
"blcrob1",
"blsrob2",
"whbrob2",
"bltrob1",
"gyhrob1",
"ashrob2",
"gyhrob2",
"berhum",
"papscr2",
"nosrob1",
"sosrob1",
"lebfly3",
"gobfly2",
"jacwin1",
"torfly1",
"yebrob1",
"yelfly4",
"olifly3",
"blthum1",
"canfly2",
"garrob1",
"rosrob1",
"pinrob1",
"snmrob1",
"alprob1",
"flarob1",
"pacrob3",
"pacrob1",
"pacrob2",
"snbhum1",
"scarob2",
"recrob1",
"tomtit1",
"nezrob2",
"nezrob3",
"charob1",
"grbrob1",
"grgrob1",
"legrob1",
"whnroc1",
"stvhum2",
"rufroc1",
"orbroc1",
"marbab1",
"bohwax",
"japwax1",
"cedwax",
"bayfly1",
"grsfly1",
"ltsfly1",
"phaino",
"inchum1",
"hypoco1",
"palmch1",
"olfwhi1",
"yebfan1",
"faifly1",
"eurgre1",
"origre",
"origre6",
"yebgre4",
"viegre2",
"chbhum1",
"bkhgre1",
"desfin2",
"gowgro2",
"orifin1",
"afrcit1",
"wescit1",
"soucit1",
"blfcan1",
"papcan1",
"forcan1",
"flystd1",
"grbhum1",
"whrsee",
"bltcan1",
"yerser1",
"reisee2",
"olrser1",
"yetser1",
"salser1",
"yefcan",
"whbcan1",
"ankser2",
"corhum1",
"yemser1",
"capsis2",
"drasis2",
"norgrc1",
"sougrc1",
"yelcan1",
"brican1",
"sthsee2",
"sthsee3",
"blesee1",
"cinhum1",
"brrsee1",
"whtcan1",
"thbsee1",
"strsee1",
"tansee1",
"procan1",
"twite1",
"eurlin1",
"yemlin1",
"comred",
"bubhum",
"lesred1",
"hoared",
"parcro2",
"scocro1",
"reblei",
"melthr",
"rtlhum",
"honeme1",
"manhum1",
"amahum1",
"andeme1",
"shghum1",
"flistd1",
"gotsap1",
"vereme1",
"sathum1",
"sabhum1",
"humsap2",
"blhsap1",
"whceme1",
"plbeme1",
"whthum2",
"glteme1",
"falstd1",
"saseme1",
"rutsap1",
"gilhum1",
"whbhum1",
"gawhum1",
"blchum1",
"chahum1",
"puchum1",
"whbeme1",
"bltgol1"
] |
procodomatic/food_classifier
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# procodomatic/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset; the label set listed after this card matches the 101 classes of the Food-101 benchmark.
It achieves the following results on the evaluation set:
- Train Loss: 4.5588
- Validation Loss: 4.4765
- Train Accuracy: 0.0
- Epoch: 0
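Although most fields below are unfilled, the checkpoint itself can be exercised directly. The following is a minimal inference sketch, assuming the repo ships an image-processor config alongside its TF weights (if it does not, the processor of the base checkpoint `google/vit-base-patch16-224-in21k` can be substituted):
```python
import numpy as np
from transformers import AutoImageProcessor, TFViTForImageClassification

repo = "procodomatic/food_classifier"
# Assumption: the repo exposes a processor config next to its TF weights.
processor = AutoImageProcessor.from_pretrained(repo)
model = TFViTForImageClassification.from_pretrained(repo)

# A random RGB array stands in for a real food photo.
image = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs).logits
pred = int(np.argmax(logits, axis=-1)[0])
print(model.config.id2label[pred])  # one of the food labels listed below
```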
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: `AdamWeightDecay`
  - learning_rate: `keras.optimizers.schedules.PolynomialDecay` (initial_learning_rate: 3e-05, decay_steps: 20, end_learning_rate: 0.0, power: 1.0, cycle: False)
  - decay: 0.0
  - beta_1: 0.9
  - beta_2: 0.999
  - epsilon: 1e-08
  - amsgrad: False
  - weight_decay_rate: 0.01
- training_precision: float32
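The optimizer above can be reconstructed outside the original training script. A minimal sketch using the `AdamWeightDecay` class that ships with transformers, with the hyperparameter values copied from the list above:
```python
import tensorflow as tf
from transformers import AdamWeightDecay

# Linear decay from 3e-05 to 0 over 20 steps, as in the config above.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=3e-05,
    decay_steps=20,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = AdamWeightDecay(
    learning_rate=lr_schedule,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
)
```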
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 4.5588 | 4.4765 | 0.0 | 0 |
### Framework versions
- Transformers 4.41.2
- TensorFlow 2.15.0
- Datasets 2.19.2
- Tokenizers 0.19.1
|
[
"apple_pie",
"baby_back_ribs",
"bruschetta",
"waffles",
"caesar_salad",
"cannoli",
"caprese_salad",
"carrot_cake",
"ceviche",
"cheesecake",
"cheese_plate",
"chicken_curry",
"chicken_quesadilla",
"baklava",
"chicken_wings",
"chocolate_cake",
"chocolate_mousse",
"churros",
"clam_chowder",
"club_sandwich",
"crab_cakes",
"creme_brulee",
"croque_madame",
"cup_cakes",
"beef_carpaccio",
"deviled_eggs",
"donuts",
"dumplings",
"edamame",
"eggs_benedict",
"escargots",
"falafel",
"filet_mignon",
"fish_and_chips",
"foie_gras",
"beef_tartare",
"french_fries",
"french_onion_soup",
"french_toast",
"fried_calamari",
"fried_rice",
"frozen_yogurt",
"garlic_bread",
"gnocchi",
"greek_salad",
"grilled_cheese_sandwich",
"beet_salad",
"grilled_salmon",
"guacamole",
"gyoza",
"hamburger",
"hot_and_sour_soup",
"hot_dog",
"huevos_rancheros",
"hummus",
"ice_cream",
"lasagna",
"beignets",
"lobster_bisque",
"lobster_roll_sandwich",
"macaroni_and_cheese",
"macarons",
"miso_soup",
"mussels",
"nachos",
"omelette",
"onion_rings",
"oysters",
"bibimbap",
"pad_thai",
"paella",
"pancakes",
"panna_cotta",
"peking_duck",
"pho",
"pizza",
"pork_chop",
"poutine",
"prime_rib",
"bread_pudding",
"pulled_pork_sandwich",
"ramen",
"ravioli",
"red_velvet_cake",
"risotto",
"samosa",
"sashimi",
"scallops",
"seaweed_salad",
"shrimp_and_grits",
"breakfast_burrito",
"spaghetti_bolognese",
"spaghetti_carbonara",
"spring_rolls",
"steak",
"strawberry_shortcake",
"sushi",
"tacos",
"takoyaki",
"tiramisu",
"tuna_tartare"
] |
DBD-research-group/EfficientNet-B1-BirdSet-XCM
|
# Model Card for DBD-research-group/EfficientNet-B1-BirdSet-XCM
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [BirdSet Repository](https://github.com/DBD-research-group/BirdSet/tree/main)
- **Paper [optional]:** [BirdSet Paper](https://arxiv.org/pdf/2403.10380)
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
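Pending official instructions, a minimal loading sketch is shown below; it assumes this checkpoint follows the same conventions as the EfficientNet-B1-BirdSet-XCL card later in this document (single-channel spectrogram input):
```python
from transformers import EfficientNetForImageClassification

# Assumption: the XCM checkpoint is loaded like its XCL sibling
# (mono spectrogram input, classifier head sized to the label set).
model = EfficientNetForImageClassification.from_pretrained(
    "DBD-research-group/EfficientNet-B1-BirdSet-XCM",
    num_channels=1,
    ignore_mismatched_sizes=True,
)
```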
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"gretit1",
"eurbla",
"coatit2",
"pavpig2",
"rebpig1",
"plupig2",
"rudpig",
"eutdov",
"blgdov1",
"ruqdov",
"whtdov",
"grfdov1",
"moudov",
"houspa",
"gycwor1",
"amgplo",
"killde",
"amewoo",
"sposan",
"solsan",
"ribgul",
"barpet",
"hawpet1",
"greibi1",
"grswoo",
"grbher3",
"coohaw",
"gryhaw2",
"reshaw",
"hawhaw",
"amapyo1",
"fepowl",
"trsowl",
"tabsco1",
"brdowl",
"comnig1",
"blttro1",
"gnbtro1",
"viotro3",
"blctro1",
"coltro1",
"garkin1",
"rinkin1",
"belkin1",
"bucmot2",
"bucmot4",
"dunnoc1",
"higmot1",
"blfjac1",
"wespuf1",
"blfnun1",
"gilbar1",
"letbar1",
"rehbar1",
"kebtou1",
"whttou1",
"acowoo",
"skylar",
"yetwoo2",
"hofwoo1",
"rebwoo",
"wilsap",
"yebsap",
"dowwoo",
"litwoo2",
"haiwoo",
"whhwoo",
"whtwoo2",
"comrav",
"gogwoo1",
"norfli",
"scbwoo5",
"rinwoo1",
"linwoo1",
"pilwoo",
"renwoo1",
"blacar1",
"yehcar1",
"laufal1",
"eurgre1",
"baffal1",
"coffal1",
"buffal1",
"orcpar",
"cowpar1",
"blhpar1",
"whcpar",
"brwpar1",
"whfpar1",
"meapar",
"eurgol",
"orfpar",
"duhpar",
"rebmac2",
"crfpar",
"oliwoo1",
"plbwoo1",
"citwoo1",
"lobwoo1",
"amabaw1",
"strwoo2",
"trepip",
"elewoo1",
"butwoo1",
"stbwoo2",
"sthwoo1",
"strxen1",
"crfgle1",
"chwfog1",
"btfgle1",
"azaspi1",
"spwant2",
"comcha",
"eurnut2",
"bltant2",
"pygant1",
"whfant2",
"lowant1",
"gryant1",
"pltant1",
"dutant2",
"barant1",
"plwant1",
"fasant1",
"eurjay1",
"greant1",
"bsbeye1",
"gryant2",
"pluant1",
"goeant1",
"rucant2",
"blfant1",
"rufant3",
"astgna1",
"wibpip1",
"houwre",
"yectyr1",
"forela1",
"yebela1",
"whltyr1",
"rinant2",
"goftyr1",
"whbtot1",
"ruftof1",
"cotfly1",
"yemfly1",
"eursta",
"gycfly1",
"gocspa1",
"whcspa1",
"eulfly1",
"easpho",
"olsfly",
"wewpew",
"eawpew",
"aldfly",
"hamfly",
"cetwar1",
"dusfly",
"pasfly",
"pirfly1",
"rumfly1",
"socfly1",
"grcfly1",
"bobfly1",
"trokin",
"easkin",
"gramou1",
"barswa",
"whrsir1",
"ducfly",
"grcfly",
"ducatt1",
"brratt1",
"putfru1",
"scrpih1",
"blfcot1",
"lotman1",
"batman1",
"eurlin1",
"royfly1",
"mastit1",
"cinmou1",
"whwbec1",
"blcbec1",
"rotbec",
"ducgre1",
"reevir1",
"hutvir",
"yetvir",
"lottit1",
"buhvir",
"casvir",
"elepai",
"blcjay1",
"grnjay",
"brnjay",
"blujay",
"stejay",
"clanut",
"amegfi",
"carcro1",
"pinsis",
"puteup1",
"yeceup1",
"yeteup1",
"blbthr1",
"whnrob1",
"hauthr1",
"clcrob",
"bubwre1",
"muswre2",
"martit2",
"melbla1",
"rusbla",
"brebla",
"comgra",
"grtgra",
"ovenbi1",
"louwat",
"norwat",
"buwwar",
"bawwar",
"comchi1",
"eurbul",
"tenwar",
"orcwar",
"naswar",
"macwar",
"kenwar",
"hoowar",
"amered",
"babwar",
"bkbwar",
"yelwar",
"sarwar1",
"chswar",
"yerwar",
"btywar",
"towwar",
"herwar",
"thelar1",
"crelar1",
"btnwar",
"rucwar1",
"rucwar",
"eugori2",
"wlswar",
"sltred",
"scatan",
"westan",
"rcatan1",
"robgro",
"bkhgro",
"bubgro2",
"lazbun",
"bugtan",
"spofly1",
"blctan1",
"scrtan1",
"partan1",
"woothr",
"obnthr1",
"herthr",
"veery",
"evegro",
"strsal1",
"sibtan2",
"sonspa",
"buggna",
"whbnut",
"rebnut",
"brncre",
"grycat",
"rocpet1",
"comwax",
"easwar1",
"darwar1",
"tuftit",
"shttre1",
"chbchi",
"bkcchi",
"mouchi",
"amepip",
"gcrfin",
"palila",
"iiwi",
"apapan",
"hawcre",
"akepa1",
"firecr1",
"hawama",
"purfin",
"casfin",
"houfin",
"wegspa1",
"pregrs2",
"gnttow",
"eastow",
"boboli",
"ruboro1",
"eurmag1",
"olioro1",
"monoro1",
"sobcac1",
"yercac1",
"yebori1",
"balori",
"bnhcow",
"ruckin",
"gockin",
"runwre1",
"eurser1",
"thlwre1",
"rocwre",
"whiwre1",
"rubwre1",
"rawwre1",
"plawre1",
"moublu",
"easblu",
"towsol",
"omao",
"mallar3",
"andsol1",
"warwhe1",
"ccbfin",
"foxspa",
"whcspa",
"whtspa",
"vesspa",
"linspa",
"swaspa",
"treswa",
"eurrob1",
"woolar1",
"jabwar",
"grasal3",
"butsal1",
"blhsal1",
"yefgra1",
"flrtan1",
"amecro",
"cedwax",
"yefcan",
"reblei",
"cretit2",
"melthr",
"eucdov",
"zitcis1",
"spotow",
"bcnher",
"cirbun1",
"bewwre",
"rewbla",
"rucspa1",
"sonthr1",
"norcar",
"combuz1",
"grywag",
"cowpig1",
"amerob",
"melwar1",
"swathr",
"comyel",
"grekis",
"daejun",
"blackc1",
"carwre",
"warvir",
"eurwry",
"roahaw",
"cangoo",
"banana",
"gretin1",
"cintin1",
"littin1",
"undtin1",
"blutit",
"bartin2",
"horscr1",
"hawgoo",
"snogoo",
"tunswa",
"wooduc",
"grhcha1",
"specha3",
"colcha1",
"spigua1",
"redcro",
"mouqua",
"calqua",
"stwqua1",
"wiltur",
"kalphe",
"blkfra",
"chukar",
"ercfra",
"compau",
"compot1",
"winwre4",
"annhum",
"buvhum1",
"stvhum2",
"rtlhum",
"andeme1",
"strcuc1",
"squcuc1",
"yebcuc",
"scapig2",
"batpig1"
] |
DBD-research-group/EfficientNet-B1-BirdSet-XCL
|
# EfficientNet (trained on XCL from BirdSet)
EfficientNet trained on the XCL dataset from BirdSet, covering 9736 bird species from Xeno-Canto. Please refer to the [BirdSet Paper](https://arxiv.org/pdf/2403.10380) and the
[BirdSet Repository](https://github.com/DBD-research-group/BirdSet/tree/main) for further information.
## How to use
The BirdSet data needs a custom processor that is available in the BirdSet repository. The model does not have a processor available.
The model accepts a mono image (spectrogram) as input (e.g., `torch.Size([16, 1, 256, 417])`)
- The model is trained on 5-second clips of bird vocalizations.
- num_channels: 1
- pretrained checkpoint: google/efficientnet-b1
- sampling_rate: 32_000
- normalize spectrogram: mean: -4.268, std: 4.569 (from AudioSet)
- spectrogram: n_fft: 2048, hop_length: 256, power: 2.0
- melscale: n_mels: 256, n_stft: 1025
- dbscale: top_db: 80
See [model implementation](https://github.com/DBD-research-group/BirdSet/blob/main/birdset/modules/models/birdset_models/efficientnet_bs.py).
Run in [Google Colab](https://colab.research.google.com/drive/15Y4k8kvUV8k7Jay76r_X-wnKi6p8v-7N?usp=sharing):
```python
from transformers import EfficientNetForImageClassification
import torch
import torchaudio
from torchvision import transforms
import requests
import io
# download the audio file of a bird sound: Common Craw
url = "https://xeno-canto.org/704485/download"
response = requests.get(url)
audio, sample_rate = torchaudio.load(io.BytesIO(response.content))
print("Original shape and sample rate: ", audio.shape, sample_rate)
# crop to 5 seconds
audio = audio[:, : 5 * sample_rate]
# resample to 32kHz
resample = torchaudio.transforms.Resample(orig_freq=sample_rate, new_freq=32000)
audio = resample(audio)
print("Resampled shape and sample rate: ", audio.shape, 32000)
CACHE_DIR = "../../data_birdset" # Change this to your own cache directory
# Load the model
model = EfficientNetForImageClassification.from_pretrained(
"DBD-research-group/EfficientNet-B1-BirdSet-XCL",
num_channels=1,
cache_dir=CACHE_DIR,
ignore_mismatched_sizes=True,
)
class PowerToDB(torch.nn.Module):
"""
A power spectrogram to decibel conversion layer. See birdset.datamodule.components.augmentations
"""
def __init__(self, ref=1.0, amin=1e-10, top_db=80.0):
super(PowerToDB, self).__init__()
# Initialize parameters
self.ref = ref
self.amin = amin
self.top_db = top_db
def forward(self, S):
# Convert S to a PyTorch tensor if it is not already
S = torch.as_tensor(S, dtype=torch.float32)
if self.amin <= 0:
raise ValueError("amin must be strictly positive")
if torch.is_complex(S):
magnitude = S.abs()
else:
magnitude = S
# Check if ref is a callable function or a scalar
if callable(self.ref):
ref_value = self.ref(magnitude)
else:
ref_value = torch.abs(torch.tensor(self.ref, dtype=S.dtype))
# Compute the log spectrogram
log_spec = 10.0 * torch.log10(
torch.maximum(magnitude, torch.tensor(self.amin, device=magnitude.device))
)
log_spec -= 10.0 * torch.log10(
torch.maximum(ref_value, torch.tensor(self.amin, device=magnitude.device))
)
# Apply top_db threshold if necessary
if self.top_db is not None:
if self.top_db < 0:
raise ValueError("top_db must be non-negative")
log_spec = torch.maximum(log_spec, log_spec.max() - self.top_db)
return log_spec
# Initialize preprocessors
spectrogram_converter = torchaudio.transforms.Spectrogram(
n_fft=2048, hop_length=256, power=2.0
)
mel_converter = torchaudio.transforms.MelScale(
n_mels=256, n_stft=1025, sample_rate=32_000
)
powerToDB = PowerToDB(top_db=80)
def preprocess(audio, sample_rate_of_audio):
"""
Preprocess the audio to the format that the model expects
- Resample to 32kHz
- Convert to melscale spectrogram n_fft: 2048, hop_length: 256, power: 2. melscale: n_mels: 256, n_stft: 1025
- Normalize the melscale spectrogram with mean: -4.268, std: 4.569 (from AudioSet)
"""
spectrogram = spectrogram_converter(audio)
spectrogram = spectrogram.to(torch.float32)
melspec = mel_converter(spectrogram)
dbscale = powerToDB(melspec)
normalized_dbscale = transforms.Normalize((-4.268,), (4.569,))(dbscale)
# add batch dimension if needed
if normalized_dbscale.dim() == 3:
normalized_dbscale = normalized_dbscale.unsqueeze(0)
return normalized_dbscale
preprocessed_audio = preprocess(audio, sample_rate)
print("Preprocessed_audio shape:", preprocessed_audio.shape)
logits = model(preprocessed_audio).logits
print("Logits shape: ", logits.shape)
top5 = torch.topk(logits, 5)
print("Top 5 logits:", top5.values)
print("Top 5 predicted classes:")
print([model.config.id2label[i] for i in top5.indices.squeeze().tolist()])
```
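The logits printed above are unnormalized scores. Bird recordings can contain several species at once, so a sigmoid (an assumption on our part; the card itself only ranks raw logits) is a natural way to obtain independent per-class probabilities. Continuing from the snippet above:
```python
# Multi-label view: sigmoid gives each class an independent probability,
# unlike softmax, which would force the 9736 classes to compete.
probabilities = torch.sigmoid(logits)
top5 = torch.topk(probabilities, 5)
print("Top 5 probabilities:", top5.values)
print([model.config.id2label[i] for i in top5.indices.squeeze().tolist()])
```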
## Model Source
- **Repository:** [BirdSet Repository](https://github.com/DBD-research-group/BirdSet/tree/main)
- **Paper:** [BirdSet Paper](https://arxiv.org/pdf/2403.10380)
## Citation
|
[
"ostric2",
"grerhe1",
"norcas1",
"whhstd1",
"whcsap1",
"vibhum1",
"bucsap1",
"grbtur1",
"bfgbir1",
"grygab1",
"wbgbir1",
"wesple1",
"easple1",
"puctur2",
"torduc1",
"ruwtur2",
"prrtur1",
"whctur1",
"viotur1",
"rostur1",
"yebtur1",
"whctur2",
"rectur1",
"guitur1",
"livtur1",
"spwgoo1",
"schtur1",
"knytur1",
"blbtur1",
"fistur1",
"hartur1",
"grebus1",
"arabus1",
"korbus1",
"houbus1",
"ludbus1",
"comduc3",
"whbbus2",
"blubus1",
"karbus1",
"ruebus1",
"savbus1",
"bucbus1",
"recbus1",
"blabus3",
"whqbus1",
"bkbbus1",
"comduc2",
"lesflo2",
"litbus1",
"guicuc1",
"greani1",
"smbani",
"grbani",
"strcuc1",
"phecuc1",
"pavcuc1",
"legcuc1",
"buwgoo1",
"greroa",
"lesroa1",
"rvgcuc1",
"scgcuc1",
"bagcuc1",
"rwgcuc1",
"rbgcuc1",
"buhcou1",
"piecou1",
"grbcou1",
"egygoo",
"biacou1",
"rufcou1",
"grbcou2",
"blfcou1",
"blhcou1",
"shtcou1",
"baycou1",
"gabcou1",
"bltcou1",
"sencou1",
"origoo1",
"blhcou2",
"cotcou1",
"whbcou1",
"whbcou3",
"grecou1",
"madcou1",
"golcou1",
"blacou1",
"phicou1",
"lescou1",
"andgoo1",
"viocou1",
"lebcou1",
"phecou2",
"andcou1",
"bogcuc1",
"sugcuc1",
"cbgcuc1",
"crecou1",
"blucou1",
"reccou1",
"uplgoo1",
"refcou1",
"coqcou1",
"runcou1",
"giacou1",
"rebcou1",
"rafmal1",
"yellow5",
"yellow6",
"sirmal1",
"rebmal2",
"emu1",
"kelgoo1",
"yebmal1",
"chbmal2",
"chbmal1",
"blfmal1",
"blbmal1",
"grbmal1",
"recmal1",
"scfmal1",
"chwcuc1",
"grscuc1",
"ashgoo1",
"levcuc1",
"piecuc1",
"litcuc2",
"dwacuc1",
"asccuc1",
"squcuc1",
"blbcuc1",
"dabcuc1",
"yebcuc",
"pebcuc1",
"ruhgoo1",
"mancuc",
"bkbcuc",
"gyccuc",
"chbcuc4",
"babcuc4",
"grelic1",
"purlic1",
"hislic1",
"thbcuc1",
"dwakoe1",
"radshe1",
"asikoe2",
"bkbkoe1",
"asikoe3",
"lotkoe1",
"chbcuc2",
"asecuc1",
"viocuc1",
"didcuc1",
"klacuc1",
"yetcuc1",
"comshe",
"afecuc1",
"lobcuc1",
"hobcuc1",
"blecuc1",
"rtbcuc1",
"shbcuc1",
"webcuc1",
"libcuc1",
"palcuc1",
"whckoe1",
"rudshe",
"chbcuc3",
"fatcuc1",
"babcuc2",
"placuc1",
"placuc3",
"brucuc1",
"brucuc2",
"molcuc1",
"dltcuc1",
"oltcuc1",
"soashe1",
"bltcuc1",
"phidrc1",
"asidrc3",
"asidrc2",
"asidrc4",
"mohcuc1",
"larhac2",
"larhac1",
"cohcuc1",
"nohcuc1",
"ausshe1",
"phhcuc1",
"malhac1",
"hodhac1",
"blacuc1",
"reccuc1",
"lescuc1",
"suhcuc1",
"indcuc1",
"madcuc1",
"afrcuc1",
"parshe1",
"himcuc1",
"oricuc2",
"suncuc2",
"comcuc",
"whbmes2",
"bromes1",
"tibsan1",
"palsan1",
"pitsan1",
"namsan1",
"pieduc1",
"chbsan",
"sposan1",
"blbsan1",
"yetsan1",
"crosan1",
"blfsan1",
"madsan1",
"licsan1",
"paisan1",
"fobsan1",
"grytin1",
"musduc",
"dobsan1",
"bursan1",
"rocpig",
"hilpig1",
"snopig1",
"spepig1",
"whcpig1",
"stodov1",
"pabpig1",
"cowpig1",
"whwduc1",
"tropig1",
"bolpig1",
"laupig1",
"afepig1",
"rampig1",
"compig1",
"spwpig1",
"aswpig1",
"niwpig1",
"siwpig1",
"harduc1",
"jawpig1",
"metpig1",
"whhpig1",
"yelpig1",
"delpig1",
"brnpig1",
"satpig1",
"lemdov2",
"whcpig2",
"scnpig1",
"wooduc",
"scapig2",
"picpig2",
"baepig2",
"spwpig3",
"batpig1",
"pavpig2",
"rebpig1",
"perpig2",
"plapig",
"plupig2",
"manduc",
"rudpig",
"shbpig",
"duspig2",
"matdov1",
"pinpig2",
"eutdov",
"dutdov1",
"adtdov1",
"ortdov",
"eucdov",
"manduc1",
"eurcod2",
"afcdov1",
"wwcdov1",
"afmdov1",
"reedov1",
"rindov",
"vindov1",
"recdov1",
"spodov",
"laudov1",
"afrpyg1",
"bacdov1",
"sbcdov1",
"sulcud1",
"rucdov1",
"engcud1",
"barcud1",
"timcud1",
"tancud1",
"ducdov1",
"phcdov1",
"copgoo1",
"brcdov1",
"ancdov1",
"bbcdov1",
"macdov1",
"licdov1",
"grcdov2",
"picdov1",
"crcdov1",
"wfcdov1",
"slacud1",
"grnpyg1",
"eswdov1",
"bbwdov1",
"bswdov1",
"tamdov1",
"bhwdov1",
"namdov1",
"emedov2",
"emedov3",
"stedov1",
"negbro1",
"bratea1",
"combro1",
"brubro1",
"crepig1",
"spipig2",
"squpig1",
"parpig1",
"tbgpig2",
"wonpig1",
"diadov1",
"zebdov",
"soltin1",
"rintea1",
"peadov1",
"bardov2",
"basdov1",
"incdov",
"scadov1",
"cogdov",
"pbgdov1",
"ecgdov1",
"rugdov",
"pigdov1",
"creduc1",
"crgdov1",
"blgdov1",
"mcgdov1",
"begdov1",
"bwgdov1",
"gsgdov1",
"ltgdov1",
"bhqdov1",
"sapqud1",
"sapqud2",
"speduc2",
"ruqdov",
"viqdov1",
"wfqdov",
"kwqdov",
"brqdov1",
"obqdov1",
"whtdov",
"latdov1",
"grfdov1",
"gyhdov1",
"baitea",
"paldov1",
"gredov1",
"cardov1",
"grcdov1",
"ocbdov1",
"toldov1",
"tuqdov1",
"bfqdov1",
"pbqdov1",
"wfqdov1",
"gargan",
"wtqdov1",
"liqdov1",
"chqdov1",
"rcqdov1",
"moudov",
"eardov1",
"zendov",
"whwdov",
"wepdov1",
"nicpig1",
"hottea1",
"sugdov1",
"mantho1",
"minblh1",
"nebhea1",
"wbgdov1",
"wtgdov1",
"frgdov1",
"scgdov1",
"phepig1",
"vicpig1",
"puntea1",
"whedov1",
"amedov1",
"daedov2",
"cihpig1",
"ligpig1",
"pinpig3",
"orbpig1",
"pomgrp2",
"pomgrp5",
"pomgrp4",
"siltea1",
"pomgrp1",
"thbpig1",
"gycpig1",
"sugpig2",
"flgpig1",
"timgrp1",
"lagpig1",
"yefpig1",
"brgpig1",
"madgrp1",
"redsho1",
"madgrp2",
"afrgrp1",
"gnspig1",
"yevpig1",
"wetpig1",
"whbpig1",
"whigrp1",
"whgpig1",
"bbfdov1",
"shbtre1",
"cintea",
"rnfdov1",
"phfdov1",
"fbfdov1",
"ybfdov2",
"refdov1",
"jafdov1",
"macfrd3",
"bcfdov1",
"sbfdov1",
"wofdov1",
"blatin1",
"buwtea",
"psfdov1",
"tafdov1",
"offdov1",
"wafdov1",
"sufdov1",
"mcfdov1",
"pucfrd1",
"kosfrd1",
"pafdov1",
"cifdov1",
"capsho1",
"mafdov2",
"rcfdov1",
"gygfrd1",
"mafdov1",
"atfdov1",
"rbfdov1",
"wcfdov1",
"cofdov1",
"befdov1",
"bcfdov2",
"norsho",
"whbfrd1",
"yebfrd1",
"yebfrd2",
"cbfdov1",
"whfdov2",
"obfdov1",
"gyhfrd1",
"cafdov1",
"bknfrd1",
"dwafrd1",
"gadwal",
"oradov1",
"goldov1",
"veldov1",
"mabpig1",
"cobpig1",
"sebpig1",
"pbipig1",
"wbipig1",
"gripig1",
"grnimp2",
"falduc",
"wheimp2",
"wheimp1",
"elipig1",
"paipig1",
"miipig1",
"marimp1",
"rkipig1",
"spipig1",
"spiimp2",
"ptipig1",
"eurwig",
"cbipig2",
"fiipig1",
"isipig1",
"phipig1",
"ciipig1",
"gryimp1",
"peipig1",
"cbipig1",
"baipig1",
"ncipig1",
"chiwig1",
"piipig2",
"zoeimp1",
"mouimp1",
"moipig1",
"dbipig1",
"tiipig1",
"piipig1",
"whiimp1",
"torimp1",
"torimp2",
"amewig",
"nezpig2",
"nezpig3",
"sompig2",
"afrfin1",
"masfin3",
"sungre1",
"madwor1",
"tsiwor1",
"whsflu1",
"busflu1",
"afbduc1",
"recflu1",
"chhflu1",
"stbflu1",
"strflu1",
"madflu1",
"slbflu1",
"wsfrai1",
"chfrai1",
"forrai1",
"astcra1",
"yebduc1",
"pabcra",
"colcra2",
"sporai",
"blarai1",
"plurai1",
"unicra1",
"rnwrai1",
"liwrai1",
"runwor1",
"gycwor1",
"gretin1",
"melduc1",
"brwrai1",
"giwrai1",
"rwwrai1",
"sbwrai1",
"ridrai1",
"clarai11",
"kinrai2",
"manrai1",
"kinrai4",
"virrai",
"pabduc1",
"bograi1",
"virrai1",
"ausrai1",
"watrai1",
"bncrai1",
"afrrai1",
"madrai1",
"afrcra1",
"whtrai1",
"corcra",
"hawduc",
"slbrai1",
"lewrai1",
"invrai1",
"weka1",
"cherai1",
"okirai1",
"barrai1",
"bubrai1",
"lohrai1",
"spfgal1",
"phiduc1",
"sora",
"spocra1",
"auscra1",
"tanhen1",
"lesmoo1",
"dusmoo1",
"comgal1",
"commoo3",
"trimoo3",
"refcoo1",
"isbduc1",
"giacoo1",
"regcoo1",
"eurcoo",
"rekcoo1",
"hawcoo",
"y00475",
"slccoo1",
"whwcoo1",
"allgal1",
"purgal2",
"spbduc",
"fusfly1",
"azugal1",
"purswa1",
"purswa2",
"purswa3",
"purswa6",
"takahe3",
"ocecra1",
"ruccra1",
"chhcra1",
"mallar3",
"swirai1",
"yelrai",
"yebcra1",
"blkrai",
"mappyt1",
"galrai1",
"dowcra1",
"rudcra1",
"ruscra1",
"rufcra2",
"motduc",
"rufcra1",
"rawcra1",
"grbcra1",
"whtcra1",
"blbcra1",
"blacra1",
"sakrai1",
"rubcra1",
"babcra1",
"bltcra1",
"ambduc",
"brocra1",
"baicra1",
"litcra1",
"spocra2",
"sllcra1",
"andcra1",
"relcra1",
"rencra1",
"nkurai1",
"whbcra1",
"mexduc",
"strcra1",
"waterc1",
"whbwat1",
"plabuh1",
"isabuh1",
"rutbuh1",
"gywtru1",
"pawtru2",
"dawtru1",
"grccra1",
"whttin1",
"captea1",
"blccra1",
"sibcra1",
"sancra",
"whncra1",
"sarcra1",
"brolga1",
"watcra2",
"blucra2",
"demcra1",
"reccra1",
"whcpin",
"whocra",
"comcra",
"hoocra1",
"blncra1",
"limpki",
"litgre1",
"litgre4",
"ausgre1",
"madgre1",
"leagre",
"rebduc1",
"pibgre",
"whtgre3",
"titgre1",
"gregre1",
"rengre",
"grcgre1",
"horgre",
"eargre",
"silgre1",
"jungre1",
"yebpin1",
"hoogre1",
"wesgre",
"clagre",
"grefla3",
"grefla2",
"chifla1",
"lesfla1",
"andfla2",
"jamfla1",
"smabut2",
"eatpin1",
"rebbut2",
"hotbut1",
"hotbut3",
"yelbut1",
"barbut1",
"chbbut2",
"paibut",
"eutkne1",
"indthk1",
"setkne1",
"norpin",
"watkne1",
"sptkne1",
"dstkne",
"petkne1",
"butkne1",
"grtkne1",
"beathk1",
"blfshe1",
"magplo1",
"magoys1",
"gnwtea",
"blaoys1",
"blkoys",
"ameoys",
"afroys1",
"euroys1",
"soioys1",
"pieoys1",
"varoys1",
"chaoys1",
"soooys1",
"yebtea1",
"ibisbi1",
"bkwsti",
"piesti1",
"bknsti",
"bknsti2",
"blasti1",
"pieavo1",
"ameavo",
"renavo1",
"andavo1",
"spetea3",
"norlap",
"lotlap1",
"blaplo1",
"spwlap1",
"rivlap1",
"blhlap1",
"yewlap2",
"whhlap1",
"senlap1",
"blwlap1",
"suntea1",
"crolap1",
"watlap1",
"spblap1",
"brclap1",
"gyhlap1",
"rewlap1",
"banlap1",
"maslap1",
"soclap1",
"whtlap1",
"higtin1",
"andtea1",
"soulap1",
"andlap1",
"rekdot1",
"inldot2",
"wrybil1",
"eugplo",
"pagplo",
"amgplo",
"bkbplo",
"rebdot1",
"gretea1",
"corplo",
"semplo",
"lobplo1",
"lirplo",
"wilplo",
"killde",
"pipplo",
"madplo1",
"kitplo1",
"sthplo1",
"chetea1",
"thbplo1",
"forplo1",
"whfplo1",
"kenplo1",
"whfplo2",
"snoplo5",
"javplo1",
"recplo1",
"chbplo1",
"colplo1",
"bertea1",
"punplo1",
"twbplo1",
"dobplo1",
"lesplo",
"grsplo",
"casplo1",
"oriplo1",
"eurdot",
"rucdot1",
"mouplo",
"brotea1",
"shoplo1",
"blfdot1",
"tatdot1",
"diaplo1",
"pielap1",
"egyplo1",
"grpsni1",
"soapas1",
"lesjac1",
"afrjac1",
"caitea1",
"madjac1",
"cocjac1",
"phtjac1",
"brwjac1",
"norjac",
"watjac1",
"rubsee2",
"whbsee2",
"gybsee1",
"leasee1",
"martea1",
"uplsan",
"brtcur",
"whimbr",
"whimbr3",
"litcur",
"lobcur",
"faecur",
"slbcur",
"eurcur",
"batgod",
"recpoc",
"bktgod",
"hudgod",
"margod",
"rudtur",
"blktur",
"tuasan1",
"grekno",
"redkno",
"surfbi",
"ruff",
"robpoc1",
"brbsan",
"shtsan",
"stisan",
"cursan",
"temsti",
"lotsti",
"spbsan1",
"rensti",
"sander",
"dunlin",
"canvas",
"rocsan",
"pursan",
"baisan",
"litsti",
"leasan",
"whrsan",
"bubsan",
"pecsan",
"semsan",
"wessan",
"tabtin1",
"redhea",
"asidow1",
"lobdow",
"shbdow",
"eurwoo",
"amawoo1",
"duswoo4",
"duswoo3",
"bukwoo1",
"molwoo1",
"amewoo",
"compoc",
"chisni1",
"snisni1",
"jacsni",
"solsni1",
"latsni1",
"woosni1",
"pitsni",
"swisni1",
"afrsni1",
"madsni1",
"ferduc",
"gresni1",
"comsni",
"wilsni1",
"soasni2",
"soasni3",
"punsni1",
"nobsni1",
"giasni1",
"fuesni1",
"andsni1",
"nezsca1",
"impsni1",
"tersan",
"wilpha",
"renpha",
"redpha1",
"comsan",
"sposan",
"grnsan",
"solsan",
"wantat1",
"rinduc",
"gyttat1",
"lesyel",
"willet1",
"comred1",
"marsan",
"woosan",
"spored",
"comgre",
"norgre1",
"greyel",
"tufduc",
"craplo1",
"crccou1",
"somcou1",
"burcou2",
"temcou1",
"dobcou2",
"thbcou1",
"brwcou1",
"jercou1",
"auspra1",
"gresca",
"colpra",
"oripra",
"blwpra1",
"madpra1",
"rocpra1",
"smapra1",
"brnnod",
"lesnod1",
"blknod",
"bugnod",
"lessca",
"whiter",
"blkski",
"afrski1",
"indski1",
"swtgul1",
"bklkit",
"relkit",
"ivogul",
"sabgul",
"slbgul1",
"steeid",
"bongul",
"silgul2",
"blbgul1",
"andgul1",
"bnhgul1",
"brhgul2",
"bkhgul",
"grhgul",
"hargul1",
"saugul2",
"speeid",
"litgul",
"rosgul",
"dolgul2",
"lavgul1",
"laugul",
"fragul",
"grygul",
"audgul1",
"medgul1",
"gbhgul2",
"hootin1",
"kineid",
"whegul2",
"soogul2",
"pacgul1",
"belgul",
"olrgul1",
"bktgul",
"heegul",
"mewgul",
"mewgul2",
"ribgul",
"comeid",
"calgul",
"gbbgul",
"kelgul",
"glwgul",
"wesgul",
"yefgul",
"glagul",
"y00478",
"hergul",
"amhgul1",
"harduc",
"veggul1",
"casgul2",
"yelgul1",
"armgul1",
"slbgul",
"lbbgul",
"gubter1",
"caster1",
"royter1",
"grcter1",
"sursco",
"lecter2",
"royter2",
"chcter2",
"santer1",
"santer2",
"eleter1",
"litter1",
"sauter2",
"leater1",
"yebter2",
"whwsco3",
"faiter2",
"damter2",
"aleter1",
"gybter1",
"briter1",
"sooter1",
"rivter1",
"roster",
"whfter1",
"blnter1",
"whwsco2",
"soater1",
"comter",
"whcter1",
"arcter",
"antter1",
"kerter1",
"forter",
"truter",
"blbter1",
"blfter1",
"whwsco1",
"whiter2",
"whwter",
"blkter",
"labter1",
"incter1",
"chisku1",
"brnsku3",
"gresku1",
"pomjae",
"parjae",
"blksco1",
"lotjae",
"doveki",
"thbmur",
"commur",
"razorb",
"blkgui",
"piggui",
"marmur",
"xanmur2",
"cramur",
"blksco2",
"ancmur",
"japmur1",
"casauk",
"leaauk",
"whiauk",
"atlpuf",
"horpuf",
"kagu1",
"sunbit1",
"rebtro",
"lotduc",
"rettro",
"whttro",
"retloo",
"arcloo",
"pacloo",
"comloo",
"yebloo",
"kinpen1",
"genpen1",
"litpen1",
"lesrhe2",
"bertin1",
"buffle",
"galpen1",
"humpen1",
"magpen1",
"jacpen1",
"roypen1",
"rocpen4",
"rocpen1",
"fiopen1",
"snapen1",
"wispet",
"comgol",
"wvspet1",
"wfspet",
"bbspet1",
"layalb",
"bkfalb",
"wavalb",
"wanalb",
"wanalb2",
"wanalb3",
"royalb1",
"bargol",
"royalb3",
"limalb1",
"bkbalb",
"whcalb1",
"salalb1",
"bulalb2",
"bripet",
"ftspet",
"rispet1",
"swspet",
"smew",
"lcspet",
"barpet",
"monstp1",
"cavstp1",
"maspet",
"trspet",
"norgip1",
"norful",
"cappet",
"blupet1",
"hoomer",
"brbpri1",
"dovpri1",
"slbpri1",
"faipri1",
"fulpri1",
"kerpet2",
"whhpet1",
"grwpet2",
"atlpet1",
"solpet1",
"bramer1",
"magpet1",
"soppet1",
"madpet",
"feapet1",
"feapet2",
"berpet",
"jufpet",
"kerpet",
"herpet2",
"tripet1",
"commer",
"phopet1",
"barpet1",
"hawpet1",
"galpet",
"motpet",
"bkwpet",
"chapet1",
"coopet",
"grapet",
"whcpet1",
"rebmer",
"parpet1",
"wespet1",
"strshe",
"corshe",
"cavshe1",
"wetshe",
"sooshe",
"shtshe",
"pifshe",
"flfshe",
"scsmer1",
"greshe",
"chrshe",
"manshe",
"levshe1",
"balshe1",
"bkvshe",
"towshe1",
"flushe1",
"hutshe1",
"audshe",
"blhduc1",
"pershe1",
"troshe5",
"audshe3",
"litshe8",
"litshe1",
"litshe2",
"sgdpet1",
"codpet1",
"bulpet",
"woosto",
"cintin1",
"masduc",
"milsto1",
"yebsto1",
"paisto1",
"asiope1",
"blasto1",
"whisto1",
"oristo1",
"jabiru",
"lesadj1",
"marsto1",
"rudduc",
"magfri",
"grefri",
"norgan",
"capgan1",
"ausgan1",
"abbboo2",
"bfoboo",
"perboo1",
"masboo",
"nazboo1",
"andduc1",
"refboo",
"brnboo",
"darter2",
"darter3",
"darter4",
"anhing",
"pygcor2",
"lotcor1",
"litcor1",
"lipcor1",
"lakduc1",
"relcor1",
"bracor",
"refcor",
"pelcor",
"soccor1",
"piisha1",
"sposha1",
"blfcor1",
"piecor1",
"libcor1",
"blbduc1",
"indcor1",
"grecor4",
"grecor",
"eursha1",
"flicor1",
"neocor",
"doccor",
"magcor1",
"rofsha1",
"chisha1",
"macduc1",
"impcor1",
"kersha1",
"sacibi2",
"blhibi1",
"ausibi1",
"stnibi1",
"renibi1",
"whsibi1",
"giaibi1",
"waldra1",
"whhduc1",
"balibi1",
"creibi1",
"oliibi2",
"spbibi1",
"hadibi1",
"watibi1",
"pluibi1",
"bunibi1",
"bkfibi1",
"bkfibi2",
"musduc1",
"shtibi1",
"greibi1",
"bafibi1",
"whiibi",
"scaibi",
"gloibi",
"whfibi",
"punibi1",
"madibi1",
"eurspo1",
"ausbrt1",
"blfspo1",
"afrspo1",
"royspo1",
"rosspo1",
"whcbit1",
"ruther1",
"father1",
"btther1",
"agaher1",
"bobher1",
"watbrt1",
"zigher1",
"grebit1",
"ausbit1",
"amebit",
"pinbit1",
"stbbit1",
"leabit",
"litbit1",
"bkbbit1",
"yelbit",
"littin1",
"rebbrt1",
"schbit1",
"cinbit1",
"dwabit1",
"blabit1",
"janher1",
"manher1",
"wbnher1",
"bcnher",
"runher1",
"ycnher",
"bkbbrt1",
"grnher",
"strher",
"squher1",
"inpher1",
"chpher1",
"rubher2",
"categr",
"categr2",
"graher1",
"grbher3",
"bncbrt1",
"cocher1",
"pacher1",
"blhher1",
"grbher2",
"golher1",
"purher1",
"greegr",
"capher1",
"whiher1",
"pieher2",
"micscr1",
"whfher1",
"redegr",
"blaher1",
"triher",
"libher",
"snoegr",
"litegr",
"werher",
"litegr2",
"pacreh1",
"tabscr1",
"hamerk1",
"grwpel1",
"spbpel1",
"dalpel1",
"auspel1",
"amwpel",
"brnpel",
"perpel1",
"hoatzi1",
"kinvul1",
"tanscr1",
"andcon1",
"blkvul",
"turvul",
"lyhvul1",
"gyhvul1",
"secret2",
"osprey",
"osprey4",
"bkskit1",
"auskit1",
"dusscr1",
"whtkit",
"peakit1",
"sctkit1",
"afhhaw1",
"mahhaw1",
"panvul1",
"lammer1",
"egyvul1",
"maseag1",
"grhkit1",
"dusscr3",
"whckit1",
"hobkit",
"euhbuz1",
"orihob2",
"barhob1",
"barhob2",
"swtkit",
"sqtkit1",
"bkbkit1",
"afrcuh1",
"melscr1",
"jerbaz1",
"pacbaz1",
"blabaz1",
"whbvul1",
"whrvul1",
"indvul1",
"himgri1",
"eurgri1",
"capgri1",
"cinvul1",
"vanscr1",
"crseag1",
"moseag1",
"suseag1",
"phseag1",
"anseag1",
"grpeag1",
"shteag1",
"brseag1",
"faseag1",
"coseag1",
"teptin1",
"negscr1",
"bathaw1",
"negeag1",
"creeag1",
"hareag1",
"y00839",
"flohae1",
"mouhae1",
"mouhae2",
"blyhae1",
"javhae1",
"orfscr1",
"sulhae1",
"pinhae1",
"walhae1",
"blheag1",
"bawhae1",
"orheag1",
"baceag2",
"crheag1",
"rubeag2",
"mareag1",
"placha",
"loceag1",
"blaeag1",
"leseag1",
"inseag1",
"grseag1",
"waheag3",
"booeag1",
"liteag1",
"ayheag1",
"taweag1",
"grhcha1",
"steeag1",
"spaeag1",
"impeag1",
"goleag",
"weteag1",
"vereag1",
"cashae1",
"boneag2",
"afrhae1",
"dotkit1",
"chwcha1",
"rutkit1",
"lizbuz1",
"gabgos2",
"dacgos1",
"eacgos1",
"pacgos1",
"lothaw1",
"redgos1",
"dorgos1",
"tinhaw1",
"ruvcha1",
"semhaw2",
"cregos1",
"gybhaw1",
"recgos3",
"afrgos1",
"shikra1",
"levspa1",
"grfhaw1",
"fragos2",
"sptgos1",
"ruhcha1",
"grygos1",
"vargos1",
"brogos1",
"blmgos1",
"piegos1",
"necgos1",
"fijgos1",
"molgos1",
"retspa1",
"litspa1",
"rubcha1",
"japspa1",
"besra1",
"smaspa1",
"colspa1",
"vibspa1",
"madspa1",
"ovaspa2",
"eurspa1",
"shshaw",
"shshaw3",
"wemcha1",
"shshaw4",
"shshaw5",
"coohaw",
"gunhaw1",
"bichaw1",
"bichaw4",
"blagos1",
"hengos1",
"norgos",
"wemhar1",
"chacha1",
"easmah1",
"easmah2",
"swahar1",
"afmhar1",
"reuhar2",
"reuhar3",
"lowhar1",
"blahar1",
"norhar1",
"norhar2",
"brotin1",
"whbcha1",
"cinhar1",
"palhar1",
"piehar1",
"monhar1",
"redkit1",
"blakit1",
"blkkit3",
"whikit1",
"brakit1",
"wbseag1",
"specha3",
"solsee1",
"affeag1",
"pafeag1",
"whteag",
"baleag",
"stseag",
"lefeag1",
"gyhfie1",
"grabuz1",
"whebuz1",
"specha2",
"ruwbuz1",
"gyfbuz1",
"miskit",
"plukit1",
"blchaw1",
"snakit",
"slbkit1",
"crahaw",
"pluhaw",
"slchaw2",
"specha4",
"comblh1",
"cubblh1",
"ruchaw1",
"savhaw1",
"whnhaw2",
"grbhaw1",
"soleag1",
"croeag1",
"barhaw1",
"roahaw",
"colcha1",
"hrshaw",
"whrhaw1",
"whthaw",
"rebhaw2",
"bcbeag1",
"manhaw2",
"whihaw1",
"gybhaw2",
"semhaw",
"blfhaw1",
"varcha1",
"whbhaw2",
"gryhaw2",
"gryhaw3",
"reshaw",
"ridhaw1",
"brwhaw",
"whthaw1",
"shthaw",
"hawhaw",
"swahaw",
"varcha3",
"galhaw1",
"zothaw",
"rethaw",
"ruthaw1",
"ferhaw",
"rolhaw",
"uplbuz1",
"combuz6",
"combuz9",
"lolbuz1",
"bubcha1",
"combuz4",
"combuz1",
"moubuz3",
"moubuz2",
"renbuz1",
"madbuz1",
"augbuz2",
"jacbuz1",
"sooowl1",
"lesowl1",
"batgua1",
"minowl1",
"talowl1",
"lemowl1",
"aumowl1",
"sulowl1",
"marowl1",
"brnowl",
"barowl28",
"barowl7",
"afgowl1",
"beagua1",
"ausgro1",
"orbowl1",
"srlbao1",
"paphao1",
"rufowl2",
"powowl1",
"barowl1",
"sumboo1",
"souboo8",
"souboo4",
"undtin1",
"baugua1",
"souboo5",
"souboo6",
"morepo2",
"norboo1",
"brnhao1",
"brnhao3",
"choboo1",
"andhao1",
"phihao1",
"minboo1",
"andgua1",
"sulboo1",
"cebboo1",
"romboo1",
"minboo2",
"lishao1",
"toghao1",
"ocbhao1",
"cinhao1",
"molhao3",
"hanboo2",
"margua1",
"molhao2",
"chihao1",
"junhao1",
"spehao1",
"nebhao1",
"balowl",
"colowl1",
"colowl3",
"elfowl",
"lowowl1",
"rumgua1",
"borowl",
"nswowl",
"uswowl1",
"bufowl1",
"burowl",
"spoowl1",
"litowl1",
"whbowl1",
"forowl1",
"solboo1",
"refgua1",
"solboo4",
"nohowl",
"eupowl1",
"pesowl1",
"recowl1",
"asbowl1",
"javowl1",
"junowl1",
"chbowl1",
"afbowl1",
"cregua1",
"albowl1",
"norpyo1",
"nopowl",
"norpyo3",
"norpyo4",
"crpowl",
"clopyo1",
"anpowl1",
"yupowl1",
"copowl1",
"caugua1",
"tapowl1",
"capowl1",
"supowl1",
"amapyo1",
"leapyo1",
"fepowl",
"pepowl1",
"aupowl1",
"cupowl1",
"mineao1",
"whwgua1",
"wfsowl2",
"resowl1",
"sasowl1",
"ansowl1",
"flsowl1",
"mosowl2",
"jasowl2",
"misowl1",
"lusowl1",
"misowl2",
"spigua1",
"torsco1",
"madsco1",
"maysco1",
"cosowl3",
"ansowl2",
"mohsco1",
"pesowl2",
"eursco1",
"eursco3",
"pasowl3",
"dulgua1",
"arasco1",
"afsowl1",
"afrsco3",
"afrsco2",
"orsowl",
"ryusco1",
"mosowl1",
"biasco1",
"susowl1",
"sulsco5",
"pabtin1",
"dulgua3",
"sansco1",
"masowl2",
"sesowl1",
"nicsco1",
"sisowl1",
"ensowl1",
"mesowl1",
"rasowl1",
"insowl1",
"cosowl1",
"whcgua1",
"jasowl1",
"susowl2",
"phsowl1",
"negsco1",
"evesco1",
"pasowl2",
"wasowl1",
"rinsco1",
"palowl2",
"nwfowl1",
"chbgua1",
"swfowl1",
"jamowl1",
"strowl1",
"loeowl",
"mleowl1",
"styowl1",
"sheowl",
"marowl2",
"feaowl1",
"snoowl1",
"whbgua1",
"grhowl",
"grhowl2",
"eueowl1",
"roeowl1",
"pheowl1",
"caeowl1",
"speowl2",
"graeao1",
"spoeao2",
"fraeao1",
"butpig1",
"useowl1",
"veeowl1",
"sheowl1",
"baeowl1",
"sbeowl1",
"dueowl1",
"akeowl1",
"pheowl2",
"blfowl1",
"pefowl1",
"rtpgua1",
"rufowl1",
"vefowl1",
"brfowl1",
"tafowl1",
"bufowl2",
"flaowl",
"prsowl",
"whsowl1",
"bssowl",
"whtsco1",
"bfpgua1",
"trsowl",
"besowl",
"pasowl4",
"wesowl1",
"easowl1",
"basowl",
"vesowl",
"versco5",
"koesco1",
"rufsco1",
"watgua1",
"cinsco1",
"clfsco1",
"mofsco1",
"versco2",
"foosco1",
"lotsco1",
"samsco1",
"persco1",
"tabsco1",
"bkcsco1",
"blagua1",
"speowl1",
"tabowl1",
"babowl1",
"creowl1",
"spwowl1",
"mowowl1",
"brwowl1",
"tawowl1",
"tawowl3",
"himowl1",
"siwgua1",
"humowl1",
"omaowl1",
"brdowl",
"barowl13",
"fulowl1",
"rubowl2",
"chaowl1",
"rulowl1",
"uraowl1",
"pedowl1",
"bratin1",
"higgua1",
"grgowl",
"afwowl1",
"motowl",
"bawowl1",
"bkbowl1",
"rubowl3",
"spemou2",
"rebmou1",
"whbmou1",
"blnmou1",
"horgua1",
"refmou1",
"cuckoo1",
"earque",
"pavque1",
"gohque1",
"whtque1",
"resque1",
"creque1",
"cubtro1",
"histro1",
"noccur1",
"lattro1",
"slttro1",
"buttro1",
"bkttro2",
"blttro1",
"blhtro1",
"cittro1",
"whttro1",
"baitro1",
"gnbtro1",
"crecur2",
"gartro1",
"viotro3",
"viotro2",
"blctro1",
"surtro1",
"blttro2",
"eletro",
"moutro1",
"coltro1",
"mastro1",
"salcur1",
"nartro1",
"bactro1",
"battro1",
"javtro1",
"sumtro1",
"maltro1",
"rentro1",
"diatro1",
"phitro1",
"whitro1",
"rabcur2",
"cirtro1",
"scrtro1",
"orbtro2",
"rehtro1",
"wartro1",
"hoopoe",
"eurhoo2",
"madhoo1",
"forwoo1",
"whhwoo1",
"helcur1",
"grewoo2",
"blbwoo2",
"viowoo1",
"viowoo3",
"blsbil1",
"cosbil1",
"absbil1",
"soghor1",
"trbhor1",
"wrbhor2",
"horcur2",
"drbhor1",
"srbhor1",
"rebhor1",
"vddhor1",
"jachor1",
"sybhor1",
"eybhor1",
"brahor1",
"crohor1",
"afphor1",
"grecur1",
"hemhor1",
"afghor1",
"rbdhor1",
"pabhor1",
"piphor1",
"truhor1",
"bnchor1",
"whthor1",
"bawhor2",
"sichor1",
"bubcur1",
"blchor1",
"yechor1",
"bldhor1",
"whchor3",
"whchor2",
"rhihor1",
"grehor1",
"rufhor1",
"palhor1",
"orphor1",
"gyltin1",
"yekcur1",
"maphor1",
"blahor1",
"maghor2",
"ceghor1",
"inghor2",
"ruchor1",
"brnhor1",
"buchor1",
"runhor1",
"blyhor1",
"blacur1",
"wrehor1",
"sumhor1",
"plphor1",
"knohor1",
"wrihor2",
"sulhor1",
"wrihor1",
"luzhor1",
"minhor2",
"minhor1",
"watcur1",
"samhor1",
"tarhor1",
"rucrol2",
"indrol2",
"indrol3",
"puwrol1",
"ratrol2",
"librol2",
"abyrol2",
"eurrol1",
"bafcur1",
"blbrol1",
"bltrol1",
"brbrol1",
"dollar1",
"purrol1",
"slgrol1",
"scagrr1",
"plgrol1",
"rhgrol1",
"ltgrol1",
"rebcur1",
"grbkin1",
"scakin1",
"spokin1",
"blckin2",
"ruckin1",
"hobkin1",
"bankin1",
"copkin1",
"bipkin1",
"nupkin1",
"whbgui1",
"lipkin1",
"bubpak1",
"bubpak2",
"rbpkin1",
"bhpkin1",
"lickin2",
"shbkoo1",
"laukoo1",
"blwkoo1",
"spakoo1",
"helgui",
"rubkoo1",
"whrkin1",
"stbkin1",
"bnwkin1",
"rudkin1",
"whtkin2",
"chbkin2",
"blckin1",
"gyhkin1",
"brhkin1",
"plugui1",
"strkin1",
"blbkin4",
"wookin1",
"mankin2",
"blbkin3",
"rulkin1",
"bawkin1",
"forkin1",
"nebkin1",
"ultkin1",
"cregui3",
"chbkin1",
"somkin1",
"colkin1",
"colkin9",
"colkin2",
"melkin1",
"packin1",
"talkin1",
"mickin5",
"beakin2",
"vulgui1",
"sackin1",
"cibkin1",
"chakin2",
"mankin1",
"tahkin1",
"rebkin2",
"yebkin1",
"moukin1",
"dwakin1",
"afpkin1",
"reltin1",
"stopar1",
"mapkin1",
"whbkin1",
"malkin1",
"malkin2",
"smbkin1",
"bubkin2",
"shbkin1",
"blekin1",
"comkin1",
"hackin1",
"nahfra2",
"bkbkin1",
"phikin1",
"sulkin1",
"varkin1",
"vardwk1",
"vardwk6",
"vardwk15",
"silkin1",
"azukin1",
"amakin1",
"bcwpar1",
"ampkin1",
"grnkin",
"garkin1",
"crekin1",
"giakin3",
"rinkin1",
"belkin1",
"piekin1",
"cubtod1",
"brbtod1",
"ltwpar1",
"nabtod1",
"jamtod1",
"purtod1",
"todmot1",
"bltmot1",
"rucmot1",
"bucmot1",
"bucmot2",
"bucmot3",
"bucmot4",
"bewpar1",
"higmot1",
"rufmot1",
"rucmot2",
"kebmot1",
"brbmot1",
"tubmot1",
"rbbeat1",
"bbbeat1",
"pbbeat1",
"bhbeat1",
"mouqua",
"bhbeat2",
"bumbee1",
"blbeat1",
"stbeat1",
"libeat1",
"bbbeat2",
"bubbee2",
"ccbeat1",
"rtbeat1",
"wfbeat1",
"scaqua",
"wtbeat1",
"bobeat1",
"grnbee1",
"grnbee2",
"grnbee3",
"bcbeat1",
"mabeat1",
"btbeat1",
"rabeat1",
"btbeat2",
"elequa",
"chbeat1",
"eubeat1",
"robeat1",
"ncbeat1",
"scbeat1",
"whejac1",
"purjac2",
"dubjac1",
"pahjac1",
"brojac2",
"calqua",
"whtjac1",
"thtjac1",
"yebjac1",
"bucjac1",
"rutjac1",
"grtjac1",
"cocjac2",
"whcjac1",
"blfjac1",
"purjac1",
"gamqua",
"brojac1",
"parjac1",
"grejac2",
"whnpuf2",
"guipuf1",
"bubpuf1",
"blbpuf1",
"brbpuf1",
"piepuf1",
"chcpuf1",
"sobkiw1",
"yeltin1",
"banqua1",
"spopuf1",
"socpuf1",
"colpuf1",
"barpuf1",
"whepuf1",
"strpuf1",
"wespuf1",
"spbpuf1",
"spbpuf3",
"rutpuf1",
"norbob",
"rutpuf3",
"crcpuf1",
"whcpuf1",
"sempuf1",
"blspuf1",
"runpuf1",
"whwpuf1",
"moupuf1",
"lanmon1",
"rubnun1",
"bltbob1",
"fucnun1",
"bronun1",
"gycnun1",
"rucnun1",
"whfnun2",
"blanun1",
"blfnun1",
"whfnun1",
"yebnun1",
"swwpuf1",
"crebob2",
"sccbar1",
"scbbar2",
"spcbar1",
"orfbar1",
"whmbar1",
"blgbar1",
"brcbar1",
"blsbar1",
"gilbar1",
"ficbar1",
"crebob1",
"letbar1",
"rehbar1",
"schbar1",
"verbar1",
"prbbar1",
"toubar1",
"emetou3",
"noremt1",
"emetou4",
"souemt1",
"mawqua1",
"emetou8",
"grbtou1",
"chttou3",
"chttou2",
"crrtou1",
"yebtou1",
"blbtou1",
"greara1",
"letara1",
"renara1",
"swwqua1",
"ivbara1",
"ivbara3",
"blnara1",
"cheara1",
"mabara1",
"colara1",
"colara4",
"colara5",
"fibara1",
"cucara1",
"bewqua1",
"saftou2",
"yeetou1",
"guitou1",
"goctou1",
"tattou1",
"goutou1",
"spbtou1",
"gybmot1",
"pbmtou1",
"homtou1",
"rfwqua1",
"bbmtou1",
"rebtou2",
"chbtou1",
"chbtou3",
"chotou1",
"kebtou1",
"toctou1",
"whttou1",
"bkmtou1",
"fitbar1",
"bfwqua1",
"grebar1",
"revbar1",
"brhbar1",
"linbar1",
"whcbar1",
"grebar3",
"brtbar1",
"gowbar2",
"recbar1",
"retbar1",
"blctin1",
"chwqua1",
"blbbar2",
"yefbar1",
"gotbar2",
"gotbar3",
"blbbar5",
"indbar1",
"chibar1",
"taibar2",
"bltbar2",
"tutbar1",
"dbwqua1",
"moubar1",
"moubar2",
"yecbar1",
"flfbar1",
"gonbar1",
"litbar1",
"blebar1",
"borbar1",
"crfbar3",
"crfbar1",
"rbwqua1",
"copbar1",
"brnbar2",
"soobar2",
"gytbar1",
"brnbar1",
"nafbar1",
"whebar1",
"whybar1",
"ancbar1",
"grebar2",
"tawqua1",
"spetin1",
"gretin2",
"moutin1",
"westin1",
"rertin1",
"yettin1",
"yertin1",
"reftin1",
"yeftin1",
"yesbar1",
"gowqua1",
"habbar1",
"refbar2",
"miobar1",
"piebar1",
"spfbar1",
"bltbar1",
"banbar1",
"viebar1",
"whhbar1",
"chabar1",
"venwoq1",
"refbar1",
"blbbar3",
"blcbar1",
"brbbar1",
"blbbar1",
"dotbar1",
"beabar1",
"yebbar1",
"crebar1",
"raybar1",
"bbwqua1",
"yebbar2",
"darbar1",
"darbar3",
"grbhon2",
"wahhon1",
"yefhon2",
"dwahon1",
"wilhon2",
"palhon1",
"leahon2",
"sfwqua1",
"thbhon1",
"y00400",
"spohon2",
"scthon1",
"yerhon1",
"grehon2",
"eurwry",
"runwry1",
"spepic1",
"babpic1",
"stwqua1",
"lafpic1",
"oripic1",
"gospic1",
"scapic1",
"ecupic1",
"whbpic2",
"arrpic1",
"spopic1",
"spcpic1",
"varpic1",
"spwqua1",
"whbpic1",
"ocepic2",
"occpic1",
"whwpic1",
"runpic1",
"rubpic1",
"ochpic1",
"motpic1",
"plbpic1",
"fibpic1",
"thitin1",
"sinqua1",
"olipic1",
"grapic1",
"chepic1",
"afrpic1",
"rufpic1",
"whbpic3",
"antpic1",
"gabwoo3",
"heswoo1",
"whiwoo1",
"monqua",
"lewwoo",
"guawoo1",
"purwoo1",
"rehwoo",
"acowoo",
"yetwoo2",
"yefwoo1",
"gonwoo1",
"beawoo2",
"blcwoo1",
"ocequa1",
"whfwoo1",
"hiswoo1",
"jamwoo1",
"gocwoo1",
"grbwoo1",
"yucwoo",
"recwoo1",
"gilwoo",
"hofwoo1",
"gofwoo",
"tafqua1",
"gofwoo2",
"rebwoo",
"weiwoo1",
"wilsap",
"yebsap",
"rensap",
"rebsap",
"cugwoo1",
"buswoo1",
"brewoo1",
"ferpar2",
"growoo1",
"fiswoo1",
"benwoo1",
"nubwoo1",
"gotwoo1",
"knywoo1",
"grbwoo2",
"ligwoo1",
"sulwoo2",
"bncwoo3",
"crepar1",
"gycwoo1",
"phiwoo1",
"bncwoo2",
"pygwoo1",
"ettwoo1",
"attwoo1",
"bkbwoo",
"arawoo1",
"brfwoo1",
"miswoo1",
"hilpar1",
"yecwoo1",
"beawoo1",
"gocwoo3",
"fibwoo1",
"ligwoo3",
"spbwoo2",
"abywoo1",
"carwoo1",
"gabwoo1",
"melwoo1",
"sicpar1",
"ellwoo1",
"grywoo1",
"gyhwoo1",
"oliwoo2",
"brbwoo1",
"nutwoo",
"labwoo",
"dowwoo",
"crbwoo3",
"leswoo1",
"chbpar2",
"litwoo2",
"dofwoo1",
"whswoo2",
"chewoo3",
"strwoo6",
"scbwoo3",
"yevwoo1",
"babwoo2",
"blcwoo3",
"rerwoo1",
"whnpar2",
"reswoo1",
"gocwoo2",
"yeewoo1",
"recwoo",
"smbwoo1",
"ariwoo",
"strwoo",
"haiwoo",
"whhwoo",
"rubwoo1",
"slbtin1",
"rutpar1",
"fubwoo2",
"frbwoo1",
"stbwoo4",
"darwoo1",
"himwoo1",
"sinwoo1",
"syrwoo1",
"whwwoo1",
"grswoo",
"okiwoo1",
"chhpar3",
"whbwoo1",
"ruwwoo1",
"stcwoo1",
"whtwoo2",
"litwoo1",
"yetwoo1",
"gogwoo1",
"whbwoo7",
"goowoo1",
"grcwoo1",
"haipar1",
"goowoo3",
"crmwoo2",
"blnwoo1",
"spbwoo1",
"grbwoo3",
"norfli",
"gilfli",
"ferfli1",
"chifli1",
"andfli1",
"taipar1",
"camfli1",
"cinwoo1",
"wavwoo1",
"scbwoo5",
"chcwoo1",
"chewoo2",
"pacwoo1",
"blcwoo4",
"blcwoo5",
"crcwoo2",
"whcpar1",
"ruhwoo1",
"caawoo1",
"rinwoo1",
"helwoo1",
"blbwoo3",
"linwoo1",
"pilwoo",
"whbwoo2",
"andwoo1",
"blawoo1",
"babpar1",
"powwoo1",
"crbwoo1",
"renwoo1",
"robwoo1",
"crcwoo1",
"pabwoo1",
"guawoo2",
"crbwoo2",
"magwoo1",
"ivbwoo",
"ornpar1",
"banwoo2",
"chtwoo1",
"greyel1",
"lesyel1",
"crwwoo1",
"stbwoo3",
"lacwoo1",
"sttwoo1",
"scbwoo1",
"japwoo1",
"rebpar5",
"eugwoo2",
"grnwoo3",
"levwoo1",
"recwoo2",
"blhwoo1",
"gyfwoo1",
"gyhwoo4",
"himfla1",
"comfla1",
"sptfla1",
"gybpar3",
"bkrfla1",
"bkrfla2",
"busfla1",
"luzfla1",
"yeffla1",
"rehfla1",
"javfla1",
"grefla1",
"crbfla1",
"whnwoo1",
"chbpar1",
"pahwoo1",
"bamwoo1",
"olbwoo2",
"marwoo1",
"baywoo1",
"orbwoo1",
"rufwoo2",
"burwoo1",
"babwoo3",
"bunwoo1",
"chotin1",
"snopar1",
"ashwoo1",
"soowoo1",
"sousow1",
"grswoo1",
"relser1",
"bllser1",
"blacar1",
"retcar2",
"carcar1",
"moucar1",
"blophe1",
"whtcar1",
"strcar1",
"y00678",
"yehcar1",
"chicar1",
"laufal1",
"baffal1",
"plffal1",
"liffal1",
"cryfof1",
"westra1",
"sbffal1",
"coffal1",
"buffal1",
"spwfal2",
"pygfal1",
"whrfal1",
"colfal1",
"bltfal1",
"whffal1",
"phifal1",
"sattra1",
"piefal2",
"leskes1",
"eurkes",
"eurkes1",
"madkes1",
"spokes1",
"auskes1",
"amekes",
"foxkes1",
"grykes1",
"blytra1",
"bankes1",
"renfal1",
"reffal1",
"amufal1",
"elefal1",
"soofal1",
"aplfal",
"merlin",
"batfal1",
"orbfal1",
"temtra1",
"eurhob",
"afrhob1",
"orihob1",
"aushob1",
"nezfal1",
"brofal1",
"gryfal1",
"lanfal1",
"lagfal1",
"sakfal1",
"verpar1",
"gyrfal",
"prafal",
"perfal",
"taifal1",
"kakapo2",
"kea1",
"nezkak1",
"cockat",
"rtbcoc1",
"glbcoc1",
"szepar1",
"ytbcoc1",
"whtblc1",
"slbblc1",
"palcoc1",
"gagcoc1",
"galah",
"pincoc1",
"lobcor1",
"wescor1",
"litcor2",
"himmon1",
"duccoc1",
"succoc",
"blecoc1",
"grepar",
"grypar1",
"refpar5",
"yefpar4",
"brnpar1",
"bnnpar2",
"meypar1",
"chimon1",
"ruepar1",
"brhpar2",
"senpar",
"rebpar1",
"litpar2",
"scspar1",
"bufpar1",
"sarpar2",
"brbpar1",
"gotpar2",
"vartin1",
"kokphe1",
"spwpar2",
"gyhpar1",
"moupar2",
"barpar1",
"rufpar1",
"andpar1",
"teppar1",
"monpar",
"monpar2",
"tuipar1",
"wiltur",
"plapar1",
"whwpar",
"yecpar",
"gycpar1",
"orcpar",
"cowpar1",
"gowpar2",
"recpar3",
"blbpar4",
"brhpar1",
"ocetur1",
"sahpar1",
"rofpar2",
"orcpar2",
"caipar2",
"balpar1",
"vulpar1",
"rufpar2",
"inwpar1",
"refpar2",
"blwpar1",
"rufgro",
"duspar1",
"rebpar2",
"schpar1",
"spfpar1",
"spfpar2",
"blhpar1",
"whcpar",
"brwpar1",
"shtpar2",
"yefpar5",
"hazgro1",
"fespar1",
"vinpar1",
"tucpar1",
"respar2",
"blbpar1",
"whfpar1",
"yebpar1",
"cubpar1",
"hispar1",
"licpar",
"saggro",
"relpar",
"relpar4",
"recpar",
"yelpar1",
"blcpar2",
"rebpar7",
"renpar1",
"yehpar",
"yenpar1",
"ywcpar",
"gusgro",
"bufpar",
"scnpar1",
"meapar1",
"meapar",
"kawpar1",
"imppar1",
"retpar1",
"orwpar",
"dubpar1",
"mexpar1",
"dusgro",
"buwpar2",
"buwpar1",
"buwpar3",
"grrpar1",
"spepar1",
"pacpar2",
"yefpar2",
"blhpar4",
"whbpar1",
"refpar3",
"soogro1",
"bltpar2",
"blwpar2",
"mabpar",
"peapar1",
"crbpar1",
"gncpar",
"pfrpar1",
"gybpar1",
"mafpar3",
"paipar1",
"shtgro",
"paipar6",
"sanpar2",
"bonpar1",
"rofpar3",
"sampar1",
"fispar1",
"matpar2",
"elopar1",
"whnpar1",
"blcpar1",
"rustin1",
"grpchi",
"brbpar2",
"reepar1",
"rohpar1",
"suwpar1",
"auspar1",
"slbpar1",
"burpar",
"hyamac1",
"indmac1",
"thbpar",
"lepchi",
"mafpar1",
"oltpar1",
"orfpar",
"pefpar1",
"brtpar1",
"cacpar1",
"duhpar",
"bkhpar",
"subpar1",
"janpar1",
"whtpta1",
"gocpar2",
"rebmac2",
"buhmac1",
"yecmac",
"buwmac1",
"baymac",
"chfmac1",
"milmac",
"scamac1",
"ragmac1",
"wilpta",
"goppar1",
"yeepar1",
"golpar3",
"resmac2",
"bucpar",
"grnpar",
"grnpar2",
"grnpar3",
"pacpar1",
"scfpar1",
"rocpta1",
"scfpar3",
"mitpar",
"rempar",
"crfpar",
"whepar2",
"cubpar2",
"hispar",
"pespar1",
"vaspar1",
"blapar1",
"sprgro",
"levpar1",
"seypar1",
"ycppar1",
"geppar1",
"bfppar1",
"fippar1",
"rbppar1",
"suppar1",
"regpar1",
"alepar1",
"wescap1",
"pakpar1",
"aukpar1",
"rewpar1",
"mirtai1",
"luzrat1",
"bhrtai1",
"bcrtai1",
"eclpar",
"recpar2",
"blcpar3",
"blbcap1",
"sinpar1",
"blrpar1",
"blnpar1",
"azrpar1",
"gyhpar2",
"slhpar1",
"blhpar3",
"plhpar1",
"rebpar4",
"lotpar2",
"blagro1",
"malpar1",
"laypar1",
"alepar2",
"rorpar",
"maupar1",
"brtpar2",
"patpar1",
"motpar1",
"rerpar1",
"bluebo1",
"caugro1",
"bluebo4",
"mulpar1",
"hoopar1",
"gospar1",
"recpar1",
"greros2",
"criros2",
"norros1",
"pahros1",
"easros1",
"bartin2",
"tibpar1",
"wesros1",
"polpar1",
"swipar1",
"crspar1",
"maspar2",
"respar1",
"horpar2",
"necpar1",
"chipar1",
"noipar1",
"grypar",
"yefpar3",
"malpar2",
"refpar4",
"gropar1",
"nigpar2",
"boupar2",
"blwpar3",
"elepar1",
"turpar1",
"sccpar1",
"daupar1",
"plflor1",
"reflor2",
"reflor1",
"failor1",
"joslor1",
"paplor1",
"paplor3",
"duclor1",
"meelor1",
"pallor1",
"reephe1",
"collor1",
"blclor2",
"ultlor1",
"kuhlor1",
"blulor1",
"yeblor2",
"orblor1",
"yeblor1",
"publor1",
"blclor1",
"mikphe1",
"varlor1",
"puclor1",
"litlor1",
"duslor1",
"carlor1",
"brolor1",
"blalor1",
"yeslor1",
"gollor1",
"muslor1",
"humphe1",
"yaglor2",
"ornlor1",
"pohlor1",
"scblor1",
"railor7",
"olhlor1",
"railor3",
"budger",
"lafpar1",
"edfpar1",
"golphe",
"safpar1",
"obfpar1",
"vehpar1",
"cehpar1",
"phihap1",
"bchpar1",
"suhpar1",
"mohpar1",
"sulhap1",
"sahpar2",
"laaphe1",
"paphap1",
"pyghap1",
"gyhlov1",
"rehlov1",
"blwlov1",
"peflov",
"fislov1",
"yeclov",
"lillov1",
"blclov1",
"rinphe1",
"riflem1",
"soiwre1",
"sapayo1",
"schasi1",
"velasi1",
"sunasi1",
"yebasi1",
"grabro1",
"lotbro1",
"dusbro1",
"rinphe2",
"visbro1",
"watbro1",
"sibbro1",
"barbro1",
"banbro1",
"baybro1",
"gyhbro1",
"rusbro1",
"afrbro1",
"grebro1",
"smbtin1",
"chephe1",
"whibro1",
"earpit1",
"giapit1",
"runpit1",
"schpit1",
"blnpit1",
"blrpit1",
"banpit3",
"banpit4",
"blhpit1",
"whieap2",
"blupit1",
"babpit1",
"rebpit1",
"sulpit3",
"siapit1",
"molpit1",
"sompit1",
"pappit1",
"loupit1",
"neipit1",
"whieap1",
"blcpit1",
"garpit1",
"bkhpit1",
"blbpit1",
"afrpit1",
"grbpit1",
"indpit1",
"blwpit1",
"manpit1",
"hoopit2",
"brephe1",
"faipit1",
"noipit1",
"ivbpit1",
"elepit2",
"elepit7",
"elepit6",
"blfpit1",
"azbpit1",
"suppit1",
"raipit1",
"blephe1",
"tatlea1",
"soalea1",
"shblea1",
"sctlea1",
"bltlea1",
"grtlea1",
"rublea1",
"coamin1",
"slbmin1",
"commin1",
"swiphe1",
"punmin1",
"cammin2",
"thbmin1",
"rubmin1",
"gramin1",
"shbmin1",
"dawmin1",
"sptwoo1",
"oliwoo1",
"lotwoo1",
"bulphe1",
"tyrwoo1",
"whcwoo1",
"rudwoo1",
"tawwoo1",
"plbwoo1",
"plwwoo1",
"webwoo1",
"citwoo1",
"lobwoo1",
"nobwoo1",
"kalphe",
"amabaw1",
"blbwoo1",
"hofwoo2",
"plawoo1",
"babwoo1",
"rebwoo1",
"uniwoo1",
"rebwoo4",
"stbwoo1",
"mouwoo1",
"silphe",
"whtwoo1",
"grrwoo1",
"strwoo2",
"leswoo2",
"leswoo4",
"chrwoo1",
"ocewoo1",
"ocewoo2",
"elewoo1",
"spiwoo1",
"compea",
"butwoo1",
"cocwoo1",
"ivbwoo1",
"blswoo1",
"spowoo1",
"olbwoo1",
"stbwoo2",
"zimwoo2",
"rebscy1",
"blbscy1",
"bartin1",
"grepea1",
"cubscy1",
"brbscy1",
"grescy1",
"scbwoo4",
"whswoo1",
"sthwoo1",
"nabwoo1",
"spcwoo1",
"monwoo1",
"scawoo1",
"scbpar1",
"scawoo2",
"linwoo3",
"linwoo4",
"inawoo1",
"ducwoo1",
"slbxen1",
"plaxen1",
"strxen1",
"potpal1",
"rutxen1",
"crhpar1",
"whttre2",
"stbear2",
"rocear1",
"batear1",
"crachi1",
"buftuf1",
"buftuf3",
"strtuf1",
"ruwbar1",
"bolear1",
"redspu1",
"chaear1",
"wibhor1",
"palhor2",
"palhor4",
"palhor5",
"pabhor2",
"leshor1",
"rufhor2",
"crehor1",
"shtstr1",
"paispu1",
"wrlrus1",
"cubree1",
"strear1",
"sctear1",
"pafear1",
"bubear2",
"lotcin1",
"blacin1",
"buwcin1",
"corcin1",
"ceyspu1",
"chwcin1",
"crwcin1",
"olrcin1",
"gyfcin1",
"stbcin1",
"roycin1",
"whbcin1",
"whwcin1",
"dabcin1",
"surcin1",
"palpep1",
"seacin1",
"ducfog1",
"wcfgle1",
"grexen1",
"pabtre1",
"crytre1",
"swfgle",
"rurfog1",
"alfgle1",
"bcfgle1",
"bopphe1",
"crfgle1",
"mofgle1",
"stfgle1",
"rutfog1",
"whbfog1",
"obfgle2",
"bbfgle1",
"rumfog1",
"wtfgle1",
"lifgle1",
"mapphe1",
"rnfgle1",
"gufgle1",
"perrec1",
"bolrec1",
"chwhoo1",
"bffgle",
"chwfog1",
"cangro1",
"ccfgle1",
"hhfgle1",
"gepphe1",
"rufgle1",
"samfog1",
"unitre1",
"flatre1",
"rubtre1",
"stbtre1",
"blbtre1",
"strtre1",
"stctre1",
"ccfgle2",
"nibkiw1",
"tattin1",
"grypep3",
"brfgle1",
"btfgle1",
"butfog4",
"strwoo1",
"strwoo5",
"obfgle3",
"parfog1",
"perfog1",
"wefgle1",
"spobar1",
"grypep2",
"whtbar1",
"rudtre1",
"fudtre1",
"peatre1",
"thtray1",
"demwir1",
"tatspi1",
"bctspi1",
"tutspi1",
"pmtspi1",
"mopphe1",
"sttspi2",
"rctspi1",
"wbtspi1",
"sttspi1",
"antspi1",
"artspi1",
"ruftho1",
"ruftho3",
"stftho1",
"littho1",
"btpphe1",
"chbtho1",
"spbtho1",
"frbtho1",
"gretho2",
"oretho1",
"orbtho1",
"whbspi2",
"firgat1",
"lalbru1",
"crbcan1",
"mobpar1",
"crbcan4",
"crbcan5",
"bercan1",
"shbcan1",
"cipcan1",
"hudcan1",
"auscan1",
"lifcan1",
"mascan1",
"juncan1",
"chbpar3",
"sctcan1",
"stbcan1",
"puncan1",
"sttcan1",
"corcan1",
"itaspi1",
"shbcan2",
"bltthi1",
"punthi1",
"vilthi2",
"taibap1",
"cancan1",
"rufcan1",
"maqcan1",
"eyrthi1",
"ocbthi1",
"perthi1",
"whcthi1",
"mocthi1",
"pilgra1",
"orfplu2",
"grejun1",
"dobgra1",
"equgra1",
"rorbar1",
"strsof1",
"orisof1",
"deasof1",
"plasof1",
"rumsof1",
"stbree2",
"sutspi1",
"redjun",
"marspi2",
"licspi1",
"rubspi4",
"rubspi5",
"parspi1",
"crespi1",
"stcspi2",
"bolspi1",
"olispi1",
"palspi1",
"grejun2",
"gyhspi1",
"crcspi1",
"refspi1",
"tepspi1",
"stcspi1",
"asbspi1",
"licspi5",
"spespi1",
"scaspi1",
"dutcan1",
"rewtin1",
"ceyjun1",
"patcan2",
"stecan1",
"caccan1",
"bcwspi1",
"caacac1",
"rufcac2",
"brncac1",
"whtcac2",
"yecspi2",
"rawspi2",
"forfra2",
"whbspi1",
"chospi2",
"occspi1",
"gybspi1",
"plcspi1",
"whlspi1",
"marspi3",
"grespi2",
"necspi3",
"necspi1",
"crefra2",
"rubspi3",
"slaspi1",
"sitspi1",
"resspi2",
"rucspi1",
"bahspi1",
"pinspi1",
"dusspi1",
"mccspi1",
"cabspi1",
"gryfra",
"cibspi1",
"spispi1",
"dabspi1",
"pabspi1",
"sofspi1",
"azaspi1",
"apuspi1",
"whwspi1",
"rubspi2",
"hotspi1",
"swafra1",
"blhspi1",
"ruhspi1",
"rufspi1",
"bltspi1",
"stbspi1",
"rudspi1",
"chtspi1",
"rurant1",
"chsant1",
"aswant1",
"chifra1",
"wibant1",
"spwant2",
"rusant1",
"rufant12",
"dowant1",
"blabus1",
"rebbus1",
"ronbus1",
"chtant1",
"brbant2",
"blkfra",
"wheant1",
"rubsti1",
"madant1",
"fooant1",
"ornant1",
"rutant3",
"stbant2",
"yapant1",
"gybant1",
"bltant2",
"paifra1",
"mouant",
"pygant1",
"guista1",
"amasta1",
"pacant",
"cheant1",
"klaant1",
"stcant4",
"yetant1",
"sclant1",
"coqfra2",
"whfant2",
"whfant6",
"slaant1",
"risant1",
"salant1",
"lowant1",
"batant3",
"iheant1",
"uniant1",
"alaant1",
"whtfra2",
"plwant2",
"gryant1",
"leaant1",
"stcant3",
"orbant1",
"bawant1",
"nabant1",
"blhant4",
"whfant1",
"whfant4",
"huatin1",
"rewfra2",
"serant1",
"blbant2",
"rubant4",
"sinant1",
"parant1",
"banant2",
"sttant1",
"pltant1",
"rubant3",
"dutant2",
"finfra2",
"satant1",
"cinant1",
"blsant1",
"peaant1",
"bahant1",
"caaant1",
"blcant2",
"ajpant1",
"mapant1",
"crbant1",
"moofra2",
"astant1",
"sptant1",
"dugant1",
"todant1",
"spbant4",
"rorant1",
"pecant1",
"labant1",
"ancant1",
"yebant2",
"gywfra1",
"ruwant3",
"ruwant4",
"spbant5",
"plaant1",
"stcant1",
"spcant1",
"rubant2",
"bicant4",
"pluant3",
"whsant4",
"orrfra2",
"colant1",
"blbant1",
"batant2",
"barant1",
"chaant1",
"bacant2",
"linant1",
"chbant2",
"blhant2",
"blaant1",
"shefra1",
"cocant1",
"blgant2",
"casant1",
"whsant2",
"uniant2",
"plwant1",
"mocant1",
"uplant1",
"wesant1",
"norsla1",
"tibsno1",
"natsla1",
"bolsla1",
"plasla1",
"soosla1",
"amaant2",
"acrant1",
"stbant1",
"varant1",
"ruwant2",
"rucant1",
"altsno1",
"blcant4",
"sicant1",
"gloant1",
"whbant2",
"fasant1",
"bamant1",
"greant1",
"latant1",
"tufant1",
"bltant3",
"causno1",
"undant2",
"fulant1",
"spbant3",
"giaant2",
"spfant1",
"whpant1",
"whmant2",
"oceant1",
"bicant2",
"whcant1",
"cassno1",
"rutant4",
"whtant1",
"lunant2",
"baeant1",
"harant1",
"whbant5",
"chcant1",
"hacant1",
"bsbeye1",
"rwbeye1",
"tactin1",
"himsno",
"pafant1",
"scbant3",
"scbant8",
"ferant1",
"berant1",
"rutant1",
"ocrant1",
"dutant1",
"scaant2",
"strant2",
"sespar1",
"samant2",
"klaant2",
"lotant1",
"sthant1",
"guiwaa1",
"imewaa1",
"perwaa1",
"yebwaa1",
"ronwaa1",
"spiwaa1",
"sanpar1",
"manwaa1",
"yebant3",
"chtant2",
"zimant1",
"wilant1",
"parant2",
"blaant4",
"blaant5",
"dusant1",
"blaant2",
"broqua1",
"manant1",
"rdjant1",
"gryant2",
"magant1",
"banant1",
"jetant1",
"ribant1",
"febant1",
"whiant1",
"scaant3",
"snmqua2",
"whbant4",
"squant1",
"blcant3",
"batant1",
"spoant1",
"spbant6",
"dobant2",
"silant1",
"pluant1",
"slcant3",
"blbqua1",
"spwant3",
"humant1",
"brhant1",
"rufant4",
"rorant2",
"cauant2",
"chbant1",
"gyhant1",
"sttant3",
"esmant1",
"comqua1",
"dumant3",
"dumant1",
"whbant1",
"bltant1",
"whlant1",
"blfant2",
"whbant6",
"asbant1",
"bacant1",
"wesfie1",
"japqua",
"whbfie9",
"eaafie1",
"fbfeye1",
"wsfeye1",
"sleant1",
"blhant3",
"allant1",
"whsant1",
"goeant1",
"sooant1",
"harqua1",
"immant1",
"zelant1",
"rucant2",
"blfant1",
"bkfant2",
"rufant3",
"blhant1",
"rubant1",
"shtant1",
"strant3",
"raiqua1",
"sucant1",
"rutant2",
"schant1",
"barant2",
"undant1",
"giaant1",
"greant2",
"varant2",
"mouant1",
"scaant1",
"orntin1",
"stuqua1",
"plbant1",
"ocsant1",
"eluant1",
"chcant2",
"watant1",
"samant1",
"cunant1",
"sthant2",
"gynant1",
"jocant1",
"barpar2",
"chnant1",
"pabant1",
"whtant2",
"yebant1",
"whbant3",
"rutant5",
"bayant1",
"rawant1",
"rufant5",
"rufant6",
"arapar1",
"rufant7",
"bicant3",
"chaant4",
"equant1",
"rufant8",
"cheant2",
"chaant5",
"panant1",
"rufant9",
"oxaant1",
"relpar1",
"rufant10",
"punant1",
"rufant11",
"tawant1",
"brbant1",
"antant1",
"rufant2",
"stcant2",
"spoant6",
"spoant5",
"chukar",
"alfant1",
"masant1",
"thiant1",
"whlant2",
"amaant1",
"whbant7",
"spbant1",
"thlant2",
"thlant3",
"tepant1",
"rocpar2",
"ocbant1",
"scbant2",
"hooant1",
"perant1",
"ocfant1",
"rubant5",
"rubant7",
"slcant2",
"slcant5",
"crfant1",
"phipar1",
"rufgna3",
"chbgna1",
"hoogna1",
"astgna1",
"rufgna2",
"slagna1",
"chcgna1",
"blcgna1",
"blbgna1",
"blcant1",
"przpar1",
"rucant3",
"ocetap1",
"cthhue1",
"bthhue1",
"moutur1",
"whttap1",
"chutap1",
"cregal1",
"sangal1",
"rubtap1",
"jubqua1",
"spobam1",
"slabri1",
"strbri1",
"ocftap1",
"asctap1",
"whbtap1",
"bahtap1",
"martap1",
"diatap2",
"bratap1",
"robqua1",
"roctap1",
"platap1",
"sertap1",
"moctap1",
"dustap1",
"magtap1",
"anctap1",
"whwtap1",
"partap4",
"partap2",
"chitin1",
"pabqua1",
"partap1",
"whbtap2",
"zimtap1",
"puntap1",
"diatap1",
"viltap1",
"amptap1",
"miltap1",
"nebtap1",
"tritap1",
"harfra3",
"boltap1",
"whctap1",
"samtap1",
"lottap1",
"ruvtap1",
"blatap2",
"laftap1",
"juntap1",
"unitap1",
"tsctap1",
"camfra2",
"blatap1",
"siftap1",
"nartap2",
"tactap1",
"chotap1",
"upmtap1",
"stitap1",
"alptap1",
"ecutap1",
"cartap1",
"hanfra2",
"mattap1",
"brrtap1",
"pertap1",
"mertap1",
"chutap2",
"spitap2",
"colcre1",
"colcre2",
"olccre1",
"marcre1",
"chnfra2",
"elecre1",
"grhpip1",
"wibpip1",
"bkcpip1",
"platyr2",
"yuntyr1",
"roltyr3",
"roltyr1",
"gretyr1",
"reityr1",
"chnfra3",
"scltyr1",
"gyctyr1",
"sohtyr1",
"plctyr1",
"blctyr1",
"ashtyr1",
"tartyr1",
"yectyr1",
"forela1",
"gryela3",
"ercfra",
"fooela1",
"pacela1",
"yecela1",
"greela",
"yebela1",
"carela1",
"larela1",
"norela1",
"whcela1",
"whcela4",
"djifra1",
"smbela1",
"oliela1",
"slaela1",
"mobela1",
"broela1",
"plcela1",
"lesela1",
"cooela1",
"rucela1",
"mouela1",
"swifra2",
"higela2",
"higela3",
"greela1",
"sieela3",
"sieela2",
"graela1",
"yebtyr1",
"brctyr",
"whltyr1",
"nobtyr",
"ahafra2",
"sobtyr1",
"suifly1",
"whttyr2",
"whttyr1",
"bubtyr1",
"ruwtyr1",
"subtyr1",
"whbtyr1",
"bkctit1",
"pcttyr1",
"brutin1",
"gysfra1",
"abttyr1",
"ybttyr1",
"tuttyr1",
"jfttyr1",
"agitit1",
"unstit1",
"tortyr1",
"rivtyr1",
"sootyr1",
"y01036",
"jacfra2",
"whbtyr2",
"gyctyr2",
"moctyr7",
"moctyr6",
"yeltyr1",
"beatac1",
"gybtac1",
"dindor2",
"credor1",
"subdor1",
"rebfra1",
"wardor1",
"ticdor1",
"boptyr1",
"hfptyr1",
"rhptyr1",
"rinant2",
"souant1",
"tacpyt1",
"rsptyr1",
"gawtyr2",
"capfra2",
"lewtyr1",
"grwtyr1",
"paltyr2",
"paltyr3",
"paltyr4",
"paltyr5",
"boltyr1",
"rebtyr2",
"mistyr1",
"chityr1",
"natfra2",
"slftyr1",
"guityr1",
"goftyr1",
"goftyr5",
"chotyr1",
"goftyr4",
"pertyr1",
"vabtyr1",
"chabrt1",
"mfbtyr1",
"hilfra2",
"spbtyr1",
"vebtyr2",
"anbtyr1",
"sobtyr2",
"moctyr2",
"alatyr1",
"restyr1",
"bahtyr1",
"yegtyr1",
"olgtyr1",
"dosfra2",
"ecutyr1",
"blftyr1",
"rubtyr1",
"rultyr1",
"ciftyr1",
"migtyr1",
"saptyr1",
"oustyr1",
"sdmtyr2",
"bartyr1",
"scafra2",
"stnfly1",
"olsfly2",
"ocbfly1",
"mccfly1",
"mccfly3",
"gyhfly1",
"secfly1",
"slcfly1",
"rubfly2",
"incfly1",
"heufra1",
"chafly3",
"nosfly1",
"amsfly1",
"sosfly1",
"slbtyr1",
"platyr1",
"amatyr1",
"pattyr3",
"flafly2",
"orcfly1",
"clafra1",
"unafly1",
"rorfly1",
"olcfly1",
"brcfly1",
"hanfly1",
"orbfly1",
"ornfly1",
"mcrtyr1",
"shttyr1",
"drbpyt1",
"andtin1",
"harfra4",
"bnbpyt1",
"flapyt1",
"snttyr1",
"yuttyr1",
"acrtot1",
"bbttyr1",
"wettyr1",
"whbtot1",
"zittyr1",
"erttyr1",
"swafra2",
"jottyr1",
"snttyr2",
"hattyr1",
"pvttyr1",
"pettyr1",
"btttyr1",
"bbttyr2",
"cbttyr1",
"kattyr1",
"btttyr2",
"yenspu1",
"fotpyt1",
"eaptyr1",
"wbptyr1",
"bcptyr1",
"stptyr1",
"norben1",
"souben1",
"scptyr1",
"lcptyr1",
"dbptyr1",
"gybfra1",
"heptyr1",
"peptyr1",
"rcttyr1",
"johtot1",
"wcttyr1",
"bawtyr1",
"buctof1",
"rudtof1",
"ocftof1",
"smftof1",
"renfra1",
"ruftof1",
"shtfly1",
"gowtof1",
"bkbtof1",
"blctyr2",
"sptfly1",
"gyhtof1",
"cotfly1",
"matfly1",
"patfly1",
"sponig1",
"ybtfly1",
"bhtfly1",
"brofly1",
"ruftwi1",
"eyrfla1",
"olifla1",
"pacfla1",
"fubfla1",
"yeofly1",
"orefly1",
"whtnig3",
"yemfly1",
"yemfly2",
"gycfly1",
"yebfly3",
"yebfly4",
"cicspa1",
"sttspa1",
"whtspa1",
"gocspa1",
"yetspa1",
"dianig1",
"whcspa1",
"ruwspa1",
"cinmat1",
"cinfly2",
"clifly1",
"eulfly1",
"gybfly1",
"tacfly1",
"blbfly1",
"belfly1",
"papnig1",
"pilfly1",
"easpho",
"blkpho",
"saypho",
"tuffly",
"olifly2",
"olsfly",
"grepew",
"darpew1",
"smcpew1",
"malnig1",
"ochpew1",
"wewpew",
"eawpew",
"tropew3",
"tropew2",
"whtpew1",
"blapew1",
"cubpew1",
"jampew1",
"leapew1",
"cubtin1",
"grenig1",
"yebfly",
"acafly",
"wilfly",
"aldfly",
"whtfly1",
"leafly",
"hamfly",
"dusfly",
"gryfly",
"pinfly1",
"nacnig1",
"pasfly",
"corfly",
"yelfly1",
"bubfly",
"blcfly1",
"verfly",
"brufly1",
"drwtyr1",
"yebtyr2",
"sbgtyr1",
"leanig1",
"wfgtyr1",
"ongtyr1",
"plcgrt1",
"rngtyr1",
"dafgrt1",
"wbgtyr1",
"cibgrt1",
"pugtyr1",
"andneg1",
"ausneg1",
"sacnig1",
"spetyr1",
"bbbtyr1",
"andtyr2",
"cintyr1",
"whwblt1",
"hubtyr1",
"ruttyr1",
"rivtyr2",
"ambtyr1",
"crbtyr1",
"lesnig",
"vebtyr1",
"rrbtyr1",
"rwbtyr1",
"whrmon2",
"whimon1",
"fiediu1",
"grymon1",
"bkcmon1",
"salmon1",
"chvtyr2",
"comnig",
"stbtyr1",
"rbbtyr1",
"smbtyr1",
"smbtyr2",
"bkbsht1",
"lessht1",
"wtstyr1",
"gresht1",
"stttyr2",
"shtgrt1",
"antnig",
"piwtyr1",
"bbwtyr1",
"mawtyr1",
"whmtyr1",
"bawmon3",
"cottyr1",
"stttyr1",
"tumtyr2",
"crocht1",
"crocht3",
"shtnig1",
"gobcht1",
"yebcht1",
"jelcht1",
"slbcht2",
"slbcht3",
"rbctyr1",
"bbctyr1",
"dorcht1",
"wbctyr1",
"pictyr1",
"rubnig1",
"pattyr2",
"lottyr1",
"stftyr1",
"cattyr",
"pirfly1",
"whbfly1",
"rumfly1",
"socfly1",
"grcfly1",
"ducfly2",
"batnig1",
"grekis",
"leskis1",
"whrfly",
"yetfly2",
"thsfly2",
"lebfly2",
"gobfly1",
"gocfly1",
"baifly1",
"subfly",
"whbnot1",
"bahnig1",
"strfly1",
"bobfly1",
"sulfly1",
"varfly",
"croslf1",
"sntkin1",
"whtkin1",
"trokin",
"coukin",
"caskin",
"blanig1",
"thbkin",
"weskin",
"sctfly",
"fotfly",
"easkin",
"grykin",
"giakin1",
"logkin",
"gramou1",
"pabmou1",
"pygnig1",
"rufmou1",
"sibsir1",
"whrsir1",
"todsir1",
"siryst3",
"rufcas2",
"astcas2",
"ruffly1",
"yucfly1",
"sadfly1",
"compau",
"ducfly",
"swafly1",
"venfly1",
"panfly1",
"shcfly1",
"apifly1",
"paefly1",
"socfly2",
"astfly",
"nutfly",
"scrnig1",
"grcfly",
"bncfly",
"galfly1",
"rutfly1",
"lasfly",
"stofly1",
"purfly1",
"lahfla2",
"flafly1",
"rutfla1",
"samnig1",
"dutfla1",
"rutatt1",
"cinatt1",
"ochatt1",
"cibatt1",
"ducatt1",
"gyhatt1",
"brratt1",
"scafru1",
"fitfru1",
"litnig1",
"scbfru1",
"hanfru1",
"rebfru1",
"blcfru1",
"orbfru1",
"masfru1",
"gobfru1",
"barfru1",
"batfru1",
"gabfru1",
"siwnig1",
"gytpih1",
"olipih2",
"hoober2",
"bkhber1",
"andcot1",
"gcoroc1",
"gurcot1",
"bnrcot1",
"whccot1",
"rutpla1",
"whwnig1",
"perpla1",
"whtpla1",
"swtcot1",
"bavcot1",
"chbcot1",
"reccot1",
"chccot1",
"crifru1",
"putfru1",
"rerfru1",
"bawnig1",
"banumb1",
"lowumb1",
"amaumb1",
"capuch1",
"rufpih1",
"rocpih1",
"scrpih1",
"civpih1",
"bagcot1",
"gywcot1",
"okbkiw1",
"lesnot1",
"bawnig3",
"chcpih1",
"duspih1",
"scwpih1",
"whibel2",
"thwbel",
"batbel1",
"beabel1",
"pltcot1",
"spacot1",
"lovcot1",
"swtnig1",
"blucot1",
"turcot1",
"pubcot1",
"putcot1",
"blfcot1",
"pomcot1",
"whtcot1",
"whwcot1",
"bltcot1",
"snocot1",
"lytnig1",
"dwtman1",
"titman1",
"sctman1",
"sbtman1",
"pbtman1",
"witman1",
"sdmman1",
"yehman2",
"jetman2",
"araman1",
"whtnig1",
"helman1",
"lotman1",
"latman1",
"blbman1",
"yunman1",
"swtman1",
"pitman1",
"gowman1",
"whtman1",
"whrman1",
"sptnig1",
"oliman2",
"blaman1",
"greman2",
"blcman1",
"sncman1",
"gocman2",
"opcman1",
"orbman1",
"whfman1",
"blrman1",
"latnig1",
"orcman3",
"yecman2",
"flcman2",
"whbman1",
"whcman1",
"gocman1",
"orcman1",
"crhman1",
"witman2",
"batman1",
"sctnig2",
"clwman1",
"strman2",
"strman5",
"paiman1",
"ficman1",
"whcman2",
"schman1",
"recman1",
"rotman1",
"gohman1",
"lotnig2",
"rehman1",
"sharpb1",
"royfly1",
"royfly2",
"royfly3",
"royfly5",
"tabfly1",
"surfly1",
"whifly1",
"bltfly1",
"leapau1",
"rutfly2",
"blctit1",
"blttit1",
"mastit1",
"varsch1",
"thlsch7",
"thlsch2",
"thlsch8",
"thlsch4",
"thlsch3",
"chopoo1",
"gresch2",
"spemou1",
"cinmou1",
"butpur1",
"duspur1",
"whbpur1",
"shlcot1",
"shlcot2",
"whnxen1",
"grbbec1",
"darnot1",
"earpoo1",
"gnbbec2",
"barbec1",
"slabec1",
"cinbec2",
"chcbec1",
"cinbec1",
"whwbec1",
"blcbec1",
"bawbec1",
"grcbec1",
"yucpoo1",
"glbbec1",
"oncbec1",
"pitbec1",
"crebec1",
"rotbec",
"jambec1",
"alblyr1",
"suplyr1",
"rusbir1",
"nosbir1",
"ocepoo1",
"ocbcat1",
"taccat1",
"grecat1",
"spocat2",
"huocat1",
"bkccat1",
"norcat1",
"arfcat1",
"spocat1",
"tobcat2",
"compoo",
"arcbow1",
"vogbow2",
"macbow2",
"strbow1",
"golbow1",
"flabow2",
"regbow1",
"satbow1",
"wesbow1",
"grebow1",
"chwwid",
"spobow1",
"fabbow1",
"whttre3",
"paptre1",
"whbtre2",
"ruftre3",
"brotre2",
"bkttre1",
"walfai1",
"empfai1",
"rufnig1",
"lovfai1",
"varfai1",
"varfai5",
"blbfai1",
"rewfai1",
"supfai1",
"splfai1",
"pucfai2",
"whsfai1",
"rebfai1",
"granig3",
"whwfai1",
"orcfai1",
"souemu1",
"malemu1",
"rucemu1",
"blagra1",
"whtgra1",
"shtgra1",
"rufgra1",
"rusgra1",
"tacnig1",
"strgra2",
"eyrgra1",
"thbgra1",
"thbgra4",
"dusgra1",
"daehon1",
"grshon1",
"easspi1",
"wesspi1",
"grbhon1",
"yucnig1",
"soomel1",
"olshon1",
"rushon1",
"rubhon1",
"blbhon1",
"cricha1",
"yelcha1",
"whfcha1",
"bouhon1",
"rubhon2",
"sitnig1",
"ruthon1",
"gryhon1",
"babhon1",
"brbhon1",
"lobhon2",
"tawstr1",
"arfhon1",
"smohon1",
"spahon1",
"barhon2",
"sponot1",
"bucnig",
"nehhon1",
"tachon1",
"plahon1",
"marhon1",
"sthhon1",
"piehon1",
"tui1",
"nezbel1",
"blahon1",
"whsfri1",
"easwpw1",
"sermyz1",
"whcmyz1",
"ashmyz1",
"dusmyz4",
"dusmyz1",
"redmyz1",
"blamyz1",
"crhmyz1",
"alomyz1",
"rehmyz1",
"souwpw1",
"summyz1",
"rotmyz2",
"moumyz1",
"banmyz1",
"sulmyz1",
"scamyz1",
"necmyz1",
"carmyz1",
"micmyz1",
"scbmyz1",
"purnig1",
"ebomyz1",
"scnmyz1",
"yevmyz1",
"soomyz1",
"orbmyz1",
"bkbmyz1",
"recmyz1",
"meyfri1",
"litfri1",
"gryfri1",
"dusnig1",
"timfri1",
"dusfri1",
"serfri1",
"bkffri1",
"bkffri2",
"helfri1",
"helfri3",
"helfri4",
"nebfri1",
"whnfri1",
"bronig1",
"sicfri1",
"noifri1",
"necfri1",
"spohon3",
"machon2",
"tabhon1",
"strhon1",
"paihon1",
"whshon1",
"crehon2",
"rennig1",
"nehhon2",
"whchon2",
"sunhon1",
"olihon1",
"brohon1",
"dabhon1",
"siehon1",
"whthon2",
"serhon1",
"yeehon1",
"grynig2",
"blchon1",
"banhon1",
"sacmel1",
"chagih1",
"duegih1",
"crohon1",
"easwah1",
"norwah1",
"weswah1",
"kanhon1",
"grynig1",
"whehon1",
"yethon1",
"blfhon1",
"blchon2",
"stbhon2",
"brhhon1",
"whthon1",
"whnhon2",
"whnhon3",
"blhhon1",
"palnig1",
"whghon1",
"yelhon1",
"pubhon1",
"yeshon1",
"lewhon1",
"whfhon1",
"yethon3",
"pughon1",
"forhon1",
"moumel1",
"dwatin1",
"eurnig1",
"scrhon1",
"mimhon1",
"taghon1",
"grahon2",
"grahon5",
"grahon3",
"yeghon1",
"whlhon1",
"stbhon3",
"varhon1",
"somnig1",
"manhon1",
"sinhon1",
"orchon1",
"yethon2",
"fushon1",
"gyhhon1",
"yephon1",
"whphon1",
"yefhon1",
"blthon1",
"rucnig1",
"obshon1",
"bruwat1",
"litwat1",
"redwat1",
"yelwat1",
"reghon1",
"spchon1",
"brihon1",
"eunhon1",
"cibmel1",
"egynig1",
"vogmel1",
"yebmel1",
"huomel1",
"belmel1",
"ornmel1",
"belmin1",
"noimin1",
"yetmin1",
"blemin1",
"easbri1",
"syknig1",
"wesbri1",
"rufbri1",
"spopar1",
"fospar1",
"rebpar6",
"strpar1",
"dwawhi1",
"fernwr1",
"scrubt2",
"weebil1",
"nubnig1",
"strfie1",
"ruffie3",
"ruffie2",
"chrhea1",
"shyhea1",
"pilotb1",
"redthr1",
"spewar3",
"rocwar1",
"rumwar1",
"golnig1",
"momwar1",
"yetscr1",
"pabscr1",
"vogscr1",
"bufscr1",
"papscr1",
"gygscr1",
"labscr2",
"becscr1",
"larscr1",
"jernig1",
"whbscr3",
"tasscr1",
"athscr1",
"whbscr1",
"bimwar1",
"broger1",
"gryger1",
"noiger1",
"chiger2",
"fatger1",
"latnig2",
"brbger1",
"gobger1",
"rusger1",
"manger1",
"plager1",
"wesger1",
"dusger1",
"labger1",
"biager1",
"yebger1",
"meenig1",
"gnbger1",
"whtger1",
"faiger1",
"moutho1",
"brotho1",
"inltho1",
"tastho1",
"chrtho1",
"burtho1",
"westho1",
"elctin1",
"andnig1",
"slbtho2",
"yertho1",
"yeltho1",
"mouger1",
"strtho1",
"slbtho1",
"souwhi1",
"chbwhi1",
"banwhi1",
"negbab1",
"phinig1",
"gycbab1",
"halbab1",
"whbbab3",
"chcbab2",
"norlog1",
"soulog1",
"chowch1",
"lorsat1",
"cresat1",
"obsber1",
"sulnig1",
"lebber1",
"blaber1",
"fatber1",
"satber1",
"spober1",
"dwahon2",
"pyghon1",
"yeblon1",
"slclon1",
"titber1",
"dosnig1",
"creber1",
"kokako3",
"kokako4",
"saddle2",
"saddle3",
"stitch1",
"easwhi1",
"weswhi1",
"weswhi4",
"chiwed2",
"bksnig1",
"chiwed1",
"spjbab1",
"blujeb1",
"blujeb2",
"cbjbab1",
"spqthr1",
"chequt1",
"copqut1",
"ciqthr1",
"nulqut1",
"finnig1",
"cbqthr1",
"chbqut1",
"paqthr1",
"ruwbat1",
"shtbat1",
"darbat1",
"capbat10",
"woobat1",
"chibat1",
"senbat1",
"monnig1",
"gyhbat1",
"palbat1",
"pribat1",
"bkhbat2",
"bkhbat1",
"pygbat1",
"angbat1",
"verbat1",
"itubat1",
"weabat1",
"indnig1",
"fepbat1",
"whtshr1",
"weawae1",
"chweye1",
"baweye1",
"btweye1",
"wfweye1",
"btweye2",
"ybweye1",
"rcweye1",
"madnig1",
"bnweye1",
"jaweye1",
"fibbus1",
"monbus1",
"gyhbus1",
"grbbus1",
"mtkbus1",
"macbus2",
"blfbus1",
"olibus1",
"swanig1",
"gygbus1",
"subbus1",
"focbus2",
"dohbus1",
"bokmak1",
"ropbus1",
"martch2",
"brctch1",
"thstch1",
"soutch1",
"quctin1",
"planig1",
"bkctch1",
"labpuf1",
"pifpuf1",
"reepuf1",
"blbpuf2",
"norpuf1",
"pripuf1",
"soobou1",
"mosbou1",
"mosbou4",
"stsnig1",
"wisbou1",
"fuebou1",
"slcbou1",
"luebus1",
"gabbus1",
"renbus1",
"trobou2",
"trobou1",
"zanbou1",
"soubou1",
"savnig1",
"gabbou1",
"turbou1",
"comgon1",
"papgon1",
"blhgon1",
"crbgon1",
"yebbou1",
"brubru1",
"yebboa1",
"blbboa1",
"frenig1",
"retvan1",
"resvan1",
"hobvan1",
"bervan1",
"lafvan1",
"vadvan1",
"polvan1",
"sibvan1",
"whhvan1",
"chavan2",
"bonnig1",
"bluvan3",
"rufvan1",
"helvan1",
"tylvan1",
"nuthat2",
"darnew1",
"comnew1",
"arcnew1",
"retnew1",
"warfly1",
"salnig1",
"crobab1",
"whihel1",
"chbhel1",
"rubhel1",
"rethel1",
"anghel1",
"chfhel1",
"bwfshr1",
"bwfshr2",
"larwoo1",
"batnig2",
"malwoo1",
"comwoo1",
"srlwoo1",
"ruwphi2",
"mabphi2",
"afrshf1",
"bawfly1",
"borbri1",
"ashwoo2",
"whbwoo4",
"lotnig1",
"fijwoo1",
"whbwoo8",
"grewoo1",
"maswoo1",
"whbwoo5",
"blfwoo1",
"duswoo1",
"lowpel1",
"moupel1",
"blabut1",
"sltnig1",
"ausmag2",
"grybut1",
"sibbut1",
"blbbut1",
"piebut1",
"hoobut1",
"tagbut1",
"piecur1",
"blacur2",
"grycur1",
"sqtnig1",
"motwhi1",
"comior1",
"whtior1",
"greior2",
"greior1",
"whbmin3",
"whbmin2",
"fiemin1",
"smamin1",
"gycmin1",
"puntin1",
"stwnig1",
"sunmin1",
"shbmin2",
"flomin1",
"lotmin1",
"scamin3",
"scamin1",
"ashmin1",
"ryumin1",
"brrmin1",
"rosmin1",
"pewnig1",
"ashcus2",
"ashcus3",
"whbcus2",
"grycus1",
"stbcus1",
"hoocus1",
"cercus1",
"piecus1",
"grocus1",
"yeecus1",
"oilbir1",
"bkfcus1",
"boycus1",
"burcus1",
"walcus1",
"melcus3",
"melcus1",
"babcus1",
"javcus1",
"larcus1",
"slacus1",
"grepot1",
"whrcus1",
"suncus1",
"whbcus1",
"molcus1",
"blkcus1",
"rescus1",
"petcus1",
"putcus1",
"golcus1",
"mcgcus1",
"lotpot1",
"neccus1",
"whwcus1",
"blacus1",
"bkbcus2",
"sumcus1",
"kaicus1",
"gyhcus1",
"bkbcus1",
"cicada7",
"soicus1",
"norpot1",
"sulcus2",
"papcus1",
"cicada1",
"mancic1",
"cicada4",
"cicada5",
"cicada3",
"negcus1",
"halcus1",
"pygcus1",
"compot1",
"blucus1",
"poltri1",
"whwtri2",
"lottri1",
"whwtri1",
"rubtri1",
"blbtri1",
"bkbtri2",
"whbtri1",
"vartri1",
"andpot1",
"bawtri1",
"pietri1",
"whrtri1",
"bkwcus1",
"bkhcus1",
"indcus1",
"lescus1",
"maucus1",
"reucus1",
"yellow3",
"whwpot1",
"whiteh1",
"pipipi1",
"varsit8",
"blksit1",
"watplo1",
"runwhi1",
"crepit1",
"crebel1",
"cresht1",
"mabwhi1",
"rufpot1",
"sansht1",
"blapit1",
"oliwhi1",
"relwhi1",
"gilwhi1",
"manwhi1",
"grbwhi1",
"whvwhi1",
"biawhi1",
"ruswhi1",
"pattin1",
"marfro1",
"brbwhi1",
"yebwhi1",
"subwhi1",
"borwhi1",
"vogwhi1",
"grywhi2",
"sclwhi1",
"rubwhi1",
"yetwhi1",
"golwhi2",
"papfro1",
"bkcwhi1",
"golwhi1",
"weswhi2",
"biswhi1",
"oriwhi1",
"louwhi1",
"renwhi1",
"necwhi2",
"necwhi3",
"fijwhi2",
"tawfro1",
"temwhi1",
"bltwhi1",
"bohwhi1",
"batwhi1",
"lorwhi1",
"regwhi1",
"gobwhi1",
"rufwhi1",
"blhwhi1",
"whbwhi1",
"soifro1",
"walwhi1",
"drawhi1",
"drawhi3",
"whbwhi2",
"mornin1",
"whbpit1",
"ruspit1",
"bowsht1",
"rufsht2",
"litshr5",
"larfro1",
"litshr6",
"litshr1",
"litshr4",
"litshr3",
"litshr2",
"grysht1",
"yebshr1",
"magshr1",
"whrshr1",
"whcshr1",
"dulfro1",
"tigshr1",
"soushr3",
"buhshr1",
"brnshr",
"rebshr1",
"isashr1",
"rutshr2",
"burshr1",
"babshr1",
"gybshr1",
"phifro1",
"gycshr1",
"macshr1",
"legshr2",
"logshr",
"norshr4",
"norshr1",
"ibgshr1",
"chgshr1",
"gybfis1",
"lotfis1",
"goufro1",
"taifis1",
"somfis1",
"norfis1",
"soufis1",
"wooshr1",
"masshr1",
"grsbab1",
"besbab1",
"bhsbab1",
"himshb1",
"ceyfro1",
"whbshb1",
"dalshb1",
"clishb1",
"whbyuh1",
"rubpep1",
"blbpep1",
"cssvir1",
"grsvir1",
"ybsvir1",
"scsvir1",
"hodfro1",
"gyegre1",
"rucgre1",
"oligre1",
"ashgre1",
"brhgre1",
"lecgre2",
"scrgre1",
"gycgre1",
"tacgre1",
"lesgre1",
"horscr1",
"shtfro3",
"ducgre1",
"bucgre1",
"gofgre1",
"rungre1",
"golvir1",
"yegvir",
"reevir1",
"yucvir",
"bkwvir",
"chivir1",
"shtfro2",
"norvir1",
"tepgre1",
"phivir",
"warvir",
"brcvir1",
"hutvir",
"gryvir",
"yetvir",
"yewvir1",
"chovir1",
"javfro3",
"buhvir",
"casvir",
"plsvir",
"flbvir1",
"manvir1",
"thbvir2",
"cozvir1",
"stavir1",
"whevir",
"thbvir",
"javfro2",
"jamvir1",
"cubvir1",
"belvir",
"purvir1",
"bkcvir1",
"dwavir1",
"slavir1",
"grnfig1",
"wetfig1",
"ausfig1",
"palfro1",
"varpit2",
"varpit4",
"varpit3",
"hoopit1",
"serori1",
"halori1",
"olbori1",
"timori1",
"timori3",
"burori3",
"sunfro1",
"broori1",
"greori1",
"bacori1",
"marori2",
"blhori1",
"datori1",
"phiori1",
"whlori1",
"grhori1",
"wbhori1",
"feonig1",
"abhori1",
"dahori1",
"bltori1",
"blwori1",
"afgori2",
"ingori1",
"eugori2",
"slbori1",
"brodro1",
"lrtdro1",
"spaown1",
"crbdro1",
"grtdro1",
"srldro1",
"anddro1",
"suldro1",
"spadro1",
"ritdro1",
"hacdro1",
"tabdro1",
"hacdro9",
"molown1",
"balica1",
"ashdro1",
"whbdro1",
"maydro1",
"credro1",
"vemdro6",
"fotdro5",
"bladro1",
"vemdro5",
"fotdro4",
"waonig1",
"cstdro1",
"shidro1",
"wstdro1",
"shadro1",
"blufan1",
"visblf1",
"blhfan1",
"visfan1",
"whtfan1",
"spbfan1",
"norscr1",
"moonig1",
"whbfan1",
"whbfan2",
"piefan1",
"phipif1",
"spofan1",
"wilwag1",
"brcfan1",
"citfan1",
"norfan1",
"whwfan1",
"barown1",
"sotfan1",
"bltfan1",
"wbtfan1",
"blafan1",
"chbfan1",
"frifan1",
"gryfan1",
"nezfan1",
"manfan1",
"brofan1",
"auonig1",
"dusfan1",
"renfan1",
"strfan1",
"kanfan1",
"rutfan1",
"bacfan1",
"dimfan1",
"palfan1",
"stbfan1",
"cibfan1",
"cretre1",
"rubfan2",
"pelfan1",
"lotfan1",
"rubfan1",
"matfan1",
"manfan2",
"ruffan1",
"arafan1",
"papdro1",
"silkta2",
"gyrtre1",
"blnmon1",
"pabmon1",
"shcmon1",
"celmon1",
"afcfly1",
"bhcfly1",
"rvpfly1",
"bhpfly1",
"batpaf1",
"afpfly1",
"whitre1",
"aspfly1",
"blypaf1",
"amupaf1",
"japfly1",
"blpfly1",
"rupfly1",
"mapfly1",
"sepfly1",
"mapfly2",
"elepai5",
"moutre1",
"elepai4",
"elepai",
"rarmon1",
"tahmon2",
"marmon2",
"iphmon2",
"fatmon1",
"vanmon1",
"slamon1",
"bubmon1",
"spfswi1",
"soushr2",
"fijshr1",
"bktshr1",
"renshr1",
"blamon1",
"spwmon1",
"blbmon2",
"flomon1",
"blcmon1",
"spemon1",
"whcswi1",
"whtmon1",
"whtmon2",
"biamon1",
"hoomon1",
"bltmon1",
"bawmon1",
"kulmon1",
"whcmon2",
"islmon1",
"blfmon1",
"whfswi1",
"blwmon1",
"boumon1",
"chbmon1",
"whcmon1",
"yapmon1",
"whemon1",
"whnmon1",
"loemon1",
"golmon1",
"rucmon1",
"liskiw1",
"souscr1",
"sooswi1",
"frimon1",
"frnmon1",
"piemon1",
"maglar1",
"torlar1",
"ocefly1",
"palfly3",
"pohfly1",
"molfly1",
"biafly1",
"rotswi1",
"leafly2",
"stbfly1",
"ochfly1",
"melfly1",
"vanfly1",
"chtfly1",
"brbfly1",
"satfly1",
"shifly1",
"papfly1",
"blkswi",
"resfly1",
"blamag1",
"blkmag2",
"sibjay1",
"sicjay1",
"gryjay",
"blcjay2",
"whcjay2",
"turjay1",
"beajay1",
"whcswi2",
"azhjay1",
"bltjay1",
"dwajay1",
"whtjay1",
"sitjay1",
"bucjay1",
"sabjay",
"yucjay1",
"pubjay1",
"viojay1",
"grdswi1",
"azujay1",
"purjay1",
"cucjay1",
"tufjay1",
"blcjay1",
"whtjay2",
"cayjay1",
"aznjay1",
"plcjay1",
"whnjay1",
"tepswi1",
"grnjay",
"brnjay",
"btmjay",
"wtmjay1",
"blujay",
"stejay",
"mexjay4",
"mexjay3",
"unijay1",
"cowscj1",
"chcswi1",
"wooscj2",
"issjay",
"flsjay",
"pinjay",
"eurjay1",
"blhjay1",
"lidjay1",
"azwmag2",
"azwmag3",
"ceymag1",
"whcswi",
"formag1",
"gobmag1",
"rbbmag",
"whwmag1",
"yebmag1",
"ruftre2",
"bortre1",
"grytre1",
"whbtre1",
"coltre1",
"bisswi1",
"andtre1",
"rattre1",
"hootre1",
"rattre2",
"eurmag1",
"eurmag3",
"eurmag5",
"eurmag6",
"orimag1",
"bkbmag1",
"whnswi1",
"yebmag",
"stbcro1",
"mogjay1",
"xigjay1",
"tugjay1",
"irgjay1",
"clanut",
"redcro",
"redcro9",
"whwcro",
"maggoo1",
"pltswi1",
"hiscro",
"mouser1",
"eurgol",
"citfin1",
"corfin1",
"fifser1",
"eurser1",
"syrser1",
"comcan",
"capcan1",
"tenswi1",
"yeccan1",
"bkhcan2",
"tibser1",
"lawgol",
"amegfi",
"lesgol",
"eursis",
"antsis1",
"pinsis",
"blhsis1",
"draswi1",
"blcsis2",
"yebsis1",
"olisis1",
"hoosis1",
"safsis1",
"yefsis1",
"blasis1",
"yersis1",
"thbsis1",
"andsis1",
"gloswi1",
"eleeup1",
"anteup1",
"goreup1",
"blnchl1",
"chbchl1",
"yecchl1",
"blcchl1",
"gobchl1",
"jameup1",
"orceup1",
"satswi1",
"plueup1",
"puteup1",
"fineup1",
"vefeup1",
"trieup1",
"screup3",
"screup1",
"yeceup1",
"gobeup1",
"whveup1",
"cavswi3",
"gnteup1",
"vioeup1",
"yeteup1",
"thbeup1",
"spceup1",
"olbeup1",
"fuveup1",
"taceup1",
"orbeup1",
"brgeup1",
"cavswi2",
"goseup1",
"rubeup1",
"chbeup1",
"mcclon",
"laplon",
"smilon",
"chclon",
"snobun",
"austhr1",
"lawthr1",
"pygswi2",
"slathr3",
"crbthr1",
"trithr1",
"marthr2",
"blbthr1",
"bkbthr3",
"yelthr1",
"whtrob1",
"whtthr2",
"whnrob1",
"seyswi1",
"rubrob",
"pavthr1",
"pabthr1",
"cocthr1",
"hauthr1",
"rubthr1",
"clcrob",
"baerob1",
"ecuthr1",
"hauthr3",
"masswi1",
"unithr1",
"ficale3",
"ficale2",
"kasrob2",
"fosrob1",
"besrob1",
"misrob1",
"blsrob1",
"rutscr1",
"kasrob1",
"wfwduc1",
"indswi1",
"bbsrob1",
"rbsrob1",
"brsrob1",
"indrob1",
"rutsha2",
"mamrob1",
"semrob1",
"phimar1",
"andsha1",
"whrsha2",
"phiswi1",
"whbsha1",
"vissha1",
"whvsha1",
"blasha1",
"afffly1",
"wbffly1",
"gyttif1",
"grytif1",
"angslf1",
"wheslf1",
"molswi3",
"abyslf1",
"yebfly2",
"nobfly1",
"sobfly1",
"palfly2",
"chafly2",
"grafly1",
"marfly1",
"fisfly1",
"silver1",
"mouswi2",
"spofly1",
"spofly3",
"gamfly1",
"gysfly1",
"dasfly",
"asbfly",
"bnsfly1",
"subfly2",
"brbfly2",
"ferfly1",
"whrswi2",
"ashfly1",
"swafly3",
"casfly1",
"olifly1",
"chafly1",
"afdfly1",
"ligfly2",
"yeffly1",
"dubfly2",
"tesfly1",
"ausswi1",
"ussfly1",
"whgfly1",
"rubfly3",
"habfly1",
"pabfly2",
"wbbfly1",
"pacblf1",
"hibfly1",
"larblf1",
"pabfly1",
"himswi2",
"tibfly3",
"tibfly4",
"lobblf1",
"bobfly2",
"butfly1",
"butfly2",
"mabfly1",
"mabfly2",
"subfly4",
"subfly1",
"monswi2",
"tibfly2",
"blffly1",
"matfly2",
"whtfly2",
"flojuf2",
"flojuf1",
"bncjuf1",
"fucjuf1",
"gycjuf1",
"chtjuf1",
"uniswi1",
"fujnil1",
"rubnil1",
"ruvnil1",
"vivnil3",
"larnil1",
"smanil1",
"bawfly2",
"zapfly1",
"dubfly3",
"verfly4",
"palswi2",
"islfly1",
"nilfly2",
"indfly1",
"eurrob1",
"bubwre1",
"supwre1",
"fabwre1",
"lobwre1",
"grywre1",
"rivwre1",
"bbwduc",
"palswi1",
"baywre1",
"stbwre1",
"sttwre1",
"carwre",
"winwre4",
"winwre3",
"pacwre1",
"clawre1",
"houwre",
"houwre5",
"caiswi1",
"socwre2",
"rubwre2",
"ochwre1",
"mouwre1",
"samwre1",
"tepwre1",
"timwre1",
"whbwre1",
"wbwwre1",
"gbwwre1",
"atiswi1",
"gybwow3",
"bwwwre1",
"munwow1",
"nigwre1",
"scbwre1",
"fluwre1",
"wibwre1",
"chbwre1",
"muswre2",
"sonwre1",
"marswi2",
"lobgna4",
"lobgna5",
"tafgna1",
"colgna1",
"guigna2",
"guigna3",
"sltgna1",
"guigna4",
"iqugna1",
"inagna1",
"ednswi1",
"trogna1",
"melbla1",
"cubbla",
"rusbla",
"brebla",
"comgra",
"nicgra1",
"cargra1",
"gragra1",
"botgra",
"gerswi1",
"grtgra",
"rebgra1",
"vefgra1",
"oribla1",
"mougra1",
"gotgra1",
"ausbla1",
"schbla1",
"forbla1",
"chobla1",
"scaswi1",
"bolbla1",
"bawcow4",
"bawcow3",
"yewbla2",
"paebla2",
"unibla2",
"chcbla2",
"yehbla2",
"sacbla2",
"baymar1",
"papnee1",
"yermar1",
"ovenbi1",
"woewar1",
"louwat",
"norwat",
"gowwar",
"buwwar",
"bawwar",
"prowar",
"swawar",
"whrnee1",
"crcwar",
"fltwar1",
"tenwar",
"orcwar",
"colwar",
"lucwar",
"naswar",
"virwar",
"conwar",
"gycyel",
"motspi1",
"masyel2",
"masyel3",
"masyel4",
"masyel5",
"macwar",
"mouwar",
"kenwar",
"olcyel1",
"blpyel1",
"belyel1",
"spwduc1",
"sirnee1",
"bahyel1",
"altyel1",
"comyel",
"hooyel1",
"elwwar1",
"arrwar1",
"hoowar",
"amered",
"kirwar",
"camwar",
"sabspi1",
"cerwar",
"norpar",
"tropar",
"magwar",
"babwar",
"bkbwar",
"yelwar1",
"yelwar",
"chswar",
"bkpwar",
"whtnee",
"btbwar",
"palwar",
"olcwar1",
"pinwar",
"yerwar",
"audwar",
"yerwar2",
"yetwar",
"yetwar3",
"vitwar1",
"sibnee1",
"prawar",
"adewar1",
"grawar",
"btywar",
"towwar",
"herwar",
"comchi1",
"ibechi2",
"eacwar1",
"ijlwar1",
"grrswi1",
"phlwar1",
"letwar1",
"yetwow1",
"brwwar1",
"rfwwar1",
"lauwow1",
"bcwwar1",
"ugawow1",
"whswar1",
"gycwar2",
"barswi",
"goswar1",
"gycwar1",
"whiwar2",
"biawar1",
"pltwar1",
"marwar4",
"grnwar1",
"grewar2",
"grewar3",
"emlwar1",
"leaswi1",
"lblwar1",
"salwar1",
"pllwar1",
"arcwar3",
"arcwar2",
"arcwar1",
"chcwar2",
"sunwar1",
"yebwar2",
"limlew1",
"corswi",
"subwar3",
"yevwar1",
"weclew1",
"blylew1",
"clalew1",
"harlew1",
"klolew1",
"halwar1",
"davlew1",
"gyhwar2",
"parswi1",
"mouwar2",
"mouwar4",
"tilwar2",
"rolwar1",
"sclwar1",
"sulwar1",
"sulwar3",
"kullew1",
"islwar1",
"isllew9",
"chiswi",
"isllew10",
"grawar1",
"malbrw1",
"subbrw1",
"anbwar1",
"gcbwar1",
"mohbrw1",
"barwar2",
"cvswar1",
"grswar2",
"wiwduc1",
"vauswi",
"leswar1",
"maswar1",
"sebwar1",
"grrwar1",
"orrwar1",
"clrwar1",
"aurwar1",
"miller",
"sairew1",
"narwar1",
"chaswi2",
"carrew1",
"chiwar1",
"marrew2",
"tahrew1",
"marwar2",
"turwar1",
"cirwar2",
"rimrew1",
"bbrwar1",
"mouwar1",
"sicswi1",
"aquwar1",
"sedwar1",
"blutit",
"azutit2",
"grotit1",
"gretit1",
"gretit4",
"gretit2",
"grbtit1",
"whwtit2",
"shtswi1",
"yeltit2",
"blltit1",
"indtit1",
"yectit1",
"whsblt1",
"whwblt3",
"soublt1",
"cartit2",
"whbtit5",
"whbblt1",
"whtswi",
"dustit2",
"rubtit3",
"rettit2",
"stbtit2",
"somtit4",
"miotit2",
"ashtit2",
"grytit1",
"euptit1",
"bhptit1",
"whtswi1",
"wcptit1",
"chptit1",
"yeptit1",
"mcptit1",
"afptit1",
"soptit1",
"verdin",
"yesnic1",
"easnic1",
"yetnic1",
"andswi1",
"bearee1",
"grhlar1",
"beelar1",
"sphlar12",
"gralar2",
"shclar1",
"klblar6",
"benlar1",
"elblar1",
"y00415",
"anpswi",
"agular1",
"thblar1",
"deslar1",
"batlar1",
"rutlar2",
"beslar1",
"madlar1",
"bcslar1",
"cbslar1",
"ascspl1",
"ftpswi1",
"chhspl1",
"gybspl1",
"fislar1",
"sablar2",
"piblar3",
"foxlar1",
"faclar8",
"karlar2",
"ferlar2",
"dunlar5",
"gstswi1",
"barlar4",
"rudlar1",
"liblar1",
"eaclar1",
"caclar1",
"rewlar1",
"runlar1",
"flalar1",
"retale1",
"brcale1",
"fuwduc",
"lstswi1",
"whcale1",
"wbrcha1",
"arrcha1",
"ofrcha1",
"carcha1",
"wtrcha1",
"gywroc1",
"bsrcha1",
"rurcha1",
"wbrcha2",
"afpswi1",
"rcrcha1",
"chrcha1",
"whrcha1",
"scrcha1",
"wcrcha1",
"swyrob1",
"whsrob1",
"obfrob1",
"shtaka2",
"bocaka11",
"malpas1",
"equaka1",
"shaaka1",
"rubaka1",
"usaaka1",
"iriaka1",
"copthr1",
"rtpthr1",
"spmthr1",
"gresho1",
"bagbab2",
"aspswi1",
"gousho1",
"rubsho1",
"lessho1",
"whbsho4",
"whbsho5",
"whbsho6",
"whbsho10",
"eyjfly1",
"minjuf1",
"inbrob1",
"alpswi1",
"sibrob",
"rutrob1",
"ryurob2",
"ryurob3",
"japrob2",
"blueth",
"whbred1",
"thrnig1",
"comnig1",
"whtrob3",
"motswi2",
"himrub1",
"chirub1",
"sibrub",
"fireth1",
"whtrob2",
"whbsho1",
"whbsho3",
"wbbrob1",
"rbbrob1",
"cobrob1",
"aleswi1",
"refblu",
"himblu1",
"gobrob1",
"litfor1",
"chnfor1",
"blbfor1",
"slbfor1",
"whcfor1",
"whcfor3",
"spofor1",
"comswi",
"ceywht1",
"shwthr1",
"borwht1",
"chwwht1",
"mawthr2",
"mawthr1",
"fowthr1",
"blwthr1",
"blfrob1",
"korfly1",
"plaswi1",
"narfly1",
"narfly2",
"narfly3",
"slbfly1",
"mugfly",
"pybfly1",
"rugfly1",
"sapfly1",
"ultfly1",
"lipfly1",
"nyaswi1",
"slbfly2",
"snbfly1",
"rutfly6",
"taifly1",
"rebfly",
"kasfly1",
"semfly1",
"eupfly1",
"colfly1",
"anglar1",
"plwduc1",
"palswi3",
"monlar2",
"latlar1",
"sinbus6",
"sinbus1",
"burbus1",
"benbus1",
"indbus3",
"indbus2",
"jerbus2",
"gillar1",
"afrswi1",
"frilar1",
"whtlar1",
"woolar1",
"stalar2",
"shtlar1",
"piblar1",
"whwlar1",
"razsky1",
"orisky1",
"skylar",
"madswi1",
"tawlar1",
"sunlar1",
"lablar1",
"thelar1",
"crelar1",
"mallar1",
"crelar3",
"horlar",
"temlar1",
"humlar1",
"fowswi1",
"sstlar4",
"blalar2",
"blalar4",
"reclar1",
"gstlar1",
"bimlar1",
"callar1",
"blalar1",
"tiblar1",
"duplar1",
"braswi1",
"dunlar1",
"dunlar4",
"lstlar2",
"sstlar1",
"mstlar1",
"tstlar1",
"sanlar1",
"somgre1",
"slbgre1",
"golgre1",
"fotswi",
"blcbul1",
"combri2",
"gntbri1",
"gyhbri1",
"lesbri2",
"lesbri3",
"yetgre1",
"spogre1",
"swagre1",
"joygre1",
"saaswi1",
"yengre1",
"yebgre1",
"simgre1",
"hongre1",
"sjogre1",
"camgre2",
"shegre1",
"easmog4",
"easmog3",
"easmog5",
"blyswi1",
"easmog1",
"stcgre3",
"stcgre4",
"stcgre1",
"wesbeg1",
"easbeg1",
"retgre1",
"whbgre1",
"yebgre3",
"litgre2",
"cooswi1",
"yewgre1",
"plagre2",
"grygre1",
"ansgre1",
"tingre1",
"whtgre2",
"xavgre1",
"ictgre1",
"terbro1",
"caogre1",
"darswi1",
"norbro1",
"gyogre1",
"fisgre1",
"cabgre1",
"cabgre3",
"leaflo1",
"yesbul1",
"gyhgre1",
"toogre1",
"baugre1",
"wawduc1",
"litswi1",
"paogre1",
"habbul1",
"hobbul1",
"yebbul2",
"gytbul1",
"ochbul3",
"whtbul1",
"ochbul2",
"putbul1",
"strbul2",
"houswi1",
"finbul1",
"olibul1",
"buvbul1",
"chabul1",
"cacbul1",
"gyebul1",
"crsbul1",
"ashbul1",
"cinbul1",
"chebul1",
"horswi1",
"yebbul3",
"sunbul1",
"sunbul2",
"strbul1",
"moubul2",
"phibul1",
"minbul1",
"stbbul1",
"golbul3",
"sulgob1",
"whrswi1",
"golbul4",
"visbul1",
"yelbul1",
"yelbul4",
"brebul1",
"reubul1",
"madbul1",
"maubul1",
"blabul1",
"sqtbul1",
"critop1",
"combul1",
"mohbul1",
"seybul1",
"pubbul1",
"bawbul2",
"yewbul1",
"gyhbul1",
"blhbul1",
"andbul1",
"spebul1",
"fietop1",
"gybbul1",
"scbbul1",
"blcbul2",
"bkcbul1",
"bkcbul4",
"bafbul1",
"crefin1",
"colfin1",
"crvbul1",
"olwbul1",
"whnjac1",
"reebul1",
"asfbul1",
"whbbul2",
"ayebul1",
"stebul2",
"sttbul1",
"flabul1",
"flabul3",
"yetbul1",
"yeebul1",
"blkjac1",
"brbbul1",
"livbul1",
"stybul1",
"rewbul",
"yevbul1",
"revbul",
"sohbul1",
"whebul1",
"gchwar",
"btnwar",
"whtsic1",
"citwar1",
"samwar1",
"whswar2",
"flawar1",
"whbwar2",
"palwar1",
"blcwar2",
"burwar1",
"rivwar1",
"twbwar1",
"butsic1",
"twbwar2",
"gobwar3",
"gobwar4",
"whlwar1",
"gytwar1",
"gagwar2",
"rucwar1",
"fatwar",
"rucwar",
"rucwar4",
"lewduc1",
"sabher1",
"blcwar1",
"pirwar1",
"gobwar1",
"gcrwar",
"thswar5",
"thswar9",
"thbwar2",
"thswar2",
"thswar1",
"canwar",
"hobher2",
"wlswar",
"refwar",
"redwar1",
"pihwar1",
"paired",
"sltred",
"brcred1",
"yecred1",
"whfred2",
"gofred1",
"broher",
"spered1",
"colred1",
"parred1",
"tepred1",
"duftan1",
"olbtan1",
"olgtan1",
"rbptan1",
"flctan",
"heptan2",
"rubher",
"heptan",
"sumtan",
"rottan1",
"scatan",
"westan",
"whwtan1",
"rehtan1",
"rehtan2",
"rcatan1",
"rtatan1",
"batbar1",
"bcatan1",
"soatan1",
"cratan1",
"olitan1",
"cartan2",
"lestan",
"ocbtan1",
"yelgro",
"bltgro1",
"gobgro1",
"patbar1",
"blbgro2",
"robgro",
"bkhgro",
"rebcha1",
"grtcha1",
"robcha1",
"norcar",
"vercar1",
"pyrrhu",
"blfgro1",
"brther2",
"yeggro1",
"crcgro",
"rabgro1",
"blusee1",
"blusee4",
"blbsee3",
"dickci",
"glbgro1",
"bubgro1",
"bubgro2",
"duther1",
"ultgro1",
"blubun",
"blugrb1",
"indbun",
"lazbun",
"varbun",
"paibun",
"robbun1",
"orbbun1",
"plushc1",
"stther1",
"cocfin2",
"brotan1",
"yesgro2",
"hootan1",
"chttan1",
"blbtan2",
"whctan1",
"scttan1",
"blmfin1",
"grpfin1",
"lither2",
"ptpfin1",
"lesgrf1",
"wtgfin1",
"grifin1",
"rbifin1",
"gywinf1",
"bbifin1",
"liifin1",
"mosfin1",
"blufin1",
"grskiw1",
"whbduc1",
"lither3",
"btsfin1",
"casfin1",
"grehon1",
"gochon2",
"sawtan1",
"baytan2",
"surtan1",
"scbtan2",
"yebtan1",
"guitan1",
"minher1",
"ruhtan1",
"swatan1",
"purhon1",
"relhon1",
"shbhon2",
"shihon1",
"scbdac1",
"sctdac1",
"bludac1",
"bagtan2",
"cither1",
"gycfin1",
"blcfin1",
"whbtan1",
"codfin1",
"yelcar1",
"diatan1",
"magtan2",
"blftan1",
"cintan1",
"reccar",
"bkther1",
"reccar2",
"crfcar1",
"yebcar",
"reccar3",
"reccar4",
"dottan1",
"ruttan1",
"spotan1",
"spetan1",
"yebtan2",
"stther2",
"gontan1",
"azrtan1",
"gagtan1",
"bugtan",
"saytan1",
"glatan1",
"azstan1",
"yewtan1",
"goctan2",
"paltan1",
"gycher1",
"blhtan1",
"siltan1",
"gnttan1",
"blctan1",
"gohtan1",
"blntan1",
"mastan1",
"blbtan1",
"chbtan1",
"scrtan1",
"redher1",
"bubtan2",
"babtan1",
"bestan1",
"spctan1",
"blbtan3",
"megtan1",
"bahtan1",
"ruwtan1",
"goetan1",
"sactan1",
"bubher1",
"flftan1",
"blwtan1",
"gagtan2",
"goltan1",
"emetan1",
"sittan1",
"sectan1",
"grhtan2",
"rentan1",
"brbtan1",
"socher1",
"gietan1",
"plctan1",
"turtan1",
"partan1",
"opctan1",
"oprtan1",
"ibesee1",
"piecro1",
"wiltit1",
"combul2",
"plaher1",
"scbcup3",
"slbtes1",
"pfbwar1",
"webwar1",
"humwar1",
"savwar1",
"blackc1",
"grewhi1",
"cacwre",
"sohwre1",
"cabgoo1",
"scther1",
"blbwre1",
"pltwre1",
"shttre1",
"woothr",
"spnthr1",
"sponit2",
"obnthr1",
"bhnthr1",
"sbnthr1",
"swathr",
"pabher1",
"bbnthr1",
"herthr",
"runthr1",
"rcnthr1",
"gycthr",
"bicthr",
"veery",
"lotthr1",
"alpthr1",
"himthr1",
"whbher1",
"sicthr1",
"lobthr1",
"dasthr1",
"evethr1",
"scathr2",
"scathr8",
"scathr4",
"scathr5",
"scathr6",
"sacthr2",
"whwher1",
"rutthr1",
"oltthr1",
"sibthr1",
"piethr1",
"grygrt1",
"spgthr1",
"spwthr1",
"crgthr1",
"abgthr1",
"orgthr1",
"greher1",
"orbthr1",
"rubthr2",
"grothr1",
"chithr2",
"sonthr1",
"fieldf",
"eacaka1",
"gabaka1",
"comcha",
"blucha2",
"tabher1",
"brambl",
"baygro1",
"colgro1",
"spwgro1",
"whwgro1",
"evegro",
"hoogro1",
"hawfin",
"pingro",
"brrbun1",
"koeher1",
"cabbun1",
"blhbun1",
"rehbun1",
"bkfbun1",
"leasal1",
"sibtan2",
"drasee1",
"crbgna1",
"masgna1",
"cubgna1",
"nebher1",
"trogna2",
"buggna",
"bktgna",
"calgna",
"bkcgna",
"whlgna2",
"whcnut1",
"prznut1",
"gianut1",
"whbnut",
"stbher1",
"beanut1",
"blunut1",
"vefnut1",
"yebnut1",
"subnut1",
"pygnut",
"bnhnut",
"bnhnut2",
"yunnut1",
"algnut1",
"mexher1",
"krunut1",
"rebnut",
"cornut1",
"snbnut1",
"rocnut1",
"pernut1",
"whbnut1",
"whtnut1",
"eurnut2",
"chvnut1",
"brant",
"lobher",
"chbnut2",
"chbnut3",
"chbnut4",
"wallcr1",
"eurtre1",
"eurtre3",
"brncre",
"battre1",
"ruftre4",
"bnttre1",
"lother1",
"bnttre2",
"sictre1",
"spocre3",
"spocre2",
"grycat",
"blacat1",
"normoc",
"tromoc",
"bahmoc",
"chimoc1",
"grbher1",
"lotmoc1",
"chbmoc1",
"patmoc1",
"whbmoc1",
"brbmoc1",
"galmoc1",
"hoomoc1",
"chamoc2",
"socmoc1",
"sagthr",
"grflan1",
"brnthr",
"lobthr",
"grathr1",
"benthr",
"ocethr1",
"cubthr",
"calthr",
"crithr",
"lecthr",
"blumoc",
"blflan1",
"bawmoc1",
"scbthr",
"peethr1",
"brotre1",
"metsta1",
"sinsta1",
"tansta1",
"atosta1",
"rensta1",
"lotsta1",
"webhum3",
"whesta2",
"brwsta1",
"sacsta1",
"ruwsta1",
"strsta1",
"asgsta1",
"molsta1",
"shtsta1",
"micsta1",
"polsta1",
"webhum1",
"samsta1",
"rarsta1",
"yefmyn1",
"lotmyn1",
"golmyn1",
"sulmyn1",
"apomyn2",
"coleto1",
"whnmyn1",
"baemyn1",
"hyavis1",
"fibmyn1",
"spwsta1",
"gocmyn1",
"ceymyn1",
"sohmyn1",
"whvmyn1",
"cremyn",
"junmyn1",
"colmyn1",
"banmyn1",
"hoovis2",
"commyn",
"vibsta1",
"rebsta1",
"whcsta1",
"bkcsta1",
"blwwar1",
"manrew1",
"labrew1",
"padwar1",
"blrwar1",
"brvear1",
"eurwar1",
"afrwar1",
"marwar3",
"thbwar1",
"afywar1",
"moywar1",
"boowar1",
"sykwar2",
"eaowar1",
"weowar1",
"rebgoo1",
"grnvie1",
"paywar1",
"upcwar1",
"oltwar1",
"melwar1",
"ictwar1",
"simgrw1",
"gybbab2",
"grgwar1",
"sakwar1",
"margra1",
"lesvio1",
"pagwar1",
"migwar",
"plewar1",
"lanwar",
"baswar1",
"eurwar2",
"brbwar2",
"cogwar1",
"chbwar1",
"frbwar1",
"spvear1",
"ltbwar1",
"cbbwar2",
"cbbwar4",
"talgrw1",
"cbbwar3",
"spobuw1",
"spobuw2",
"spobuw3",
"taibuw1",
"rubwar1",
"wvvear1",
"dabwar1",
"jabwar1",
"sicbuw1",
"benbuw1",
"flrgra1",
"spibir1",
"fernbi1",
"litgra1",
"malia1",
"broson1",
"tobhum1",
"bubbus1",
"rufson1",
"tawgra2",
"tawgra3",
"guathi2",
"necgra1",
"lolwar1",
"strgra1",
"ceybuw1",
"brtgra2",
"horsun2",
"brigra2",
"fatgra1",
"knswar1",
"banscw1",
"afswar1",
"camscw1",
"olbsun4",
"apbsun2",
"flbsun2",
"sousun2",
"pucfai1",
"madsun1",
"seysun2",
"humsun2",
"anjsun2",
"maysun2",
"lobsun2",
"gyhsun2",
"moasun1",
"linsun1",
"mansun1",
"bkefai1",
"mewsun2",
"mousun1",
"bohsun1",
"elesun1",
"lovsun1",
"hansun1",
"gousun1",
"grtsun1",
"fotsun1",
"bltsun1",
"whtgol1",
"eacsun1",
"magsun1",
"wecsun1",
"scasun1",
"temsun1",
"fitsun1",
"punsun1",
"litspi1",
"ortspi1",
"palspi2",
"tepgol1",
"thbspi1",
"lobspi1",
"spespi2",
"yeespi1",
"nafspi1",
"gybspi2",
"stbspi2",
"borspi1",
"strspi1",
"whispi1",
"hawgoo",
"grtgol1",
"cinwhe1",
"palroc1",
"rocpet1",
"whrsno1",
"tibsno2",
"whwsno1",
"blwsno1",
"runsno1",
"pedsno1",
"blasno1",
"fitawl1",
"yespet1",
"yetpet1",
"buspet1",
"chspet1",
"capspa1",
"chespa1",
"shrspa1",
"kerspa2",
"grrspa1",
"gyhspa1",
"ruthum1",
"swaspa2",
"swaspa1",
"pabspa1",
"sghspa2",
"sinspa1",
"russpa2",
"eutspa",
"saxspa1",
"plbspa1",
"socspa1",
"jamman1",
"spaspa1",
"itaspa1",
"houspa",
"somspa1",
"cavspa1",
"desspa3",
"argspa2",
"sugspa1",
"desspa1",
"wbbwea1",
"bltman1",
"rbbwea1",
"whbwea1",
"wbswea1",
"ccswea1",
"dsswea1",
"cbswea1",
"rutwea1",
"gyhsow1",
"bcswea1",
"socwea1",
"grtman1",
"scawea1",
"spfwea1",
"growea1",
"bagwea1",
"banwea1",
"berwea2",
"slbwea1",
"litwea1",
"spewea1",
"bknwea2",
"gnbman",
"strwea1",
"blbwea1",
"capwea1",
"bocwea1",
"afgwea1",
"hogwea1",
"orawea1",
"hemwea1",
"gopwea1",
"tagwea1",
"verman1",
"sbtwea1",
"kilwea1",
"ruewea1",
"lesmaw1",
"afmwea",
"kamwea1",
"vimwea1",
"spewea2",
"vilwea1",
"viewea3",
"antman2",
"blhwea1",
"gobwea1",
"cinwea1",
"chewea1",
"yemwea1",
"nelwea1",
"sakwea1",
"asgwea2",
"comwea1",
"strwea2",
"greman1",
"baywea1",
"forwea1",
"usawea1",
"brcwea1",
"bawwea1",
"recmal2",
"bltmal1",
"balmal2",
"revmal1",
"gramal1",
"cangoo",
"grtcar1",
"rehmal1",
"cremal1",
"rehwea1",
"carque1",
"rehque1",
"rebque1",
"redfod1",
"rehfod1",
"forfod1",
"maufod1",
"putcar1",
"seyfod1",
"rodfod1",
"yecbis",
"blabis1",
"zanbis1",
"blwbis1",
"redbis",
"orabis1",
"yelbis1",
"fatwid1",
"ortsun1",
"yeswid2",
"marwid1",
"whwwid1",
"recwid3",
"lotwid1",
"picmun1",
"moufir1",
"diafir1",
"reefir1",
"beafir1",
"amtsun3",
"crifin1",
"rebfir1",
"paifir1",
"stafin1",
"plhfin1",
"dobfin1",
"lotfin1",
"bltfin1",
"gyhsil1",
"broman1",
"amtsun2",
"magman1",
"bawman1",
"bawman3",
"madmun1",
"afrsil1",
"indsil",
"sthmun2",
"nutman",
"bltmun1",
"blfmun1",
"amtsun4",
"whrmun",
"dusmun1",
"javmun1",
"trimun",
"chemun",
"whhmun1",
"blbmun1",
"snmmun1",
"gybmun1",
"gycmun1",
"gorsun1",
"gyhmun1",
"hoomun1",
"neimun1",
"motmun1",
"yermun1",
"chbmun1",
"blamun1",
"bismun1",
"goufin3",
"tabpar1",
"tousun1",
"retpar3",
"fijpar1",
"reepar2",
"blfpar3",
"fepoli1",
"yebwax2",
"swewax1",
"swewax3",
"grbtwi1",
"ducwin1",
"litsun1",
"abcwin1",
"rfcwin1",
"refant1",
"whbneg2",
"chbneg1",
"gyhneg1",
"rerwax1",
"blcwax1",
"lavwax",
"bltwax1",
"putsun1",
"cinwax1",
"blcwax2",
"kanwax1",
"orcwax",
"anawax1",
"fabwax1",
"comwax",
"bkrwax",
"crrwax1",
"arawax1",
"bargoo",
"roysun1",
"quailf1",
"locust3",
"cutthr1",
"rehfin1",
"zebwax2",
"redava",
"purgre2",
"viewax1",
"bubcor1",
"reccor",
"grbfir1",
"blccor1",
"grablu1",
"wesblu1",
"rehblu1",
"lessee1",
"blbsee1",
"grwpyt1",
"orwpyt1",
"rewpyt1",
"dybtwi1",
"juffir1",
"dustwi1",
"pettwi1",
"rebfir2",
"afffin",
"jamfir1",
"malfir1",
"rocfir1",
"blbfir1",
"babfir1",
"brnfir1",
"gretho1",
"bkffir1",
"vilind",
"purind1",
"bakind1",
"varind1",
"greind1",
"pawind1",
"jopind1",
"camind1",
"pitwhy",
"wictho2",
"stbwhy1",
"sttwhy1",
"rottan2",
"crebun1",
"slabun1",
"corbun1",
"yellow2",
"pinbun",
"rocbun1",
"godbun1",
"bkbtho1",
"meabun1",
"chbbun1",
"gyhbun1",
"cinbun1",
"ortbun1",
"crebun2",
"cirbun1",
"houbun2",
"houbun3",
"lalbun1",
"ratcoq2",
"cibbun1",
"gosbun1",
"capbun1",
"tribun1",
"chebun2",
"litbun",
"yebbun1",
"rusbun",
"yetbun1",
"sombun1",
"tufcoq1",
"gobbun1",
"tibbun1",
"yelbun1",
"grybun",
"palbun",
"ocrbun1",
"reebun",
"garwar1",
"abycat1",
"busbla1",
"doecoq1",
"afhbab1",
"afhbab3",
"barwar1",
"laywar2",
"banwar2",
"ruvwar2",
"smawhi1",
"leswhi4",
"humwhi1",
"brnwar1",
"fricoq1",
"yemwar1",
"reswar1",
"weowar2",
"eaowar2",
"afdwar1",
"asdwar1",
"triwar1",
"menwar1",
"ruewar1",
"cypwar1",
"cacgoo1",
"ruccoq1",
"sarwar1",
"subwar6",
"subwar8",
"easwar1",
"spewar2",
"marwar1",
"darwar1",
"lottit1",
"lottit5",
"blttit2",
"fescoq3",
"whttit1",
"bkbtit3",
"bkbtit4",
"bkbtit6",
"sootit1",
"pygtit1",
"wbtwar1",
"bushti",
"woowar",
"eabwar1",
"blccoq1",
"bubwar1",
"astwar2",
"yebwar3",
"brlwar1",
"chilew1",
"parwar1",
"siclew1",
"ganlew1",
"palwar5",
"tylwar1",
"whccoq1",
"yeswar1",
"radwar1",
"subwar2",
"y00989",
"smowar1",
"duswar",
"pllwar2",
"butwar1",
"wlwwar",
"mouchi2",
"ecupie1",
"caichi1",
"gyhcaf1",
"citcaf1",
"afbfly1",
"wtbfly1",
"wbcfly1",
"wtcfly1",
"fictit1",
"yebtit3",
"sultit1",
"spehum1",
"bkbtit2",
"ruvtit2",
"coatit2",
"yebtit4",
"eletit2",
"paltit2",
"cretit2",
"gyctit1",
"britit",
"oaktit",
"lotsyl1",
"juntit1",
"tuftit",
"blctit4",
"vartit1",
"vartit4",
"vartit2",
"vartit3",
"whbtit4",
"somtit3",
"gyhchi",
"vitsyl1",
"chbchi",
"borchi2",
"mexchi",
"carchi",
"bkcchi",
"mouchi",
"pedtit1",
"bkbtit1",
"martit2",
"sictit1",
"vensyl1",
"castit2",
"afbtit2",
"redwin",
"eurbla",
"yemthr1",
"islthr24",
"gywbla1",
"eurbla2",
"ticthr1",
"blbthr2",
"retcom1",
"japthr1",
"gybthr1",
"eyethr",
"palthr1",
"gysthr1",
"brhthr1",
"izuthr1",
"islthr1",
"tibbla1",
"whbthr2",
"bahgoo",
"brtcom1",
"rinouz1",
"datthr1",
"retthr1",
"dusthr2",
"dusthr1",
"chethr1",
"whcbla1",
"amerob",
"blarob1",
"rucrob1",
"gybcom1",
"soorob1",
"relthr1",
"whcthr1",
"forthr1",
"mourob1",
"paethr1",
"lasthr1",
"chbthr2",
"plbthr2",
"chithr1",
"andhil3",
"slathr2",
"glbthr1",
"blhthr1",
"grethr1",
"trepip",
"olbpip",
"pecpip",
"rospip1",
"retpip",
"amepip",
"whshil1",
"watpip1",
"rocpip1",
"nilpip1",
"uplpip1",
"berpip1",
"strpip1",
"yetpip1",
"shtpip1",
"buspip1",
"sokpip1",
"ecuhil1",
"malpip1",
"yebpip2",
"alppip1",
"sprpip",
"yelpip2",
"yelpip3",
"shbpip1",
"shbpip3",
"chapip1",
"corpip1",
"buthil1",
"ocbpip1",
"helpip1",
"parpip1",
"przros1",
"brobul1",
"rehbul1",
"gyhbul2",
"gyhbul5",
"whcbul1",
"eurbul",
"andhil2",
"eurbul1",
"crwfin2",
"crwfin1",
"trufin2",
"monfin2",
"spefin1",
"gonfin1",
"dabros1",
"plmfin1",
"bhmfin1",
"blbhil1",
"asrfin1",
"gcrfin",
"bkrfin",
"bcrfin",
"comros",
"scafin1",
"strros1",
"greros1",
"blyros1",
"remros1",
"wethil1",
"bearos1",
"chbros1",
"pirros1",
"pibros2",
"darros1",
"spwros2",
"spwros3",
"vinros2",
"vinros3",
"sinros1",
"mouavo1",
"palros3",
"tibros1",
"lotros1",
"palros2",
"thbros1",
"whbros1",
"cwbros1",
"refros1",
"crbfin3",
"mauala",
"empgoo",
"blttra1",
"akikik",
"nihfin",
"palila",
"iiwi",
"crehon",
"apapan",
"akiapo",
"maupar",
"aniani",
"hawcre",
"grttra1",
"akekee",
"akepa1",
"hawama",
"oahama",
"kauama",
"purfin",
"casfin",
"houfin",
"rcgspa1",
"cantow",
"blbtho1",
"whttow1",
"abetow",
"caltow",
"wegspa1",
"pregrs1",
"pregrs2",
"russpa1",
"rucspa",
"oaxspa1",
"gnttow",
"pubtho1",
"spotow",
"eastow",
"coltow1",
"rcbfin1",
"wnbfin1",
"yetfin1",
"yegfin1",
"mobfin1",
"moubru2",
"tebfin1",
"brttho1",
"smbfin1",
"obbfin1",
"yehbrf1",
"dhbfin1",
"wrbfin1",
"whbfin1",
"rebfin1",
"tribrf1",
"trbfin1",
"slbfin2",
"rabtho1",
"pnbfin1",
"yebbrf1",
"yebbru1",
"wwbfin1",
"phbfin1",
"bcbfin1",
"rbbfin1",
"apubrf1",
"cuzbrf1",
"bkfbrf1",
"ructho1",
"rnbfin1",
"fhbfin1",
"ysbfin1",
"wectan1",
"eactan1",
"bcptan1",
"grtwar1",
"whwwar1",
"purtan1",
"wesspi",
"blmtho1",
"hisspi",
"purspi",
"wrenth1",
"yehwar1",
"oriwar1",
"yebcha",
"yehbla",
"boboli",
"wesmea",
"easmea",
"bufhel1",
"rebbla1",
"whbbla2",
"permea1",
"lotmea1",
"pammea1",
"yebcac1",
"yewcac1",
"chhoro1",
"ruboro1",
"dugoro1",
"bubhel1",
"creoro1",
"greoro1",
"olioro1",
"monoro1",
"blaoro1",
"bauoro2",
"sobcac1",
"gowcac1",
"selcac1",
"ecucac1",
"soucas1",
"rosgoo",
"gnbhel1",
"yercac1",
"scrcac1",
"moucac1",
"batoro1",
"casoro2",
"rercac1",
"scoori",
"yebori1",
"audori",
"jamori1",
"tyrmet1",
"oraori1",
"altori",
"yelori1",
"bulori",
"stbori",
"blbori1",
"balori",
"yetori1",
"spbori",
"wheori1",
"permet1",
"camtro1",
"ventro1",
"orbtro3",
"bawori1",
"bkvori",
"hooori",
"bkcori",
"orcori",
"orcori3",
"graori2",
"virmet1",
"graori3",
"marori1",
"graori4",
"graori1",
"orcori1",
"epaori4",
"epaori1",
"jambla1",
"yesbla1",
"tasbla",
"vitmet1",
"tribla",
"rewbla",
"resbla1",
"scrcow1",
"giacow",
"shicow",
"brocow",
"brocow2",
"bnhcow",
"scrbla1",
"nebmet1",
"armbab1",
"bacbab1",
"blabab2",
"dusbab2",
"sopbab1",
"harbab1",
"bklbab1",
"bkfbab1",
"norpib1",
"taihwa1",
"fitmet1",
"lenlau1",
"whclau2",
"blhlau1",
"whnlau1",
"grylau1",
"ruclau3",
"suklau1",
"ruclau1",
"spolau1",
"gialau1",
"scamet1",
"barlau1",
"wynlau1",
"ruvlau1",
"whclau1",
"chhlau1",
"runlau1",
"chblau1",
"whblau1",
"maslau1",
"gnlthr",
"blamet1",
"pedlau1",
"chibub1",
"chibab2",
"giabab1",
"tibbab1",
"whtlau1",
"ruclau2",
"gyslau",
"ruslau1",
"spothr1",
"grepuf1",
"dapthr1",
"gycill1",
"capsug1",
"gursug1",
"phifab1",
"ruckin",
"firecr1",
"firecr3",
"gockin",
"flamec1",
"snogoo",
"butpuf1",
"goldcr1",
"spwbab1",
"yebhyl1",
"souhyl1",
"whhwre1",
"babwre1",
"grbwre1",
"stbwre2",
"faswre1",
"giawre1",
"hoapuf1",
"bicwre1",
"runwre1",
"runwre3",
"runwre4",
"spowre1",
"bouwre1",
"yucwre1",
"thlwre1",
"gymwre1",
"tobwre1",
"blbpuf3",
"rocwre",
"canwre",
"sumwre1",
"navwre1",
"rufwre1",
"shawre1",
"perwre1",
"fulwre1",
"sedwre1",
"merwre1",
"glopuf2",
"apowre1",
"sedwre",
"marwre",
"bewwre",
"zapwre1",
"bltwre1",
"incwre1",
"mouwre2",
"whiwre1",
"corwre1",
"bltpuf1",
"hapwre1",
"spbwre1",
"rubwre1",
"spbwre2",
"banwre1",
"rawwre1",
"antwre2",
"nicwre1",
"sinwre1",
"plawre1",
"cobpuf1",
"plawre3",
"istwre1",
"cibwar1",
"gryemt1",
"bretai1",
"jrswar1",
"afbwar1",
"wwswar1",
"grswar1",
"hirwar2",
"savpuf1",
"bkcdon",
"whtoxy1",
"lobber1",
"crywar1",
"wetjer2",
"thamno2",
"spetet1",
"apptet1",
"dustet1",
"gyctet1",
"gobpuf1",
"yeboxy1",
"ranwar1",
"comjer1",
"grejer1",
"sttjer1",
"refcis1",
"sincis1",
"whicis1",
"tricis1",
"chacis1",
"blcpuf1",
"bubcis1",
"huncis1",
"chucis1",
"bllcis1",
"rolcis2",
"ratcis1",
"borcis1",
"chucis2",
"ashcis1",
"grycis1",
"embpuf1",
"rehcis2",
"waicis1",
"waicis2",
"wincis2",
"wincis3",
"wincis4",
"wincis6",
"wincis5",
"chicis1",
"carcis1",
"gragoo",
"marspa1",
"tincis1",
"stocis1",
"abecis1",
"crocis1",
"dorcis1",
"tincis3",
"sifcis1",
"rufcis1",
"foxcis1",
"pipcis2",
"shisun1",
"tabcis1",
"zitcis1",
"soccis1",
"madcis2",
"descis1",
"clocis1",
"clscis1",
"pepcis1",
"paccis1",
"wiscis1",
"pubsun1",
"gohcis1",
"socwar2",
"strpri2",
"strpri8",
"bropri1",
"brnpri2",
"brnpri3",
"hilpri1",
"hilpri2",
"gycpri1",
"blhsun1",
"rufpri2",
"rufpri1",
"gybpri1",
"grapri1",
"delpri1",
"junpri1",
"bawpri1",
"yebpri1",
"ashpri1",
"tafpri1",
"broinc1",
"plapri1",
"palpri1",
"rivpri1",
"blcpri1",
"karpri1",
"drapri1",
"banpri1",
"banpri3",
"rewpri1",
"refwar2",
"broinc2",
"whcpri2",
"silpri2",
"nampri1",
"robpri1",
"minmib1",
"grnlon1",
"blcapa2",
"ruwapa1",
"crilon1",
"bubwar2",
"blainc1",
"batapa2",
"batapa3",
"batapa4",
"rudapa1",
"yebapa2",
"yebapa1",
"masapa1",
"blfapa1",
"bltapa1",
"whwapa1",
"colinc1",
"blcapa1",
"blhapa1",
"chiapa1",
"chtapa3",
"chaapa1",
"shaapa2",
"butapa1",
"karapa1",
"gosapa1",
"gryapa1",
"vitsta1",
"brhapa1",
"ruewar2",
"oriwar2",
"gycwar3",
"grbcam1",
"gnbcam3",
"yebcam1",
"olgcam1",
"grywrw1",
"miowrw2",
"raista1",
"miowrw3",
"bawwar1",
"kopwar1",
"bkcruw1",
"bkfruw1",
"mrmwar1",
"mrmwar3",
"comtai1",
"dantai1",
"camtai1",
"swagoo1",
"whtsta1",
"phitai1",
"ruftai1",
"gybtai1",
"ruttai1",
"ashtai1",
"olbtai1",
"whetai1",
"whbtai1",
"yebtai1",
"lobtai1",
"dussta1",
"afrtai2",
"whtwar1",
"yebere1",
"salere1",
"yevere1",
"senere1",
"grbere1",
"greere1",
"yerere1",
"bunere1",
"buwsta1",
"rucere1",
"turere1",
"blnere1",
"shtwhy1",
"eapwhy1",
"nopwhy1",
"ltpwhy1",
"btpwhy1",
"oliwar",
"alpacc1",
"gobsta1",
"himacc1",
"robacc1",
"rubacc1",
"sibacc",
"broacc1",
"radacc2",
"bltacc1",
"monacc1",
"dunnoc1",
"japacc1",
"mouvel1",
"mabacc1",
"forwag1",
"eaywag1",
"eaywag",
"citwag",
"capwag1",
"madwag1",
"grywag",
"mouwag1",
"whiwag",
"swbhum1",
"afpwag1",
"mekwag1",
"japwag1",
"whbwag1",
"shalon1",
"abylon1",
"fuelon2",
"ortlon1",
"yetlon1",
"panlon1",
"gresap1",
"rotlon1",
"ricpip1",
"oripip1",
"auspip2",
"auspip3",
"afrpip1",
"moupip1",
"blypip1",
"tawpip1",
"lobpip1",
"butcor1",
"lobpip7",
"woopip1",
"bufpip1",
"plbpip1",
"lolpip1",
"meapip1",
"dausta1",
"chcsta1",
"whssta2",
"chtsta2",
"chbcor1",
"whhsta2",
"malsta1",
"brasta1",
"whfsta2",
"rossta2",
"eursta",
"sposta1",
"watsta1",
"bbgsta1",
"phgsta1",
"vepcor1",
"ctgsta1",
"capgls1",
"gbesta1",
"lbesta1",
"btgsta1",
"spgsta1",
"pugsta1",
"ruegls1",
"ltgsta1",
"gobsta5",
"taibeg1",
"boorat1",
"megsta1",
"bugsta1",
"stgsta1",
"supsta1",
"hilsta1",
"shesta1",
"chbsta1",
"ashsta2",
"fissta1",
"afpsta1",
"boorat2",
"whcsta3",
"madsta1",
"vibsta2",
"rewsta1",
"slbsta1",
"chwsta1",
"walsta1",
"somsta1",
"trista1",
"pawsta1",
"rubrat1",
"brcsta1",
"whbsta1",
"neusta1",
"stusta1",
"kensta1",
"natsta1",
"shasta2",
"abbsta2",
"whcsta2",
"magsta1",
"whthil2",
"babsta1",
"stsrha2",
"stbrha1",
"yeboxp1",
"reboxp1",
"moublu",
"wesblu",
"easblu",
"fifthr1",
"rufthr1",
"whthil3",
"wtathr1",
"rtathr1",
"boucha1",
"brbsol1",
"slcsol1",
"towsol",
"puaioh",
"omao",
"cubsol1",
"rutsol1",
"pubwhi1",
"blfsol1",
"varsol1",
"andsol1",
"sulthr1",
"fruith1",
"purcoc1",
"grecoc1",
"varthr",
"aztthr",
"rubsol1",
"ruvwhi1",
"blasol1",
"whesol1",
"misthr1",
"afrthr1",
"abythr1",
"olithr2",
"kurthr1",
"comthr1",
"abethr1",
"karthr1",
"vebbri1",
"chibla1",
"balwar1",
"fitmyz1",
"rutbab1",
"gobful1",
"yeebab1",
"jerbab1",
"beibab1",
"speful1",
"indful1",
"pitbri1",
"chiful1",
"whbful1",
"ludful1",
"sttful2",
"sttful1",
"taiful1",
"wrenti",
"reepar3",
"blbpar2",
"spbpar1",
"ruwbri1",
"grepar1",
"bropar1",
"thtpar1",
"gyhpar3",
"bkhpar1",
"ruhpar3",
"ruhpar2",
"shtpar3",
"fulpar1",
"bltpar1",
"pifgoo",
"bltbri1",
"golpar2",
"spepar2",
"gyhpar4",
"brwpar2",
"vitpar1",
"astpar1",
"rutpar2",
"whcyuh1",
"chcyuh1",
"stryuh1",
"goujew1",
"indyuh1",
"blcyuh1",
"taiyuh1",
"whiyuh1",
"buryuh1",
"whnyuh1",
"sttyuh1",
"ruvyuh1",
"fltbab1",
"vispyb1",
"fabbri1",
"pygbab1",
"rucbab3",
"bkcbab3",
"pasbab1",
"chfbab1",
"nesbab1",
"giweye1",
"gyhwhe1",
"pyweye1",
"minwhe1",
"grcbri1",
"sthwhe1",
"whbwhe1",
"dacwhe1",
"timwhe1",
"flowhe1",
"ysweye1",
"bonhon1",
"ceywhe1",
"yelwhe1",
"bkcwhe1",
"empbri1",
"wbweye1",
"whbwhe3",
"brrwhe9",
"cfweye1",
"swiwhe1",
"mouble1",
"warwhe1",
"loweye2",
"coweye1",
"reuwhe1",
"vifbri1",
"mauwhe1",
"maswhe2",
"maswhe3",
"afywhe1",
"afywhe3",
"brrwhe8",
"heuwhe2",
"brrwhe3",
"abywhe1",
"anweye1",
"brarub1",
"brrwhe4",
"afywhe2",
"capwhe6",
"capwhe2",
"afywhe4",
"peweye1",
"seywhe1",
"madwhe1",
"kirwhe1",
"maywhe1",
"giahum1",
"bcweye2",
"bkrwhe1",
"bfweye1",
"wtweye1",
"crtwhe2",
"crtwhe1",
"burwhe1",
"serwhe1",
"asbwhe1",
"ayweye3",
"maghum1",
"silver3",
"humwhe1",
"sanwhe2",
"evweye1",
"baweye2",
"sacwhe1",
"capwhe3",
"capwhe8",
"yfweye1",
"vanwhe1",
"maghum2",
"laweye1",
"bhweye1",
"biweye1",
"gytwhe2",
"gytwhe1",
"yapwhe1",
"duweye1",
"koswhe1",
"rotwhe1",
"ytweye1",
"tunbeg1",
"fithum1",
"ngweye1",
"ambwhe1",
"grkwhe1",
"spweye2",
"likwhe1",
"gaweye1",
"soiwhe3",
"brweye1",
"ciweye1",
"loweye1",
"lobsta1",
"kulwhe1",
"llweye1",
"sbweye1",
"slweye1",
"gnbwhe1",
"chcbab1",
"tabbab1",
"dafbab1",
"gyftib1",
"gyctib1",
"plcsta",
"sttbab1",
"bostib1",
"fbtbab1",
"brtbab1",
"golbab1",
"chwbab1",
"chwbab3",
"crcbab1",
"rufbab2",
"blcbab2",
"stbsta1",
"rucbab1",
"bucbab1",
"rtwbab1",
"miswrb1",
"bwwbab1",
"patwrb1",
"ltwbab1",
"chhwrb1",
"tbwbab1",
"gybwrb1",
"bltsta2",
"blalau1",
"bahlau1",
"cobscb1",
"rbsbab1",
"sbsbab1",
"sbsbab3",
"taiscb1",
"wbsbab1",
"insbab1",
"srlscb1",
"wbmgem1",
"chbscb2",
"chbscb1",
"lasbab1",
"rcsbab1",
"spbscb1",
"bksscb1",
"gysscb1",
"sbsbab2",
"bltbab1",
"whbbab2",
"buthum",
"chrbab1",
"gytbab1",
"gyhbab1",
"nonbab1",
"soobab1",
"wbwbab1",
"chbbab1",
"whnbab1",
"whbbab1",
"sntbab1",
"amthum1",
"spnbab1",
"rurgra1",
"chigra1",
"lawbab1",
"mawbab1",
"btwbab1",
"socbab1",
"gybbab1",
"sccbab1",
"rucbab2",
"gtmgem1",
"moubab1",
"palbab1",
"whhbab2",
"colbab1",
"yetful1",
"ruwful1",
"bkcful1",
"gofful2",
"rutful1",
"rucful1",
"gbmgem1",
"dusful1",
"putbab1",
"bncbab1",
"marbab2",
"bkcbab1",
"blcbab1",
"shtbab1",
"ashbab1",
"sptbab1",
"bubbab1",
"gwfgoo",
"ptmgem",
"sumbab1",
"tembab1",
"whcbab1",
"ferbab1",
"sulbab1",
"ruvpri1",
"swapri1",
"broill1",
"pabill1",
"pabill3",
"whtmog2",
"mouill1",
"blaill1",
"scbill1",
"thrbab1",
"puvill1",
"ruwill1",
"stwbab3",
"abbbab1",
"horbab2",
"mowbab1",
"gathum1",
"stwbab1",
"limwrb4",
"limwrb2",
"limwrb3",
"rbwbab1",
"stwbab2",
"bowbab1",
"fawbab1",
"eywbab1",
"lbwbab1",
"amewoo1",
"sumwrb1",
"whtwrb1",
"namscb1",
"stsbab1",
"brcful1",
"bkbful1",
"broful1",
"javful1",
"nepful1",
"gycful3",
"pucwoo1",
"gycful5",
"gycful1",
"gycful4",
"mouful1",
"strlau2",
"cutia1",
"viecut1",
"scalau1",
"brclau1",
"blwlau1",
"oashum1",
"strlau1",
"bhulau1",
"strlau3",
"varlau1",
"blflau1",
"whwlau1",
"prhlau1",
"elllau1",
"retlau1",
"chclau2",
"shtwoo1",
"asslau1",
"rewlau1",
"sielau1",
"mallau1",
"bkclau1",
"bkclau2",
"kerlau2",
"lotsib1",
"whesib1",
"rufsib1",
"pershe2",
"beasib1",
"grysib1",
"blbsib1",
"blhsib1",
"hotbar1",
"taibar1",
"sttbar1",
"strbar1",
"blwmin1",
"chtmin1",
"matwoo1",
"rufbar1",
"spebar1",
"bkcbar1",
"retmin1",
"rubsib1",
"stelio1",
"reflio2",
"reflio3",
"lagbab2",
"ashlau1",
"putwoo1",
"slbbab1",
"rufbab3",
"orbbab1",
"junbab2",
"yebbab1",
"rufcha2",
"scacha1",
"irabab1",
"combab1",
"combab3",
"lwfgoo",
"chiwoo1",
"fulcha1",
"arabab1",
"strbab1",
"whtbab1",
"spibab1",
"capbab1",
"wtmbab1",
"brobab1",
"whrbab2",
"hipbab1",
"sltwoo1",
"scabab2",
"tanfin1",
"ytbtan1",
"shbbut1",
"atbtan1",
"scbtan1",
"cobtan1",
"tabtan1",
"dubtan1",
"tumspa1",
"whbwoo6",
"stcspa2",
"stcspa3",
"ruwspa",
"citspa1",
"sthspa1",
"blcspa1",
"brispa1",
"botspa",
"casspa",
"bacspa",
"litwoo5",
"graspa",
"graspa1",
"yebspa1",
"olispa",
"grbspa1",
"blsspa1",
"tocspa1",
"fisspa",
"bktspa",
"larspa",
"gorwoo2",
"larbun",
"chispa",
"clcspa",
"bkcspa",
"fiespa",
"brespa",
"worspa",
"sthbrf4",
"sthbrf5",
"sthbrf3",
"samwoo2",
"sthbrf1",
"sthbrf8",
"sthbrf2",
"orbspa1",
"blcspa2",
"gowspa1",
"pecspa1",
"safspa1",
"hacspa1",
"sabspa4",
"spthum1",
"sabspa1",
"gsbfin1",
"ccbfin",
"soffin1",
"olifin1",
"foxsp2",
"foxsp3",
"foxsp4",
"foxspa",
"amtspa",
"sleshe1",
"voljun1",
"daejun",
"yeejun",
"yeejun2",
"rucspa1",
"whcspa",
"gocspa",
"harspa",
"whtspa",
"sagspa1",
"mexshe1",
"belspa2",
"strspa1",
"vesspa",
"lecspa",
"seaspa",
"nstspa",
"sstspa",
"baispa",
"henspa",
"savspa",
"luchum",
"simspa1",
"sonspa",
"linspa",
"swaspa",
"laffin1",
"zapspa1",
"whcbul2",
"whsbul1",
"blfbul1",
"combul4",
"cosswa1",
"beahum1",
"combul5",
"combul6",
"capbul1",
"afrmar2",
"sqtsaw1",
"blksaw1",
"whhsaw1",
"banmar1",
"bramar1",
"masmar1",
"bkchum",
"banswa",
"pasmar1",
"plamar1",
"gytmar1",
"treswa",
"vigswa",
"whrswa1",
"chiswa1",
"tumswa1",
"manswa1",
"rthhum",
"whwswa1",
"whbswa2",
"blcswa1",
"whtswa1",
"bawswa1",
"blcswa2",
"tahswa2",
"pafswa1",
"brbswa1",
"andswa2",
"verhum1",
"nrwswa",
"srwswa1",
"brcmar1",
"permar1",
"purmar",
"soumar",
"gybmar",
"cubmar",
"carmar1",
"gyrswa1",
"beehum1",
"whbswa3",
"eurcrm1",
"rocmar1",
"rocmar2",
"duscrm1",
"barswa1",
"piwswa1",
"pebswa1",
"pacswa1",
"pacswa3",
"annhum",
"welswa1",
"whtswa3",
"witswa1",
"wtbswa1",
"barswa",
"angswa1",
"recswa1",
"ethswa1",
"cohmar1",
"comhom2",
"coshum",
"nephom1",
"ashmar1",
"rucswa2",
"mosswa2",
"lessts1",
"grests1",
"rerswa1",
"srlswa1",
"strswa2",
"rubswa1",
"calhum",
"cliswa",
"cavswa",
"chcswa2",
"preswa2",
"retswa2",
"soaswa2",
"sttswa2",
"faimar2",
"tremar2",
"chicup1",
"rufhum",
"taiwrb1",
"immwrb1",
"pywbab1",
"mogwar1",
"capgra1",
"damroc1",
"yellon1",
"kemlon1",
"grylon1",
"pullon1",
"allhum",
"krelon1",
"norcro1",
"refcro1",
"capcro1",
"reccro1",
"grecro1",
"lebcro1",
"whbcro2",
"viswar1",
"yebwar1",
"dwacas1",
"blkswa",
"brthum",
"rufwar1",
"blfwar1",
"moutai2",
"ruhtai2",
"brbwar1",
"phbwar1",
"jabwar",
"manbuw1",
"pabwar1",
"tabwar1",
"bumhum",
"shawar1",
"odedi1",
"fibwar1",
"bfbwar1",
"yebbuw2",
"ybbwar1",
"subwar4",
"abbwar1",
"gybtes1",
"ructes1",
"withum1",
"javtes1",
"cetwar1",
"ccbwar1",
"gysbuw1",
"chhtes1",
"asistu1",
"borstu1",
"timstu1",
"neuwar1",
"stswar1",
"scihum1",
"yelfly2",
"chcfly1",
"livfly1",
"grehyl1",
"tithyl1",
"atlfly1",
"rucfly3",
"barfly1",
"rucfly1",
"furfly1",
"dushum1",
"palfly1",
"sumfly1",
"blbfly2",
"lisfly1",
"rutfly7",
"bunfly1",
"lomfly1",
"damfly1",
"alsred1",
"rubred2",
"cubeme1",
"bucred1",
"blared1",
"comred2",
"hodred1",
"whtred1",
"daured1",
"moured1",
"whwred2",
"blfred1",
"plured1",
"pureme1",
"whcred1",
"wwccha1",
"carthr1",
"serthr1",
"strthr1",
"mirthr1",
"rtrthr1",
"lirthr1",
"burthr",
"cbrthr1",
"blhhum1",
"bcrthr1",
"wtrthr1",
"litrot1",
"forrot2",
"whinch1",
"whbbus4",
"whtbus1",
"caisto1",
"stonec4",
"sibsto1",
"brbhum",
"stonec7",
"afrsto1",
"stonec6",
"reusto1",
"whtsto2",
"piebus1",
"jerbus1",
"grybus1",
"timbus1",
"busbus1",
"brbhum2",
"siccha1",
"karcha1",
"moocha1",
"moccha1",
"soocha1",
"noacha1",
"soacha1",
"ruecha1",
"mouwhe1",
"whbcha2",
"blnswa2",
"goceme1",
"ruacha1",
"norwhe",
"norwhe3",
"capwhe1",
"rebwhe2",
"heuwhe1",
"isawhe1",
"hoowhe1",
"deswhe1",
"bkewhe1",
"caneme1",
"bkewhe2",
"cypwhe1",
"piewhe1",
"wfbcha1",
"rerwhe1",
"blacks1",
"famcha1",
"brtcha1",
"somcha1",
"indcha1",
"weseme1",
"varwhe1",
"blawhe1",
"mouwhe4",
"whtwhe1",
"humwhe2",
"finwhe1",
"mouwhe6",
"mouwhe2",
"mouwhe7",
"mouwhe5",
"rebeme1",
"retwhe3",
"retwhe2",
"hercha1",
"grcfly3",
"whtdip1",
"brodip1",
"amedip",
"whcdip1",
"rutdip1",
"philea1",
"blteme1",
"yetlea1",
"leglea1",
"blwlea1",
"borlea1",
"jerlea1",
"goflea1",
"orblea1",
"orblea3",
"olbflo1",
"yebflo2",
"chieme1",
"crbflo1",
"palflo1",
"yerflo1",
"scbflo2",
"speflo1",
"gorflo1",
"thbflo1",
"thbflo3",
"whiflo1",
"yevflo1",
"glbeme1",
"yebflo1",
"whtflo1",
"yesflo1",
"olcflo1",
"bicflo1",
"resflo1",
"rekflo1",
"sccflo1",
"cebflo1",
"orbflo1",
"shteme1",
"whbflo1",
"pabflo1",
"plaflo1",
"plaflo2",
"andflo1",
"pygflo1",
"crcflo1",
"flbflo2",
"flbflo3",
"ashflo1",
"whehum",
"olcflo2",
"recflo1",
"louflo1",
"rebflo1",
"midflo1",
"motflo1",
"blfflo1",
"recflo2",
"mistle1",
"gysflo1",
"xanhum",
"blsflo1",
"fibflo2",
"blbflo1",
"scbflo1",
"schflo1",
"rucsun2",
"sctsun2",
"gyhsun1",
"plbsun1",
"ancsun1",
"mutswa",
"wetsab1",
"plasun1",
"pltsun2",
"retsun3",
"mobsun1",
"wvbsun1",
"kvbsun1",
"ligsun2",
"gresun1",
"grnsun2",
"bansun1",
"lotsab1",
"colsun2",
"pygsun2",
"nivsun2",
"amasun2",
"reisun2",
"orbsun2",
"gnhsun1",
"btbsun2",
"camsun2",
"buhsun1",
"rufsab1",
"eaosun1",
"mocsun2",
"butsun2",
"carsun2",
"gntsun1",
"amesun2",
"sccsun2",
"hunsun2",
"socsun2",
"pursun3",
"emchum1",
"crbsun2",
"putsun3",
"vahsun1",
"blksun1",
"cotsun2",
"bocsun2",
"tacsun1",
"brosun1",
"malsun1",
"retsun2",
"vihhum1",
"gowsun2",
"olbsun3",
"tinsun2",
"miosun3",
"miosun2",
"sdcsun3",
"neesun2",
"stusun1",
"mdcsun3",
"ndcsun2",
"anchum1",
"gdcsun2",
"regsun2",
"edcsun3",
"edcsun4",
"morsun2",
"lovsun3",
"beasun2",
"marsun2",
"shesun2",
"recsun2",
"samblo1",
"bkbsun1",
"pubsun4",
"tsasun1",
"pemsun2",
"ortsun3",
"palsun2",
"shisun3",
"splsun2",
"johsun2",
"supsun2",
"tolblo1",
"ruwsun2",
"oussun2",
"whbsun2",
"varsun2",
"dussun2",
"urssun2",
"batsun2",
"copsun2",
"pursun4",
"yebdac1",
"plover3",
"turdac1",
"blfdac1",
"bkfdac1",
"blldac1",
"whbdac1",
"mccfin1",
"bltsal1",
"orisal1",
"grwsal1",
"grasal2",
"plover4",
"grasal4",
"grasal3",
"strsal1",
"butsal1",
"blwsal1",
"blhsal1",
"blcsal1",
"bltgro2",
"slcgro1",
"massal1",
"truswa",
"gybsab1",
"thbsal1",
"gobsal1",
"banana",
"yefgra1",
"orange1",
"purbul1",
"grabul1",
"cubbul1",
"yesgra1",
"barbul1",
"gybsab4",
"leabul1",
"bkfgra",
"soogra2",
"ducgra2",
"warfin1",
"grywaf1",
"vegfin2",
"metfin1",
"woofin1",
"smtfin1",
"gybsab5",
"latfin1",
"smgfin1",
"larcaf2",
"lagfin1",
"cocfin3",
"megfin1",
"blbgra1",
"bawtan1",
"cobtan2",
"ructan4",
"rubsab1",
"grhtan1",
"blgtan1",
"flctan1",
"whstan1",
"yectan1",
"pilfin1",
"recfin1",
"fuctan1",
"tactan1",
"whltan1",
"whtsab1",
"restan1",
"ructan1",
"crbfin1",
"wwstan1",
"fustan1",
"btstan1",
"wtstan1",
"crctan1",
"flrtan1",
"flrtan3",
"lazsab1",
"y00599",
"bratan1",
"crbtan1",
"mactan1",
"bkbtan1",
"lessee2",
"linsee1",
"whcsee2",
"whcsee1",
"varsee3",
"viosab1",
"grysee1",
"wibsee1",
"whnsee1",
"caqsee1",
"bawsee1",
"docsee1",
"yebsee1",
"dubsee1",
"tbsfin1",
"cbsfin",
"bubsab1",
"nisfin1",
"gbsfin1",
"lbsfin1",
"bbsfin1",
"slcsee1",
"temsee1",
"bufsee1",
"plusee1",
"trosee1",
"rucsee1",
"napsab1",
"whtsee1",
"whbsee1",
"pabsee1",
"chtsee1",
"chbsee1",
"rubsee1",
"capsee1",
"batsee2",
"tabsee1",
"datsee1",
"brtplu1",
"pebsee1",
"rursee1",
"chesee1",
"marsee1",
"blbsee2",
"cinfin1",
"slbfin3",
"gyhbut1",
"blhhem1",
"bowfin1",
"tunswa",
"whvplu1",
"barwaf1",
"barwaf2",
"cowfin1",
"tumfin1",
"comfin1",
"rubhem1",
"gychem1",
"blchem1",
"bkchem2",
"orbhem1",
"crowoo1",
"olehem1",
"blehem1",
"bkehem3",
"bkehem1",
"fuhtan1",
"bubtan1",
"orhtan1",
"chhtan1",
"raytan1",
"suphem1",
"fotwoo1",
"ructan3",
"brftan1",
"bcwfin1",
"ltrfin1",
"whrtan1",
"rswfin1",
"cbmfin1",
"bbbtan1",
"pardus2",
"rrwfin1",
"lotwoo2",
"gytwaf1",
"rbwfin2",
"ptwfin1",
"riwfin1",
"thshem1",
"bcwfin2",
"ciwfin1",
"pebcon1",
"biccon1",
"chvcon1",
"vicwoo2",
"whecon1",
"capcon1",
"giacon1",
"blbcon1",
"whbcon1",
"tamcon1",
"rubcon1",
"cincon1",
"styfin1",
"sutfin1",
"snowca1",
"bryfin1",
"saffin",
"ofyfin1",
"gryfin1",
"chyfin1",
"payfin1",
"gryfin3",
"monyef1",
"gryfin2",
"rayfin1",
"whteme1",
"puyfin1",
"gyhsif1",
"pasfin1",
"bhsfin1",
"pesfin1",
"wilfin3",
"nigfin3",
"cawfin1",
"yebfin1",
"absfin1",
"mexwoo1",
"plsfin1",
"unifin1",
"slafin1",
"pebfin1",
"tildac1",
"rbsfin1",
"wtsfin1",
"wwdfin1",
"shtfin1",
"batsee1",
"stthum1",
"plcsee1",
"parsee1",
"debflo1",
"bluflo1",
"masflo1",
"indflo1",
"rusflo1",
"slaflo1",
"cibflo1",
"mouflo1",
"blbhum1",
"gloflo1",
"chbflo1",
"greflo1",
"whsflo1",
"gybflo1",
"bktflo1",
"merflo1",
"blkflo1",
"vertan1",
"pumtan2",
"whoswa",
"scbhum1",
"yettan1",
"goctan3",
"yestan1",
"goctan4",
"fabtan1",
"baytan3",
"rubsal1",
"bbmtan1",
"cbmtan1",
"homtan1",
"bufhum1",
"blctan2",
"mamtan1",
"grgtan1",
"bcmtan1",
"gbmtan1",
"bwmtan1",
"bcmtan2",
"bkcmot1",
"sbmtan1",
"lamtan1",
"tumhum1",
"glgtan1",
"multan1",
"oretan1",
"orttan1",
"ygbtan1",
"gortan1",
"mobtan1",
"goctan1",
"eurnut1",
"rebcho1",
"spthum2",
"yebcho1",
"piapia1",
"eurjac",
"daujac1",
"houcro1",
"neccro1",
"bancro1",
"slbcro1",
"slbcro4",
"slbcro3",
"mashum1",
"pipcro1",
"flocro1",
"marcro1",
"lobcro1",
"guacro1",
"boucro1",
"brhcro1",
"grycro1",
"capcro2",
"rook1",
"swthum1",
"amecro",
"tamcro",
"sincro1",
"fiscro",
"palcro2",
"cupcro1",
"jamcro1",
"cubcro1",
"whncro1",
"carcro1",
"somhum1",
"hoocro1",
"colcro1",
"labcro1",
"labcro4",
"labcro3",
"torcro2",
"torcro3",
"litcro1",
"forrav1",
"litrav1",
"olshum1",
"ausrav1",
"brnrav1",
"somcro2",
"comrav",
"chirav",
"fatrav1",
"whnrav1",
"thbrav1",
"whwcho1",
"apostl1",
"stream2",
"lesmel1",
"gremel1",
"bucifr1",
"parcro1",
"paradi3",
"glmman2",
"crcman2",
"cucman1",
"truman1",
"lotpar1",
"stream3",
"splast1",
"ritast1",
"prsast1",
"huoast1",
"wespar1",
"carpar1",
"lawpar1",
"wahpar1",
"kospar1",
"grsbop1",
"freduc1",
"vichum",
"vosbop1",
"parrif1",
"vicrif1",
"magrif3",
"magrif2",
"blasic1",
"brosic1",
"blbsic1",
"pabsic1",
"mbopar2",
"grfhum1",
"wbopar1",
"kbopar1",
"walsta2",
"twwbop1",
"gbopar2",
"rbopar1",
"lbopar1",
"rbopar2",
"ebopar1",
"bbopar1",
"azchum1",
"whfrob1",
"payrob1",
"whbrob1",
"yelrob1",
"gybrob1",
"olyrob1",
"hoorob1",
"dusrob1",
"whwrob2",
"smorob2",
"buvhum1",
"bugrob1",
"whrrob2",
"manrob1",
"blcrob1",
"blsrob2",
"whbrob2",
"bltrob1",
"gyhrob1",
"ashrob2",
"gyhrob2",
"berhum",
"papscr2",
"nosrob1",
"sosrob1",
"lebfly3",
"gobfly2",
"jacwin1",
"torfly1",
"yebrob1",
"yelfly4",
"olifly3",
"blthum1",
"canfly2",
"garrob1",
"rosrob1",
"pinrob1",
"snmrob1",
"alprob1",
"flarob1",
"pacrob3",
"pacrob1",
"pacrob2",
"snbhum1",
"scarob2",
"recrob1",
"tomtit1",
"nezrob2",
"nezrob3",
"charob1",
"grbrob1",
"grgrob1",
"legrob1",
"whnroc1",
"stvhum2",
"rufroc1",
"orbroc1",
"marbab1",
"bohwax",
"japwax1",
"cedwax",
"bayfly1",
"grsfly1",
"ltsfly1",
"phaino",
"inchum1",
"hypoco1",
"palmch1",
"olfwhi1",
"yebfan1",
"faifly1",
"eurgre1",
"origre",
"origre6",
"yebgre4",
"viegre2",
"chbhum1",
"bkhgre1",
"desfin2",
"gowgro2",
"orifin1",
"afrcit1",
"wescit1",
"soucit1",
"blfcan1",
"papcan1",
"forcan1",
"flystd1",
"grbhum1",
"whrsee",
"bltcan1",
"yerser1",
"reisee2",
"olrser1",
"yetser1",
"salser1",
"yefcan",
"whbcan1",
"ankser2",
"corhum1",
"yemser1",
"capsis2",
"drasis2",
"norgrc1",
"sougrc1",
"yelcan1",
"brican1",
"sthsee2",
"sthsee3",
"blesee1",
"cinhum1",
"brrsee1",
"whtcan1",
"thbsee1",
"strsee1",
"tansee1",
"procan1",
"twite1",
"eurlin1",
"yemlin1",
"comred",
"bubhum",
"lesred1",
"hoared",
"parcro2",
"scocro1",
"reblei",
"melthr",
"rtlhum",
"honeme1",
"manhum1",
"amahum1",
"andeme1",
"shghum1",
"flistd1",
"gotsap1",
"vereme1",
"sathum1",
"sabhum1",
"humsap2",
"blhsap1",
"whceme1",
"plbeme1",
"whthum2",
"glteme1",
"falstd1",
"saseme1",
"rutsap1",
"gilhum1",
"whbhum1",
"gawhum1",
"blchum1",
"chahum1",
"puchum1",
"whbeme1",
"bltgol1"
] |
Augusto777/vit-base-patch16-224-RX2-12
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-RX2-12
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7887
- Accuracy: 0.7391
## Model description
More information needed
## Intended uses & limitations
More information needed
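The card leaves usage unspecified, so here is a minimal inference sketch, assuming the standard `transformers` image-classification API; the image path is a placeholder and the printed label is one of the four classes listed at the bottom of this card.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "Augusto777/vit-base-patch16-224-RX2-12"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("example.jpg")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```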
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` equivalent is sketched after the list):
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 12
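A hedged `TrainingArguments` sketch matching the values above; the optimizer settings are the `transformers` defaults (which match the Adam values listed), `output_dir` is a placeholder, and nothing else about the original run is known:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="vit-base-patch16-224-RX2-12",  # placeholder
    learning_rate=5.5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # 32 x 4 = total train batch of 128
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    num_train_epochs=12,
)
```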
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3604 | 0.94 | 11 | 1.2834 | 0.4783 |
| 1.2312 | 1.96 | 23 | 1.1356 | 0.6522 |
| 1.0933 | 2.98 | 35 | 1.0386 | 0.6739 |
| 0.936 | 4.0 | 47 | 0.9049 | 0.6739 |
| 0.8011 | 4.94 | 58 | 0.9847 | 0.6087 |
| 0.616 | 5.96 | 70 | 0.9236 | 0.6304 |
| 0.5251 | 6.98 | 82 | 0.8640 | 0.6522 |
| 0.4618 | 8.0 | 94 | 0.8612 | 0.7174 |
| 0.3974 | 8.94 | 105 | 0.8461 | 0.6522 |
| 0.3532 | 9.96 | 117 | 0.7887 | 0.7391 |
| 0.335 | 10.98 | 129 | 0.7995 | 0.7174 |
| 0.3211 | 11.23 | 132 | 0.8058 | 0.7174 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
Augusto777/vit-base-patch16-224-ve-U10-12
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-ve-U10-12
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6632
- Accuracy: 0.7843
## Model description
More information needed
## Intended uses & limitations
More information needed
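In the absence of author guidance, a one-liner sketch via the standard `transformers` pipeline API (the image path is hypothetical):

```python
from transformers import pipeline

clf = pipeline("image-classification", model="Augusto777/vit-base-patch16-224-ve-U10-12")
print(clf("example.jpg"))  # hypothetical path; scored predictions over the four classes
```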
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3629 | 0.95 | 15 | 1.2289 | 0.4706 |
| 1.1038 | 1.97 | 31 | 1.0413 | 0.5882 |
| 0.9375 | 2.98 | 47 | 0.8989 | 0.5882 |
| 0.6917 | 4.0 | 63 | 0.8520 | 0.7059 |
| 0.5862 | 4.95 | 78 | 0.6827 | 0.7255 |
| 0.4042 | 5.97 | 94 | 0.7281 | 0.7255 |
| 0.2987 | 6.98 | 110 | 0.7262 | 0.7647 |
| 0.2571 | 8.0 | 126 | 0.7604 | 0.7255 |
| 0.2326 | 8.95 | 141 | 0.6632 | 0.7843 |
| 0.1994 | 9.97 | 157 | 0.6744 | 0.7451 |
| 0.1968 | 10.98 | 173 | 0.6864 | 0.7451 |
| 0.1847 | 11.43 | 180 | 0.6647 | 0.7451 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
Augusto777/vit-base-patch16-224-ve-U10-24
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-ve-U10-24
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7874
- Accuracy: 0.7647
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 24
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3749 | 0.95 | 15 | 1.2899 | 0.4706 |
| 1.1573 | 1.97 | 31 | 1.0844 | 0.5686 |
| 0.985 | 2.98 | 47 | 0.9140 | 0.6078 |
| 0.7024 | 4.0 | 63 | 0.8578 | 0.6863 |
| 0.5699 | 4.95 | 78 | 0.6802 | 0.7451 |
| 0.3784 | 5.97 | 94 | 0.8856 | 0.7059 |
| 0.2631 | 6.98 | 110 | 0.7526 | 0.7451 |
| 0.2201 | 8.0 | 126 | 0.7924 | 0.7255 |
| 0.1933 | 8.95 | 141 | 0.7874 | 0.7647 |
| 0.1592 | 9.97 | 157 | 0.9583 | 0.6863 |
| 0.154 | 10.98 | 173 | 0.9961 | 0.7059 |
| 0.1531 | 12.0 | 189 | 0.8916 | 0.7451 |
| 0.1153 | 12.95 | 204 | 0.9174 | 0.7451 |
| 0.1154 | 13.97 | 220 | 1.0267 | 0.7059 |
| 0.0922 | 14.98 | 236 | 0.9766 | 0.7255 |
| 0.0901 | 16.0 | 252 | 1.0410 | 0.7255 |
| 0.074 | 16.95 | 267 | 1.1869 | 0.6863 |
| 0.0743 | 17.97 | 283 | 1.1094 | 0.7255 |
| 0.084 | 18.98 | 299 | 1.0520 | 0.7255 |
| 0.0713 | 20.0 | 315 | 1.1213 | 0.7059 |
| 0.061 | 20.95 | 330 | 1.0927 | 0.7451 |
| 0.0669 | 21.97 | 346 | 1.0806 | 0.7255 |
| 0.0654 | 22.86 | 360 | 1.0647 | 0.7255 |
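Validation loss bottoms out near epoch 9 (0.7874 / 0.7647 accuracy, the figures reported at the top) and drifts upward afterwards. The card does not say how the reported checkpoint was chosen; a hedged sketch of the usual guard, via the standard `EarlyStoppingCallback`:

```python
from transformers import EarlyStoppingCallback

# Stop training once the monitored eval metric has not improved for 3
# evaluations; assumes a Trainer configured with evaluation_strategy="epoch"
# and load_best_model_at_end=True. Whether the original run did this is unknown.
early_stop = EarlyStoppingCallback(early_stopping_patience=3)
# trainer = Trainer(model=model, args=args, ..., callbacks=[early_stop])
```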
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
talli96123/meat_calssify_fresh_crop_fixed_V_0_1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# meat_calssify_fresh_crop_fixed_V_0_1
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8065
- Accuracy: 0.7405
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
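The card only names "the imagefolder dataset"; a hedged sketch of how such a dataset is typically loaded, assuming a hypothetical directory with one subfolder per class:

```python
from datasets import load_dataset

# Hypothetical layout: data/train/fresh1/*.jpg, data/train/fresh2/*.jpg, ...
dataset = load_dataset("imagefolder", data_dir="data")
print(dataset["train"].features["label"].names)  # ["fresh1", "fresh2", "fresh3"]
```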
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0665 | 1.0 | 630 | 1.0532 | 0.4051 |
| 1.0373 | 2.0 | 1260 | 1.0018 | 0.5063 |
| 1.1547 | 3.0 | 1890 | 1.0149 | 0.4557 |
| 1.2332 | 4.0 | 2520 | 1.0203 | 0.4937 |
| 1.2068 | 5.0 | 3150 | 1.8757 | 0.4114 |
| 1.2576 | 6.0 | 3780 | 1.1679 | 0.5570 |
| 1.2601 | 7.0 | 4410 | 1.0406 | 0.5886 |
| 1.0981 | 8.0 | 5040 | 1.1907 | 0.5949 |
| 1.0283 | 9.0 | 5670 | 1.1140 | 0.6962 |
| 1.0173 | 10.0 | 6300 | 1.3832 | 0.6582 |
| 1.0185 | 11.0 | 6930 | 1.0796 | 0.6646 |
| 0.8782 | 12.0 | 7560 | 1.4764 | 0.6582 |
| 0.8761 | 13.0 | 8190 | 1.5654 | 0.6076 |
| 0.8258 | 14.0 | 8820 | 1.4798 | 0.6456 |
| 0.8745 | 15.0 | 9450 | 2.2686 | 0.5380 |
| 0.6339 | 16.0 | 10080 | 1.5005 | 0.6835 |
| 0.6122 | 17.0 | 10710 | 1.7847 | 0.6709 |
| 0.5812 | 18.0 | 11340 | 1.6494 | 0.6962 |
| 0.54 | 19.0 | 11970 | 1.6806 | 0.6899 |
| 0.4075 | 20.0 | 12600 | 2.0121 | 0.6709 |
| 0.4847 | 21.0 | 13230 | 1.5560 | 0.7089 |
| 0.4862 | 22.0 | 13860 | 1.5739 | 0.7215 |
| 0.5239 | 23.0 | 14490 | 1.5406 | 0.6835 |
| 0.4293 | 24.0 | 15120 | 1.7675 | 0.6962 |
| 0.4592 | 25.0 | 15750 | 1.7242 | 0.7215 |
| 0.366 | 26.0 | 16380 | 1.4541 | 0.7468 |
| 0.3754 | 27.0 | 17010 | 1.8378 | 0.7089 |
| 0.356 | 28.0 | 17640 | 1.6920 | 0.7152 |
| 0.3689 | 29.0 | 18270 | 1.7062 | 0.7152 |
| 0.3305 | 30.0 | 18900 | 1.6425 | 0.7025 |
| 0.2995 | 31.0 | 19530 | 1.9092 | 0.6835 |
| 0.3692 | 32.0 | 20160 | 1.5893 | 0.7405 |
| 0.2386 | 33.0 | 20790 | 1.5449 | 0.7532 |
| 0.2427 | 34.0 | 21420 | 1.7221 | 0.7532 |
| 0.2645 | 35.0 | 22050 | 1.3437 | 0.7975 |
| 0.2673 | 36.0 | 22680 | 1.7081 | 0.7025 |
| 0.1925 | 37.0 | 23310 | 1.4535 | 0.7911 |
| 0.1304 | 38.0 | 23940 | 1.9245 | 0.7152 |
| 0.1797 | 39.0 | 24570 | 1.6966 | 0.7468 |
| 0.2092 | 40.0 | 25200 | 1.3813 | 0.7911 |
| 0.161 | 41.0 | 25830 | 1.4498 | 0.7975 |
| 0.1872 | 42.0 | 26460 | 1.6249 | 0.7595 |
| 0.1308 | 43.0 | 27090 | 1.4823 | 0.7785 |
| 0.1939 | 44.0 | 27720 | 1.4895 | 0.7848 |
| 0.1561 | 45.0 | 28350 | 1.6162 | 0.7532 |
| 0.0929 | 46.0 | 28980 | 1.4918 | 0.7911 |
| 0.0911 | 47.0 | 29610 | 1.5509 | 0.7722 |
| 0.0945 | 48.0 | 30240 | 1.4868 | 0.7848 |
| 0.0985 | 49.0 | 30870 | 1.6767 | 0.7595 |
| 0.1023 | 50.0 | 31500 | 1.8065 | 0.7405 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0
- Datasets 2.19.2
- Tokenizers 0.19.1
|
[
"fresh1",
"fresh2",
"fresh3"
] |
talli96123/meat_calssify_fresh_crop_fixed_V_0_1_best
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
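Until the author fills this in, a minimal sketch, assuming this `_best` checkpoint exposes the same image-classification head as its sibling card (the labels at the bottom suggest so); the image path is a placeholder:

```python
from transformers import pipeline

clf = pipeline(
    "image-classification",
    model="talli96123/meat_calssify_fresh_crop_fixed_V_0_1_best",
)
print(clf("meat_sample.jpg"))  # hypothetical image path
```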
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"fresh1",
"fresh2",
"fresh3"
] |
habibi26/clip-vit-base-patch32-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clip-vit-base-patch32-finetuned-eurosat
This model is a fine-tuned version of [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0987
- Accuracy: 0.9716
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.4295 | 0.9979 | 351 | 0.2629 | 0.915 |
| 0.4167 | 1.9986 | 703 | 0.2365 | 0.9222 |
| 0.4104 | 2.9993 | 1055 | 0.2205 | 0.9252 |
| 0.3847 | 4.0 | 1407 | 0.1917 | 0.9338 |
| 0.3928 | 4.9979 | 1758 | 0.1803 | 0.9414 |
| 0.311 | 5.9986 | 2110 | 0.1429 | 0.9524 |
| 0.2614 | 6.9993 | 2462 | 0.1137 | 0.961 |
| 0.2579 | 8.0 | 2814 | 0.1102 | 0.9638 |
| 0.1993 | 8.9979 | 3165 | 0.1037 | 0.9688 |
| 0.1921 | 9.9787 | 3510 | 0.0987 | 0.9716 |
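The accuracy column above is normally produced by a `compute_metrics` hook handed to the `Trainer`; the original hook is not shown, but a minimal sketch with the `evaluate` library looks like this:

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```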
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
[
"airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck"
] |
hchcsuim/batch-size16_FFPP-c23_opencv-1FPS_unaugmentation
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# batch-size16_FFPP-c23_opencv-1FPS_unaugmentation
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2893
- Accuracy: 0.8694
- Precision: 0.8681
- Recall: 0.9825
- F1: 0.9217
- ROC AUC: 0.9282
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | ROC AUC |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.3718 | 1.0 | 1381 | 0.2893 | 0.8694 | 0.8681 | 0.9825 | 0.9217 | 0.9282 |
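Reporting precision, recall, F1, and ROC AUC alongside accuracy implies a richer metrics hook than the default; a hedged scikit-learn sketch for the binary fake/real case (not the author's confirmed code):

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_recall_fscore_support,
                             roc_auc_score)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="binary"
    )
    # ROC AUC needs a score for the positive class, not a hard prediction
    shifted = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs = shifted / shifted.sum(axis=-1, keepdims=True)
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
        "roc_auc": roc_auc_score(labels, probs[:, 1]),
    }
```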
### Framework versions
- Transformers 4.39.2
- Pytorch 2.3.0
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"fake",
"real"
] |
talli96123/meat_calssify_fresh_crop_fixed_overlap_V_0_1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# meat_calssify_fresh_crop_fixed_overlap_V_0_1
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5094
- Accuracy: 0.9198
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
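No train/eval split is documented; a hedged sketch of the common way an imagefolder dataset gets one (the 80/20 ratio and path are assumptions):

```python
from datasets import load_dataset

dataset = load_dataset("imagefolder", data_dir="data")["train"]  # hypothetical path
splits = dataset.train_test_split(test_size=0.2, seed=42)
train_ds, eval_ds = splits["train"], splits["test"]
```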
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0396 | 1.0 | 1293 | 0.9865 | 0.5123 |
| 1.1023 | 2.0 | 2586 | 1.0537 | 0.5525 |
| 1.1657 | 3.0 | 3879 | 1.4125 | 0.6080 |
| 1.1606 | 4.0 | 5172 | 1.0966 | 0.4568 |
| 1.233 | 5.0 | 6465 | 1.0640 | 0.6481 |
| 1.1345 | 6.0 | 7758 | 1.2839 | 0.6451 |
| 1.1208 | 7.0 | 9051 | 1.4499 | 0.6451 |
| 1.0212 | 8.0 | 10344 | 1.1759 | 0.7068 |
| 0.891 | 9.0 | 11637 | 1.0590 | 0.7130 |
| 0.8541 | 10.0 | 12930 | 1.0337 | 0.7253 |
| 0.7985 | 11.0 | 14223 | 0.8852 | 0.7778 |
| 0.7569 | 12.0 | 15516 | 0.9469 | 0.7778 |
| 0.6847 | 13.0 | 16809 | 1.1415 | 0.7407 |
| 0.6794 | 14.0 | 18102 | 0.8935 | 0.8210 |
| 0.6455 | 15.0 | 19395 | 0.9556 | 0.7809 |
| 0.5708 | 16.0 | 20688 | 1.0258 | 0.8056 |
| 0.4988 | 17.0 | 21981 | 1.4170 | 0.7654 |
| 0.477 | 18.0 | 23274 | 0.9100 | 0.8179 |
| 0.4559 | 19.0 | 24567 | 1.0474 | 0.7994 |
| 0.4284 | 20.0 | 25860 | 0.8757 | 0.8488 |
| 0.3892 | 21.0 | 27153 | 0.9961 | 0.8241 |
| 0.419 | 22.0 | 28446 | 0.9303 | 0.8333 |
| 0.3244 | 23.0 | 29739 | 1.0301 | 0.8179 |
| 0.4107 | 24.0 | 31032 | 0.7903 | 0.8488 |
| 0.3853 | 25.0 | 32325 | 0.5818 | 0.9012 |
| 0.294 | 26.0 | 33618 | 0.9773 | 0.8426 |
| 0.2826 | 27.0 | 34911 | 0.7444 | 0.8735 |
| 0.2218 | 28.0 | 36204 | 1.0961 | 0.8426 |
| 0.3422 | 29.0 | 37497 | 0.9692 | 0.8395 |
| 0.2809 | 30.0 | 38790 | 0.8668 | 0.8673 |
| 0.2618 | 31.0 | 40083 | 0.7958 | 0.8704 |
| 0.2702 | 32.0 | 41376 | 0.6700 | 0.8796 |
| 0.253 | 33.0 | 42669 | 1.1036 | 0.8148 |
| 0.2161 | 34.0 | 43962 | 0.5197 | 0.9198 |
| 0.1727 | 35.0 | 45255 | 0.6996 | 0.8981 |
| 0.2117 | 36.0 | 46548 | 1.3509 | 0.8056 |
| 0.1967 | 37.0 | 47841 | 0.5835 | 0.9105 |
| 0.1885 | 38.0 | 49134 | 0.7260 | 0.8673 |
| 0.1445 | 39.0 | 50427 | 0.7016 | 0.8735 |
| 0.1216 | 40.0 | 51720 | 0.7880 | 0.8858 |
| 0.1552 | 41.0 | 53013 | 0.7237 | 0.8765 |
| 0.0992 | 42.0 | 54306 | 0.7155 | 0.9043 |
| 0.1047 | 43.0 | 55599 | 0.5785 | 0.9167 |
| 0.1119 | 44.0 | 56892 | 0.4751 | 0.9228 |
| 0.128 | 45.0 | 58185 | 0.6190 | 0.9043 |
| 0.1066 | 46.0 | 59478 | 0.6420 | 0.9167 |
| 0.1453 | 47.0 | 60771 | 0.5683 | 0.9198 |
| 0.0991 | 48.0 | 62064 | 0.6286 | 0.9074 |
| 0.0688 | 49.0 | 63357 | 0.5495 | 0.9228 |
| 0.0907 | 50.0 | 64650 | 0.5094 | 0.9198 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0
- Datasets 2.19.2
- Tokenizers 0.19.1
|
[
"fresh1",
"fresh2",
"fresh3"
] |
talli96123/meat_calssify_fresh_crop_fixed_overlap_V_0_1_best
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
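Pending the author's snippet, a hedged sketch that assumes the same ViT classifier as the non-`_best` variant and prints per-class probabilities (the image path is a placeholder):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "talli96123/meat_calssify_fresh_crop_fixed_overlap_V_0_1_best"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

inputs = processor(images=Image.open("meat_sample.jpg"), return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)[0]
print({model.config.id2label[i]: round(p, 3) for i, p in enumerate(probs.tolist())})
```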
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"fresh1",
"fresh2",
"fresh3"
] |
talli96123/meat_calssify_fresh_crop_fixed_V_0_3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# meat_calssify_fresh_crop_fixed_V_0_3
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6191
- Accuracy: 0.75
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a short reproducibility sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
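With `train_batch_size` 1, the 621 steps per epoch in the table below imply roughly 621 training images. For reproducing the listed seed, a minimal sketch using the standard helper, which seeds Python, NumPy, and PyTorch together:

```python
from transformers import set_seed

set_seed(42)
```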
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0921 | 1.0 | 621 | 1.0844 | 0.3654 |
| 1.0438 | 2.0 | 1242 | 1.1755 | 0.4423 |
| 1.1558 | 3.0 | 1863 | 1.0148 | 0.4487 |
| 1.1274 | 4.0 | 2484 | 1.0371 | 0.5705 |
| 1.1992 | 5.0 | 3105 | 1.0598 | 0.3718 |
| 1.2195 | 6.0 | 3726 | 1.3422 | 0.5256 |
| 1.2366 | 7.0 | 4347 | 1.8744 | 0.4359 |
| 1.1302 | 8.0 | 4968 | 1.4699 | 0.5705 |
| 1.1137 | 9.0 | 5589 | 1.3480 | 0.6154 |
| 0.9534 | 10.0 | 6210 | 1.4723 | 0.5769 |
| 1.0269 | 11.0 | 6831 | 1.0999 | 0.6538 |
| 1.0545 | 12.0 | 7452 | 1.2980 | 0.6090 |
| 0.8669 | 13.0 | 8073 | 1.3408 | 0.6731 |
| 0.9571 | 14.0 | 8694 | 1.4879 | 0.6474 |
| 0.8345 | 15.0 | 9315 | 1.4030 | 0.6603 |
| 0.6623 | 16.0 | 9936 | 1.0840 | 0.7308 |
| 0.6014 | 17.0 | 10557 | 1.5054 | 0.7115 |
| 0.5599 | 18.0 | 11178 | 1.1956 | 0.7628 |
| 0.7209 | 19.0 | 11799 | 1.9734 | 0.5962 |
| 0.6289 | 20.0 | 12420 | 1.6165 | 0.6923 |
| 0.429 | 21.0 | 13041 | 1.5766 | 0.6987 |
| 0.628 | 22.0 | 13662 | 1.3948 | 0.7179 |
| 0.5427 | 23.0 | 14283 | 1.7663 | 0.6795 |
| 0.3806 | 24.0 | 14904 | 1.6219 | 0.6987 |
| 0.4443 | 25.0 | 15525 | 1.5065 | 0.7051 |
| 0.3648 | 26.0 | 16146 | 1.5225 | 0.7308 |
| 0.3812 | 27.0 | 16767 | 1.3488 | 0.75 |
| 0.3106 | 28.0 | 17388 | 1.7758 | 0.6987 |
| 0.3725 | 29.0 | 18009 | 1.4190 | 0.7372 |
| 0.4284 | 30.0 | 18630 | 1.6205 | 0.7115 |
| 0.2257 | 31.0 | 19251 | 1.5535 | 0.7628 |
| 0.2869 | 32.0 | 19872 | 1.2077 | 0.8013 |
| 0.3128 | 33.0 | 20493 | 1.9065 | 0.7051 |
| 0.2802 | 34.0 | 21114 | 1.2794 | 0.7885 |
| 0.2647 | 35.0 | 21735 | 1.5823 | 0.7436 |
| 0.3054 | 36.0 | 22356 | 1.3412 | 0.7756 |
| 0.2619 | 37.0 | 22977 | 1.4471 | 0.7308 |
| 0.2289 | 38.0 | 23598 | 1.8176 | 0.7244 |
| 0.1554 | 39.0 | 24219 | 1.5014 | 0.7628 |
| 0.1794 | 40.0 | 24840 | 1.4112 | 0.7628 |
| 0.1835 | 41.0 | 25461 | 1.8688 | 0.7244 |
| 0.2177 | 42.0 | 26082 | 1.2748 | 0.7821 |
| 0.1063 | 43.0 | 26703 | 1.4471 | 0.7628 |
| 0.1798 | 44.0 | 27324 | 1.1872 | 0.7949 |
| 0.1511 | 45.0 | 27945 | 1.3028 | 0.8077 |
| 0.1659 | 46.0 | 28566 | 1.7257 | 0.7308 |
| 0.0917 | 47.0 | 29187 | 1.2314 | 0.8205 |
| 0.1554 | 48.0 | 29808 | 1.4090 | 0.7821 |
| 0.0927 | 49.0 | 30429 | 1.0295 | 0.8397 |
| 0.1188 | 50.0 | 31050 | 1.6191 | 0.75 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0
- Datasets 2.19.2
- Tokenizers 0.19.1
|
[
"fresh1",
"fresh2",
"fresh3"
] |
talli96123/meat_calssify_fresh_crop_fixed_V_0_2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# meat_calssify_fresh_crop_fixed_V_0_2
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5986
- Accuracy: 0.7885
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0979 | 1.0 | 10 | 1.0867 | 0.3910 |
| 1.0723 | 2.0 | 20 | 1.0606 | 0.4487 |
| 1.0368 | 3.0 | 30 | 1.0202 | 0.4936 |
| 0.968 | 4.0 | 40 | 0.9396 | 0.5449 |
| 0.8927 | 5.0 | 50 | 0.8491 | 0.6410 |
| 0.8256 | 6.0 | 60 | 0.8543 | 0.6282 |
| 0.7477 | 7.0 | 70 | 0.8216 | 0.6410 |
| 0.6567 | 8.0 | 80 | 0.7805 | 0.6282 |
| 0.6121 | 9.0 | 90 | 0.7005 | 0.7308 |
| 0.6303 | 10.0 | 100 | 0.7170 | 0.6923 |
| 0.5335 | 11.0 | 110 | 0.7192 | 0.7051 |
| 0.5375 | 12.0 | 120 | 0.6438 | 0.7436 |
| 0.4651 | 13.0 | 130 | 0.7292 | 0.7115 |
| 0.5207 | 14.0 | 140 | 0.6449 | 0.7244 |
| 0.4692 | 15.0 | 150 | 0.6545 | 0.7244 |
| 0.4146 | 16.0 | 160 | 0.6789 | 0.7372 |
| 0.383 | 17.0 | 170 | 0.6214 | 0.7564 |
| 0.3612 | 18.0 | 180 | 0.6287 | 0.7372 |
| 0.3444 | 19.0 | 190 | 0.7465 | 0.6987 |
| 0.3562 | 20.0 | 200 | 0.6255 | 0.7756 |
| 0.3149 | 21.0 | 210 | 0.5088 | 0.8141 |
| 0.2883 | 22.0 | 220 | 0.6508 | 0.7179 |
| 0.2829 | 23.0 | 230 | 0.7362 | 0.7179 |
| 0.2713 | 24.0 | 240 | 0.5616 | 0.7692 |
| 0.2562 | 25.0 | 250 | 0.7014 | 0.7244 |
| 0.2819 | 26.0 | 260 | 0.6033 | 0.7628 |
| 0.2237 | 27.0 | 270 | 0.5719 | 0.7885 |
| 0.2486 | 28.0 | 280 | 0.7404 | 0.7179 |
| 0.2049 | 29.0 | 290 | 0.6897 | 0.75 |
| 0.2185 | 30.0 | 300 | 0.6415 | 0.7564 |
| 0.239 | 31.0 | 310 | 0.6182 | 0.7821 |
| 0.2315 | 32.0 | 320 | 0.7067 | 0.75 |
| 0.1775 | 33.0 | 330 | 0.6307 | 0.7628 |
| 0.1829 | 34.0 | 340 | 0.5605 | 0.8205 |
| 0.1712 | 35.0 | 350 | 0.6619 | 0.7692 |
| 0.1896 | 36.0 | 360 | 0.5419 | 0.7949 |
| 0.1961 | 37.0 | 370 | 0.6204 | 0.7885 |
| 0.1825 | 38.0 | 380 | 0.5401 | 0.8013 |
| 0.1986 | 39.0 | 390 | 0.5964 | 0.7821 |
| 0.1623 | 40.0 | 400 | 0.5319 | 0.8269 |
| 0.1356 | 41.0 | 410 | 0.6096 | 0.7821 |
| 0.1615 | 42.0 | 420 | 0.6163 | 0.7692 |
| 0.1515 | 43.0 | 430 | 0.5757 | 0.7821 |
| 0.1655 | 44.0 | 440 | 0.6040 | 0.7756 |
| 0.1353 | 45.0 | 450 | 0.6121 | 0.7564 |
| 0.1133 | 46.0 | 460 | 0.4764 | 0.8141 |
| 0.1073 | 47.0 | 470 | 0.6337 | 0.7821 |
| 0.1266 | 48.0 | 480 | 0.5615 | 0.8077 |
| 0.1156 | 49.0 | 490 | 0.5092 | 0.8205 |
| 0.1344 | 50.0 | 500 | 0.5986 | 0.7885 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
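Checkpoints like this one typically reach the Hub with `push_to_hub`; a hedged sketch of re-publishing this model under your own namespace (the target repo id is hypothetical, and pushing requires a logged-in Hub account):

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "talli96123/meat_calssify_fresh_crop_fixed_V_0_2"
model = AutoModelForImageClassification.from_pretrained(repo)
processor = AutoImageProcessor.from_pretrained(repo)

model.push_to_hub("your-username/meat-freshness-vit")      # hypothetical repo id
processor.push_to_hub("your-username/meat-freshness-vit")
```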
|
[
"fresh1",
"fresh2",
"fresh3"
] |
talli96123/meat_calssify_fresh_crop_fixed_V_0_3_best
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
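Until the author provides one, a hedged pipeline sketch that returns scores for all three `fresh` classes (the image path is a placeholder):

```python
from transformers import pipeline

clf = pipeline(
    "image-classification",
    model="talli96123/meat_calssify_fresh_crop_fixed_V_0_3_best",
)
print(clf("meat_sample.jpg", top_k=3))  # hypothetical image path
```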
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"fresh1",
"fresh2",
"fresh3"
] |
talli96123/meat_calssify_fresh_crop_fixed_V_0_2_best
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
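In the meantime, a hedged sketch that also places the pipeline on GPU when one is available (the image path is a placeholder):

```python
import torch
from transformers import pipeline

clf = pipeline(
    "image-classification",
    model="talli96123/meat_calssify_fresh_crop_fixed_V_0_2_best",
    device=0 if torch.cuda.is_available() else -1,
)
print(clf("meat_sample.jpg"))  # hypothetical image path
```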
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"fresh1",
"fresh2",
"fresh3"
] |
mjun/dino-vits8-musinsa
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dino-vits8-musinsa
This model is a fine-tuned version of [facebook/dino-vits8](https://huggingface.co/facebook/dino-vits8) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1692
- Precision: 0.9579
- Recall: 0.9576
- F1: 0.9575
- Accuracy: 0.9576
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
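The labels are the generic `label_0` through `label_3`, so they carry no semantics on their own; a short sketch for inspecting what the checkpoint stores:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("mjun/dino-vits8-musinsa")
print(config.num_labels)  # 4
print(config.id2label)    # {0: "label_0", 1: "label_1", 2: "label_2", 3: "label_3"}
```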
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 202 | 0.1408 | 0.9513 | 0.9494 | 0.9498 | 0.9494 |
| No log | 2.0 | 404 | 0.1328 | 0.9563 | 0.9558 | 0.9560 | 0.9558 |
| 0.1129 | 3.0 | 606 | 0.1692 | 0.9579 | 0.9576 | 0.9575 | 0.9576 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Tokenizers 0.19.1
|
[
"label_0",
"label_1",
"label_2",
"label_3"
] |
talli96123/meat_calssify_fresh_crop_fixed_epoch_80_V_0_1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# meat_calssify_fresh_crop_fixed_epoch_80_V_0_1
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5098
- Accuracy: 0.8462
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 80
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1019 | 1.0 | 10 | 1.0993 | 0.3718 |
| 1.0872 | 2.0 | 20 | 1.0762 | 0.4167 |
| 1.0617 | 3.0 | 30 | 1.0498 | 0.4808 |
| 1.0264 | 4.0 | 40 | 1.0153 | 0.4872 |
| 0.9738 | 5.0 | 50 | 0.9431 | 0.5641 |
| 0.9038 | 6.0 | 60 | 0.9137 | 0.5449 |
| 0.8405 | 7.0 | 70 | 0.8209 | 0.6538 |
| 0.8131 | 8.0 | 80 | 0.8620 | 0.5897 |
| 0.7412 | 9.0 | 90 | 0.7370 | 0.6859 |
| 0.7006 | 10.0 | 100 | 0.7230 | 0.6987 |
| 0.6531 | 11.0 | 110 | 0.7679 | 0.6923 |
| 0.6215 | 12.0 | 120 | 0.6398 | 0.7308 |
| 0.5351 | 13.0 | 130 | 0.7016 | 0.7051 |
| 0.4847 | 14.0 | 140 | 0.5606 | 0.7949 |
| 0.4677 | 15.0 | 150 | 0.8849 | 0.6410 |
| 0.6042 | 16.0 | 160 | 0.5766 | 0.7756 |
| 0.4113 | 17.0 | 170 | 0.5573 | 0.7885 |
| 0.3662 | 18.0 | 180 | 0.6451 | 0.7179 |
| 0.3899 | 19.0 | 190 | 0.5613 | 0.7692 |
| 0.3518 | 20.0 | 200 | 0.6618 | 0.7051 |
| 0.303 | 21.0 | 210 | 0.5417 | 0.7756 |
| 0.2568 | 22.0 | 220 | 0.4785 | 0.8205 |
| 0.3098 | 23.0 | 230 | 0.6330 | 0.7564 |
| 0.3299 | 24.0 | 240 | 0.4944 | 0.8077 |
| 0.2373 | 25.0 | 250 | 0.5141 | 0.8141 |
| 0.2556 | 26.0 | 260 | 0.5719 | 0.8013 |
| 0.2387 | 27.0 | 270 | 0.5495 | 0.8013 |
| 0.2651 | 28.0 | 280 | 0.7409 | 0.7436 |
| 0.2909 | 29.0 | 290 | 0.6281 | 0.7821 |
| 0.2369 | 30.0 | 300 | 0.5067 | 0.8333 |
| 0.2084 | 31.0 | 310 | 0.5297 | 0.8077 |
| 0.2506 | 32.0 | 320 | 0.6124 | 0.7756 |
| 0.2395 | 33.0 | 330 | 0.5564 | 0.7692 |
| 0.2243 | 34.0 | 340 | 0.5176 | 0.7692 |
| 0.1951 | 35.0 | 350 | 0.5289 | 0.7949 |
| 0.1967 | 36.0 | 360 | 0.4829 | 0.8333 |
| 0.1602 | 37.0 | 370 | 0.5496 | 0.8205 |
| 0.1647 | 38.0 | 380 | 0.5969 | 0.7692 |
| 0.1772 | 39.0 | 390 | 0.6299 | 0.7949 |
| 0.1595 | 40.0 | 400 | 0.6386 | 0.7756 |
| 0.1801 | 41.0 | 410 | 0.5485 | 0.7885 |
| 0.1577 | 42.0 | 420 | 0.6692 | 0.7692 |
| 0.1683 | 43.0 | 430 | 0.5639 | 0.8077 |
| 0.1677 | 44.0 | 440 | 0.4369 | 0.8462 |
| 0.1367 | 45.0 | 450 | 0.5955 | 0.7756 |
| 0.1061 | 46.0 | 460 | 0.6644 | 0.8013 |
| 0.0957 | 47.0 | 470 | 0.5834 | 0.8077 |
| 0.1341 | 48.0 | 480 | 0.5541 | 0.8077 |
| 0.1153 | 49.0 | 490 | 0.6226 | 0.7885 |
| 0.122 | 50.0 | 500 | 0.5326 | 0.8269 |
| 0.1237 | 51.0 | 510 | 0.4428 | 0.8462 |
| 0.1006 | 52.0 | 520 | 0.5562 | 0.8269 |
| 0.1256 | 53.0 | 530 | 0.5066 | 0.8333 |
| 0.0995 | 54.0 | 540 | 0.6685 | 0.8013 |
| 0.1033 | 55.0 | 550 | 0.5183 | 0.8269 |
| 0.1177 | 56.0 | 560 | 0.6426 | 0.7692 |
| 0.1033 | 57.0 | 570 | 0.5079 | 0.8141 |
| 0.1389 | 58.0 | 580 | 0.5120 | 0.8205 |
| 0.0955 | 59.0 | 590 | 0.5381 | 0.8333 |
| 0.1108 | 60.0 | 600 | 0.5210 | 0.8526 |
| 0.1355 | 61.0 | 610 | 0.5460 | 0.8205 |
| 0.0897 | 62.0 | 620 | 0.4909 | 0.8269 |
| 0.084 | 63.0 | 630 | 0.5438 | 0.8205 |
| 0.082 | 64.0 | 640 | 0.5693 | 0.8269 |
| 0.1026 | 65.0 | 650 | 0.4864 | 0.8590 |
| 0.0872 | 66.0 | 660 | 0.4856 | 0.8397 |
| 0.0966 | 67.0 | 670 | 0.4073 | 0.8590 |
| 0.097 | 68.0 | 680 | 0.5848 | 0.8269 |
| 0.1007 | 69.0 | 690 | 0.4663 | 0.8205 |
| 0.0695 | 70.0 | 700 | 0.5000 | 0.8333 |
| 0.1048 | 71.0 | 710 | 0.6038 | 0.8141 |
| 0.0715 | 72.0 | 720 | 0.6008 | 0.8269 |
| 0.1061 | 73.0 | 730 | 0.5291 | 0.8269 |
| 0.0688 | 74.0 | 740 | 0.4124 | 0.8654 |
| 0.0638 | 75.0 | 750 | 0.5575 | 0.8205 |
| 0.0785 | 76.0 | 760 | 0.5537 | 0.8141 |
| 0.0758 | 77.0 | 770 | 0.4363 | 0.8718 |
| 0.0865 | 78.0 | 780 | 0.5200 | 0.8269 |
| 0.0844 | 79.0 | 790 | 0.6848 | 0.7949 |
| 0.0776 | 80.0 | 800 | 0.5098 | 0.8462 |
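The table peaks at 0.8718 accuracy (epoch 77) while the final epoch lands at the 0.8462 reported up top, so keeping only the last checkpoint leaves performance behind. A hedged sketch of the usual remedy (the card does not say whether the run used it):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",                 # placeholder
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,      # restore the best checkpoint, not the last
    metric_for_best_model="accuracy",
    greater_is_better=True,
)
```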
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
[
"fresh1",
"fresh2",
"fresh3"
] |
talli96123/meat_calssify_fresh_crop_fixed_epoch_80_V_0_1_best
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
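In the absence of author-provided code, the snippet below is a hedged sketch, not an official example: it assumes this `_best` checkpoint is a ViT image classifier like its sibling `meat_calssify_fresh_*` cards in this dump (fine-tuned from `google/vit-base-patch16-224-in21k`, labels `fresh1`/`fresh2`/`fresh3` from this row's label column).

```python
# Hedged sketch: assumes a standard 🤗 image-classification checkpoint.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "talli96123/meat_calssify_fresh_crop_fixed_epoch_80_V_0_1_best"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("example.jpg")  # hypothetical local image path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])
```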
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"fresh1",
"fresh2",
"fresh3"
] |
Augusto777/vit-base-patch16-224-ve-b-U10-12
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-ve-b-U10-12
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9868
- Accuracy: 0.7451
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch equivalent to this list follows it):
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 12
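As referenced above, here is a minimal sketch (not the author's script) of how these hyperparameters map onto 🤗 `TrainingArguments`; dataset loading, model setup, and the `Trainer` call are omitted, and the output directory is a hypothetical placeholder.

```python
# Sketch only: mirrors the hyperparameter list above; Adam betas/epsilon match
# the Trainer defaults (0.9, 0.999, 1e-8) and so need no explicit arguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-base-patch16-224-ve-b-U10-12",  # hypothetical
    learning_rate=5.5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # 32 * 4 = 128 total train batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    num_train_epochs=12,
)
```

The same mapping applies, with the listed values swapped in, to the other cards in this dump that share this hyperparameter template.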
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.96 | 6 | 1.3771 | 0.3137 |
| 1.3705 | 1.92 | 12 | 1.3219 | 0.5490 |
| 1.3705 | 2.88 | 18 | 1.2517 | 0.5490 |
| 1.2535 | 4.0 | 25 | 1.1875 | 0.5882 |
| 1.1079 | 4.96 | 31 | 1.1237 | 0.6078 |
| 1.1079 | 5.92 | 37 | 1.1003 | 0.6275 |
| 1.0048 | 6.88 | 43 | 1.0609 | 0.6863 |
| 0.9172 | 8.0 | 50 | 1.0668 | 0.6078 |
| 0.9172 | 8.96 | 56 | 1.0031 | 0.6667 |
| 0.8558 | 9.92 | 62 | 0.9868 | 0.7451 |
| 0.8558 | 10.88 | 68 | 0.9763 | 0.7451 |
| 0.8284 | 11.52 | 72 | 0.9733 | 0.7451 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
Augusto777/vit-base-patch16-224-ve-b-U10-24
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-ve-b-U10-24
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6432
- Accuracy: 0.8431
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 24
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.96 | 6 | 1.3827 | 0.3137 |
| 1.378 | 1.92 | 12 | 1.3335 | 0.5490 |
| 1.378 | 2.88 | 18 | 1.2577 | 0.5882 |
| 1.2725 | 4.0 | 25 | 1.1886 | 0.4706 |
| 1.1073 | 4.96 | 31 | 1.1040 | 0.6275 |
| 1.1073 | 5.92 | 37 | 1.0658 | 0.6078 |
| 0.9657 | 6.88 | 43 | 1.0155 | 0.6667 |
| 0.8361 | 8.0 | 50 | 0.9330 | 0.7451 |
| 0.8361 | 8.96 | 56 | 0.9690 | 0.6667 |
| 0.7181 | 9.92 | 62 | 0.8910 | 0.7255 |
| 0.7181 | 10.88 | 68 | 0.8953 | 0.6863 |
| 0.6126 | 12.0 | 75 | 0.8343 | 0.7451 |
| 0.5096 | 12.96 | 81 | 0.8048 | 0.7059 |
| 0.5096 | 13.92 | 87 | 0.7977 | 0.7059 |
| 0.4348 | 14.88 | 93 | 0.7250 | 0.7451 |
| 0.4011 | 16.0 | 100 | 0.6432 | 0.8431 |
| 0.4011 | 16.96 | 106 | 0.7317 | 0.7255 |
| 0.3292 | 17.92 | 112 | 0.7015 | 0.7451 |
| 0.3292 | 18.88 | 118 | 0.6248 | 0.7647 |
| 0.309 | 20.0 | 125 | 0.6990 | 0.7451 |
| 0.2744 | 20.96 | 131 | 0.6591 | 0.7843 |
| 0.2744 | 21.92 | 137 | 0.6452 | 0.7647 |
| 0.2864 | 22.88 | 143 | 0.6290 | 0.7843 |
| 0.2864 | 23.04 | 144 | 0.6285 | 0.7843 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
Augusto777/vit-base-patch16-224-ve-b-U10-40
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-ve-b-U10-40
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5211
- Accuracy: 0.8431
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.96 | 6 | 1.3845 | 0.2549 |
| 1.3817 | 1.92 | 12 | 1.3529 | 0.4706 |
| 1.3817 | 2.88 | 18 | 1.2772 | 0.5882 |
| 1.2986 | 4.0 | 25 | 1.2121 | 0.3922 |
| 1.1298 | 4.96 | 31 | 1.1164 | 0.5882 |
| 1.1298 | 5.92 | 37 | 1.0879 | 0.5882 |
| 0.9842 | 6.88 | 43 | 0.9898 | 0.6863 |
| 0.8402 | 8.0 | 50 | 0.9233 | 0.7843 |
| 0.8402 | 8.96 | 56 | 0.9650 | 0.6471 |
| 0.7084 | 9.92 | 62 | 0.8243 | 0.7451 |
| 0.7084 | 10.88 | 68 | 0.7988 | 0.7647 |
| 0.5914 | 12.0 | 75 | 0.8114 | 0.7451 |
| 0.461 | 12.96 | 81 | 0.7652 | 0.7451 |
| 0.461 | 13.92 | 87 | 0.7406 | 0.7451 |
| 0.3769 | 14.88 | 93 | 0.6916 | 0.7451 |
| 0.3376 | 16.0 | 100 | 0.6182 | 0.7843 |
| 0.3376 | 16.96 | 106 | 0.8395 | 0.6863 |
| 0.2606 | 17.92 | 112 | 0.6941 | 0.7255 |
| 0.2606 | 18.88 | 118 | 0.7345 | 0.7255 |
| 0.2314 | 20.0 | 125 | 0.7374 | 0.7059 |
| 0.1907 | 20.96 | 131 | 0.7490 | 0.7647 |
| 0.1907 | 21.92 | 137 | 0.7292 | 0.7255 |
| 0.1804 | 22.88 | 143 | 0.7301 | 0.7451 |
| 0.1447 | 24.0 | 150 | 0.7224 | 0.7647 |
| 0.1447 | 24.96 | 156 | 0.7415 | 0.7255 |
| 0.1537 | 25.92 | 162 | 0.6668 | 0.7843 |
| 0.1537 | 26.88 | 168 | 0.7188 | 0.7451 |
| 0.1471 | 28.0 | 175 | 0.7291 | 0.7451 |
| 0.1241 | 28.96 | 181 | 0.5919 | 0.8039 |
| 0.1241 | 29.92 | 187 | 0.5211 | 0.8431 |
| 0.1058 | 30.88 | 193 | 0.6107 | 0.7843 |
| 0.1032 | 32.0 | 200 | 0.6863 | 0.7647 |
| 0.1032 | 32.96 | 206 | 0.6295 | 0.7647 |
| 0.1116 | 33.92 | 212 | 0.6061 | 0.7843 |
| 0.1116 | 34.88 | 218 | 0.6610 | 0.7843 |
| 0.0871 | 36.0 | 225 | 0.6109 | 0.8039 |
| 0.1037 | 36.96 | 231 | 0.6116 | 0.7843 |
| 0.1037 | 37.92 | 237 | 0.6176 | 0.8039 |
| 0.0802 | 38.4 | 240 | 0.6169 | 0.8039 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
Augusto777/vit-base-patch16-224-ve-U11-12
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-ve-U11-12
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5924
- Accuracy: 0.8478
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3668 | 0.96 | 16 | 1.2319 | 0.5652 |
| 1.1102 | 1.97 | 33 | 0.9996 | 0.6957 |
| 0.8257 | 2.99 | 50 | 0.8429 | 0.6304 |
| 0.68 | 4.0 | 67 | 0.6906 | 0.8043 |
| 0.4763 | 4.96 | 83 | 0.6871 | 0.7609 |
| 0.341 | 5.97 | 100 | 0.5924 | 0.8478 |
| 0.2956 | 6.99 | 117 | 0.4863 | 0.8478 |
| 0.2376 | 8.0 | 134 | 0.5947 | 0.7826 |
| 0.2098 | 8.96 | 150 | 0.5579 | 0.8043 |
| 0.2213 | 9.97 | 167 | 0.6474 | 0.7609 |
| 0.1767 | 10.99 | 184 | 0.6015 | 0.7826 |
| 0.1757 | 11.46 | 192 | 0.5928 | 0.7609 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
talli96123/meat_calssify_fresh_crop_fixed_overlap_V_0_2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# meat_calssify_fresh_crop_fixed_overlap_V_0_2
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3158
- Accuracy: 0.9051
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0836 | 1.0 | 20 | 1.0836 | 0.3892 |
| 1.0325 | 2.0 | 40 | 1.0308 | 0.5032 |
| 0.9331 | 3.0 | 60 | 0.9478 | 0.5506 |
| 0.8711 | 4.0 | 80 | 0.9827 | 0.5380 |
| 0.8252 | 5.0 | 100 | 0.9171 | 0.5665 |
| 0.7597 | 6.0 | 120 | 0.8175 | 0.6234 |
| 0.6528 | 7.0 | 140 | 0.7884 | 0.6835 |
| 0.5646 | 8.0 | 160 | 0.7034 | 0.7025 |
| 0.5026 | 9.0 | 180 | 0.6805 | 0.7025 |
| 0.4534 | 10.0 | 200 | 0.6223 | 0.7690 |
| 0.4244 | 11.0 | 220 | 0.6262 | 0.7405 |
| 0.4077 | 12.0 | 240 | 0.6230 | 0.7595 |
| 0.3962 | 13.0 | 260 | 0.6731 | 0.7184 |
| 0.3587 | 14.0 | 280 | 0.5633 | 0.7911 |
| 0.316 | 15.0 | 300 | 0.5808 | 0.7848 |
| 0.2472 | 16.0 | 320 | 0.5478 | 0.7943 |
| 0.277 | 17.0 | 340 | 0.5609 | 0.8038 |
| 0.2586 | 18.0 | 360 | 0.5427 | 0.8133 |
| 0.2405 | 19.0 | 380 | 0.5207 | 0.8165 |
| 0.2141 | 20.0 | 400 | 0.4552 | 0.8323 |
| 0.2052 | 21.0 | 420 | 0.5201 | 0.8006 |
| 0.2182 | 22.0 | 440 | 0.3928 | 0.8544 |
| 0.1698 | 23.0 | 460 | 0.4459 | 0.8449 |
| 0.1618 | 24.0 | 480 | 0.4502 | 0.8323 |
| 0.1915 | 25.0 | 500 | 0.4057 | 0.8703 |
| 0.1596 | 26.0 | 520 | 0.4650 | 0.8386 |
| 0.1446 | 27.0 | 540 | 0.3713 | 0.8766 |
| 0.17 | 28.0 | 560 | 0.4394 | 0.8544 |
| 0.141 | 29.0 | 580 | 0.5494 | 0.8196 |
| 0.1563 | 30.0 | 600 | 0.5431 | 0.8196 |
| 0.1216 | 31.0 | 620 | 0.5010 | 0.8481 |
| 0.1081 | 32.0 | 640 | 0.4454 | 0.8608 |
| 0.1205 | 33.0 | 660 | 0.4664 | 0.8418 |
| 0.1325 | 34.0 | 680 | 0.4690 | 0.8481 |
| 0.1152 | 35.0 | 700 | 0.3433 | 0.9019 |
| 0.1218 | 36.0 | 720 | 0.4063 | 0.8671 |
| 0.1163 | 37.0 | 740 | 0.3552 | 0.8861 |
| 0.0976 | 38.0 | 760 | 0.4137 | 0.8734 |
| 0.1163 | 39.0 | 780 | 0.4193 | 0.8797 |
| 0.1034 | 40.0 | 800 | 0.3740 | 0.8892 |
| 0.1033 | 41.0 | 820 | 0.4036 | 0.8671 |
| 0.0806 | 42.0 | 840 | 0.4396 | 0.8639 |
| 0.0764 | 43.0 | 860 | 0.4137 | 0.8608 |
| 0.0955 | 44.0 | 880 | 0.4019 | 0.8734 |
| 0.0768 | 45.0 | 900 | 0.3778 | 0.8829 |
| 0.0824 | 46.0 | 920 | 0.3930 | 0.8829 |
| 0.0837 | 47.0 | 940 | 0.3524 | 0.8924 |
| 0.0817 | 48.0 | 960 | 0.3113 | 0.9177 |
| 0.0767 | 49.0 | 980 | 0.3881 | 0.8797 |
| 0.0769 | 50.0 | 1000 | 0.3158 | 0.9051 |
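The Accuracy column above is standard epoch-level 🤗 Trainer evaluation output. A minimal sketch of a metric hook that would produce it (an assumption about the setup; the author's actual `compute_metrics` is not documented):

```python
# Assumed accuracy hook for Trainer(compute_metrics=...); not author-confirmed.
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```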
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0
- Datasets 2.19.2
- Tokenizers 0.19.1
|
[
"fresh1",
"fresh2",
"fresh3"
] |
talli96123/meat_calssify_fresh_crop_fixed_overlap_V_0_2_best
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
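Pending author-provided code, a hedged sketch using the high-level `pipeline` API: it assumes this `_best` checkpoint is the image classifier described in the sibling `meat_calssify_fresh_crop_fixed_overlap_V_0_2` card (labels `fresh1`/`fresh2`/`fresh3`).

```python
# Hedged sketch: image-classification pipeline over an assumed ViT checkpoint.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="talli96123/meat_calssify_fresh_crop_fixed_overlap_V_0_2_best",
)
print(classifier("example.jpg"))  # hypothetical local image path
```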
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"fresh1",
"fresh2",
"fresh3"
] |
Augusto777/vit-base-patch16-224-ve-U11-b-24
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-ve-U11-b-24
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4436
- Accuracy: 0.9130
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 24
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.92 | 6 | 1.3798 | 0.5435 |
| 1.3792 | 2.0 | 13 | 1.3091 | 0.6522 |
| 1.3792 | 2.92 | 19 | 1.2227 | 0.5870 |
| 1.2783 | 4.0 | 26 | 1.1263 | 0.6087 |
| 1.1226 | 4.92 | 32 | 1.0466 | 0.6522 |
| 1.1226 | 6.0 | 39 | 0.9854 | 0.5870 |
| 0.9881 | 6.92 | 45 | 0.9303 | 0.6957 |
| 0.8707 | 8.0 | 52 | 0.8806 | 0.7826 |
| 0.8707 | 8.92 | 58 | 0.8234 | 0.7826 |
| 0.7604 | 10.0 | 65 | 0.7159 | 0.8261 |
| 0.6452 | 10.92 | 71 | 0.6929 | 0.8478 |
| 0.6452 | 12.0 | 78 | 0.6491 | 0.8696 |
| 0.5576 | 12.92 | 84 | 0.5924 | 0.8478 |
| 0.4708 | 14.0 | 91 | 0.5551 | 0.8478 |
| 0.4708 | 14.92 | 97 | 0.6354 | 0.8043 |
| 0.422 | 16.0 | 104 | 0.5130 | 0.8696 |
| 0.3546 | 16.92 | 110 | 0.5302 | 0.8696 |
| 0.3546 | 18.0 | 117 | 0.4436 | 0.9130 |
| 0.3353 | 18.92 | 123 | 0.5621 | 0.8261 |
| 0.3106 | 20.0 | 130 | 0.4912 | 0.8696 |
| 0.3106 | 20.92 | 136 | 0.4747 | 0.8913 |
| 0.312 | 22.0 | 143 | 0.4603 | 0.8913 |
| 0.312 | 22.15 | 144 | 0.4598 | 0.8913 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
talli96123/meat_calssify_fresh_no_crop_V_0_1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# meat_calssify_fresh_no_crop_V_0_1
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5082
- Accuracy: 0.7273
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1032 | 1.0 | 44 | 1.0785 | 0.3636 |
| 1.0983 | 2.0 | 88 | 1.1069 | 0.2727 |
| 1.1025 | 3.0 | 132 | 1.0854 | 0.4545 |
| 1.1036 | 4.0 | 176 | 1.1205 | 0.3636 |
| 1.16 | 5.0 | 220 | 1.0577 | 0.4545 |
| 1.0902 | 6.0 | 264 | 1.1767 | 0.2727 |
| 1.0789 | 7.0 | 308 | 1.2790 | 0.4545 |
| 1.1269 | 8.0 | 352 | 1.1196 | 0.4545 |
| 1.1132 | 9.0 | 396 | 1.1290 | 0.3636 |
| 1.092 | 10.0 | 440 | 1.2584 | 0.2727 |
| 0.988 | 11.0 | 484 | 0.9824 | 0.4545 |
| 0.9695 | 12.0 | 528 | 1.3389 | 0.2727 |
| 0.9343 | 13.0 | 572 | 1.2876 | 0.3636 |
| 0.8517 | 14.0 | 616 | 1.1018 | 0.4545 |
| 0.7473 | 15.0 | 660 | 1.2833 | 0.4545 |
| 0.7194 | 16.0 | 704 | 1.5489 | 0.4545 |
| 0.5832 | 17.0 | 748 | 1.2821 | 0.5455 |
| 0.4905 | 18.0 | 792 | 0.9996 | 0.7273 |
| 0.3587 | 19.0 | 836 | 1.1785 | 0.6364 |
| 0.178 | 20.0 | 880 | 1.3718 | 0.5455 |
| 0.1253 | 21.0 | 924 | 2.1013 | 0.4545 |
| 0.5536 | 22.0 | 968 | 1.4723 | 0.5455 |
| 0.5241 | 23.0 | 1012 | 1.6866 | 0.6364 |
| 0.2453 | 24.0 | 1056 | 1.6747 | 0.5455 |
| 0.1193 | 25.0 | 1100 | 1.3248 | 0.7273 |
| 0.0892 | 26.0 | 1144 | 2.3257 | 0.4545 |
| 0.3273 | 27.0 | 1188 | 1.8027 | 0.5455 |
| 0.3587 | 28.0 | 1232 | 2.1175 | 0.3636 |
| 0.1693 | 29.0 | 1276 | 0.8854 | 0.7273 |
| 0.2323 | 30.0 | 1320 | 1.5909 | 0.6364 |
| 0.1056 | 31.0 | 1364 | 1.5556 | 0.6364 |
| 0.0158 | 32.0 | 1408 | 1.8192 | 0.6364 |
| 0.2085 | 33.0 | 1452 | 2.1498 | 0.5455 |
| 0.1137 | 34.0 | 1496 | 1.8617 | 0.4545 |
| 0.287 | 35.0 | 1540 | 1.5198 | 0.5455 |
| 0.25 | 36.0 | 1584 | 2.1324 | 0.4545 |
| 0.0135 | 37.0 | 1628 | 2.1540 | 0.4545 |
| 0.1104 | 38.0 | 1672 | 2.2697 | 0.5455 |
| 0.2252 | 39.0 | 1716 | 2.5110 | 0.4545 |
| 0.0584 | 40.0 | 1760 | 2.6245 | 0.3636 |
| 0.2366 | 41.0 | 1804 | 2.2701 | 0.5455 |
| 0.089 | 42.0 | 1848 | 2.3318 | 0.5455 |
| 0.1237 | 43.0 | 1892 | 2.2786 | 0.5455 |
| 0.0121 | 44.0 | 1936 | 1.2596 | 0.6364 |
| 0.1234 | 45.0 | 1980 | 1.2882 | 0.7273 |
| 0.0116 | 46.0 | 2024 | 1.4629 | 0.7273 |
| 0.0508 | 47.0 | 2068 | 1.8392 | 0.6364 |
| 0.0221 | 48.0 | 2112 | 1.7354 | 0.6364 |
| 0.3441 | 49.0 | 2156 | 2.0862 | 0.5455 |
| 0.138 | 50.0 | 2200 | 1.5082 | 0.7273 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0
- Datasets 2.19.2
- Tokenizers 0.19.1
|
[
"fresh1",
"fresh2",
"fresh3"
] |
talli96123/meat_calssify_fresh_no_crop_V_0_1_best
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
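Pending author-provided code, one way to start is to inspect the checkpoint's label mapping before running inference. The expected `fresh1`/`fresh2`/`fresh3` labels come from this row's label column; their index order is an assumption (sorted folder order from an `imagefolder` dataset), not author-documented.

```python
# Hedged sketch: inspect the assumed label mapping in the model config.
from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    "talli96123/meat_calssify_fresh_no_crop_V_0_1_best"
)
print(config.id2label)  # expected: {0: "fresh1", 1: "fresh2", 2: "fresh3"}
print(config.label2id)
```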
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"fresh1",
"fresh2",
"fresh3"
] |
Augusto777/vit-base-patch16-224-ve-U11-b-40
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-ve-U11-b-40
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6399
- Accuracy: 0.8478
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.92 | 6 | 1.3827 | 0.3913 |
| 1.3824 | 2.0 | 13 | 1.3319 | 0.6087 |
| 1.3824 | 2.92 | 19 | 1.2476 | 0.5435 |
| 1.3034 | 4.0 | 26 | 1.1450 | 0.5217 |
| 1.1431 | 4.92 | 32 | 1.0679 | 0.5435 |
| 1.1431 | 6.0 | 39 | 1.0006 | 0.6087 |
| 1.0123 | 6.92 | 45 | 0.9617 | 0.6522 |
| 0.8798 | 8.0 | 52 | 0.8575 | 0.7609 |
| 0.8798 | 8.92 | 58 | 0.8074 | 0.6957 |
| 0.7538 | 10.0 | 65 | 0.7447 | 0.7826 |
| 0.6115 | 10.92 | 71 | 0.7204 | 0.7826 |
| 0.6115 | 12.0 | 78 | 0.6399 | 0.8478 |
| 0.5009 | 12.92 | 84 | 0.5726 | 0.8478 |
| 0.389 | 14.0 | 91 | 0.5825 | 0.8478 |
| 0.389 | 14.92 | 97 | 0.6231 | 0.7609 |
| 0.3348 | 16.0 | 104 | 0.5510 | 0.8478 |
| 0.2616 | 16.92 | 110 | 0.5070 | 0.8478 |
| 0.2616 | 18.0 | 117 | 0.5040 | 0.8261 |
| 0.2188 | 18.92 | 123 | 0.5738 | 0.7826 |
| 0.2078 | 20.0 | 130 | 0.5398 | 0.8043 |
| 0.2078 | 20.92 | 136 | 0.5334 | 0.7826 |
| 0.2165 | 22.0 | 143 | 0.6043 | 0.7826 |
| 0.2165 | 22.92 | 149 | 0.5817 | 0.8043 |
| 0.1645 | 24.0 | 156 | 0.6465 | 0.7391 |
| 0.1413 | 24.92 | 162 | 0.6638 | 0.8043 |
| 0.1413 | 26.0 | 169 | 0.5710 | 0.8261 |
| 0.141 | 26.92 | 175 | 0.6494 | 0.8043 |
| 0.1313 | 28.0 | 182 | 0.7649 | 0.6957 |
| 0.1313 | 28.92 | 188 | 0.6130 | 0.7609 |
| 0.14 | 30.0 | 195 | 0.6718 | 0.7609 |
| 0.1284 | 30.92 | 201 | 0.6660 | 0.8261 |
| 0.1284 | 32.0 | 208 | 0.6286 | 0.7826 |
| 0.1135 | 32.92 | 214 | 0.6424 | 0.8043 |
| 0.1024 | 34.0 | 221 | 0.6339 | 0.8043 |
| 0.1024 | 34.92 | 227 | 0.6132 | 0.8043 |
| 0.1108 | 36.0 | 234 | 0.5975 | 0.8478 |
| 0.0944 | 36.92 | 240 | 0.5981 | 0.8478 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
Augusto777/vit-base-patch16-224-ve-U11-b-80
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-ve-U11-b-80
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5456
- Accuracy: 0.8913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 80
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.92 | 6 | 1.3848 | 0.3696 |
| 1.3848 | 2.0 | 13 | 1.3692 | 0.5217 |
| 1.3848 | 2.92 | 19 | 1.3197 | 0.5435 |
| 1.3517 | 4.0 | 26 | 1.2264 | 0.5 |
| 1.2334 | 4.92 | 32 | 1.1280 | 0.4348 |
| 1.2334 | 6.0 | 39 | 1.0437 | 0.5435 |
| 1.073 | 6.92 | 45 | 0.9771 | 0.5870 |
| 0.9358 | 8.0 | 52 | 0.9470 | 0.6739 |
| 0.9358 | 8.92 | 58 | 0.8528 | 0.7826 |
| 0.7955 | 10.0 | 65 | 0.7839 | 0.7609 |
| 0.6429 | 10.92 | 71 | 0.7620 | 0.7391 |
| 0.6429 | 12.0 | 78 | 0.6466 | 0.8043 |
| 0.5096 | 12.92 | 84 | 0.7396 | 0.7174 |
| 0.4086 | 14.0 | 91 | 0.7335 | 0.7174 |
| 0.4086 | 14.92 | 97 | 0.6473 | 0.7391 |
| 0.3355 | 16.0 | 104 | 0.6019 | 0.7391 |
| 0.2511 | 16.92 | 110 | 0.5275 | 0.8261 |
| 0.2511 | 18.0 | 117 | 0.6069 | 0.7826 |
| 0.1925 | 18.92 | 123 | 0.6447 | 0.7826 |
| 0.2121 | 20.0 | 130 | 0.5044 | 0.8261 |
| 0.2121 | 20.92 | 136 | 0.4805 | 0.8478 |
| 0.1883 | 22.0 | 143 | 0.6723 | 0.8043 |
| 0.1883 | 22.92 | 149 | 0.7730 | 0.7391 |
| 0.1693 | 24.0 | 156 | 0.6574 | 0.7609 |
| 0.1252 | 24.92 | 162 | 0.8192 | 0.7391 |
| 0.1252 | 26.0 | 169 | 0.5984 | 0.7826 |
| 0.1439 | 26.92 | 175 | 0.7633 | 0.7826 |
| 0.137 | 28.0 | 182 | 0.6566 | 0.8478 |
| 0.137 | 28.92 | 188 | 0.6550 | 0.8261 |
| 0.1316 | 30.0 | 195 | 0.7163 | 0.7391 |
| 0.1101 | 30.92 | 201 | 0.6241 | 0.7826 |
| 0.1101 | 32.0 | 208 | 0.6360 | 0.8478 |
| 0.0947 | 32.92 | 214 | 0.5273 | 0.8696 |
| 0.0885 | 34.0 | 221 | 0.6579 | 0.8261 |
| 0.0885 | 34.92 | 227 | 0.5920 | 0.8696 |
| 0.0967 | 36.0 | 234 | 0.6779 | 0.8261 |
| 0.0812 | 36.92 | 240 | 0.7354 | 0.8043 |
| 0.0812 | 38.0 | 247 | 0.6825 | 0.8261 |
| 0.0752 | 38.92 | 253 | 0.6348 | 0.8478 |
| 0.0757 | 40.0 | 260 | 0.7726 | 0.8043 |
| 0.0757 | 40.92 | 266 | 0.6737 | 0.8261 |
| 0.086 | 42.0 | 273 | 0.6738 | 0.7826 |
| 0.086 | 42.92 | 279 | 0.7295 | 0.7609 |
| 0.0533 | 44.0 | 286 | 0.6897 | 0.8261 |
| 0.0574 | 44.92 | 292 | 0.6427 | 0.8261 |
| 0.0574 | 46.0 | 299 | 0.6471 | 0.8261 |
| 0.0739 | 46.92 | 305 | 0.6645 | 0.8261 |
| 0.0849 | 48.0 | 312 | 0.6858 | 0.8043 |
| 0.0849 | 48.92 | 318 | 0.7475 | 0.8043 |
| 0.0719 | 50.0 | 325 | 0.6735 | 0.8261 |
| 0.0434 | 50.92 | 331 | 0.6892 | 0.8478 |
| 0.0434 | 52.0 | 338 | 0.6820 | 0.8478 |
| 0.0564 | 52.92 | 344 | 0.6677 | 0.8478 |
| 0.0408 | 54.0 | 351 | 0.7379 | 0.8043 |
| 0.0408 | 54.92 | 357 | 0.5456 | 0.8913 |
| 0.0464 | 56.0 | 364 | 0.7951 | 0.7826 |
| 0.0463 | 56.92 | 370 | 0.6356 | 0.8478 |
| 0.0463 | 58.0 | 377 | 0.7529 | 0.8261 |
| 0.0361 | 58.92 | 383 | 0.8017 | 0.8261 |
| 0.0457 | 60.0 | 390 | 0.7877 | 0.8478 |
| 0.0457 | 60.92 | 396 | 0.8019 | 0.7826 |
| 0.0371 | 62.0 | 403 | 0.8015 | 0.8043 |
| 0.0371 | 62.92 | 409 | 0.8487 | 0.8043 |
| 0.0452 | 64.0 | 416 | 0.9401 | 0.7609 |
| 0.0455 | 64.92 | 422 | 0.9647 | 0.7609 |
| 0.0455 | 66.0 | 429 | 0.8958 | 0.7609 |
| 0.0408 | 66.92 | 435 | 0.8531 | 0.7826 |
| 0.0418 | 68.0 | 442 | 0.8206 | 0.8043 |
| 0.0418 | 68.92 | 448 | 0.8045 | 0.8043 |
| 0.0424 | 70.0 | 455 | 0.8090 | 0.8043 |
| 0.038 | 70.92 | 461 | 0.7902 | 0.8043 |
| 0.038 | 72.0 | 468 | 0.8008 | 0.8261 |
| 0.0401 | 72.92 | 474 | 0.8122 | 0.8043 |
| 0.0347 | 73.85 | 480 | 0.8161 | 0.8043 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
Augusto777/swin-tiny-patch4-window7-224-ve-U11-b-80
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-ve-U11-b-80
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7088
- Accuracy: 0.7826
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 80
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.92 | 6 | 1.3860 | 0.1304 |
| 1.3859 | 2.0 | 13 | 1.3832 | 0.2609 |
| 1.3859 | 2.92 | 19 | 1.3773 | 0.2609 |
| 1.3791 | 4.0 | 26 | 1.3569 | 0.2174 |
| 1.3347 | 4.92 | 32 | 1.3177 | 0.2609 |
| 1.3347 | 6.0 | 39 | 1.2093 | 0.3913 |
| 1.2088 | 6.92 | 45 | 1.1083 | 0.4348 |
| 1.0456 | 8.0 | 52 | 1.0340 | 0.4565 |
| 1.0456 | 8.92 | 58 | 1.0120 | 0.5 |
| 0.9278 | 10.0 | 65 | 0.9282 | 0.5652 |
| 0.847 | 10.92 | 71 | 0.9934 | 0.5217 |
| 0.847 | 12.0 | 78 | 1.0171 | 0.4783 |
| 0.7142 | 12.92 | 84 | 0.8889 | 0.5870 |
| 0.5959 | 14.0 | 91 | 0.9392 | 0.5870 |
| 0.5959 | 14.92 | 97 | 0.9018 | 0.6304 |
| 0.5344 | 16.0 | 104 | 0.8327 | 0.6739 |
| 0.4438 | 16.92 | 110 | 0.7308 | 0.7391 |
| 0.4438 | 18.0 | 117 | 0.6834 | 0.7174 |
| 0.4419 | 18.92 | 123 | 0.7909 | 0.6304 |
| 0.3989 | 20.0 | 130 | 0.9103 | 0.6739 |
| 0.3989 | 20.92 | 136 | 0.7534 | 0.7391 |
| 0.3534 | 22.0 | 143 | 0.8043 | 0.7391 |
| 0.3534 | 22.92 | 149 | 0.7648 | 0.7174 |
| 0.3265 | 24.0 | 156 | 0.7088 | 0.7826 |
| 0.2808 | 24.92 | 162 | 0.8845 | 0.6957 |
| 0.2808 | 26.0 | 169 | 0.7756 | 0.7609 |
| 0.2753 | 26.92 | 175 | 0.9944 | 0.6087 |
| 0.2837 | 28.0 | 182 | 0.8091 | 0.7174 |
| 0.2837 | 28.92 | 188 | 0.9966 | 0.6739 |
| 0.2667 | 30.0 | 195 | 0.7711 | 0.7826 |
| 0.2325 | 30.92 | 201 | 0.8946 | 0.6957 |
| 0.2325 | 32.0 | 208 | 0.9079 | 0.6739 |
| 0.2096 | 32.92 | 214 | 1.0338 | 0.6522 |
| 0.1733 | 34.0 | 221 | 0.8191 | 0.7391 |
| 0.1733 | 34.92 | 227 | 1.0068 | 0.6957 |
| 0.1975 | 36.0 | 234 | 0.8644 | 0.7174 |
| 0.1844 | 36.92 | 240 | 0.8682 | 0.6739 |
| 0.1844 | 38.0 | 247 | 0.7915 | 0.7609 |
| 0.1701 | 38.92 | 253 | 0.7554 | 0.7609 |
| 0.1696 | 40.0 | 260 | 0.8762 | 0.7174 |
| 0.1696 | 40.92 | 266 | 1.0173 | 0.6739 |
| 0.1556 | 42.0 | 273 | 0.9080 | 0.7174 |
| 0.1556 | 42.92 | 279 | 1.2456 | 0.6739 |
| 0.153 | 44.0 | 286 | 0.9820 | 0.7391 |
| 0.1343 | 44.92 | 292 | 0.9908 | 0.7174 |
| 0.1343 | 46.0 | 299 | 0.9435 | 0.7391 |
| 0.1513 | 46.92 | 305 | 0.8842 | 0.7826 |
| 0.1402 | 48.0 | 312 | 1.0207 | 0.6739 |
| 0.1402 | 48.92 | 318 | 0.9915 | 0.7174 |
| 0.1648 | 50.0 | 325 | 1.1576 | 0.6739 |
| 0.1047 | 50.92 | 331 | 1.2283 | 0.6739 |
| 0.1047 | 52.0 | 338 | 1.0869 | 0.6957 |
| 0.1223 | 52.92 | 344 | 1.1203 | 0.7174 |
| 0.1223 | 54.0 | 351 | 0.9685 | 0.7174 |
| 0.1223 | 54.92 | 357 | 1.1926 | 0.7174 |
| 0.1236 | 56.0 | 364 | 1.0088 | 0.7174 |
| 0.1115 | 56.92 | 370 | 0.9149 | 0.7391 |
| 0.1115 | 58.0 | 377 | 0.8820 | 0.7391 |
| 0.1173 | 58.92 | 383 | 0.9653 | 0.7391 |
| 0.102 | 60.0 | 390 | 1.0046 | 0.7174 |
| 0.102 | 60.92 | 396 | 1.0585 | 0.6957 |
| 0.1206 | 62.0 | 403 | 1.0490 | 0.6957 |
| 0.1206 | 62.92 | 409 | 0.9683 | 0.7609 |
| 0.1124 | 64.0 | 416 | 0.9627 | 0.7609 |
| 0.0927 | 64.92 | 422 | 0.9771 | 0.7609 |
| 0.0927 | 66.0 | 429 | 1.0002 | 0.7174 |
| 0.0906 | 66.92 | 435 | 0.9607 | 0.7391 |
| 0.084 | 68.0 | 442 | 0.9414 | 0.7391 |
| 0.084 | 68.92 | 448 | 0.9863 | 0.7174 |
| 0.0866 | 70.0 | 455 | 0.9930 | 0.7174 |
| 0.0944 | 70.92 | 461 | 0.9981 | 0.7174 |
| 0.0944 | 72.0 | 468 | 1.0039 | 0.7174 |
| 0.1064 | 72.92 | 474 | 0.9987 | 0.7174 |
| 0.1074 | 73.85 | 480 | 0.9964 | 0.7174 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
Augusto777/swin-tiny-patch4-window7-224-ve-U11-b-12
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-ve-U11-b-12
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9473
- Accuracy: 0.5435
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.92 | 6 | 1.3839 | 0.1304 |
| 1.3821 | 2.0 | 13 | 1.3524 | 0.2391 |
| 1.3821 | 2.92 | 19 | 1.2898 | 0.3043 |
| 1.2875 | 4.0 | 26 | 1.1721 | 0.4348 |
| 1.1072 | 4.92 | 32 | 1.1018 | 0.4348 |
| 1.1072 | 6.0 | 39 | 1.0327 | 0.4783 |
| 0.9941 | 6.92 | 45 | 0.9920 | 0.4565 |
| 0.9132 | 8.0 | 52 | 0.9473 | 0.5435 |
| 0.9132 | 8.92 | 58 | 0.9522 | 0.5217 |
| 0.849 | 10.0 | 65 | 0.9478 | 0.5217 |
| 0.8124 | 10.92 | 71 | 0.9506 | 0.5217 |
| 0.8124 | 11.08 | 72 | 0.9505 | 0.5217 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
Augusto777/swin-tiny-patch4-window7-224-ve-U11-b-40
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-ve-U11-b-40
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6121
- Accuracy: 0.8261
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.92 | 6 | 1.5799 | 0.4783 |
| 2.1773 | 2.0 | 13 | 1.5648 | 0.3478 |
| 2.1773 | 2.92 | 19 | 1.5182 | 0.3261 |
| 2.1773 | 4.0 | 26 | 1.4232 | 0.3261 |
| 1.8993 | 4.92 | 32 | 1.3505 | 0.3913 |
| 1.8993 | 6.0 | 39 | 1.2747 | 0.3696 |
| 1.5045 | 6.92 | 45 | 1.2452 | 0.3696 |
| 1.2431 | 8.0 | 52 | 1.1982 | 0.2826 |
| 1.2431 | 8.92 | 58 | 1.2112 | 0.3043 |
| 1.1225 | 10.0 | 65 | 1.0160 | 0.5 |
| 0.9942 | 10.92 | 71 | 1.0138 | 0.4783 |
| 0.9942 | 12.0 | 78 | 0.9094 | 0.5652 |
| 0.9212 | 12.92 | 84 | 0.8860 | 0.5217 |
| 0.816 | 14.0 | 91 | 0.7693 | 0.6739 |
| 0.816 | 14.92 | 97 | 0.8290 | 0.6304 |
| 0.741 | 16.0 | 104 | 0.7810 | 0.6739 |
| 0.631 | 16.92 | 110 | 0.6342 | 0.7826 |
| 0.631 | 18.0 | 117 | 0.7677 | 0.6957 |
| 0.6402 | 18.92 | 123 | 0.6283 | 0.7391 |
| 0.5477 | 20.0 | 130 | 0.6687 | 0.7174 |
| 0.5477 | 20.92 | 136 | 0.6369 | 0.7826 |
| 0.5023 | 22.0 | 143 | 0.6334 | 0.7609 |
| 0.5023 | 22.92 | 149 | 0.6355 | 0.8043 |
| 0.4802 | 24.0 | 156 | 0.5976 | 0.8043 |
| 0.4336 | 24.92 | 162 | 0.6112 | 0.7609 |
| 0.4336 | 26.0 | 169 | 0.6148 | 0.8043 |
| 0.4203 | 26.92 | 175 | 0.6380 | 0.7391 |
| 0.429 | 28.0 | 182 | 0.6032 | 0.8043 |
| 0.429 | 28.92 | 188 | 0.6348 | 0.7391 |
| 0.4013 | 30.0 | 195 | 0.6121 | 0.8261 |
| 0.3747 | 30.92 | 201 | 0.6521 | 0.7391 |
| 0.3747 | 32.0 | 208 | 0.6424 | 0.7609 |
| 0.3668 | 32.92 | 214 | 0.6149 | 0.8261 |
| 0.3287 | 34.0 | 221 | 0.6426 | 0.7826 |
| 0.3287 | 34.92 | 227 | 0.6379 | 0.8043 |
| 0.372 | 36.0 | 234 | 0.6435 | 0.8043 |
| 0.3236 | 36.92 | 240 | 0.6450 | 0.8043 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
Augusto777/swin-tiny-patch4-window7-224-ve-U11-b-60
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-ve-U11-b-60
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7230
- Accuracy: 0.8043
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.92 | 6 | 1.3859 | 0.1304 |
| 1.3858 | 2.0 | 13 | 1.3818 | 0.2609 |
| 1.3858 | 2.92 | 19 | 1.3723 | 0.2609 |
| 1.3747 | 4.0 | 26 | 1.3355 | 0.2174 |
| 1.3001 | 4.92 | 32 | 1.2625 | 0.3696 |
| 1.3001 | 6.0 | 39 | 1.1306 | 0.4565 |
| 1.141 | 6.92 | 45 | 1.0510 | 0.4783 |
| 0.9784 | 8.0 | 52 | 0.9585 | 0.5435 |
| 0.9784 | 8.92 | 58 | 0.9895 | 0.4783 |
| 0.8533 | 10.0 | 65 | 0.9512 | 0.5 |
| 0.7564 | 10.92 | 71 | 0.9522 | 0.5217 |
| 0.7564 | 12.0 | 78 | 0.9144 | 0.5 |
| 0.6735 | 12.92 | 84 | 0.9070 | 0.6087 |
| 0.5919 | 14.0 | 91 | 0.7915 | 0.6522 |
| 0.5919 | 14.92 | 97 | 0.7989 | 0.6522 |
| 0.504 | 16.0 | 104 | 0.9510 | 0.6522 |
| 0.4422 | 16.92 | 110 | 0.8196 | 0.6739 |
| 0.4422 | 18.0 | 117 | 0.6629 | 0.7609 |
| 0.4031 | 18.92 | 123 | 0.8767 | 0.6522 |
| 0.3752 | 20.0 | 130 | 0.8253 | 0.6739 |
| 0.3752 | 20.92 | 136 | 0.7183 | 0.7391 |
| 0.3424 | 22.0 | 143 | 0.8852 | 0.6739 |
| 0.3424 | 22.92 | 149 | 0.7360 | 0.7391 |
| 0.3293 | 24.0 | 156 | 0.7230 | 0.8043 |
| 0.2822 | 24.92 | 162 | 0.8271 | 0.6957 |
| 0.2822 | 26.0 | 169 | 0.7443 | 0.8043 |
| 0.2623 | 26.92 | 175 | 0.9371 | 0.6739 |
| 0.2807 | 28.0 | 182 | 0.7392 | 0.7391 |
| 0.2807 | 28.92 | 188 | 0.8754 | 0.6739 |
| 0.223 | 30.0 | 195 | 0.7146 | 0.7826 |
| 0.2185 | 30.92 | 201 | 0.7702 | 0.7391 |
| 0.2185 | 32.0 | 208 | 0.7330 | 0.7174 |
| 0.2157 | 32.92 | 214 | 0.8817 | 0.6957 |
| 0.2011 | 34.0 | 221 | 0.7460 | 0.7174 |
| 0.2011 | 34.92 | 227 | 0.9663 | 0.6739 |
| 0.2204 | 36.0 | 234 | 0.8056 | 0.7174 |
| 0.1856 | 36.92 | 240 | 0.7799 | 0.7174 |
| 0.1856 | 38.0 | 247 | 0.8410 | 0.6957 |
| 0.1678 | 38.92 | 253 | 0.7334 | 0.7391 |
| 0.1682 | 40.0 | 260 | 0.8508 | 0.6957 |
| 0.1682 | 40.92 | 266 | 0.8106 | 0.6957 |
| 0.1638 | 42.0 | 273 | 0.8403 | 0.7174 |
| 0.1638 | 42.92 | 279 | 0.9157 | 0.6957 |
| 0.1573 | 44.0 | 286 | 0.9271 | 0.7391 |
| 0.1476 | 44.92 | 292 | 0.9167 | 0.7174 |
| 0.1476 | 46.0 | 299 | 0.9309 | 0.7174 |
| 0.1466 | 46.92 | 305 | 0.8236 | 0.7826 |
| 0.1457 | 48.0 | 312 | 0.8835 | 0.7826 |
| 0.1457 | 48.92 | 318 | 0.9162 | 0.7391 |
| 0.1625 | 50.0 | 325 | 0.8969 | 0.7391 |
| 0.1163 | 50.92 | 331 | 0.9183 | 0.7391 |
| 0.1163 | 52.0 | 338 | 0.9173 | 0.7391 |
| 0.1375 | 52.92 | 344 | 0.8886 | 0.7609 |
| 0.1379 | 54.0 | 351 | 0.8771 | 0.7391 |
| 0.1379 | 54.92 | 357 | 0.8857 | 0.7391 |
| 0.1321 | 55.38 | 360 | 0.8884 | 0.7391 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
Augusto777/vit-base-patch16-224-ve-U12-b-24
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-ve-U12-b-24
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6456
- Accuracy: 0.8478
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 24
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.92 | 6 | 1.3806 | 0.4130 |
| 1.379 | 2.0 | 13 | 1.3103 | 0.5435 |
| 1.379 | 2.92 | 19 | 1.2269 | 0.4130 |
| 1.2758 | 4.0 | 26 | 1.1412 | 0.4565 |
| 1.121 | 4.92 | 32 | 1.0650 | 0.4783 |
| 1.121 | 6.0 | 39 | 1.0084 | 0.5217 |
| 0.9871 | 6.92 | 45 | 0.9395 | 0.6522 |
| 0.8612 | 8.0 | 52 | 0.8798 | 0.7174 |
| 0.8612 | 8.92 | 58 | 0.8219 | 0.7391 |
| 0.7653 | 10.0 | 65 | 0.7712 | 0.7826 |
| 0.6674 | 10.92 | 71 | 0.7328 | 0.7609 |
| 0.6674 | 12.0 | 78 | 0.6968 | 0.7391 |
| 0.568 | 12.92 | 84 | 0.6456 | 0.8478 |
| 0.4723 | 14.0 | 91 | 0.6528 | 0.8043 |
| 0.4723 | 14.92 | 97 | 0.7107 | 0.6739 |
| 0.4256 | 16.0 | 104 | 0.6335 | 0.7609 |
| 0.3524 | 16.92 | 110 | 0.5953 | 0.8261 |
| 0.3524 | 18.0 | 117 | 0.5824 | 0.8261 |
| 0.3282 | 18.92 | 123 | 0.6329 | 0.7174 |
| 0.3074 | 20.0 | 130 | 0.5775 | 0.8043 |
| 0.3074 | 20.92 | 136 | 0.5770 | 0.8043 |
| 0.3076 | 22.0 | 143 | 0.5749 | 0.8261 |
| 0.3076 | 22.15 | 144 | 0.5747 | 0.8261 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
Augusto777/vit-base-patch16-224-ve-U12-b-80
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-ve-U12-b-80
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8139
- Accuracy: 0.8478
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 80
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.92 | 6 | 1.3850 | 0.3478 |
| 1.3848 | 2.0 | 13 | 1.3701 | 0.4783 |
| 1.3848 | 2.92 | 19 | 1.3196 | 0.5 |
| 1.3508 | 4.0 | 26 | 1.2287 | 0.4130 |
| 1.2282 | 4.92 | 32 | 1.1280 | 0.3913 |
| 1.2282 | 6.0 | 39 | 1.0625 | 0.3913 |
| 1.0677 | 6.92 | 45 | 0.9840 | 0.5 |
| 0.9278 | 8.0 | 52 | 0.8970 | 0.6957 |
| 0.9278 | 8.92 | 58 | 0.8530 | 0.7391 |
| 0.8003 | 10.0 | 65 | 0.7872 | 0.8043 |
| 0.6486 | 10.92 | 71 | 0.6974 | 0.8043 |
| 0.6486 | 12.0 | 78 | 0.6409 | 0.8043 |
| 0.514 | 12.92 | 84 | 0.6050 | 0.8261 |
| 0.3945 | 14.0 | 91 | 0.6589 | 0.7609 |
| 0.3945 | 14.92 | 97 | 0.6343 | 0.7609 |
| 0.337 | 16.0 | 104 | 0.7340 | 0.7174 |
| 0.2779 | 16.92 | 110 | 0.5629 | 0.8261 |
| 0.2779 | 18.0 | 117 | 0.5934 | 0.8261 |
| 0.2374 | 18.92 | 123 | 0.7080 | 0.7609 |
| 0.2201 | 20.0 | 130 | 0.7100 | 0.7391 |
| 0.2201 | 20.92 | 136 | 0.7673 | 0.7609 |
| 0.1889 | 22.0 | 143 | 0.7889 | 0.7391 |
| 0.1889 | 22.92 | 149 | 0.7971 | 0.7391 |
| 0.1463 | 24.0 | 156 | 0.6888 | 0.7826 |
| 0.1261 | 24.92 | 162 | 0.8399 | 0.7609 |
| 0.1261 | 26.0 | 169 | 0.7244 | 0.7826 |
| 0.1489 | 26.92 | 175 | 0.8311 | 0.7391 |
| 0.1132 | 28.0 | 182 | 0.7987 | 0.7609 |
| 0.1132 | 28.92 | 188 | 0.7380 | 0.8043 |
| 0.1279 | 30.0 | 195 | 0.8103 | 0.8043 |
| 0.0925 | 30.92 | 201 | 0.8462 | 0.7609 |
| 0.0925 | 32.0 | 208 | 0.8233 | 0.8043 |
| 0.0893 | 32.92 | 214 | 0.8241 | 0.7826 |
| 0.083 | 34.0 | 221 | 0.8443 | 0.7826 |
| 0.083 | 34.92 | 227 | 0.8429 | 0.7826 |
| 0.1044 | 36.0 | 234 | 0.9362 | 0.7609 |
| 0.0739 | 36.92 | 240 | 1.1173 | 0.7391 |
| 0.0739 | 38.0 | 247 | 0.7812 | 0.8261 |
| 0.0962 | 38.92 | 253 | 0.7595 | 0.8043 |
| 0.0869 | 40.0 | 260 | 0.8031 | 0.8261 |
| 0.0869 | 40.92 | 266 | 0.8359 | 0.8261 |
| 0.0837 | 42.0 | 273 | 0.8151 | 0.8261 |
| 0.0837 | 42.92 | 279 | 0.8295 | 0.8261 |
| 0.0535 | 44.0 | 286 | 0.8096 | 0.8261 |
| 0.0694 | 44.92 | 292 | 0.8352 | 0.8261 |
| 0.0694 | 46.0 | 299 | 0.8216 | 0.8261 |
| 0.0736 | 46.92 | 305 | 0.8683 | 0.8043 |
| 0.0705 | 48.0 | 312 | 0.8554 | 0.8261 |
| 0.0705 | 48.92 | 318 | 0.8139 | 0.8478 |
| 0.0559 | 50.0 | 325 | 0.9030 | 0.7826 |
| 0.0474 | 50.92 | 331 | 0.9053 | 0.7609 |
| 0.0474 | 52.0 | 338 | 0.8810 | 0.8261 |
| 0.0477 | 52.92 | 344 | 0.8912 | 0.8043 |
| 0.0529 | 54.0 | 351 | 0.9078 | 0.8043 |
| 0.0529 | 54.92 | 357 | 0.8804 | 0.8043 |
| 0.038 | 56.0 | 364 | 0.9498 | 0.7826 |
| 0.0407 | 56.92 | 370 | 0.9134 | 0.8043 |
| 0.0407 | 58.0 | 377 | 0.8452 | 0.8478 |
| 0.0353 | 58.92 | 383 | 0.8735 | 0.8261 |
| 0.0349 | 60.0 | 390 | 0.9153 | 0.8043 |
| 0.0349 | 60.92 | 396 | 0.9209 | 0.8043 |
| 0.0322 | 62.0 | 403 | 0.9091 | 0.8261 |
| 0.0322 | 62.92 | 409 | 0.9137 | 0.8261 |
| 0.0392 | 64.0 | 416 | 0.8896 | 0.8261 |
| 0.0419 | 64.92 | 422 | 0.8613 | 0.8478 |
| 0.0419 | 66.0 | 429 | 0.8844 | 0.8261 |
| 0.0518 | 66.92 | 435 | 0.9093 | 0.8043 |
| 0.0349 | 68.0 | 442 | 0.9082 | 0.8043 |
| 0.0349 | 68.92 | 448 | 0.8879 | 0.8261 |
| 0.0359 | 70.0 | 455 | 0.8809 | 0.8261 |
| 0.0377 | 70.92 | 461 | 0.8777 | 0.8261 |
| 0.0377 | 72.0 | 468 | 0.8845 | 0.8261 |
| 0.0324 | 72.92 | 474 | 0.8845 | 0.8261 |
| 0.0365 | 73.85 | 480 | 0.8850 | 0.8261 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
Augusto777/vit-base-patch16-224-ve-U13-b-24
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-ve-U13-b-24
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5896
- Accuracy: 0.8478
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 24
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.92 | 6 | 1.3800 | 0.4565 |
| 1.3792 | 2.0 | 13 | 1.3093 | 0.5870 |
| 1.3792 | 2.92 | 19 | 1.2228 | 0.5 |
| 1.2786 | 4.0 | 26 | 1.1303 | 0.5652 |
| 1.1265 | 4.92 | 32 | 1.0615 | 0.5435 |
| 1.1265 | 6.0 | 39 | 1.0205 | 0.4565 |
| 0.9906 | 6.92 | 45 | 0.9259 | 0.6304 |
| 0.8632 | 8.0 | 52 | 0.8739 | 0.7391 |
| 0.8632 | 8.92 | 58 | 0.8381 | 0.7609 |
| 0.7529 | 10.0 | 65 | 0.7604 | 0.7826 |
| 0.6468 | 10.92 | 71 | 0.7212 | 0.8043 |
| 0.6468 | 12.0 | 78 | 0.6825 | 0.7826 |
| 0.5553 | 12.92 | 84 | 0.6409 | 0.8261 |
| 0.4704 | 14.0 | 91 | 0.6471 | 0.8261 |
| 0.4704 | 14.92 | 97 | 0.6296 | 0.7609 |
| 0.415 | 16.0 | 104 | 0.5896 | 0.8478 |
| 0.3444 | 16.92 | 110 | 0.5828 | 0.8043 |
| 0.3444 | 18.0 | 117 | 0.5771 | 0.8261 |
| 0.3212 | 18.92 | 123 | 0.5672 | 0.8261 |
| 0.3021 | 20.0 | 130 | 0.5596 | 0.8478 |
| 0.3021 | 20.92 | 136 | 0.5527 | 0.8261 |
| 0.3004 | 22.0 | 143 | 0.5429 | 0.8261 |
| 0.3004 | 22.15 | 144 | 0.5427 | 0.8261 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
Augusto777/vit-base-patch16-224-ve-U13-b-80
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-ve-U13-b-80
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5742
- Accuracy: 0.8696
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 80
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.92 | 6 | 1.3848 | 0.3478 |
| 1.3848 | 2.0 | 13 | 1.3692 | 0.5217 |
| 1.3848 | 2.92 | 19 | 1.3184 | 0.5870 |
| 1.352 | 4.0 | 26 | 1.2217 | 0.4565 |
| 1.2316 | 4.92 | 32 | 1.1418 | 0.4783 |
| 1.2316 | 6.0 | 39 | 1.0689 | 0.4783 |
| 1.0849 | 6.92 | 45 | 0.9931 | 0.5870 |
| 0.9314 | 8.0 | 52 | 0.9458 | 0.6957 |
| 0.9314 | 8.92 | 58 | 0.8675 | 0.6957 |
| 0.8001 | 10.0 | 65 | 0.8148 | 0.7174 |
| 0.6493 | 10.92 | 71 | 0.7692 | 0.7609 |
| 0.6493 | 12.0 | 78 | 0.6428 | 0.8043 |
| 0.5145 | 12.92 | 84 | 0.6025 | 0.8261 |
| 0.379 | 14.0 | 91 | 0.5621 | 0.8043 |
| 0.379 | 14.92 | 97 | 0.5298 | 0.8478 |
| 0.2942 | 16.0 | 104 | 0.5791 | 0.8043 |
| 0.2096 | 16.92 | 110 | 0.5814 | 0.7826 |
| 0.2096 | 18.0 | 117 | 0.7829 | 0.7174 |
| 0.2113 | 18.92 | 123 | 0.5658 | 0.8478 |
| 0.2143 | 20.0 | 130 | 0.7036 | 0.7609 |
| 0.2143 | 20.92 | 136 | 0.5924 | 0.7826 |
| 0.1752 | 22.0 | 143 | 0.6852 | 0.7609 |
| 0.1752 | 22.92 | 149 | 0.7237 | 0.7609 |
| 0.1238 | 24.0 | 156 | 0.6743 | 0.8043 |
| 0.1401 | 24.92 | 162 | 0.8463 | 0.6957 |
| 0.1401 | 26.0 | 169 | 0.7872 | 0.7609 |
| 0.1544 | 26.92 | 175 | 0.5492 | 0.8261 |
| 0.1163 | 28.0 | 182 | 0.5756 | 0.8043 |
| 0.1163 | 28.92 | 188 | 0.7621 | 0.7609 |
| 0.1121 | 30.0 | 195 | 0.6972 | 0.7826 |
| 0.1065 | 30.92 | 201 | 0.5723 | 0.8261 |
| 0.1065 | 32.0 | 208 | 0.7503 | 0.8261 |
| 0.1021 | 32.92 | 214 | 0.6127 | 0.8043 |
| 0.1048 | 34.0 | 221 | 0.5734 | 0.8478 |
| 0.1048 | 34.92 | 227 | 0.5817 | 0.8478 |
| 0.0848 | 36.0 | 234 | 0.5903 | 0.8261 |
| 0.0769 | 36.92 | 240 | 0.7074 | 0.8261 |
| 0.0769 | 38.0 | 247 | 0.5835 | 0.8478 |
| 0.0825 | 38.92 | 253 | 0.6373 | 0.8043 |
| 0.0676 | 40.0 | 260 | 0.6793 | 0.8261 |
| 0.0676 | 40.92 | 266 | 0.6556 | 0.8261 |
| 0.0703 | 42.0 | 273 | 0.6329 | 0.8478 |
| 0.0703 | 42.92 | 279 | 0.6868 | 0.8261 |
| 0.0574 | 44.0 | 286 | 0.5997 | 0.8043 |
| 0.0523 | 44.92 | 292 | 0.5846 | 0.8261 |
| 0.0523 | 46.0 | 299 | 0.7214 | 0.8478 |
| 0.064 | 46.92 | 305 | 0.5230 | 0.8478 |
| 0.082 | 48.0 | 312 | 0.5850 | 0.8478 |
| 0.082 | 48.92 | 318 | 0.6346 | 0.8478 |
| 0.0694 | 50.0 | 325 | 0.6389 | 0.8261 |
| 0.0462 | 50.92 | 331 | 0.5813 | 0.8478 |
| 0.0462 | 52.0 | 338 | 0.5792 | 0.8478 |
| 0.044 | 52.92 | 344 | 0.5724 | 0.8261 |
| 0.0538 | 54.0 | 351 | 0.6294 | 0.8261 |
| 0.0538 | 54.92 | 357 | 0.5742 | 0.8696 |
| 0.0455 | 56.0 | 364 | 0.6951 | 0.8043 |
| 0.0537 | 56.92 | 370 | 0.6458 | 0.8043 |
| 0.0537 | 58.0 | 377 | 0.6259 | 0.8478 |
| 0.038 | 58.92 | 383 | 0.6748 | 0.8478 |
| 0.039 | 60.0 | 390 | 0.7236 | 0.8261 |
| 0.039 | 60.92 | 396 | 0.7758 | 0.8261 |
| 0.0304 | 62.0 | 403 | 0.7253 | 0.7609 |
| 0.0304 | 62.92 | 409 | 0.7513 | 0.8261 |
| 0.051 | 64.0 | 416 | 0.7547 | 0.8261 |
| 0.0355 | 64.92 | 422 | 0.8115 | 0.7826 |
| 0.0355 | 66.0 | 429 | 0.7768 | 0.8043 |
| 0.0435 | 66.92 | 435 | 0.7829 | 0.8043 |
| 0.0313 | 68.0 | 442 | 0.7787 | 0.8043 |
| 0.0313 | 68.92 | 448 | 0.7721 | 0.8261 |
| 0.0378 | 70.0 | 455 | 0.7672 | 0.8261 |
| 0.0339 | 70.92 | 461 | 0.7634 | 0.8261 |
| 0.0339 | 72.0 | 468 | 0.7615 | 0.8261 |
| 0.0311 | 72.92 | 474 | 0.7605 | 0.8261 |
| 0.0302 | 73.85 | 480 | 0.7603 | 0.8261 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
bismaadh14/emotion_recognition_results
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_recognition_results
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0824
- Accuracy: 0.025
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5476 | 1.0 | 80 | 4.2262 | 0.0063 |
| 0.7471 | 2.0 | 160 | 4.0593 | 0.0375 |
| 0.3293 | 3.0 | 240 | 4.0824 | 0.025 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
[
"anger",
"contempt",
"disgust",
"fear",
"happy",
"neutral",
"sad",
"surprise"
] |
matthieulel/dinov2-base-imagenet1k-1-layer-finetuned-galaxy10-decals
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dinov2-base-imagenet1k-1-layer-finetuned-galaxy10-decals
This model is a fine-tuned version of [facebook/dinov2-base-imagenet1k-1-layer](https://huggingface.co/facebook/dinov2-base-imagenet1k-1-layer) on the matthieulel/galaxy10_decals dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5267
- Accuracy: 0.8670
- Precision: 0.8645
- Recall: 0.8670
- F1: 0.8650
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.8157 | 0.99 | 62 | 0.6740 | 0.7813 | 0.8046 | 0.7813 | 0.7853 |
| 0.8091 | 2.0 | 125 | 0.5948 | 0.8021 | 0.8016 | 0.8021 | 0.7950 |
| 0.6983 | 2.99 | 187 | 0.6016 | 0.7965 | 0.8077 | 0.7965 | 0.7909 |
| 0.6701 | 4.0 | 250 | 0.5676 | 0.7982 | 0.8016 | 0.7982 | 0.7954 |
| 0.5998 | 4.99 | 312 | 0.5116 | 0.8286 | 0.8401 | 0.8286 | 0.8302 |
| 0.5521 | 6.0 | 375 | 0.5155 | 0.8354 | 0.8375 | 0.8354 | 0.8325 |
| 0.5441 | 6.99 | 437 | 0.5574 | 0.8033 | 0.8104 | 0.8033 | 0.7980 |
| 0.5142 | 8.0 | 500 | 0.4818 | 0.8410 | 0.8418 | 0.8410 | 0.8376 |
| 0.5136 | 8.99 | 562 | 0.4914 | 0.8337 | 0.8353 | 0.8337 | 0.8317 |
| 0.4533 | 10.0 | 625 | 0.4740 | 0.8320 | 0.8335 | 0.8320 | 0.8295 |
| 0.4904 | 10.99 | 687 | 0.5075 | 0.8399 | 0.8409 | 0.8399 | 0.8375 |
| 0.4361 | 12.0 | 750 | 0.4552 | 0.8563 | 0.8554 | 0.8563 | 0.8540 |
| 0.414 | 12.99 | 812 | 0.5025 | 0.8365 | 0.8455 | 0.8365 | 0.8374 |
| 0.4114 | 14.0 | 875 | 0.4822 | 0.8467 | 0.8437 | 0.8467 | 0.8420 |
| 0.3878 | 14.99 | 937 | 0.4615 | 0.8574 | 0.8552 | 0.8574 | 0.8549 |
| 0.3756 | 16.0 | 1000 | 0.5017 | 0.8444 | 0.8523 | 0.8444 | 0.8449 |
| 0.3056 | 16.99 | 1062 | 0.4910 | 0.8517 | 0.8495 | 0.8517 | 0.8501 |
| 0.3255 | 18.0 | 1125 | 0.5206 | 0.8523 | 0.8505 | 0.8523 | 0.8491 |
| 0.3224 | 18.99 | 1187 | 0.5066 | 0.8450 | 0.8470 | 0.8450 | 0.8438 |
| 0.2763 | 20.0 | 1250 | 0.5043 | 0.8574 | 0.8519 | 0.8574 | 0.8534 |
| 0.2926 | 20.99 | 1312 | 0.5345 | 0.8546 | 0.8542 | 0.8546 | 0.8512 |
| 0.2824 | 22.0 | 1375 | 0.5320 | 0.8529 | 0.8523 | 0.8529 | 0.8517 |
| 0.2613 | 22.99 | 1437 | 0.5254 | 0.8563 | 0.8543 | 0.8563 | 0.8542 |
| 0.2292 | 24.0 | 1500 | 0.5553 | 0.8546 | 0.8529 | 0.8546 | 0.8528 |
| 0.2313 | 24.99 | 1562 | 0.5603 | 0.8602 | 0.8612 | 0.8602 | 0.8593 |
| 0.2143 | 26.0 | 1625 | 0.5267 | 0.8670 | 0.8645 | 0.8670 | 0.8650 |
| 0.2075 | 26.99 | 1687 | 0.5737 | 0.8574 | 0.8589 | 0.8574 | 0.8573 |
| 0.2121 | 28.0 | 1750 | 0.5748 | 0.8619 | 0.8601 | 0.8619 | 0.8604 |
| 0.1944 | 28.99 | 1812 | 0.5666 | 0.8647 | 0.8618 | 0.8647 | 0.8624 |
| 0.1866 | 29.76 | 1860 | 0.5676 | 0.8608 | 0.8583 | 0.8608 | 0.8589 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.15.1
|
[
"disturbed galaxies",
"merging galaxies",
"round smooth galaxies",
"in-between round smooth galaxies",
"cigar shaped smooth galaxies",
"barred spiral galaxies",
"unbarred tight spiral galaxies",
"unbarred loose spiral galaxies",
"edge-on galaxies without bulge",
"edge-on galaxies with bulge"
] |
matthieulel/swinv2-tiny-patch4-window16-256-finetuned-galaxy10-decals
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-tiny-patch4-window16-256-finetuned-galaxy10-decals
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window16-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window16-256) on the matthieulel/galaxy10_decals dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4595
- Accuracy: 0.8551
- Precision: 0.8536
- Recall: 0.8551
- F1: 0.8518
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.723 | 0.99 | 62 | 1.4631 | 0.4803 | 0.5152 | 0.4803 | 0.4359 |
| 1.1597 | 2.0 | 125 | 0.9498 | 0.6759 | 0.6942 | 0.6759 | 0.6657 |
| 0.9305 | 2.99 | 187 | 0.6600 | 0.7728 | 0.7592 | 0.7728 | 0.7620 |
| 0.7634 | 4.0 | 250 | 0.6276 | 0.7875 | 0.7831 | 0.7875 | 0.7765 |
| 0.6924 | 4.99 | 312 | 0.5762 | 0.7943 | 0.7972 | 0.7943 | 0.7934 |
| 0.6992 | 6.0 | 375 | 0.5421 | 0.8123 | 0.8128 | 0.8123 | 0.8059 |
| 0.6731 | 6.99 | 437 | 0.5244 | 0.8129 | 0.8153 | 0.8129 | 0.8108 |
| 0.6274 | 8.0 | 500 | 0.5279 | 0.8055 | 0.8140 | 0.8055 | 0.8019 |
| 0.6096 | 8.99 | 562 | 0.4737 | 0.8354 | 0.8336 | 0.8354 | 0.8321 |
| 0.5906 | 10.0 | 625 | 0.4792 | 0.8382 | 0.8382 | 0.8382 | 0.8357 |
| 0.5839 | 10.99 | 687 | 0.5093 | 0.8224 | 0.8322 | 0.8224 | 0.8199 |
| 0.5478 | 12.0 | 750 | 0.4601 | 0.8433 | 0.8429 | 0.8433 | 0.8411 |
| 0.5678 | 12.99 | 812 | 0.5018 | 0.8269 | 0.8322 | 0.8269 | 0.8233 |
| 0.5586 | 14.0 | 875 | 0.4503 | 0.8439 | 0.8444 | 0.8439 | 0.8423 |
| 0.5267 | 14.99 | 937 | 0.4492 | 0.8444 | 0.8416 | 0.8444 | 0.8424 |
| 0.5143 | 16.0 | 1000 | 0.4543 | 0.8484 | 0.8458 | 0.8484 | 0.8442 |
| 0.4608 | 16.99 | 1062 | 0.4616 | 0.8427 | 0.8419 | 0.8427 | 0.8398 |
| 0.4914 | 18.0 | 1125 | 0.4477 | 0.8501 | 0.8501 | 0.8501 | 0.8479 |
| 0.4889 | 18.99 | 1187 | 0.4738 | 0.8337 | 0.8383 | 0.8337 | 0.8310 |
| 0.4943 | 20.0 | 1250 | 0.4758 | 0.8388 | 0.8373 | 0.8388 | 0.8352 |
| 0.4759 | 20.99 | 1312 | 0.4550 | 0.8478 | 0.8484 | 0.8478 | 0.8456 |
| 0.49 | 22.0 | 1375 | 0.4529 | 0.8512 | 0.8520 | 0.8512 | 0.8489 |
| 0.4546 | 22.99 | 1437 | 0.4567 | 0.8472 | 0.8456 | 0.8472 | 0.8447 |
| 0.4638 | 24.0 | 1500 | 0.4598 | 0.8450 | 0.8438 | 0.8450 | 0.8431 |
| 0.4591 | 24.99 | 1562 | 0.4655 | 0.8529 | 0.8539 | 0.8529 | 0.8507 |
| 0.413 | 26.0 | 1625 | 0.4512 | 0.8546 | 0.8526 | 0.8546 | 0.8514 |
| 0.4268 | 26.99 | 1687 | 0.4511 | 0.8517 | 0.8506 | 0.8517 | 0.8496 |
| 0.4497 | 28.0 | 1750 | 0.4595 | 0.8551 | 0.8536 | 0.8551 | 0.8518 |
| 0.4183 | 28.99 | 1812 | 0.4556 | 0.8540 | 0.8532 | 0.8540 | 0.8512 |
| 0.4211 | 29.76 | 1860 | 0.4567 | 0.8529 | 0.8523 | 0.8529 | 0.8503 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.15.1
|
[
"disturbed galaxies",
"merging galaxies",
"round smooth galaxies",
"in-between round smooth galaxies",
"cigar shaped smooth galaxies",
"barred spiral galaxies",
"unbarred tight spiral galaxies",
"unbarred loose spiral galaxies",
"edge-on galaxies without bulge",
"edge-on galaxies with bulge"
] |
matthieulel/swinv2-small-patch4-window8-256-finetuned-galaxy10-decals
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-small-patch4-window8-256-finetuned-galaxy10-decals
This model is a fine-tuned version of [microsoft/swinv2-small-patch4-window8-256](https://huggingface.co/microsoft/swinv2-small-patch4-window8-256) on the matthieulel/galaxy10_decals dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4138
- Accuracy: 0.8647
- Precision: 0.8650
- Recall: 0.8647
- F1: 0.8612
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.6991 | 0.99 | 62 | 1.4106 | 0.5011 | 0.4620 | 0.5011 | 0.4641 |
| 0.9843 | 2.0 | 125 | 0.8254 | 0.7148 | 0.7390 | 0.7148 | 0.7097 |
| 0.8115 | 2.99 | 187 | 0.6271 | 0.7773 | 0.7700 | 0.7773 | 0.7671 |
| 0.6956 | 4.0 | 250 | 0.5679 | 0.8061 | 0.8029 | 0.8061 | 0.7967 |
| 0.6167 | 4.99 | 312 | 0.5484 | 0.8281 | 0.8271 | 0.8281 | 0.8247 |
| 0.6291 | 6.0 | 375 | 0.5184 | 0.8191 | 0.8241 | 0.8191 | 0.8123 |
| 0.6113 | 6.99 | 437 | 0.5175 | 0.8134 | 0.8149 | 0.8134 | 0.8097 |
| 0.5468 | 8.0 | 500 | 0.4897 | 0.8309 | 0.8363 | 0.8309 | 0.8283 |
| 0.567 | 8.99 | 562 | 0.4459 | 0.8568 | 0.8594 | 0.8568 | 0.8529 |
| 0.5449 | 10.0 | 625 | 0.4544 | 0.8393 | 0.8390 | 0.8393 | 0.8353 |
| 0.5437 | 10.99 | 687 | 0.4528 | 0.8388 | 0.8410 | 0.8388 | 0.8375 |
| 0.4754 | 12.0 | 750 | 0.4524 | 0.8422 | 0.8421 | 0.8422 | 0.8396 |
| 0.5121 | 12.99 | 812 | 0.4840 | 0.8382 | 0.8415 | 0.8382 | 0.8349 |
| 0.5074 | 14.0 | 875 | 0.4138 | 0.8647 | 0.8650 | 0.8647 | 0.8612 |
| 0.4567 | 14.99 | 937 | 0.4339 | 0.8484 | 0.8479 | 0.8484 | 0.8473 |
| 0.4686 | 16.0 | 1000 | 0.4391 | 0.8540 | 0.8521 | 0.8540 | 0.8504 |
| 0.414 | 16.99 | 1062 | 0.4626 | 0.8422 | 0.8443 | 0.8422 | 0.8388 |
| 0.4382 | 18.0 | 1125 | 0.4116 | 0.8568 | 0.8558 | 0.8568 | 0.8541 |
| 0.4322 | 18.99 | 1187 | 0.4506 | 0.8512 | 0.8529 | 0.8512 | 0.8496 |
| 0.4424 | 20.0 | 1250 | 0.4300 | 0.8568 | 0.8542 | 0.8568 | 0.8538 |
| 0.4062 | 20.99 | 1312 | 0.4609 | 0.8608 | 0.8597 | 0.8608 | 0.8578 |
| 0.4459 | 22.0 | 1375 | 0.4517 | 0.8568 | 0.8580 | 0.8568 | 0.8551 |
| 0.4109 | 22.99 | 1437 | 0.4490 | 0.8534 | 0.8534 | 0.8534 | 0.8513 |
| 0.3984 | 24.0 | 1500 | 0.4434 | 0.8619 | 0.8606 | 0.8619 | 0.8601 |
| 0.4034 | 24.99 | 1562 | 0.4613 | 0.8596 | 0.8577 | 0.8596 | 0.8571 |
| 0.3682 | 26.0 | 1625 | 0.4493 | 0.8596 | 0.8591 | 0.8596 | 0.8573 |
| 0.3779 | 26.99 | 1687 | 0.4366 | 0.8591 | 0.8581 | 0.8591 | 0.8575 |
| 0.3965 | 28.0 | 1750 | 0.4370 | 0.8636 | 0.8616 | 0.8636 | 0.8609 |
| 0.3712 | 28.99 | 1812 | 0.4380 | 0.8591 | 0.8576 | 0.8591 | 0.8568 |
| 0.3776 | 29.76 | 1860 | 0.4389 | 0.8630 | 0.8624 | 0.8630 | 0.8612 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.15.1
|
[
"disturbed galaxies",
"merging galaxies",
"round smooth galaxies",
"in-between round smooth galaxies",
"cigar shaped smooth galaxies",
"barred spiral galaxies",
"unbarred tight spiral galaxies",
"unbarred loose spiral galaxies",
"edge-on galaxies without bulge",
"edge-on galaxies with bulge"
] |
matthieulel/swinv2-small-patch4-window16-256-finetuned-galaxy10-decals
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-small-patch4-window16-256-finetuned-galaxy10-decals
This model is a fine-tuned version of [microsoft/swinv2-small-patch4-window16-256](https://huggingface.co/microsoft/swinv2-small-patch4-window16-256) on the matthieulel/galaxy10_decals dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4406
- Accuracy: 0.8602
- Precision: 0.8577
- Recall: 0.8602
- F1: 0.8585
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.6168 | 0.99 | 62 | 1.3397 | 0.5006 | 0.4880 | 0.5006 | 0.4599 |
| 0.9396 | 2.0 | 125 | 0.7823 | 0.7463 | 0.7602 | 0.7463 | 0.7410 |
| 0.782 | 2.99 | 187 | 0.5995 | 0.7948 | 0.7937 | 0.7948 | 0.7885 |
| 0.6373 | 4.0 | 250 | 0.5227 | 0.8230 | 0.8192 | 0.8230 | 0.8176 |
| 0.6047 | 4.99 | 312 | 0.5238 | 0.8281 | 0.8272 | 0.8281 | 0.8262 |
| 0.6143 | 6.0 | 375 | 0.5091 | 0.8348 | 0.8429 | 0.8348 | 0.8298 |
| 0.5805 | 6.99 | 437 | 0.4921 | 0.8264 | 0.8275 | 0.8264 | 0.8254 |
| 0.5476 | 8.0 | 500 | 0.4832 | 0.8320 | 0.8409 | 0.8320 | 0.8291 |
| 0.5333 | 8.99 | 562 | 0.4456 | 0.8501 | 0.8500 | 0.8501 | 0.8477 |
| 0.5062 | 10.0 | 625 | 0.4493 | 0.8467 | 0.8480 | 0.8467 | 0.8457 |
| 0.5001 | 10.99 | 687 | 0.4617 | 0.8450 | 0.8468 | 0.8450 | 0.8449 |
| 0.4572 | 12.0 | 750 | 0.4497 | 0.8467 | 0.8450 | 0.8467 | 0.8449 |
| 0.4681 | 12.99 | 812 | 0.4588 | 0.8489 | 0.8486 | 0.8489 | 0.8452 |
| 0.4747 | 14.0 | 875 | 0.4281 | 0.8529 | 0.8554 | 0.8529 | 0.8508 |
| 0.4283 | 14.99 | 937 | 0.4406 | 0.8602 | 0.8577 | 0.8602 | 0.8585 |
| 0.4296 | 16.0 | 1000 | 0.4458 | 0.8534 | 0.8512 | 0.8534 | 0.8498 |
| 0.3734 | 16.99 | 1062 | 0.4623 | 0.8416 | 0.8419 | 0.8416 | 0.8386 |
| 0.3921 | 18.0 | 1125 | 0.4438 | 0.8517 | 0.8506 | 0.8517 | 0.8496 |
| 0.3954 | 18.99 | 1187 | 0.4712 | 0.8467 | 0.8487 | 0.8467 | 0.8446 |
| 0.3995 | 20.0 | 1250 | 0.4648 | 0.8484 | 0.8467 | 0.8484 | 0.8448 |
| 0.3859 | 20.99 | 1312 | 0.4728 | 0.8495 | 0.8487 | 0.8495 | 0.8462 |
| 0.4046 | 22.0 | 1375 | 0.4720 | 0.8472 | 0.8467 | 0.8472 | 0.8453 |
| 0.3651 | 22.99 | 1437 | 0.4837 | 0.8416 | 0.8409 | 0.8416 | 0.8396 |
| 0.3481 | 24.0 | 1500 | 0.4742 | 0.8540 | 0.8522 | 0.8540 | 0.8524 |
| 0.3706 | 24.99 | 1562 | 0.4846 | 0.8478 | 0.8477 | 0.8478 | 0.8455 |
| 0.3278 | 26.0 | 1625 | 0.4798 | 0.8506 | 0.8502 | 0.8506 | 0.8484 |
| 0.3484 | 26.99 | 1687 | 0.4675 | 0.8529 | 0.8538 | 0.8529 | 0.8520 |
| 0.3626 | 28.0 | 1750 | 0.4768 | 0.8450 | 0.8446 | 0.8450 | 0.8429 |
| 0.3324 | 28.99 | 1812 | 0.4725 | 0.8484 | 0.8470 | 0.8484 | 0.8460 |
| 0.3462 | 29.76 | 1860 | 0.4737 | 0.8489 | 0.8486 | 0.8489 | 0.8472 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.15.1
|
[
"disturbed galaxies",
"merging galaxies",
"round smooth galaxies",
"in-between round smooth galaxies",
"cigar shaped smooth galaxies",
"barred spiral galaxies",
"unbarred tight spiral galaxies",
"unbarred loose spiral galaxies",
"edge-on galaxies without bulge",
"edge-on galaxies with bulge"
] |
Augusto777/vit-base-patch16-224-ve-U13-b-120
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-ve-U13-b-120
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6378
- Accuracy: 0.8696
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 120
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.92 | 6 | 1.3853 | 0.3261 |
| 1.3854 | 2.0 | 13 | 1.3764 | 0.6087 |
| 1.3854 | 2.92 | 19 | 1.3484 | 0.5870 |
| 1.3679 | 4.0 | 26 | 1.2873 | 0.5 |
| 1.2945 | 4.92 | 32 | 1.2122 | 0.4130 |
| 1.2945 | 6.0 | 39 | 1.1105 | 0.4130 |
| 1.1527 | 6.92 | 45 | 1.0386 | 0.5652 |
| 0.9999 | 8.0 | 52 | 0.9454 | 0.7174 |
| 0.9999 | 8.92 | 58 | 0.8886 | 0.7174 |
| 0.8606 | 10.0 | 65 | 0.7935 | 0.8261 |
| 0.7153 | 10.92 | 71 | 0.7424 | 0.7826 |
| 0.7153 | 12.0 | 78 | 0.6803 | 0.8043 |
| 0.5691 | 12.92 | 84 | 0.6104 | 0.8261 |
| 0.4187 | 14.0 | 91 | 0.5848 | 0.8043 |
| 0.4187 | 14.92 | 97 | 0.5254 | 0.8478 |
| 0.3203 | 16.0 | 104 | 0.5790 | 0.8261 |
| 0.2248 | 16.92 | 110 | 0.6315 | 0.7826 |
| 0.2248 | 18.0 | 117 | 0.7864 | 0.7391 |
| 0.2384 | 18.92 | 123 | 0.6028 | 0.8043 |
| 0.2437 | 20.0 | 130 | 0.6135 | 0.8043 |
| 0.2437 | 20.92 | 136 | 0.6210 | 0.7826 |
| 0.2309 | 22.0 | 143 | 0.6329 | 0.8043 |
| 0.2309 | 22.92 | 149 | 0.6236 | 0.8261 |
| 0.1367 | 24.0 | 156 | 0.6919 | 0.7826 |
| 0.1318 | 24.92 | 162 | 0.7770 | 0.7391 |
| 0.1318 | 26.0 | 169 | 0.7394 | 0.7609 |
| 0.1228 | 26.92 | 175 | 0.5662 | 0.8261 |
| 0.1173 | 28.0 | 182 | 0.8995 | 0.7391 |
| 0.1173 | 28.92 | 188 | 0.6780 | 0.7826 |
| 0.129 | 30.0 | 195 | 0.7868 | 0.7826 |
| 0.1043 | 30.92 | 201 | 0.7302 | 0.8261 |
| 0.1043 | 32.0 | 208 | 0.7549 | 0.7826 |
| 0.0917 | 32.92 | 214 | 0.6124 | 0.7826 |
| 0.0843 | 34.0 | 221 | 0.6607 | 0.8261 |
| 0.0843 | 34.92 | 227 | 0.6816 | 0.8261 |
| 0.1054 | 36.0 | 234 | 0.6349 | 0.7826 |
| 0.0923 | 36.92 | 240 | 0.7346 | 0.8261 |
| 0.0923 | 38.0 | 247 | 0.7571 | 0.8043 |
| 0.0879 | 38.92 | 253 | 0.7625 | 0.7826 |
| 0.0632 | 40.0 | 260 | 0.7908 | 0.7826 |
| 0.0632 | 40.92 | 266 | 0.8490 | 0.7826 |
| 0.0533 | 42.0 | 273 | 0.8177 | 0.8043 |
| 0.0533 | 42.92 | 279 | 0.8878 | 0.7826 |
| 0.0633 | 44.0 | 286 | 0.6725 | 0.8043 |
| 0.0526 | 44.92 | 292 | 0.7090 | 0.8261 |
| 0.0526 | 46.0 | 299 | 0.7725 | 0.8043 |
| 0.0716 | 46.92 | 305 | 0.7965 | 0.8043 |
| 0.0783 | 48.0 | 312 | 0.9016 | 0.8043 |
| 0.0783 | 48.92 | 318 | 0.9555 | 0.7826 |
| 0.0789 | 50.0 | 325 | 0.9379 | 0.7609 |
| 0.0418 | 50.92 | 331 | 0.7863 | 0.8043 |
| 0.0418 | 52.0 | 338 | 0.7688 | 0.8261 |
| 0.0483 | 52.92 | 344 | 0.7040 | 0.8261 |
| 0.0493 | 54.0 | 351 | 0.7560 | 0.8043 |
| 0.0493 | 54.92 | 357 | 0.9141 | 0.7609 |
| 0.0554 | 56.0 | 364 | 0.7642 | 0.8043 |
| 0.0612 | 56.92 | 370 | 0.7923 | 0.8478 |
| 0.0612 | 58.0 | 377 | 0.8156 | 0.8478 |
| 0.0468 | 58.92 | 383 | 0.6847 | 0.8043 |
| 0.0419 | 60.0 | 390 | 0.6378 | 0.8696 |
| 0.0419 | 60.92 | 396 | 0.8031 | 0.8261 |
| 0.0436 | 62.0 | 403 | 0.7883 | 0.8478 |
| 0.0436 | 62.92 | 409 | 0.8270 | 0.8478 |
| 0.0429 | 64.0 | 416 | 0.8654 | 0.8261 |
| 0.0438 | 64.92 | 422 | 0.7054 | 0.8478 |
| 0.0438 | 66.0 | 429 | 0.6511 | 0.8696 |
| 0.0378 | 66.92 | 435 | 0.7341 | 0.8478 |
| 0.0294 | 68.0 | 442 | 0.8695 | 0.8478 |
| 0.0294 | 68.92 | 448 | 0.8984 | 0.8043 |
| 0.0362 | 70.0 | 455 | 0.9207 | 0.8261 |
| 0.0367 | 70.92 | 461 | 0.9426 | 0.7826 |
| 0.0367 | 72.0 | 468 | 0.9156 | 0.8261 |
| 0.0332 | 72.92 | 474 | 0.9034 | 0.8043 |
| 0.0294 | 74.0 | 481 | 0.9086 | 0.7826 |
| 0.0294 | 74.92 | 487 | 0.8890 | 0.8043 |
| 0.0285 | 76.0 | 494 | 0.8999 | 0.8261 |
| 0.0232 | 76.92 | 500 | 0.9546 | 0.7826 |
| 0.0232 | 78.0 | 507 | 0.9126 | 0.8043 |
| 0.0349 | 78.92 | 513 | 0.9537 | 0.8043 |
| 0.0393 | 80.0 | 520 | 0.9870 | 0.8043 |
| 0.0393 | 80.92 | 526 | 0.9763 | 0.8043 |
| 0.0225 | 82.0 | 533 | 0.9384 | 0.8043 |
| 0.0225 | 82.92 | 539 | 0.8600 | 0.8478 |
| 0.0304 | 84.0 | 546 | 0.8530 | 0.8478 |
| 0.0263 | 84.92 | 552 | 0.8588 | 0.8043 |
| 0.0263 | 86.0 | 559 | 0.8635 | 0.8043 |
| 0.0186 | 86.92 | 565 | 0.8602 | 0.8261 |
| 0.0258 | 88.0 | 572 | 0.8514 | 0.8261 |
| 0.0258 | 88.92 | 578 | 0.8431 | 0.8261 |
| 0.0161 | 90.0 | 585 | 0.8046 | 0.8261 |
| 0.0208 | 90.92 | 591 | 0.8082 | 0.8261 |
| 0.0208 | 92.0 | 598 | 0.8276 | 0.8043 |
| 0.0331 | 92.92 | 604 | 0.7698 | 0.8261 |
| 0.0322 | 94.0 | 611 | 0.8191 | 0.8261 |
| 0.0322 | 94.92 | 617 | 0.9046 | 0.8043 |
| 0.0284 | 96.0 | 624 | 0.9535 | 0.8043 |
| 0.0187 | 96.92 | 630 | 0.9304 | 0.8043 |
| 0.0187 | 98.0 | 637 | 0.8834 | 0.8043 |
| 0.0209 | 98.92 | 643 | 0.8519 | 0.8043 |
| 0.027 | 100.0 | 650 | 0.8522 | 0.8261 |
| 0.027 | 100.92 | 656 | 0.8978 | 0.8261 |
| 0.0218 | 102.0 | 663 | 0.9194 | 0.8261 |
| 0.0218 | 102.92 | 669 | 0.9140 | 0.8261 |
| 0.021 | 104.0 | 676 | 0.9173 | 0.8261 |
| 0.0179 | 104.92 | 682 | 0.9279 | 0.8261 |
| 0.0179 | 106.0 | 689 | 0.9263 | 0.8261 |
| 0.0167 | 106.92 | 695 | 0.9158 | 0.8261 |
| 0.0229 | 108.0 | 702 | 0.9109 | 0.8261 |
| 0.0229 | 108.92 | 708 | 0.9065 | 0.8261 |
| 0.0219 | 110.0 | 715 | 0.9011 | 0.8261 |
| 0.0271 | 110.77 | 720 | 0.9002 | 0.8261 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
Augusto777/vit-base-patch16-224-ve-U14-b-24
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-ve-U14-b-24
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6698
- Accuracy: 0.8478
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 24
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.97 | 7 | 1.3673 | 0.4783 |
| 1.3789 | 1.93 | 14 | 1.2760 | 0.5435 |
| 1.2878 | 2.9 | 21 | 1.1732 | 0.5435 |
| 1.2878 | 4.0 | 29 | 1.0471 | 0.5435 |
| 1.128 | 4.97 | 36 | 0.9960 | 0.5435 |
| 0.9873 | 5.93 | 43 | 0.8939 | 0.6304 |
| 0.8611 | 6.9 | 50 | 0.8650 | 0.6087 |
| 0.8611 | 8.0 | 58 | 0.8442 | 0.6304 |
| 0.7397 | 8.97 | 65 | 0.7331 | 0.7174 |
| 0.6326 | 9.93 | 72 | 0.6698 | 0.8478 |
| 0.6326 | 10.9 | 79 | 0.7430 | 0.7391 |
| 0.5424 | 12.0 | 87 | 0.7030 | 0.7609 |
| 0.4687 | 12.97 | 94 | 0.6352 | 0.8043 |
| 0.404 | 13.93 | 101 | 0.5498 | 0.8043 |
| 0.404 | 14.9 | 108 | 0.5631 | 0.8043 |
| 0.3244 | 16.0 | 116 | 0.5706 | 0.8261 |
| 0.305 | 16.97 | 123 | 0.6010 | 0.8043 |
| 0.2819 | 17.93 | 130 | 0.5845 | 0.7826 |
| 0.2819 | 18.9 | 137 | 0.5594 | 0.8043 |
| 0.2487 | 20.0 | 145 | 0.5567 | 0.8043 |
| 0.2297 | 20.97 | 152 | 0.5489 | 0.8043 |
| 0.2297 | 21.93 | 159 | 0.5556 | 0.7826 |
| 0.2177 | 22.9 | 166 | 0.5519 | 0.8043 |
| 0.2177 | 23.17 | 168 | 0.5515 | 0.8043 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
matthieulel/swinv2-base-patch4-window8-256-finetuned-galaxy10-decals
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-base-patch4-window8-256-finetuned-galaxy10-decals
This model is a fine-tuned version of [microsoft/swinv2-base-patch4-window8-256](https://huggingface.co/microsoft/swinv2-base-patch4-window8-256) on the matthieulel/galaxy10_decals dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4829
- Accuracy: 0.8540
- Precision: 0.8529
- Recall: 0.8540
- F1: 0.8520
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.6195 | 0.99 | 62 | 1.4006 | 0.5101 | 0.4910 | 0.5101 | 0.4782 |
| 0.9423 | 2.0 | 125 | 0.7209 | 0.7616 | 0.7617 | 0.7616 | 0.7531 |
| 0.8171 | 2.99 | 187 | 0.5842 | 0.8010 | 0.7950 | 0.8010 | 0.7938 |
| 0.6609 | 4.0 | 250 | 0.5000 | 0.8224 | 0.8159 | 0.8224 | 0.8143 |
| 0.5927 | 4.99 | 312 | 0.5367 | 0.8191 | 0.8211 | 0.8191 | 0.8184 |
| 0.624 | 6.0 | 375 | 0.4946 | 0.8286 | 0.8295 | 0.8286 | 0.8212 |
| 0.5891 | 6.99 | 437 | 0.5068 | 0.8219 | 0.8244 | 0.8219 | 0.8201 |
| 0.5597 | 8.0 | 500 | 0.5071 | 0.8230 | 0.8382 | 0.8230 | 0.8198 |
| 0.5292 | 8.99 | 562 | 0.4464 | 0.8444 | 0.8462 | 0.8444 | 0.8426 |
| 0.5143 | 10.0 | 625 | 0.4556 | 0.8371 | 0.8420 | 0.8371 | 0.8350 |
| 0.5122 | 10.99 | 687 | 0.4765 | 0.8382 | 0.8433 | 0.8382 | 0.8369 |
| 0.4647 | 12.0 | 750 | 0.4900 | 0.8365 | 0.8443 | 0.8365 | 0.8348 |
| 0.4769 | 12.99 | 812 | 0.4639 | 0.8427 | 0.8475 | 0.8427 | 0.8396 |
| 0.4804 | 14.0 | 875 | 0.4468 | 0.8484 | 0.8499 | 0.8484 | 0.8461 |
| 0.4452 | 14.99 | 937 | 0.4492 | 0.8512 | 0.8522 | 0.8512 | 0.8505 |
| 0.4283 | 16.0 | 1000 | 0.4660 | 0.8433 | 0.8446 | 0.8433 | 0.8401 |
| 0.3788 | 16.99 | 1062 | 0.4689 | 0.8478 | 0.8454 | 0.8478 | 0.8444 |
| 0.41 | 18.0 | 1125 | 0.4543 | 0.8506 | 0.8502 | 0.8506 | 0.8480 |
| 0.4007 | 18.99 | 1187 | 0.4766 | 0.8478 | 0.8511 | 0.8478 | 0.8455 |
| 0.406 | 20.0 | 1250 | 0.4716 | 0.8478 | 0.8474 | 0.8478 | 0.8444 |
| 0.3777 | 20.99 | 1312 | 0.5026 | 0.8455 | 0.8454 | 0.8455 | 0.8430 |
| 0.3972 | 22.0 | 1375 | 0.5108 | 0.8393 | 0.8402 | 0.8393 | 0.8371 |
| 0.3665 | 22.99 | 1437 | 0.4934 | 0.8489 | 0.8498 | 0.8489 | 0.8474 |
| 0.3569 | 24.0 | 1500 | 0.4989 | 0.8495 | 0.8495 | 0.8495 | 0.8478 |
| 0.3735 | 24.99 | 1562 | 0.4918 | 0.8495 | 0.8468 | 0.8495 | 0.8468 |
| 0.3301 | 26.0 | 1625 | 0.4927 | 0.8512 | 0.8512 | 0.8512 | 0.8488 |
| 0.3438 | 26.99 | 1687 | 0.4829 | 0.8540 | 0.8529 | 0.8540 | 0.8520 |
| 0.3553 | 28.0 | 1750 | 0.4935 | 0.8540 | 0.8530 | 0.8540 | 0.8512 |
| 0.3312 | 28.99 | 1812 | 0.4882 | 0.8517 | 0.8509 | 0.8517 | 0.8491 |
| 0.3319 | 29.76 | 1860 | 0.4876 | 0.8517 | 0.8516 | 0.8517 | 0.8497 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.15.1
|
[
"disturbed galaxies",
"merging galaxies",
"round smooth galaxies",
"in-between round smooth galaxies",
"cigar shaped smooth galaxies",
"barred spiral galaxies",
"unbarred tight spiral galaxies",
"unbarred loose spiral galaxies",
"edge-on galaxies without bulge",
"edge-on galaxies with bulge"
] |
Augusto777/vit-base-patch16-224-ve-U15-b-80
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-ve-U15-b-80
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5357
- Accuracy: 0.8696
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 80
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.97 | 7 | 1.3872 | 0.1304 |
| 1.3844 | 1.93 | 14 | 1.3826 | 0.1739 |
| 1.3552 | 2.9 | 21 | 1.3500 | 0.2391 |
| 1.3552 | 4.0 | 29 | 1.2528 | 0.2174 |
| 1.2458 | 4.97 | 36 | 1.1474 | 0.2391 |
| 1.0668 | 5.93 | 43 | 1.1376 | 0.3913 |
| 0.9335 | 6.9 | 50 | 1.0063 | 0.4348 |
| 0.9335 | 8.0 | 58 | 0.9238 | 0.5870 |
| 0.8059 | 8.97 | 65 | 0.8241 | 0.8043 |
| 0.6774 | 9.93 | 72 | 0.7625 | 0.7826 |
| 0.6774 | 10.9 | 79 | 0.7096 | 0.8043 |
| 0.5346 | 12.0 | 87 | 0.6368 | 0.8261 |
| 0.4427 | 12.97 | 94 | 0.5741 | 0.8261 |
| 0.3557 | 13.93 | 101 | 0.5441 | 0.8261 |
| 0.3557 | 14.9 | 108 | 0.5258 | 0.8478 |
| 0.2637 | 16.0 | 116 | 0.5430 | 0.8261 |
| 0.2356 | 16.97 | 123 | 0.5773 | 0.8261 |
| 0.1844 | 17.93 | 130 | 0.7222 | 0.7391 |
| 0.1844 | 18.9 | 137 | 0.6537 | 0.7826 |
| 0.1765 | 20.0 | 145 | 0.5458 | 0.8043 |
| 0.1362 | 20.97 | 152 | 0.5777 | 0.8478 |
| 0.1362 | 21.93 | 159 | 0.6256 | 0.7826 |
| 0.1467 | 22.9 | 166 | 0.7330 | 0.7826 |
| 0.1614 | 24.0 | 174 | 0.7743 | 0.7609 |
| 0.1246 | 24.97 | 181 | 0.5763 | 0.8261 |
| 0.1246 | 25.93 | 188 | 0.5994 | 0.8261 |
| 0.1058 | 26.9 | 195 | 0.6926 | 0.8043 |
| 0.0943 | 28.0 | 203 | 0.6406 | 0.8478 |
| 0.1 | 28.97 | 210 | 0.6940 | 0.7609 |
| 0.1 | 29.93 | 217 | 0.6193 | 0.8261 |
| 0.0865 | 30.9 | 224 | 0.5357 | 0.8696 |
| 0.0852 | 32.0 | 232 | 0.8015 | 0.7826 |
| 0.0852 | 32.97 | 239 | 0.6680 | 0.8478 |
| 0.0721 | 33.93 | 246 | 0.8469 | 0.7826 |
| 0.0749 | 34.9 | 253 | 0.6682 | 0.8261 |
| 0.0876 | 36.0 | 261 | 0.7474 | 0.8261 |
| 0.0876 | 36.97 | 268 | 0.6501 | 0.8696 |
| 0.0677 | 37.93 | 275 | 0.6918 | 0.8043 |
| 0.0574 | 38.9 | 282 | 0.7001 | 0.8478 |
| 0.0573 | 40.0 | 290 | 0.7119 | 0.8261 |
| 0.0573 | 40.97 | 297 | 0.8317 | 0.8043 |
| 0.0663 | 41.93 | 304 | 0.7456 | 0.8043 |
| 0.0685 | 42.9 | 311 | 0.7242 | 0.8261 |
| 0.0685 | 44.0 | 319 | 0.6971 | 0.8043 |
| 0.0431 | 44.97 | 326 | 0.7439 | 0.8261 |
| 0.0529 | 45.93 | 333 | 0.8210 | 0.8043 |
| 0.0698 | 46.9 | 340 | 0.7114 | 0.8043 |
| 0.0698 | 48.0 | 348 | 0.6985 | 0.8478 |
| 0.054 | 48.97 | 355 | 0.8860 | 0.8261 |
| 0.0528 | 49.93 | 362 | 0.8942 | 0.8043 |
| 0.0528 | 50.9 | 369 | 0.9411 | 0.8043 |
| 0.0478 | 52.0 | 377 | 0.8705 | 0.7826 |
| 0.041 | 52.97 | 384 | 0.8130 | 0.8261 |
| 0.0321 | 53.93 | 391 | 0.7682 | 0.8043 |
| 0.0321 | 54.9 | 398 | 0.8696 | 0.7826 |
| 0.0318 | 56.0 | 406 | 0.9598 | 0.8043 |
| 0.0416 | 56.97 | 413 | 0.7291 | 0.8261 |
| 0.0477 | 57.93 | 420 | 0.6869 | 0.8478 |
| 0.0477 | 58.9 | 427 | 0.7055 | 0.8478 |
| 0.0307 | 60.0 | 435 | 0.7415 | 0.8478 |
| 0.032 | 60.97 | 442 | 0.8024 | 0.8261 |
| 0.032 | 61.93 | 449 | 0.7856 | 0.8478 |
| 0.0232 | 62.9 | 456 | 0.7251 | 0.8043 |
| 0.0267 | 64.0 | 464 | 0.7231 | 0.8478 |
| 0.0456 | 64.97 | 471 | 0.7326 | 0.8696 |
| 0.0456 | 65.93 | 478 | 0.7300 | 0.8696 |
| 0.0359 | 66.9 | 485 | 0.7293 | 0.8696 |
| 0.0199 | 68.0 | 493 | 0.7361 | 0.8696 |
| 0.0235 | 68.97 | 500 | 0.7362 | 0.8696 |
| 0.0235 | 69.93 | 507 | 0.7513 | 0.8696 |
| 0.0368 | 70.9 | 514 | 0.7513 | 0.8696 |
| 0.0254 | 72.0 | 522 | 0.7586 | 0.8696 |
| 0.0254 | 72.97 | 529 | 0.7574 | 0.8696 |
| 0.029 | 73.93 | 536 | 0.7685 | 0.8478 |
| 0.0302 | 74.9 | 543 | 0.7653 | 0.8478 |
| 0.0305 | 76.0 | 551 | 0.7637 | 0.8261 |
| 0.0305 | 76.97 | 558 | 0.7645 | 0.8478 |
| 0.0301 | 77.24 | 560 | 0.7645 | 0.8478 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
Augusto777/vit-base-patch16-224-ve-U16-b-80
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-ve-U16-b-80
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5265
- Accuracy: 0.8696
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 80
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 8 | 1.3828 | 0.4565 |
| 1.3846 | 2.0 | 16 | 1.3610 | 0.5 |
| 1.3611 | 3.0 | 24 | 1.2967 | 0.4348 |
| 1.2759 | 4.0 | 32 | 1.1830 | 0.3913 |
| 1.1164 | 5.0 | 40 | 1.0824 | 0.3696 |
| 1.1164 | 6.0 | 48 | 0.9665 | 0.5 |
| 0.98 | 7.0 | 56 | 0.9036 | 0.5652 |
| 0.8533 | 8.0 | 64 | 0.8348 | 0.7826 |
| 0.7321 | 9.0 | 72 | 0.7397 | 0.8261 |
| 0.6075 | 10.0 | 80 | 0.7155 | 0.7174 |
| 0.6075 | 11.0 | 88 | 0.6006 | 0.8261 |
| 0.4901 | 12.0 | 96 | 0.5265 | 0.8696 |
| 0.3967 | 13.0 | 104 | 0.5214 | 0.8043 |
| 0.2746 | 14.0 | 112 | 0.5433 | 0.7826 |
| 0.2366 | 15.0 | 120 | 0.6141 | 0.7826 |
| 0.2366 | 16.0 | 128 | 0.6658 | 0.7826 |
| 0.2247 | 17.0 | 136 | 0.6327 | 0.7609 |
| 0.2047 | 18.0 | 144 | 0.5339 | 0.8261 |
| 0.1592 | 19.0 | 152 | 0.6647 | 0.8043 |
| 0.1349 | 20.0 | 160 | 0.7483 | 0.7609 |
| 0.1349 | 21.0 | 168 | 0.7387 | 0.8043 |
| 0.1285 | 22.0 | 176 | 0.8261 | 0.7609 |
| 0.1104 | 23.0 | 184 | 0.7151 | 0.8043 |
| 0.1191 | 24.0 | 192 | 0.7785 | 0.7609 |
| 0.1074 | 25.0 | 200 | 0.8902 | 0.7391 |
| 0.1074 | 26.0 | 208 | 0.7757 | 0.7826 |
| 0.0947 | 27.0 | 216 | 0.7157 | 0.7826 |
| 0.0973 | 28.0 | 224 | 0.8198 | 0.7826 |
| 0.0992 | 29.0 | 232 | 0.7240 | 0.8261 |
| 0.0766 | 30.0 | 240 | 0.6993 | 0.8043 |
| 0.0766 | 31.0 | 248 | 0.5688 | 0.8261 |
| 0.0606 | 32.0 | 256 | 0.6202 | 0.8478 |
| 0.0633 | 33.0 | 264 | 0.6740 | 0.8261 |
| 0.0681 | 34.0 | 272 | 0.6782 | 0.8261 |
| 0.0591 | 35.0 | 280 | 0.8370 | 0.7826 |
| 0.0591 | 36.0 | 288 | 0.6995 | 0.8261 |
| 0.0731 | 37.0 | 296 | 0.7560 | 0.8261 |
| 0.0618 | 38.0 | 304 | 0.6730 | 0.8261 |
| 0.0543 | 39.0 | 312 | 0.7166 | 0.8261 |
| 0.0574 | 40.0 | 320 | 0.7332 | 0.8261 |
| 0.0574 | 41.0 | 328 | 0.6982 | 0.8261 |
| 0.0707 | 42.0 | 336 | 0.7183 | 0.7826 |
| 0.0646 | 43.0 | 344 | 0.7568 | 0.8043 |
| 0.0476 | 44.0 | 352 | 0.8521 | 0.8043 |
| 0.047 | 45.0 | 360 | 0.8992 | 0.8043 |
| 0.047 | 46.0 | 368 | 0.8749 | 0.7826 |
| 0.0406 | 47.0 | 376 | 0.9928 | 0.7826 |
| 0.0361 | 48.0 | 384 | 0.9659 | 0.7826 |
| 0.042 | 49.0 | 392 | 0.8839 | 0.8043 |
| 0.0421 | 50.0 | 400 | 0.8613 | 0.7391 |
| 0.0421 | 51.0 | 408 | 0.9006 | 0.7826 |
| 0.0396 | 52.0 | 416 | 0.8627 | 0.7826 |
| 0.0255 | 53.0 | 424 | 0.8717 | 0.7609 |
| 0.0359 | 54.0 | 432 | 1.0508 | 0.7609 |
| 0.0424 | 55.0 | 440 | 0.9745 | 0.7826 |
| 0.0424 | 56.0 | 448 | 0.9511 | 0.8043 |
| 0.0364 | 57.0 | 456 | 0.9239 | 0.8043 |
| 0.0444 | 58.0 | 464 | 0.9500 | 0.7826 |
| 0.0445 | 59.0 | 472 | 0.9266 | 0.8261 |
| 0.0368 | 60.0 | 480 | 0.9346 | 0.8043 |
| 0.0368 | 61.0 | 488 | 0.9513 | 0.8043 |
| 0.0278 | 62.0 | 496 | 0.9505 | 0.8043 |
| 0.0324 | 63.0 | 504 | 0.9625 | 0.8261 |
| 0.0308 | 64.0 | 512 | 0.9720 | 0.8261 |
| 0.0185 | 65.0 | 520 | 0.9515 | 0.8043 |
| 0.0185 | 66.0 | 528 | 0.9278 | 0.8043 |
| 0.0323 | 67.0 | 536 | 0.9315 | 0.8261 |
| 0.0251 | 68.0 | 544 | 0.9794 | 0.8043 |
| 0.0297 | 69.0 | 552 | 1.0378 | 0.7609 |
| 0.0257 | 70.0 | 560 | 1.0336 | 0.7609 |
| 0.0257 | 71.0 | 568 | 1.0577 | 0.7826 |
| 0.02 | 72.0 | 576 | 1.0332 | 0.8043 |
| 0.0226 | 73.0 | 584 | 1.0165 | 0.8043 |
| 0.0257 | 74.0 | 592 | 1.0194 | 0.8043 |
| 0.0232 | 75.0 | 600 | 1.0026 | 0.8043 |
| 0.0232 | 76.0 | 608 | 1.0073 | 0.8043 |
| 0.0274 | 77.0 | 616 | 1.0099 | 0.8043 |
| 0.0182 | 78.0 | 624 | 1.0170 | 0.8043 |
| 0.0375 | 79.0 | 632 | 1.0139 | 0.8043 |
| 0.029 | 80.0 | 640 | 1.0128 | 0.8043 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
mateoluksenberg/dit-base-Classifier_CM05
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-base-Classifier_CM05
This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0653
- Accuracy: 1.0
- Weighted f1: 1.0
- Micro f1: 1.0
- Macro f1: 1.0
- Weighted recall: 1.0
- Micro recall: 1.0
- Macro recall: 1.0
- Weighted precision: 1.0
- Micro precision: 1.0
- Macro precision: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 18
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted f1 | Micro f1 | Macro f1 | Weighted recall | Micro recall | Macro recall | Weighted precision | Micro precision | Macro precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:--------:|:--------:|:---------------:|:------------:|:------------:|:------------------:|:---------------:|:---------------:|
| 0.5553 | 1.0 | 1 | 2.7914 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.5553 | 2.0 | 2 | 2.4681 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.5553 | 3.0 | 3 | 1.8688 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.5553 | 4.0 | 4 | 1.3606 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.5553 | 5.0 | 5 | 0.9827 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.5553 | 6.0 | 6 | 0.7992 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.5553 | 7.0 | 7 | 0.5435 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.3458 | 8.0 | 8 | 0.3466 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.3458 | 9.0 | 9 | 0.2157 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.3458 | 10.0 | 10 | 0.1521 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.3458 | 11.0 | 11 | 0.1251 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.3458 | 12.0 | 12 | 0.1059 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.3458 | 13.0 | 13 | 0.0910 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.3458 | 14.0 | 14 | 0.0807 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.3458 | 15.0 | 15 | 0.0739 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.1206 | 16.0 | 16 | 0.0693 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.1206 | 17.0 | 17 | 0.0666 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.1206 | 18.0 | 18 | 0.0653 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
[
"cm05",
"factura",
"advertisement",
"handwritten",
"scientific_report",
"budget",
"scientific_publication",
"presentation",
"file_folder",
"memo",
"resume",
"invoice",
"letter",
"questionnaire",
"form",
"news_article"
] |
MNCH1/traffic_sign_detection
|

precision recall f1-score support
National Speed Limit 0.3044 0.6650 0.4177 206
Bicycles crossing 0.9317 0.7282 0.8174 206
Children crossing 0.3607 0.2136 0.2683 206
Danger Ahead 0.9333 0.3398 0.4982 206
Dont Go Left or Right 0.9293 0.8932 0.9109 206
Dont Go Right 0.0000 0.0000 0.0000 206
Dont Go straight 0.8850 0.9709 0.9259 206
Dont Go straight or left 0.6667 0.7902 0.7232 205
Dont overtake from Left 0.0440 0.0683 0.0535 205
Fences 0.6573 0.7913 0.7181 206
Go Left 0.7778 0.0680 0.1250 206
Go Left or right 0.0000 0.0000 0.0000 205
Go Right 0.3942 0.9854 0.5631 206
Go left or straight 0.4671 0.3805 0.4194 205
Go right or straight 0.1042 0.0244 0.0395 205
Go straight 0.6719 0.2098 0.3197 205
Go straight or right 0.4218 1.0000 0.5933 205
Heavy Vehicle Accidents 0.3089 0.1845 0.2310 206
Horn 0.4808 0.4878 0.4843 205
No Car 0.8730 0.8010 0.8354 206
No Uturn 0.9342 0.6927 0.7955 205
No entry 0.9600 0.5825 0.7251 206
No horn 1.0000 0.0680 0.1273 206
No stopping 0.9290 0.8293 0.8763 205
Road Divider 0.4573 0.4439 0.4505 205
Roundabout mandatory 0.4455 0.6976 0.5437 205
Speed limit (15km/h) 0.0000 0.0000 0.0000 205
Speed limit (30km/h) 0.7702 0.6019 0.6757 206
Speed limit (40km/h) 0.2105 0.1553 0.1788 206
Speed limit (50km/h) 0.0299 0.0341 0.0319 205
Speed limit (5km/h) 0.2294 0.5146 0.3174 206
Speed limit (60km/h) 0.3745 0.9417 0.5359 206
Speed limit (70km/h) 0.0000 0.0000 0.0000 206
Train Crossing 0.3046 0.8932 0.4543 206
Under Construction 0.7071 0.8204 0.7596 206
Unknown 0.3875 0.3010 0.3388 206
Uturn 1.0000 0.0732 0.1364 205
Zebra Crossing 0.2018 0.1122 0.1442 205
ZigZag Curve 0.6519 0.5000 0.5659 206
keep Left 0.0488 0.0388 0.0432 206
keep Right 0.0000 0.0000 0.0000 206
watch out for cars 0.5438 1.0000 0.7045 205
speed limit (80km/h) 0.9062 0.4223 0.5762 206
Dangerous curve to the left 0.2251 0.5854 0.3252 205
Dangerous curve to the right 0.3443 0.3058 0.3239 206
Dont Go Left 0.1541 0.2573 0.1927 206
accuracy 0.4451 9458
macro avg 0.4789 0.4451 0.4080 9458
weighted avg 0.4790 0.4451 0.4081 9458
|
[
"bicycles crossing",
"children crossing",
"danger ahead",
"dangerous curve to the left",
"dangerous curve to the right",
"dont go left",
"dont go left or right",
"dont go right",
"dont go straight",
"dont go straight or left",
"dont overtake from left",
"fences",
"go left",
"go left or right",
"go right",
"go left or straight",
"go right or straight",
"go straight",
"go straight or right",
"heavy vehicle accidents",
"horn",
"national speed limit",
"no car",
"no uturn",
"no entry",
"no horn",
"no stopping",
"road divider",
"roundabout mandatory",
"speed limit (15km/h)",
"speed limit (30km/h)",
"speed limit (40km/h)",
"speed limit (50km/h)",
"speed limit (5km/h)",
"speed limit (60km/h)",
"speed limit (70km/h)",
"train crossing",
"under construction",
"unknown",
"uturn",
"zebra crossing",
"zigzag curve",
"keep left",
"keep right",
"speed limit (80km/h)",
"watch out for cars"
] |
talli96123/meat_calssify_fresh_crop_fixed_epoch100_V_0_1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# meat_calssify_fresh_crop_fixed_epoch100_V_0_1
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5742
- Accuracy: 0.8228
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0985 | 1.0 | 10 | 1.1041 | 0.2975 |
| 1.0824 | 2.0 | 20 | 1.0871 | 0.3544 |
| 1.0624 | 3.0 | 30 | 1.0830 | 0.3418 |
| 1.0341 | 4.0 | 40 | 1.0578 | 0.3797 |
| 0.9842 | 5.0 | 50 | 1.0021 | 0.4873 |
| 0.9433 | 6.0 | 60 | 1.0362 | 0.4557 |
| 0.8904 | 7.0 | 70 | 0.9132 | 0.5886 |
| 0.8515 | 8.0 | 80 | 0.8612 | 0.6582 |
| 0.8094 | 9.0 | 90 | 0.8405 | 0.6646 |
| 0.732 | 10.0 | 100 | 0.8722 | 0.5886 |
| 0.7408 | 11.0 | 110 | 0.7310 | 0.7215 |
| 0.6286 | 12.0 | 120 | 0.7742 | 0.6456 |
| 0.58 | 13.0 | 130 | 0.8024 | 0.6646 |
| 0.5519 | 14.0 | 140 | 0.7325 | 0.6835 |
| 0.498 | 15.0 | 150 | 0.6873 | 0.7215 |
| 0.4682 | 16.0 | 160 | 0.6757 | 0.7152 |
| 0.4201 | 17.0 | 170 | 0.7193 | 0.7025 |
| 0.4186 | 18.0 | 180 | 0.6616 | 0.7278 |
| 0.3793 | 19.0 | 190 | 0.6906 | 0.7532 |
| 0.322 | 20.0 | 200 | 0.7668 | 0.7089 |
| 0.3441 | 21.0 | 210 | 0.6939 | 0.7468 |
| 0.3146 | 22.0 | 220 | 0.6748 | 0.7342 |
| 0.3451 | 23.0 | 230 | 0.6401 | 0.7658 |
| 0.3001 | 24.0 | 240 | 0.6490 | 0.7089 |
| 0.2884 | 25.0 | 250 | 0.6640 | 0.7405 |
| 0.251 | 26.0 | 260 | 0.6769 | 0.7532 |
| 0.2386 | 27.0 | 270 | 0.6259 | 0.7595 |
| 0.236 | 28.0 | 280 | 0.7949 | 0.6962 |
| 0.2493 | 29.0 | 290 | 0.7442 | 0.7405 |
| 0.2249 | 30.0 | 300 | 0.7024 | 0.7532 |
| 0.2959 | 31.0 | 310 | 0.6887 | 0.7595 |
| 0.2601 | 32.0 | 320 | 0.7209 | 0.7025 |
| 0.2116 | 33.0 | 330 | 0.6414 | 0.7975 |
| 0.1982 | 34.0 | 340 | 0.8802 | 0.6899 |
| 0.2018 | 35.0 | 350 | 0.6697 | 0.7468 |
| 0.2038 | 36.0 | 360 | 0.6632 | 0.7532 |
| 0.2074 | 37.0 | 370 | 0.6776 | 0.7911 |
| 0.1718 | 38.0 | 380 | 0.5189 | 0.7975 |
| 0.1699 | 39.0 | 390 | 0.6332 | 0.7532 |
| 0.1563 | 40.0 | 400 | 0.5616 | 0.8038 |
| 0.1703 | 41.0 | 410 | 0.6547 | 0.7658 |
| 0.212 | 42.0 | 420 | 0.8257 | 0.7215 |
| 0.2272 | 43.0 | 430 | 0.6542 | 0.7468 |
| 0.2264 | 44.0 | 440 | 0.7265 | 0.7278 |
| 0.1506 | 45.0 | 450 | 0.7796 | 0.7468 |
| 0.1459 | 46.0 | 460 | 0.7290 | 0.7468 |
| 0.171 | 47.0 | 470 | 0.6025 | 0.7975 |
| 0.1387 | 48.0 | 480 | 0.7577 | 0.7532 |
| 0.1591 | 49.0 | 490 | 0.7600 | 0.7658 |
| 0.1378 | 50.0 | 500 | 0.7683 | 0.7468 |
| 0.1364 | 51.0 | 510 | 0.7009 | 0.7658 |
| 0.1514 | 52.0 | 520 | 0.6590 | 0.7722 |
| 0.1638 | 53.0 | 530 | 0.6948 | 0.7785 |
| 0.1263 | 54.0 | 540 | 0.6051 | 0.7785 |
| 0.1391 | 55.0 | 550 | 0.7105 | 0.7785 |
| 0.1449 | 56.0 | 560 | 0.6240 | 0.7785 |
| 0.1065 | 57.0 | 570 | 0.6473 | 0.7911 |
| 0.1704 | 58.0 | 580 | 0.7020 | 0.7848 |
| 0.1323 | 59.0 | 590 | 0.7223 | 0.7468 |
| 0.1574 | 60.0 | 600 | 0.7592 | 0.7658 |
| 0.0914 | 61.0 | 610 | 0.6220 | 0.8038 |
| 0.1664 | 62.0 | 620 | 0.7561 | 0.7658 |
| 0.1293 | 63.0 | 630 | 0.6690 | 0.7848 |
| 0.0981 | 64.0 | 640 | 0.7014 | 0.7722 |
| 0.1098 | 65.0 | 650 | 0.6289 | 0.8038 |
| 0.1615 | 66.0 | 660 | 0.7278 | 0.7532 |
| 0.1164 | 67.0 | 670 | 0.7004 | 0.7658 |
| 0.1279 | 68.0 | 680 | 0.7258 | 0.7911 |
| 0.133 | 69.0 | 690 | 0.5725 | 0.8291 |
| 0.0848 | 70.0 | 700 | 0.4775 | 0.8544 |
| 0.1125 | 71.0 | 710 | 0.5514 | 0.8165 |
| 0.0869 | 72.0 | 720 | 0.5685 | 0.7848 |
| 0.0801 | 73.0 | 730 | 0.6424 | 0.7975 |
| 0.0954 | 74.0 | 740 | 0.6587 | 0.7848 |
| 0.078 | 75.0 | 750 | 0.5911 | 0.7975 |
| 0.0913 | 76.0 | 760 | 0.6705 | 0.7911 |
| 0.0712 | 77.0 | 770 | 0.7348 | 0.8038 |
| 0.0781 | 78.0 | 780 | 0.7332 | 0.7658 |
| 0.0898 | 79.0 | 790 | 0.6067 | 0.8101 |
| 0.0715 | 80.0 | 800 | 0.5991 | 0.8038 |
| 0.0713 | 81.0 | 810 | 0.6865 | 0.7975 |
| 0.0899 | 82.0 | 820 | 0.6586 | 0.7975 |
| 0.0739 | 83.0 | 830 | 0.7000 | 0.7975 |
| 0.0861 | 84.0 | 840 | 0.6194 | 0.8101 |
| 0.0752 | 85.0 | 850 | 0.6037 | 0.7911 |
| 0.0665 | 86.0 | 860 | 0.5696 | 0.8228 |
| 0.0693 | 87.0 | 870 | 0.5897 | 0.8291 |
| 0.0736 | 88.0 | 880 | 0.6536 | 0.8228 |
| 0.0618 | 89.0 | 890 | 0.6509 | 0.8038 |
| 0.0835 | 90.0 | 900 | 0.6343 | 0.8038 |
| 0.0826 | 91.0 | 910 | 0.7298 | 0.7722 |
| 0.0736 | 92.0 | 920 | 0.5925 | 0.8165 |
| 0.0656 | 93.0 | 930 | 0.6565 | 0.8228 |
| 0.0605 | 94.0 | 940 | 0.5389 | 0.8481 |
| 0.0451 | 95.0 | 950 | 0.4811 | 0.8481 |
| 0.0807 | 96.0 | 960 | 0.6439 | 0.7848 |
| 0.0579 | 97.0 | 970 | 0.7767 | 0.7468 |
| 0.0639 | 98.0 | 980 | 0.5558 | 0.8038 |
| 0.051 | 99.0 | 990 | 0.6174 | 0.8038 |
| 0.044 | 100.0 | 1000 | 0.5742 | 0.8228 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0
- Datasets 2.19.2
- Tokenizers 0.19.1
|
[
"fresh1",
"fresh2",
"fresh3"
] |
Augusto777/swin-tiny-patch4-window7-224-ve-U13-b-80
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-ve-U13-b-80
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9190
- Accuracy: 0.8043
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 80
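Note how the batch-size entries above relate: the effective batch size is `train_batch_size × gradient_accumulation_steps = 32 × 4 = 128`, the reported `total_train_batch_size`; gradients from four 32-sample micro-batches are accumulated before each optimizer step.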
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.92 | 6 | 1.3859 | 0.1304 |
| 1.3859 | 2.0 | 13 | 1.3828 | 0.2826 |
| 1.3859 | 2.92 | 19 | 1.3769 | 0.3261 |
| 1.379 | 4.0 | 26 | 1.3566 | 0.2826 |
| 1.3356 | 4.92 | 32 | 1.3162 | 0.2391 |
| 1.3356 | 6.0 | 39 | 1.2093 | 0.3478 |
| 1.2023 | 6.92 | 45 | 1.1349 | 0.4565 |
| 1.0274 | 8.0 | 52 | 1.0414 | 0.4783 |
| 1.0274 | 8.92 | 58 | 0.9788 | 0.5217 |
| 0.9125 | 10.0 | 65 | 1.0071 | 0.4348 |
| 0.7688 | 10.92 | 71 | 1.0416 | 0.5217 |
| 0.7688 | 12.0 | 78 | 1.0480 | 0.4130 |
| 0.6891 | 12.92 | 84 | 0.9351 | 0.5870 |
| 0.5795 | 14.0 | 91 | 1.0683 | 0.6304 |
| 0.5795 | 14.92 | 97 | 1.0698 | 0.6087 |
| 0.5337 | 16.0 | 104 | 0.9603 | 0.6304 |
| 0.4337 | 16.92 | 110 | 0.7188 | 0.6957 |
| 0.4337 | 18.0 | 117 | 0.7620 | 0.6739 |
| 0.4258 | 18.92 | 123 | 0.9433 | 0.6739 |
| 0.4045 | 20.0 | 130 | 1.0823 | 0.6522 |
| 0.4045 | 20.92 | 136 | 0.7059 | 0.7174 |
| 0.4135 | 22.0 | 143 | 0.7467 | 0.7391 |
| 0.4135 | 22.92 | 149 | 0.7637 | 0.7391 |
| 0.3525 | 24.0 | 156 | 0.8157 | 0.7391 |
| 0.263 | 24.92 | 162 | 0.9995 | 0.7174 |
| 0.263 | 26.0 | 169 | 0.8719 | 0.7609 |
| 0.272 | 26.92 | 175 | 0.9939 | 0.6957 |
| 0.262 | 28.0 | 182 | 0.8639 | 0.7174 |
| 0.262 | 28.92 | 188 | 1.0737 | 0.6522 |
| 0.2282 | 30.0 | 195 | 0.8416 | 0.7174 |
| 0.2098 | 30.92 | 201 | 0.9744 | 0.6739 |
| 0.2098 | 32.0 | 208 | 1.0593 | 0.6087 |
| 0.2141 | 32.92 | 214 | 1.0997 | 0.7174 |
| 0.1759 | 34.0 | 221 | 0.9735 | 0.5870 |
| 0.1759 | 34.92 | 227 | 1.0789 | 0.6957 |
| 0.2042 | 36.0 | 234 | 1.0664 | 0.6957 |
| 0.1591 | 36.92 | 240 | 0.9417 | 0.7609 |
| 0.1591 | 38.0 | 247 | 1.1042 | 0.6739 |
| 0.1579 | 38.92 | 253 | 0.9732 | 0.7609 |
| 0.1626 | 40.0 | 260 | 0.9960 | 0.6957 |
| 0.1626 | 40.92 | 266 | 0.9763 | 0.7391 |
| 0.1458 | 42.0 | 273 | 0.9790 | 0.7391 |
| 0.1458 | 42.92 | 279 | 1.0952 | 0.7174 |
| 0.1317 | 44.0 | 286 | 0.9190 | 0.8043 |
| 0.1255 | 44.92 | 292 | 0.9420 | 0.7391 |
| 0.1255 | 46.0 | 299 | 0.9085 | 0.7391 |
| 0.1352 | 46.92 | 305 | 0.9184 | 0.7174 |
| 0.1311 | 48.0 | 312 | 1.0567 | 0.7609 |
| 0.1311 | 48.92 | 318 | 1.1507 | 0.7174 |
| 0.1501 | 50.0 | 325 | 1.2068 | 0.7174 |
| 0.1088 | 50.92 | 331 | 1.4607 | 0.6957 |
| 0.1088 | 52.0 | 338 | 1.1036 | 0.6739 |
| 0.1152 | 52.92 | 344 | 1.1081 | 0.6957 |
| 0.1141 | 54.0 | 351 | 1.1006 | 0.6957 |
| 0.1141 | 54.92 | 357 | 1.1470 | 0.7174 |
| 0.1307 | 56.0 | 364 | 1.0715 | 0.7609 |
| 0.1273 | 56.92 | 370 | 1.1021 | 0.7174 |
| 0.1273 | 58.0 | 377 | 1.1176 | 0.6957 |
| 0.1066 | 58.92 | 383 | 1.0948 | 0.7174 |
| 0.1046 | 60.0 | 390 | 1.0563 | 0.7391 |
| 0.1046 | 60.92 | 396 | 1.1155 | 0.6957 |
| 0.1129 | 62.0 | 403 | 1.0922 | 0.6957 |
| 0.1129 | 62.92 | 409 | 1.0364 | 0.6957 |
| 0.1031 | 64.0 | 416 | 1.0675 | 0.7174 |
| 0.0808 | 64.92 | 422 | 1.1133 | 0.6957 |
| 0.0808 | 66.0 | 429 | 1.2029 | 0.7174 |
| 0.0783 | 66.92 | 435 | 1.1453 | 0.7174 |
| 0.09 | 68.0 | 442 | 1.0925 | 0.6957 |
| 0.09 | 68.92 | 448 | 1.0999 | 0.7174 |
| 0.0796 | 70.0 | 455 | 1.0971 | 0.7391 |
| 0.0828 | 70.92 | 461 | 1.0923 | 0.7391 |
| 0.0828 | 72.0 | 468 | 1.1061 | 0.7391 |
| 0.0923 | 72.92 | 474 | 1.1173 | 0.7391 |
| 0.092 | 73.85 | 480 | 1.1208 | 0.7391 |
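The evaluation result quoted at the top of this card (loss 0.9190, accuracy 0.8043) matches the epoch-44 row above rather than the final epoch, which suggests, although the card does not state it, that the checkpoint with the best validation performance was restored at the end of training.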
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
Augusto777/swin-tiny-patch4-window7-224-ve-U13-b-12
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-ve-U13-b-12
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9160
- Accuracy: 0.5435
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 8 | 1.3788 | 0.4348 |
| 1.3828 | 2.0 | 16 | 1.3084 | 0.5 |
| 1.2902 | 3.0 | 24 | 1.1908 | 0.4783 |
| 1.1227 | 4.0 | 32 | 1.1055 | 0.4130 |
| 0.9806 | 5.0 | 40 | 1.0173 | 0.5217 |
| 0.9806 | 6.0 | 48 | 0.9396 | 0.5217 |
| 0.8629 | 7.0 | 56 | 0.9529 | 0.5 |
| 0.7707 | 8.0 | 64 | 0.9449 | 0.5217 |
| 0.7411 | 9.0 | 72 | 0.9160 | 0.5435 |
| 0.671 | 10.0 | 80 | 0.9073 | 0.5435 |
| 0.671 | 11.0 | 88 | 0.9192 | 0.5435 |
| 0.6501 | 12.0 | 96 | 0.9456 | 0.5 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
talli96123/meat_calssify_fresh_crop_fixed_epoch100_V_0_1_best
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
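The card leaves this section empty. Purely as an illustrative placeholder, and not the authors' published usage code, a generic 🤗 image-classification pipeline call could look like the following (this assumes the checkpoint ships an image processor config alongside the weights):
```python
from transformers import pipeline

# Hypothetical usage sketch; the image path is a placeholder.
classifier = pipeline(
    "image-classification",
    model="talli96123/meat_calssify_fresh_crop_fixed_epoch100_V_0_1_best",
)
print(classifier("path/to/meat_image.jpg"))
```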
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"fresh1",
"fresh2",
"fresh3"
] |
Augusto777/swin-tiny-patch4-window7-224-ve-U13-b-80b
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-ve-U13-b-80b
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6122
- Accuracy: 0.7826
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 80
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.92 | 6 | 1.3855 | 0.1304 |
| 1.3852 | 2.0 | 13 | 1.3762 | 0.2826 |
| 1.3852 | 2.92 | 19 | 1.3521 | 0.2826 |
| 1.3565 | 4.0 | 26 | 1.2510 | 0.3478 |
| 1.2024 | 4.92 | 32 | 1.1528 | 0.3478 |
| 1.2024 | 6.0 | 39 | 1.0294 | 0.5 |
| 1.0453 | 6.92 | 45 | 0.9608 | 0.5217 |
| 0.8827 | 8.0 | 52 | 0.8801 | 0.6087 |
| 0.8827 | 8.92 | 58 | 0.9884 | 0.5652 |
| 0.7887 | 10.0 | 65 | 0.7927 | 0.6522 |
| 0.6795 | 10.92 | 71 | 0.7237 | 0.6522 |
| 0.6795 | 12.0 | 78 | 0.7250 | 0.6739 |
| 0.5777 | 12.92 | 84 | 0.7140 | 0.6957 |
| 0.496 | 14.0 | 91 | 0.8014 | 0.6957 |
| 0.496 | 14.92 | 97 | 0.8701 | 0.6739 |
| 0.4224 | 16.0 | 104 | 0.9384 | 0.6522 |
| 0.3744 | 16.92 | 110 | 0.7594 | 0.7174 |
| 0.3744 | 18.0 | 117 | 0.6122 | 0.7826 |
| 0.3775 | 18.92 | 123 | 0.8143 | 0.7174 |
| 0.3275 | 20.0 | 130 | 0.9981 | 0.6522 |
| 0.3275 | 20.92 | 136 | 0.8603 | 0.7174 |
| 0.3202 | 22.0 | 143 | 0.8412 | 0.6957 |
| 0.3202 | 22.92 | 149 | 0.8654 | 0.7174 |
| 0.2849 | 24.0 | 156 | 0.9650 | 0.6957 |
| 0.2518 | 24.92 | 162 | 0.8102 | 0.7609 |
| 0.2518 | 26.0 | 169 | 0.7203 | 0.7826 |
| 0.2467 | 26.92 | 175 | 0.9435 | 0.7391 |
| 0.2218 | 28.0 | 182 | 0.8905 | 0.7391 |
| 0.2218 | 28.92 | 188 | 1.0828 | 0.6957 |
| 0.2075 | 30.0 | 195 | 0.8936 | 0.7174 |
| 0.1893 | 30.92 | 201 | 0.8836 | 0.7826 |
| 0.1893 | 32.0 | 208 | 0.9692 | 0.7174 |
| 0.194 | 32.92 | 214 | 1.0390 | 0.7609 |
| 0.1739 | 34.0 | 221 | 0.8695 | 0.7609 |
| 0.1739 | 34.92 | 227 | 1.1836 | 0.6739 |
| 0.1895 | 36.0 | 234 | 1.0131 | 0.7391 |
| 0.1428 | 36.92 | 240 | 0.9618 | 0.7609 |
| 0.1428 | 38.0 | 247 | 0.9950 | 0.7609 |
| 0.1443 | 38.92 | 253 | 0.9113 | 0.7826 |
| 0.1574 | 40.0 | 260 | 0.9213 | 0.7174 |
| 0.1574 | 40.92 | 266 | 0.9437 | 0.7391 |
| 0.1442 | 42.0 | 273 | 0.9226 | 0.7609 |
| 0.1442 | 42.92 | 279 | 0.9430 | 0.7391 |
| 0.1186 | 44.0 | 286 | 0.9759 | 0.7826 |
| 0.1135 | 44.92 | 292 | 0.9651 | 0.7391 |
| 0.1135 | 46.0 | 299 | 0.9536 | 0.7609 |
| 0.1299 | 46.92 | 305 | 0.9118 | 0.7609 |
| 0.134 | 48.0 | 312 | 0.9848 | 0.7826 |
| 0.134 | 48.92 | 318 | 0.8641 | 0.7609 |
| 0.1418 | 50.0 | 325 | 1.0553 | 0.7609 |
| 0.1074 | 50.92 | 331 | 1.2511 | 0.6957 |
| 0.1074 | 52.0 | 338 | 1.0186 | 0.7391 |
| 0.1144 | 52.92 | 344 | 1.0467 | 0.7174 |
| 0.0999 | 54.0 | 351 | 0.9898 | 0.7391 |
| 0.0999 | 54.92 | 357 | 1.1780 | 0.7391 |
| 0.1131 | 56.0 | 364 | 1.0015 | 0.7609 |
| 0.1152 | 56.92 | 370 | 1.0759 | 0.7609 |
| 0.1152 | 58.0 | 377 | 1.1294 | 0.7174 |
| 0.1012 | 58.92 | 383 | 1.0894 | 0.7391 |
| 0.0938 | 60.0 | 390 | 1.0764 | 0.7391 |
| 0.0938 | 60.92 | 396 | 1.1784 | 0.7174 |
| 0.0944 | 62.0 | 403 | 1.1581 | 0.7174 |
| 0.0944 | 62.92 | 409 | 1.0444 | 0.7391 |
| 0.1015 | 64.0 | 416 | 1.0996 | 0.7391 |
| 0.0762 | 64.92 | 422 | 1.1235 | 0.7609 |
| 0.0762 | 66.0 | 429 | 1.0999 | 0.7391 |
| 0.0775 | 66.92 | 435 | 1.0776 | 0.7391 |
| 0.0787 | 68.0 | 442 | 1.0879 | 0.7391 |
| 0.0787 | 68.92 | 448 | 1.0913 | 0.7391 |
| 0.081 | 70.0 | 455 | 1.0558 | 0.7391 |
| 0.0749 | 70.92 | 461 | 1.0401 | 0.7391 |
| 0.0749 | 72.0 | 468 | 1.0539 | 0.7391 |
| 0.0841 | 72.92 | 474 | 1.0663 | 0.7391 |
| 0.0928 | 73.85 | 480 | 1.0712 | 0.7391 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
yutocame/vit-base-oxford-iiit-pets
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2057
- Accuracy: 0.9378
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3677 | 1.0 | 370 | 0.3033 | 0.9188 |
| 0.211 | 2.0 | 740 | 0.2351 | 0.9283 |
| 0.1656 | 3.0 | 1110 | 0.2082 | 0.9323 |
| 0.1525 | 4.0 | 1480 | 0.2017 | 0.9310 |
| 0.1443 | 5.0 | 1850 | 0.2004 | 0.9364 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.5
- Tokenizers 0.14.1
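The label list that follows this card is the model's class-index-to-name mapping. A minimal sketch of reading it back from the checkpoint, assuming the standard `id2label` field was populated when the classification head was attached:
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("yutocame/vit-base-oxford-iiit-pets")
# id2label maps integer class indices to breed names,
# e.g. 0 -> "siamese" if the list below reflects index order.
print(config.id2label[0])
```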
|
[
"siamese",
"birman",
"shiba inu",
"staffordshire bull terrier",
"basset hound",
"bombay",
"japanese chin",
"chihuahua",
"german shorthaired",
"pomeranian",
"beagle",
"english cocker spaniel",
"american pit bull terrier",
"ragdoll",
"persian",
"egyptian mau",
"miniature pinscher",
"sphynx",
"maine coon",
"keeshond",
"yorkshire terrier",
"havanese",
"leonberger",
"wheaten terrier",
"american bulldog",
"english setter",
"boxer",
"newfoundland",
"bengal",
"samoyed",
"british shorthair",
"great pyrenees",
"abyssinian",
"pug",
"saint bernard",
"russian blue",
"scottish terrier"
] |
matthieulel/swinv2-base-patch4-window16-256-finetuned-galaxy10-decals
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-base-patch4-window16-256-finetuned-galaxy10-decals
This model is a fine-tuned version of [microsoft/swinv2-base-patch4-window16-256](https://huggingface.co/microsoft/swinv2-base-patch4-window16-256) on the matthieulel/galaxy10_decals dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4341
- Accuracy: 0.8574
- Precision: 0.8589
- Recall: 0.8574
- F1: 0.8546
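Precision, recall, and F1 are single summary numbers over ten classes, so some averaging is in play; support-weighted averaging is the common choice and is consistent with recall equaling accuracy here (weighted recall reduces to accuracy by construction), though the card does not state it. A minimal sketch of computing all four metrics that way with scikit-learn (the label arrays are placeholders):
```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Placeholder labels; indices 0..9 would stand for the ten Galaxy10 DECaLS classes.
y_true = [0, 1, 2, 2, 3]
y_pred = [0, 1, 2, 3, 3]

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted", zero_division=0
)
print(accuracy, precision, recall, f1)  # weighted recall == accuracy
```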
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.5098 | 0.99 | 62 | 1.2358 | 0.5569 | 0.5493 | 0.5569 | 0.5321 |
| 0.8845 | 2.0 | 125 | 0.7391 | 0.7599 | 0.7800 | 0.7599 | 0.7497 |
| 0.753 | 2.99 | 187 | 0.5997 | 0.7971 | 0.8062 | 0.7971 | 0.7903 |
| 0.6149 | 4.0 | 250 | 0.4920 | 0.8331 | 0.8285 | 0.8331 | 0.8276 |
| 0.5807 | 4.99 | 312 | 0.4623 | 0.8326 | 0.8323 | 0.8326 | 0.8315 |
| 0.5938 | 6.0 | 375 | 0.4857 | 0.8365 | 0.8403 | 0.8365 | 0.8294 |
| 0.5583 | 6.99 | 437 | 0.4680 | 0.8264 | 0.8314 | 0.8264 | 0.8243 |
| 0.5103 | 8.0 | 500 | 0.4882 | 0.8191 | 0.8312 | 0.8191 | 0.8180 |
| 0.5186 | 8.99 | 562 | 0.4341 | 0.8574 | 0.8589 | 0.8574 | 0.8546 |
| 0.4696 | 10.0 | 625 | 0.4293 | 0.8495 | 0.8484 | 0.8495 | 0.8481 |
| 0.4711 | 10.99 | 687 | 0.4396 | 0.8422 | 0.8431 | 0.8422 | 0.8414 |
| 0.4271 | 12.0 | 750 | 0.4547 | 0.8489 | 0.8500 | 0.8489 | 0.8480 |
| 0.4576 | 12.99 | 812 | 0.4424 | 0.8489 | 0.8522 | 0.8489 | 0.8473 |
| 0.4483 | 14.0 | 875 | 0.4355 | 0.8495 | 0.8531 | 0.8495 | 0.8492 |
| 0.3914 | 14.99 | 937 | 0.4360 | 0.8540 | 0.8533 | 0.8540 | 0.8532 |
| 0.3883 | 16.0 | 1000 | 0.4464 | 0.8546 | 0.8550 | 0.8546 | 0.8526 |
| 0.3421 | 16.99 | 1062 | 0.4473 | 0.8489 | 0.8486 | 0.8489 | 0.8479 |
| 0.3666 | 18.0 | 1125 | 0.4455 | 0.8540 | 0.8541 | 0.8540 | 0.8528 |
| 0.3737 | 18.99 | 1187 | 0.4587 | 0.8574 | 0.8560 | 0.8574 | 0.8561 |
| 0.3694 | 20.0 | 1250 | 0.4583 | 0.8551 | 0.8528 | 0.8551 | 0.8523 |
| 0.3269 | 20.99 | 1312 | 0.4883 | 0.8506 | 0.8494 | 0.8506 | 0.8487 |
| 0.3699 | 22.0 | 1375 | 0.4808 | 0.8501 | 0.8514 | 0.8501 | 0.8486 |
| 0.3395 | 22.99 | 1437 | 0.4706 | 0.8484 | 0.8493 | 0.8484 | 0.8477 |
| 0.3147 | 24.0 | 1500 | 0.4676 | 0.8568 | 0.8556 | 0.8568 | 0.8557 |
| 0.3352 | 24.99 | 1562 | 0.4868 | 0.8557 | 0.8543 | 0.8557 | 0.8538 |
| 0.3007 | 26.0 | 1625 | 0.4887 | 0.8489 | 0.8492 | 0.8489 | 0.8475 |
| 0.3049 | 26.99 | 1687 | 0.4838 | 0.8534 | 0.8532 | 0.8534 | 0.8526 |
| 0.3228 | 28.0 | 1750 | 0.4910 | 0.8551 | 0.8539 | 0.8551 | 0.8536 |
| 0.3005 | 28.99 | 1812 | 0.4846 | 0.8534 | 0.8517 | 0.8534 | 0.8518 |
| 0.2972 | 29.76 | 1860 | 0.4826 | 0.8557 | 0.8544 | 0.8557 | 0.8543 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.15.1
|
[
"disturbed galaxies",
"merging galaxies",
"round smooth galaxies",
"in-between round smooth galaxies",
"cigar shaped smooth galaxies",
"barred spiral galaxies",
"unbarred tight spiral galaxies",
"unbarred loose spiral galaxies",
"edge-on galaxies without bulge",
"edge-on galaxies with bulge"
] |
Augusto777/swinv2-tiny-patch4-window8-256-ve-U13-b-80
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-tiny-patch4-window8-256-ve-U13-b-80
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7882
- Accuracy: 0.7391
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 80
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.92 | 6 | 1.3858 | 0.1304 |
| 1.3856 | 2.0 | 13 | 1.3777 | 0.3696 |
| 1.3856 | 2.92 | 19 | 1.3488 | 0.2391 |
| 1.361 | 4.0 | 26 | 1.2503 | 0.2826 |
| 1.2088 | 4.92 | 32 | 1.1317 | 0.4130 |
| 1.2088 | 6.0 | 39 | 1.0244 | 0.4565 |
| 1.0729 | 6.92 | 45 | 1.0413 | 0.4565 |
| 0.9554 | 8.0 | 52 | 0.9286 | 0.5652 |
| 0.9554 | 8.92 | 58 | 0.9103 | 0.5652 |
| 0.8221 | 10.0 | 65 | 0.8519 | 0.6522 |
| 0.732 | 10.92 | 71 | 0.8300 | 0.5870 |
| 0.732 | 12.0 | 78 | 0.8103 | 0.6304 |
| 0.6491 | 12.92 | 84 | 0.9533 | 0.5870 |
| 0.5724 | 14.0 | 91 | 0.7882 | 0.7391 |
| 0.5724 | 14.92 | 97 | 0.8072 | 0.6957 |
| 0.5305 | 16.0 | 104 | 0.7651 | 0.7391 |
| 0.4879 | 16.92 | 110 | 0.7379 | 0.7174 |
| 0.4879 | 18.0 | 117 | 0.7590 | 0.6739 |
| 0.4346 | 18.92 | 123 | 0.9283 | 0.6739 |
| 0.3671 | 20.0 | 130 | 1.0188 | 0.6304 |
| 0.3671 | 20.92 | 136 | 0.8959 | 0.7391 |
| 0.3725 | 22.0 | 143 | 0.9502 | 0.6957 |
| 0.3725 | 22.92 | 149 | 0.9627 | 0.6522 |
| 0.3321 | 24.0 | 156 | 0.9619 | 0.6957 |
| 0.3376 | 24.92 | 162 | 1.0459 | 0.6739 |
| 0.3376 | 26.0 | 169 | 1.0167 | 0.6522 |
| 0.3699 | 26.92 | 175 | 0.9949 | 0.6304 |
| 0.3098 | 28.0 | 182 | 0.9944 | 0.6739 |
| 0.3098 | 28.92 | 188 | 1.0860 | 0.6304 |
| 0.253 | 30.0 | 195 | 1.1721 | 0.6522 |
| 0.2615 | 30.92 | 201 | 1.1626 | 0.6739 |
| 0.2615 | 32.0 | 208 | 1.2464 | 0.6304 |
| 0.242 | 32.92 | 214 | 1.2179 | 0.6522 |
| 0.2173 | 34.0 | 221 | 1.2407 | 0.6304 |
| 0.2173 | 34.92 | 227 | 1.1585 | 0.6739 |
| 0.2305 | 36.0 | 234 | 1.3048 | 0.6522 |
| 0.2114 | 36.92 | 240 | 1.1776 | 0.6522 |
| 0.2114 | 38.0 | 247 | 1.1460 | 0.6522 |
| 0.2243 | 38.92 | 253 | 1.2424 | 0.6957 |
| 0.1822 | 40.0 | 260 | 1.2804 | 0.6739 |
| 0.1822 | 40.92 | 266 | 1.3472 | 0.6739 |
| 0.2065 | 42.0 | 273 | 1.3632 | 0.6739 |
| 0.2065 | 42.92 | 279 | 1.2832 | 0.6739 |
| 0.1942 | 44.0 | 286 | 1.3500 | 0.6739 |
| 0.1699 | 44.92 | 292 | 1.3242 | 0.6739 |
| 0.1699 | 46.0 | 299 | 1.3189 | 0.6957 |
| 0.1764 | 46.92 | 305 | 1.2840 | 0.6739 |
| 0.1771 | 48.0 | 312 | 1.3069 | 0.6957 |
| 0.1771 | 48.92 | 318 | 1.1585 | 0.6957 |
| 0.2095 | 50.0 | 325 | 1.3702 | 0.6957 |
| 0.1404 | 50.92 | 331 | 1.3539 | 0.6957 |
| 0.1404 | 52.0 | 338 | 1.3723 | 0.6957 |
| 0.1449 | 52.92 | 344 | 1.3877 | 0.6957 |
| 0.1348 | 54.0 | 351 | 1.3381 | 0.6739 |
| 0.1348 | 54.92 | 357 | 1.3700 | 0.6739 |
| 0.1683 | 56.0 | 364 | 1.2871 | 0.6957 |
| 0.1577 | 56.92 | 370 | 1.3214 | 0.6957 |
| 0.1577 | 58.0 | 377 | 1.3992 | 0.6522 |
| 0.1474 | 58.92 | 383 | 1.3800 | 0.6522 |
| 0.1267 | 60.0 | 390 | 1.2535 | 0.6739 |
| 0.1267 | 60.92 | 396 | 1.3200 | 0.6739 |
| 0.1171 | 62.0 | 403 | 1.3730 | 0.6739 |
| 0.1171 | 62.92 | 409 | 1.3678 | 0.6739 |
| 0.1461 | 64.0 | 416 | 1.3788 | 0.6739 |
| 0.1124 | 64.92 | 422 | 1.3944 | 0.6739 |
| 0.1124 | 66.0 | 429 | 1.3724 | 0.6739 |
| 0.1168 | 66.92 | 435 | 1.3553 | 0.6522 |
| 0.1243 | 68.0 | 442 | 1.3829 | 0.6739 |
| 0.1243 | 68.92 | 448 | 1.4040 | 0.6739 |
| 0.1375 | 70.0 | 455 | 1.4127 | 0.6522 |
| 0.1017 | 70.92 | 461 | 1.4070 | 0.6522 |
| 0.1017 | 72.0 | 468 | 1.3989 | 0.6739 |
| 0.1346 | 72.92 | 474 | 1.3995 | 0.6739 |
| 0.1382 | 73.85 | 480 | 1.3988 | 0.6739 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
talli96123/meat_calssify_fresh_crop_fixed_overlap_epoch100_V_0_1
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# meat_calssify_fresh_crop_fixed_overlap_epoch100_V_0_1
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1958
- Accuracy: 0.9470
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1042 | 1.0 | 21 | 1.0970 | 0.3489 |
| 1.0711 | 2.0 | 42 | 1.0849 | 0.3863 |
| 1.0265 | 3.0 | 63 | 1.0484 | 0.4424 |
| 0.9343 | 4.0 | 84 | 0.9641 | 0.5202 |
| 0.9037 | 5.0 | 105 | 0.8967 | 0.5639 |
| 0.7794 | 6.0 | 126 | 0.8947 | 0.5607 |
| 0.7517 | 7.0 | 147 | 0.7444 | 0.6854 |
| 0.6897 | 8.0 | 168 | 0.7788 | 0.6542 |
| 0.6639 | 9.0 | 189 | 0.9096 | 0.5981 |
| 0.7325 | 10.0 | 210 | 0.7083 | 0.6947 |
| 0.6112 | 11.0 | 231 | 0.6447 | 0.7383 |
| 0.5403 | 12.0 | 252 | 0.7023 | 0.6978 |
| 0.446 | 13.0 | 273 | 0.7003 | 0.7009 |
| 0.481 | 14.0 | 294 | 0.7621 | 0.7103 |
| 0.4406 | 15.0 | 315 | 0.5552 | 0.7882 |
| 0.3905 | 16.0 | 336 | 0.6945 | 0.7445 |
| 0.3573 | 17.0 | 357 | 0.4493 | 0.8318 |
| 0.4212 | 18.0 | 378 | 0.5443 | 0.7975 |
| 0.3467 | 19.0 | 399 | 0.4746 | 0.8006 |
| 0.275 | 20.0 | 420 | 0.4475 | 0.8318 |
| 0.2847 | 21.0 | 441 | 0.5923 | 0.7882 |
| 0.3065 | 22.0 | 462 | 0.4106 | 0.8629 |
| 0.2062 | 23.0 | 483 | 0.7714 | 0.7352 |
| 0.281 | 24.0 | 504 | 0.3681 | 0.8816 |
| 0.2239 | 25.0 | 525 | 0.3852 | 0.8411 |
| 0.2132 | 26.0 | 546 | 0.5309 | 0.8037 |
| 0.2846 | 27.0 | 567 | 0.4192 | 0.8349 |
| 0.1943 | 28.0 | 588 | 0.5426 | 0.7913 |
| 0.1594 | 29.0 | 609 | 0.4108 | 0.8380 |
| 0.2061 | 30.0 | 630 | 0.3541 | 0.8692 |
| 0.1616 | 31.0 | 651 | 0.2926 | 0.9097 |
| 0.1755 | 32.0 | 672 | 0.4088 | 0.8442 |
| 0.156 | 33.0 | 693 | 0.5390 | 0.8131 |
| 0.3471 | 34.0 | 714 | 0.4996 | 0.8255 |
| 0.1627 | 35.0 | 735 | 0.3407 | 0.8847 |
| 0.1332 | 36.0 | 756 | 0.2970 | 0.8972 |
| 0.2394 | 37.0 | 777 | 0.4211 | 0.8411 |
| 0.1086 | 38.0 | 798 | 0.3548 | 0.8847 |
| 0.1369 | 39.0 | 819 | 0.3874 | 0.8754 |
| 0.1235 | 40.0 | 840 | 0.2755 | 0.9065 |
| 0.132 | 41.0 | 861 | 0.3844 | 0.8816 |
| 0.1109 | 42.0 | 882 | 0.4368 | 0.8629 |
| 0.1291 | 43.0 | 903 | 0.3609 | 0.8754 |
| 0.1071 | 44.0 | 924 | 0.2968 | 0.9065 |
| 0.0967 | 45.0 | 945 | 0.3095 | 0.8847 |
| 0.1031 | 46.0 | 966 | 0.2942 | 0.9034 |
| 0.1124 | 47.0 | 987 | 0.2314 | 0.9252 |
| 0.085 | 48.0 | 1008 | 0.3651 | 0.8879 |
| 0.1031 | 49.0 | 1029 | 0.4515 | 0.8598 |
| 0.112 | 50.0 | 1050 | 0.2458 | 0.9190 |
| 0.11 | 51.0 | 1071 | 0.3609 | 0.8972 |
| 0.0742 | 52.0 | 1092 | 0.3382 | 0.9065 |
| 0.0783 | 53.0 | 1113 | 0.3365 | 0.9097 |
| 0.0748 | 54.0 | 1134 | 0.3213 | 0.9065 |
| 0.0983 | 55.0 | 1155 | 0.3021 | 0.9034 |
| 0.0651 | 56.0 | 1176 | 0.2968 | 0.9128 |
| 0.0762 | 57.0 | 1197 | 0.3651 | 0.8660 |
| 0.0831 | 58.0 | 1218 | 0.3192 | 0.9003 |
| 0.0989 | 59.0 | 1239 | 0.3015 | 0.8910 |
| 0.0936 | 60.0 | 1260 | 0.3160 | 0.8879 |
| 0.0725 | 61.0 | 1281 | 0.2810 | 0.8972 |
| 0.0779 | 62.0 | 1302 | 0.2108 | 0.9252 |
| 0.0565 | 63.0 | 1323 | 0.2214 | 0.9315 |
| 0.1442 | 64.0 | 1344 | 0.2242 | 0.9221 |
| 0.051 | 65.0 | 1365 | 0.2143 | 0.9128 |
| 0.0901 | 66.0 | 1386 | 0.3374 | 0.8941 |
| 0.08 | 67.0 | 1407 | 0.2368 | 0.9252 |
| 0.0425 | 68.0 | 1428 | 0.3098 | 0.9003 |
| 0.0603 | 69.0 | 1449 | 0.2638 | 0.9097 |
| 0.0604 | 70.0 | 1470 | 0.2012 | 0.9439 |
| 0.0405 | 71.0 | 1491 | 0.1795 | 0.9439 |
| 0.0526 | 72.0 | 1512 | 0.4807 | 0.8505 |
| 0.0728 | 73.0 | 1533 | 0.2587 | 0.9190 |
| 0.0406 | 74.0 | 1554 | 0.2363 | 0.9408 |
| 0.063 | 75.0 | 1575 | 0.3414 | 0.9034 |
| 0.0498 | 76.0 | 1596 | 0.2719 | 0.9159 |
| 0.0377 | 77.0 | 1617 | 0.2965 | 0.9128 |
| 0.0573 | 78.0 | 1638 | 0.2341 | 0.9377 |
| 0.096 | 79.0 | 1659 | 0.2503 | 0.9252 |
| 0.0373 | 80.0 | 1680 | 0.3416 | 0.9003 |
| 0.0485 | 81.0 | 1701 | 0.3115 | 0.9159 |
| 0.0502 | 82.0 | 1722 | 0.2318 | 0.9346 |
| 0.0559 | 83.0 | 1743 | 0.2506 | 0.9097 |
| 0.0446 | 84.0 | 1764 | 0.2691 | 0.9159 |
| 0.0344 | 85.0 | 1785 | 0.2695 | 0.9221 |
| 0.0306 | 86.0 | 1806 | 0.2747 | 0.9097 |
| 0.0404 | 87.0 | 1827 | 0.1680 | 0.9564 |
| 0.036 | 88.0 | 1848 | 0.2653 | 0.9221 |
| 0.0364 | 89.0 | 1869 | 0.1936 | 0.9408 |
| 0.0418 | 90.0 | 1890 | 0.2321 | 0.9346 |
| 0.0443 | 91.0 | 1911 | 0.2132 | 0.9408 |
| 0.0409 | 92.0 | 1932 | 0.2601 | 0.9190 |
| 0.0431 | 93.0 | 1953 | 0.2282 | 0.9439 |
| 0.0258 | 94.0 | 1974 | 0.2003 | 0.9346 |
| 0.0294 | 95.0 | 1995 | 0.2033 | 0.9439 |
| 0.0336 | 96.0 | 2016 | 0.1445 | 0.9595 |
| 0.0241 | 97.0 | 2037 | 0.2268 | 0.9346 |
| 0.037 | 98.0 | 2058 | 0.1920 | 0.9408 |
| 0.03 | 99.0 | 2079 | 0.1676 | 0.9533 |
| 0.0227 | 100.0 | 2100 | 0.1958 | 0.9470 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0
- Datasets 2.19.2
- Tokenizers 0.19.1
|
[
"fresh1",
"fresh2",
"fresh3"
] |
talli96123/meat_calssify_fresh_crop_fixed_overlap_epoch100_V_0_1_best
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"fresh1",
"fresh2",
"fresh3"
] |
Augusto777/swinv2-tiny-patch4-window8-256-ve-U13-b-80b
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-tiny-patch4-window8-256-ve-U13-b-80b
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7057
- Accuracy: 0.7391
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 80
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.92 | 6 | 1.3861 | 0.1304 |
| 1.386 | 2.0 | 13 | 1.3837 | 0.4348 |
| 1.386 | 2.92 | 19 | 1.3776 | 0.3043 |
| 1.3807 | 4.0 | 26 | 1.3570 | 0.2391 |
| 1.3386 | 4.92 | 32 | 1.3224 | 0.2174 |
| 1.3386 | 6.0 | 39 | 1.2085 | 0.3478 |
| 1.209 | 6.92 | 45 | 1.1056 | 0.4565 |
| 1.0561 | 8.0 | 52 | 1.0507 | 0.4783 |
| 1.0561 | 8.92 | 58 | 1.0161 | 0.4565 |
| 0.9157 | 10.0 | 65 | 0.8613 | 0.6304 |
| 0.8002 | 10.92 | 71 | 0.9073 | 0.5652 |
| 0.8002 | 12.0 | 78 | 0.8300 | 0.6304 |
| 0.7181 | 12.92 | 84 | 0.8958 | 0.5870 |
| 0.6405 | 14.0 | 91 | 0.8075 | 0.7174 |
| 0.6405 | 14.92 | 97 | 0.7478 | 0.6957 |
| 0.6064 | 16.0 | 104 | 0.7370 | 0.7174 |
| 0.5556 | 16.92 | 110 | 0.7057 | 0.7391 |
| 0.5556 | 18.0 | 117 | 0.7395 | 0.6522 |
| 0.4822 | 18.92 | 123 | 0.8734 | 0.6957 |
| 0.4241 | 20.0 | 130 | 0.9991 | 0.6739 |
| 0.4241 | 20.92 | 136 | 0.8416 | 0.7174 |
| 0.4307 | 22.0 | 143 | 0.9195 | 0.6957 |
| 0.4307 | 22.92 | 149 | 0.9211 | 0.6522 |
| 0.381 | 24.0 | 156 | 0.9683 | 0.6087 |
| 0.3707 | 24.92 | 162 | 1.0067 | 0.6739 |
| 0.3707 | 26.0 | 169 | 0.9793 | 0.6522 |
| 0.3918 | 26.92 | 175 | 0.9758 | 0.6739 |
| 0.3513 | 28.0 | 182 | 0.9761 | 0.6739 |
| 0.3513 | 28.92 | 188 | 1.0745 | 0.6304 |
| 0.2739 | 30.0 | 195 | 1.0775 | 0.6739 |
| 0.2882 | 30.92 | 201 | 1.1521 | 0.6739 |
| 0.2882 | 32.0 | 208 | 1.2072 | 0.6522 |
| 0.2588 | 32.92 | 214 | 1.1374 | 0.6739 |
| 0.2498 | 34.0 | 221 | 1.2131 | 0.6522 |
| 0.2498 | 34.92 | 227 | 1.1309 | 0.7391 |
| 0.2584 | 36.0 | 234 | 1.2828 | 0.6957 |
| 0.2228 | 36.92 | 240 | 1.1381 | 0.6739 |
| 0.2228 | 38.0 | 247 | 1.2116 | 0.6522 |
| 0.2408 | 38.92 | 253 | 1.1962 | 0.6739 |
| 0.2042 | 40.0 | 260 | 1.2557 | 0.6739 |
| 0.2042 | 40.92 | 266 | 1.3511 | 0.6739 |
| 0.2141 | 42.0 | 273 | 1.3636 | 0.6304 |
| 0.2141 | 42.92 | 279 | 1.3084 | 0.6304 |
| 0.2135 | 44.0 | 286 | 1.3847 | 0.6087 |
| 0.191 | 44.92 | 292 | 1.2408 | 0.6957 |
| 0.191 | 46.0 | 299 | 1.1750 | 0.7174 |
| 0.1833 | 46.92 | 305 | 1.1804 | 0.6957 |
| 0.189 | 48.0 | 312 | 1.1867 | 0.7174 |
| 0.189 | 48.92 | 318 | 1.0623 | 0.7391 |
| 0.2196 | 50.0 | 325 | 1.2626 | 0.6957 |
| 0.1505 | 50.92 | 331 | 1.2745 | 0.6957 |
| 0.1505 | 52.0 | 338 | 1.3473 | 0.6957 |
| 0.1604 | 52.92 | 344 | 1.3535 | 0.6522 |
| 0.1377 | 54.0 | 351 | 1.3873 | 0.6522 |
| 0.1377 | 54.92 | 357 | 1.4287 | 0.6522 |
| 0.1752 | 56.0 | 364 | 1.3014 | 0.6957 |
| 0.1684 | 56.92 | 370 | 1.3564 | 0.6739 |
| 0.1684 | 58.0 | 377 | 1.4165 | 0.6957 |
| 0.1597 | 58.92 | 383 | 1.3624 | 0.6739 |
| 0.1393 | 60.0 | 390 | 1.3018 | 0.6957 |
| 0.1393 | 60.92 | 396 | 1.3197 | 0.6739 |
| 0.1347 | 62.0 | 403 | 1.3542 | 0.6739 |
| 0.1347 | 62.92 | 409 | 1.3460 | 0.6739 |
| 0.155 | 64.0 | 416 | 1.3998 | 0.6739 |
| 0.1198 | 64.92 | 422 | 1.3982 | 0.6739 |
| 0.1198 | 66.0 | 429 | 1.3989 | 0.6522 |
| 0.1318 | 66.92 | 435 | 1.4035 | 0.6522 |
| 0.1382 | 68.0 | 442 | 1.3626 | 0.6522 |
| 0.1382 | 68.92 | 448 | 1.3714 | 0.6522 |
| 0.1451 | 70.0 | 455 | 1.4174 | 0.6739 |
| 0.1203 | 70.92 | 461 | 1.4343 | 0.6739 |
| 0.1203 | 72.0 | 468 | 1.4045 | 0.6522 |
| 0.141 | 72.92 | 474 | 1.3904 | 0.6522 |
| 0.1516 | 73.85 | 480 | 1.3849 | 0.6522 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
Augusto777/swinv2-tiny-patch4-window8-256-ve-U13-b-80c
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-tiny-patch4-window8-256-ve-U13-b-80c
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7710
- Accuracy: 0.7826
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 80
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.92 | 6 | 1.6333 | 0.1087 |
| 1.4981 | 2.0 | 13 | 1.6225 | 0.1087 |
| 1.4981 | 2.92 | 19 | 1.5921 | 0.1087 |
| 1.4704 | 4.0 | 26 | 1.5001 | 0.1087 |
| 1.4116 | 4.92 | 32 | 1.4078 | 0.1087 |
| 1.4116 | 6.0 | 39 | 1.2960 | 0.3478 |
| 1.3094 | 6.92 | 45 | 1.2926 | 0.3043 |
| 1.2014 | 8.0 | 52 | 1.1532 | 0.5435 |
| 1.2014 | 8.92 | 58 | 1.1059 | 0.4783 |
| 1.0577 | 10.0 | 65 | 0.9510 | 0.6304 |
| 0.9185 | 10.92 | 71 | 0.9695 | 0.4783 |
| 0.9185 | 12.0 | 78 | 0.8792 | 0.6087 |
| 0.8369 | 12.92 | 84 | 0.8616 | 0.6957 |
| 0.7406 | 14.0 | 91 | 0.7816 | 0.6957 |
| 0.7406 | 14.92 | 97 | 0.7638 | 0.7609 |
| 0.6929 | 16.0 | 104 | 0.7710 | 0.7826 |
| 0.6192 | 16.92 | 110 | 0.7471 | 0.6957 |
| 0.6192 | 18.0 | 117 | 0.7265 | 0.7391 |
| 0.5936 | 18.92 | 123 | 0.7841 | 0.7609 |
| 0.5125 | 20.0 | 130 | 0.9320 | 0.6739 |
| 0.5125 | 20.92 | 136 | 0.7512 | 0.7609 |
| 0.4905 | 22.0 | 143 | 0.7466 | 0.6957 |
| 0.4905 | 22.92 | 149 | 0.8030 | 0.6957 |
| 0.4315 | 24.0 | 156 | 0.8184 | 0.7391 |
| 0.4272 | 24.92 | 162 | 0.8196 | 0.6957 |
| 0.4272 | 26.0 | 169 | 0.8712 | 0.6957 |
| 0.4261 | 26.92 | 175 | 0.7834 | 0.6957 |
| 0.4217 | 28.0 | 182 | 0.8394 | 0.6739 |
| 0.4217 | 28.92 | 188 | 0.9941 | 0.6739 |
| 0.3502 | 30.0 | 195 | 0.8909 | 0.7174 |
| 0.368 | 30.92 | 201 | 0.9995 | 0.7174 |
| 0.368 | 32.0 | 208 | 0.9418 | 0.6739 |
| 0.3473 | 32.92 | 214 | 0.8595 | 0.6739 |
| 0.3079 | 34.0 | 221 | 0.9562 | 0.6957 |
| 0.3079 | 34.92 | 227 | 0.8992 | 0.6739 |
| 0.3226 | 36.0 | 234 | 0.9908 | 0.6739 |
| 0.2603 | 36.92 | 240 | 0.9469 | 0.6957 |
| 0.2603 | 38.0 | 247 | 0.9942 | 0.6739 |
| 0.3028 | 38.92 | 253 | 1.0084 | 0.6739 |
| 0.2576 | 40.0 | 260 | 0.9908 | 0.6957 |
| 0.2576 | 40.92 | 266 | 1.0661 | 0.6957 |
| 0.2713 | 42.0 | 273 | 1.1347 | 0.6522 |
| 0.2713 | 42.92 | 279 | 1.1054 | 0.6739 |
| 0.2578 | 44.0 | 286 | 1.1089 | 0.6957 |
| 0.2367 | 44.92 | 292 | 1.1452 | 0.6739 |
| 0.2367 | 46.0 | 299 | 1.0272 | 0.6957 |
| 0.2301 | 46.92 | 305 | 1.1043 | 0.6739 |
| 0.2191 | 48.0 | 312 | 1.0815 | 0.6739 |
| 0.2191 | 48.92 | 318 | 0.9934 | 0.6957 |
| 0.2635 | 50.0 | 325 | 1.0866 | 0.6957 |
| 0.1874 | 50.92 | 331 | 1.0507 | 0.7174 |
| 0.1874 | 52.0 | 338 | 1.1002 | 0.7174 |
| 0.2057 | 52.92 | 344 | 1.0400 | 0.6739 |
| 0.1808 | 54.0 | 351 | 1.1092 | 0.7174 |
| 0.1808 | 54.92 | 357 | 1.1550 | 0.7174 |
| 0.2107 | 56.0 | 364 | 1.0579 | 0.6957 |
| 0.2149 | 56.92 | 370 | 1.0936 | 0.6957 |
| 0.2149 | 58.0 | 377 | 1.1692 | 0.6957 |
| 0.1865 | 58.92 | 383 | 1.1357 | 0.7174 |
| 0.1832 | 60.0 | 390 | 1.1549 | 0.6739 |
| 0.1832 | 60.92 | 396 | 1.1631 | 0.6957 |
| 0.1732 | 62.0 | 403 | 1.1312 | 0.6957 |
| 0.1732 | 62.92 | 409 | 1.1210 | 0.6957 |
| 0.1856 | 64.0 | 416 | 1.1835 | 0.6739 |
| 0.1503 | 64.92 | 422 | 1.1892 | 0.7174 |
| 0.1503 | 66.0 | 429 | 1.1865 | 0.6739 |
| 0.1713 | 66.92 | 435 | 1.1608 | 0.6739 |
| 0.1804 | 68.0 | 442 | 1.1699 | 0.6739 |
| 0.1804 | 68.92 | 448 | 1.1694 | 0.7174 |
| 0.1761 | 70.0 | 455 | 1.1744 | 0.6957 |
| 0.1619 | 70.92 | 461 | 1.1783 | 0.6957 |
| 0.1619 | 72.0 | 468 | 1.1797 | 0.6957 |
| 0.1649 | 72.92 | 474 | 1.1788 | 0.6957 |
| 0.1843 | 73.85 | 480 | 1.1780 | 0.6957 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
habibi26/ktp-spoof-clip
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ktp-spoof-clip
This model is a fine-tuned version of [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0740
- Accuracy: 0.9853
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
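With `lr_scheduler_type: linear` and `lr_scheduler_warmup_ratio: 0.1`, the learning rate ramps from 0 up to 5e-05 over the first 10% of optimizer steps and then decays linearly back to 0. A standalone sketch of that schedule (the model and the step count are illustrative stand-ins; 60 roughly matches the final step in the results table below):
```python
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(8, 2)  # stand-in model
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5, betas=(0.9, 0.999), eps=1e-8)

num_training_steps = 60
num_warmup_steps = int(0.1 * num_training_steps)  # 6 warmup steps
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps, num_training_steps)

for _ in range(num_training_steps):
    optimizer.step()
    scheduler.step()  # LR rises for 6 steps, then decays linearly to 0
```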
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log | 0.8889 | 4 | 0.5501 | 0.8088 |
| No log | 2.0 | 9 | 0.3671 | 0.8529 |
| 0.5611 | 2.8889 | 13 | 0.3852 | 0.8235 |
| 0.5611 | 4.0 | 18 | 0.2422 | 0.9118 |
| 0.4558 | 4.8889 | 22 | 0.3534 | 0.8824 |
| 0.4558 | 6.0 | 27 | 0.1137 | 0.9412 |
| 0.3562 | 6.8889 | 31 | 0.5266 | 0.7941 |
| 0.3562 | 8.0 | 36 | 0.1918 | 0.9118 |
| 0.1201 | 8.8889 | 40 | 0.0301 | 1.0 |
| 0.1201 | 10.0 | 45 | 0.0450 | 0.9853 |
| 0.1201 | 10.8889 | 49 | 0.0327 | 0.9853 |
| 0.0604 | 12.0 | 54 | 0.0898 | 0.9706 |
| 0.0604 | 12.8889 | 58 | 0.0789 | 0.9853 |
| 0.0322 | 13.3333 | 60 | 0.0740 | 0.9853 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
[
"attack",
"real"
] |
Augusto777/swiftformer-xs-ve-U13-b-80
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swiftformer-xs-ve-U13-b-80
This model is a fine-tuned version of [MBZUAI/swiftformer-xs](https://huggingface.co/MBZUAI/swiftformer-xs) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7132
- Accuracy: 0.8261
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 80
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.92 | 6 | 1.3859 | 0.2391 |
| 1.3857 | 2.0 | 13 | 1.3834 | 0.2826 |
| 1.3857 | 2.92 | 19 | 1.3789 | 0.1957 |
| 1.3767 | 4.0 | 26 | 1.3666 | 0.1522 |
| 1.3226 | 4.92 | 32 | 1.3565 | 0.1522 |
| 1.3226 | 6.0 | 39 | 1.3902 | 0.1087 |
| 1.1987 | 6.92 | 45 | 1.3712 | 0.2174 |
| 1.1075 | 8.0 | 52 | 1.3197 | 0.3478 |
| 1.1075 | 8.92 | 58 | 1.3649 | 0.3696 |
| 0.9988 | 10.0 | 65 | 1.2583 | 0.3696 |
| 0.8863 | 10.92 | 71 | 1.2484 | 0.3696 |
| 0.8863 | 12.0 | 78 | 1.2869 | 0.4130 |
| 0.8228 | 12.92 | 84 | 1.1678 | 0.4783 |
| 0.7456 | 14.0 | 91 | 1.0275 | 0.6739 |
| 0.7456 | 14.92 | 97 | 0.9702 | 0.7174 |
| 0.6595 | 16.0 | 104 | 0.9103 | 0.6957 |
| 0.5995 | 16.92 | 110 | 0.8506 | 0.7391 |
| 0.5995 | 18.0 | 117 | 0.8514 | 0.7174 |
| 0.5826 | 18.92 | 123 | 0.8964 | 0.7391 |
| 0.4818 | 20.0 | 130 | 0.8550 | 0.7609 |
| 0.4818 | 20.92 | 136 | 0.7132 | 0.8261 |
| 0.4553 | 22.0 | 143 | 0.6973 | 0.7826 |
| 0.4553 | 22.92 | 149 | 0.7496 | 0.7391 |
| 0.4276 | 24.0 | 156 | 0.9087 | 0.6957 |
| 0.3375 | 24.92 | 162 | 0.7787 | 0.8261 |
| 0.3375 | 26.0 | 169 | 0.7132 | 0.8043 |
| 0.3199 | 26.92 | 175 | 0.7570 | 0.7391 |
| 0.2756 | 28.0 | 182 | 0.7873 | 0.6957 |
| 0.2756 | 28.92 | 188 | 0.7895 | 0.7609 |
| 0.2254 | 30.0 | 195 | 0.7443 | 0.8043 |
| 0.2576 | 30.92 | 201 | 0.9623 | 0.6739 |
| 0.2576 | 32.0 | 208 | 0.7349 | 0.7826 |
| 0.2113 | 32.92 | 214 | 0.7887 | 0.7609 |
| 0.1978 | 34.0 | 221 | 0.8921 | 0.7391 |
| 0.1978 | 34.92 | 227 | 0.8102 | 0.7391 |
| 0.2455 | 36.0 | 234 | 0.8947 | 0.7391 |
| 0.1809 | 36.92 | 240 | 0.8144 | 0.7826 |
| 0.1809 | 38.0 | 247 | 0.8290 | 0.7174 |
| 0.1967 | 38.92 | 253 | 0.8135 | 0.7391 |
| 0.1608 | 40.0 | 260 | 0.8065 | 0.7609 |
| 0.1608 | 40.92 | 266 | 0.7399 | 0.7609 |
| 0.1704 | 42.0 | 273 | 0.7099 | 0.8043 |
| 0.1704 | 42.92 | 279 | 0.7569 | 0.7826 |
| 0.1682 | 44.0 | 286 | 0.8459 | 0.7826 |
| 0.1607 | 44.92 | 292 | 0.7311 | 0.7609 |
| 0.1607 | 46.0 | 299 | 0.7833 | 0.7174 |
| 0.1589 | 46.92 | 305 | 0.8073 | 0.6957 |
| 0.1524 | 48.0 | 312 | 0.7473 | 0.7609 |
| 0.1524 | 48.92 | 318 | 0.6780 | 0.8043 |
| 0.1586 | 50.0 | 325 | 0.7573 | 0.7174 |
| 0.128 | 50.92 | 331 | 0.7614 | 0.7391 |
| 0.128 | 52.0 | 338 | 0.7338 | 0.7609 |
| 0.1254 | 52.92 | 344 | 0.7666 | 0.7391 |
| 0.1206 | 54.0 | 351 | 0.8433 | 0.7174 |
| 0.1206 | 54.92 | 357 | 0.8747 | 0.6957 |
| 0.1398 | 56.0 | 364 | 0.8940 | 0.7174 |
| 0.1536 | 56.92 | 370 | 0.7781 | 0.7826 |
| 0.1536 | 58.0 | 377 | 0.7351 | 0.7391 |
| 0.1281 | 58.92 | 383 | 0.7601 | 0.7174 |
| 0.1156 | 60.0 | 390 | 0.7991 | 0.7174 |
| 0.1156 | 60.92 | 396 | 0.7776 | 0.7609 |
| 0.0852 | 62.0 | 403 | 0.7838 | 0.7391 |
| 0.0852 | 62.92 | 409 | 0.7752 | 0.7609 |
| 0.1106 | 64.0 | 416 | 0.7541 | 0.7609 |
| 0.0817 | 64.92 | 422 | 0.7536 | 0.7391 |
| 0.0817 | 66.0 | 429 | 0.8129 | 0.7609 |
| 0.1211 | 66.92 | 435 | 0.7884 | 0.7609 |
| 0.0944 | 68.0 | 442 | 0.8011 | 0.7609 |
| 0.0944 | 68.92 | 448 | 0.8068 | 0.7391 |
| 0.1187 | 70.0 | 455 | 0.7796 | 0.7391 |
| 0.0935 | 70.92 | 461 | 0.7934 | 0.7391 |
| 0.0935 | 72.0 | 468 | 0.7367 | 0.7391 |
| 0.109 | 72.92 | 474 | 0.7515 | 0.7391 |
| 0.1006 | 73.85 | 480 | 0.7888 | 0.7174 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
matthieulel/beit-large-patch16-224-pt22k-finetuned-galaxy10-decals
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-large-patch16-224-pt22k-finetuned-galaxy10-decals
This model is a fine-tuned version of [microsoft/beit-large-patch16-224-pt22k](https://huggingface.co/microsoft/beit-large-patch16-224-pt22k) on the matthieulel/galaxy10_decals dataset.
It achieves the following results on the evaluation set (a usage sketch follows the metrics):
- Loss: 0.5047
- Accuracy: 0.8771
- Precision: 0.8770
- Recall: 0.8771
- F1: 0.8764
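As a minimal usage sketch (not part of the original card), the checkpoint can be loaded by its Hub id with the `image-classification` pipeline; the image path below is a placeholder assumption.

```python
from transformers import pipeline

# Minimal inference sketch: load the fine-tuned checkpoint by its Hub id.
classifier = pipeline(
    "image-classification",
    model="matthieulel/beit-large-patch16-224-pt22k-finetuned-galaxy10-decals",
)

# "galaxy.jpg" is a placeholder path; any RGB galaxy cutout works here.
for pred in classifier("galaxy.jpg"):
    print(f"{pred['label']}: {pred['score']:.4f}")
```

The predicted labels correspond to the ten Galaxy10 DECals classes listed after this card.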
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
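A hedged reconstruction of this configuration as `transformers.TrainingArguments`; the output directory is an assumption, and the listed Adam betas/epsilon are the `Trainer` defaults, so they need no explicit arguments.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./beit-large-galaxy10",  # assumed name, not from the card
    learning_rate=5e-5,
    per_device_train_batch_size=64,      # train_batch_size
    per_device_eval_batch_size=64,       # eval_batch_size
    gradient_accumulation_steps=4,       # total train batch size: 64 * 4 = 256
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,                    # lr_scheduler_warmup_ratio
    num_train_epochs=30,
)
```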
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.5632 | 0.99 | 62 | 1.3358 | 0.5265 | 0.5377 | 0.5265 | 0.4840 |
| 0.8801 | 2.0 | 125 | 0.7053 | 0.7717 | 0.7710 | 0.7717 | 0.7559 |
| 0.7408 | 2.99 | 187 | 0.5995 | 0.7897 | 0.7878 | 0.7897 | 0.7803 |
| 0.6124 | 4.0 | 250 | 0.5448 | 0.8140 | 0.8178 | 0.8140 | 0.8076 |
| 0.5799 | 4.99 | 312 | 0.5354 | 0.8174 | 0.8224 | 0.8174 | 0.8165 |
| 0.567 | 6.0 | 375 | 0.5044 | 0.8247 | 0.8314 | 0.8247 | 0.8194 |
| 0.5237 | 6.99 | 437 | 0.4913 | 0.8388 | 0.8429 | 0.8388 | 0.8371 |
| 0.4674 | 8.0 | 500 | 0.4927 | 0.8484 | 0.8541 | 0.8484 | 0.8477 |
| 0.4869 | 8.99 | 562 | 0.4167 | 0.8546 | 0.8570 | 0.8546 | 0.8526 |
| 0.4442 | 10.0 | 625 | 0.4086 | 0.8579 | 0.8583 | 0.8579 | 0.8564 |
| 0.4294 | 10.99 | 687 | 0.4743 | 0.8489 | 0.8516 | 0.8489 | 0.8489 |
| 0.4032 | 12.0 | 750 | 0.4350 | 0.8664 | 0.8651 | 0.8664 | 0.8647 |
| 0.4028 | 12.99 | 812 | 0.4443 | 0.8568 | 0.8623 | 0.8568 | 0.8561 |
| 0.3939 | 14.0 | 875 | 0.4193 | 0.8608 | 0.8605 | 0.8608 | 0.8593 |
| 0.3447 | 14.99 | 937 | 0.4289 | 0.8698 | 0.8692 | 0.8698 | 0.8688 |
| 0.354 | 16.0 | 1000 | 0.4471 | 0.8653 | 0.8661 | 0.8653 | 0.8648 |
| 0.2934 | 16.99 | 1062 | 0.4888 | 0.8574 | 0.8573 | 0.8574 | 0.8546 |
| 0.3262 | 18.0 | 1125 | 0.4605 | 0.8602 | 0.8602 | 0.8602 | 0.8588 |
| 0.3287 | 18.99 | 1187 | 0.4439 | 0.8681 | 0.8682 | 0.8681 | 0.8673 |
| 0.2848 | 20.0 | 1250 | 0.4986 | 0.8641 | 0.8633 | 0.8641 | 0.8615 |
| 0.283 | 20.99 | 1312 | 0.4663 | 0.8692 | 0.8681 | 0.8692 | 0.8676 |
| 0.3106 | 22.0 | 1375 | 0.4668 | 0.8720 | 0.8735 | 0.8720 | 0.8697 |
| 0.2785 | 22.99 | 1437 | 0.4899 | 0.8664 | 0.8649 | 0.8664 | 0.8650 |
| 0.2635 | 24.0 | 1500 | 0.5047 | 0.8771 | 0.8770 | 0.8771 | 0.8764 |
| 0.2573 | 24.99 | 1562 | 0.5144 | 0.8732 | 0.8730 | 0.8732 | 0.8723 |
| 0.238 | 26.0 | 1625 | 0.5012 | 0.8732 | 0.8729 | 0.8732 | 0.8723 |
| 0.2358 | 26.99 | 1687 | 0.5021 | 0.8681 | 0.8709 | 0.8681 | 0.8690 |
| 0.2624 | 28.0 | 1750 | 0.5154 | 0.8715 | 0.8711 | 0.8715 | 0.8705 |
| 0.229 | 28.99 | 1812 | 0.5087 | 0.8698 | 0.8690 | 0.8698 | 0.8689 |
| 0.227 | 29.76 | 1860 | 0.5104 | 0.8726 | 0.8725 | 0.8726 | 0.8718 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.15.1
|
[
"disturbed galaxies",
"merging galaxies",
"round smooth galaxies",
"in-between round smooth galaxies",
"cigar shaped smooth galaxies",
"barred spiral galaxies",
"unbarred tight spiral galaxies",
"unbarred loose spiral galaxies",
"edge-on galaxies without bulge",
"edge-on galaxies with bulge"
] |
Augusto777/swiftformer-xs-ve-U13-b-80b
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swiftformer-xs-ve-U13-b-80b
This model is a fine-tuned version of [MBZUAI/swiftformer-xs](https://huggingface.co/MBZUAI/swiftformer-xs) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2197
- Accuracy: 0.6522
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 80
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.92 | 6 | 1.3862 | 0.2174 |
| 1.3862 | 2.0 | 13 | 1.3856 | 0.3261 |
| 1.3862 | 2.92 | 19 | 1.3849 | 0.2826 |
| 1.3848 | 4.0 | 26 | 1.3830 | 0.2609 |
| 1.3806 | 4.92 | 32 | 1.3804 | 0.1739 |
| 1.3806 | 6.0 | 39 | 1.3758 | 0.1957 |
| 1.3662 | 6.92 | 45 | 1.3700 | 0.1739 |
| 1.3261 | 8.0 | 52 | 1.3652 | 0.1739 |
| 1.3261 | 8.92 | 58 | 1.3625 | 0.1522 |
| 1.2588 | 10.0 | 65 | 1.3629 | 0.1304 |
| 1.1972 | 10.92 | 71 | 1.3592 | 0.1304 |
| 1.1972 | 12.0 | 78 | 1.3570 | 0.2174 |
| 1.1578 | 12.92 | 84 | 1.3590 | 0.1957 |
| 1.124 | 14.0 | 91 | 1.3731 | 0.2174 |
| 1.124 | 14.92 | 97 | 1.3718 | 0.1522 |
| 1.1045 | 16.0 | 104 | 1.3736 | 0.1739 |
| 1.0703 | 16.92 | 110 | 1.4983 | 0.2174 |
| 1.0703 | 18.0 | 117 | 1.5455 | 0.1739 |
| 1.0663 | 18.92 | 123 | 1.4473 | 0.1739 |
| 1.01 | 20.0 | 130 | 1.4011 | 0.2609 |
| 1.01 | 20.92 | 136 | 1.4053 | 0.2826 |
| 0.9961 | 22.0 | 143 | 1.4186 | 0.2174 |
| 0.9961 | 22.92 | 149 | 1.5168 | 0.2609 |
| 0.9754 | 24.0 | 156 | 1.3873 | 0.2826 |
| 0.9417 | 24.92 | 162 | 1.4656 | 0.3261 |
| 0.9417 | 26.0 | 169 | 1.3499 | 0.2609 |
| 0.9286 | 26.92 | 175 | 1.3902 | 0.3043 |
| 0.9216 | 28.0 | 182 | 1.4819 | 0.3261 |
| 0.9216 | 28.92 | 188 | 1.4133 | 0.3043 |
| 0.8868 | 30.0 | 195 | 1.4124 | 0.4130 |
| 0.8908 | 30.92 | 201 | 1.4421 | 0.3478 |
| 0.8908 | 32.0 | 208 | 1.5085 | 0.3043 |
| 0.8729 | 32.92 | 214 | 1.3854 | 0.3478 |
| 0.8685 | 34.0 | 221 | 1.3264 | 0.3043 |
| 0.8685 | 34.92 | 227 | 1.3947 | 0.3043 |
| 0.8739 | 36.0 | 234 | 1.3455 | 0.3913 |
| 0.8288 | 36.92 | 240 | 1.3621 | 0.3913 |
| 0.8288 | 38.0 | 247 | 1.3875 | 0.3913 |
| 0.8369 | 38.92 | 253 | 1.4274 | 0.3696 |
| 0.8101 | 40.0 | 260 | 1.3251 | 0.4565 |
| 0.8101 | 40.92 | 266 | 1.3039 | 0.4783 |
| 0.8126 | 42.0 | 273 | 1.2523 | 0.5435 |
| 0.8126 | 42.92 | 279 | 1.3060 | 0.5217 |
| 0.7971 | 44.0 | 286 | 1.2678 | 0.5217 |
| 0.7806 | 44.92 | 292 | 1.3332 | 0.5000 |
| 0.7806 | 46.0 | 299 | 1.2550 | 0.5652 |
| 0.7899 | 46.92 | 305 | 1.2517 | 0.5870 |
| 0.7602 | 48.0 | 312 | 1.2627 | 0.5870 |
| 0.7602 | 48.92 | 318 | 1.2620 | 0.6087 |
| 0.7748 | 50.0 | 325 | 1.2286 | 0.5652 |
| 0.7613 | 50.92 | 331 | 1.1997 | 0.6087 |
| 0.7613 | 52.0 | 338 | 1.2353 | 0.5870 |
| 0.7514 | 52.92 | 344 | 1.2466 | 0.5870 |
| 0.7581 | 54.0 | 351 | 1.2161 | 0.5870 |
| 0.7581 | 54.92 | 357 | 1.2396 | 0.5435 |
| 0.7401 | 56.0 | 364 | 1.1859 | 0.6087 |
| 0.7421 | 56.92 | 370 | 1.1757 | 0.6304 |
| 0.7421 | 58.0 | 377 | 1.1754 | 0.5870 |
| 0.7261 | 58.92 | 383 | 1.1630 | 0.6304 |
| 0.709 | 60.0 | 390 | 1.2157 | 0.5870 |
| 0.709 | 60.92 | 396 | 1.2124 | 0.6087 |
| 0.7075 | 62.0 | 403 | 1.2095 | 0.6087 |
| 0.7075 | 62.92 | 409 | 1.2543 | 0.5652 |
| 0.7141 | 64.0 | 416 | 1.2210 | 0.6087 |
| 0.6907 | 64.92 | 422 | 1.3190 | 0.5435 |
| 0.6907 | 66.0 | 429 | 1.2197 | 0.6522 |
| 0.7237 | 66.92 | 435 | 1.2365 | 0.5652 |
| 0.6918 | 68.0 | 442 | 1.1570 | 0.6304 |
| 0.6918 | 68.92 | 448 | 1.1790 | 0.6087 |
| 0.7137 | 70.0 | 455 | 1.1968 | 0.6087 |
| 0.6954 | 70.92 | 461 | 1.1959 | 0.6304 |
| 0.6954 | 72.0 | 468 | 1.1782 | 0.6304 |
| 0.6961 | 72.92 | 474 | 1.1935 | 0.5652 |
| 0.6889 | 73.85 | 480 | 1.1835 | 0.6087 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
matthieulel/swinv2-base-patch4-window12to16-192to256-22kto1k-ft-finetuned-galaxy10-decals
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-base-patch4-window12to16-192to256-22kto1k-ft-finetuned-galaxy10-decals
This model is a fine-tuned version of [microsoft/swinv2-base-patch4-window12to16-192to256-22kto1k-ft](https://huggingface.co/microsoft/swinv2-base-patch4-window12to16-192to256-22kto1k-ft) on the matthieulel/galaxy10_decals dataset.
It achieves the following results on the evaluation set (a metric-averaging sketch follows):
- Loss: 0.6138
- Accuracy: 0.8653
- Precision: 0.8633
- Recall: 0.8653
- F1: 0.8633
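The card does not state how the per-class metrics are averaged; the fact that recall exactly equals accuracy (0.8653) points to weighted averaging, since weighted recall reduces to plain accuracy. A minimal sketch under that assumption, with placeholder arrays:

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Placeholder labels/predictions standing in for the evaluation set.
y_true = [0, 1, 2, 2, 1]
y_pred = [0, 1, 2, 1, 1]

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted"  # assumed averaging scheme
)
# With weighted averaging, recall is identical to accuracy.
print(accuracy_score(y_true, y_pred), precision, recall, f1)
```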
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.1028 | 0.99 | 62 | 0.8747 | 0.6815 | 0.7019 | 0.6815 | 0.6725 |
| 0.7637 | 2.0 | 125 | 0.6110 | 0.7993 | 0.8032 | 0.7993 | 0.7944 |
| 0.702 | 2.99 | 187 | 0.5407 | 0.8179 | 0.8282 | 0.8179 | 0.8201 |
| 0.6131 | 4.0 | 250 | 0.5038 | 0.8326 | 0.8356 | 0.8326 | 0.8276 |
| 0.5453 | 4.99 | 312 | 0.4523 | 0.8534 | 0.8547 | 0.8534 | 0.8528 |
| 0.5409 | 6.0 | 375 | 0.4908 | 0.8377 | 0.8389 | 0.8377 | 0.8339 |
| 0.5246 | 6.99 | 437 | 0.4583 | 0.8478 | 0.8509 | 0.8478 | 0.8486 |
| 0.478 | 8.0 | 500 | 0.4417 | 0.8506 | 0.8529 | 0.8506 | 0.8486 |
| 0.4845 | 8.99 | 562 | 0.4344 | 0.8596 | 0.8591 | 0.8596 | 0.8565 |
| 0.4228 | 10.0 | 625 | 0.4580 | 0.8478 | 0.8488 | 0.8478 | 0.8462 |
| 0.4414 | 10.99 | 687 | 0.4520 | 0.8534 | 0.8539 | 0.8534 | 0.8525 |
| 0.3783 | 12.0 | 750 | 0.4776 | 0.8517 | 0.8504 | 0.8517 | 0.8501 |
| 0.407 | 12.99 | 812 | 0.4800 | 0.8478 | 0.8482 | 0.8478 | 0.8444 |
| 0.3944 | 14.0 | 875 | 0.4541 | 0.8630 | 0.8639 | 0.8630 | 0.8618 |
| 0.3563 | 14.99 | 937 | 0.4848 | 0.8534 | 0.8531 | 0.8534 | 0.8523 |
| 0.3576 | 16.0 | 1000 | 0.4877 | 0.8540 | 0.8526 | 0.8540 | 0.8522 |
| 0.317 | 16.99 | 1062 | 0.5122 | 0.8551 | 0.8572 | 0.8551 | 0.8546 |
| 0.3439 | 18.0 | 1125 | 0.5073 | 0.8484 | 0.8509 | 0.8484 | 0.8466 |
| 0.3199 | 18.99 | 1187 | 0.5183 | 0.8574 | 0.8552 | 0.8574 | 0.8555 |
| 0.3121 | 20.0 | 1250 | 0.5367 | 0.8484 | 0.8471 | 0.8484 | 0.8451 |
| 0.2942 | 20.99 | 1312 | 0.5905 | 0.8534 | 0.8506 | 0.8534 | 0.8509 |
| 0.3253 | 22.0 | 1375 | 0.5762 | 0.8495 | 0.8498 | 0.8495 | 0.8478 |
| 0.2917 | 22.99 | 1437 | 0.5865 | 0.8433 | 0.8452 | 0.8433 | 0.8428 |
| 0.2708 | 24.0 | 1500 | 0.5802 | 0.8568 | 0.8532 | 0.8568 | 0.8539 |
| 0.2801 | 24.99 | 1562 | 0.6005 | 0.8557 | 0.8521 | 0.8557 | 0.8525 |
| 0.2608 | 26.0 | 1625 | 0.5916 | 0.8636 | 0.8606 | 0.8636 | 0.8612 |
| 0.2625 | 26.99 | 1687 | 0.5932 | 0.8568 | 0.8551 | 0.8568 | 0.8552 |
| 0.2759 | 28.0 | 1750 | 0.6277 | 0.8568 | 0.8557 | 0.8568 | 0.8546 |
| 0.2483 | 28.99 | 1812 | 0.6055 | 0.8630 | 0.8607 | 0.8630 | 0.8608 |
| 0.2554 | 29.76 | 1860 | 0.6138 | 0.8653 | 0.8633 | 0.8653 | 0.8633 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.15.1
|
[
"disturbed galaxies",
"merging galaxies",
"round smooth galaxies",
"in-between round smooth galaxies",
"cigar shaped smooth galaxies",
"barred spiral galaxies",
"unbarred tight spiral galaxies",
"unbarred loose spiral galaxies",
"edge-on galaxies without bulge",
"edge-on galaxies with bulge"
] |
Augusto777/swiftformer-xs-ve-U13-b-80c
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swiftformer-xs-ve-U13-b-80c
This model is a fine-tuned version of [MBZUAI/swiftformer-xs](https://huggingface.co/MBZUAI/swiftformer-xs) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7032
- Accuracy: 0.8043
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.15
- num_epochs: 80
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.92 | 6 | 1.3860 | 0.2391 |
| 1.3859 | 2.0 | 13 | 1.3844 | 0.3043 |
| 1.3859 | 2.92 | 19 | 1.3820 | 0.1957 |
| 1.381 | 4.0 | 26 | 1.3746 | 0.1739 |
| 1.3573 | 4.92 | 32 | 1.3643 | 0.1957 |
| 1.3573 | 6.0 | 39 | 1.3561 | 0.1522 |
| 1.2692 | 6.92 | 45 | 1.3583 | 0.1522 |
| 1.1682 | 8.0 | 52 | 1.3623 | 0.1739 |
| 1.1682 | 8.92 | 58 | 1.3296 | 0.2609 |
| 1.1005 | 10.0 | 65 | 1.2663 | 0.3913 |
| 0.9884 | 10.92 | 71 | 1.3160 | 0.3696 |
| 0.9884 | 12.0 | 78 | 1.1806 | 0.4783 |
| 0.9111 | 12.92 | 84 | 1.1560 | 0.6087 |
| 0.8464 | 14.0 | 91 | 1.1350 | 0.5870 |
| 0.8464 | 14.92 | 97 | 1.0768 | 0.6304 |
| 0.7768 | 16.0 | 104 | 0.9707 | 0.6087 |
| 0.6754 | 16.92 | 110 | 0.9544 | 0.6522 |
| 0.6754 | 18.0 | 117 | 0.9885 | 0.6739 |
| 0.657 | 18.92 | 123 | 0.8578 | 0.6957 |
| 0.5408 | 20.0 | 130 | 0.7794 | 0.7391 |
| 0.5408 | 20.92 | 136 | 0.8072 | 0.7391 |
| 0.5094 | 22.0 | 143 | 0.7917 | 0.6739 |
| 0.5094 | 22.92 | 149 | 0.7975 | 0.6739 |
| 0.4546 | 24.0 | 156 | 0.7583 | 0.7609 |
| 0.3722 | 24.92 | 162 | 0.7074 | 0.7826 |
| 0.3722 | 26.0 | 169 | 0.6909 | 0.7391 |
| 0.3494 | 26.92 | 175 | 0.7032 | 0.8043 |
| 0.3092 | 28.0 | 182 | 0.8149 | 0.7826 |
| 0.3092 | 28.92 | 188 | 0.7898 | 0.7826 |
| 0.2643 | 30.0 | 195 | 0.7312 | 0.8043 |
| 0.2659 | 30.92 | 201 | 0.7598 | 0.7174 |
| 0.2659 | 32.0 | 208 | 0.7531 | 0.7609 |
| 0.2298 | 32.92 | 214 | 0.6877 | 0.8043 |
| 0.2147 | 34.0 | 221 | 0.6864 | 0.8043 |
| 0.2147 | 34.92 | 227 | 0.7656 | 0.7391 |
| 0.2457 | 36.0 | 234 | 0.8494 | 0.7391 |
| 0.1905 | 36.92 | 240 | 0.7319 | 0.7609 |
| 0.1905 | 38.0 | 247 | 0.8290 | 0.6957 |
| 0.2073 | 38.92 | 253 | 0.7963 | 0.7609 |
| 0.1603 | 40.0 | 260 | 0.8693 | 0.6957 |
| 0.1603 | 40.92 | 266 | 0.7138 | 0.8043 |
| 0.1852 | 42.0 | 273 | 0.7274 | 0.7609 |
| 0.1852 | 42.92 | 279 | 0.8353 | 0.6739 |
| 0.1641 | 44.0 | 286 | 0.9382 | 0.6957 |
| 0.1568 | 44.92 | 292 | 0.8655 | 0.7174 |
| 0.1568 | 46.0 | 299 | 0.7621 | 0.7391 |
| 0.1498 | 46.92 | 305 | 0.7944 | 0.7174 |
| 0.1563 | 48.0 | 312 | 0.8433 | 0.6957 |
| 0.1563 | 48.92 | 318 | 0.8633 | 0.7609 |
| 0.1554 | 50.0 | 325 | 0.8543 | 0.7391 |
| 0.1316 | 50.92 | 331 | 0.9127 | 0.7174 |
| 0.1316 | 52.0 | 338 | 0.9248 | 0.6957 |
| 0.1264 | 52.92 | 344 | 0.9349 | 0.6957 |
| 0.1082 | 54.0 | 351 | 0.9785 | 0.6739 |
| 0.1082 | 54.92 | 357 | 1.0165 | 0.6739 |
| 0.1366 | 56.0 | 364 | 0.8369 | 0.6957 |
| 0.1546 | 56.92 | 370 | 0.8372 | 0.7174 |
| 0.1546 | 58.0 | 377 | 0.8596 | 0.6957 |
| 0.1218 | 58.92 | 383 | 0.8054 | 0.7174 |
| 0.1162 | 60.0 | 390 | 0.7963 | 0.7391 |
| 0.1162 | 60.92 | 396 | 0.7953 | 0.7391 |
| 0.0876 | 62.0 | 403 | 0.8229 | 0.7391 |
| 0.0876 | 62.92 | 409 | 0.8365 | 0.7391 |
| 0.1032 | 64.0 | 416 | 0.8162 | 0.7609 |
| 0.0825 | 64.92 | 422 | 0.8646 | 0.7391 |
| 0.0825 | 66.0 | 429 | 0.9135 | 0.7391 |
| 0.1119 | 66.92 | 435 | 0.9164 | 0.7391 |
| 0.0949 | 68.0 | 442 | 0.9232 | 0.7391 |
| 0.0949 | 68.92 | 448 | 0.9381 | 0.7391 |
| 0.1227 | 70.0 | 455 | 0.8998 | 0.7391 |
| 0.0872 | 70.92 | 461 | 0.9632 | 0.7174 |
| 0.0872 | 72.0 | 468 | 0.8566 | 0.7174 |
| 0.1033 | 72.92 | 474 | 0.8909 | 0.7174 |
| 0.0876 | 73.85 | 480 | 0.8869 | 0.7609 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
Augusto777/swiftformer-xs-ve-U13-b-80d
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swiftformer-xs-ve-U13-b-80d
This model is a fine-tuned version of [MBZUAI/swiftformer-xs](https://huggingface.co/MBZUAI/swiftformer-xs) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7669
- Accuracy: 0.8261
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 70
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.92 | 6 | 1.3848 | 0.2174 |
| 1.3838 | 2.0 | 13 | 1.3723 | 0.1957 |
| 1.3838 | 2.92 | 19 | 1.3540 | 0.1739 |
| 1.3023 | 4.0 | 26 | 1.3327 | 0.2391 |
| 1.1398 | 4.92 | 32 | 1.2555 | 0.2391 |
| 1.1398 | 6.0 | 39 | 1.3010 | 0.3913 |
| 1.0076 | 6.92 | 45 | 1.1957 | 0.5000 |
| 0.8823 | 8.0 | 52 | 1.0565 | 0.5870 |
| 0.8823 | 8.92 | 58 | 0.9410 | 0.7391 |
| 0.7637 | 10.0 | 65 | 0.9274 | 0.7391 |
| 0.6688 | 10.92 | 71 | 0.8492 | 0.7826 |
| 0.6688 | 12.0 | 78 | 0.8906 | 0.6739 |
| 0.5855 | 12.92 | 84 | 0.8929 | 0.6522 |
| 0.4921 | 14.0 | 91 | 0.8338 | 0.7391 |
| 0.4921 | 14.92 | 97 | 0.7686 | 0.7826 |
| 0.4318 | 16.0 | 104 | 0.8430 | 0.7609 |
| 0.386 | 16.92 | 110 | 0.8315 | 0.7826 |
| 0.386 | 18.0 | 117 | 0.7669 | 0.8261 |
| 0.3483 | 18.92 | 123 | 0.8347 | 0.7174 |
| 0.3023 | 20.0 | 130 | 1.1037 | 0.6304 |
| 0.3023 | 20.92 | 136 | 0.9024 | 0.7174 |
| 0.2973 | 22.0 | 143 | 0.7760 | 0.7826 |
| 0.2973 | 22.92 | 149 | 0.7400 | 0.7826 |
| 0.2529 | 24.0 | 156 | 1.0058 | 0.7174 |
| 0.2086 | 24.92 | 162 | 0.9260 | 0.7609 |
| 0.2086 | 26.0 | 169 | 0.8370 | 0.7174 |
| 0.2265 | 26.92 | 175 | 0.8060 | 0.7391 |
| 0.1942 | 28.0 | 182 | 0.9812 | 0.6957 |
| 0.1942 | 28.92 | 188 | 0.8996 | 0.7391 |
| 0.1708 | 30.0 | 195 | 0.9630 | 0.6957 |
| 0.1747 | 30.92 | 201 | 0.9691 | 0.7174 |
| 0.1747 | 32.0 | 208 | 1.0017 | 0.7391 |
| 0.1461 | 32.92 | 214 | 0.9965 | 0.6957 |
| 0.1457 | 34.0 | 221 | 0.9506 | 0.7391 |
| 0.1457 | 34.92 | 227 | 0.9834 | 0.7391 |
| 0.1814 | 36.0 | 234 | 1.0191 | 0.7609 |
| 0.1383 | 36.92 | 240 | 0.8807 | 0.7609 |
| 0.1383 | 38.0 | 247 | 0.8724 | 0.7609 |
| 0.1718 | 38.92 | 253 | 0.8090 | 0.7391 |
| 0.1289 | 40.0 | 260 | 0.8709 | 0.7609 |
| 0.1289 | 40.92 | 266 | 0.9704 | 0.7391 |
| 0.1193 | 42.0 | 273 | 1.0518 | 0.7391 |
| 0.1193 | 42.92 | 279 | 1.0157 | 0.7174 |
| 0.1224 | 44.0 | 286 | 1.0794 | 0.7391 |
| 0.1104 | 44.92 | 292 | 1.0402 | 0.7391 |
| 0.1104 | 46.0 | 299 | 0.9837 | 0.7609 |
| 0.1055 | 46.92 | 305 | 1.0323 | 0.7174 |
| 0.1242 | 48.0 | 312 | 0.9907 | 0.7391 |
| 0.1242 | 48.92 | 318 | 1.0436 | 0.7609 |
| 0.1283 | 50.0 | 325 | 0.9829 | 0.7391 |
| 0.1035 | 50.92 | 331 | 1.0400 | 0.7174 |
| 0.1035 | 52.0 | 338 | 1.0414 | 0.7174 |
| 0.1066 | 52.92 | 344 | 1.0958 | 0.6957 |
| 0.0863 | 54.0 | 351 | 1.0914 | 0.7174 |
| 0.0863 | 54.92 | 357 | 1.0816 | 0.7174 |
| 0.1062 | 56.0 | 364 | 1.0087 | 0.7174 |
| 0.1214 | 56.92 | 370 | 1.0454 | 0.7391 |
| 0.1214 | 58.0 | 377 | 1.0324 | 0.7391 |
| 0.0984 | 58.92 | 383 | 1.0591 | 0.6739 |
| 0.0966 | 60.0 | 390 | 1.0037 | 0.6957 |
| 0.0966 | 60.92 | 396 | 0.9887 | 0.6957 |
| 0.0626 | 62.0 | 403 | 1.0294 | 0.6739 |
| 0.0626 | 62.92 | 409 | 0.9939 | 0.7174 |
| 0.085 | 64.0 | 416 | 0.9886 | 0.6957 |
| 0.068 | 64.62 | 420 | 1.0773 | 0.6739 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
Augusto777/beit-base-patch16-224-ve-U13-b-80
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-base-patch16-224-ve-U13-b-80
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set (an inference sketch follows):
- Loss: 0.6397
- Accuracy: 0.8478
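A minimal single-image inference sketch for this checkpoint; the class names come from the label list accompanying this card (avanzada, leve, moderada, no dmae), and the input path is a placeholder.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "Augusto777/beit-base-patch16-224-ve-U13-b-80"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("scan.png").convert("RGB")  # placeholder input path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```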
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 80
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.92 | 6 | 1.3187 | 0.4565 |
| 1.6193 | 2.0 | 13 | 1.3087 | 0.4565 |
| 1.6193 | 2.92 | 19 | 1.2939 | 0.4565 |
| 1.6044 | 4.0 | 26 | 1.2802 | 0.4565 |
| 1.5061 | 4.92 | 32 | 1.2867 | 0.4565 |
| 1.5061 | 6.0 | 39 | 1.2813 | 0.4565 |
| 1.3879 | 6.92 | 45 | 1.2511 | 0.4565 |
| 1.3007 | 8.0 | 52 | 1.1294 | 0.5652 |
| 1.3007 | 8.92 | 58 | 1.0096 | 0.5435 |
| 1.1213 | 10.0 | 65 | 0.9308 | 0.5217 |
| 0.9968 | 10.92 | 71 | 0.9280 | 0.5435 |
| 0.9968 | 12.0 | 78 | 0.8034 | 0.6087 |
| 0.8771 | 12.92 | 84 | 0.7791 | 0.6522 |
| 0.7383 | 14.0 | 91 | 0.8005 | 0.6739 |
| 0.7383 | 14.92 | 97 | 0.7408 | 0.7391 |
| 0.6658 | 16.0 | 104 | 0.9305 | 0.6304 |
| 0.5879 | 16.92 | 110 | 0.7136 | 0.7609 |
| 0.5879 | 18.0 | 117 | 0.7106 | 0.7609 |
| 0.4609 | 18.92 | 123 | 0.6998 | 0.6957 |
| 0.4123 | 20.0 | 130 | 0.7931 | 0.7609 |
| 0.4123 | 20.92 | 136 | 0.9417 | 0.6739 |
| 0.3552 | 22.0 | 143 | 0.7868 | 0.7174 |
| 0.3552 | 22.92 | 149 | 0.9073 | 0.6957 |
| 0.2896 | 24.0 | 156 | 0.8542 | 0.7174 |
| 0.2316 | 24.92 | 162 | 0.7159 | 0.7391 |
| 0.2316 | 26.0 | 169 | 0.7219 | 0.7174 |
| 0.2339 | 26.92 | 175 | 0.7071 | 0.7609 |
| 0.2055 | 28.0 | 182 | 1.0110 | 0.6739 |
| 0.2055 | 28.92 | 188 | 0.6397 | 0.8478 |
| 0.1995 | 30.0 | 195 | 0.6922 | 0.8478 |
| 0.169 | 30.92 | 201 | 0.6171 | 0.8478 |
| 0.169 | 32.0 | 208 | 0.6632 | 0.8261 |
| 0.1586 | 32.92 | 214 | 0.6475 | 0.8261 |
| 0.1439 | 34.0 | 221 | 0.8332 | 0.6957 |
| 0.1439 | 34.92 | 227 | 0.6816 | 0.7826 |
| 0.1698 | 36.0 | 234 | 0.8066 | 0.7609 |
| 0.1362 | 36.92 | 240 | 0.7150 | 0.8043 |
| 0.1362 | 38.0 | 247 | 0.7193 | 0.8043 |
| 0.1344 | 38.92 | 253 | 0.8181 | 0.7609 |
| 0.1317 | 40.0 | 260 | 0.6547 | 0.8261 |
| 0.1317 | 40.92 | 266 | 0.8459 | 0.7609 |
| 0.123 | 42.0 | 273 | 0.7700 | 0.8261 |
| 0.123 | 42.92 | 279 | 0.9338 | 0.7391 |
| 0.102 | 44.0 | 286 | 0.8536 | 0.8043 |
| 0.1015 | 44.92 | 292 | 0.9725 | 0.7391 |
| 0.1015 | 46.0 | 299 | 0.8865 | 0.8043 |
| 0.1313 | 46.92 | 305 | 0.8947 | 0.8261 |
| 0.1312 | 48.0 | 312 | 0.8235 | 0.8043 |
| 0.1312 | 48.92 | 318 | 0.7326 | 0.8261 |
| 0.1168 | 50.0 | 325 | 0.8654 | 0.7609 |
| 0.09 | 50.92 | 331 | 0.7645 | 0.8261 |
| 0.09 | 52.0 | 338 | 0.7632 | 0.8478 |
| 0.0872 | 52.92 | 344 | 0.7496 | 0.8043 |
| 0.0813 | 54.0 | 351 | 0.8846 | 0.8043 |
| 0.0813 | 54.92 | 357 | 0.9214 | 0.7826 |
| 0.0955 | 56.0 | 364 | 0.9284 | 0.7826 |
| 0.1031 | 56.92 | 370 | 0.8855 | 0.7826 |
| 0.1031 | 58.0 | 377 | 0.8619 | 0.8043 |
| 0.0962 | 58.92 | 383 | 0.8187 | 0.8261 |
| 0.0891 | 60.0 | 390 | 0.7430 | 0.8478 |
| 0.0891 | 60.92 | 396 | 0.7530 | 0.8478 |
| 0.0679 | 62.0 | 403 | 0.7790 | 0.8261 |
| 0.0679 | 62.92 | 409 | 0.7905 | 0.8261 |
| 0.0805 | 64.0 | 416 | 0.8286 | 0.8261 |
| 0.0619 | 64.92 | 422 | 0.8371 | 0.8043 |
| 0.0619 | 66.0 | 429 | 0.8655 | 0.8043 |
| 0.0778 | 66.92 | 435 | 0.8897 | 0.8043 |
| 0.0712 | 68.0 | 442 | 0.9385 | 0.8043 |
| 0.0712 | 68.92 | 448 | 0.9611 | 0.8043 |
| 0.0659 | 70.0 | 455 | 0.9597 | 0.8043 |
| 0.0602 | 70.92 | 461 | 0.9635 | 0.8043 |
| 0.0602 | 72.0 | 468 | 0.9733 | 0.8043 |
| 0.0641 | 72.92 | 474 | 0.9754 | 0.8043 |
| 0.0653 | 73.85 | 480 | 0.9753 | 0.8043 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
Augusto777/beit-base-patch16-224-ve-U13-b-80b
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-base-patch16-224-ve-U13-b-80b
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7122
- Accuracy: 0.8478
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 80
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.92 | 6 | 1.3182 | 0.4565 |
| 1.6182 | 2.0 | 13 | 1.3056 | 0.4565 |
| 1.6182 | 2.92 | 19 | 1.2884 | 0.4565 |
| 1.592 | 4.0 | 26 | 1.2807 | 0.4565 |
| 1.4756 | 4.92 | 32 | 1.2991 | 0.4565 |
| 1.4756 | 6.0 | 39 | 1.2451 | 0.5000 |
| 1.352 | 6.92 | 45 | 1.1845 | 0.5217 |
| 1.2143 | 8.0 | 52 | 1.0315 | 0.6087 |
| 1.2143 | 8.92 | 58 | 0.9289 | 0.5435 |
| 1.0327 | 10.0 | 65 | 0.8925 | 0.5435 |
| 0.8878 | 10.92 | 71 | 0.8633 | 0.5652 |
| 0.8878 | 12.0 | 78 | 0.7566 | 0.6304 |
| 0.7712 | 12.92 | 84 | 0.7669 | 0.7609 |
| 0.6808 | 14.0 | 91 | 0.7635 | 0.7609 |
| 0.6808 | 14.92 | 97 | 0.8653 | 0.6304 |
| 0.5844 | 16.0 | 104 | 0.7193 | 0.7174 |
| 0.4332 | 16.92 | 110 | 0.6186 | 0.7826 |
| 0.4332 | 18.0 | 117 | 1.0295 | 0.6739 |
| 0.3607 | 18.92 | 123 | 0.8007 | 0.7609 |
| 0.3134 | 20.0 | 130 | 0.6790 | 0.7826 |
| 0.3134 | 20.92 | 136 | 0.8013 | 0.7391 |
| 0.2988 | 22.0 | 143 | 0.7481 | 0.7609 |
| 0.2988 | 22.92 | 149 | 0.9280 | 0.6739 |
| 0.2487 | 24.0 | 156 | 0.6542 | 0.7391 |
| 0.1912 | 24.92 | 162 | 0.7134 | 0.7609 |
| 0.1912 | 26.0 | 169 | 0.8421 | 0.7609 |
| 0.1946 | 26.92 | 175 | 0.7284 | 0.7391 |
| 0.1685 | 28.0 | 182 | 0.7507 | 0.8261 |
| 0.1685 | 28.92 | 188 | 0.7610 | 0.8043 |
| 0.1646 | 30.0 | 195 | 0.8013 | 0.7826 |
| 0.166 | 30.92 | 201 | 0.8803 | 0.7826 |
| 0.166 | 32.0 | 208 | 0.7895 | 0.7391 |
| 0.1372 | 32.92 | 214 | 0.7760 | 0.7174 |
| 0.1424 | 34.0 | 221 | 0.9390 | 0.7174 |
| 0.1424 | 34.92 | 227 | 0.7839 | 0.8043 |
| 0.1399 | 36.0 | 234 | 0.9422 | 0.7609 |
| 0.1238 | 36.92 | 240 | 0.8710 | 0.7174 |
| 0.1238 | 38.0 | 247 | 0.8684 | 0.7826 |
| 0.123 | 38.92 | 253 | 0.8194 | 0.7609 |
| 0.1381 | 40.0 | 260 | 0.9698 | 0.7391 |
| 0.1381 | 40.92 | 266 | 0.8545 | 0.7609 |
| 0.1081 | 42.0 | 273 | 0.9925 | 0.6739 |
| 0.1081 | 42.92 | 279 | 0.9320 | 0.8043 |
| 0.0929 | 44.0 | 286 | 1.0242 | 0.7609 |
| 0.0898 | 44.92 | 292 | 0.9411 | 0.7609 |
| 0.0898 | 46.0 | 299 | 0.8995 | 0.7609 |
| 0.12 | 46.92 | 305 | 0.7741 | 0.7826 |
| 0.1126 | 48.0 | 312 | 0.7122 | 0.8478 |
| 0.1126 | 48.92 | 318 | 0.9099 | 0.7826 |
| 0.1088 | 50.0 | 325 | 1.1148 | 0.6957 |
| 0.0851 | 50.92 | 331 | 0.9297 | 0.8043 |
| 0.0851 | 52.0 | 338 | 0.8801 | 0.8043 |
| 0.1001 | 52.92 | 344 | 0.8428 | 0.8261 |
| 0.0718 | 54.0 | 351 | 0.9721 | 0.7826 |
| 0.0718 | 54.92 | 357 | 0.8771 | 0.8043 |
| 0.0842 | 56.0 | 364 | 0.9982 | 0.7826 |
| 0.1069 | 56.92 | 370 | 1.1083 | 0.7391 |
| 0.1069 | 58.0 | 377 | 0.9072 | 0.7826 |
| 0.0803 | 58.92 | 383 | 0.7979 | 0.8261 |
| 0.0752 | 60.0 | 390 | 0.7489 | 0.8478 |
| 0.0752 | 60.92 | 396 | 0.8023 | 0.8261 |
| 0.0646 | 62.0 | 403 | 0.8027 | 0.8261 |
| 0.0646 | 62.92 | 409 | 0.8275 | 0.7826 |
| 0.0829 | 64.0 | 416 | 0.8587 | 0.8043 |
| 0.0616 | 64.92 | 422 | 0.8870 | 0.8043 |
| 0.0616 | 66.0 | 429 | 0.8928 | 0.8043 |
| 0.0693 | 66.92 | 435 | 0.9289 | 0.7826 |
| 0.0657 | 68.0 | 442 | 0.9604 | 0.7609 |
| 0.0657 | 68.92 | 448 | 0.9560 | 0.7826 |
| 0.0588 | 70.0 | 455 | 0.9544 | 0.7609 |
| 0.0578 | 70.92 | 461 | 0.9419 | 0.7826 |
| 0.0578 | 72.0 | 468 | 0.9474 | 0.7826 |
| 0.0638 | 72.92 | 474 | 0.9540 | 0.7826 |
| 0.0592 | 73.85 | 480 | 0.9549 | 0.7826 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
Augusto777/swiftformer-xs-ve-U13-b-80e
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swiftformer-xs-ve-U13-b-80e
This model is a fine-tuned version of [MBZUAI/swiftformer-xs](https://huggingface.co/MBZUAI/swiftformer-xs) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6618
- Accuracy: 0.8478
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.15
- num_epochs: 80
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.92 | 6 | 1.3859 | 0.2391 |
| 1.3857 | 2.0 | 13 | 1.3834 | 0.3261 |
| 1.3857 | 2.92 | 19 | 1.3789 | 0.1957 |
| 1.3767 | 4.0 | 26 | 1.3666 | 0.1739 |
| 1.3227 | 4.92 | 32 | 1.3565 | 0.1522 |
| 1.3227 | 6.0 | 39 | 1.3887 | 0.1087 |
| 1.1987 | 6.92 | 45 | 1.3719 | 0.2174 |
| 1.1071 | 8.0 | 52 | 1.3271 | 0.3043 |
| 1.1071 | 8.92 | 58 | 1.3562 | 0.2609 |
| 0.9926 | 10.0 | 65 | 1.2306 | 0.4130 |
| 0.8721 | 10.92 | 71 | 1.1953 | 0.4565 |
| 0.8721 | 12.0 | 78 | 1.0754 | 0.5652 |
| 0.7746 | 12.92 | 84 | 0.9931 | 0.6739 |
| 0.6859 | 14.0 | 91 | 0.9979 | 0.6739 |
| 0.6859 | 14.92 | 97 | 0.8964 | 0.6957 |
| 0.5777 | 16.0 | 104 | 0.9186 | 0.6522 |
| 0.5136 | 16.92 | 110 | 0.7950 | 0.7609 |
| 0.5136 | 18.0 | 117 | 0.7794 | 0.7391 |
| 0.5019 | 18.92 | 123 | 0.8645 | 0.7174 |
| 0.3879 | 20.0 | 130 | 0.8773 | 0.6957 |
| 0.3879 | 20.92 | 136 | 0.7304 | 0.7609 |
| 0.3532 | 22.0 | 143 | 0.6918 | 0.7609 |
| 0.3532 | 22.92 | 149 | 0.7882 | 0.7609 |
| 0.3288 | 24.0 | 156 | 0.7132 | 0.7609 |
| 0.2573 | 24.92 | 162 | 0.6645 | 0.8043 |
| 0.2573 | 26.0 | 169 | 0.6618 | 0.8478 |
| 0.239 | 26.92 | 175 | 0.6780 | 0.8043 |
| 0.2018 | 28.0 | 182 | 0.8138 | 0.6957 |
| 0.2018 | 28.92 | 188 | 0.8797 | 0.6957 |
| 0.1961 | 30.0 | 195 | 0.8602 | 0.7174 |
| 0.214 | 30.92 | 201 | 0.8188 | 0.7391 |
| 0.214 | 32.0 | 208 | 0.6956 | 0.7609 |
| 0.1596 | 32.92 | 214 | 0.7981 | 0.7391 |
| 0.172 | 34.0 | 221 | 0.6845 | 0.7609 |
| 0.172 | 34.92 | 227 | 0.9340 | 0.7174 |
| 0.1852 | 36.0 | 234 | 0.9548 | 0.6522 |
| 0.1492 | 36.92 | 240 | 0.7747 | 0.7609 |
| 0.1492 | 38.0 | 247 | 0.9907 | 0.6304 |
| 0.1735 | 38.92 | 253 | 0.8040 | 0.7174 |
| 0.1405 | 40.0 | 260 | 0.6946 | 0.7609 |
| 0.1405 | 40.92 | 266 | 0.7019 | 0.7609 |
| 0.1269 | 42.0 | 273 | 0.8246 | 0.7174 |
| 0.1269 | 42.92 | 279 | 0.9238 | 0.6739 |
| 0.1237 | 44.0 | 286 | 0.9354 | 0.6957 |
| 0.1201 | 44.92 | 292 | 0.7543 | 0.7391 |
| 0.1201 | 46.0 | 299 | 0.7151 | 0.7174 |
| 0.1134 | 46.92 | 305 | 0.7284 | 0.7174 |
| 0.1141 | 48.0 | 312 | 0.7791 | 0.7609 |
| 0.1141 | 48.92 | 318 | 0.7824 | 0.7391 |
| 0.1253 | 50.0 | 325 | 0.7319 | 0.7609 |
| 0.0968 | 50.92 | 331 | 0.7151 | 0.7609 |
| 0.0968 | 52.0 | 338 | 0.7662 | 0.7609 |
| 0.0996 | 52.92 | 344 | 0.8086 | 0.7826 |
| 0.0844 | 54.0 | 351 | 0.8921 | 0.7609 |
| 0.0844 | 54.92 | 357 | 0.8782 | 0.7609 |
| 0.1141 | 56.0 | 364 | 0.7864 | 0.7391 |
| 0.1263 | 56.92 | 370 | 0.7125 | 0.7609 |
| 0.1263 | 58.0 | 377 | 0.6758 | 0.7609 |
| 0.0966 | 58.92 | 383 | 0.7243 | 0.7609 |
| 0.0771 | 60.0 | 390 | 0.7090 | 0.7609 |
| 0.0771 | 60.92 | 396 | 0.7157 | 0.7609 |
| 0.0497 | 62.0 | 403 | 0.7549 | 0.7609 |
| 0.0497 | 62.92 | 409 | 0.7806 | 0.7609 |
| 0.0848 | 64.0 | 416 | 0.7902 | 0.7391 |
| 0.0477 | 64.92 | 422 | 0.7684 | 0.7391 |
| 0.0477 | 66.0 | 429 | 0.8038 | 0.6957 |
| 0.0823 | 66.92 | 435 | 0.7503 | 0.6957 |
| 0.0726 | 68.0 | 442 | 0.7634 | 0.7609 |
| 0.0726 | 68.92 | 448 | 0.7860 | 0.7826 |
| 0.0799 | 70.0 | 455 | 0.7630 | 0.7609 |
| 0.067 | 70.92 | 461 | 0.8094 | 0.7391 |
| 0.067 | 72.0 | 468 | 0.7511 | 0.7391 |
| 0.0893 | 72.92 | 474 | 0.7738 | 0.7391 |
| 0.0738 | 73.85 | 480 | 0.7971 | 0.7391 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
[
"avanzada",
"leve",
"moderada",
"no dmae"
] |
matthieulel/beit-base-patch16-224-finetuned-galaxy10-decals
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-base-patch16-224-finetuned-galaxy10-decals
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the matthieulel/galaxy10_decals dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4611
- Accuracy: 0.8687
- Precision: 0.8668
- Recall: 0.8687
- F1: 0.8672
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.1109 | 0.99 | 62 | 0.8733 | 0.7108 | 0.7055 | 0.7108 | 0.6891 |
| 0.7936 | 2.0 | 125 | 0.6203 | 0.7903 | 0.7882 | 0.7903 | 0.7816 |
| 0.7011 | 2.99 | 187 | 0.5244 | 0.8269 | 0.8322 | 0.8269 | 0.8246 |
| 0.5824 | 4.0 | 250 | 0.5032 | 0.8298 | 0.8327 | 0.8298 | 0.8264 |
| 0.5333 | 4.99 | 312 | 0.5270 | 0.8236 | 0.8299 | 0.8236 | 0.8246 |
| 0.5432 | 6.0 | 375 | 0.5006 | 0.8337 | 0.8418 | 0.8337 | 0.8286 |
| 0.5206 | 6.99 | 437 | 0.4891 | 0.8331 | 0.8357 | 0.8331 | 0.8330 |
| 0.4597 | 8.0 | 500 | 0.4484 | 0.8512 | 0.8510 | 0.8512 | 0.8501 |
| 0.4779 | 8.99 | 562 | 0.4499 | 0.8529 | 0.8547 | 0.8529 | 0.8518 |
| 0.4199 | 10.0 | 625 | 0.4354 | 0.8602 | 0.8583 | 0.8602 | 0.8576 |
| 0.4242 | 10.99 | 687 | 0.4295 | 0.8512 | 0.8523 | 0.8512 | 0.8510 |
| 0.4048 | 12.0 | 750 | 0.4662 | 0.8461 | 0.8490 | 0.8461 | 0.8456 |
| 0.403 | 12.99 | 812 | 0.4424 | 0.8591 | 0.8586 | 0.8591 | 0.8579 |
| 0.3958 | 14.0 | 875 | 0.4448 | 0.8546 | 0.8558 | 0.8546 | 0.8546 |
| 0.3602 | 14.99 | 937 | 0.4378 | 0.8585 | 0.8582 | 0.8585 | 0.8572 |
| 0.352 | 16.0 | 1000 | 0.4570 | 0.8546 | 0.8539 | 0.8546 | 0.8527 |
| 0.3041 | 16.99 | 1062 | 0.4601 | 0.8506 | 0.8486 | 0.8506 | 0.8486 |
| 0.3439 | 18.0 | 1125 | 0.4483 | 0.8489 | 0.8519 | 0.8489 | 0.8496 |
| 0.3148 | 18.99 | 1187 | 0.4372 | 0.8625 | 0.8598 | 0.8625 | 0.8606 |
| 0.3075 | 20.0 | 1250 | 0.4370 | 0.8585 | 0.8556 | 0.8585 | 0.8567 |
| 0.2999 | 20.99 | 1312 | 0.4570 | 0.8546 | 0.8540 | 0.8546 | 0.8535 |
| 0.3156 | 22.0 | 1375 | 0.4541 | 0.8540 | 0.8525 | 0.8540 | 0.8527 |
| 0.2845 | 22.99 | 1437 | 0.4702 | 0.8574 | 0.8584 | 0.8574 | 0.8576 |
| 0.2702 | 24.0 | 1500 | 0.4624 | 0.8630 | 0.8621 | 0.8630 | 0.8615 |
| 0.2914 | 24.99 | 1562 | 0.4670 | 0.8591 | 0.8575 | 0.8591 | 0.8577 |
| 0.2464 | 26.0 | 1625 | 0.4581 | 0.8658 | 0.8648 | 0.8658 | 0.8650 |
| 0.2621 | 26.99 | 1687 | 0.4688 | 0.8658 | 0.8662 | 0.8658 | 0.8657 |
| 0.2731 | 28.0 | 1750 | 0.4664 | 0.8664 | 0.8643 | 0.8664 | 0.8648 |
| 0.2448 | 28.99 | 1812 | 0.4611 | 0.8687 | 0.8668 | 0.8687 | 0.8672 |
| 0.2517 | 29.76 | 1860 | 0.4612 | 0.8675 | 0.8660 | 0.8675 | 0.8663 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.15.1
|
[
"disturbed galaxies",
"merging galaxies",
"round smooth galaxies",
"in-between round smooth galaxies",
"cigar shaped smooth galaxies",
"barred spiral galaxies",
"unbarred tight spiral galaxies",
"unbarred loose spiral galaxies",
"edge-on galaxies without bulge",
"edge-on galaxies with bulge"
] |
ThatOrJohn/resnet-50-pineapple
|
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
No validation metrics available
|
[
"pineapple-chunk",
"pineapple-ring"
] |
Skullly/results
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1114
- Accuracy: 0.9687
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a scheduler sketch follows the list):
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 4
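Unlike the linear schedules elsewhere in this dump, this run pairs a cosine schedule with a fixed 1000-step warmup. A hedged sketch of the equivalent scheduler; the optimizer wraps a dummy parameter, and the 2100 total steps are read off the last row of the results table below.

```python
import torch
from transformers import get_cosine_schedule_with_warmup

# Placeholder optimizer; the learning rate matches the card's 3e-05.
optimizer = torch.optim.Adam([torch.nn.Parameter(torch.zeros(1))], lr=3e-5)

scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=1000, num_training_steps=2100
)
```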
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.6639 | 0.1829 | 100 | 0.6155 | 0.6554 |
| 0.4191 | 0.3657 | 200 | 0.3088 | 0.8959 |
| 0.1698 | 0.5486 | 300 | 0.5321 | 0.7281 |
| 0.0749 | 0.7314 | 400 | 0.5087 | 0.7900 |
| 0.0484 | 0.9143 | 500 | 0.4649 | 0.8185 |
| 0.0323 | 1.0971 | 600 | 0.6888 | 0.7620 |
| 0.0264 | 1.28 | 700 | 0.1395 | 0.9513 |
| 0.0224 | 1.4629 | 800 | 0.0661 | 0.9776 |
| 0.02 | 1.6457 | 900 | 0.1173 | 0.9581 |
| 0.0168 | 1.8286 | 1000 | 0.3498 | 0.8890 |
| 0.013 | 2.0114 | 1100 | 0.1053 | 0.9655 |
| 0.0087 | 2.1943 | 1200 | 0.3601 | 0.8947 |
| 0.0081 | 2.3771 | 1300 | 0.1508 | 0.9535 |
| 0.0073 | 2.56 | 1400 | 0.2090 | 0.9390 |
| 0.0056 | 2.7429 | 1500 | 0.1136 | 0.9649 |
| 0.005 | 2.9257 | 1600 | 0.2656 | 0.9206 |
| 0.0036 | 3.1086 | 1700 | 0.1320 | 0.9595 |
| 0.002 | 3.2914 | 1800 | 0.1068 | 0.9686 |
| 0.0018 | 3.4743 | 1900 | 0.1091 | 0.9690 |
| 0.0019 | 3.6571 | 2000 | 0.1114 | 0.9687 |
| 0.0018 | 3.84 | 2100 | 0.0968 | 0.9719 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
[
"f",
"r"
] |
ThatOrJohn/efficientnet-b1-pineapple
|
## Model description
Fine-tuned to classify pineapple rings versus pineapple chunks.
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
No validation metrics available
|
[
"pineapple-chunk",
"pineapple-ring"
] |
mjun/swin-tiny-patch4-window7-224-musinsa
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-musinsa
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1442
- Precision: 0.9553
- Recall: 0.9552
- F1: 0.9551
- Accuracy: 0.9552
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 202 | 0.1506 | 0.9427 | 0.9423 | 0.9424 | 0.9423 |
| No log | 2.0 | 404 | 0.1260 | 0.9549 | 0.9548 | 0.9548 | 0.9548 |
| 0.158 | 3.0 | 606 | 0.1318 | 0.9561 | 0.9562 | 0.9561 | 0.9562 |
| 0.158 | 4.0 | 808 | 0.1442 | 0.9553 | 0.9552 | 0.9551 | 0.9552 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Tokenizers 0.19.1
|
[
"label_0",
"label_1",
"label_2",
"label_3"
] |
talli96123/meat_calssify_fresh_crop_fixed_overlap_epoch100_V_0_2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# meat_calssify_fresh_crop_fixed_overlap_epoch100_V_0_2
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2175
- Accuracy: 0.9283
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1008 | 1.0 | 11 | 1.0912 | 0.3583 |
| 1.0751 | 2.0 | 22 | 1.0569 | 0.5140 |
| 1.0562 | 3.0 | 33 | 1.0284 | 0.4891 |
| 0.9901 | 4.0 | 44 | 0.9771 | 0.5607 |
| 0.9179 | 5.0 | 55 | 0.9142 | 0.5888 |
| 0.8217 | 6.0 | 66 | 0.8546 | 0.6262 |
| 0.7811 | 7.0 | 77 | 0.7960 | 0.6791 |
| 0.8756 | 8.0 | 88 | 0.7693 | 0.6760 |
| 0.8095 | 9.0 | 99 | 0.7796 | 0.6636 |
| 0.6492 | 10.0 | 110 | 0.7908 | 0.6760 |
| 0.6357 | 11.0 | 121 | 0.7367 | 0.6885 |
| 0.6184 | 12.0 | 132 | 0.7575 | 0.6542 |
| 0.5371 | 13.0 | 143 | 0.5625 | 0.8069 |
| 0.5586 | 14.0 | 154 | 0.5400 | 0.7819 |
| 0.4235 | 15.0 | 165 | 0.5775 | 0.7664 |
| 0.5082 | 16.0 | 176 | 0.5360 | 0.7819 |
| 0.3758 | 17.0 | 187 | 0.5193 | 0.8131 |
| 0.3729 | 18.0 | 198 | 0.6018 | 0.7695 |
| 0.5911 | 19.0 | 209 | 0.4724 | 0.8224 |
| 0.3055 | 20.0 | 220 | 0.4877 | 0.8162 |
| 0.3054 | 21.0 | 231 | 0.5504 | 0.7726 |
| 0.2947 | 22.0 | 242 | 0.5059 | 0.8069 |
| 0.2336 | 23.0 | 253 | 0.4085 | 0.8598 |
| 0.2806 | 24.0 | 264 | 0.5123 | 0.8193 |
| 0.2782 | 25.0 | 275 | 0.4825 | 0.8131 |
| 0.2396 | 26.0 | 286 | 0.3329 | 0.8910 |
| 0.1937 | 27.0 | 297 | 0.3984 | 0.8816 |
| 0.5237 | 28.0 | 308 | 0.5059 | 0.8224 |
| 0.1951 | 29.0 | 319 | 0.6188 | 0.7757 |
| 0.2097 | 30.0 | 330 | 0.3235 | 0.8754 |
| 0.1443 | 31.0 | 341 | 0.4216 | 0.8567 |
| 0.1856 | 32.0 | 352 | 0.3461 | 0.8785 |
| 0.1837 | 33.0 | 363 | 0.3602 | 0.8723 |
| 0.2783 | 34.0 | 374 | 0.3804 | 0.8660 |
| 0.1553 | 35.0 | 385 | 0.3125 | 0.8879 |
| 0.1413 | 36.0 | 396 | 0.3002 | 0.8972 |
| 0.1582 | 37.0 | 407 | 0.3564 | 0.8723 |
| 0.1573 | 38.0 | 418 | 0.4468 | 0.8380 |
| 0.188 | 39.0 | 429 | 0.4019 | 0.8505 |
| 0.1562 | 40.0 | 440 | 0.2482 | 0.9221 |
| 0.1295 | 41.0 | 451 | 0.4421 | 0.8349 |
| 0.1472 | 42.0 | 462 | 0.3083 | 0.8972 |
| 0.12 | 43.0 | 473 | 0.2961 | 0.9003 |
| 0.1056 | 44.0 | 484 | 0.3540 | 0.8692 |
| 0.1121 | 45.0 | 495 | 0.3734 | 0.8692 |
| 0.1055 | 46.0 | 506 | 0.3385 | 0.8785 |
| 0.2452 | 47.0 | 517 | 0.3638 | 0.8629 |
| 0.1398 | 48.0 | 528 | 0.3100 | 0.8941 |
| 0.1255 | 49.0 | 539 | 0.2797 | 0.9034 |
| 0.0972 | 50.0 | 550 | 0.2636 | 0.9034 |
| 0.1057 | 51.0 | 561 | 0.2505 | 0.9003 |
| 0.0929 | 52.0 | 572 | 0.3668 | 0.8816 |
| 0.0991 | 53.0 | 583 | 0.2946 | 0.8972 |
| 0.0994 | 54.0 | 594 | 0.2765 | 0.9065 |
| 0.0949 | 55.0 | 605 | 0.2876 | 0.9097 |
| 0.2796 | 56.0 | 616 | 0.2407 | 0.9221 |
| 0.071 | 57.0 | 627 | 0.3321 | 0.8941 |
| 0.1163 | 58.0 | 638 | 0.2527 | 0.9315 |
| 0.0966 | 59.0 | 649 | 0.2549 | 0.9252 |
| 0.0871 | 60.0 | 660 | 0.3171 | 0.8879 |
| 0.216 | 61.0 | 671 | 0.2085 | 0.9283 |
| 0.0556 | 62.0 | 682 | 0.2115 | 0.9190 |
| 0.0842 | 63.0 | 693 | 0.2602 | 0.9097 |
| 0.0824 | 64.0 | 704 | 0.3565 | 0.8723 |
| 0.0765 | 65.0 | 715 | 0.2983 | 0.9003 |
| 0.3268 | 66.0 | 726 | 0.2924 | 0.8972 |
| 0.0881 | 67.0 | 737 | 0.2990 | 0.8941 |
| 0.0656 | 68.0 | 748 | 0.2518 | 0.9128 |
| 0.0707 | 69.0 | 759 | 0.2702 | 0.9003 |
| 0.0609 | 70.0 | 770 | 0.2493 | 0.9190 |
| 0.0882 | 71.0 | 781 | 0.2210 | 0.9252 |
| 0.0706 | 72.0 | 792 | 0.2242 | 0.9252 |
| 0.0569 | 73.0 | 803 | 0.2450 | 0.9097 |
| 0.0476 | 74.0 | 814 | 0.1686 | 0.9408 |
| 0.0587 | 75.0 | 825 | 0.2537 | 0.9159 |
| 0.056 | 76.0 | 836 | 0.2437 | 0.9190 |
| 0.0613 | 77.0 | 847 | 0.2664 | 0.9128 |
| 0.0554 | 78.0 | 858 | 0.2851 | 0.9003 |
| 0.0522 | 79.0 | 869 | 0.2326 | 0.9221 |
| 0.0564 | 80.0 | 880 | 0.2392 | 0.9283 |
| 0.052 | 81.0 | 891 | 0.2298 | 0.9252 |
| 0.0489 | 82.0 | 902 | 0.2626 | 0.9190 |
| 0.0545 | 83.0 | 913 | 0.2442 | 0.9159 |
| 0.054 | 84.0 | 924 | 0.1613 | 0.9439 |
| 0.0481 | 85.0 | 935 | 0.2730 | 0.9190 |
| 0.0541 | 86.0 | 946 | 0.2194 | 0.9315 |
| 0.0489 | 87.0 | 957 | 0.1749 | 0.9470 |
| 0.0515 | 88.0 | 968 | 0.1577 | 0.9502 |
| 0.05 | 89.0 | 979 | 0.2191 | 0.9252 |
| 0.0484 | 90.0 | 990 | 0.2574 | 0.9252 |
| 0.0503 | 91.0 | 1001 | 0.1792 | 0.9408 |
| 0.0434 | 92.0 | 1012 | 0.2147 | 0.9377 |
| 0.0449 | 93.0 | 1023 | 0.2430 | 0.9159 |
| 0.0464 | 94.0 | 1034 | 0.2486 | 0.9159 |
| 0.0469 | 95.0 | 1045 | 0.1922 | 0.9408 |
| 0.0449 | 96.0 | 1056 | 0.2005 | 0.9283 |
| 0.0456 | 97.0 | 1067 | 0.2175 | 0.9346 |
| 0.0425 | 98.0 | 1078 | 0.1975 | 0.9346 |
| 0.0419 | 99.0 | 1089 | 0.2070 | 0.9283 |
| 0.0363 | 100.0 | 1100 | 0.2175 | 0.9283 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0
- Datasets 2.19.2
- Tokenizers 0.19.1
|
[
"fresh1",
"fresh2",
"fresh3"
] |
talli96123/meat_calssify_fresh_crop_fixed_overlap_epoch100_V_0_2_best
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"fresh1",
"fresh2",
"fresh3"
] |
asad-cse/swin-tiny-patch4-window7-224-finetuned-eurosat
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8049
- Accuracy: 0.7246
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5124 | 1.0 | 13 | 1.0907 | 0.6184 |
| 1.1298 | 2.0 | 26 | 0.8774 | 0.7005 |
| 0.8668 | 3.0 | 39 | 0.8510 | 0.7150 |
| 0.6712 | 4.0 | 52 | 0.7277 | 0.7246 |
| 0.6934 | 5.0 | 65 | 0.8049 | 0.7246 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
[
"amd",
"dme",
"erm",
"no",
"rao",
"rvo",
"vid"
] |
talli96123/meat_calssify_fresh_crop_fixed_overlap_epoch100_V_0_3
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# meat_calssify_fresh_crop_fixed_overlap_epoch100_V_0_3
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1880
- Accuracy: 0.9408
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0748 | 1.0 | 21 | 1.0941 | 0.4019 |
| 1.056 | 2.0 | 42 | 1.0704 | 0.4517 |
| 1.0196 | 3.0 | 63 | 0.9922 | 0.5171 |
| 0.9144 | 4.0 | 84 | 0.9600 | 0.5140 |
| 0.9096 | 5.0 | 105 | 0.9206 | 0.5514 |
| 0.786 | 6.0 | 126 | 0.8006 | 0.6511 |
| 0.7149 | 7.0 | 147 | 0.7398 | 0.7196 |
| 0.6742 | 8.0 | 168 | 0.8100 | 0.6542 |
| 0.681 | 9.0 | 189 | 0.7297 | 0.6760 |
| 0.5929 | 10.0 | 210 | 0.7184 | 0.6854 |
| 0.5621 | 11.0 | 231 | 0.7011 | 0.7165 |
| 0.4628 | 12.0 | 252 | 0.6673 | 0.7196 |
| 0.4278 | 13.0 | 273 | 0.7029 | 0.7445 |
| 0.4525 | 14.0 | 294 | 0.6493 | 0.7477 |
| 0.3483 | 15.0 | 315 | 0.6969 | 0.7134 |
| 0.4328 | 16.0 | 336 | 0.5270 | 0.8006 |
| 0.3657 | 17.0 | 357 | 0.5653 | 0.7570 |
| 0.3047 | 18.0 | 378 | 0.4854 | 0.8131 |
| 0.2507 | 19.0 | 399 | 0.4555 | 0.8505 |
| 0.2468 | 20.0 | 420 | 0.5035 | 0.8131 |
| 0.2336 | 21.0 | 441 | 0.7171 | 0.7601 |
| 0.2954 | 22.0 | 462 | 0.4171 | 0.8536 |
| 0.2398 | 23.0 | 483 | 0.5465 | 0.7850 |
| 0.2538 | 24.0 | 504 | 0.5179 | 0.8069 |
| 0.21 | 25.0 | 525 | 0.3688 | 0.8723 |
| 0.1938 | 26.0 | 546 | 0.3997 | 0.8442 |
| 0.171 | 27.0 | 567 | 0.5068 | 0.8224 |
| 0.1983 | 28.0 | 588 | 0.4238 | 0.8380 |
| 0.1839 | 29.0 | 609 | 0.4431 | 0.8380 |
| 0.1977 | 30.0 | 630 | 0.3608 | 0.8598 |
| 0.1545 | 31.0 | 651 | 0.4898 | 0.8536 |
| 0.2214 | 32.0 | 672 | 0.5862 | 0.7850 |
| 0.185 | 33.0 | 693 | 0.3682 | 0.8785 |
| 0.1238 | 34.0 | 714 | 0.4300 | 0.8380 |
| 0.1424 | 35.0 | 735 | 0.5039 | 0.8287 |
| 0.1538 | 36.0 | 756 | 0.5649 | 0.8193 |
| 0.1806 | 37.0 | 777 | 0.3727 | 0.8505 |
| 0.1038 | 38.0 | 798 | 0.4984 | 0.8162 |
| 0.1241 | 39.0 | 819 | 0.3025 | 0.8941 |
| 0.1197 | 40.0 | 840 | 0.3038 | 0.8847 |
| 0.1288 | 41.0 | 861 | 0.5481 | 0.8100 |
| 0.1232 | 42.0 | 882 | 0.4011 | 0.8660 |
| 0.1308 | 43.0 | 903 | 0.3017 | 0.8910 |
| 0.0803 | 44.0 | 924 | 0.4368 | 0.8567 |
| 0.0893 | 45.0 | 945 | 0.3961 | 0.8660 |
| 0.1279 | 46.0 | 966 | 0.4143 | 0.8629 |
| 0.1105 | 47.0 | 987 | 0.3773 | 0.8598 |
| 0.0877 | 48.0 | 1008 | 0.3716 | 0.8816 |
| 0.0951 | 49.0 | 1029 | 0.3312 | 0.8847 |
| 0.0941 | 50.0 | 1050 | 0.2714 | 0.8910 |
| 0.073 | 51.0 | 1071 | 0.3475 | 0.8910 |
| 0.0878 | 52.0 | 1092 | 0.3918 | 0.8847 |
| 0.0898 | 53.0 | 1113 | 0.4729 | 0.8442 |
| 0.0849 | 54.0 | 1134 | 0.4245 | 0.8692 |
| 0.1619 | 55.0 | 1155 | 0.3289 | 0.9065 |
| 0.0838 | 56.0 | 1176 | 0.2787 | 0.9159 |
| 0.0767 | 57.0 | 1197 | 0.2738 | 0.9128 |
| 0.0815 | 58.0 | 1218 | 0.2729 | 0.9128 |
| 0.0747 | 59.0 | 1239 | 0.2036 | 0.9377 |
| 0.0629 | 60.0 | 1260 | 0.2615 | 0.9221 |
| 0.0561 | 61.0 | 1281 | 0.3424 | 0.8910 |
| 0.0666 | 62.0 | 1302 | 0.3222 | 0.8941 |
| 0.0759 | 63.0 | 1323 | 0.3462 | 0.9065 |
| 0.0548 | 64.0 | 1344 | 0.3463 | 0.8972 |
| 0.0607 | 65.0 | 1365 | 0.2171 | 0.9283 |
| 0.0796 | 66.0 | 1386 | 0.3879 | 0.8847 |
| 0.0651 | 67.0 | 1407 | 0.2649 | 0.9159 |
| 0.0615 | 68.0 | 1428 | 0.2469 | 0.9221 |
| 0.0495 | 69.0 | 1449 | 0.2899 | 0.9252 |
| 0.0511 | 70.0 | 1470 | 0.2891 | 0.9065 |
| 0.0487 | 71.0 | 1491 | 0.2990 | 0.9159 |
| 0.0593 | 72.0 | 1512 | 0.3046 | 0.9128 |
| 0.0484 | 73.0 | 1533 | 0.2865 | 0.9065 |
| 0.0534 | 74.0 | 1554 | 0.2614 | 0.9128 |
| 0.0446 | 75.0 | 1575 | 0.3311 | 0.8972 |
| 0.0478 | 76.0 | 1596 | 0.2580 | 0.9159 |
| 0.0335 | 77.0 | 1617 | 0.3392 | 0.9159 |
| 0.0436 | 78.0 | 1638 | 0.3400 | 0.9034 |
| 0.07 | 79.0 | 1659 | 0.3434 | 0.9034 |
| 0.0536 | 80.0 | 1680 | 0.3456 | 0.8972 |
| 0.0431 | 81.0 | 1701 | 0.2386 | 0.9408 |
| 0.0381 | 82.0 | 1722 | 0.2401 | 0.9346 |
| 0.0423 | 83.0 | 1743 | 0.2421 | 0.9346 |
| 0.0393 | 84.0 | 1764 | 0.1979 | 0.9439 |
| 0.0393 | 85.0 | 1785 | 0.2756 | 0.9190 |
| 0.0395 | 86.0 | 1806 | 0.3339 | 0.8972 |
| 0.031 | 87.0 | 1827 | 0.2471 | 0.9252 |
| 0.0227 | 88.0 | 1848 | 0.2182 | 0.9346 |
| 0.0392 | 89.0 | 1869 | 0.2732 | 0.9221 |
| 0.0536 | 90.0 | 1890 | 0.2579 | 0.9283 |
| 0.0426 | 91.0 | 1911 | 0.2062 | 0.9315 |
| 0.0344 | 92.0 | 1932 | 0.2209 | 0.9252 |
| 0.0333 | 93.0 | 1953 | 0.1584 | 0.9564 |
| 0.0338 | 94.0 | 1974 | 0.2976 | 0.9128 |
| 0.0391 | 95.0 | 1995 | 0.2420 | 0.9377 |
| 0.0302 | 96.0 | 2016 | 0.2694 | 0.9159 |
| 0.0268 | 97.0 | 2037 | 0.2610 | 0.9221 |
| 0.0402 | 98.0 | 2058 | 0.2952 | 0.9159 |
| 0.0172 | 99.0 | 2079 | 0.1870 | 0.9470 |
| 0.0241 | 100.0 | 2100 | 0.1880 | 0.9408 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0
- Datasets 2.19.2
- Tokenizers 0.19.1
|
[
"fresh1",
"fresh2",
"fresh3"
] |
talli96123/meat_calssify_fresh_crop_fixed_overlap_epoch100_V_0_3_best
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"fresh1",
"fresh2",
"fresh3"
] |
AMfeta99/vit-base-oxford-brain-tumor_try_stuff
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-brain-tumor_try_stuff
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the Mahadih534/brain-tumor-dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5406
- Accuracy: 0.8077
- Precision: 0.8514
- Recall: 0.8077
- F1: 0.7830
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 20
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.6608 | 1.0 | 11 | 0.5499 | 0.8 | 0.8308 | 0.8 | 0.8039 |
| 0.6097 | 2.0 | 22 | 0.4836 | 0.88 | 0.8989 | 0.88 | 0.8731 |
| 0.5882 | 3.0 | 33 | 0.4191 | 0.88 | 0.8853 | 0.88 | 0.8812 |
| 0.5673 | 4.0 | 44 | 0.4871 | 0.84 | 0.8561 | 0.84 | 0.8427 |
| 0.5619 | 5.0 | 55 | 0.4079 | 0.92 | 0.92 | 0.92 | 0.92 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
[
"0",
"1"
] |
hchcsuim/batch-size16_FFPP-c23_opencv-1FPS_faces-expand20-aligned_unaugmentation
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# batch-size16_FFPP-c23_opencv-1FPS_faces-expand20-aligned_unaugmentation
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1572
- Accuracy: 0.9342
- Precision: 0.9383
- Recall: 0.9804
- F1: 0.9589
- Roc Auc: 0.9804
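A minimal usage sketch for this checkpoint via the `pipeline` API (assuming an aligned face crop as input; the path is a placeholder):

```python
from transformers import pipeline

# Binary deepfake classifier; the card's labels are "fake" and "real".
clf = pipeline(
    "image-classification",
    model="hchcsuim/batch-size16_FFPP-c23_opencv-1FPS_faces-expand20-aligned_unaugmentation",
)
print(clf("face_crop.png"))  # placeholder path to an aligned face crop
```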
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.1641 | 1.0 | 1381 | 0.1572 | 0.9342 | 0.9383 | 0.9804 | 0.9589 | 0.9804 |
### Framework versions
- Transformers 4.39.2
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"fake",
"real"
] |
talli96123/meat_calssify_fresh_crop_fixed_epoch100_V_0_2
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# meat_calssify_fresh_crop_fixed_epoch100_V_0_2
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7219
- Accuracy: 0.7975
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0968 | 1.0 | 10 | 1.0907 | 0.3797 |
| 1.0804 | 2.0 | 20 | 1.0759 | 0.3924 |
| 1.0578 | 3.0 | 30 | 1.0750 | 0.4241 |
| 1.0273 | 4.0 | 40 | 1.0443 | 0.4684 |
| 0.9866 | 5.0 | 50 | 1.0325 | 0.4747 |
| 0.9234 | 6.0 | 60 | 0.9837 | 0.5886 |
| 0.8597 | 7.0 | 70 | 0.9564 | 0.5443 |
| 0.8042 | 8.0 | 80 | 0.9315 | 0.5633 |
| 0.8463 | 9.0 | 90 | 0.9334 | 0.5380 |
| 0.7795 | 10.0 | 100 | 0.9305 | 0.5443 |
| 0.7375 | 11.0 | 110 | 0.8787 | 0.6076 |
| 0.6489 | 12.0 | 120 | 0.8685 | 0.6392 |
| 0.5958 | 13.0 | 130 | 0.8133 | 0.6582 |
| 0.5308 | 14.0 | 140 | 0.8563 | 0.6519 |
| 0.5206 | 15.0 | 150 | 0.7902 | 0.6709 |
| 0.4617 | 16.0 | 160 | 0.8114 | 0.6456 |
| 0.4338 | 17.0 | 170 | 0.8134 | 0.6646 |
| 0.454 | 18.0 | 180 | 0.7283 | 0.6772 |
| 0.5094 | 19.0 | 190 | 0.7035 | 0.6962 |
| 0.4133 | 20.0 | 200 | 0.7652 | 0.6835 |
| 0.3504 | 21.0 | 210 | 0.7225 | 0.7089 |
| 0.3602 | 22.0 | 220 | 0.8140 | 0.6582 |
| 0.32 | 23.0 | 230 | 0.7057 | 0.7278 |
| 0.2849 | 24.0 | 240 | 0.7051 | 0.6899 |
| 0.3051 | 25.0 | 250 | 0.7805 | 0.7025 |
| 0.3099 | 26.0 | 260 | 0.7456 | 0.6772 |
| 0.3305 | 27.0 | 270 | 0.7802 | 0.6646 |
| 0.2508 | 28.0 | 280 | 0.7222 | 0.7152 |
| 0.2842 | 29.0 | 290 | 0.6745 | 0.7278 |
| 0.2584 | 30.0 | 300 | 0.6029 | 0.7658 |
| 0.2324 | 31.0 | 310 | 0.6066 | 0.7911 |
| 0.3014 | 32.0 | 320 | 0.7253 | 0.7215 |
| 0.2279 | 33.0 | 330 | 0.7050 | 0.7089 |
| 0.2363 | 34.0 | 340 | 0.7361 | 0.7785 |
| 0.2085 | 35.0 | 350 | 0.6596 | 0.7658 |
| 0.1808 | 36.0 | 360 | 0.7104 | 0.7532 |
| 0.2051 | 37.0 | 370 | 0.7471 | 0.7152 |
| 0.1911 | 38.0 | 380 | 0.8262 | 0.7025 |
| 0.2027 | 39.0 | 390 | 0.7785 | 0.7532 |
| 0.1944 | 40.0 | 400 | 0.8136 | 0.6835 |
| 0.1627 | 41.0 | 410 | 0.8254 | 0.7152 |
| 0.1619 | 42.0 | 420 | 0.8766 | 0.6772 |
| 0.1619 | 43.0 | 430 | 0.6940 | 0.7405 |
| 0.1635 | 44.0 | 440 | 0.8477 | 0.7215 |
| 0.1323 | 45.0 | 450 | 0.6644 | 0.7848 |
| 0.1253 | 46.0 | 460 | 0.7747 | 0.7468 |
| 0.1254 | 47.0 | 470 | 0.9075 | 0.6962 |
| 0.1494 | 48.0 | 480 | 0.8104 | 0.7405 |
| 0.1702 | 49.0 | 490 | 0.7167 | 0.7532 |
| 0.1591 | 50.0 | 500 | 0.8214 | 0.6962 |
| 0.1105 | 51.0 | 510 | 0.9359 | 0.7152 |
| 0.1354 | 52.0 | 520 | 0.7214 | 0.7342 |
| 0.119 | 53.0 | 530 | 0.7825 | 0.7342 |
| 0.0841 | 54.0 | 540 | 0.7528 | 0.7595 |
| 0.12 | 55.0 | 550 | 0.7002 | 0.7658 |
| 0.1096 | 56.0 | 560 | 0.7747 | 0.7785 |
| 0.1192 | 57.0 | 570 | 0.7368 | 0.7532 |
| 0.1268 | 58.0 | 580 | 0.7098 | 0.7722 |
| 0.1351 | 59.0 | 590 | 0.6097 | 0.7848 |
| 0.1248 | 60.0 | 600 | 0.8102 | 0.7215 |
| 0.1378 | 61.0 | 610 | 0.6786 | 0.7405 |
| 0.1208 | 62.0 | 620 | 0.5467 | 0.8101 |
| 0.0786 | 63.0 | 630 | 0.7059 | 0.7785 |
| 0.1048 | 64.0 | 640 | 0.7945 | 0.7278 |
| 0.0954 | 65.0 | 650 | 0.8258 | 0.7278 |
| 0.121 | 66.0 | 660 | 0.7267 | 0.7532 |
| 0.0921 | 67.0 | 670 | 0.5914 | 0.7911 |
| 0.092 | 68.0 | 680 | 0.6923 | 0.7722 |
| 0.1153 | 69.0 | 690 | 0.6655 | 0.8038 |
| 0.0987 | 70.0 | 700 | 0.6774 | 0.7722 |
| 0.0797 | 71.0 | 710 | 0.6143 | 0.7975 |
| 0.0842 | 72.0 | 720 | 0.7301 | 0.7595 |
| 0.0707 | 73.0 | 730 | 0.7614 | 0.7405 |
| 0.0848 | 74.0 | 740 | 0.7578 | 0.7785 |
| 0.0853 | 75.0 | 750 | 0.7785 | 0.7405 |
| 0.0761 | 76.0 | 760 | 0.8719 | 0.7532 |
| 0.1019 | 77.0 | 770 | 0.5698 | 0.8165 |
| 0.0747 | 78.0 | 780 | 0.7956 | 0.7278 |
| 0.0657 | 79.0 | 790 | 0.5792 | 0.7975 |
| 0.0969 | 80.0 | 800 | 0.5721 | 0.8101 |
| 0.0597 | 81.0 | 810 | 0.7171 | 0.7785 |
| 0.0787 | 82.0 | 820 | 0.7493 | 0.7595 |
| 0.0823 | 83.0 | 830 | 0.6758 | 0.8038 |
| 0.0828 | 84.0 | 840 | 0.8082 | 0.7722 |
| 0.0693 | 85.0 | 850 | 0.7310 | 0.7911 |
| 0.074 | 86.0 | 860 | 0.6492 | 0.8228 |
| 0.0736 | 87.0 | 870 | 0.7373 | 0.7785 |
| 0.0763 | 88.0 | 880 | 0.7254 | 0.7848 |
| 0.0823 | 89.0 | 890 | 0.8261 | 0.7785 |
| 0.0614 | 90.0 | 900 | 0.6919 | 0.7911 |
| 0.0916 | 91.0 | 910 | 0.5884 | 0.7975 |
| 0.0539 | 92.0 | 920 | 0.6960 | 0.7658 |
| 0.0604 | 93.0 | 930 | 0.6502 | 0.7975 |
| 0.0596 | 94.0 | 940 | 0.6058 | 0.7975 |
| 0.0599 | 95.0 | 950 | 0.7166 | 0.7785 |
| 0.0452 | 96.0 | 960 | 0.8093 | 0.7658 |
| 0.0556 | 97.0 | 970 | 0.6589 | 0.8354 |
| 0.0675 | 98.0 | 980 | 0.7471 | 0.8101 |
| 0.0581 | 99.0 | 990 | 0.6568 | 0.8038 |
| 0.0515 | 100.0 | 1000 | 0.7219 | 0.7975 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0
- Datasets 2.19.2
- Tokenizers 0.19.1
|
[
"fresh1",
"fresh2",
"fresh3"
] |
talli96123/meat_calssify_fresh_crop_fixed_epoch100_V_0_2_best
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"fresh1",
"fresh2",
"fresh3"
] |
AMfeta99/vit-base-oxford-brain-tumor_x-ray
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-brain-tumor_x-ray
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the Mahadih534/brain-tumor-dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2882
- Accuracy: 0.9231
- Precision: 0.9231
- Recall: 0.9231
- F1: 0.9231
## Model description
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224), a Vision Transformer (ViT).
The ViT is a transformer encoder model that was originally pre-trained on ImageNet-21k and fine-tuned on ImageNet 2012.
It was introduced in the paper "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale" by Dosovitskiy et al.
The model processes images as sequences of 16x16 patches, adding a [CLS] token for classification tasks, and uses absolute position embeddings. Pre-training enables the model to learn rich image representations, which can be leveraged for downstream tasks by adding a linear classifier on top of the [CLS] token. The weights were converted from the timm repository by Ross Wightman.
## Intended uses & limitations
This model is intended for classifying X-ray images of the brain to support brain tumor diagnosis.
## Training and evaluation data
The model was fine-tuned on the [Mahadih534/brain-tumor-dataset](https://huggingface.co/datasets/Mahadih534/brain-tumor-dataset) dataset, which contains 253 brain images. This dataset was originally created by Yousef Ghanem.
The original dataset was split into training and evaluation subsets, 80% for training and 20% for evaluation.
For robust evaluation, the evaluation subset was further split into two equal parts for validation and testing.
This results in three distinct datasets: training, validation, and testing (a sketch of this split follows).
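With 🤗 `datasets`, and assuming the dataset exposes a single `train` split, the split described above can be reproduced roughly as follows (the seed is illustrative):

```python
from datasets import load_dataset

# 80/20 train/eval split; the eval part is then halved into validation and test.
ds = load_dataset("Mahadih534/brain-tumor-dataset", split="train")
split = ds.train_test_split(test_size=0.2, seed=42)
holdout = split["test"].train_test_split(test_size=0.5, seed=42)

train_ds = split["train"]
val_ds, test_ds = holdout["train"], holdout["test"]
print(len(train_ds), len(val_ds), len(test_ds))
```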
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 20
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.6519 | 1.0 | 11 | 0.3817 | 0.8 | 0.8476 | 0.8 | 0.7751 |
| 0.2616 | 2.0 | 22 | 0.0675 | 0.96 | 0.9624 | 0.96 | 0.9594 |
| 0.1219 | 3.0 | 33 | 0.1770 | 0.92 | 0.9289 | 0.92 | 0.9174 |
| 0.0527 | 4.0 | 44 | 0.0234 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
[
"0",
"1"
] |
matthieulel/swinv2-large-patch4-window12to16-192to256-22kto1k-ft-finetuned-galaxy10-decals
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-large-patch4-window12to16-192to256-22kto1k-ft-finetuned-galaxy10-decals
This model is a fine-tuned version of [microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft](https://huggingface.co/microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft) on the matthieulel/galaxy10_decals dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5226
- Accuracy: 0.8591
- Precision: 0.8571
- Recall: 0.8591
- F1: 0.8567
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.0846 | 0.99 | 62 | 0.8092 | 0.7272 | 0.7246 | 0.7272 | 0.7101 |
| 0.7867 | 2.0 | 125 | 0.6366 | 0.7988 | 0.7996 | 0.7988 | 0.7895 |
| 0.6835 | 2.99 | 187 | 0.5315 | 0.8207 | 0.8195 | 0.8207 | 0.8157 |
| 0.586 | 4.0 | 250 | 0.4611 | 0.8489 | 0.8468 | 0.8489 | 0.8452 |
| 0.5263 | 4.99 | 312 | 0.4753 | 0.8399 | 0.8421 | 0.8399 | 0.8400 |
| 0.5341 | 6.0 | 375 | 0.4551 | 0.8427 | 0.8433 | 0.8427 | 0.8386 |
| 0.4743 | 6.99 | 437 | 0.4639 | 0.8382 | 0.8433 | 0.8382 | 0.8391 |
| 0.4573 | 8.0 | 500 | 0.4771 | 0.8360 | 0.8422 | 0.8360 | 0.8345 |
| 0.4368 | 8.99 | 562 | 0.4731 | 0.8472 | 0.8450 | 0.8472 | 0.8452 |
| 0.4022 | 10.0 | 625 | 0.4736 | 0.8540 | 0.8528 | 0.8540 | 0.8516 |
| 0.4005 | 10.99 | 687 | 0.4542 | 0.8551 | 0.8554 | 0.8551 | 0.8547 |
| 0.3514 | 12.0 | 750 | 0.5543 | 0.8467 | 0.8527 | 0.8467 | 0.8471 |
| 0.3565 | 12.99 | 812 | 0.5318 | 0.8506 | 0.8535 | 0.8506 | 0.8493 |
| 0.3717 | 14.0 | 875 | 0.5059 | 0.8579 | 0.8582 | 0.8579 | 0.8574 |
| 0.3343 | 14.99 | 937 | 0.5235 | 0.8472 | 0.8492 | 0.8472 | 0.8474 |
| 0.3053 | 16.0 | 1000 | 0.5226 | 0.8591 | 0.8571 | 0.8591 | 0.8567 |
| 0.2607 | 16.99 | 1062 | 0.5654 | 0.8591 | 0.8579 | 0.8591 | 0.8572 |
| 0.2814 | 18.0 | 1125 | 0.5622 | 0.8546 | 0.8541 | 0.8546 | 0.8537 |
| 0.2735 | 18.99 | 1187 | 0.6185 | 0.8506 | 0.8525 | 0.8506 | 0.8508 |
| 0.2673 | 20.0 | 1250 | 0.6210 | 0.8574 | 0.8544 | 0.8574 | 0.8550 |
| 0.2595 | 20.99 | 1312 | 0.6334 | 0.8422 | 0.8415 | 0.8422 | 0.8399 |
| 0.2583 | 22.0 | 1375 | 0.6565 | 0.8540 | 0.8545 | 0.8540 | 0.8527 |
| 0.239 | 22.99 | 1437 | 0.6859 | 0.8455 | 0.8458 | 0.8455 | 0.8447 |
| 0.2174 | 24.0 | 1500 | 0.6709 | 0.8591 | 0.8581 | 0.8591 | 0.8581 |
| 0.2288 | 24.99 | 1562 | 0.7437 | 0.8444 | 0.8426 | 0.8444 | 0.8419 |
| 0.2305 | 26.0 | 1625 | 0.7048 | 0.8529 | 0.8497 | 0.8529 | 0.8505 |
| 0.2071 | 26.99 | 1687 | 0.7152 | 0.8540 | 0.8527 | 0.8540 | 0.8529 |
| 0.2282 | 28.0 | 1750 | 0.7273 | 0.8568 | 0.8559 | 0.8568 | 0.8554 |
| 0.209 | 28.99 | 1812 | 0.7213 | 0.8557 | 0.8534 | 0.8557 | 0.8540 |
| 0.2078 | 29.76 | 1860 | 0.7273 | 0.8563 | 0.8544 | 0.8563 | 0.8548 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.15.1
|
[
"disturbed galaxies",
"merging galaxies",
"round smooth galaxies",
"in-between round smooth galaxies",
"cigar shaped smooth galaxies",
"barred spiral galaxies",
"unbarred tight spiral galaxies",
"unbarred loose spiral galaxies",
"edge-on galaxies without bulge",
"edge-on galaxies with bulge"
] |
hchcsuim/batch-size16_FFPP-c23_opencv-1FPS_faces-expand30-aligned_unaugmentation
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# batch-size16_FFPP-c23_opencv-1FPS_faces-expand30-aligned_unaugmentation
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1728
- Accuracy: 0.9273
- Precision: 0.9273
- Recall: 0.9843
- F1: 0.9550
- Roc Auc: 0.9792
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.1668 | 1.0 | 1381 | 0.1728 | 0.9273 | 0.9273 | 0.9843 | 0.9550 | 0.9792 |
### Framework versions
- Transformers 4.39.2
- Pytorch 2.3.0
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"fake",
"real"
] |
matthieulel/resnet-50-finetuned-galaxy10-decals
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-finetuned-galaxy10-decals
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the matthieulel/galaxy10_decals dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5789
- Accuracy: 0.4138
- Precision: 0.4493
- Recall: 0.4138
- F1: 0.3134
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 2.2417 | 0.9940 | 124 | 2.2265 | 0.2373 | 0.0697 | 0.2373 | 0.1077 |
| 2.1616 | 1.9960 | 249 | 2.1268 | 0.1950 | 0.1132 | 0.1950 | 0.0889 |
| 2.0459 | 2.9980 | 374 | 1.9901 | 0.2401 | 0.1029 | 0.2401 | 0.1290 |
| 1.9203 | 4.0 | 499 | 1.8571 | 0.3303 | 0.3116 | 0.3303 | 0.2052 |
| 1.8347 | 4.9940 | 623 | 1.7692 | 0.3613 | 0.2694 | 0.3613 | 0.2457 |
| 1.7628 | 5.9960 | 748 | 1.6926 | 0.3850 | 0.4172 | 0.3850 | 0.2758 |
| 1.723 | 6.9980 | 873 | 1.6342 | 0.3985 | 0.4428 | 0.3985 | 0.2922 |
| 1.71 | 8.0 | 998 | 1.6071 | 0.4104 | 0.4369 | 0.4104 | 0.3122 |
| 1.6948 | 8.9940 | 1122 | 1.5789 | 0.4138 | 0.4493 | 0.4138 | 0.3134 |
| 1.656 | 9.9399 | 1240 | 1.5805 | 0.4053 | 0.4176 | 0.4053 | 0.3034 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
[
"disturbed galaxies",
"merging galaxies",
"round smooth galaxies",
"in-between round smooth galaxies",
"cigar shaped smooth galaxies",
"barred spiral galaxies",
"unbarred tight spiral galaxies",
"unbarred loose spiral galaxies",
"edge-on galaxies without bulge",
"edge-on galaxies with bulge"
] |
matthieulel/swinv2-large-patch4-window12-192-22k-finetuned-galaxy10-decals
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-large-patch4-window12-192-22k-finetuned-galaxy10-decals
This model is a fine-tuned version of [microsoft/swinv2-large-patch4-window12-192-22k](https://huggingface.co/microsoft/swinv2-large-patch4-window12-192-22k) on the matthieulel/galaxy10_decals dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4372
- Accuracy: 0.8568
- Precision: 0.8575
- Recall: 0.8568
- F1: 0.8550
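The Precision, Recall, and F1 above are aggregate scores over the ten galaxy classes. The sketch below shows how such aggregates are commonly computed with scikit-learn; the weighted averaging is an assumption (the card does not state the averaging mode), and the label arrays are illustrative:

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Illustrative label ids for the ten Galaxy10 DECaLS classes.
y_true = [0, 1, 2, 9, 5, 3]
y_pred = [0, 1, 3, 9, 5, 3]

acc = accuracy_score(y_true, y_pred)
prec, rec, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted", zero_division=0
)
print(f"accuracy={acc:.4f} precision={prec:.4f} recall={rec:.4f} f1={f1:.4f}")
```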
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.974 | 0.99 | 62 | 0.7350 | 0.7480 | 0.7464 | 0.7480 | 0.7365 |
| 0.7716 | 2.0 | 125 | 0.6093 | 0.7982 | 0.8102 | 0.7982 | 0.7960 |
| 0.6813 | 2.99 | 187 | 0.5034 | 0.8286 | 0.8301 | 0.8286 | 0.8254 |
| 0.5998 | 4.0 | 250 | 0.4645 | 0.8433 | 0.8431 | 0.8433 | 0.8403 |
| 0.5306 | 4.99 | 312 | 0.4889 | 0.8320 | 0.8377 | 0.8320 | 0.8336 |
| 0.5234 | 6.0 | 375 | 0.5036 | 0.8309 | 0.8398 | 0.8309 | 0.8278 |
| 0.4984 | 6.99 | 437 | 0.4482 | 0.8478 | 0.8484 | 0.8478 | 0.8461 |
| 0.456 | 8.0 | 500 | 0.4370 | 0.8557 | 0.8573 | 0.8557 | 0.8557 |
| 0.4672 | 8.99 | 562 | 0.4372 | 0.8568 | 0.8575 | 0.8568 | 0.8550 |
| 0.4211 | 10.0 | 625 | 0.4428 | 0.8523 | 0.8513 | 0.8523 | 0.8505 |
| 0.4228 | 10.99 | 687 | 0.4762 | 0.8433 | 0.8459 | 0.8433 | 0.8435 |
| 0.3966 | 12.0 | 750 | 0.4943 | 0.8410 | 0.8434 | 0.8410 | 0.8404 |
| 0.383 | 12.99 | 812 | 0.4885 | 0.8478 | 0.8503 | 0.8478 | 0.8463 |
| 0.3899 | 14.0 | 875 | 0.5021 | 0.8472 | 0.8494 | 0.8472 | 0.8474 |
| 0.3364 | 14.99 | 937 | 0.5107 | 0.8495 | 0.8488 | 0.8495 | 0.8486 |
| 0.331 | 16.0 | 1000 | 0.5219 | 0.8484 | 0.8460 | 0.8484 | 0.8454 |
| 0.288 | 16.99 | 1062 | 0.5696 | 0.8422 | 0.8429 | 0.8422 | 0.8410 |
| 0.2867 | 18.0 | 1125 | 0.5529 | 0.8484 | 0.8474 | 0.8484 | 0.8473 |
| 0.2889 | 18.99 | 1187 | 0.5613 | 0.8529 | 0.8522 | 0.8529 | 0.8520 |
| 0.2809 | 20.0 | 1250 | 0.6093 | 0.8433 | 0.8378 | 0.8433 | 0.8391 |
| 0.2684 | 20.99 | 1312 | 0.6096 | 0.8444 | 0.8409 | 0.8444 | 0.8419 |
| 0.2809 | 22.0 | 1375 | 0.6100 | 0.8455 | 0.8453 | 0.8455 | 0.8445 |
| 0.2661 | 22.99 | 1437 | 0.6161 | 0.8354 | 0.8378 | 0.8354 | 0.8359 |
| 0.2435 | 24.0 | 1500 | 0.6540 | 0.8517 | 0.8512 | 0.8517 | 0.8512 |
| 0.2593 | 24.99 | 1562 | 0.6644 | 0.8472 | 0.8462 | 0.8472 | 0.8456 |
| 0.2343 | 26.0 | 1625 | 0.6655 | 0.8467 | 0.8441 | 0.8467 | 0.8449 |
| 0.2281 | 26.99 | 1687 | 0.6759 | 0.8450 | 0.8438 | 0.8450 | 0.8440 |
| 0.2334 | 28.0 | 1750 | 0.6836 | 0.8472 | 0.8445 | 0.8472 | 0.8451 |
| 0.2129 | 28.99 | 1812 | 0.6731 | 0.8489 | 0.8466 | 0.8489 | 0.8471 |
| 0.2252 | 29.76 | 1860 | 0.6773 | 0.8467 | 0.8440 | 0.8467 | 0.8449 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.15.1
|
[
"disturbed galaxies",
"merging galaxies",
"round smooth galaxies",
"in-between round smooth galaxies",
"cigar shaped smooth galaxies",
"barred spiral galaxies",
"unbarred tight spiral galaxies",
"unbarred loose spiral galaxies",
"edge-on galaxies without bulge",
"edge-on galaxies with bulge"
] |
Falconsai/brand_identification
|
# Logo Recognition Model: a mix of UAE companies and global enterprises
## Model Details
- **Model Name**: Falconsai/brand_identification
- **Base Model**: [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k)
- **Model Type**: Vision Transformer (ViT) - Image Classification
- **Version**: 1.0
- **License**: MIT
- **Author**: Michael Stattelman from Falcons.ai
## Overview
This model is a fine-tuned version of Google's Vision Transformer (ViT) `vit-base-patch16-224-in21k`, specifically trained for the task of classifying UAE company logos.
It was trained on a custom dataset consisting of logos from various brands and companies based in the United Arab Emirates as well as others.
## Primary Use Cases:
The primary use case for this model is to classify images of logos into their respective UAE-based companies.
This can be particularly useful for applications in brand monitoring, competitive analysis, and marketing research within the UAE market.
1. **Marketing and Advertising Analytics:**
- Analyzing the presence and frequency of brand logos in various media channels (TV, social media, websites) to measure brand visibility and effectiveness of advertising campaigns.
2. **Brand Monitoring and Protection:**
- Monitoring where and how often a brand's logo appears online (social media, blogs, forums) to protect against misuse or unauthorized brand representation.
3. **Market Research:**
- Studying consumer behavior and preferences by analyzing the prevalence of different brand logos in public spaces or events.
4. **Competitive Analysis:**
- Comparing the visibility of different brands within a specific market or industry segment based on logo recognition data.
5. **Retail and Inventory Management:**
- Automating inventory tracking by recognizing product brands through their logos, which helps in maintaining stock levels and identifying popular products.
6. **Augmented Reality and Virtual Try-On:**
- Enhancing augmented reality experiences by recognizing brand logos on products or packaging to overlay additional information or virtual elements.
7. **Customer Engagement and Personalization:**
- Enhancing customer experiences by recognizing brands that customers interact with, which can personalize marketing messages or recommendations.
8. **Event Management and Sponsorship Tracking:**
- Tracking sponsor logos at events and venues to evaluate sponsorship effectiveness and compliance with branding agreements.
9. **Security and Authentication:**
- Verifying the authenticity of products or documents by recognizing the presence and correct placement of brand logos.
10. **Content Filtering and Moderation:**
- Filtering or moderating content on social media platforms based on the presence of recognized brand logos to ensure compliance with brand guidelines or prevent misuse.
These are just a few examples of how the Falconsai/brand_identification logo-recognition model can be applied across different industries and purposes. The ability to accurately identify brand logos can provide valuable insights and efficiencies in various business operations.
### Direct Use
- Upload an image of a logo to the model to get a classification label.
- Integrate the model into applications or services that require logo recognition.
### Downstream Use
- Incorporate the model into larger systems for automated brand analysis.
- Use the model as part of a tool for sorting and categorizing images by brand.
## Model Description
### Architecture
The base model used is the Vision Transformer `vit-base-patch16-224-in21k`, which uses self-attention mechanisms to process image patches. The fine-tuning process adapted this pre-trained model to recognize and classify specific logos from UAE companies.
### Training Data
The model was trained on a curated dataset of UAE company logos as well as logos of international companies. The dataset consists of thousands of images across various brands to ensure robustness and accuracy.
### Performance
The model achieved high accuracy on a held-out validation set, indicating strong performance in classifying UAE company logos. Detailed performance metrics (accuracy, precision, recall, F1-score) can be provided upon request.
## How to Use
To use the model for inference, you can load it using the `transformers` library from Hugging Face:
```python
import torch
from PIL import Image
from transformers import AutoModelForImageClassification, ViTImageProcessor

# Load and prepare the input image
image = Image.open('<path_to_image>')
image = image.convert("RGB")  # Ensure image is in RGB format

# Load model and processor
model = AutoModelForImageClassification.from_pretrained("Falconsai/brand_identification")
processor = ViTImageProcessor.from_pretrained("Falconsai/brand_identification")

# Preprocess image and make predictions
with torch.no_grad():
    inputs = processor(images=image, return_tensors="pt")
    outputs = model(**inputs)
    logits = outputs.logits

predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
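To rank candidate brands with confidence scores rather than taking only the arg-max label, a softmax can be applied to the logits. A short sketch continuing from the snippet above (the top-5 cutoff is arbitrary):

```python
import torch
import torch.nn.functional as F

# Continues from the snippet above: `logits` and `model` are already defined.
probs = F.softmax(logits, dim=-1)[0]
top = torch.topk(probs, k=5)
for score, idx in zip(top.values.tolist(), top.indices.tolist()):
    print(f"{model.config.id2label[idx]}: {score:.3f}")
```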
### Companies Identified:
- Abu Dhabi Islamic Bank
- Acer
- Adidas
- Adnoc
- Aldar
- Alienware
- Amazon
- AMD
- Apple
- Asus
- Beats by Dre
- Blackberry
- Bose
- Careem
- Cisco Systems
- Coke
- D-Link
- Dell
- Delonghi
- DP World
- Du
- E&
- Emaar
- Emirates
- Emirates NBD
- Etisalat
- Falcons.ai
- First Abu Dhabi Bank
- Fujitsu
- Google
- GoPro
- HEC
- Hewlett Packard
- Hilti
- Hisense
- Huawei
- IBM
- Khaleej Times
- L'Oréal
- Lenovo
- LG
- LinkedIn
- Louis Vuitton
- Majid Al Futtaim
- Mashreq
- Maybelline
- McDonalds
- Mercedes
- Meta
- Microsoft
- MSI
- Nike
- Nvidia
- OpenAI
- Puma
- Rakez
- Samsung
- Snapdragon
- Tesla
- Ubuntu
- Virgin
- Zwag
### Limitations and Biases
- The model is trained on a fixed set of UAE and international company logos and may not perform well on logos outside this set.
- The model's performance is contingent upon the quality and diversity of the training dataset.
- Potential biases in the training data can lead to biases in model predictions.
### Ethical Considerations
- Ensure that the use of this model complies with local regulations and ethical guidelines, especially concerning privacy and data security.
- Be mindful of the limitations and biases and do not use the model in critical applications without thorough validation.
## Acknowledgements
This model was developed and fine-tuned by Michael Stattelman from Falcons.ai, leveraging the base Vision Transformer model provided by Google.
## Contact Information
For further information, questions, or collaboration requests, please contact:
- **Name**: Michael Stattelman
- **Affiliation**: Falcons.ai
- **URL**: https://falcons.ai
---
|
[
"abu_dhabi_islamic_bank",
"acer",
"adidas",
"adnoc",
"aldar",
"alienware",
"amazon",
"amd",
"apple",
"asus",
"beats_by_dre",
"blackberry",
"bose",
"careem",
"cisco_systems",
"coke",
"d-link",
"dell",
"delonghi",
"dp_world",
"du",
"e&",
"emaar",
"emirates",
"emirates_nbd",
"etisalat",
"falcons.ai",
"first_abu_dhabi_bank",
"fujitsu",
"google",
"gopro",
"hec",
"hewlett_packard",
"hilti",
"hisense",
"huawei",
"ibm",
"khaleej_times",
"l'oréal",
"lenovo",
"lg",
"linkedin",
"louis_vuitton",
"majid_al_futtaim",
"mashreq",
"maybelline",
"mcdonalds",
"mercedes",
"meta",
"microsoft",
"msi",
"nike",
"nvidia",
"openai",
"puma",
"rakez",
"samsung",
"snapdragon",
"tesla",
"ubuntu",
"virgin",
"zwag"
] |
hchcsuim/batch-size16_FFPP-c23_opencv-1FPS_faces-expand40-aligned_unaugmentation
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# batch-size16_FFPP-c23_opencv-1FPS_faces-expand40-aligned_unaugmentation
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1643
- Accuracy: 0.9326
- Precision: 0.9377
- Recall: 0.9789
- F1: 0.9578
- Roc Auc: 0.9781
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.1924 | 1.0 | 1381 | 0.1643 | 0.9326 | 0.9377 | 0.9789 | 0.9578 | 0.9781 |
### Framework versions
- Transformers 4.39.2
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"fake",
"real"
] |
habibi26/ktp-not-ktp-clip
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ktp-not-ktp-clip
This model is a fine-tuned version of [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1267
- Accuracy: 0.9890
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 7 | 0.5809 | 0.6374 |
| No log | 2.0 | 14 | 1.3401 | 0.6703 |
| 0.5558 | 3.0 | 21 | 0.6458 | 0.7692 |
| 0.5558 | 4.0 | 28 | 0.3785 | 0.8681 |
| 0.1701 | 5.0 | 35 | 0.3004 | 0.9451 |
| 0.1701 | 6.0 | 42 | 0.2204 | 0.9560 |
| 0.142 | 7.0 | 49 | 0.1483 | 0.9341 |
| 0.142 | 8.0 | 56 | 0.1386 | 0.9670 |
| 0.1002 | 9.0 | 63 | 0.7714 | 0.8681 |
| 0.1002 | 10.0 | 70 | 0.2285 | 0.9341 |
| 0.0956 | 11.0 | 77 | 0.1162 | 0.9780 |
| 0.0956 | 12.0 | 84 | 0.1104 | 0.9780 |
| 0.0004 | 13.0 | 91 | 0.1722 | 0.9780 |
| 0.0004 | 14.0 | 98 | 0.2109 | 0.9780 |
| 0.0209 | 15.0 | 105 | 0.3321 | 0.9560 |
| 0.0209 | 16.0 | 112 | 0.0785 | 0.9780 |
| 0.0209 | 17.0 | 119 | 0.1525 | 0.9670 |
| 0.0014 | 18.0 | 126 | 0.1436 | 0.9670 |
| 0.0014 | 19.0 | 133 | 0.2278 | 0.9670 |
| 0.0002 | 20.0 | 140 | 0.3035 | 0.9560 |
| 0.0002 | 21.0 | 147 | 0.1239 | 0.9780 |
| 0.001 | 22.0 | 154 | 0.1211 | 0.9890 |
| 0.001 | 23.0 | 161 | 0.1253 | 0.9890 |
| 0.0 | 24.0 | 168 | 0.1265 | 0.9890 |
| 0.0 | 25.0 | 175 | 0.1267 | 0.9890 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
[
"crop",
"not_crop"
] |
matthieulel/convnextv2-atto-1k-224-finetuned-galaxy10-decals
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnextv2-atto-1k-224-finetuned-galaxy10-decals
This model is a fine-tuned version of [facebook/convnextv2-atto-1k-224](https://huggingface.co/facebook/convnextv2-atto-1k-224) on the matthieulel/galaxy10_decals dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4668
- Accuracy: 0.8461
- Precision: 0.8444
- Recall: 0.8461
- F1: 0.8442
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 2.0062 | 0.99 | 62 | 1.8928 | 0.3450 | 0.3432 | 0.3450 | 0.2956 |
| 1.1323 | 2.0 | 125 | 1.0026 | 0.6590 | 0.6634 | 0.6590 | 0.6399 |
| 0.8977 | 2.99 | 187 | 0.7348 | 0.7486 | 0.7415 | 0.7486 | 0.7399 |
| 0.7119 | 4.0 | 250 | 0.6395 | 0.7892 | 0.7878 | 0.7892 | 0.7770 |
| 0.6393 | 4.99 | 312 | 0.5801 | 0.7971 | 0.7916 | 0.7971 | 0.7915 |
| 0.6463 | 6.0 | 375 | 0.5958 | 0.7976 | 0.8147 | 0.7976 | 0.7909 |
| 0.6197 | 6.99 | 437 | 0.5363 | 0.8151 | 0.8119 | 0.8151 | 0.8112 |
| 0.5779 | 8.0 | 500 | 0.5276 | 0.8207 | 0.8205 | 0.8207 | 0.8185 |
| 0.5841 | 8.99 | 562 | 0.5197 | 0.8185 | 0.8203 | 0.8185 | 0.8157 |
| 0.5597 | 10.0 | 625 | 0.5025 | 0.8253 | 0.8192 | 0.8253 | 0.8193 |
| 0.5437 | 10.99 | 687 | 0.4912 | 0.8309 | 0.8295 | 0.8309 | 0.8296 |
| 0.5242 | 12.0 | 750 | 0.5001 | 0.8275 | 0.8303 | 0.8275 | 0.8245 |
| 0.5029 | 12.99 | 812 | 0.5075 | 0.8241 | 0.8228 | 0.8241 | 0.8208 |
| 0.5396 | 14.0 | 875 | 0.4784 | 0.8393 | 0.8395 | 0.8393 | 0.8371 |
| 0.4746 | 14.99 | 937 | 0.4727 | 0.8331 | 0.8318 | 0.8331 | 0.8317 |
| 0.4786 | 16.0 | 1000 | 0.4856 | 0.8331 | 0.8308 | 0.8331 | 0.8300 |
| 0.4338 | 16.99 | 1062 | 0.4884 | 0.8337 | 0.8333 | 0.8337 | 0.8309 |
| 0.4772 | 18.0 | 1125 | 0.4618 | 0.8405 | 0.8370 | 0.8405 | 0.8377 |
| 0.4733 | 18.99 | 1187 | 0.4740 | 0.8393 | 0.8394 | 0.8393 | 0.8381 |
| 0.4475 | 20.0 | 1250 | 0.4678 | 0.8388 | 0.8349 | 0.8388 | 0.8345 |
| 0.4229 | 20.99 | 1312 | 0.4881 | 0.8331 | 0.8317 | 0.8331 | 0.8303 |
| 0.46 | 22.0 | 1375 | 0.4728 | 0.8410 | 0.8382 | 0.8410 | 0.8371 |
| 0.4298 | 22.99 | 1437 | 0.4642 | 0.8360 | 0.8348 | 0.8360 | 0.8345 |
| 0.4225 | 24.0 | 1500 | 0.4706 | 0.8371 | 0.8368 | 0.8371 | 0.8359 |
| 0.426 | 24.99 | 1562 | 0.4733 | 0.8399 | 0.8367 | 0.8399 | 0.8371 |
| 0.3839 | 26.0 | 1625 | 0.4682 | 0.8444 | 0.8423 | 0.8444 | 0.8422 |
| 0.4007 | 26.99 | 1687 | 0.4665 | 0.8382 | 0.8371 | 0.8382 | 0.8367 |
| 0.4245 | 28.0 | 1750 | 0.4695 | 0.8388 | 0.8357 | 0.8388 | 0.8358 |
| 0.3868 | 28.99 | 1812 | 0.4668 | 0.8461 | 0.8444 | 0.8461 | 0.8442 |
| 0.3933 | 29.76 | 1860 | 0.4657 | 0.8461 | 0.8442 | 0.8461 | 0.8440 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.15.1
|
[
"disturbed galaxies",
"merging galaxies",
"round smooth galaxies",
"in-between round smooth galaxies",
"cigar shaped smooth galaxies",
"barred spiral galaxies",
"unbarred tight spiral galaxies",
"unbarred loose spiral galaxies",
"edge-on galaxies without bulge",
"edge-on galaxies with bulge"
] |
SeyedAli/Remote_Sensing_Image_Swin_Transformer
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Remote_Sensing_Image_Swin_Transformer
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1004
- Accuracy: 0.9661
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2786 | 1.0 | 35 | 0.1433 | 0.9536 |
| 0.1035 | 2.0 | 70 | 0.1101 | 0.9625 |
| 0.0288 | 3.0 | 105 | 0.1004 | 0.9661 |
### Confusion matrix
<img src='download.png'>
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
[
"field",
"forest",
"grass",
"industry",
"parking",
"resident",
"river or lake"
] |
matthieulel/convnextv2-femto-1k-224-finetuned-galaxy10-decals
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnextv2-femto-1k-224-finetuned-galaxy10-decals
This model is a fine-tuned version of [facebook/convnextv2-femto-1k-224](https://huggingface.co/facebook/convnextv2-femto-1k-224) on the matthieulel/galaxy10_decals dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4463
- Accuracy: 0.8551
- Precision: 0.8509
- Recall: 0.8551
- F1: 0.8514
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.7326 | 0.99 | 62 | 1.6140 | 0.4758 | 0.4530 | 0.4758 | 0.4312 |
| 1.1706 | 2.0 | 125 | 1.0827 | 0.6218 | 0.6294 | 0.6218 | 0.5983 |
| 0.9046 | 2.99 | 187 | 0.7418 | 0.7542 | 0.7566 | 0.7542 | 0.7351 |
| 0.7305 | 4.0 | 250 | 0.6540 | 0.7880 | 0.7823 | 0.7880 | 0.7789 |
| 0.6378 | 4.99 | 312 | 0.5903 | 0.8089 | 0.8054 | 0.8089 | 0.8047 |
| 0.6447 | 6.0 | 375 | 0.5915 | 0.7954 | 0.8041 | 0.7954 | 0.7865 |
| 0.6228 | 6.99 | 437 | 0.5513 | 0.8162 | 0.8201 | 0.8162 | 0.8164 |
| 0.5758 | 8.0 | 500 | 0.5553 | 0.8078 | 0.8094 | 0.8078 | 0.8033 |
| 0.5831 | 8.99 | 562 | 0.5207 | 0.8191 | 0.8246 | 0.8191 | 0.8162 |
| 0.537 | 10.0 | 625 | 0.4981 | 0.8286 | 0.8233 | 0.8286 | 0.8222 |
| 0.5322 | 10.99 | 687 | 0.4830 | 0.8337 | 0.8340 | 0.8337 | 0.8332 |
| 0.5171 | 12.0 | 750 | 0.4931 | 0.8253 | 0.8258 | 0.8253 | 0.8233 |
| 0.5092 | 12.99 | 812 | 0.4891 | 0.8360 | 0.8360 | 0.8360 | 0.8325 |
| 0.5245 | 14.0 | 875 | 0.4585 | 0.8450 | 0.8452 | 0.8450 | 0.8431 |
| 0.4585 | 14.99 | 937 | 0.4682 | 0.8422 | 0.8407 | 0.8422 | 0.8407 |
| 0.455 | 16.0 | 1000 | 0.4659 | 0.8388 | 0.8370 | 0.8388 | 0.8357 |
| 0.4175 | 16.99 | 1062 | 0.4633 | 0.8382 | 0.8363 | 0.8382 | 0.8351 |
| 0.4574 | 18.0 | 1125 | 0.4479 | 0.8450 | 0.8435 | 0.8450 | 0.8428 |
| 0.4593 | 18.99 | 1187 | 0.4577 | 0.8439 | 0.8446 | 0.8439 | 0.8430 |
| 0.4423 | 20.0 | 1250 | 0.4589 | 0.8461 | 0.8426 | 0.8461 | 0.8413 |
| 0.4141 | 20.99 | 1312 | 0.4732 | 0.8326 | 0.8339 | 0.8326 | 0.8299 |
| 0.4534 | 22.0 | 1375 | 0.4477 | 0.8461 | 0.8422 | 0.8461 | 0.8433 |
| 0.4011 | 22.99 | 1437 | 0.4614 | 0.8399 | 0.8403 | 0.8399 | 0.8390 |
| 0.4162 | 24.0 | 1500 | 0.4576 | 0.8450 | 0.8443 | 0.8450 | 0.8437 |
| 0.4291 | 24.99 | 1562 | 0.4609 | 0.8472 | 0.8441 | 0.8472 | 0.8439 |
| 0.3698 | 26.0 | 1625 | 0.4469 | 0.8506 | 0.8484 | 0.8506 | 0.8482 |
| 0.3957 | 26.99 | 1687 | 0.4488 | 0.8478 | 0.8464 | 0.8478 | 0.8464 |
| 0.4053 | 28.0 | 1750 | 0.4463 | 0.8551 | 0.8509 | 0.8551 | 0.8514 |
| 0.377 | 28.99 | 1812 | 0.4429 | 0.8540 | 0.8504 | 0.8540 | 0.8508 |
| 0.381 | 29.76 | 1860 | 0.4423 | 0.8517 | 0.8483 | 0.8517 | 0.8489 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.15.1
|
[
"disturbed galaxies",
"merging galaxies",
"round smooth galaxies",
"in-between round smooth galaxies",
"cigar shaped smooth galaxies",
"barred spiral galaxies",
"unbarred tight spiral galaxies",
"unbarred loose spiral galaxies",
"edge-on galaxies without bulge",
"edge-on galaxies with bulge"
] |
matthieulel/convnextv2-pico-1k-224-finetuned-galaxy10-decals
|
# convnextv2-pico-1k-224-finetuned-galaxy10-decals
This model is a fine-tuned version of [facebook/convnextv2-pico-1k-224](https://huggingface.co/facebook/convnextv2-pico-1k-224) on the matthieulel/galaxy10_decals dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5795
- Accuracy: 0.8546
- Precision: 0.8565
- Recall: 0.8546
- F1: 0.8545
## Model description
Same setup as the femto card above, one size up: this checkpoint fine-tunes ConvNeXt V2 Pico (roughly 9M parameters, pretrained on ImageNet-1k at 224x224) for 10-class galaxy morphology classification.
## Intended uses & limitations
Intended for classifying galaxy images into the ten morphology classes listed at the end of this card; the `pipeline` example below is the shortest route to predictions. The same DECaLS-only training caveat applies.
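A minimal sketch using the `pipeline` API, with `galaxy.png` as a placeholder image path:

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="matthieulel/convnextv2-pico-1k-224-finetuned-galaxy10-decals",
)
# Returns classes sorted by score; [0] is the top prediction.
print(classifier("galaxy.png")[0])
```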
## Training and evaluation data
The matthieulel/galaxy10_decals dataset: colored DECaLS galaxy images with ten Galaxy Zoo-derived morphology labels, as described on the femto card above.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
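These hyperparameters are identical to the femto run above. The accuracy, precision, recall, and F1 columns in the results table below can be produced by a `compute_metrics` function passed to the `Trainer`; a minimal sketch, assuming weighted averaging (the card does not state the averaging mode):

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # Weighted averaging is an assumption; the card does not state it.
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```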
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.0752 | 0.99 | 62 | 0.9584 | 0.6697 | 0.6820 | 0.6697 | 0.6581 |
| 0.814 | 2.0 | 125 | 0.6716 | 0.7728 | 0.7780 | 0.7728 | 0.7695 |
| 0.7289 | 2.99 | 187 | 0.6071 | 0.7959 | 0.8093 | 0.7959 | 0.7943 |
| 0.6444 | 4.0 | 250 | 0.5873 | 0.8055 | 0.8103 | 0.8055 | 0.8019 |
| 0.5855 | 4.99 | 312 | 0.5889 | 0.8106 | 0.8226 | 0.8106 | 0.8106 |
| 0.5778 | 6.0 | 375 | 0.5039 | 0.8281 | 0.8321 | 0.8281 | 0.8274 |
| 0.5575 | 6.99 | 437 | 0.5162 | 0.8140 | 0.8235 | 0.8140 | 0.8148 |
| 0.5011 | 8.0 | 500 | 0.5369 | 0.8207 | 0.8234 | 0.8207 | 0.8205 |
| 0.4968 | 8.99 | 562 | 0.5152 | 0.8292 | 0.8282 | 0.8292 | 0.8270 |
| 0.4593 | 10.0 | 625 | 0.4854 | 0.8382 | 0.8408 | 0.8382 | 0.8367 |
| 0.4442 | 10.99 | 687 | 0.4923 | 0.8416 | 0.8423 | 0.8416 | 0.8411 |
| 0.4071 | 12.0 | 750 | 0.5312 | 0.8377 | 0.8356 | 0.8377 | 0.8331 |
| 0.4057 | 12.99 | 812 | 0.4954 | 0.8433 | 0.8449 | 0.8433 | 0.8416 |
| 0.4074 | 14.0 | 875 | 0.4735 | 0.8534 | 0.8509 | 0.8534 | 0.8493 |
| 0.3709 | 14.99 | 937 | 0.4977 | 0.8461 | 0.8450 | 0.8461 | 0.8442 |
| 0.3467 | 16.0 | 1000 | 0.5364 | 0.8286 | 0.8278 | 0.8286 | 0.8274 |
| 0.3129 | 16.99 | 1062 | 0.5695 | 0.8422 | 0.8413 | 0.8422 | 0.8376 |
| 0.3242 | 18.0 | 1125 | 0.5131 | 0.8455 | 0.8469 | 0.8455 | 0.8450 |
| 0.3046 | 18.99 | 1187 | 0.5553 | 0.8399 | 0.8382 | 0.8399 | 0.8371 |
| 0.2805 | 20.0 | 1250 | 0.5871 | 0.8523 | 0.8532 | 0.8523 | 0.8456 |
| 0.2776 | 20.99 | 1312 | 0.5428 | 0.8433 | 0.8404 | 0.8433 | 0.8404 |
| 0.2975 | 22.0 | 1375 | 0.5624 | 0.8393 | 0.8344 | 0.8393 | 0.8359 |
| 0.268 | 22.99 | 1437 | 0.5485 | 0.8495 | 0.8518 | 0.8495 | 0.8498 |
| 0.2535 | 24.0 | 1500 | 0.6135 | 0.8382 | 0.8367 | 0.8382 | 0.8358 |
| 0.2543 | 24.99 | 1562 | 0.6103 | 0.8393 | 0.8389 | 0.8393 | 0.8375 |
| 0.2283 | 26.0 | 1625 | 0.5639 | 0.8484 | 0.8499 | 0.8484 | 0.8480 |
| 0.2341 | 26.99 | 1687 | 0.5795 | 0.8546 | 0.8565 | 0.8546 | 0.8545 |
| 0.2404 | 28.0 | 1750 | 0.5794 | 0.8534 | 0.8515 | 0.8534 | 0.8511 |
| 0.2168 | 28.99 | 1812 | 0.5652 | 0.8546 | 0.8525 | 0.8546 | 0.8524 |
| 0.2057 | 29.76 | 1860 | 0.5650 | 0.8546 | 0.8519 | 0.8546 | 0.8518 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.15.1
|
[
"disturbed galaxies",
"merging galaxies",
"round smooth galaxies",
"in-between round smooth galaxies",
"cigar shaped smooth galaxies",
"barred spiral galaxies",
"unbarred tight spiral galaxies",
"unbarred loose spiral galaxies",
"edge-on galaxies without bulge",
"edge-on galaxies with bulge"
] |
matthieulel/convnextv2-nano-22k-384-finetuned-galaxy10-decals
|
# convnextv2-nano-22k-384-finetuned-galaxy10-decals
This model is a fine-tuned version of [facebook/convnextv2-nano-22k-384](https://huggingface.co/facebook/convnextv2-nano-22k-384) on the matthieulel/galaxy10_decals dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4057
- Accuracy: 0.8681
- Precision: 0.8662
- Recall: 0.8681
- F1: 0.8650
## Model description
This checkpoint fine-tunes ConvNeXt V2 Nano (roughly 16M parameters, pretrained with FCMAE and fine-tuned on ImageNet-22k at 384x384) for 10-class galaxy morphology classification. The larger pretraining corpus and higher input resolution may help explain its edge over the femto and pico variants above.
## Intended uses & limitations
Intended for classifying galaxy images into the ten morphology classes listed at the end of this card, as in the example below. Inputs are processed at 384x384: the image processor handles resizing, but small cutouts will be upsampled accordingly. The DECaLS-only training caveat from the sibling cards applies here as well.
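A minimal inference sketch; the model id is taken from this card, `galaxy.png` is a placeholder path, and the processor transparently resizes inputs to 384x384:

```python
from transformers import AutoImageProcessor, ConvNextV2ForImageClassification
from PIL import Image
import torch

repo = "matthieulel/convnextv2-nano-22k-384-finetuned-galaxy10-decals"
processor = AutoImageProcessor.from_pretrained(repo)  # resizes inputs to 384x384
model = ConvNextV2ForImageClassification.from_pretrained(repo)
model.eval()

image = Image.open("galaxy.png").convert("RGB")  # placeholder path
with torch.no_grad():
    logits = model(**processor(images=image, return_tensors="pt")).logits
probs = logits.softmax(-1)[0]
idx = int(probs.argmax())
print(model.config.id2label[idx], f"{probs[idx].item():.3f}")
```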
## Training and evaluation data
The matthieulel/galaxy10_decals dataset (colored DECaLS galaxy images with ten Galaxy Zoo-derived morphology labels); the sketch below shows how to load it and inspect the label names.
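A short loader sketch; the `train` split and `label` feature names are assumptions about the dataset layout:

```python
from datasets import load_dataset

ds = load_dataset("matthieulel/galaxy10_decals")
print(ds)  # inspect the available splits and sizes
# "train" split and "label" feature names are assumed, not taken from the card.
print(ds["train"].features["label"].names)  # the 10 morphology classes listed below
```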
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.6939 | 0.99 | 62 | 1.5326 | 0.4656 | 0.4580 | 0.4656 | 0.4176 |
| 0.9882 | 2.0 | 125 | 0.8491 | 0.7142 | 0.7196 | 0.7142 | 0.7066 |
| 0.7595 | 2.99 | 187 | 0.6041 | 0.7993 | 0.7990 | 0.7993 | 0.7947 |
| 0.6097 | 4.0 | 250 | 0.5397 | 0.8134 | 0.8078 | 0.8134 | 0.8069 |
| 0.5565 | 4.99 | 312 | 0.4990 | 0.8286 | 0.8269 | 0.8286 | 0.8268 |
| 0.5822 | 6.0 | 375 | 0.4684 | 0.8427 | 0.8425 | 0.8427 | 0.8374 |
| 0.5244 | 6.99 | 437 | 0.4484 | 0.8512 | 0.8476 | 0.8512 | 0.8483 |
| 0.4957 | 8.0 | 500 | 0.4487 | 0.8506 | 0.8543 | 0.8506 | 0.8514 |
| 0.4857 | 8.99 | 562 | 0.4369 | 0.8579 | 0.8572 | 0.8579 | 0.8545 |
| 0.4634 | 10.0 | 625 | 0.4104 | 0.8658 | 0.8630 | 0.8658 | 0.8639 |
| 0.4433 | 10.99 | 687 | 0.4117 | 0.8664 | 0.8649 | 0.8664 | 0.8651 |
| 0.4267 | 12.0 | 750 | 0.4096 | 0.8664 | 0.8632 | 0.8664 | 0.8634 |
| 0.4201 | 12.99 | 812 | 0.4212 | 0.8658 | 0.8645 | 0.8658 | 0.8631 |
| 0.4176 | 14.0 | 875 | 0.4057 | 0.8681 | 0.8662 | 0.8681 | 0.8650 |
| 0.3717 | 14.99 | 937 | 0.4299 | 0.8568 | 0.8547 | 0.8568 | 0.8551 |
| 0.3759 | 16.0 | 1000 | 0.4446 | 0.8585 | 0.8563 | 0.8585 | 0.8555 |
| 0.3264 | 16.99 | 1062 | 0.4276 | 0.8647 | 0.8630 | 0.8647 | 0.8623 |
| 0.3573 | 18.0 | 1125 | 0.4199 | 0.8641 | 0.8621 | 0.8641 | 0.8610 |
| 0.3356 | 18.99 | 1187 | 0.4388 | 0.8585 | 0.8597 | 0.8585 | 0.8579 |
| 0.3313 | 20.0 | 1250 | 0.4385 | 0.8602 | 0.8585 | 0.8602 | 0.8571 |
| 0.3044 | 20.99 | 1312 | 0.4485 | 0.8585 | 0.8578 | 0.8585 | 0.8560 |
| 0.3525 | 22.0 | 1375 | 0.4303 | 0.8647 | 0.8641 | 0.8647 | 0.8634 |
| 0.3207 | 22.99 | 1437 | 0.4525 | 0.8608 | 0.8597 | 0.8608 | 0.8591 |
| 0.3044 | 24.0 | 1500 | 0.4417 | 0.8591 | 0.8578 | 0.8591 | 0.8579 |
| 0.3088 | 24.99 | 1562 | 0.4626 | 0.8608 | 0.8586 | 0.8608 | 0.8582 |
| 0.2897 | 26.0 | 1625 | 0.4524 | 0.8630 | 0.8606 | 0.8630 | 0.8606 |
| 0.2823 | 26.99 | 1687 | 0.4433 | 0.8670 | 0.8657 | 0.8670 | 0.8657 |
| 0.2928 | 28.0 | 1750 | 0.4479 | 0.8658 | 0.8629 | 0.8658 | 0.8631 |
| 0.2695 | 28.99 | 1812 | 0.4455 | 0.8658 | 0.8637 | 0.8658 | 0.8639 |
| 0.274 | 29.76 | 1860 | 0.4449 | 0.8630 | 0.8607 | 0.8630 | 0.8610 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.15.1
|
[
"disturbed galaxies",
"merging galaxies",
"round smooth galaxies",
"in-between round smooth galaxies",
"cigar shaped smooth galaxies",
"barred spiral galaxies",
"unbarred tight spiral galaxies",
"unbarred loose spiral galaxies",
"edge-on galaxies without bulge",
"edge-on galaxies with bulge"
] |