\n\nThis model is a lightweight and uncased version of MiniLM[1] for the Italian language. Its 17M parameters and 67MB size make it\n85% lighter than a typical mono-lingual BERT model. It is ideal when memory consumption and execution speed are critical while maintaining high-quality results.\n\n\n
Model description
\n\nThe model builds on mMiniLMv2[1] (from Microsoft: [L6xH384 mMiniLMv2](https://github.com/microsoft/unilm/tree/master/minilm)) as a starting point, \nfocusing it on the Italian language while at the same time turning it into an uncased model by modifying the embedding layer \n(as in [2], but computing document-level frequencies over the Wikipedia dataset and setting a frequency threshold of 0.1%), which brings a considerable\nreduction in the number of parameters.\n\nTo compensate for the deletion of cased tokens, which now forces the model to exploit lowercase representations of words previously capitalized, \nthe model has been further pre-trained on the Italian split of the [Wikipedia](https://huggingface.co/datasets/wikipedia) dataset, using the whole word masking [3] technique to make it more robust \nto the new uncased representations.\n\nThe resulting model has 17M parameters, a vocabulary of 14.610 tokens, and a size of 67MB, which makes it 85% lighter than a typical mono-lingual BERT model and\n75% lighter than a standard mono-lingual DistilBERT model.\n\n\n
Training procedure
\n\nThe model has been trained for masked language modeling on the Italian Wikipedia (~3GB) dataset for 10K steps, using the AdamW optimizer, with a batch size of 512 \n(obtained through 128 gradient accumulation steps),\na sequence length of 512, and a linearly decaying learning rate starting from 5e-5. The training has been performed using dynamic masking between epochs and\nexploiting the whole word masking technique.\n\n\n
Performances
\n\nThe following metrics have been computed on the Part of Speech Tagging and Named Entity Recognition tasks, using the UD Italian ISDT and WikiNER datasets, respectively. \nThe PoST model has been trained for 5 epochs, and the NER model for 3 epochs, both with a constant learning rate, fixed at 1e-5. For Part of Speech Tagging, the metrics have been computed on the default test set\nprovided with the dataset, while for Named Entity Recognition the metrics have been computed with a 5-fold cross-validation\n\n| Task | Recall | Precision | F1 |\n| ------ | ------ | ------ | ------ |\n| Part of Speech Tagging | 95.64 | 95.32 | 95.45 |\n| Named Entity Recognition | 82.27 | 80.64 | 81.29 |\n\nThe metrics have been computed at the token level and macro-averaged over the classes.\n\n
Demo
\n\nYou can try the model online (fine-tuned on named entity recognition) using this web app: https://huggingface.co/spaces/osiria/flare-it-demo\n\n
\n\nThis lightweight model has been further pre-trained on Wikipedia, so it's particularly suitable as an agile analyzer for large volumes of natively digital text \nfrom the world wide web, written in a correct and fluent form (like wikis, web pages, news, etc.). However, it may show limitations when it comes to chaotic text, containing errors and slang expressions\n(like social media posts) or when it comes to domain-specific text (like medical, financial or legal content).\n\n
\n\nThe model is released under MIT license"}}},{"rowIdx":28078,"cells":{"modelId":{"kind":"string","value":"CAMeL-Lab/bert-base-arabic-camelbert-da-pos-egy"},"tags":{"kind":"list like","value":["pytorch","tf","bert","token-classification","ar","arxiv:2103.06678","transformers","license:apache-2.0","autotrain_compatible"],"string":"[\n \"pytorch\",\n \"tf\",\n \"bert\",\n \"token-classification\",\n \"ar\",\n \"arxiv:2103.06678\",\n \"transformers\",\n \"license:apache-2.0\",\n \"autotrain_compatible\"\n]"},"pipeline_tag":{"kind":"string","value":"token-classification"},"config":{"kind":"string","value":"{\n \"architectures\": [\n \"BertForTokenClassification\"\n ],\n \"model_type\": \"bert\",\n \"task_specific_params\": {\n \"conversational\": {\n \"max_length\": null\n },\n \"summarization\": {\n \"early_stopping\": null,\n \"length_penalty\": null,\n \"max_length\": null,\n \"min_length\": null,\n \"no_repeat_ngram_size\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"text-generation\": {\n \"do_sample\": null,\n \"max_length\": null\n },\n \"translation_en_to_de\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_fr\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_ro\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n }\n }\n}"},"downloads":{"kind":"number","value":32,"string":"32"},"first_commit":{"kind":"null"},"card":{"kind":"string","value":"---\ninference: \n parameters: \n do_sample: true\n max_length: 384\n top_p: 0.9\n repetition_penalty: 1.0\nlanguage: \n - en\nlicense: mit\ntags: \n - \"text2text generation\"\ntask: \n name: \"lyrics interpretation\"\n type: \"text2text generation\"\nwidget: \n - \n text: \"Explain: \\nLoving him is like driving a new Maserati down a dead end street\\nFaster than the wind, passionate as sin, ending so suddenly\\nLoving him is like trying to change your mind\\nOnce you're already flying through the free fall\\nLike the colors in autumn, so bright, just before they lose it all\\n\\nLosing him was blue, like I'd never known\\nMissing him was dark gray, all alone\\nForgetting him was like trying to know\\nSomebody you never met\\nBut loving him was red\\nLoving him was red\\n\\nTouching him was like realizing all you ever wanted\\nWas right there in front of you\\nMemorizing him was as easy as knowing all the words\\nTo your old favorite song\\nFighting with him was like trying to solve a crossword\\nAnd realizing there's no right answer\\nRegretting him was like wishing you never found out\\nThat love could be that strong\\n\\nLosing him was blue, like I'd never known\\nMissing him was dark gray, all alone\\nForgetting him was like trying to know\\nSomebody you never met\\nBut loving him was red\\nOh, red\\nBurning red\\n\\nRemembering him comes in flashbacks and echoes\\nTell myself it's time now gotta let go\\nBut moving on from him is impossible\\nWhen I still see it all in my head\\nIn burning red\\nBurning, it was red\\n\\nOh, losing him was blue, like I'd never\\nnown\\nMissing him was dark gray, all alone\\nForgetting him was like trying to know\\nSomebody you never met\\n'Cause loving him was red\\nYeah, yeah, red\\nBurning red\\n\\nAnd that's why he's spinning 'round in my head\\nComes back to me, burning red\\nYeah, yeah\\nHis love was like driving a new Maserati down a dead end street\"\n example_title: \"Red - Taylor Swift\"\n---\n\n\n# Overview\nThis pilot hub aims to test whether a flan-t5-base can effectively automate poem interpretation. \nTo use the hub, simply paste in any poem of interest and see its meaning. Please begin your request with the prompt, 'Explain: '."}}},{"rowIdx":28079,"cells":{"modelId":{"kind":"string","value":"CAMeL-Lab/bert-base-arabic-camelbert-da-pos-msa"},"tags":{"kind":"list like","value":["pytorch","tf","bert","token-classification","ar","arxiv:2103.06678","transformers","license:apache-2.0","autotrain_compatible"],"string":"[\n \"pytorch\",\n \"tf\",\n \"bert\",\n \"token-classification\",\n \"ar\",\n \"arxiv:2103.06678\",\n \"transformers\",\n \"license:apache-2.0\",\n \"autotrain_compatible\"\n]"},"pipeline_tag":{"kind":"string","value":"token-classification"},"config":{"kind":"string","value":"{\n \"architectures\": [\n \"BertForTokenClassification\"\n ],\n \"model_type\": \"bert\",\n \"task_specific_params\": {\n \"conversational\": {\n \"max_length\": null\n },\n \"summarization\": {\n \"early_stopping\": null,\n \"length_penalty\": null,\n \"max_length\": null,\n \"min_length\": null,\n \"no_repeat_ngram_size\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"text-generation\": {\n \"do_sample\": null,\n \"max_length\": null\n },\n \"translation_en_to_de\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_fr\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_ro\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n }\n }\n}"},"downloads":{"kind":"number","value":27,"string":"27"},"first_commit":{"kind":"null"},"card":{"kind":"string","value":"---\ninference: \n parameters: \n do_sample: true\n max_length: 384\n top_p: 0.9\n repetition_penalty: 1.0\nlanguage: \n - en\nlicense: mit\ntags: \n - \"text2text generation\"\ntask: \n name: \"poem interpretation\"\n type: \"text2text generation\"\nwidget: \n - \n text: \"Explain: \\nThe Lost Boy\\n\\nBoy it really stinks in here\\nThe dumpster is not the place\\nTo get the food you need each day\\nJust to feed your face.\\n\\nA ten-year-old with a dirty face\\nCrawls out with his daily meal\\nWhat is he doing in this place\\nHow am I suppose to feel?\\n\\nHis mother cradles a baby \\nThe child's been dead three weeks\\nHer mind is gone from drug abuse\\nAnd now she hardly speaks.\\n\\nGrandma is a drunkard\\nWith men who come to town\\nBringing her a bottle\\nJust to go a round.\\n\\nDrugs out on the table \\nA line or two is good\\nThat should carry her over \\nNo one ever understood.\\n\\nThe little boy with dirty face\\nHas not been schooled in years\\nHe fights the streets alone\\nLong since lost his fears.\\n\\nA stale sandwich, and watered coke\\nHis meal for this day\\nWhatever tomorrow may bring\\nHe knows not the word play.\\n\\nEmaciated with distant eyes\\nNo one really sees him\\nJust one of the lost boys\\nHis life completely grim.\\n\\nGod bless the children!\\n\\n\"\n example_title: \"The Lost Boy - pattyann4500 (allpoetry.com/920731)\"\n - \n text: \"Explain: \\nLet your breath be the air I need,\\nwhen I drown in your eyes as I see.\\nLet yourself fall into my arms that bleed,\\nwhen the world shows you no mercy.\\n\\nLet your sad past bury down in the core,\\nwhen you walk with your heart close to me.\\nLet there be your lovely face at the door,\\nWhen I return from the war no matter how long it be.\\n\\nLet your love nourish my frozen heart,\\nwhen it lies under the snow capped tree.\\nLet me be enslaved with you forever from the start,\\nwhen the time comes, together we shall flee.\\n\\nLet your presence enlighten my dark,\\nwhen your smile reflects in the sea.\\nLet the words of love be thy spark,\\nwhen you come out of dreams to kiss me.\\n\\nI wish we were together... my princess... \\n\"\n example_title: \"Princess... - Soulhealer95 (allpoetry.com/11038949)\"\n---\n\n\n# Overview\nThe aim of this pilot hub is to test whether a Flan-T5-Base model, when pre-trained with a lyrics interpretation task, can better interpret poems.\nTo use the hub, simply paste in any poem of interest and see its meaning. Please begin your request with the prompt, 'Explain: '."}}},{"rowIdx":28080,"cells":{"modelId":{"kind":"string","value":"CAMeL-Lab/bert-base-arabic-camelbert-da"},"tags":{"kind":"list like","value":["pytorch","tf","jax","bert","fill-mask","ar","arxiv:2103.06678","transformers","license:apache-2.0","autotrain_compatible"],"string":"[\n \"pytorch\",\n \"tf\",\n \"jax\",\n \"bert\",\n \"fill-mask\",\n \"ar\",\n \"arxiv:2103.06678\",\n \"transformers\",\n \"license:apache-2.0\",\n \"autotrain_compatible\"\n]"},"pipeline_tag":{"kind":"string","value":"fill-mask"},"config":{"kind":"string","value":"{\n \"architectures\": [\n \"BertForMaskedLM\"\n ],\n \"model_type\": \"bert\",\n \"task_specific_params\": {\n \"conversational\": {\n \"max_length\": null\n },\n \"summarization\": {\n \"early_stopping\": null,\n \"length_penalty\": null,\n \"max_length\": null,\n \"min_length\": null,\n \"no_repeat_ngram_size\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"text-generation\": {\n \"do_sample\": null,\n \"max_length\": null\n },\n \"translation_en_to_de\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_fr\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_ro\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n }\n }\n}"},"downloads":{"kind":"number","value":449,"string":"449"},"first_commit":{"kind":"null"},"card":{"kind":"string","value":"---\nlibrary_name: stable-baselines3\ntags:\n- LunarLander-v2\n- deep-reinforcement-learning\n- reinforcement-learning\n- stable-baselines3\nmodel-index:\n- name: PPO\n results:\n - task:\n type: reinforcement-learning\n name: reinforcement-learning\n dataset:\n name: LunarLander-v2\n type: LunarLander-v2\n metrics:\n - type: mean_reward\n value: 257.28 +/- 16.62\n name: mean_reward\n verified: false\n---\n\n# **PPO** Agent playing **LunarLander-v2**\nThis is a trained model of a **PPO** agent playing **LunarLander-v2**\nusing the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).\n\n## Usage (with Stable-baselines3)\nTODO: Add your code\n\n\n```python\nfrom stable_baselines3 import ...\nfrom huggingface_sb3 import load_from_hub\n\n...\n```\n"}}},{"rowIdx":28081,"cells":{"modelId":{"kind":"string","value":"CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus6"},"tags":{"kind":"list like","value":["pytorch","tf","bert","text-classification","ar","arxiv:2103.06678","transformers","license:apache-2.0"],"string":"[\n \"pytorch\",\n \"tf\",\n \"bert\",\n \"text-classification\",\n \"ar\",\n \"arxiv:2103.06678\",\n \"transformers\",\n \"license:apache-2.0\"\n]"},"pipeline_tag":{"kind":"string","value":"text-classification"},"config":{"kind":"string","value":"{\n \"architectures\": [\n \"BertForSequenceClassification\"\n ],\n \"model_type\": \"bert\",\n \"task_specific_params\": {\n \"conversational\": {\n \"max_length\": null\n },\n \"summarization\": {\n \"early_stopping\": null,\n \"length_penalty\": null,\n \"max_length\": null,\n \"min_length\": null,\n \"no_repeat_ngram_size\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"text-generation\": {\n \"do_sample\": null,\n \"max_length\": null\n },\n \"translation_en_to_de\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_fr\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_ro\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n }\n }\n}"},"downloads":{"kind":"number","value":34,"string":"34"},"first_commit":{"kind":"null"},"card":{"kind":"string","value":"---\ndatasets:\n- logo-wizard/modern-logo-dataset\ntags:\n- text-to-image\n- lora\n- stable-diffusion\npipeline_tag: text-to-image\nlicense: creativeml-openrail-m\n---\n# LoRA text2image fine-tuning - eewwann/logo-diffusion-lora-v10\nThese are LoRA with Hadamard Product (LoHa) adaption weights for [stabilityai/stable-diffusion-2-1](https://huggingface.co/stabilityai/stable-diffusion-2-1/blob/main/v2-1_768-nonema-pruned.safetensors). The weights were fine-tuned on the [logo-wizard/modern-logo-dataset](https://huggingface.co/datasets/logo-wizard/modern-logo-dataset) dataset. You can find some example images in the following. \n\n\n\n\n"}}},{"rowIdx":28082,"cells":{"modelId":{"kind":"string","value":"CAMeL-Lab/bert-base-arabic-camelbert-mix-poetry"},"tags":{"kind":"list like","value":["pytorch","tf","bert","text-classification","ar","arxiv:1905.05700","arxiv:2103.06678","transformers","license:apache-2.0"],"string":"[\n \"pytorch\",\n \"tf\",\n \"bert\",\n \"text-classification\",\n \"ar\",\n \"arxiv:1905.05700\",\n \"arxiv:2103.06678\",\n \"transformers\",\n \"license:apache-2.0\"\n]"},"pipeline_tag":{"kind":"string","value":"text-classification"},"config":{"kind":"string","value":"{\n \"architectures\": [\n \"BertForSequenceClassification\"\n ],\n \"model_type\": \"bert\",\n \"task_specific_params\": {\n \"conversational\": {\n \"max_length\": null\n },\n \"summarization\": {\n \"early_stopping\": null,\n \"length_penalty\": null,\n \"max_length\": null,\n \"min_length\": null,\n \"no_repeat_ngram_size\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"text-generation\": {\n \"do_sample\": null,\n \"max_length\": null\n },\n \"translation_en_to_de\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_fr\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_ro\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n }\n }\n}"},"downloads":{"kind":"number","value":31,"string":"31"},"first_commit":{"kind":"null"},"card":{"kind":"string","value":"---\ntags:\n- Taxi-v3\n- q-learning\n- reinforcement-learning\n- custom-implementation\nmodel-index:\n- name: q-Taxi-v3\n results:\n - task:\n type: reinforcement-learning\n name: reinforcement-learning\n dataset:\n name: Taxi-v3\n type: Taxi-v3\n metrics:\n - type: mean_reward\n value: 7.50 +/- 2.76\n name: mean_reward\n verified: false\n---\n\n # **Q-Learning** Agent playing1 **Taxi-v3**\n This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .\n\n ## Usage\n\n ```python\n \n model = load_from_hub(repo_id=\"yasndr/q-Taxi-v3\", filename=\"q-learning.pkl\")\n\n # Don't forget to check if you need to add additional attributes (is_slippery=False etc)\n env = gym.make(model[\"env_id\"])\n ```\n "}}},{"rowIdx":28083,"cells":{"modelId":{"kind":"string","value":"CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-msa"},"tags":{"kind":"list like","value":["pytorch","tf","bert","token-classification","ar","arxiv:2103.06678","transformers","license:apache-2.0","autotrain_compatible"],"string":"[\n \"pytorch\",\n \"tf\",\n \"bert\",\n \"token-classification\",\n \"ar\",\n \"arxiv:2103.06678\",\n \"transformers\",\n \"license:apache-2.0\",\n \"autotrain_compatible\"\n]"},"pipeline_tag":{"kind":"string","value":"token-classification"},"config":{"kind":"string","value":"{\n \"architectures\": [\n \"BertForTokenClassification\"\n ],\n \"model_type\": \"bert\",\n \"task_specific_params\": {\n \"conversational\": {\n \"max_length\": null\n },\n \"summarization\": {\n \"early_stopping\": null,\n \"length_penalty\": null,\n \"max_length\": null,\n \"min_length\": null,\n \"no_repeat_ngram_size\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"text-generation\": {\n \"do_sample\": null,\n \"max_length\": null\n },\n \"translation_en_to_de\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_fr\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_ro\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n }\n }\n}"},"downloads":{"kind":"number","value":1862,"string":"1,862"},"first_commit":{"kind":"null"},"card":{"kind":"string","value":"## Psychology-Alpaca-RM\n- PEFT adapter layers for a reward model based on ``decapoda-research/llama-7b-hf``. \n- Trained with a small subset (110 data points) of ``samhog/cgpt-pairs`` with 10K prompts, each with two answers (one 'good', one 'bad')"}}},{"rowIdx":28084,"cells":{"modelId":{"kind":"string","value":"CAMeL-Lab/bert-base-arabic-camelbert-mix"},"tags":{"kind":"list like","value":["pytorch","tf","jax","bert","fill-mask","ar","arxiv:2103.06678","transformers","Arabic","Dialect","Egyptian","Gulf","Levantine","Classical Arabic","MSA","Modern Standard Arabic","license:apache-2.0","autotrain_compatible"],"string":"[\n \"pytorch\",\n \"tf\",\n \"jax\",\n \"bert\",\n \"fill-mask\",\n \"ar\",\n \"arxiv:2103.06678\",\n \"transformers\",\n \"Arabic\",\n \"Dialect\",\n \"Egyptian\",\n \"Gulf\",\n \"Levantine\",\n \"Classical Arabic\",\n \"MSA\",\n \"Modern Standard Arabic\",\n \"license:apache-2.0\",\n \"autotrain_compatible\"\n]"},"pipeline_tag":{"kind":"string","value":"fill-mask"},"config":{"kind":"string","value":"{\n \"architectures\": [\n \"BertForMaskedLM\"\n ],\n \"model_type\": \"bert\",\n \"task_specific_params\": {\n \"conversational\": {\n \"max_length\": null\n },\n \"summarization\": {\n \"early_stopping\": null,\n \"length_penalty\": null,\n \"max_length\": null,\n \"min_length\": null,\n \"no_repeat_ngram_size\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"text-generation\": {\n \"do_sample\": null,\n \"max_length\": null\n },\n \"translation_en_to_de\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_fr\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_ro\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n }\n }\n}"},"downloads":{"kind":"number","value":20880,"string":"20,880"},"first_commit":{"kind":"null"},"card":{"kind":"string","value":"---\nlibrary_name: stable-baselines3\ntags:\n- LunarLander-v2\n- deep-reinforcement-learning\n- reinforcement-learning\n- stable-baselines3\nmodel-index:\n- name: PPO\n results:\n - task:\n type: reinforcement-learning\n name: reinforcement-learning\n dataset:\n name: LunarLander-v2\n type: LunarLander-v2\n metrics:\n - type: mean_reward\n value: 259.59 +/- 19.34\n name: mean_reward\n verified: false\n---\n\n# **PPO** Agent playing **LunarLander-v2**\nThis is a trained model of a **PPO** agent playing **LunarLander-v2**\nusing the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).\n\n## Usage (with Stable-baselines3)\nTODO: Add your code\n\n\n```python\nfrom stable_baselines3 import ...\nfrom huggingface_sb3 import load_from_hub\n\n...\n```\n"}}},{"rowIdx":28085,"cells":{"modelId":{"kind":"string","value":"CAMeL-Lab/bert-base-arabic-camelbert-msa-poetry"},"tags":{"kind":"list like","value":["pytorch","tf","bert","text-classification","ar","arxiv:1905.05700","arxiv:2103.06678","transformers","license:apache-2.0"],"string":"[\n \"pytorch\",\n \"tf\",\n \"bert\",\n \"text-classification\",\n \"ar\",\n \"arxiv:1905.05700\",\n \"arxiv:2103.06678\",\n \"transformers\",\n \"license:apache-2.0\"\n]"},"pipeline_tag":{"kind":"string","value":"text-classification"},"config":{"kind":"string","value":"{\n \"architectures\": [\n \"BertForSequenceClassification\"\n ],\n \"model_type\": \"bert\",\n \"task_specific_params\": {\n \"conversational\": {\n \"max_length\": null\n },\n \"summarization\": {\n \"early_stopping\": null,\n \"length_penalty\": null,\n \"max_length\": null,\n \"min_length\": null,\n \"no_repeat_ngram_size\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"text-generation\": {\n \"do_sample\": null,\n \"max_length\": null\n },\n \"translation_en_to_de\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_fr\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_ro\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n }\n }\n}"},"downloads":{"kind":"number","value":25,"string":"25"},"first_commit":{"kind":"null"},"card":{"kind":"string","value":"---\nlanguage: en\nthumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true\ntags:\n- huggingtweets\nwidget:\n- text: \"My dream is\"\n---\n\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
🤖 AI BOT 🤖
\n
Scratch Team
\n
@scratch
\n
\n\nI was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).\n\nCreate your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!\n\n## How does it work?\n\nThe model uses the following pipeline.\n\n\n\nTo understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).\n\n## Training data\n\nThe model was trained on tweets from Scratch Team.\n\n| Data | Scratch Team |\n| --- | --- |\n| Tweets downloaded | 3161 |\n| Retweets | 2028 |\n| Short tweets | 4 |\n| Tweets kept | 1129 |\n\n[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/qnkb8q9j/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.\n\n## Training procedure\n\nThe model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @scratch's tweets.\n\nHyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1yt6szut) for full transparency and reproducibility.\n\nAt the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1yt6szut/artifacts) is logged and versioned.\n\n## How to use\n\nYou can use this model directly with a pipeline for text generation:\n\n```python\nfrom transformers import pipeline\ngenerator = pipeline('text-generation',\n model='huggingtweets/scratch')\ngenerator(\"My dream is\", num_return_sequences=5)\n```\n\n## Limitations and bias\n\nThe model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).\n\nIn addition, the data present in the user's tweets further affects the text generated by the model.\n\n## About\n\n*Built by Boris Dayma*\n\n[](https://twitter.com/intent/follow?screen_name=borisdayma)\n\nFor more details, visit the project repository.\n\n[](https://github.com/borisdayma/huggingtweets)\n"}}},{"rowIdx":28086,"cells":{"modelId":{"kind":"string","value":"CAMeL-Lab/bert-base-arabic-camelbert-msa-sixteenth"},"tags":{"kind":"list like","value":["pytorch","tf","jax","bert","fill-mask","ar","arxiv:2103.06678","transformers","license:apache-2.0","autotrain_compatible"],"string":"[\n \"pytorch\",\n \"tf\",\n \"jax\",\n \"bert\",\n \"fill-mask\",\n \"ar\",\n \"arxiv:2103.06678\",\n \"transformers\",\n \"license:apache-2.0\",\n \"autotrain_compatible\"\n]"},"pipeline_tag":{"kind":"string","value":"fill-mask"},"config":{"kind":"string","value":"{\n \"architectures\": [\n \"BertForMaskedLM\"\n ],\n \"model_type\": \"bert\",\n \"task_specific_params\": {\n \"conversational\": {\n \"max_length\": null\n },\n \"summarization\": {\n \"early_stopping\": null,\n \"length_penalty\": null,\n \"max_length\": null,\n \"min_length\": null,\n \"no_repeat_ngram_size\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"text-generation\": {\n \"do_sample\": null,\n \"max_length\": null\n },\n \"translation_en_to_de\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_fr\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_ro\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n }\n }\n}"},"downloads":{"kind":"number","value":26,"string":"26"},"first_commit":{"kind":"null"},"card":{"kind":"string","value":"---\nlanguage: en\nthumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true\ntags:\n- huggingtweets\nwidget:\n- text: \"My dream is\"\n---\n\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
🤖 AI BOT 🤖
\n
Chris Uri
\n
@redcloudnimbus
\n
\n\nI was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).\n\nCreate your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!\n\n## How does it work?\n\nThe model uses the following pipeline.\n\n\n\nTo understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).\n\n## Training data\n\nThe model was trained on tweets from Chris Uri.\n\n| Data | Chris Uri |\n| --- | --- |\n| Tweets downloaded | 1359 |\n| Retweets | 208 |\n| Short tweets | 199 |\n| Tweets kept | 952 |\n\n[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/p68z097t/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.\n\n## Training procedure\n\nThe model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @redcloudnimbus's tweets.\n\nHyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/s8pwy6bb) for full transparency and reproducibility.\n\nAt the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/s8pwy6bb/artifacts) is logged and versioned.\n\n## How to use\n\nYou can use this model directly with a pipeline for text generation:\n\n```python\nfrom transformers import pipeline\ngenerator = pipeline('text-generation',\n model='huggingtweets/redcloudnimbus')\ngenerator(\"My dream is\", num_return_sequences=5)\n```\n\n## Limitations and bias\n\nThe model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).\n\nIn addition, the data present in the user's tweets further affects the text generated by the model.\n\n## About\n\n*Built by Boris Dayma*\n\n[](https://twitter.com/intent/follow?screen_name=borisdayma)\n\nFor more details, visit the project repository.\n\n[](https://github.com/borisdayma/huggingtweets)\n"}}},{"rowIdx":28087,"cells":{"modelId":{"kind":"string","value":"CAUKiel/JavaBERT-uncased"},"tags":{"kind":"list like","value":["pytorch","safetensors","bert","fill-mask","java","code","transformers","license:apache-2.0","autotrain_compatible"],"string":"[\n \"pytorch\",\n \"safetensors\",\n \"bert\",\n \"fill-mask\",\n \"java\",\n \"code\",\n \"transformers\",\n \"license:apache-2.0\",\n \"autotrain_compatible\"\n]"},"pipeline_tag":{"kind":"string","value":"fill-mask"},"config":{"kind":"string","value":"{\n \"architectures\": [\n \"BertForMaskedLM\"\n ],\n \"model_type\": \"bert\",\n \"task_specific_params\": {\n \"conversational\": {\n \"max_length\": null\n },\n \"summarization\": {\n \"early_stopping\": null,\n \"length_penalty\": null,\n \"max_length\": null,\n \"min_length\": null,\n \"no_repeat_ngram_size\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"text-generation\": {\n \"do_sample\": null,\n \"max_length\": null\n },\n \"translation_en_to_de\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_fr\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_ro\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n }\n }\n}"},"downloads":{"kind":"number","value":7,"string":"7"},"first_commit":{"kind":"null"},"card":{"kind":"string","value":"---\nlicense: apache-2.0\ntags:\n- generated_from_trainer\nmetrics:\n- accuracy\nmodel-index:\n- name: expert-freelaw\n results: []\n---\n\n\n\n# expert-freelaw\n\nThis model is a fine-tuned version of [EleutherAI/pythia-1b-deduped](https://huggingface.co/EleutherAI/pythia-1b-deduped) on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 2.0413\n- Accuracy: 0.5643\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 64\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 1000\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| 2.0772 | 0.01 | 200 | 2.0728 | 0.5588 |\n| 2.0718 | 0.01 | 400 | 2.0656 | 0.5600 |\n| 2.0661 | 0.02 | 600 | 2.0561 | 0.5617 |\n| 2.0606 | 0.03 | 800 | 2.0472 | 0.5632 |\n| 2.0514 | 0.04 | 1000 | 2.0413 | 0.5643 |\n\n\n### Framework versions\n\n- Transformers 4.28.1\n- Pytorch 2.0.0+cu117\n- Datasets 2.11.0\n- Tokenizers 0.13.3\n"}}},{"rowIdx":28088,"cells":{"modelId":{"kind":"string","value":"CAUKiel/JavaBERT"},"tags":{"kind":"list like","value":["pytorch","safetensors","bert","fill-mask","code","arxiv:2110.10404","arxiv:1910.09700","transformers","license:apache-2.0","autotrain_compatible"],"string":"[\n \"pytorch\",\n \"safetensors\",\n \"bert\",\n \"fill-mask\",\n \"code\",\n \"arxiv:2110.10404\",\n \"arxiv:1910.09700\",\n \"transformers\",\n \"license:apache-2.0\",\n \"autotrain_compatible\"\n]"},"pipeline_tag":{"kind":"string","value":"fill-mask"},"config":{"kind":"string","value":"{\n \"architectures\": [\n \"BertForMaskedLM\"\n ],\n \"model_type\": \"bert\",\n \"task_specific_params\": {\n \"conversational\": {\n \"max_length\": null\n },\n \"summarization\": {\n \"early_stopping\": null,\n \"length_penalty\": null,\n \"max_length\": null,\n \"min_length\": null,\n \"no_repeat_ngram_size\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"text-generation\": {\n \"do_sample\": null,\n \"max_length\": null\n },\n \"translation_en_to_de\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_fr\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_ro\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n }\n }\n}"},"downloads":{"kind":"number","value":388,"string":"388"},"first_commit":{"kind":"null"},"card":{"kind":"string","value":"---\ntags:\n- FrozenLake-v1-4x4-no_slippery\n- q-learning\n- reinforcement-learning\n- custom-implementation\nmodel-index:\n- name: q-FrozenLake-v1-4x4-noSlippery\n results:\n - task:\n type: reinforcement-learning\n name: reinforcement-learning\n dataset:\n name: FrozenLake-v1-4x4-no_slippery\n type: FrozenLake-v1-4x4-no_slippery\n metrics:\n - type: mean_reward\n value: 1.00 +/- 0.00\n name: mean_reward\n verified: false\n---\n\n # **Q-Learning** Agent playing1 **FrozenLake-v1**\n This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .\n\n ## Usage\n\n ```python\n \n model = load_from_hub(repo_id=\"JacksonBurton/q-FrozenLake-v1-4x4-noSlippery\", filename=\"q-learning.pkl\")\n\n # Don't forget to check if you need to add additional attributes (is_slippery=False etc)\n env = gym.make(model[\"env_id\"])\n ```\n "}}},{"rowIdx":28089,"cells":{"modelId":{"kind":"string","value":"Cameron/BERT-eec-emotion"},"tags":{"kind":"list like","value":["pytorch","jax","bert","text-classification","transformers"],"string":"[\n \"pytorch\",\n \"jax\",\n \"bert\",\n \"text-classification\",\n \"transformers\"\n]"},"pipeline_tag":{"kind":"string","value":"text-classification"},"config":{"kind":"string","value":"{\n \"architectures\": [\n \"BertForSequenceClassification\"\n ],\n \"model_type\": \"bert\",\n \"task_specific_params\": {\n \"conversational\": {\n \"max_length\": null\n },\n \"summarization\": {\n \"early_stopping\": null,\n \"length_penalty\": null,\n \"max_length\": null,\n \"min_length\": null,\n \"no_repeat_ngram_size\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"text-generation\": {\n \"do_sample\": null,\n \"max_length\": null\n },\n \"translation_en_to_de\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_fr\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_ro\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n }\n }\n}"},"downloads":{"kind":"number","value":36,"string":"36"},"first_commit":{"kind":"null"},"card":{"kind":"string","value":"---\nlicense: apache-2.0\ntags:\n- generated_from_trainer\ndatasets:\n- glue\nmetrics:\n- matthews_correlation\nmodel-index:\n- name: bert-base-uncased-finetuned-cola\n results:\n - task:\n name: Text Classification\n type: text-classification\n dataset:\n name: glue\n type: glue\n config: cola\n split: validation\n args: cola\n metrics:\n - name: Matthews Correlation\n type: matthews_correlation\n value: 0.5365007161029405\n---\n\n\n\n# bert-base-uncased-finetuned-cola\n\nThis model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.4711\n- Matthews Correlation: 0.5365\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 9.678498850368218e-06\n- train_batch_size: 32\n- eval_batch_size: 4\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |\n|:-------------:|:-----:|:----:|:---------------:|:--------------------:|\n| No log | 1.0 | 268 | 0.4731 | 0.4664 |\n| 0.4819 | 2.0 | 536 | 0.4537 | 0.5233 |\n| 0.4819 | 3.0 | 804 | 0.4711 | 0.5365 |\n\n\n### Framework versions\n\n- Transformers 4.28.1\n- Pytorch 2.0.0+cu118\n- Datasets 2.12.0\n- Tokenizers 0.13.3\n"}}},{"rowIdx":28090,"cells":{"modelId":{"kind":"string","value":"Cameron/BERT-mdgender-wizard"},"tags":{"kind":"list like","value":["pytorch","jax","bert","text-classification","transformers"],"string":"[\n \"pytorch\",\n \"jax\",\n \"bert\",\n \"text-classification\",\n \"transformers\"\n]"},"pipeline_tag":{"kind":"string","value":"text-classification"},"config":{"kind":"string","value":"{\n \"architectures\": [\n \"BertForSequenceClassification\"\n ],\n \"model_type\": \"bert\",\n \"task_specific_params\": {\n \"conversational\": {\n \"max_length\": null\n },\n \"summarization\": {\n \"early_stopping\": null,\n \"length_penalty\": null,\n \"max_length\": null,\n \"min_length\": null,\n \"no_repeat_ngram_size\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"text-generation\": {\n \"do_sample\": null,\n \"max_length\": null\n },\n \"translation_en_to_de\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_fr\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_ro\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n }\n }\n}"},"downloads":{"kind":"number","value":30,"string":"30"},"first_commit":{"kind":"null"},"card":{"kind":"string","value":"---\nlibrary_name: \"transformers.js\"\n---\n\nhttps://huggingface.co/openai/whisper-small.en with ONNX weights to be compatible with Transformers.js.\n\nNote: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`)."}}},{"rowIdx":28091,"cells":{"modelId":{"kind":"string","value":"Canadiancaleb/DialoGPT-small-jesse"},"tags":{"kind":"list like","value":["pytorch","gpt2","text-generation","transformers","conversational"],"string":"[\n \"pytorch\",\n \"gpt2\",\n \"text-generation\",\n \"transformers\",\n \"conversational\"\n]"},"pipeline_tag":{"kind":"string","value":"conversational"},"config":{"kind":"string","value":"{\n \"architectures\": [\n \"GPT2LMHeadModel\"\n ],\n \"model_type\": \"gpt2\",\n \"task_specific_params\": {\n \"conversational\": {\n \"max_length\": 1000\n },\n \"summarization\": {\n \"early_stopping\": null,\n \"length_penalty\": null,\n \"max_length\": null,\n \"min_length\": null,\n \"no_repeat_ngram_size\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"text-generation\": {\n \"do_sample\": null,\n \"max_length\": null\n },\n \"translation_en_to_de\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_fr\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_ro\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n }\n }\n}"},"downloads":{"kind":"number","value":9,"string":"9"},"first_commit":{"kind":"null"},"card":{"kind":"string","value":"---\nlicense: apache-2.0\ntags:\n- summarization\n- generated_from_trainer\nmetrics:\n- rouge\nmodel-index:\n- name: mt5-small-finetuned-amazon-en-es\n results: []\n---\n\n\n\n# mt5-small-finetuned-amazon-en-es\n\nThis model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 3.0132\n- Rouge1: 16.4719\n- Rouge2: 7.9366\n- Rougel: 16.2123\n- Rougelsum: 16.2853\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5.6e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 8\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |\n|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|\n| 3.9249 | 1.0 | 1209 | 3.1904 | 15.8207 | 8.0555 | 15.4584 | 15.648 |\n| 3.5688 | 2.0 | 2418 | 3.0812 | 16.3271 | 8.1479 | 15.9001 | 16.0134 |\n| 3.3905 | 3.0 | 3627 | 3.0442 | 15.9864 | 7.295 | 15.4247 | 15.5848 |\n| 3.2728 | 4.0 | 4836 | 3.0304 | 16.2893 | 7.5851 | 15.9494 | 16.0117 |\n| 3.1958 | 5.0 | 6045 | 3.0169 | 15.4888 | 7.4495 | 15.2244 | 15.2326 |\n| 3.1359 | 6.0 | 7254 | 3.0158 | 16.3866 | 8.2218 | 16.0625 | 16.0953 |\n| 3.1059 | 7.0 | 8463 | 3.0075 | 15.9134 | 7.8387 | 15.626 | 15.6499 |\n| 3.0852 | 8.0 | 9672 | 3.0132 | 16.4719 | 7.9366 | 16.2123 | 16.2853 |\n\n\n### Framework versions\n\n- Transformers 4.26.1\n- Pytorch 1.13.1+cu117\n- Datasets 2.12.0\n- Tokenizers 0.13.2\n"}}},{"rowIdx":28092,"cells":{"modelId":{"kind":"string","value":"Canadiancaleb/jessebot"},"tags":{"kind":"list like","value":[],"string":"[]"},"pipeline_tag":{"kind":"null"},"config":{"kind":"string","value":"{\n \"architectures\": null,\n \"model_type\": null,\n \"task_specific_params\": {\n \"conversational\": {\n \"max_length\": null\n },\n \"summarization\": {\n \"early_stopping\": null,\n \"length_penalty\": null,\n \"max_length\": null,\n \"min_length\": null,\n \"no_repeat_ngram_size\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"text-generation\": {\n \"do_sample\": null,\n \"max_length\": null\n },\n \"translation_en_to_de\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_fr\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_ro\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n }\n }\n}"},"downloads":{"kind":"number","value":0,"string":"0"},"first_commit":{"kind":"null"},"card":{"kind":"string","value":"---\nlicense: openrail\ntags:\n- scat\n- lora\n- stable diffusion\n---\nHere's the defecation lora, it was available on Civitai until the ban on scat content.\nYou can use various trigger words to get different effects, like \"Scat\", \"Disposal\", \"Feces\" and so on.\nThe main problem with this model is that that it tends to confuse the anus and the vagina, so you'll have to add prompts and negatives usefull to reduce this effect.\n\nYou can find my other models on Civitai: https://civitai.com/user/JollyIm/models\n\nA first example:\n\nPrompts: Realistic, Realism, (Masterpiece, Best Quality, High Quality, Highres:1.4), Detailed, Extremely Detailed, Ambient Soft Lighting, 4K, (Extremely Detailed Eyes, Detailed Face and Skin:1.2), masterpiece, best quality, 1girl, feces, disposal, (anal:1.2), , (public toilet), embarassed, (pile of feces), (perfect pussy), (perfect vagina),\nNegative prompt: easynegative, (worst quality:1.2), (low quality:1.2), (vaginal), (dirty vagina:1.2), (feces in vagina:1.2), (feces in vagina:1.2)\n\nSecond example:\n\nPrompts: masterpiece, best quality, 1girl, scat, (anal:1.2), , (toilet), from behind,\nNegative prompt: easynegative, (worst quality:1.2), (low quality:1.2), (vaginal), (dirty vagina:1.2), (scat in vagina:1.2), (feces in vagina:1.2)\n"}}},{"rowIdx":28093,"cells":{"modelId":{"kind":"string","value":"CapitainData/wav2vec2-large-xlsr-turkish-demo-colab"},"tags":{"kind":"list like","value":[],"string":"[]"},"pipeline_tag":{"kind":"null"},"config":{"kind":"string","value":"{\n \"architectures\": null,\n \"model_type\": null,\n \"task_specific_params\": {\n \"conversational\": {\n \"max_length\": null\n },\n \"summarization\": {\n \"early_stopping\": null,\n \"length_penalty\": null,\n \"max_length\": null,\n \"min_length\": null,\n \"no_repeat_ngram_size\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"text-generation\": {\n \"do_sample\": null,\n \"max_length\": null\n },\n \"translation_en_to_de\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_fr\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_ro\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n }\n }\n}"},"downloads":{"kind":"number","value":0,"string":"0"},"first_commit":{"kind":"null"},"card":{"kind":"string","value":"---\nlibrary_name: \"transformers.js\"\n---\n\nhttps://huggingface.co/facebook/nllb-200-distilled-600M with ONNX weights to be compatible with Transformers.js.\n\nNote: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`)."}}},{"rowIdx":28094,"cells":{"modelId":{"kind":"string","value":"Capreolus/birch-bert-large-msmarco_mb"},"tags":{"kind":"list like","value":["pytorch","tf","jax","bert","next-sentence-prediction","transformers"],"string":"[\n \"pytorch\",\n \"tf\",\n \"jax\",\n \"bert\",\n \"next-sentence-prediction\",\n \"transformers\"\n]"},"pipeline_tag":{"kind":"null"},"config":{"kind":"string","value":"{\n \"architectures\": [\n \"BertForNextSentencePrediction\"\n ],\n \"model_type\": \"bert\",\n \"task_specific_params\": {\n \"conversational\": {\n \"max_length\": null\n },\n \"summarization\": {\n \"early_stopping\": null,\n \"length_penalty\": null,\n \"max_length\": null,\n \"min_length\": null,\n \"no_repeat_ngram_size\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"text-generation\": {\n \"do_sample\": null,\n \"max_length\": null\n },\n \"translation_en_to_de\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_fr\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_ro\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n }\n }\n}"},"downloads":{"kind":"number","value":1,"string":"1"},"first_commit":{"kind":"null"},"card":{"kind":"string","value":"---\nlanguage:\n- he\nmetrics:\n- accuracy\nlibrary_name: transformers\npipeline_tag: text-classification\ntags:\n- legal\n---"}}},{"rowIdx":28095,"cells":{"modelId":{"kind":"string","value":"Capreolus/electra-base-msmarco"},"tags":{"kind":"list like","value":["pytorch","tf","electra","text-classification","arxiv:2008.09093","transformers"],"string":"[\n \"pytorch\",\n \"tf\",\n \"electra\",\n \"text-classification\",\n \"arxiv:2008.09093\",\n \"transformers\"\n]"},"pipeline_tag":{"kind":"string","value":"text-classification"},"config":{"kind":"string","value":"{\n \"architectures\": [\n \"ElectraForSequenceClassification\"\n ],\n \"model_type\": \"electra\",\n \"task_specific_params\": {\n \"conversational\": {\n \"max_length\": null\n },\n \"summarization\": {\n \"early_stopping\": null,\n \"length_penalty\": null,\n \"max_length\": null,\n \"min_length\": null,\n \"no_repeat_ngram_size\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"text-generation\": {\n \"do_sample\": null,\n \"max_length\": null\n },\n \"translation_en_to_de\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_fr\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_ro\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n }\n }\n}"},"downloads":{"kind":"number","value":110,"string":"110"},"first_commit":{"kind":"null"},"card":{"kind":"string","value":"---\nlibrary_name: \"transformers.js\"\n---\n\nhttps://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english with ONNX weights to be compatible with Transformers.js.\n\nNote: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`)."}}},{"rowIdx":28096,"cells":{"modelId":{"kind":"string","value":"Carlork314/Xd"},"tags":{"kind":"list like","value":[],"string":"[]"},"pipeline_tag":{"kind":"null"},"config":{"kind":"string","value":"{\n \"architectures\": null,\n \"model_type\": null,\n \"task_specific_params\": {\n \"conversational\": {\n \"max_length\": null\n },\n \"summarization\": {\n \"early_stopping\": null,\n \"length_penalty\": null,\n \"max_length\": null,\n \"min_length\": null,\n \"no_repeat_ngram_size\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"text-generation\": {\n \"do_sample\": null,\n \"max_length\": null\n },\n \"translation_en_to_de\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_fr\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_ro\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n }\n }\n}"},"downloads":{"kind":"number","value":0,"string":"0"},"first_commit":{"kind":"null"},"card":{"kind":"string","value":"---\nlibrary_name: \"transformers.js\"\n---\n\nhttps://huggingface.co/distilbert-base-cased-distilled-squad with ONNX weights to be compatible with Transformers.js.\n\nNote: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`)."}}},{"rowIdx":28097,"cells":{"modelId":{"kind":"string","value":"CarlosPR/mt5-spanish-memmories-analysis"},"tags":{"kind":"list like","value":["pytorch","mt5","text2text-generation","transformers","autotrain_compatible"],"string":"[\n \"pytorch\",\n \"mt5\",\n \"text2text-generation\",\n \"transformers\",\n \"autotrain_compatible\"\n]"},"pipeline_tag":{"kind":"string","value":"text2text-generation"},"config":{"kind":"string","value":"{\n \"architectures\": [\n \"MT5ForConditionalGeneration\"\n ],\n \"model_type\": \"mt5\",\n \"task_specific_params\": {\n \"conversational\": {\n \"max_length\": null\n },\n \"summarization\": {\n \"early_stopping\": null,\n \"length_penalty\": null,\n \"max_length\": null,\n \"min_length\": null,\n \"no_repeat_ngram_size\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"text-generation\": {\n \"do_sample\": null,\n \"max_length\": null\n },\n \"translation_en_to_de\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_fr\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_ro\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n }\n }\n}"},"downloads":{"kind":"number","value":7,"string":"7"},"first_commit":{"kind":"null"},"card":{"kind":"string","value":"---\nlibrary_name: \"transformers.js\"\n---\n\nhttps://huggingface.co/bert-base-uncased with ONNX weights to be compatible with Transformers.js.\n\nNote: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`)."}}},{"rowIdx":28098,"cells":{"modelId":{"kind":"string","value":"Carolhuehuehuehue/Sla"},"tags":{"kind":"list like","value":[],"string":"[]"},"pipeline_tag":{"kind":"null"},"config":{"kind":"string","value":"{\n \"architectures\": null,\n \"model_type\": null,\n \"task_specific_params\": {\n \"conversational\": {\n \"max_length\": null\n },\n \"summarization\": {\n \"early_stopping\": null,\n \"length_penalty\": null,\n \"max_length\": null,\n \"min_length\": null,\n \"no_repeat_ngram_size\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"text-generation\": {\n \"do_sample\": null,\n \"max_length\": null\n },\n \"translation_en_to_de\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_fr\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_ro\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n }\n }\n}"},"downloads":{"kind":"number","value":0,"string":"0"},"first_commit":{"kind":"null"},"card":{"kind":"string","value":"---\nlibrary_name: \"transformers.js\"\n---\n\nhttps://huggingface.co/sshleifer/distilbart-cnn-6-6 with ONNX weights to be compatible with Transformers.js.\n\nNote: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`)."}}},{"rowIdx":28099,"cells":{"modelId":{"kind":"string","value":"Cedille/fr-boris"},"tags":{"kind":"list like","value":["pytorch","gptj","text-generation","fr","dataset:c4","arxiv:2202.03371","transformers","causal-lm","license:mit","has_space"],"string":"[\n \"pytorch\",\n \"gptj\",\n \"text-generation\",\n \"fr\",\n \"dataset:c4\",\n \"arxiv:2202.03371\",\n \"transformers\",\n \"causal-lm\",\n \"license:mit\",\n \"has_space\"\n]"},"pipeline_tag":{"kind":"string","value":"text-generation"},"config":{"kind":"string","value":"{\n \"architectures\": [\n \"GPTJForCausalLM\"\n ],\n \"model_type\": \"gptj\",\n \"task_specific_params\": {\n \"conversational\": {\n \"max_length\": null\n },\n \"summarization\": {\n \"early_stopping\": null,\n \"length_penalty\": null,\n \"max_length\": null,\n \"min_length\": null,\n \"no_repeat_ngram_size\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"text-generation\": {\n \"do_sample\": true,\n \"max_length\": 50\n },\n \"translation_en_to_de\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_fr\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n },\n \"translation_en_to_ro\": {\n \"early_stopping\": null,\n \"max_length\": null,\n \"num_beams\": null,\n \"prefix\": null\n }\n }\n}"},"downloads":{"kind":"number","value":401,"string":"401"},"first_commit":{"kind":"null"},"card":{"kind":"string","value":"---\nlibrary_name: \"transformers.js\"\n---\n\nhttps://huggingface.co/google/flan-t5-small with ONNX weights to be compatible with Transformers.js.\n\nNote: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`)."}}}],"truncated":false,"partial":false},"paginationData":{"pageIndex":280,"numItemsPerPage":100,"numTotalItems":30344,"offset":28000,"length":100}},"jwt":"eyJhbGciOiJFZERTQSJ9.eyJyZWFkIjp0cnVlLCJwZXJtaXNzaW9ucyI6eyJyZXBvLmNvbnRlbnQucmVhZCI6dHJ1ZX0sImlhdCI6MTc1NzY2NTc5NCwic3ViIjoiL2RhdGFzZXRzL2xpYnJhcmlhbi1ib3RzL2NhcmRfd2l0aF9maXJzdF9jb21taXQiLCJleHAiOjE3NTc2NjkzOTQsImlzcyI6Imh0dHBzOi8vaHVnZ2luZ2ZhY2UuY28ifQ.vxN0opVuRVJGvUwFiultMvFcL4_i30KGOWMUIGEJRoYBe2Q1qxZxuzh9gfslyoBWtJxg9Xd4gnlLO-ReWc4RCg","displayUrls":true},"discussionsStats":{"closed":0,"open":0,"total":0},"fullWidth":true,"hasGatedAccess":true,"hasFullAccess":true,"isEmbedded":false,"savedQueries":{"community":[],"user":[]}}">
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### DreamStep Dreambooth model trained by grisha2000 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: DIPROMATS_subtask_1_base_train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DIPROMATS_subtask_1_base_train
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5120
- F1: 0.8267
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4533 | 1.0 | 182 | 0.3471 | 0.7932 |
| 0.1763 | 2.0 | 364 | 0.3473 | 0.8116 |
| 0.1359 | 3.0 | 546 | 0.3887 | 0.8144 |
| 0.1728 | 4.0 | 728 | 0.4311 | 0.8147 |
| 0.1519 | 5.0 | 910 | 0.4881 | 0.8236 |
| 0.0085 | 6.0 | 1092 | 0.5120 | 0.8267 |
| 0.1828 | 7.0 | 1274 | 0.5591 | 0.8118 |
| 0.0071 | 8.0 | 1456 | 0.6079 | 0.8263 |
| 0.0015 | 9.0 | 1638 | 0.6919 | 0.8235 |
| 0.0241 | 10.0 | 1820 | 0.6990 | 0.8221 |
### Framework versions
- Transformers 4.28.1
- Pytorch 1.13.1
- Datasets 2.12.0
- Tokenizers 0.13.3
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- f1
model-index:
- name: kogpt2-base-v2-finetuned-klue-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: klue
type: klue
config: ner
split: validation
args: ner
metrics:
- name: F1
type: f1
value: 0.37298165525403665
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kogpt2-base-v2-finetuned-klue-ner
This model is a fine-tuned version of [skt/kogpt2-base-v2](https://huggingface.co/skt/kogpt2-base-v2) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4076
- F1: 0.3730
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6084 | 1.0 | 876 | 0.5353 | 0.2118 |
| 0.3911 | 2.0 | 1752 | 0.4691 | 0.3041 |
| 0.2855 | 3.0 | 2628 | 0.4076 | 0.3730 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-finetuned-lr1e-06-epochs10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-finetuned-lr1e-06-epochs10
This model is a fine-tuned version of [distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert-base-cased-distilled-squad) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.6132
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 10 | 5.6593 |
| No log | 2.0 | 20 | 5.2648 |
| No log | 3.0 | 30 | 5.0527 |
| No log | 4.0 | 40 | 4.9205 |
| No log | 5.0 | 50 | 4.8196 |
| No log | 6.0 | 60 | 4.7436 |
| No log | 7.0 | 70 | 4.6878 |
| No log | 8.0 | 80 | 4.6452 |
| No log | 9.0 | 90 | 4.6218 |
| No log | 10.0 | 100 | 4.6132 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ellabettison/blocking-model
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ellabettison/blocking-model')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ellabettison/blocking-model')
model = AutoModel.from_pretrained('ellabettison/blocking-model')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ellabettison/blocking-model)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2842 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss`
Parameters of the fit()-Method:
```
{
"epochs": 40,
"evaluation_steps": 500,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 178,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
---
license: afl-3.0
---
Sp-bert (BERT for Scandinavian Politics) was trained on political texts coming from Parliamentary speeches in four languages: Norwegian, Swedish, Danish and Icelandic.
Access to model amogh23/autotrain-sentiment-54660127837 is restricted and you are not in the authorized list. Visit https://huggingface.co/amogh23/autotrain-sentiment-54660127837 to ask for access.
---
license: mit
language:
- en
tags:
- cosmology
- emulator
- physics
- 21cmFAST
---
# 21cmEMU
[][pypi_]
[][status]
[][python version]
[][license]
[][read the docs]
[][tests]
[][codecov]
[][pre-commit]
[][black]
[pypi_]: https://pypi.org/project/py21cmemu/
[status]: https://pypi.org/project/py21cmemu/
[python version]: https://pypi.org/project/py21cmemu
[read the docs]: https://21cmemu.readthedocs.io/
[tests]: https://github.com/21cmFAST/21cmEMU/actions?workflow=Tests
[codecov]: https://app.codecov.io/gh/21cmFAST/21cmEMU
[pre-commit]: https://github.com/pre-commit/pre-commit
[black]: https://github.com/psf/black
## Features
- Uses Tensorflow to emulate the following summary statistics: 21-cm power spectrum, 21-cm global brightness temperature, IGM spin temperature, and neutral fraction.
- Uses 21cmFAST to analytically calculate the UV luminosity functions and the Thomson optical depth to the CMB.
## Requirements
- Tensorflow >= 2.6
- 21cmFAST
## Installation
You can install _py21cmEMU_ via [pip] from [PyPI]:
```console
$ pip install py21cmemu
```
## Usage
Please see the [Command-line Reference] for details.
## Contributing
Contributions are very welcome.
To learn more, see the [Contributor Guide].
## License
Distributed under the terms of the [MIT license][license],
_21cmEMU_ is free and open source software.
## Issues
If you encounter any problems,
please [file an issue] along with a detailed description.
## Credits
This project was generated from [@cjolowicz]'s [Hypermodern Python Cookiecutter] template.
[@cjolowicz]: https://github.com/cjolowicz
[pypi]: https://pypi.org/
[hypermodern python cookiecutter]: https://github.com/cjolowicz/cookiecutter-hypermodern-python
[file an issue]: https://github.com/21cmFAST/21cmEMU/issues
[pip]: https://pip.pypa.io/
<!-- github-only -->
[license]: https://github.com/21cmFAST/21cmEMU/blob/main/LICENSE
[contributor guide]: https://github.com/21cmFAST/21cmEMU/blob/main/CONTRIBUTING.md
[command-line reference]: https://21cmEMU.readthedocs.io/en/latest/usage.html
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.6158979909555603
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6485
- Matthews Correlation: 0.6159
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.3168255304753761e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- max_length: 64,
- dropout: 0.3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5039 | 1.0 | 535 | 0.4617 | 0.4879 |
| 0.3299 | 2.0 | 1070 | 0.4489 | 0.5889 |
| 0.2306 | 3.0 | 1605 | 0.6485 | 0.5266 |
| 0.1695 | 4.0 | 2140 | 0.6485 | 0.6159 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
---
license: bigscience-openrail-m
---
# Automated cell nuclei segmentation and classification
Models of the [tumourkit](https://github.com/Jerry-Master/lung-tumour-study) library. The key idea behind these models is illustrated by the following image.

The objective is to detect and classify cells of different tissues. Different models trained with tissue from different organs and stainings are provided.
## Lung (H&E)

## Breast (HER2)

## Consep: Colorectal (H&E)

## Monusac: Miscelaneous (H&E)

## Model description
The model is made by [Hovernet](https://github.com/vqdang/hover_net) as a backbone and a graph neural network on top to improve the classification step. Each backbone comes trained at two resolutions: 270x270 and 518x518. They also come in two version each, trained from scratch or fine-tuned from the consep checkpoint of Hovernet (FT). Then, for each Hovernet model, five graph neural networks are provided that can be used on top. Four graph convolutional neural networks trained with different sets of features and one graph attention network trained with all the features.
To use the models the tumourkit library comes with a simple [demo](https://lung-tumour-study.readthedocs.io/en/latest/usage.html#gradio-demo) that you can try. Beware, on CPU it takes nearly 10 minutes per 1024x1024 image.
## Uses
### Intended use
The lung models are built to estimate the percentage of tumoural cells in a given whole slide image (WSI). It is supposed to be used to accelerate histologist work and give priorities among huge amounts of WSIs to analyse.
The other three models are provided for research purposes only.
### Misuse
By no means these models are supposed to substitute a medical expert, and they are not built for diagnosis. Usage in any critical situation is discouraged.
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9230596990121587
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2215
- Accuracy: 0.923
- F1: 0.9231
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8518 | 1.0 | 250 | 0.3235 | 0.9055 | 0.9035 |
| 0.2557 | 2.0 | 500 | 0.2215 | 0.923 | 0.9231 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
---
datasets:
- imagenet-1k
library_name: transformers
pipeline_tag: image-classification
license: other
tags:
- vision
- image-classification
---
# MobileViTv2 (mobilevitv2-1.0-imagenet1k-256)
<!-- Provide a quick summary of what the model is/does. -->
MobileViTv2 is the second version of MobileViT. It was proposed in [Separable Self-attention for Mobile Vision Transformers](https://arxiv.org/abs/2206.02680) by Sachin Mehta and Mohammad Rastegari, and first released in [this](https://github.com/apple/ml-cvnets) repository. The license used is [Apple sample code license](https://github.com/apple/ml-cvnets/blob/main/LICENSE).
Disclaimer: The team releasing MobileViT did not write a model card for this model so this model card has been written by the Hugging Face team.
### Model Description
<!-- Provide a longer summary of what this model is. -->
MobileViTv2 is constructed by replacing the multi-headed self-attention in MobileViT with separable self-attention.
### Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=mobilevitv2) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import MobileViTv2FeatureExtractor, MobileViTv2ForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = MobileViTv2FeatureExtractor.from_pretrained("shehan97/mobilevitv2-1.0-imagenet1k-256")
model = MobileViTv2ForImageClassification.from_pretrained("shehan97/mobilevitv2-1.0-imagenet1k-256")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The MobileViT model was pretrained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k), a dataset consisting of 1 million images and 1,000 classes.
### BibTeX entry and citation info
```bibtex
@inproceedings{vision-transformer,
title = {Separable Self-attention for Mobile Vision Transformers},
author = {Sachin Mehta and Mohammad Rastegari},
year = {2022},
URL = {https://arxiv.org/abs/2206.02680}
}
```
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
---
tags:
- autotrain
- text-generation
widget:
- text: "I love 🤗 AutoTrain because "
datasets:
- huggingface/autotrain-data-z0yf-urlq-kec7
co2_eq_emissions:
emissions: 0
---
# Model Trained Using AutoTrain
- Problem type: Text Generation
- CO2 Emissions (in grams): 0.0000
## Validation Metrics
loss: nan
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### VanGoghStyle2 Dreambooth model trained by reallylongaddress with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: CoryMagic/wikitext-distill
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# CoryMagic/wikitext-distill
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.3345
- Validation Loss: 3.2376
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.5754 | 3.3649 | 0 |
| 3.4385 | 3.3004 | 1 |
| 3.3769 | 3.2633 | 2 |
| 3.3345 | 3.2376 | 3 |
### Framework versions
- Transformers 4.21.3
- TensorFlow 2.9.2
- Datasets 2.4.0
- Tokenizers 0.12.1
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5855730181125508
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6423
- Matthews Correlation: 0.5856
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4932 | 1.0 | 535 | 0.5174 | 0.5028 |
| 0.2995 | 2.0 | 1070 | 0.4694 | 0.5782 |
| 0.1959 | 3.0 | 1605 | 0.6423 | 0.5856 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
---
license: afl-3.0
tags:
- summarization
- t5
- medical
- clinical
language: en
datasets:
- MIMIC-III
widget:
- again noted is the large intraparenchymal hemorrhage in the posterior right frontal lobe with extension into both lateral ventricles. the degree of surrounding edema and effacement of adjacent sulci is unchanged. there is minor contralateral shift of normal midline structures. the ventricular size is unchanged. subarachnoid blood is now seen in the left frontal and parietal lobes, likely due to recirculation of the ventricular blood.
- a least two attempts were made at imaging, however, the study remains severely limited by patient motion. minimal hyperdensity tracks along a left parietal sulcus (2a:18) is equivocal for a small subarachnoid hemorhage. there is no large mass effect detected. there is no shift of normally midline structures. a minimally displaced zygomatic fracture is present (2a:9). the middle ear cavities, mastoid air cells are clear. there is extensive soft tissue swelling overlying the right frontal calvarium with swelling extending to the right preseptal soft tissues (2a:12). there is mild - moderate mucosal thickening within the ethmoid and maxillary sinuses with some fluid and fluid mucosal thickening in the sphenoid sinus.
inference:
parameters:
max_length: 350
metrics:
- rouge-l
---
# Impression section Generator For Radiology Reports 🏥
This model is is the result of participation of SINAI team in [Task 1B: Radiology Report Summarization](https://vilmedic.app/misc/bionlp23/sharedtask) at the BioNLP workshop held on ACL 2023.
The goal of this task is to foster development of automatic radiology report summarization systems and expanding their applicability by incorporating seven different modalities and anatomies in the provided data.
We propose to automate the generation of radiology impressions with "sequence-to-sequence" learning that leverages the power of publicly available pre-trained models, both general domain and biomedical domain-specific.
This repository provides access to our best-performing system that resulted from fine-tuning of [Sci-Five base](https://huggingface.co/razent/SciFive-base-Pubmed_PMC), which is T5 model trained for extra 200k steps to optimize it in the context of biomedical literature.
# Results
The official evaluation results prove that adaptation of a general-domain system for biomedical literature is beneficial for the subsequent fine-tuning for radiology report summarization task. The Table below summarizes the official scores obtained by this model during the official evaluation. Team standings re available [here](https://vilmedic.app/misc/bionlp23/leaderboard/).
| BLEU4 | ROUGE-L | BERTscore | F1-RadGraph
|-----------|--------|----------|----------|
| 017.38 | 32.32 | 55.04 | 33.96 |
# System description paper and citation
The paper with the detailed description of the system will be published in the [Proceedings of the 22st Workshop on Biomedical Language Processing](https://aclanthology.org/venues/bionlp/).
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - LittleFlyingSheep/textual_inversion_cat
These are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-unit4test
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
---
tags:
- autotrain
- text-generation
widget:
- text: "I love 🤗 AutoTrain because "
datasets:
- huggingface/autotrain-data-5u7lo-5p6l-zjp0
co2_eq_emissions:
emissions: 0
---
# Model Trained Using AutoTrain
- Problem type: Text Generation
- CO2 Emissions (in grams): 0.0000
## Validation Metrics
loss: nan
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# amittian/setfit_asoc_version_0_0_1
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("amittian/setfit_asoc_version_0_0_1")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
---
license: openrail
datasets:
- balgot/stylegan3-annotated
language:
- en
metrics:
- mse
tags:
- face-generation
- stylegan3
library_name: pytorch
---
# Text-to-StyleGAN3 Latent Space Translation
This model was created as a part of the project for FI:PA228 (Masaryk University),
inspired by this paper: [Face Generation from Textual Features using Conditionally trained Inputs to Generative Adversarial Networks](https://arxiv.org/abs/2301.09123)
It was trained on the generated dataset from BLIP and StyleGAN3. See the [corresponding notebook](https://colab.research.google.com/drive/14rDcCc0Xr1L1Ax3aKezEhmcn81vXGVQ7?usp=sharing)
for further details.
## How to use:
```python
import torch.nn as nn
# for now, the model class needs to be defined, so...
class LaTran(nn.Module):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.pipe = nn.Sequential(
nn.Linear(384, 512),
nn.ReLU(),
nn.Linear(512, 512)
)
def forward(self, v):
return self.pipe(v.unsqueeze(1))
# Instantiate and load the model
dev = ... # device to use
PATH = "translation_model-sd.pt" # local path
model = LaTran().to(dev)
model.load_state_dict(torch.load(TRANSLATION_MODEL, map_location=dev))
```
## Demo
For the demo of the whole pipeline, or how this model helps to generate a final image,
visits [text-to-stylegan HF space](https://huggingface.co/spaces/balgot/text-to-stylegan3).
## Examples
* Prompt: `attractive young woman, blond hair`

* Prompt initial: `cute young boy, blond hair, blue eyes, smiling`
* Prompt second: `old man, short gray hair, glasses, wearing hat`
<img src="https://huggingface.co/balgot/bert-2-stylegan3/resolve/main/young2old.gif" width="200" height="200" />
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.fr
split: validation
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8334173810724491
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2700
- F1: 0.8334
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5754 | 1.0 | 191 | 0.3555 | 0.7842 |
| 0.2623 | 2.0 | 382 | 0.2806 | 0.8180 |
| 0.1744 | 3.0 | 573 | 0.2700 | 0.8334 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: imdb_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# imdb_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4974
- Validation Loss: 0.2063
- Train Accuracy: 0.93
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 625, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.4974 | 0.2063 | 0.93 | 0 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.931
- name: F1
type: f1
value: 0.9309844319832071
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2160
- Accuracy: 0.931
- F1: 0.9310
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8342 | 1.0 | 250 | 0.3068 | 0.9115 | 0.9084 |
| 0.248 | 2.0 | 500 | 0.2160 | 0.931 | 0.9310 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
---
license: openrail
---
Indians are known for bringing an incredible level of energy and excitement to whatever sporting event they participate in. They have a special and amazing love for the sport of playing it, and in particular, cricket is one of the sports that they enjoy playing. Not only do the people of India have a profound fondness for this Betbook247 Exchange, but their nation is also responsible for producing some of the most gifted players in the history of cricket.
People in India wager a significant amount of money on the <a href="https://getcricketidonline.com/betbook247-new-id-sign-up-register.html">Betbook247</a> Exchange, and one of the reasons for this is because of this.
As a result of the expansion of cricket betting, individuals now have a wonderful opportunity to engage in the sport of cricket. They are now able to not only stay up with the most recent cricket news, but also have the possibility to make money while playing their favorite sport at the same time.
The frequency of Betbook247 Exchange in India may be proved by the subsequent explanations that are offered below; these explanations are listed in order from most relevant to least relevant.
buyonline cricket id
https://vocal.media/gamers/how-to-use-king-exchange
https://vocal.media/gamers/how-to-use-bdbetway
https://vocal.media/gamers/how-to-use-fairexch9
https://vocal.media/gamers/how-to-use-lotusbook247-com-login
https://vocal.media/gamers/how-to-use-matchbox9
https://vocal.media/gamers/how-to-use-ambani-book-365
https://vocal.media/gamers/how-to-use-dafabet-login
https://vocal.media/gamers/how-to-use-satsport247
https://vocal.media/gamers/how-to-use-10cric10
https://vocal.media/gamers/how-to-use-abexch9
https://vocal.media/gamers/how-to-use-cricketbet9
https://vocal.media/gamers/how-to-use-doexch
https://vocal.media/gamers/how-to-use-lucky7
https://vocal.media/gamers/how-to-use-tenexch
https://vocal.media/gamers/how-to-use-4rabet-login
https://vocal.media/gamers/how-to-use-skyinplay
https://vocal.media/gamers/how-to-use-mahakal-book
https://vocal.media/gamers/how-to-use-silver-exchange-id
https://vocal.media/gamers/how-to-use-rajbet
https://vocal.media/gamers/how-to-use-pb77-exchange
https://vocal.media/gamers/how-to-use-12bet-login
https://vocal.media/gamers/how-to-use-bet-star-exchange
https://vocal.media/gamers/how-to-use-marvelbet-login
https://vocal.media/gamers/how-to-use-jeetwin-login
https://vocal.media/gamers/how-to-use-rajveerexch-login
https://vocal.media/gamers/how-to-use-reddy-anna-book-login
https://vocal.media/gamers/how-to-use-1win-login
https://vocal.media/gamers/how-to-use-ssexch-login
https://vocal.media/gamers/how-to-use-fun88-login
https://vocal.media/gamers/how-to-use-pin-up-bet-login
https://vocal.media/gamers/how-to-use-betbarter-login
https://mohit.tistory.com/3
getcricket
https://getcricket.tistory.com/2
iplbetting id
https://mohit.tistory.com/2
https://getcricketid.bcz.com/2023/05/02/choose-your-bets-wisely-sign-up-for-betbhai9/
https://vocal.media/gamers/cricket-betting-phrases-jetexch9-symbol
https://vocal.media/gamers/rajbet-exchange-id-provides-betting-on-the-indian-premier-league
https://vocal.media/gamers/lotus365-io-accepts-bets-on-the-indian-premier-league
https://vocal.media/gamers/ssexch-accepts-cricket-bets
https://vocal.media/gamers/lotus-exchange-betting-id-app-allows-you-to-begin-earning-money-right-immediately
https://vocal.media/gamers/yolo247-bet-allows-you-to-place-a-bet-on-any-indian-premier-league-team-of-your-choice
https://vocal.media/gamers/skyinplay-betting-exchange-experience-cricket-betting-an-exciting-experience
https://vocal.media/gamers/4rabet-ipl-betting-id-that-may-be-utilized-for-live-wagering
https://vocal.media/gamers/sky247-login-is-a-useful-and-immersive-platform
https://vocal.media/gamers/world777-id-login-is-a-well-known-betting-platform
cricket betting
https://mohit.tistory.com/4
https://vocal.media/gamers/download-the-all-cricket-id-app-for-android-to-put-bets-on-their-favorite-indian-premier-league-club
https://vocal.media/gamers/online-cricket-id-login-provides-a-variety-of-betting-options
https://vocal.media/gamers/online-cricket-betting-id-provides-cricket-betting
https://vocal.media/gamers/online-cricket-betting-id-in-india-offers-a-reliable-and-secure-betting-environment
https://vocal.media/gamers/exchange-cricket-id-will-teach-you-the-ins-and-outs-of-cricket-betting
https://vocal.media/gamers/cricket-id-online-offers-single-game-betting
https://vocal.media/gamers/cricket-betting-id-makes-it-simple-to-place-bets-on-cricket-matches
https://vocal.media/gamers/cricket-id-online-provides-the-best-live-betting-and-in-play-action
https://vocal.media/gamers/discover-the-benefits-of-ambani-book-betting-id-gambling
https://vocal.media/gamers/gain-an-advantage-while-betting-on-prime-exch-betting-id
onlinecricketidhindi
https://mohit.tistory.com/5
https://vocal.media/gamers/get-your-goexch9-whatsapp-id-account-and-begin-betting-on-cricket-immediately
https://vocal.media/gamers/getting-a-cricket-id-will-keep-you-ahead-of-the-betting-game-at-all-times
https://vocal.media/gamers/get-the-dreamexch-whatsapp-number-online-on-your-device
https://vocal.media/gamers/is-skyinplay-exchange-india-s-most-trustworthy-currency
https://vocal.media/gamers/join-dreamexch-ipl-id-admin-in-the-cricket-madness
https://vocal.media/gamers/join-dreamexch-ipl-to-take-advantage-of-the-best-welcome-incentive-available
https://vocal.media/gamers/keep-outside-events-from-affecting-your-ambani-book-247-new-id-prospects
https://vocal.media/gamers/join-dreamexch-register-to-take-advantage-of-the-best-welcome-incentive-available
https://vocal.media/gamers/keep-up-to-date-with-dreamexch-new-id-s-real-time-score-administrator
https://vocal.media/gamers/learn-how-to-read-odds-before-betting-on-ambani-book-365-sign-up
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 31.80 +/- 23.68
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.925808056925967
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2186
- Accuracy: 0.9255
- F1: 0.9258
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3109 | 0.913 | 0.9104 |
| No log | 2.0 | 500 | 0.2186 | 0.9255 | 0.9258 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
Access to model kobamasa/kobakoba is restricted and you are not in the authorized list. Visit https://huggingface.co/kobamasa/kobakoba to ask for access.
# Harry Potter Chatbot
This model is a chatbot designed to generate responses in the style of Harry Potter, the protagonist from J.K. Rowling's popular book series and its movie adaptations.
## Model Architecture
The `harry_potter_chatbot` is based on the [`DialoGPT-medium`](https://huggingface.co/microsoft/DialoGPT-medium) model, a powerful GPT-based architecture designed for generating conversational responses. It has been fine-tuned on a dataset of Harry Potter's dialogues from movie transcripts.
## Usage
You can use this model to generate responses for a given input text using the following code:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("diabolic6045/harry_potter_chatbot")
model = AutoModelForCausalLM.from_pretrained("diabolic6045/harry_potter_chatbot")
input_text = "What's your favorite spell?"
input_tokens = tokenizer.encode(input_text, return_tensors='pt')
output_tokens = model.generate(input_tokens, max_length=50, num_return_sequences=1)
output_text = tokenizer.decode(output_tokens[0], skip_special_tokens=True)
print(output_text)
```
## Limitations
This model is specifically designed to generate responses in the style of Harry Potter and may not provide accurate or coherent answers to general knowledge questions. It may also sometimes generate inappropriate responses. Be cautious while using this model in a public setting or for critical applications.
## Training Data
The model was fine-tuned on a dataset of Harry Potter's dialogues from movie transcripts. The dataset was collected from publicly available movie scripts and includes conversations and quotes from various Harry Potter films.
## Acknowledgments
This model was trained using the Hugging Face [Transformers](https://github.com/huggingface/transformers) library, and it is based on the [`DialoGPT-medium`](https://huggingface.co/microsoft/DialoGPT-medium) model by Microsoft. Special thanks to the Hugging Face team and Microsoft for their contributions to the NLP community.
---
Feel free to test the model and provide feedback or report any issues. Enjoy chatting with Harry Potter!
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# sheetalp91/setfit-model-1
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("sheetalp91/setfit-model-1")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
---
tags:
- generated_from_trainer
datasets:
- go_emotions
metrics:
- accuracy
- f1
model-index:
- name: goemotions_bertspanish_finetunig_d
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: go_emotions
type: go_emotions
config: simplified
split: test
args: simplified
metrics:
- name: Accuracy
type: accuracy
value: 0.425
- name: F1
type: f1
value: 0.321340168917587
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# goemotions_bertspanish_finetunig_d
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the go_emotions dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2826
- Accuracy: 0.425
- F1: 0.3213
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole8
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: jjdelgado/my_newsgroups_roberta_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# jjdelgado/my_newsgroups_roberta_model
This model is a fine-tuned version of [RoBERTa-base](https://huggingface.co/RoBERTa-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.3069
- Validation Loss: 1.0260
- Train Accuracy: 0.6920
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3535, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.3069 | 1.0260 | 0.6920 | 0 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
---
license: openrail
---
## Thanks
Big thanks to `Google` for lending us TPUv4s to train this model on. Big thanks to the Huggingface and Diffusers team for organising the JAX Diffusers sprint, giving support and making the JAX training scripts. Big thanks to StabilityAI for opensourcing the Stable Diffusion model, it has made a great impact on the community!
## About the dataset
To make this demo as good as possible, our team spend a lot of time training a custom model. We used the LAION5B dataset to build our custom dataset, which contains 130k images of 15 types of rooms in almost 30 design styles. After fetching all these images, we started adding metadata such as captions (from the BLIP captioning model) and segmentation maps (from the HuggingFace UperNetForSemanticSegmentation model).
## About the model
This dataset was then used to train the controlnet model to generate quality interior design images by using the segmentation maps and prompts as conditioning information for the model. By training on segmentation maps, the end user has a very finegrained control over which objects they want to place in their room.
The training started from the `lllyasviel/control_v11p_sd15_seg` checkpoint, which is a robustly trained controlnet model conditioned on segmentation maps. This checkpoint got fine-tuned on a TPUv4 with the JAX framework. Afterwards, the checkpoint was converted into a PyTorch checkpoint for easy integration with the diffusers library.
## About the demo
Our team made a streamlit demo where you can test out the capabilities of this model.
The resulting model is used in a community pipeline that supports image2image and inpainting, so the user can keep elements of their room and change specific parts of the image.
https://huggingface.co/spaces/controlnet-interior-design/controlnet-seg
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- f1
model-index:
- name: kogpt2-base-v2-finetuned-klue-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: klue
type: klue
config: ner
split: validation
args: ner
metrics:
- name: F1
type: f1
value: 0.7404764644953389
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kogpt2-base-v2-finetuned-klue-ner
This model is a fine-tuned version of [skt/kogpt2-base-v2](https://huggingface.co/skt/kogpt2-base-v2) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3849
- F1: 0.7405
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2667 | 1.0 | 1313 | 0.2522 | 0.7073 |
| 0.173 | 2.0 | 2626 | 0.2498 | 0.7313 |
| 0.1237 | 3.0 | 3939 | 0.2660 | 0.7330 |
| 0.0861 | 4.0 | 5252 | 0.3104 | 0.7423 |
| 0.0592 | 5.0 | 6565 | 0.3849 | 0.7405 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: donut-base-sroie
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
Access to model Leonhard1337/helloAI is restricted and you are not in the authorized list. Visit https://huggingface.co/Leonhard1337/helloAI to ask for access.
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.73 +/- 5.48
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r WilliamADSP/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- f1
model-index:
- name: kogpt2-base-v2-finetuned-klue-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: klue
type: klue
config: ner
split: validation
args: ner
metrics:
- name: F1
type: f1
value: 0.7679222357229647
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kogpt2-base-v2-finetuned-klue-ner
This model is a fine-tuned version of [skt/kogpt2-base-v2](https://huggingface.co/skt/kogpt2-base-v2) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3344
- F1: 0.7679
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4868 | 1.0 | 876 | 0.3412 | 0.7589 |
| 0.2705 | 2.0 | 1752 | 0.3255 | 0.7692 |
| 0.2199 | 3.0 | 2628 | 0.3220 | 0.7560 |
| 0.181 | 4.0 | 3504 | 0.3122 | 0.7815 |
| 0.1409 | 5.0 | 4380 | 0.3344 | 0.7679 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Find your model_id: WilliamADSP/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
---
license: openrail
datasets:
- OpenAssistant/oasst1
tags:
- male
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.518818601771926
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4610
- Matthews Correlation: 0.5188
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4985 | 1.0 | 535 | 0.4610 | 0.5188 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
---
tags:
- generated_from_trainer
datasets:
- go_emotions
metrics:
- accuracy
- f1
model-index:
- name: goemotions_bertspanish_finetunig_e
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: go_emotions
type: go_emotions
config: simplified
split: test
args: simplified
metrics:
- name: Accuracy
type: accuracy
value: 0.4
- name: F1
type: f1
value: 0.2777912523419085
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# goemotions_bertspanish_finetunig_e
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the go_emotions dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3859
- Accuracy: 0.4
- F1: 0.2778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
Lora fine tune trained on this [dataset](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered)
```python
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{data_point["instruction"]}
### Response:
{data_point["output"]}
```
CUDA_VISIBLE_DEVICES=0
python llama.py model c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors 4bit-128g.safetensors
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: jva-missions-report-v2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# jva-missions-report-v2
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1873
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 0.001, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 0.1873 | 0 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
---
license: cc-by-4.0
tags:
- generated_from_trainer
metrics:
- f1
- recall
- accuracy
- precision
model-index:
- name: bertin-roberta-fine-tuned-text-classification-SL-data-augmentation-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertin-roberta-fine-tuned-text-classification-SL-data-augmentation-test
This model is a fine-tuned version of [bertin-project/bertin-roberta-base-spanish](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7374
- F1: 0.1580
- Recall: 0.3233
- Accuracy: 0.3233
- Precision: 0.1045
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Recall | Accuracy | Precision |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:--------:|:---------:|
| 2.7132 | 1.0 | 6530 | 2.7463 | 0.1580 | 0.3233 | 0.3233 | 0.1045 |
| 2.7441 | 2.0 | 13060 | 2.7423 | 0.1580 | 0.3233 | 0.3233 | 0.1045 |
| 2.7328 | 3.0 | 19590 | 2.7365 | 0.1580 | 0.3233 | 0.3233 | 0.1045 |
| 2.7464 | 4.0 | 26120 | 2.7374 | 0.1580 | 0.3233 | 0.3233 | 0.1045 |
| 2.7178 | 5.0 | 32650 | 2.7374 | 0.1580 | 0.3233 | 0.3233 | 0.1045 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1623167114075832323/FeVdguyt_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">CARMAXLLA</div>
<div style="text-align: center; font-size: 14px;">@carmaxlla</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from CARMAXLLA.
| Data | CARMAXLLA |
| --- | --- |
| Tweets downloaded | 3139 |
| Retweets | 409 |
| Short tweets | 574 |
| Tweets kept | 2156 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/t435hmy0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @carmaxlla's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/mduks720) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/mduks720/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/carmaxlla')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: transcriber-t5-v6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# transcriber-t5-v6
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0466
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.129 | 0.29 | 500 | 0.1034 |
| 0.0962 | 0.59 | 1000 | 0.0758 |
| 0.0754 | 0.88 | 1500 | 0.0611 |
| 0.097 | 1.18 | 2000 | 0.0562 |
| 0.034 | 1.47 | 2500 | 0.0502 |
| 0.0679 | 1.77 | 3000 | 0.0466 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
---
license: mit
datasets:
- wikipedia
language:
- it
widget:
- text: "milano è una <mask> dell'italia"
example_title: "Example 1"
- text: "giacomo leopardi è stato uno dei più grandi <mask> del classicismo italiano"
example_title: "Example 2"
- text: "la pizza è un noto simbolo della <mask> gastronomica italiana"
example_title: "Example 3"
---
--------------------------------------------------------------------------------------------------
<body>
<span class="vertical-text" style="background-color:lightgreen;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:orange;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:lightblue;border-radius: 3px;padding: 3px;"> Model: FLARE 🔥</span>
<br>
<span class="vertical-text" style="background-color:tomato;border-radius: 3px;padding: 3px;"> Lang: IT</span>
<br>
<span class="vertical-text" style="background-color:lightgrey;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:#CF9FFF;border-radius: 3px;padding: 3px;"> </span>
</body>
--------------------------------------------------------------------------------------------------
<h3>Introduction</h3>
This model is a <b>lightweight</b> and uncased version of <b>MiniLM</b> <b>[1]</b> for the <b>Italian</b> language. Its <b>17M parameters</b> and <b>67MB</b> size make it
<b>85% lighter</b> than a typical mono-lingual BERT model. It is ideal when memory consumption and execution speed are critical while maintaining high-quality results.
<h3>Model description</h3>
The model builds on <b>mMiniLMv2</b> <b>[1]</b> (from Microsoft: [L6xH384 mMiniLMv2](https://github.com/microsoft/unilm/tree/master/minilm)) as a starting point,
focusing it on the Italian language while at the same time turning it into an uncased model by modifying the embedding layer
(as in <b>[2]</b>, but computing document-level frequencies over the <b>Wikipedia</b> dataset and setting a frequency threshold of 0.1%), which brings a considerable
reduction in the number of parameters.
To compensate for the deletion of cased tokens, which now forces the model to exploit lowercase representations of words previously capitalized,
the model has been further pre-trained on the Italian split of the [Wikipedia](https://huggingface.co/datasets/wikipedia) dataset, using the <b>whole word masking [3]</b> technique to make it more robust
to the new uncased representations.
The resulting model has 17M parameters, a vocabulary of 14.610 tokens, and a size of 67MB, which makes it <b>85% lighter</b> than a typical mono-lingual BERT model and
75% lighter than a standard mono-lingual DistilBERT model.
<h3>Training procedure</h3>
The model has been trained for <b>masked language modeling</b> on the Italian <b>Wikipedia</b> (~3GB) dataset for 10K steps, using the AdamW optimizer, with a batch size of 512
(obtained through 128 gradient accumulation steps),
a sequence length of 512, and a linearly decaying learning rate starting from 5e-5. The training has been performed using <b>dynamic masking</b> between epochs and
exploiting the <b>whole word masking</b> technique.
<h3>Performances</h3>
The following metrics have been computed on the Part of Speech Tagging and Named Entity Recognition tasks, using the <b>UD Italian ISDT</b> and <b>WikiNER</b> datasets, respectively.
The PoST model has been trained for 5 epochs, and the NER model for 3 epochs, both with a constant learning rate, fixed at 1e-5. For Part of Speech Tagging, the metrics have been computed on the default test set
provided with the dataset, while for Named Entity Recognition the metrics have been computed with a 5-fold cross-validation
| Task | Recall | Precision | F1 |
| ------ | ------ | ------ | ------ |
| Part of Speech Tagging | 95.64 | 95.32 | 95.45 |
| Named Entity Recognition | 82.27 | 80.64 | 81.29 |
The metrics have been computed at the token level and macro-averaged over the classes.
<h3>Demo</h3>
You can try the model online (fine-tuned on named entity recognition) using this web app: https://huggingface.co/spaces/osiria/flare-it-demo
<h3>Quick usage</h3>
```python
from transformers import AutoTokenizer, XLMRobertaForMaskedLM
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("osiria/flare-it")
model = XLMRobertaForMaskedLM.from_pretrained("osiria/flare-it")
pipeline_mlm = pipeline(task="fill-mask", model=model, tokenizer=tokenizer)
```
<h3>Limitations</h3>
This lightweight model has been further pre-trained on Wikipedia, so it's particularly suitable as an agile analyzer for large volumes of natively digital text
from the world wide web, written in a correct and fluent form (like wikis, web pages, news, etc.). However, it may show limitations when it comes to chaotic text, containing errors and slang expressions
(like social media posts) or when it comes to domain-specific text (like medical, financial or legal content).
<h3>References</h3>
[1] https://arxiv.org/abs/2012.15828
[2] https://arxiv.org/abs/2010.05609
[3] https://arxiv.org/abs/1906.08101
<h3>License</h3>
The model is released under <b>MIT</b> license
---
inference:
parameters:
do_sample: true
max_length: 384
top_p: 0.9
repetition_penalty: 1.0
language:
- en
license: mit
tags:
- "text2text generation"
task:
name: "lyrics interpretation"
type: "text2text generation"
widget:
-
text: "Explain: \nLoving him is like driving a new Maserati down a dead end street\nFaster than the wind, passionate as sin, ending so suddenly\nLoving him is like trying to change your mind\nOnce you're already flying through the free fall\nLike the colors in autumn, so bright, just before they lose it all\n\nLosing him was blue, like I'd never known\nMissing him was dark gray, all alone\nForgetting him was like trying to know\nSomebody you never met\nBut loving him was red\nLoving him was red\n\nTouching him was like realizing all you ever wanted\nWas right there in front of you\nMemorizing him was as easy as knowing all the words\nTo your old favorite song\nFighting with him was like trying to solve a crossword\nAnd realizing there's no right answer\nRegretting him was like wishing you never found out\nThat love could be that strong\n\nLosing him was blue, like I'd never known\nMissing him was dark gray, all alone\nForgetting him was like trying to know\nSomebody you never met\nBut loving him was red\nOh, red\nBurning red\n\nRemembering him comes in flashbacks and echoes\nTell myself it's time now gotta let go\nBut moving on from him is impossible\nWhen I still see it all in my head\nIn burning red\nBurning, it was red\n\nOh, losing him was blue, like I'd never\nnown\nMissing him was dark gray, all alone\nForgetting him was like trying to know\nSomebody you never met\n'Cause loving him was red\nYeah, yeah, red\nBurning red\n\nAnd that's why he's spinning 'round in my head\nComes back to me, burning red\nYeah, yeah\nHis love was like driving a new Maserati down a dead end street"
example_title: "Red - Taylor Swift"
---
# Overview
This pilot hub aims to test whether a flan-t5-base can effectively automate poem interpretation.
To use the hub, simply paste in any poem of interest and see its meaning. Please begin your request with the prompt, 'Explain: '.
---
inference:
parameters:
do_sample: true
max_length: 384
top_p: 0.9
repetition_penalty: 1.0
language:
- en
license: mit
tags:
- "text2text generation"
task:
name: "poem interpretation"
type: "text2text generation"
widget:
-
text: "Explain: \nThe Lost Boy\n\nBoy it really stinks in here\nThe dumpster is not the place\nTo get the food you need each day\nJust to feed your face.\n\nA ten-year-old with a dirty face\nCrawls out with his daily meal\nWhat is he doing in this place\nHow am I suppose to feel?\n\nHis mother cradles a baby \nThe child's been dead three weeks\nHer mind is gone from drug abuse\nAnd now she hardly speaks.\n\nGrandma is a drunkard\nWith men who come to town\nBringing her a bottle\nJust to go a round.\n\nDrugs out on the table \nA line or two is good\nThat should carry her over \nNo one ever understood.\n\nThe little boy with dirty face\nHas not been schooled in years\nHe fights the streets alone\nLong since lost his fears.\n\nA stale sandwich, and watered coke\nHis meal for this day\nWhatever tomorrow may bring\nHe knows not the word play.\n\nEmaciated with distant eyes\nNo one really sees him\nJust one of the lost boys\nHis life completely grim.\n\nGod bless the children!\n\n"
example_title: "The Lost Boy - pattyann4500 (allpoetry.com/920731)"
-
text: "Explain: \nLet your breath be the air I need,\nwhen I drown in your eyes as I see.\nLet yourself fall into my arms that bleed,\nwhen the world shows you no mercy.\n\nLet your sad past bury down in the core,\nwhen you walk with your heart close to me.\nLet there be your lovely face at the door,\nWhen I return from the war no matter how long it be.\n\nLet your love nourish my frozen heart,\nwhen it lies under the snow capped tree.\nLet me be enslaved with you forever from the start,\nwhen the time comes, together we shall flee.\n\nLet your presence enlighten my dark,\nwhen your smile reflects in the sea.\nLet the words of love be thy spark,\nwhen you come out of dreams to kiss me.\n\nI wish we were together... my princess... \n"
example_title: "Princess... - Soulhealer95 (allpoetry.com/11038949)"
---
# Overview
The aim of this pilot hub is to test whether a Flan-T5-Base model, when pre-trained with a lyrics interpretation task, can better interpret poems.
To use the hub, simply paste in any poem of interest and see its meaning. Please begin your request with the prompt, 'Explain: '.
---
datasets:
- logo-wizard/modern-logo-dataset
tags:
- text-to-image
- lora
- stable-diffusion
pipeline_tag: text-to-image
license: creativeml-openrail-m
---
# LoRA text2image fine-tuning - eewwann/logo-diffusion-lora-v10
These are LoRA with Hadamard Product (LoHa) adaption weights for [stabilityai/stable-diffusion-2-1](https://huggingface.co/stabilityai/stable-diffusion-2-1/blob/main/v2-1_768-nonema-pruned.safetensors). The weights were fine-tuned on the [logo-wizard/modern-logo-dataset](https://huggingface.co/datasets/logo-wizard/modern-logo-dataset) dataset. You can find some example images in the following.




## Psychology-Alpaca-RM
- PEFT adapter layers for a reward model based on ``decapoda-research/llama-7b-hf``.
- Trained with a small subset (110 data points) of ``samhog/cgpt-pairs`` with 10K prompts, each with two answers (one 'good', one 'bad')
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1080557016463147008/sPN7F0Dd_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Scratch Team</div>
<div style="text-align: center; font-size: 14px;">@scratch</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Scratch Team.
| Data | Scratch Team |
| --- | --- |
| Tweets downloaded | 3161 |
| Retweets | 2028 |
| Short tweets | 4 |
| Tweets kept | 1129 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/qnkb8q9j/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @scratch's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1yt6szut) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1yt6szut/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/scratch')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1627766675620745235/CgPEg0Tc_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Chris Uri</div>
<div style="text-align: center; font-size: 14px;">@redcloudnimbus</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Chris Uri.
| Data | Chris Uri |
| --- | --- |
| Tweets downloaded | 1359 |
| Retweets | 208 |
| Short tweets | 199 |
| Tweets kept | 952 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/p68z097t/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @redcloudnimbus's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/s8pwy6bb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/s8pwy6bb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/redcloudnimbus')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: expert-freelaw
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# expert-freelaw
This model is a fine-tuned version of [EleutherAI/pythia-1b-deduped](https://huggingface.co/EleutherAI/pythia-1b-deduped) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0413
- Accuracy: 0.5643
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0772 | 0.01 | 200 | 2.0728 | 0.5588 |
| 2.0718 | 0.01 | 400 | 2.0656 | 0.5600 |
| 2.0661 | 0.02 | 600 | 2.0561 | 0.5617 |
| 2.0606 | 0.03 | 800 | 2.0472 | 0.5632 |
| 2.0514 | 0.04 | 1000 | 2.0413 | 0.5643 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5365007161029405
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4711
- Matthews Correlation: 0.5365
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9.678498850368218e-06
- train_batch_size: 32
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 268 | 0.4731 | 0.4664 |
| 0.4819 | 2.0 | 536 | 0.4537 | 0.5233 |
| 0.4819 | 3.0 | 804 | 0.4711 | 0.5365 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
---
library_name: "transformers.js"
---
https://huggingface.co/openai/whisper-small.en with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
---
license: openrail
tags:
- scat
- lora
- stable diffusion
---
Here's the defecation lora, it was available on Civitai until the ban on scat content.
You can use various trigger words to get different effects, like "Scat", "Disposal", "Feces" and so on.
The main problem with this model is that that it tends to confuse the anus and the vagina, so you'll have to add prompts and negatives usefull to reduce this effect.
You can find my other models on Civitai: https://civitai.com/user/JollyIm/models
A first example:

Prompts: Realistic, Realism, (Masterpiece, Best Quality, High Quality, Highres:1.4), Detailed, Extremely Detailed, Ambient Soft Lighting, 4K, (Extremely Detailed Eyes, Detailed Face and Skin:1.2), masterpiece, best quality, 1girl, feces, disposal, (anal:1.2), <lora:defecation_v1:0.7>, (public toilet), embarassed, (pile of feces), (perfect pussy), (perfect vagina),
Negative prompt: easynegative, (worst quality:1.2), (low quality:1.2), (vaginal), (dirty vagina:1.2), (feces in vagina:1.2), (feces in vagina:1.2)
Second example:

Prompts: masterpiece, best quality, 1girl, scat, (anal:1.2), <lora:defecation_v1:0.9>, (toilet), from behind,
Negative prompt: easynegative, (worst quality:1.2), (low quality:1.2), (vaginal), (dirty vagina:1.2), (scat in vagina:1.2), (feces in vagina:1.2)
---
library_name: "transformers.js"
---
https://huggingface.co/facebook/nllb-200-distilled-600M with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
---
library_name: "transformers.js"
---
https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
---
library_name: "transformers.js"
---
https://huggingface.co/distilbert-base-cased-distilled-squad with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
---
library_name: "transformers.js"
---
https://huggingface.co/bert-base-uncased with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
---
library_name: "transformers.js"
---
https://huggingface.co/sshleifer/distilbart-cnn-6-6 with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
---
library_name: "transformers.js"
---
https://huggingface.co/google/flan-t5-small with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.