| modelId (string, 4-81 chars) | tags (list) | pipeline_tag (string, 17 classes) | config (dict) | downloads (int64, 0-59.7M) | first_commit (timestamp[ns, tz=UTC]) | card (string, 51-438k chars) |
|---|---|---|---|---|---|---|
Culmenus/XLMR-ENIS-finetuned-ner
|
[
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:mim_gold_ner",
"transformers",
"generated_from_trainer",
"license:agpl-3.0",
"model-index",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"XLMRobertaForTokenClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6 | null |
---
license: artistic-2.0
datasets:
- Fakermiya/nsfw-sfw
language:
- pl
library_name: adapter-transformers
tags:
- art
---
|
Culmenus/checkpoint-168500-finetuned-de-to-is_nr2
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- generated_from_trainer
model-index:
- name: subh_whisper_small_distil_libri360_12_to_10_batch_4_epoch_20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# subh_whisper_small_distil_libri360_12_to_10_batch_4_epoch_20
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 3609.3647
- eval_wer: 95.1950
- eval_runtime: 2080.4966
- eval_samples_per_second: 2.598
- eval_steps_per_second: 2.598
- epoch: 4.95
- step: 500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 512
- total_train_batch_size: 2048
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 20
- mixed_precision_training: Native AMP
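For reference, these settings map roughly onto `transformers` `TrainingArguments`; this is a minimal sketch with a placeholder `output_dir`, not the exact code used for this run:
```python
# a rough sketch of the hyperparameters above as TrainingArguments;
# output_dir is a placeholder and anything not listed keeps its default
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="whisper-small-distil",   # placeholder
    learning_rate=2e-3,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=512,     # 4 * 512 = 2048 total train batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.2,
    num_train_epochs=20,
    fp16=True,                           # "Native AMP" mixed precision
)
```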
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Culmenus/opus-mt-de-is-finetuned-de-to-is
|
[
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1 | 2023-04-19T18:27:10Z |
---
license: cc-by-nc-4.0
language:
- zh
tags:
- mad
- fate
- hk
---
# Watch "Mad Fate" (命案) Free Online in Full (2023, 小鴨影音)
Where can you watch "Mad Fate" free online? Watch "Mad Fate" online in full HD on 小鴨影音 and keep up with the latest movie news anytime, anywhere!
Watch "Mad Fate" online, full 小鴨 version 2023; watch the movie "Mad Fate" online [free 小鴨 edition], still in original, official HD quality.
## Watch "Mad Fate" online and download the movie for free:
[](https://super4kuhdq.com/zh/movie/969050)
🔴 Watch the full version in HD ➡ [https://super4kuhdq.com/zh/movie/969050](https://super4kuhdq.com/zh/movie/969050)
Watch "Mad Fate" (Mad Fate 2020) free on 小鴨 in full, a Hong Kong film, 2023 [watch online]
●● Available for download: (Mad Fate 2023) 720p, 1080p, BrRip, DvdRip, Youtube, Reddit, multi-language and high quality ●●
Just click to start watching: free online viewing in high definition, the full version of "Mad Fate" on 小鴨. Traditional Chinese subtitles and offline viewing are provided, with playback that resumes across devices (Android, iOS, Android TV, Apple TV, Chromecast, AirPlay, MOD).
You can enjoy the movie [Mad Fate 2023] in the highest quality for free. Watch the full version of "Mad Fate" online.
## "Mad Fate" Hong Kong release, release date, story, plot synopsis and how to watch, available here.
A fortune-telling master (played by 林家栋), intent on helping Sister Fung avert her "fatal calamity", cannot stop her from being brutally murdered after she leaves on her own. While the master is lost in grief, he happens to meet the young heir of a cha chaan teng (played by 杨乐文) who has delivered a takeaway to the wrong address, and he stirs up the young man's innate urge to kill. The master divines that the young man will go to prison for murder; afraid of being locked up again, the young man begs the master to help change his fate. A veteran police officer (played by 吴延烨) who once saw the young man kill a cat is convinced he is a born psychopath who will never shake off his bloodlust, but the master believes that if it is inborn, then the fault lies not with the young man but with fate! The master exhausts feng shui, divination and both Eastern and Western occult arts, yet man proposes and heaven disposes, and he fails again and again... At his wits' end, with the old officer closing in, the killer lurking, and the fatal temptation of a young Sister Fung (played by 伍咏诗), the young man's murderous urge burns ever fiercer, and he picks up a blade to set out on a path of killing. The master, meanwhile, is on the verge of a mental breakdown as the great calamity written into his own fate draws near. Is everything fate, with nothing left to human will?
Release date: 2023-04-20
Running time: 108 minutes
Genre: Crime, Mystery
## How can you watch "Mad Fate" free online without ads?
Here you can watch the movie "Mad Fate" free online in full HD 1080p, with no registration and no ads. If you use an Apple device and your Android TV supports AirPlay, you can mirror your Apple device's screen to the TV or stream the content.
## You can also download the movie "Mad Fate" here for free!
Looking for a few movies to watch? Below we introduce several decent movie-resource websites, each with its own focus: some specialise in collecting films, some in TV series, and some in American shows; we hope these recommendations help. 小調網, formerly known as 電影天堂 (Movie Paradise), is currently one of the larger domestic platforms for watching and downloading films online, offering mainly Thunder (迅雷) downloads, FlashGet (快車) downloads and mobile video formats.
We offer the chance to watch the latest films in full HD quality. Watch the movie "Mad Fate" online as a free HD film in 1080p. You can access subtitled and original versions of the most prominent festival works and films.
### Google keywords:
Mad Fate (命案)
Mad Fate watch online
Mad Fate watch online on 小鴨
Mad Fate free online
Mad Fate watch online
Mad Fate 2023 movie
Mad Fate watch online full version
Mad Fate Hong Kong release
Mad Fate Hong Kong release date
|
Culmenus/opus-mt-de-is-finetuned-de-to-is_ekkicc
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: cc-by-nc-4.0
language:
- zh
tags:
- art
- legal
---
# Watch "John Wick: Chapter 4" (殺神John Wick 4) Free Online in Full (2023, 小鴨影音)
Where can you watch "John Wick: Chapter 4" free online? Watch "John Wick: Chapter 4" online in full HD on 小鴨影音 and keep up with the latest movie news anytime, anywhere!
Watch "John Wick: Chapter 4" online, full 小鴨 version 2023; watch the movie "John Wick: Chapter 4" online [free 小鴨 edition], still in original, official HD quality.
## Watch "John Wick: Chapter 4" online and download the movie for free:
[](https://super4kuhdq.com/zh/movie/603692)
🔴 Watch the full version in HD ➡ [https://super4kuhdq.com/zh/movie/603692](https://super4kuhdq.com/zh/movie/603692)
Watch "John Wick: Chapter 4" (John Wick: Chapter 4 2023) free on 小鴨 in full, Hong Kong cinema 2023 [watch online]
●● Available for download: (John Wick: Chapter 4 2023) 720p, 1080p, BrRip, DvdRip, Youtube, Reddit, multi-language and high quality ●●
Just click to start watching: free online viewing in high definition, the full version of "John Wick: Chapter 4" on 小鴨. Traditional Chinese subtitles and offline viewing are provided, with playback that resumes across devices (Android, iOS, Android TV, Apple TV, Chromecast, AirPlay, MOD).
You can enjoy the movie [John Wick: Chapter 4 2023] in the highest quality for free. Watch the full version of "John Wick: Chapter 4" online.
## "John Wick: Chapter 4" Hong Kong release, release date, story, plot synopsis and how to watch, available here.
In New York, John evades the High Table's assassins and sets his grand plan of revenge against the High Table in motion. After John kills the man who controls the High Table in Morocco, the Marquis summons Winston, manager of the New York Continental, and the concierge Charon; he strips Winston of his position, demolishes the Continental, and executes Charon. The Marquis commissions Caine to assassinate John. John takes refuge at the Osaka Continental, where the manager Shimazu orders the hotel cleared and fights the assassins alongside John. Back in New York, Winston advises John to challenge the Marquis to a duel. To do so John must repair his relationship with the Romani clan by killing Killa, a senior officer of the High Table's German branch. Katia then names John a representative of the family, and John has Winston deliver the challenge to the Marquis on his behalf. The Marquis places a 26 million dollar bounty on John. John fights off the assassins' attacks and goes to the Sacré-Cœur to face Caine.
Release date: 2023-03-22
Running time: 170 minutes
Genre: Action, Thriller, Crime
## How can you watch "John Wick: Chapter 4" free online without ads?
Here you can watch the movie "John Wick: Chapter 4" free online in full HD 1080p, with no registration and no ads. If you use an Apple device and your Android TV supports AirPlay, you can mirror your Apple device's screen to the TV or stream the content.
## You can also download the movie "John Wick: Chapter 4" here for free!
Looking for a few movies to watch? Below we introduce several decent movie-resource websites, each with its own focus: some specialise in collecting films, some in TV series, and some in American shows; we hope these recommendations help. 小調網, formerly known as 電影天堂 (Movie Paradise), is currently one of the larger domestic platforms for watching and downloading films online, offering mainly Thunder (迅雷) downloads, FlashGet (快車) downloads and mobile video formats.
We offer the chance to watch the latest films in full HD quality. Watch the movie "John Wick: Chapter 4" online as a free HD film in 1080p. You can access subtitled and original versions of the most prominent festival works and films.
### Google keywords:
John Wick: Chapter 4 (殺神John Wick 4)
John Wick 4 watch online
John Wick 4 watch online on 小鴨
John Wick 4 free online
John Wick 4 watch online
John Wick 4 2023 movie
John Wick 4 watch online full version
John Wick 4 Hong Kong release
John Wick 4 Hong Kong release date
|
Culmenus/opus-mt-de-is-finetuned-de-to-is_nr2-finetuned-de-to-is_nr2
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-04-19T18:33:27Z |
---
library_name: stable-baselines3
tags:
- HalfCheetahBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: HalfCheetahBulletEnv-v0
type: HalfCheetahBulletEnv-v0
metrics:
- type: mean_reward
value: 2457.48 +/- 33.74
name: mean_reward
verified: false
---
# **A2C** Agent playing **HalfCheetahBulletEnv-v0**
This is a trained model of an **A2C** agent playing **HalfCheetahBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
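A minimal sketch of that usage, assuming the standard `huggingface_sb3` API; the repo id and filename below are placeholders, not taken from this card:
```python
# a minimal sketch; repo_id and filename are hypothetical placeholders
import gym
import pybullet_envs  # noqa: F401  (registers HalfCheetahBulletEnv-v0)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="<user>/a2c-HalfCheetahBulletEnv-v0",   # placeholder
    filename="a2c-HalfCheetahBulletEnv-v0.zip",     # placeholder
)
model = A2C.load(checkpoint)

env = gym.make("HalfCheetahBulletEnv-v0")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```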
|
CurtisASmith/GPT-JRT
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-04-19T18:34:56Z |
---
library_name: stable-baselines3
tags:
- RoombaAToB-punish-stagnant-bounds
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: RoombaAToB-punish-stagnant-bounds
type: RoombaAToB-punish-stagnant-bounds
metrics:
- type: mean_reward
value: -300.75 +/- 0.00
name: mean_reward
verified: false
---
# **PPO** Agent playing **RoombaAToB-punish-stagnant-bounds**
This is a trained model of a **PPO** agent playing **RoombaAToB-punish-stagnant-bounds**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
CurtisBowser/DialoGPT-medium-sora
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | 2023-04-19T18:44:18Z |
---
license: creativeml-openrail-m
language:
- zh
tags:
- legal
- art
---
# Watch "A Light Never Goes Out" (燈火闌珊) Free Online in Full (2023, 小鴨影音)
Where can you watch "A Light Never Goes Out" free online? Watch "A Light Never Goes Out" online in full HD on 小鴨影音 and keep up with the latest movie news anytime, anywhere!
Watch "A Light Never Goes Out" online, full 小鴨 version 2023; watch the movie "A Light Never Goes Out" online [free 小鴨 edition], still in original, official HD quality.
## Watch "A Light Never Goes Out" online and download the movie for free:
[](https://super4kuhdq.com/zh/movie/824742)
🔴 Watch the full version in HD ➡ [https://super4kuhdq.com/zh/movie/824742](https://super4kuhdq.com/zh/movie/824742)
Watch "A Light Never Goes Out" (A Light Never Goes Out 2023) free on 小鴨 in full, a Hong Kong film, 2023 [watch online]
●● Available for download: (A Light Never Goes Out 2023) 720p, 1080p, BrRip, DvdRip, Youtube, Reddit, multi-language and high quality ●●
Just click to start watching: free online viewing in high definition, the full version of "A Light Never Goes Out" on 小鴨. Traditional Chinese subtitles and offline viewing are provided, with playback that resumes across devices (Android, iOS, Android TV, Apple TV, Chromecast, AirPlay, MOD).
You can enjoy the movie [A Light Never Goes Out 2023] in the highest quality for free. Watch the full version of "A Light Never Goes Out" online.
## "A Light Never Goes Out" Hong Kong release, release date, story, plot synopsis and how to watch, available here.
Hong Kong was once famous around the world for streets full of neon signs, but as times changed, that dazzling prosperity seems to have become a thing of the past. A widow, unable to accept her husband's death, spends her days at home sorting through his belongings. Everyone around her tells her to stop holding on, to no avail. One day she finds a key that opens her late husband's secret neon-sign workshop, and there she meets his young apprentice and learns that her husband had always hoped to rebuild a mysterious neon sign that had been taken down. Hoping to fulfil his last wish, she gradually comes to know the story behind each of the old signs.
Release date: 2023-04-13
Running time: 103 minutes
Genre: Drama
## How can you watch "A Light Never Goes Out" free online without ads?
Here you can watch the movie "A Light Never Goes Out" free online in full HD 1080p, with no registration and no ads. If you use an Apple device and your Android TV supports AirPlay, you can mirror your Apple device's screen to the TV or stream the content.
## You can also download the movie "A Light Never Goes Out" here for free!
Looking for a few movies to watch? Below we introduce several decent movie-resource websites, each with its own focus: some specialise in collecting films, some in TV series, and some in American shows; we hope these recommendations help. 小調網, formerly known as 電影天堂 (Movie Paradise), is currently one of the larger domestic platforms for watching and downloading films online, offering mainly Thunder (迅雷) downloads, FlashGet (快車) downloads and mobile video formats.
We offer the chance to watch the latest films in full HD quality. Watch the movie "A Light Never Goes Out" online as a free HD film in 1080p. You can access subtitled and original versions of the most prominent festival works and films.
### Google keywords:
A Light Never Goes Out (燈火闌珊)
A Light Never Goes Out watch online
A Light Never Goes Out watch online on 小鴨
A Light Never Goes Out free online
A Light Never Goes Out watch online
A Light Never Goes Out 2023 movie
A Light Never Goes Out watch online full version
A Light Never Goes Out Hong Kong release
A Light Never Goes Out Hong Kong release date
|
CyberMuffin/DialoGPT-small-ChandlerBot
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9 | null |
---
library_name: stable-baselines3
tags:
- RoombaAToB-punish-stag-at-end
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: RoombaAToB-punish-stag-at-end
type: RoombaAToB-punish-stag-at-end
metrics:
- type: mean_reward
value: -2014.75 +/- 15.01
name: mean_reward
verified: false
---
# **PPO** Agent playing **RoombaAToB-punish-stag-at-end**
This is a trained model of a **PPO** agent playing **RoombaAToB-punish-stag-at-end**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Czapla/Rick
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: creativeml-openrail-m
language:
- zh
tags:
- legal
- art
---
# Watch "Renfield" (吸血鬼奴才:雷菲爾) Free Online in Full (2023, 小鴨影音)
Where can you watch "Renfield" free online? Watch "Renfield" online in full HD on 小鴨影音 and keep up with the latest movie news anytime, anywhere!
Watch "Renfield" online, full 小鴨 version 2023; watch the movie "Renfield" online [free 小鴨 edition], still in original, official HD quality.
## Watch "Renfield" online and download the movie for free:
[](https://super4kuhdq.com/zh/movie/649609)
🔴 Watch the full version in HD ➡ [https://super4kuhdq.com/zh/movie/649609](https://super4kuhdq.com/zh/movie/649609)
Watch "Renfield" (RENFIELD 2023) free on 小鴨 in full, Hong Kong cinema 2023 [watch online]
●● Available for download: (Renfield 2023) 720p, 1080p, BrRip, DvdRip, Youtube, Reddit, multi-language and high quality ●●
Just click to start watching: free online viewing in high definition, the full version of "Renfield" on 小鴨. Traditional Chinese subtitles and offline viewing are provided, with playback that resumes across devices (Android, iOS, Android TV, Apple TV, Chromecast, AirPlay, MOD).
You can enjoy the movie [RENFIELD 2023] in the highest quality for free. Watch the full version of "Renfield" online.
## "Renfield" Hong Kong release, release date, story, plot synopsis and how to watch, available here.
In this modern take on the story of Dracula, the protagonist is the vampire's devoted servant. Renfield (Nicholas Hoult) is the long-suffering aide to the most narcissistic boss in history, Dracula (Nicolas Cage). Renfield is forced to find prey for his master and must be at his beck and call, doing whatever he is told no matter how demeaning. But now, after centuries of unquestioning servitude, Renfield is ready to see whether there is another life for him outside the shadow of the Prince of Darkness, if only he can find a way to break free of his codependent relationship with his master.
Release date: 2023-04-07
Running time: 93 minutes
Genre: Comedy, Horror, Fantasy
## How can you watch "Renfield" free online without ads?
Here you can watch the movie "Renfield" free online in full HD 1080p, with no registration and no ads. If you use an Apple device and your Android TV supports AirPlay, you can mirror your Apple device's screen to the TV or stream the content.
## You can also download the movie "Renfield" here for free!
Looking for a few movies to watch? Below we introduce several decent movie-resource websites, each with its own focus: some specialise in collecting films, some in TV series, and some in American shows; we hope these recommendations help. 小調網, formerly known as 電影天堂 (Movie Paradise), is currently one of the larger domestic platforms for watching and downloading films online, offering mainly Thunder (迅雷) downloads, FlashGet (快車) downloads and mobile video formats.
We offer the chance to watch the latest films in full HD quality. Watch the movie "Renfield" online as a free HD film in 1080p. You can access subtitled and original versions of the most prominent festival works and films.
### Google keywords:
Renfield (吸血鬼奴才:雷菲爾)
Renfield watch online
Renfield watch online on 小鴨
Renfield free online
Renfield watch online
Renfield 2023 movie
Renfield watch online full version
Renfield Hong Kong release
Renfield Hong Kong release date
|
D-Keqi/espnet_asr_train_asr_streaming_transformer_raw_en_bpe500_sp_valid.acc.ave
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [mssjoyy/distilbert-base-uncased-finetuned-ner](https://huggingface.co/mssjoyy/distilbert-base-uncased-finetuned-ner) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1753
- Precision: 0.6570
- Recall: 0.6506
- F1: 0.6538
- Accuracy: 0.9676
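Precision, recall and F1 of this kind are usually entity-level NER scores, with accuracy measured at the token level. A minimal sketch of how such numbers are typically computed, assuming the `evaluate`/`seqeval` route rather than this card's exact evaluation code:
```python
# a minimal sketch of entity-level NER metrics with evaluate + seqeval
import evaluate

seqeval = evaluate.load("seqeval")
predictions = [["O", "B-PER", "I-PER", "O"]]   # toy predicted tag sequence
references = [["O", "B-PER", "O", "O"]]        # toy gold tag sequence
results = seqeval.compute(predictions=predictions, references=references)
print(results["overall_precision"], results["overall_recall"],
      results["overall_f1"], results["overall_accuracy"])
```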
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 176 | 0.1542 | 0.6349 | 0.6335 | 0.6342 | 0.9669 |
| No log | 2.0 | 352 | 0.1714 | 0.6423 | 0.6004 | 0.6207 | 0.9664 |
| 0.0172 | 3.0 | 528 | 0.1689 | 0.6516 | 0.6154 | 0.6330 | 0.9677 |
| 0.0172 | 4.0 | 704 | 0.1728 | 0.6501 | 0.6410 | 0.6455 | 0.9678 |
| 0.0172 | 5.0 | 880 | 0.1753 | 0.6570 | 0.6506 | 0.6538 | 0.9676 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
D3vil/DialoGPT-smaall-harrypottery
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
# Vocabulary Trimmed [vocabtrimmer/xlm-roberta-base-xnli-en](https://huggingface.co/vocabtrimmer/xlm-roberta-base-xnli-en): `vocabtrimmer/xlm-roberta-base-xnli-en-trimmed-en`
This model is a trimmed version of [vocabtrimmer/xlm-roberta-base-xnli-en](https://huggingface.co/vocabtrimmer/xlm-roberta-base-xnli-en) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | vocabtrimmer/xlm-roberta-base-xnli-en | vocabtrimmer/xlm-roberta-base-xnli-en-trimmed-en |
|:---------------------------|:----------------------------------------|:---------------------------------------------------|
| parameter_size_full | 278,045,955 | 219,090,435 |
| parameter_size_embedding | 192,001,536 | 133,046,016 |
| vocab_size | 250,002 | 173,237 |
| compression_rate_full | 100.0 | 78.8 |
| compression_rate_embedding | 100.0 | 69.29 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|:--------------------|----------------:|
| en | vocabtrimmer/mc4_validation | text | en | validation | | 2 |
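As a quick sanity check of the table above, the embedding figures are consistent with xlm-roberta-base's hidden size of 768: each embedding parameter count is vocab_size × 768, and the full-model reduction equals the embedding reduction.
```python
# arithmetic check of the trimming summary, assuming a hidden size of 768
hidden = 768
assert 250_002 * hidden == 192_001_536                            # original embeddings
assert 173_237 * hidden == 133_046_016                            # trimmed embeddings
assert 278_045_955 - (250_002 - 173_237) * hidden == 219_090_435  # full model
```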
|
D3xter1922/electra-base-discriminator-finetuned-cola
|
[
"pytorch",
"tensorboard",
"electra",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] |
text-classification
|
{
"architectures": [
"ElectraForSequenceClassification"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 68 | 2023-04-19T19:03:44Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: sofiapecora/ppo-Pyramids
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
D3xter1922/electra-base-discriminator-finetuned-mnli
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
library_name: stable-baselines3
tags:
- RoombaAToB-harcodemap-punish-stagnant-long
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: BC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: RoombaAToB-harcodemap-punish-stagnant-long
type: RoombaAToB-harcodemap-punish-stagnant-long
metrics:
- type: mean_reward
value: -535.33 +/- 0.00
name: mean_reward
verified: false
---
# **BC** Agent playing **RoombaAToB-harcodemap-punish-stagnant-long**
This is a trained model of a **BC** agent playing **RoombaAToB-harcodemap-punish-stagnant-long**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
DHBaek/xlm-roberta-large-korquad-mask
|
[
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"XLMRobertaForQuestionAnswering"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9 | null |
# Vocabulary Trimmed [vocabtrimmer/xlm-roberta-base-xnli-fr](https://huggingface.co/vocabtrimmer/xlm-roberta-base-xnli-fr): `vocabtrimmer/xlm-roberta-base-xnli-fr-trimmed-fr`
This model is a trimmed version of [vocabtrimmer/xlm-roberta-base-xnli-fr](https://huggingface.co/vocabtrimmer/xlm-roberta-base-xnli-fr) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | vocabtrimmer/xlm-roberta-base-xnli-fr | vocabtrimmer/xlm-roberta-base-xnli-fr-trimmed-fr |
|:---------------------------|:----------------------------------------|:---------------------------------------------------|
| parameter_size_full | 278,045,955 | 151,865,091 |
| parameter_size_embedding | 192,001,536 | 65,820,672 |
| vocab_size | 250,002 | 85,704 |
| compression_rate_full | 100.0 | 54.62 |
| compression_rate_embedding | 100.0 | 34.28 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|:--------------------|----------------:|
| fr | vocabtrimmer/mc4_validation | text | fr | validation | | 2 |
|
DJSammy/bert-base-danish-uncased_BotXO-ai
|
[
"pytorch",
"jax",
"da",
"dataset:common_crawl",
"dataset:wikipedia",
"transformers",
"bert",
"masked-lm",
"license:cc-by-4.0",
"fill-mask"
] |
fill-mask
|
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 14 | null |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: AraElectra-finetuned-CrossVal-fnd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AraElectra-finetuned-CrossVal-fnd
This model is a fine-tuned version of [aubmindlab/araelectra-base-discriminator](https://huggingface.co/aubmindlab/araelectra-base-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1317
- Macro F1: 0.9489
- Accuracy: 0.9505
- Precision: 0.9488
- Recall: 0.9490
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 123
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Macro F1 | Accuracy | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:---------:|:------:|
| 0.3788 | 1.0 | 798 | 0.1687 | 0.9453 | 0.9473 | 0.9480 | 0.9431 |
| 0.2273 | 2.0 | 1597 | 0.1876 | 0.9200 | 0.9239 | 0.9306 | 0.9134 |
| 0.1611 | 3.0 | 2395 | 0.1317 | 0.9489 | 0.9505 | 0.9488 | 0.9490 |
| 0.0972 | 4.0 | 3192 | 0.1685 | 0.9484 | 0.9501 | 0.9489 | 0.9479 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
|
DKpro000/DialoGPT-medium-harrypotter
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
DLNLP/t5-small-finetuned-xsum
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: other
inference: false
---
# Alpaca LoRA 65B GPTQ 4bit
This is a [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa) 4bit quantisation of [chansung's alpaca-lora-65B](https://huggingface.co/chansung/alpaca-lora-65b).
I also have 4bit and 2bit GGML files for CPU inference available here: [TheBloke/alpaca-lora-65B-GGML](https://huggingface.co/TheBloke/alpaca-lora-65B-GGML).
## These files need a lot of VRAM!
I believe they will work on 2 x 24GB cards, and I hope that at least the 1024g file will work on an A100 40GB.
I can't guarantee that the two 128g files will work in only 40GB of VRAM.
I haven't specifically tested VRAM requirements yet but will aim to do so at some point. If you have any experiences to share, please do so in the comments.
If you want to try CPU inference instead, check out my GGML repo: [TheBloke/alpaca-lora-65B-GGML](https://huggingface.co/TheBloke/alpaca-lora-65B-GGML).
## GIBBERISH OUTPUT IN `text-generation-webui`?
Please read the Provided Files section below. You should use `alpaca-lora-65B-GPTQ-4bit-128g.no-act-order.safetensors` unless you are able to use the latest Triton branch of GPTQ-for-LLaMa.
## Provided files
Three files are provided. **The second and third files will not work unless you use a recent version of the Triton branch of GPTQ-for-LLaMa**
Specifically, the last two files use `--act-order` for maximum quantisation quality and will not work with oobabooga's fork of GPTQ-for-LLaMa. Therefore at this time it will also not work with the CUDA branch of GPTQ-for-LLaMa, or `text-generation-webui` one-click installers.
Unless you are able to use the latest Triton GPTQ-for-LLaMa code, please use `alpaca-lora-65B-GPTQ-4bit-128g.no-act-order.safetensors`
* `alpaca-lora-65B-GPTQ-4bit-128g.no-act-order.safetensors`
* Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
* Works with text-generation-webui one-click-installers
* Works on Windows
* Will require ~40GB of VRAM, meaning you'll need an A100 or 2 x 24GB cards.
* I haven't yet tested how much VRAM is required exactly so it's possible it won't run on an A100 40GB
* Parameters: Groupsize = 128g. No act-order.
* Command used to create the GPTQ:
```
CUDA_VISIBLE_DEVICES=0 python3 llama.py alpaca-lora-65B-HF c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors alpaca-lora-65B-GPTQ-4bit-128g.no-act-order.safetensors
```
* `alpaca-lora-65B-GPTQ-4bit-128g.safetensors`
* Only works with the latest Triton branch of GPTQ-for-LLaMa
* **Does not** work with text-generation-webui one-click-installers
* **Does not** work on Windows
* Will require 40+GB of VRAM, meaning you'll need an A100 or 2 x 24GB cards.
* I haven't yet tested how much VRAM is required exactly so it's possible it won't run on an A100 40GB
* Parameters: Groupsize = 128g. act-order.
* Offers highest quality quantisation, but requires recent Triton GPTQ-for-LLaMa code and more VRAM
* Command used to create the GPTQ:
```
CUDA_VISIBLE_DEVICES=0 python3 llama.py alpaca-lora-65B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors alpaca-lora-65B-GPTQ-4bit-128g.safetensors
```
* `alpaca-lora-65B-GPTQ-4bit-1024g.safetensors`
* Only works with the latest Triton branch of GPTQ-for-LLaMa
* **Does not** work with text-generation-webui one-click-installers
* **Does not** work on Windows
* Should require less VRAM than the 128g file, so hopefully it will run in an A100 40GB
* I haven't yet tested how much VRAM is required exactly
* Parameters: Groupsize = 1024g. act-order.
* Offers the benefits of act-order, but at a higher groupsize to reduce VRAM requirements
* Command used to create the GPTQ:
```
CUDA_VISIBLE_DEVICES=0 python3 llama.py alpaca-lora-65B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 1024 --save_safetensors alpaca-lora-65B-GPTQ-4bit-1024g.safetensors
```
## How to run in `text-generation-webui`
File `alpaca-lora-65B-GPTQ-4bit-128g.no-act-order.safetensors` can be loaded the same as any other GPTQ file, without requiring any updates to [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui).
[Instructions on using GPTQ 4bit files in text-generation-webui are here](https://github.com/oobabooga/text-generation-webui/wiki/GPTQ-models-\(4-bit-mode\)).
The other two `safetensors` model files were created using `--act-order` to give the maximum possible quantisation quality, but this means it requires that the latest Triton GPTQ-for-LLaMa is used inside the UI.
If you want to use the act-order `safetensors` files and need to update the Triton branch of GPTQ-for-LLaMa, here are the commands I used to clone the Triton branch of GPTQ-for-LLaMa, clone text-generation-webui, and install GPTQ into the UI:
```
# Clone text-generation-webui, if you don't already have it
git clone https://github.com/oobabooga/text-generation-webui
# Make a repositories directory
mkdir text-generation-webui/repositories
cd text-generation-webui/repositories
# Clone the latest GPTQ-for-LLaMa code inside text-generation-webui
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
```
Then install this model into `text-generation-webui/models` and launch the UI as follows:
```
cd text-generation-webui
python server.py --model alpaca-lora-65B-GPTQ-4bit --wbits 4 --groupsize 128 --model_type Llama # add any other command line args you want
```
The above commands assume you have installed all dependencies for GPTQ-for-LLaMa and text-generation-webui. Please see their respective repositories for further information.
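If it helps, one way to fetch the model files into `text-generation-webui/models` is with `huggingface_hub`; this is only a sketch, and the repo id below is an assumption based on this card's title, so adjust it to the actual repository name:
```python
# a sketch only: the repo_id is assumed from the card title, not confirmed
from huggingface_hub import snapshot_download

path = snapshot_download(repo_id="TheBloke/alpaca-lora-65B-GPTQ-4bit")
print(path)  # then copy or symlink this folder into text-generation-webui/models/
```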
If you can't update GPTQ-for-LLaMa to the latest Triton branch, or don't want to, you can use `alpaca-lora-65B-GPTQ-4bit-128g.no-act-order.safetensors` as mentioned above, which should work without any upgrades to text-generation-webui.
# Original model card not provided
No model card was provided in [chansung's original repository](https://huggingface.co/chansung/alpaca-lora-65b).
Based on the name, I assume this is the result of fine-tuning with the original GPT-3.5 Alpaca dataset. It is unknown whether the original Stanford data was used or the [cleaned tloen/alpaca-lora variant](https://github.com/tloen/alpaca-lora).
|
DTAI-KULeuven/robbertje-1-gb-merged
|
[
"pytorch",
"roberta",
"fill-mask",
"nl",
"dataset:oscar",
"dataset:oscar (NL)",
"dataset:dbrd",
"dataset:lassy-ud",
"dataset:europarl-mono",
"dataset:conll2002",
"arxiv:2101.05716",
"transformers",
"Dutch",
"Flemish",
"RoBERTa",
"RobBERT",
"RobBERTje",
"license:mit",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1 | 2023-04-19T19:52:41Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 845.47 +/- 135.68
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
DTAI-KULeuven/robbertje-1-gb-non-shuffled
|
[
"pytorch",
"roberta",
"fill-mask",
"nl",
"dataset:oscar",
"dataset:dbrd",
"dataset:lassy-ud",
"dataset:europarl-mono",
"dataset:conll2002",
"arxiv:2101.05716",
"transformers",
"Dutch",
"Flemish",
"RoBERTa",
"RobBERT",
"RobBERTje",
"license:mit",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 53 | 2023-04-19T19:53:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-squad
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
DTAI-KULeuven/robbertje-1-gb-shuffled
|
[
"pytorch",
"roberta",
"fill-mask",
"nl",
"dataset:oscar",
"dataset:oscar (NL)",
"dataset:dbrd",
"dataset:lassy-ud",
"dataset:europarl-mono",
"dataset:conll2002",
"arxiv:2101.05716",
"transformers",
"Dutch",
"Flemish",
"RoBERTa",
"RobBERT",
"RobBERTje",
"license:mit",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
# Vocabulary Trimmed [vocabtrimmer/xlm-roberta-base-xnli-de](https://huggingface.co/vocabtrimmer/xlm-roberta-base-xnli-de): `vocabtrimmer/xlm-roberta-base-xnli-de-trimmed-de`
This model is a trimmed version of [vocabtrimmer/xlm-roberta-base-xnli-de](https://huggingface.co/vocabtrimmer/xlm-roberta-base-xnli-de) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | vocabtrimmer/xlm-roberta-base-xnli-de | vocabtrimmer/xlm-roberta-base-xnli-de-trimmed-de |
|:---------------------------|:----------------------------------------|:---------------------------------------------------|
| parameter_size_full | 278,045,955 | 156,466,947 |
| parameter_size_embedding | 192,001,536 | 70,422,528 |
| vocab_size | 250,002 | 91,696 |
| compression_rate_full | 100.0 | 56.27 |
| compression_rate_embedding | 100.0 | 36.68 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|:--------------------|----------------:|
| de | vocabtrimmer/mc4_validation | text | de | validation | | 2 |
|
alexandrainst/da-binary-emotion-classification-base
|
[
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"da",
"transformers",
"license:cc-by-sa-4.0"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1,066 | null |
---
language:
- en
tags:
- causal-lm
license: cc-by-nc-sa-4.0
datasets:
- dmayhem93/ChatCombined
- tatsu-lab/alpaca
- nomic-ai/gpt4all_prompt_generations
- Dahoas/full-hh-rlhf
- jeffwan/sharegpt_vicuna
- HuggingFaceH4/databricks_dolly_15k
---
|
alexandrainst/da-hatespeech-detection-base
|
[
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"da",
"transformers",
"license:cc-by-sa-4.0"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1,719 | null |
---
license: bsd-3-clause
---
Salesforce's [CodeGen](https://github.com/salesforce/CodeGen) 350M mono model, ported to ggml and quantized to run on Apple Silicon M1/M2 CPUs.
Please refer to this [tutorial](https://github.com/virtualramblas/codegen-quantization-M1) to learn more about the process followed to achieve this result, and feel free to leave a star if you find it useful. Thanks.
|
alexandrainst/da-subjectivivity-classification-base
|
[
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"da",
"dataset:DDSC/twitter-sent",
"dataset:DDSC/europarl",
"transformers",
"license:cc-by-sa-4.0"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 846 | null |
---
license: openrail
datasets:
- fka/awesome-chatgpt-prompts
- anon8231489123/ShareGPT_Vicuna_unfiltered
- togethercomputer/RedPajama-Data-1T
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
library_name: adapter-transformers
tags:
- not-for-all-audiences
- code
- text-generation-inference
- legal
- finance
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alexandrainst/da-ned-base
|
[
"pytorch",
"tf",
"xlm-roberta",
"text-classification",
"da",
"transformers",
"license:cc-by-sa-4.0"
] |
text-classification
|
{
"architectures": [
"XLMRobertaForSequenceClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 25 | null |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### ceramland Dreambooth model trained by kikokikona with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
Daiki/scibert_scivocab_uncased-finetuned-cola
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- generated_from_trainer
datasets:
- lewtun/code_alpaca
model-index:
- name: large-model-finetuned-code-alpaca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# large-model-finetuned-code-alpaca
This model is a fine-tuned version of [bigcode/large-model](https://huggingface.co/bigcode/large-model) on the lewtun/code_alpaca dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1605
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1672 | 0.03 | 1 | 1.1605 |
### Framework versions
- Transformers 4.28.1
- Pytorch 1.13.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.3
|
DaisyMak/bert-finetuned-squad-transformerfrozen-testtoken
|
[
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
license: creativeml-openrail-m
duplicated_from: SirVeggie/salutemix
---
# SaluteMix model
SaluteMix is yet another semi-realistic mix. The name comes from the 99% success rate when using the salute tag. All previews are pure txt2img.
I highly recommend the `EasyNegative` embedding, or `(low quality, worst quality:1.4), (bad anatomy), extra digit, fewer digits, (extra arms:1.2), bad hands, by (bad-artist:0.6), bad-image-v2-39000`, as the negative prompt.
It should be fairly competent at NSFW content.
CivitAI page: https://civitai.com/models/19238/salutemix
**Negative embeddings:** \
https://huggingface.co/datasets/gsdf/EasyNegative \
https://huggingface.co/nick-x-hacker/bad-artist \
https://huggingface.co/Xynon/models/tree/main/experimentals/TI
## Recipe
```
animebrush3 = custom mix with wlop style (details missing)
cn-any = Counterfeit-V2.5 + (nixeu-any - anythingV3) @1.0
cn-f = Counterfeit-V2.5 + (nixeu-f - wd1.3) @1.0
cn-flo = Counterfeit-V2.5 + (floydian_nixeu - sd1.4) @1.0
cn-temp = cn-any + cn-f @0.4
cn-full = cn-temp + cn-flo @0.6
temp1 = AOM2_nsfw + 7th_anime_v3_C @0.5
cn-mix = cn-full + temp1 @0.5
step1 = animebrush3 + 2dn_1 @0.5
temp2 = chilloutmix_ni + grapefruitv4 @0.3
step2 = step1 + temp2 @0.25
SaluteMix = step2 + cn-mix @0.2
```
## Links to models
https://civitai.com/models/4807/2dn \
https://civitai.com/models/6424/chilloutmix \
https://civitai.com/models/2583/grapefruit-hentai-model \
Floydian's nixeu: https://huggingface.co/FloydianSound/Nixeu_Diffusion_v1-5 \
Orange mixes: https://huggingface.co/WarriorMama777/OrangeMixs \
7th_anime: https://huggingface.co/syaimu/7th_Layer \
Counterfeit: https://huggingface.co/gsdf/Counterfeit-V2.5 \
Nixeu models: https://huggingface.co/SirVeggie/nixeu \
https://huggingface.co/SirVeggie/wlop
|
Danbi/distilgpt2-finetuned-wikitext2
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
# Vocabulary Trimmed [vocabtrimmer/xlm-roberta-base-xnli-es](https://huggingface.co/vocabtrimmer/xlm-roberta-base-xnli-es): `vocabtrimmer/xlm-roberta-base-xnli-es-trimmed-es`
This model is a trimmed version of [vocabtrimmer/xlm-roberta-base-xnli-es](https://huggingface.co/vocabtrimmer/xlm-roberta-base-xnli-es) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | vocabtrimmer/xlm-roberta-base-xnli-es | vocabtrimmer/xlm-roberta-base-xnli-es-trimmed-es |
|:---------------------------|:----------------------------------------|:---------------------------------------------------|
| parameter_size_full | 278,045,955 | 152,921,859 |
| parameter_size_embedding | 192,001,536 | 66,877,440 |
| vocab_size | 250,002 | 87,080 |
| compression_rate_full | 100.0 | 55.0 |
| compression_rate_embedding | 100.0 | 34.83 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|:--------------------|----------------:|
| es | vocabtrimmer/mc4_validation | text | es | validation | | 2 |
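As a quick sanity check, the trimmed checkpoint can be loaded with the standard `transformers` classes just like the original model; the Spanish premise/hypothesis pair below is only an illustrative example, not taken from XNLI.
```python
# Minimal sketch: the trimmed XNLI model is a drop-in replacement for the original.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "vocabtrimmer/xlm-roberta-base-xnli-es-trimmed-es"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# XNLI-style premise/hypothesis pair (illustrative only)
inputs = tokenizer("El cielo está despejado.", "Hoy no llueve.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # probabilities over the NLI labels
```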
|
DannyMichael/ECU911
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 10.70 +/- 5.57
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Darein/Def
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
# Vocabulary Trimmed [vocabtrimmer/xlm-roberta-base-xnli-ar](https://huggingface.co/vocabtrimmer/xlm-roberta-base-xnli-ar): `vocabtrimmer/xlm-roberta-base-xnli-ar-trimmed-ar`
This model is a trimmed version of [vocabtrimmer/xlm-roberta-base-xnli-ar](https://huggingface.co/vocabtrimmer/xlm-roberta-base-xnli-ar) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | vocabtrimmer/xlm-roberta-base-xnli-ar | vocabtrimmer/xlm-roberta-base-xnli-ar-trimmed-ar |
|:---------------------------|:----------------------------------------|:---------------------------------------------------|
| parameter_size_full | 278,045,955 | 124,345,347 |
| parameter_size_embedding | 192,001,536 | 38,300,928 |
| vocab_size | 250,002 | 49,871 |
| compression_rate_full | 100.0 | 44.72 |
| compression_rate_embedding | 100.0 | 19.95 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|:--------------------|----------------:|
| ar | vocabtrimmer/mc4_validation | text | ar | validation | | 2 |
|
Daryaflp/roberta-retrained_ru_covid
|
[
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
library_name: stable-baselines3
tags:
- RoombaAToB-harcodemap-punish-stagnant-no-training
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: BC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: RoombaAToB-harcodemap-punish-stagnant-no-training
type: RoombaAToB-harcodemap-punish-stagnant-no-training
metrics:
- type: mean_reward
value: -9.04 +/- 0.00
name: mean_reward
verified: false
---
# **BC** Agent playing **RoombaAToB-harcodemap-punish-stagnant-no-training**
This is a trained model of a **BC** agent playing **RoombaAToB-harcodemap-punish-stagnant-no-training**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
DataikuNLP/TinyBERT_General_4L_312D
|
[
"pytorch",
"jax",
"bert",
"arxiv:1909.10351",
"transformers"
] | null |
{
"architectures": null,
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 74 | null |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -4.77 +/- 1.31
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
DataikuNLP/average_word_embeddings_glove.6B.300d
|
[
"arxiv:1908.10084",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"license:apache-2.0"
] |
sentence-similarity
|
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 247.27 +/- 17.57
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
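A minimal sketch of what the placeholder above could look like. The repository id and the filename of the saved agent are assumptions (adjust them to match this repo), and the classic `gym<0.26` step/reset API is assumed.
```python
# Minimal sketch, assuming the agent was saved as "ppo-LunarLander-v2.zip"
# in a repo named "<user>/ppo-LunarLander-v2" (both are assumptions).
import gym
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub(
    repo_id="<user>/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)  # classic gym API assumed
    if done:
        obs = env.reset()
```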
|
DataikuNLP/distiluse-base-multilingual-cased-v1
|
[
"pytorch",
"distilbert",
"arxiv:1908.10084",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] |
sentence-similarity
|
{
"architectures": [
"DistilBertModel"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 29 | null |
---
pipeline_tag: translation
tags:
- generated_from_trainer
datasets:
- opus_books
model-index:
- name: my_awesome_opus_nooks_model
results: []
---
|
DataikuNLP/paraphrase-MiniLM-L6-v2
|
[
"pytorch",
"bert",
"arxiv:1908.10084",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] |
sentence-similarity
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 25 | null |
---
{}
---
This is a BERT-based NER model trained to detect PERSON and BRAND entities in text.
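A minimal sketch of how such a model would typically be used with the `transformers` token-classification pipeline; the model identifier below is a placeholder, since this card does not state the repository id.
```python
# Minimal sketch; "your-org/person-brand-ner" is a placeholder model id.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="your-org/person-brand-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into full entity spans
)
print(ner("Maria bought a pair of Nike sneakers in Berlin."))
# Expected to return a PERSON span for "Maria" and a BRAND span for "Nike"
```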
|
Dave/twomad-model
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Find your model_id: FelipePasquevich/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Davlan/bert-base-multilingual-cased-finetuned-amharic
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 109 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: flan-t5-large-da-multiwoz2.0_800-ep20-nonstop
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-large-da-multiwoz2.0_800-ep20-nonstop
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3410
- Accuracy: 43.8809
- Num: 7358
- Gen Len: 15.5621
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 1799
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Num | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----:|:-------:|
| 0.6162 | 2.98 | 1000 | 0.3647 | 37.6718 | 7358 | 14.561 |
| 0.3779 | 5.95 | 2000 | 0.3429 | 40.5537 | 7358 | 15.6453 |
| 0.3392 | 8.93 | 3000 | 0.3363 | 42.0162 | 7358 | 15.4191 |
| 0.3122 | 11.9 | 4000 | 0.3379 | 43.6228 | 7358 | 15.4768 |
| 0.2969 | 14.88 | 5000 | 0.3381 | 44.0703 | 7358 | 15.6844 |
| 0.2845 | 17.86 | 6000 | 0.3413 | 43.946 | 7358 | 15.5164 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.5.1
- Tokenizers 0.12.1
|
Davlan/bert-base-multilingual-cased-finetuned-luganda
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 16 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: my_awesome_opus_books_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5654
- Bleu: 18.3332
- Gen Len: 13.6667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 1 | 0.5654 | 18.3332 | 13.6667 |
| No log | 2.0 | 2 | 0.5654 | 18.3332 | 13.6667 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
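A minimal inference sketch for a checkpoint of this fine-tuned t5-small translator. The local checkpoint path and the English-to-French direction are assumptions, since the card does not state the dataset configuration or language pair.
```python
# Minimal sketch; "./my_awesome_opus_books_model" and the en->fr direction
# are assumptions -- adjust to the actual checkpoint path and language pair.
from transformers import pipeline

translator = pipeline(
    "translation_en_to_fr",
    model="./my_awesome_opus_books_model",
)
print(translator("The book is on the table.", max_length=40))
```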
|
Davlan/bert-base-multilingual-cased-finetuned-naija
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 13 | null |
Access to model iffatN/chatty_gtp2 is restricted and you are not in the authorized list. Visit https://huggingface.co/iffatN/chatty_gtp2 to ask for access.
|
Davlan/bert-base-multilingual-cased-finetuned-swahili
|
[
"pytorch",
"tf",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 67 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: AdonaiHS/SoccerTwos7
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Davlan/bert-base-multilingual-cased-finetuned-wolof
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
Access to model SanidhyaSingh/AI is restricted and you are not in the authorized list. Visit https://huggingface.co/SanidhyaSingh/AI to ask for access.
|
Davlan/byt5-base-yor-eng-mt
|
[
"pytorch",
"t5",
"text2text-generation",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12 | null |
---
library_name: stable-baselines3
tags:
- RoombaAToB-no-training-far
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: BC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: RoombaAToB-no-training-far
type: RoombaAToB-no-training-far
metrics:
- type: mean_reward
value: -97.11 +/- 0.00
name: mean_reward
verified: false
---
# **BC** Agent playing **RoombaAToB-no-training-far**
This is a trained model of a **BC** agent playing **RoombaAToB-no-training-far**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Davlan/distilbert-base-multilingual-cased-ner-hrl
|
[
"pytorch",
"tf",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible",
"has_space"
] |
token-classification
|
{
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 123,856 | null |
# MiniGPT-4: Enhancing Vision-language Understanding with Advanced Large Language Models
[Deyao Zhu](https://tsutikgiau.github.io/)* (On Job Market!), [Jun Chen](https://junchen14.github.io/)* (On Job Market!), [Xiaoqian Shen](https://xiaoqian-shen.github.io), [Xiang Li](https://xiangli.ac.cn), and [Mohamed Elhoseiny](https://www.mohamed-elhoseiny.com/). *Equal Contribution
**King Abdullah University of Science and Technology**
## Online Demo
Click the image to chat with MiniGPT-4 about your images
[](https://minigpt-4.github.io)
## Examples
| | |
:-------------------------:|:-------------------------:
 | 
 | 
More examples can be found in the [project page](https://minigpt-4.github.io).
## Introduction
- MiniGPT-4 aligns a frozen visual encoder from BLIP-2 with a frozen LLM, Vicuna, using just one projection layer.
- We train MiniGPT-4 in two stages. The first, traditional pretraining stage is trained on roughly 5 million aligned image-text pairs in 10 hours using 4 A100s. After the first stage, Vicuna is able to understand the image, but its generation ability is heavily impacted.
- To address this issue and improve usability, we propose a novel way to create high-quality image-text pairs by the model itself and ChatGPT together. Based on this, we then create a small (3500 pairs in total) yet high-quality dataset.
- The second finetuning stage is trained on this dataset in a conversation template to significantly improve its generation reliability and overall usability. To our surprise, this stage is computationally efficient and takes only around 7 minutes with a single A100.
- MiniGPT-4 yields many emerging vision-language capabilities similar to those demonstrated in GPT-4.

## Getting Started
### Installation
**1. Prepare the code and the environment**
Git clone our repository, create a python environment and activate it via the following command
```bash
git clone https://github.com/Vision-CAIR/MiniGPT-4.git
cd MiniGPT-4
conda env create -f environment.yml
conda activate minigpt4
```
**2. Prepare the pretrained Vicuna weights**
The current version of MiniGPT-4 is built on the v0 version of Vicuna-13B.
Please refer to our instruction [here](PrepareVicuna.md)
to prepare the Vicuna weights.
The final weights would be in a single folder with the following structure:
```
vicuna_weights
├── config.json
├── generation_config.json
├── pytorch_model.bin.index.json
├── pytorch_model-00001-of-00003.bin
...
```
Then, set the path to the vicuna weight in the model config file
[here](minigpt4/configs/models/minigpt4.yaml#L16) at Line 16.
**3. Prepare the pretrained MiniGPT-4 checkpoint**
To play with our pretrained model, download the pretrained checkpoint
[here](https://drive.google.com/file/d/1a4zLvaiDBr-36pasffmgpvH5P7CKmpze/view?usp=share_link).
Then, set the path to the pretrained checkpoint in the evaluation config file
in [eval_configs/minigpt4_eval.yaml](eval_configs/minigpt4_eval.yaml#L10) at Line 11.
### Launching Demo Locally
Try out our demo [demo.py](demo.py) on your local machine by running
```
python demo.py --cfg-path eval_configs/minigpt4_eval.yaml --gpu-id 0
```
Here, we load Vicuna in 8 bit by default to save some GPU memory.
Besides, the default beam search width is 1.
Under this setting, the demo costs about 23G of GPU memory.
If you have a more powerful GPU with larger GPU memory, you can run the model
in 16 bit by setting low_resource to False in the config file
[minigpt4_eval.yaml](eval_configs/minigpt4_eval.yaml) and use a larger beam search width.
### Training
The training of MiniGPT-4 contains two alignment stages.
**1. First pretraining stage**
In the first pretraining stage, the model is trained on image-text pairs from the Laion and CC datasets
to align the vision and language model. To download and prepare the datasets, please check
our [first stage dataset preparation instruction](dataset/README_1_STAGE.md).
After the first stage, the visual features are mapped and can be understood by the language
model.
To launch the first stage training, run the following command. In our experiments, we use 4 A100s.
You can change the save path in the config file
[train_configs/minigpt4_stage1_pretrain.yaml](train_configs/minigpt4_stage1_pretrain.yaml)
```bash
torchrun --nproc-per-node NUM_GPU train.py --cfg-path train_configs/minigpt4_stage1_pretrain.yaml
```
A MiniGPT-4 checkpoint with only stage one training can be downloaded
[here](https://drive.google.com/file/d/1u9FRRBB3VovP1HxCAlpD9Lw4t4P6-Yq8/view?usp=share_link).
Compared to the model after stage two, this checkpoint frequently generates incomplete and repeated sentences.
**2. Second finetuning stage**
In the second stage, we use a small, high-quality image-text pair dataset that we created ourselves
and convert it to a conversation format to further align MiniGPT-4.
To download and prepare our second stage dataset, please check our
[second stage dataset preparation instruction](dataset/README_2_STAGE.md).
To launch the second stage alignment,
first specify the path to the checkpoint file trained in stage 1 in
[train_configs/minigpt4_stage2_finetune.yaml](train_configs/minigpt4_stage2_finetune.yaml).
You can also specify the output path there.
Then, run the following command. In our experiments, we use 1 A100.
```bash
torchrun --nproc-per-node NUM_GPU train.py --cfg-path train_configs/minigpt4_stage2_finetune.yaml
```
After the second stage alignment, MiniGPT-4 is able to talk about the image coherently and in a user-friendly way.
## Acknowledgement
+ [BLIP2](https://huggingface.co/docs/transformers/main/model_doc/blip-2) The model architecture of MiniGPT-4 follows BLIP-2. Don't forget to check out this great open-source work if you don't already know it!
+ [Lavis](https://github.com/salesforce/LAVIS) This repository is built upon Lavis!
+ [Vicuna](https://github.com/lm-sys/FastChat) The fantastic language ability of Vicuna with only 13B parameters is just amazing. And it is open-source!
If you're using MiniGPT-4 in your research or applications, please cite using this BibTeX:
```bibtex
@misc{zhu2022minigpt4,
title={MiniGPT-4: Enhancing Vision-language Understanding with Advanced Large Language Models},
      author={Deyao Zhu and Jun Chen and Xiaoqian Shen and Xiang Li and Mohamed Elhoseiny},
year={2023},
}
```
## License
This repository is under [BSD 3-Clause License](LICENSE.md).
Many codes are based on [Lavis](https://github.com/salesforce/LAVIS) with
BSD 3-Clause License [here](LICENSE_Lavis.md).
|
Davlan/m2m100_418M-eng-yor-mt
|
[
"pytorch",
"m2m_100",
"text2text-generation",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"M2M100ForConditionalGeneration"
],
"model_type": "m2m_100",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9 | null |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Find your model_id: FelipePasquevich/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Davlan/m2m100_418M-yor-eng-mt
|
[
"pytorch",
"m2m_100",
"text2text-generation",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"M2M100ForConditionalGeneration"
],
"model_type": "m2m_100",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6 | null |
This is a realistic merge model, adjusted to reproduce Japanese and other Asian subjects, with particular attention to affinity with japanese doll likeness.
We would like to thank the creators of the models we used for making this merge model available to the public.
Models used in the merge:
DreamShaper
https://civitai.com/models/4384/dreamshaper
NeverEnding Dream (NED)
https://civitai.com/models/10028/neverending-dream-ned
MUSE_v1
https://civitai.com/models/13564/musev1
Colorful
https://civitai.com/models/7279/colorful
This model is subject to the terms of use of the models it was merged from. In addition, the following uses are strictly prohibited:
- Violent expressions
- Child pornography
- Sexual depictions of minors
This model is licensed under the CreativeML Open RAIL-M license. We accept no responsibility for any problems arising from the use of this model or for any images generated with it; please use it with that understanding. Publication of this merge model may also be suspended if the licenses of the merged models change.
|
Davlan/mbart50-large-yor-eng-mt
|
[
"pytorch",
"mbart",
"text2text-generation",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MBartForConditionalGeneration"
],
"model_type": "mbart",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
Access to model itsDesTV/GOD_AI is restricted and you are not in the authorized list. Visit https://huggingface.co/itsDesTV/GOD_AI to ask for access.
|
Davlan/mt5-small-en-pcm
|
[
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MT5ForConditionalGeneration"
],
"model_type": "mt5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9 | null |
---
tags:
- generated_from_trainer
model-index:
- name: flan-t5-large-da-multiwoz2.1_400-ep15-nonstop
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-large-da-multiwoz2.1_400-ep15-nonstop
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.5.1
- Tokenizers 0.12.1
|
Davlan/mt5_base_eng_yor_mt
|
[
"pytorch",
"mt5",
"text2text-generation",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MT5ForConditionalGeneration"
],
"model_type": "mt5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: hussamalafandi/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Davlan/mt5_base_yor_eng_mt
|
[
"pytorch",
"mt5",
"text2text-generation",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MT5ForConditionalGeneration"
],
"model_type": "mt5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
library_name: stable-baselines3
tags:
- RoombaAToB-mid-goal
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: RoombaAToB-mid-goal
type: RoombaAToB-mid-goal
metrics:
- type: mean_reward
value: 595.49 +/- 0.00
name: mean_reward
verified: false
---
# **PPO** Agent playing **RoombaAToB-mid-goal**
This is a trained model of a **PPO** agent playing **RoombaAToB-mid-goal**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Davlan/xlm-roberta-base-finetuned-chichewa
|
[
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
library_name: stable-baselines3
tags:
- RoombaAToB-no-theta
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: BC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: RoombaAToB-no-theta
type: RoombaAToB-no-theta
metrics:
- type: mean_reward
value: -3.88 +/- 0.00
name: mean_reward
verified: false
---
# **BC** Agent playing **RoombaAToB-no-theta**
This is a trained model of a **BC** agent playing **RoombaAToB-no-theta**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Davlan/xlm-roberta-base-finetuned-english
|
[
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
tags:
- generated_from_trainer
model-index:
- name: flan-t5-large-da-multiwoz2.0_400-ep15-nonstop
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-large-da-multiwoz2.0_400-ep15-nonstop
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.5.1
- Tokenizers 0.12.1
|
Davlan/xlm-roberta-base-finetuned-kinyarwanda
|
[
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 61 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: andrea-silvi/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Davlan/xlm-roberta-base-finetuned-shona
|
[
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
# Drug-Drug-Interaction-Classification
Drug to Drug Interaction Classifier
An innovative approach was developed to address a crucial challenge in drug-drug interaction research. While existing state-of-the-art link prediction models rely on prior knowledge of a drug's interactions with other drugs, our solution uses CatBoost to classify potential interactions based solely on intrinsic drug properties.
The classifier achieves an accuracy of 0.85 and an AUC-ROC score of 0.86, providing a more efficient and cost-effective way to predict drug interactions, particularly for new drugs without prior interaction data.
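The card does not fix a concrete feature set, so the following is only a hedged sketch of how a CatBoost classifier over intrinsic drug-pair descriptors might be set up; the feature matrix `X`, the labels `y`, and the file names are assumptions.
```python
# Minimal sketch, assuming X holds intrinsic descriptors for drug pairs
# (e.g. concatenated physicochemical features) and y holds binary interaction labels.
import numpy as np
from catboost import CatBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

X = np.load("drug_pair_features.npy")   # assumed precomputed feature matrix
y = np.load("drug_pair_labels.npy")     # assumed binary interaction labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

clf = CatBoostClassifier(iterations=500, depth=6, learning_rate=0.1, verbose=0)
clf.fit(X_train, y_train)

preds = clf.predict(X_test)
proba = clf.predict_proba(X_test)[:, 1]
print("accuracy:", accuracy_score(y_test, preds))
print("AUC-ROC :", roc_auc_score(y_test, proba))
```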
|
Davlan/xlm-roberta-base-finetuned-somali
|
[
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# hlyu/albert-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('hlyu/albert-base-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('hlyu/albert-base-v2')
model = AutoModel.from_pretrained('hlyu/albert-base-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=hlyu/albert-base-v2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 5055 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 2000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"eps": 1e-06,
"lr": 0.0001
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: AlbertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Davlan/xlm-roberta-base-finetuned-swahili
|
[
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 40 | null |
---
license: other
inference: false
---
# Quantised GGMLs of alpaca-lora-65B
Quantised 4bit and 5bit GGMLs of [changsung's alpaca-lora-65B](https://huggingface.co/chansung/alpaca-lora-65b) for CPU inference with [llama.cpp](https://github.com/ggerganov/llama.cpp).
I also have 4bit GPTQ files for GPU inference available here: [TheBloke/alpaca-lora-65B-GPTQ-4bit](https://huggingface.co/TheBloke/alpaca-lora-65B-GPTQ-4bit).
## THE FILES IN THE MAIN BRANCH REQUIRE THE LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!
llama.cpp recently made another breaking change to its quantisation methods - https://github.com/ggerganov/llama.cpp/pull/1508
I have quantised the GGML files in this repo with the latest version. Therefore you will require llama.cpp compiled on May 19th or later (commit `2d5db48` or later) to use them.
For files compatible with the previous version of llama.cpp, please see branch `previous_llama_ggmlv2`.
## Provided files
| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| `alpaca-lora-65B.ggmlv3.q4_0.bin` | q4_0 | 4bit | 40.8GB | 43GB | 4-bit. |
| `alpaca-lora-65B.ggmlv3.q4_1.bin` | q4_1 | 4bit | 44.9GB | 43GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
| `alpaca-lora-65B.ggmlv3.q5_0.bin` | q5_0 | 5bit | 44.9GB | 47GB | 5-bit. Higher quality than 4-bit, at the cost of slightly higher resource usage. |
| `alpaca-lora-65B.ggmlv3.q5_1.bin` | q5_1 | 5bit | 49GB | 51GB | 5-bit. Slightly higher resource usage and quality than q5_0. |
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 8 -m alpaca-lora-65B.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Write a story about llamas
### Response:"
```
Change `-t 8` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
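If you prefer to call the model from Python, the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) bindings can load the same GGML files. This is only a hedged sketch (not part of the original instructions); the filename and sampling parameters simply mirror the command line above, and you may need a llama-cpp-python build that matches this GGML version:
```python
# Minimal sketch using llama-cpp-python (assumed alternative to the CLI above).
from llama_cpp import Llama

llm = Llama(model_path="alpaca-lora-65B.ggmlv3.q4_0.bin", n_ctx=2048, n_threads=8)
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a story about llamas\n\n### Response:"
)
output = llm(prompt, max_tokens=256, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```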
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
Note: at this time text-generation-webui may not support the new May 19th llama.cpp quantisation methods for q4_0, q4_1 and q8_0 files.
# Original model card not provided
No model card was provided in [changsung's original repository](https://huggingface.co/chansung/alpaca-lora-65b).
Based on the name, I assume this is the result of fine-tuning on the original GPT 3.5 Alpaca dataset. It is unknown whether the original Stanford data was used, or the [cleaned tloen/alpaca-lora variant](https://github.com/tloen/alpaca-lora).
|
Davlan/xlm-roberta-base-finetuned-wolof
|
[
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
# 📋 BUOD: Text Summarization Model for the Filipino Language Directory
[](https://huggingface.co/jamesesguerra/distilbart-cnn-12-6-finetuned-1.3.1) [](https://huggingface.co/0xhaz/bert2bert-cnn_dailymail-fp16-finetuned-1.0.0) 
Authors: [James Esguerra](https://huggingface.co/jamesesguerra), [Julia Avila](), [Hazielle Bugayong](https://huggingface.co/0xhaz)
> Foreword: This research was done in two parts: gathering the data and running transformer models,
> namely distilBART and bert2bert. Below is the step-by-step process of the experimentation in the study:
## 📚 Steps
- 📝 **Gathering the data**
- 🔧 **Initializing the transformer models; fine-tuning of the models:**
  - via Google Colab
  - via Google Colab (Local runtime)
  - via Jupyter Notebook
## 📝 Gathering data
An [article scraper](https://github.com/jamesesguerra/article_scraper) was used in this experimentation, which can gather bodies of text from various news sites. The data gathered was used to pre-train and fine-tune the models in the next step. The repository also includes instructions on how to use the article scraper.
## 🔧 Initialization of transformer models
#### via Google Colab
Two models, distilBART and bert2bert, were used to compare abstractive text summarization performance; a quick usage sketch of the fine-tuned distilBART follows the list below. They can be found here:
- [distilBART](https://colab.research.google.com/drive/1Lv78nHqQh2I7KaFkUzWsn_MXsyP_PP1I?authuser=3#scrollTo=moK3d7mTQ1v-)
- [bert2bert](https://colab.research.google.com/drive/1Lv78nHqQh2I7KaFkUzWsn_MXsyP_PP1I?authuser=3#scrollTo=moK3d7mTQ1v-)
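As referenced above, here is a hedged usage sketch of the fine-tuned distilBART checkpoint (not taken from the notebooks), using the model id from the badge at the top of this card:
```python
# Hedged sketch: the model id comes from the badge above; the article text is a placeholder.
from transformers import pipeline

summarizer = pipeline("summarization", model="jamesesguerra/distilbart-cnn-12-6-finetuned-1.3.1")
article = "Ilagay dito ang buong artikulong nais ibuod."  # placeholder Filipino article
summary = summarizer(article, max_length=120, min_length=30, do_sample=False)
print(summary[0]["summary_text"])
```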
#### via Google Colab Local Runtime
##### Dependencies
- Jupyter Notebook
- Anaconda
- _Optional:_ CUDA Toolkit for Nvidia, requires an account to install
- Tensorflow
##### Installing dependencies
Create an Anaconda environment. This environment can also be used for TensorFlow, which links your GPU to Google Colab's local runtime:
```sh
conda create -n tf-gpu
conda activate tf-gpu
```
##### Optional Step: GPU Utilization (if you are using an external GPU)
Next, install the **CUDA toolkit**; this is the version that was used in this experiment. You may find a more compatible version for your hardware:
```sh
conda install -c conda-forge cudatoolkit=11.2 cudnn=8.1.0
```
Then, upgrade pip and install TensorFlow:
```sh
pip install --upgrade pip
pip install "tensorflow<2.11" --user
```
Now, check if TensorFlow has been configured to use the GPU.
Type in the terminal:
```sh
python
```
Next, type the following to verify:
```python
import tensorflow as tf
tf.test.is_built_with_cuda()
```
If it returns `True`, you have successfully initialized the environment with your external GPU. If not, you may follow the tutorials found here:
- CUDA Toolkit Tutorial [here](https://medium.com/geekculture/install-cuda-and-cudnn-on-windows-linux-52d1501a8805)
- Creating an Anaconda environment [step-by-step](https://stackoverflow.com/questions/51002045/how-to-make-jupyter-notebook-to-run-on-gpu)
- Installing Tensorflow locally using [this tutorial](https://www.tensorflow.org/install/pip#windows-native_1)
##### Connecting to a Google Colab Local Runtime
To connect this on a Google Colab Local Runtime, [this tutorial](https://research.google.com/colaboratory/local-runtimes.html) was used.
First, install Jupyter notebook (if you haven't) and enable server permissions:
```sh
pip install jupyter_http_over_ws
jupyter serverextension enable --py jupyter_http_over_ws
```
Next, start and authenticate the server:
```sh
jupyter notebook --NotebookApp.allow_origin='https://colab.research.google.com' --port=8888 --NotebookApp.port_retries=0
```
You can now copy the token url and paste it on your Google Colab.
#### Running the notebook using Jupyter Notebook
##### Dependencies
- Jupyter Notebook
- Anaconda
- _Optional:_ CUDA Toolkit for Nvidia, requires an account to install
- Tensorflow
Download the notebooks and save them in your chosen directory.
Create an environment where you can run the notebook via Anaconda:
```sh
conda create -n env
conda activate env
```
**Note:** You may also opt to install the CUDA toolkit and TensorFlow in this environment.
Next, run the notebooks via Jupyter Notebook.
```sh
jupyter notebook
```
##### After you're done
Deactivate the environment and also disable the server using the commands in your console.
```sh
conda deactivate
```
```sh
jupyter serverextension disable --py jupyter_http_over_ws
```
## 🔗 Additional Links/ Directory
Here are some links to resources and or references.
| Name | Link |
| ------ | ------ |
| Ateneo Social Computing Lab | https://huggingface.co/ateneoscsl |
|
Davlan/xlm-roberta-base-masakhaner
|
[
"pytorch",
"xlm-roberta",
"token-classification",
"arxiv:2103.11811",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"XLMRobertaForTokenClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
license: bsd-3-clause
---
Salesforce's [CodeGen](https://github.com/salesforce/CodeGen) 6B mono model ported to ggml and quantized to be executed on Apple Silicon M1/M2 CPU.
Please refer to this [tutorial](https://github.com/virtualramblas/codegen-quantization-M1) to learn more about the process that has been followed to achieve this result and feel free to leave a star if you find it useful. Thanks.
|
Davlan/xlm-roberta-base-ner-hrl
|
[
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"XLMRobertaForTokenClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 760 | null |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# hlyu/albert_another
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('hlyu/albert_another')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('hlyu/albert_another')
model = AutoModel.from_pretrained('hlyu/albert_another')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=hlyu/albert_another)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 5055 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 2000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"eps": 1e-06,
"lr": 0.0001
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: AlbertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Davlan/xlm-roberta-base-wikiann-ner
|
[
"pytorch",
"tf",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"XLMRobertaForTokenClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 235 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-fr-explorer-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-fr-explorer-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.2
|
Dawn576/Dawn
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-04-19T23:16:36Z |
---
library_name: stable-baselines3
tags:
- RoombaAToB-long-goal
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: RoombaAToB-long-goal
type: RoombaAToB-long-goal
metrics:
- type: mean_reward
value: 98.32 +/- 0.00
name: mean_reward
verified: false
---
# **PPO** Agent playing **RoombaAToB-long-goal**
This is a trained model of a **PPO** agent playing **RoombaAToB-long-goal**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Dazai/Ko
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- shared-task
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bsc-bio-ehr-es-finetuned-ner-1
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: shared-task
type: shared-task
config: Shared
split: validation
args: Shared
metrics:
- name: Precision
type: precision
value: 0.28507462686567164
- name: Recall
type: recall
value: 0.3560111835973905
- name: F1
type: f1
value: 0.3166183174471612
- name: Accuracy
type: accuracy
value: 0.8444321635810997
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bsc-bio-ehr-es-finetuned-ner-1
This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) on the shared-task dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6021
- Precision: 0.2851
- Recall: 0.3560
- F1: 0.3166
- Accuracy: 0.8444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 59 | 0.6644 | 0.2234 | 0.2600 | 0.2403 | 0.8198 |
| No log | 2.0 | 118 | 0.5786 | 0.1997 | 0.2507 | 0.2223 | 0.8331 |
| No log | 3.0 | 177 | 0.6083 | 0.2732 | 0.3187 | 0.2942 | 0.8379 |
| No log | 4.0 | 236 | 0.6032 | 0.2855 | 0.3486 | 0.3139 | 0.8366 |
| No log | 5.0 | 295 | 0.6021 | 0.2851 | 0.3560 | 0.3166 | 0.8444 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
Dbluciferm3737/U
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 38.70 +/- 26.83
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Ddarkros/Test
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
datasets:
- togethercomputer/RedPajama-Data-1T
---
# MPT-1b-RedPajama-200b
MPT-1b-RedPajama-200b is a 1.3 billion parameter decoder-only transformer trained on the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T).
The model was trained for 200B tokens by sampling from the subsets of the RedPajama dataset in the same proportions as were used by the [Llama series of models](https://arxiv.org/abs/2302.13971).
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
April 20, 2023
## How to Use
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom model architecture `MosaicGPT` that is not yet part of the `transformers` package.
`MosaicGPT` includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALIBI](https://arxiv.org/abs/2108.12409), QK LayerNorm, and more.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained('mosaicml/mpt-1b-redpajama-200b', trust_remote_code=True)
```
To use the optimized triton implementation of FlashAttention, you can load with `attn_impl='triton'` and move the model to `bfloat16` like so:
```python
import torch
model = transformers.AutoModelForCausalLM.from_pretrained('mosaicml/mpt-1b-redpajama-200b', trust_remote_code=True, attn_impl='triton')
model.to(device='cuda:0', dtype=torch.bfloat16)
```
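As a hedged end-to-end sketch (not part of the original card), generation can then be driven with the `EleutherAI/gpt-neox-20b` tokenizer named in the Training Data section below; the prompt and generation settings are placeholders:
```python
# Hedged sketch: assumes a CUDA device and that the custom MosaicGPT class
# supports the standard generate() API.
import torch
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = transformers.AutoModelForCausalLM.from_pretrained(
    "mosaicml/mpt-1b-redpajama-200b", trust_remote_code=True
)
model.to(device="cuda:0", dtype=torch.bfloat16)

inputs = tokenizer("The RedPajama dataset is", return_tensors="pt").to("cuda:0")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```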
## Model Description
This model uses the MosaicML LLM codebase, which can be found in the [MosaicML Examples Repository](https://github.com/mosaicml/examples/tree/v0.0.4/examples/llm).
The architecture is a modification of a standard decoder-only transformer.
The transformer has 24 layers, 16 attention heads, and width 2048.
The model has been modified from a standard transformer in the following ways:
* It uses ALiBi and does not use positional embeddings.
* It uses QK LayerNorm.
* It does not use biases.
## Training Data
The model was trained for 200B tokens (batch size 2200, sequence length 2048). It was trained on the following data mix:
* 67% RedPajama Common Crawl
* 15% [C4](https://huggingface.co/datasets/c4)
* 4.5% RedPajama GitHub
* 4.5% RedPajama Wikipedia
* 4.5% RedPajama Books
* 2.5% RedPajama Arxiv
* 2% RedPajama StackExchange
This is the same mix of data as was used in the [Llama series of models](https://arxiv.org/abs/2302.13971).
Each sample was chosen from one of the datasets, with the dataset selected with the probability specified above.
The examples were shuffled within each dataset.
Each example was constructed from as many sequences from that dataset as were necessary to fill the 2048 sequence length.
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
## Training Configuration
This model was trained on 440 A100-40GBs for about half a day using the [MosaicML Platform](https://www.mosaicml.com/platform). The model was trained with sharded data parallelism using FSDP.
## Acknowledgements
This model builds on the work of [Together](https://www.together.xyz), which created the RedPajama dataset with the goal of mimicking the training data used to create the Llama series of models.
We gratefully acknowledge the hard work of the team that put together this dataset, and we hope this model serves as a useful companion to that work.
We also gratefully acknowledge the work of the researchers who created the Llama series of models, which was the impetus for our efforts and those who worked on the RedPajama project.
|
DeBERTa/deberta-v2-xxlarge
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
language:
- es
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Small spanish - ROGRANMAR
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small spanish - ROGRANMAR
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the minds dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Wer: 100.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.1
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- training_steps: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| No log | 5.0 | 10 | 141.9978 | 1248.1793 |
| No log | 10.0 | 20 | nan | 100.0 |
| 77.0413 | 15.0 | 30 | nan | 100.0 |
| 77.0413 | 20.0 | 40 | nan | 100.0 |
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
DeadBeast/emoBERTTamil
|
[
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:tamilmixsentiment",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 35 | null |
---
license: mit
pipeline_tag: token-classification
widget:
- text: "In addition to manufacturing major components for Typhoon, the site builds the aft fuselage and the horizontal and vertical tail planes for every F-35 military aircraft under contract to the prime contractor, Lockheed Martin."
example_title: "Example 1"
- text: "The check needs about 50-70 man-hours to complete, and is usually performed in an aircraft hangar."
example_title: "Example 2"
- text: "With the development of high-altitude and long-endurance unmanned aerial vehicles (UAVs), optimization of the coordinated energy dispatch of UAVs’ energy management systems has become a key target in the research of electric UAVs."
example_title: "Example 3"
---
This is a Named-entity Recognition model for Aerospace that identifies entities within the following categories: VEHICLE, COMPONENT, TASK, and FACILITY.
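A hedged usage sketch (the model path below is a placeholder, since the card does not state the repository id):
```python
# Hedged sketch: replace "path/to/this-aerospace-ner-model" with the actual repository id.
from transformers import pipeline

ner = pipeline("token-classification", model="path/to/this-aerospace-ner-model", aggregation_strategy="simple")
text = ("In addition to manufacturing major components for Typhoon, the site builds "
        "the aft fuselage for every F-35 military aircraft.")
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```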
|
DeadBeast/roberta-base-pretrained-mr-2
|
[
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2_finetuned_SparC_Hugging_face
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_finetuned_SparC_Hugging_face
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9575
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9133 | 1.0 | 95 | 0.9611 |
| 0.9644 | 2.0 | 190 | 0.9500 |
| 0.8763 | 3.0 | 285 | 0.9580 |
| 0.8368 | 4.0 | 380 | 0.9509 |
| 0.8647 | 5.0 | 475 | 0.9518 |
| 0.865 | 6.0 | 570 | 0.9575 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
DeadBeast/roberta-base-pretrained-mr
|
[
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: platzi-vit-model-geovany-uribe
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9849624060150376
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-vit-model-geovany-uribe
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0277
- Accuracy: 0.9850
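A hedged inference sketch (not part of the auto-generated card); the `<user>` namespace and the image path are placeholders:
```python
# Hedged sketch: "<user>" and "bean_leaf.jpg" are placeholders.
from transformers import pipeline

classifier = pipeline("image-classification", model="<user>/platzi-vit-model-geovany-uribe")
print(classifier("bean_leaf.jpg"))  # e.g. [{'label': 'angular_leaf_spot', 'score': ...}, ...]
```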
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1416 | 3.85 | 500 | 0.0277 | 0.9850 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
Declan/Breitbart_model_v6
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
Converted using [convert_llama_weights_to_hf.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py), commit `d2ffc3f`:
```
python convert_llama_weights_to_hf.py --input_dir /models/LLaMA/ --model_size 7B --output_dir /tmp/converted
```
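A hedged sketch of loading the converted checkpoint with the `transformers` LLaMA classes (the exact output layout under `/tmp/converted` depends on the script version, so the path may need adjusting):
```python
# Hedged sketch: the converted files are assumed to sit directly in /tmp/converted.
from transformers import LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("/tmp/converted")
model = LlamaForCausalLM.from_pretrained("/tmp/converted")
```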
|
Declan/Breitbart_model_v8
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
license: gpl-2.0
---
Model uploads for AstroSleuth
View the repo on github here: https://github.com/Aveygo/AstroSleuth
|
Declan/ChicagoTribune_model_v3
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
tags:
- generated_from_keras_callback
model-index:
- name: automotive-base_ex
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# automotive-base_ex
This model is a fine-tuned version of [thearod5/automotive-base](https://huggingface.co/thearod5/automotive-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6472
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 45, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 0.7048 | 0 |
| 0.6431 | 1 |
| 0.6472 | 2 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3
|
Declan/ChicagoTribune_model_v4
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
title: Chat-with-GPT4
emoji: 🚀
colorFrom: red
colorTo: indigo
sdk: gradio
sdk_version: 3.21.0
app_file: app.py
pinned: false
license: mit
duplicated_from: ysharma/ChatGPTwithAPI
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
Declan/ChicagoTribune_model_v8
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 277.10 +/- 19.28
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
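A hedged loading sketch (the repo id and filename are placeholders for wherever this checkpoint is hosted):
```python
# Hedged sketch: "<user>/ppo-LunarLander-v2" and the zip filename are placeholders.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```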
|
Declan/FoxNews_model_v5
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | 2023-04-20T01:20:22Z |
---
license: unknown
language:
- zh
metrics:
- character
pipeline_tag: text-to-speech
tags:
- music
---
# Initial
|
Declan/HuffPost_model_v1
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Declan/HuffPost_model_v6
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9 | null |
Training parameters:
```
model_args = ClassificationArgs()
model_args.max_seq_length = 512
model_args.train_batch_size = 12
model_args.eval_batch_size = 12
model_args.num_train_epochs = 5
model_args.evaluate_during_training = False
model_args.learning_rate = 1e-5
model_args.use_multiprocessing = False
model_args.fp16 = False
model_args.save_steps = -1
model_args.save_eval_checkpoints = False
model_args.no_cache = True
model_args.reprocess_input_data = True
model_args.overwrite_output_dir = True
```
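Assuming these are `ClassificationArgs` from [simpletransformers](https://github.com/ThilinaRajapakse/simpletransformers) (inferred from the attribute names; the card does not say so explicitly), a minimal training sketch would look like this:
```python
# Hedged sketch: the base checkpoint ("roberta-base") and train_df are placeholders;
# the card does not state which model was fine-tuned on BoolQ.
import pandas as pd
from simpletransformers.classification import ClassificationArgs, ClassificationModel

model_args = ClassificationArgs(
    max_seq_length=512,
    train_batch_size=12,
    eval_batch_size=12,
    num_train_epochs=5,
    learning_rate=1e-5,
)
model = ClassificationModel(
    "roberta", "roberta-base", num_labels=2, args=model_args,
    use_cuda=False,  # set to True if a GPU is available
)
train_df = pd.DataFrame({"text": ["passage text. question?"], "labels": [1]})  # placeholder data
model.train_model(train_df)
```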
Evaluation on BoolQ Test Set:
| | Precision | Recall | F1-score |
|:------------:|:---------:|:------:|:--------:|
| 0 | 0.82 | 0.80 | 0.81 |
| 1 | 0.88 | 0.89 | 0.88 |
| accuracy | | | 0.86 |
| macro avg | 0.85 | 0.84 | 0.85 |
| weighted avg | 0.86 | 0.86 | 0.86 |
ROC AUC Score: 0.844
|
Declan/Independent__model
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
language:
- zh
tags:
- LoRA
- Butterfly
- Butterfly_wings
---
Sometimes I wonder: what would it look like if flower fairies became real?
So I trained a LoRA around "butterflies", and out came flower fairies that look almost like real people~~
Base model: chilloutmix_v10,
Images: 219 butterfly pictures,
Training runs: 6
This is my first write-up, so I'll leave it at that~
Here are the tags; feel free to try them out yourselves~
(Masterpiece, best quality, complex details),unreal engine, portrait,
1girl, tiny cute and adorable, smile, wearing the beautiful butterfly, bare shoulders, chitin, 2 wings, (translucent, golden butterfly_wings), powdered gold,
(forest:1.6), colorful, rimming light, lighting effect, <lora:ButterflyS:0.4> <lora:oliviaDiffusion_v2:0.3><lora:koreanDollLikeness_v15:0.3> <lora:fanSiSi_v11:0.5> <lora:hipoly3DModelLora_v10:0.4>
Negative prompt:
(worst quality, low quality), (breasts:1.6),lowres, bad anatomy, ((bad hands)), text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry,NSFW
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 5.5, Seed: 3286975915, Size: 1536x1536, Model hash: 47c201cbf5, Model: 写实_crescentwonder_v2Stable, Denoising strength: 0.35, Clip skip: 2, ENSD: 31337, Ultimate SD upscale upscaler: ESRGAN_4x, Ultimate SD upscale tile_width: 768, Ultimate SD upscale tile_height: 768, Ultimate SD upscale mask_blur: 8, Ultimate SD upscale padding: 32






















|
Declan/NPR_model_v3
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9 | null |
---
language:
- nl
license: mit
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 40
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
|
Declan/NPR_model_v6
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
license: agpl-3.0
---
[UNOFFICIAL]
This is the pretrained DeepFocus model that accompanies the manuscript "DeepFocus: Detection of out-of-focus regions in whole slide digital images using deep learning",
published by Caglar Senaras et al. in PLOS One (October 2018, DOI: https://doi.org/10.1371/journal.pone.0205387).
This model has been ported to Tensorflow 2 / Keras and uploaded to HuggingFace for easier sharing, but has not been verified by the original authors and
is in no way affiliated with the original authors. The official pretrained model is available on the official GitHub repository (https://github.com/cialab/deepfocus).
The modified, ported Tensorflow 2 models (not affiliated with the original authors) are available at https://github.com/jamesdolezal/deepfocus.
The original training dataset is available at [https://doi.org/10.5281/zenodo.1134848](https://doi.org/10.5281/zenodo.1134848).
The license as included in the original repository is GNU Affero General Public License v3.0 (AGPL-3.0).
|
Declan/NewYorkPost_model_v1
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- LLMs
- MiniGPT-4
---
These are the converted weights for MiniGPT-4, produced by following the tutorial [MiniGPT-4/PrepareVicuna.md](https://github.com/Vision-CAIR/MiniGPT-4/blob/main/PrepareVicuna.md). With them, you do not need LLAMA-13B and vicuna-13b-delta-v0 to perform the conversion yourself.
- [https://github.com/Vision-CAIR/MiniGPT-4](https://github.com/Vision-CAIR/MiniGPT-4)
|
Declan/NewYorkTimes_model_v1
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: creativeml-openrail-m
pipeline_tag: text-to-image
tags:
- anime
library_name: diffusers
---
try to learn
|
Declan/NewYorkTimes_model_v6
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
license: creativeml-openrail-m
base_model: /home/ubuntu/model/stable-diffusion-v1-4
instance_prompt: a photo of xiaoxin boy,Thick and black eyebrows, round eyes, chubby and cute cheeks, very adorable, a Japanese cartoon little boy of around 4 years old
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - heine123/xiaoxin_out
These are LoRA adaptation weights for /home/ubuntu/model/stable-diffusion-v1-4. The weights were trained on the prompt "a photo of xiaoxin boy, thick and black eyebrows, round eyes, chubby and cute cheeks, very adorable, a Japanese cartoon little boy of around 4 years old" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
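A hedged inference sketch (the base checkpoint below uses the public `CompVis/stable-diffusion-v1-4` hub id rather than the local path named above):
```python
# Hedged sketch: assumes the LoRA weights are hosted under "heine123/xiaoxin_out"
# and that a CUDA device is available.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.unet.load_attn_procs("heine123/xiaoxin_out")
image = pipe("a photo of xiaoxin boy", num_inference_steps=30).images[0]
image.save("xiaoxin.png")
```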




|
Declan/NewYorkTimes_model_v8
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
license: creativeml-openrail-m
language:
- ja
tags:
- stable-diffusion
---
This model is licensed under the terms of the CreativeML Open RAIL-M license.
We accept no responsibility for any problems arising from the use of this model or from the images it generates. Please use it with this understanding.
Publication may also be suspended if the licenses of the merged source models change.
<Models used in the merge>
The following license terms are inherited from the models used in the merge.
✓ Use the model without crediting the creator<br>
✓ Sell images they generate<br>
✓ Run on services that generate images for money<br>
✓ Share merges using this model<br>
✕ Sell this model or merges using this model<br>
✓ Have different permissions when sharing merges<br>
[Summary]
Generated images may be used commercially without crediting the creator.
The checkpoint may also be used in merges, but it may not be sold, and these restrictions may not be changed.
MUSE_v1
https://civitai.com/models/13564/musev1
XXMix_9
https://civitai.com/models/47274/xxmix9
Soda Mix
https://civitai.com/models/47507/soda-mix
Note: generation without upscaling is recommended.
This model is also subject to the terms of use of the models it is based on, but the following uses are strictly prohibited:<br>
・Violent content<br>
・Child pornography<br>
・Sexual depictions of minors<br>
(Update history)
2023/5/6 V1.2 → V2.1
Changed the recipe of the source models
2023/5/4 V1.2 → V2.0
Changed the source models (switched to a recipe that does not use Colorful); this version was not publicly released
2023/4/29 V1.1 → V1.2
Updated the source model versions following their version upgrades
2023/1/26 V1 → V1.1
Replaced a source model following a license change of a model used as source material
|
Declan/Politico_model_v1
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# hlyu/basemodel_xuan_loss
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('hlyu/basemodel_xuan_loss')
embeddings = model.encode(sentences)
print(embeddings)
```
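For example, the embeddings can be compared with cosine similarity for a small semantic-search sketch (the corpus and query below are illustrative, not from the original card):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('hlyu/basemodel_xuan_loss')

# Illustrative corpus and query.
corpus = ["A man is eating food.", "A monkey is playing drums.", "The new movie is awesome."]
query = "Someone is having a meal"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and every corpus sentence.
scores = util.cos_sim(query_embedding, corpus_embeddings)[0]
best = int(scores.argmax())
print(corpus[best], float(scores[best]))
```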
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0] #First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('hlyu/basemodel_xuan_loss')
model = AutoModel.from_pretrained('hlyu/basemodel_xuan_loss')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=hlyu/basemodel_xuan_loss)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 5055 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`__main__.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 5000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"eps": 1e-06,
"lr": 0.0001
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Declan/Politico_model_v3
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
license: creativeml-openrail-m
---
https://civitai.com/models/44702/susannah-honkai-3rd
|
Declan/Politico_model_v5
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
license: creativeml-openrail-m
---
https://civitai.com/models/45232/yuisis-granblue-fantasy
|
Declan/Politico_model_v6
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
license: cc
metrics:
- mIoU
pipeline_tag: image-segmentation
tags:
- stanford indoor
- sunrgbd
- semi-supervised
- semantic segmentation
datasets:
- stanford_indoor
- sunrgbd
---
| Dataset | Labels used | Modality | Framework | Config file | Checkpoint | Test mIoU |
|---------|-------------|----------|-----------------|--------------------------------------|--------------------------------------------------------|-----------|
| SID | 0.1% (49) | RGB + D | Supervised Only | src/configs/sid_0.1_suponly.yml | [sid_0.1_suponly.pth](https://huggingface.co/harshm121/M3L/blob/main/SID/sid_0.1_suponly.pth) | 42.09 |
| SID | 0.1% (49) | RGB + D | Mean Teacher | src/configs/sid_0.1_mt.yml | [sid_0.1_mt.pth](https://huggingface.co/harshm121/M3L/blob/main/SID/sid_0.1_mt.pth) | 41.77 |
| SID | 0.1% (49) | RGB + D | M3L | src/configs/sid_0.1_m3l.yml | [sid_0.1_m3l.pth](https://huggingface.co/harshm121/M3L/blob/main/SID/sid_0.1_m3l.pth) | 44.1 |
| SID | 0.2% (98) | RGB + D | Supervised Only | src/configs/sid_0.2_suponly.yml | [sid_0.2_suponly.pth](https://huggingface.co/harshm121/M3L/blob/main/SID/sid_0.2_suponly.pth) | 46.6 |
| SID | 0.2% (98) | RGB + D | Mean Teacher | src/configs/sid_0.2_mt.yml | [sid_0.2_mt.pth](https://huggingface.co/harshm121/M3L/blob/main/SID/sid_0.2_mt.pth) | 48.54 |
| SID | 0.2% (98) | RGB + D | M3L | src/configs/sid_0.2_m3l.yml | [sid_0.2_m3l.pth](https://huggingface.co/harshm121/M3L/blob/main/SID/sid_0.2_m3l.pth) | 49.05 |
| SID | 1% (491) | RGB + D | Supervised Only | src/configs/sid_1.0_suponly.yml | [sid_1.0_suponly.pth](https://huggingface.co/harshm121/M3L/blob/main/SID/sid_1.0_suponly.pth) | 52.47 |
| SID | 1% (491) | RGB + D | Mean Teacher | src/configs/sid_1.0_mt.yml | [sid_1.0_mt.pth](https://huggingface.co/harshm121/M3L/blob/main/SID/sid_1.0_mt.pth) | 54.32 |
| SID | 1% (491) | RGB + D | M3L | src/configs/sid_1.0_m3l.yml | [sid_1.0_m3l.pth](https://huggingface.co/harshm121/M3L/blob/main/SID/sid_1.0_m3l.pth) | 55.48 |
| SUNRGBD | 6.25% (297) | RGB + D | Supervised Only | src/configs/sunrgbd_6.25_suponly.yml | [sunrgbd_6.25_suponly.pth](https://huggingface.co/harshm121/M3L/blob/main/SUNRGBD/sunrgbd_6.25_suponly.pth) | 32 |
| SUNRGBD | 6.25% (297) | RGB + D | Mean Teacher | src/configs/sunrgbd_6.25_mt.yml | [sunrgbd_6.25_mt.pth](https://huggingface.co/harshm121/M3L/blob/main/SUNRGBD/sunrgbd_6.25_mt.pth) | 31.11 |
| SUNRGBD | 6.25% (297) | RGB + D | M3L | src/configs/sunrgbd_6.25_m3l.yml | [sunrgbd_6.25_m3l.pth](https://huggingface.co/harshm121/M3L/blob/main/SUNRGBD/sunrgbd_6.25_m3l.pth) | 30.67 |
| SUNRGBD | 12.5% (594) | RGB + D | Supervised Only | src/configs/sunrgbd_12.5_suponly.yml | [sunrgbd_12.5_suponly.pth](https://huggingface.co/harshm121/M3L/blob/main/SUNRGBD/sunrgbd_12.5_suponly.pth) | 35.88 |
| SUNRGBD | 12.5% (594) | RGB + D | Mean Teacher | src/configs/sunrgbd_12.5_mt.yml | [sunrgbd_12.5_mt.pth](https://huggingface.co/harshm121/M3L/blob/main/SUNRGBD/sunrgbd_12.5_mt.pth) | 39.17 |
| SUNRGBD | 12.5% (594) | RGB + D | M3L | src/configs/sunrgbd_12.5_m3l.yml | [sunrgbd_12.5_m3l.pth](https://huggingface.co/harshm121/M3L/blob/main/SUNRGBD/sunrgbd_12.5_m3l.pth) | 39.7 |
| SUNRGBD | 25% (1189) | RGB + D | Supervised Only | src/configs/sunrgbd_25_suponly.yml | [sunrgbd_25_suponly.pth](https://huggingface.co/harshm121/M3L/blob/main/SUNRGBD/sunrgbd_25_suponly.pth) | 42.09 |
| SUNRGBD | 25% (1189) | RGB + D | Mean Teacher | src/configs/sunrgbd_25_mt.yml | [sunrgbd_25_mt.pth](https://huggingface.co/harshm121/M3L/blob/main/SUNRGBD/sunrgbd_25_mt.pth) | 41.95 |
| SUNRGBD | 25% (1189) | RGB + D | M3L | src/configs/sunrgbd_25_m3l.yml | [sunrgbd_25_m3l.pth](https://huggingface.co/harshm121/M3L/blob/main/SUNRGBD/sunrgbd_25_m3l.pth) | 42.69 |
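As a minimal sketch, any of the checkpoints listed above can be fetched with `huggingface_hub` (the filename is taken from the table; wiring the weights into training or evaluation is done through the repository's config files):
```python
import torch
from huggingface_hub import hf_hub_download

# Download one of the checkpoints from the table (SID, 1% labels, M3L).
ckpt_path = hf_hub_download(repo_id="harshm121/M3L", filename="SID/sid_1.0_m3l.pth")

# Inspect the raw checkpoint; loading it into the M3L model itself
# follows the config file listed in the same row of the table.
state = torch.load(ckpt_path, map_location="cpu")
print(ckpt_path, type(state))
```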
|
Declan/Politico_model_v8
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
license: creativeml-openrail-m
---
https://civitai.com/models/42192/zero-two
|
Declan/Reuters_model_v2
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
license: creativeml-openrail-m
---
https://civitai.com/models/43696/shuten-douji-fate-grand-order
|
Declan/Reuters_model_v4
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
license: creativeml-openrail-m
---
https://civitai.com/models/41513/chen-hai-azur-lane-cerulean-ripples
|
Declan/Reuters_model_v5
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
license: creativeml-openrail-m
---
https://civitai.com/models/45013/suzukaze-aoba
|
Declan/Reuters_model_v6
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
license: creativeml-openrail-m
---
https://civitai.com/models/45018/kusano-yui
|
Declan/Reuters_model_v8
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
license: creativeml-openrail-m
---
https://civitai.com/models/33764/mahjongsoul-characters
|
Declan/WallStreetJournal_model_v2
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
license: creativeml-openrail-m
---
https://civitai.com/models/45126/bradamante-5in1-all-outfit-fate-grand-order
|
Declan/WallStreetJournal_model_v3
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
**Model upgraded and finetuned starting from the LLaMA model. I hope everyone creates models starting from this open-source project.**
GPTQ conversion command (on CUDA branch):
CUDA_VISIBLE_DEVICES=0 python llama.py ../capibara-17b-4bit c4 --wbits 4 --true-sequential --groupsize 128 --save capibara-17b-4bit-128g.pt
Added 1 token to the tokenizer model:
python llama-tools/add_tokens.py capibara-17b/tokenizer.model /content/tokenizer.model llama-tools/test_list.txt
Enjoy
|
Declan/WallStreetJournal_model_v4
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
license: creativeml-openrail-m
---
https://civitai.com/models/45459/sharon-holygrail-engage-kiss
|
DeepBasak/Slack
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: creativeml-openrail-m
---
https://civitai.com/models/44954/ahogemix
|
DeepChem/ChemBERTa-10M-MLM
|
[
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 90 | null |
---
license: bsd-3-clause
---
This is a CodeT5-base checkpoint finetuned on the CodeXGLUE code summarization Python data.
Pretrained model: https://huggingface.co/Salesforce/codet5-base
Finetuning dataset: https://huggingface.co/datasets/code_x_glue_ct_code_to_text (only the python split)
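A minimal inference sketch for code summarization with this kind of checkpoint (the model id below is a placeholder for the finetuned checkpoint, and the snippet follows standard CodeT5 seq2seq usage):
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "<this-finetuned-codet5-checkpoint>"  # placeholder: substitute the actual repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

code = "def add(a, b):\n    return a + b"
inputs = tokenizer(code, return_tensors="pt", truncation=True)

# Generate a natural-language summary of the Python snippet.
summary_ids = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```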
|
DeepPavlov/distilrubert-base-cased-conversational
|
[
"pytorch",
"distilbert",
"ru",
"arxiv:2205.02340",
"transformers"
] | null |
{
"architectures": null,
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6,324 | null |
---
license: gpl-3.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: ckiplab-albert-base-chinese-david-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ckiplab-albert-base-chinese-david-ner
This model is a fine-tuned version of [ckiplab/albert-base-chinese](https://huggingface.co/ckiplab/albert-base-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2725
- Precision: 0.7354
- Recall: 0.7379
- F1: 0.7367
- Accuracy: 0.9278
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4221 | 1.4 | 500 | 0.2888 | 0.7072 | 0.7414 | 0.7239 | 0.9169 |
| 0.1314 | 2.8 | 1000 | 0.2725 | 0.7354 | 0.7379 | 0.7367 | 0.9278 |
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 1.10.1+cu113
- Datasets 2.11.0
- Tokenizers 0.13.3
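A minimal inference sketch with the transformers token-classification pipeline (the hub id below is a placeholder for wherever this checkpoint is published, and the example sentence is illustrative):
```python
from transformers import pipeline

# Placeholder repo id; replace with the actual location of this checkpoint.
ner = pipeline(
    "token-classification",
    model="<hub-id>/ckiplab-albert-base-chinese-david-ner",
    aggregation_strategy="simple",
)

print(ner("王小明在台北的台灣大學讀書。"))
```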
|
DeepPavlov/distilrubert-tiny-cased-conversational-v1
|
[
"pytorch",
"distilbert",
"ru",
"arxiv:2205.02340",
"transformers"
] | null |
{
"architectures": null,
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9,141 | null |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Find your model_id: jkorstad/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
DeepPavlov/xlm-roberta-large-en-ru-mnli
|
[
"pytorch",
"xlm-roberta",
"text-classification",
"en",
"ru",
"dataset:glue",
"dataset:mnli",
"transformers",
"xlm-roberta-large",
"xlm-roberta-large-en-ru",
"xlm-roberta-large-en-ru-mnli",
"has_space"
] |
text-classification
|
{
"architectures": [
"XLMRobertaForSequenceClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 227 | 2023-04-20T03:23:33Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: jimli0816/food_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# jimli0816/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3713
- Validation Loss: 0.3560
- Train Accuracy: 0.902
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.7850 | 1.6194 | 0.85 | 0 |
| 1.2053 | 0.7987 | 0.906 | 1 |
| 0.6998 | 0.5437 | 0.891 | 2 |
| 0.4879 | 0.4149 | 0.91 | 3 |
| 0.3713 | 0.3560 | 0.902 | 4 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3
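A minimal inference sketch with the transformers image-classification pipeline (the image path is illustrative; TF weights are assumed since the card above was produced with Keras):
```python
from transformers import pipeline

# framework="tf" because this checkpoint was trained with Keras/TensorFlow.
classifier = pipeline("image-classification", model="jimli0816/food_classifier", framework="tf")

predictions = classifier("example_dish.jpg")  # illustrative local image path
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```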
|
DeskDown/MarianMix_en-zh-10
|
[
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -140.00 +/- 73.86
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'thuyentruong/LunarLander-v2'
'batch_size': 512
'minibatch_size': 128}
```
|