id (string, 6–121 chars) | author (string, 2–42 chars) | description (string, 0–6.67k chars)
---|---|---|
joey234/mmlu-prehistory-rule-neg
|
joey234
|
Dataset Card for "mmlu-prehistory-rule-neg"
More Information needed
|
joey234/mmlu-professional_accounting-rule-neg
|
joey234
|
Dataset Card for "mmlu-professional_accounting-rule-neg"
More Information needed
|
joey234/mmlu-professional_law-rule-neg
|
joey234
|
Dataset Card for "mmlu-professional_law-rule-neg"
More Information needed
|
joey234/mmlu-professional_medicine-rule-neg
|
joey234
|
Dataset Card for "mmlu-professional_medicine-rule-neg"
More Information needed
|
joey234/mmlu-professional_psychology-rule-neg
|
joey234
|
Dataset Card for "mmlu-professional_psychology-rule-neg"
More Information needed
|
joey234/mmlu-public_relations-rule-neg
|
joey234
|
Dataset Card for "mmlu-public_relations-rule-neg"
More Information needed
|
joey234/mmlu-security_studies-rule-neg
|
joey234
|
Dataset Card for "mmlu-security_studies-rule-neg"
More Information needed
|
joey234/mmlu-sociology-rule-neg
|
joey234
|
Dataset Card for "mmlu-sociology-rule-neg"
More Information needed
|
joey234/mmlu-us_foreign_policy-rule-neg
|
joey234
|
Dataset Card for "mmlu-us_foreign_policy-rule-neg"
More Information needed
|
joey234/mmlu-virology-rule-neg
|
joey234
|
Dataset Card for "mmlu-virology-rule-neg"
More Information needed
|
joey234/mmlu-world_religions-rule-neg
|
joey234
|
Dataset Card for "mmlu-world_religions-rule-neg"
More Information needed
|
moyix/asleep_keyboard
|
moyix
|
The Asleep at the Keyboard dataset contains 89 code generation scenarios designed to test the ability of code generation models to generate secure code. The dataset is split into three evaluation axes: diversity of weaknesses (DoW), diversity of prompts (DoP), and diversity of domains (DoD).
To perform this analysis, we prompt Copilot to generate code in scenarios relevant to high-risk cybersecurity weaknesses, e.g. those from MITRE's "Top 25" Common Weakness Enumeration (CWE) list. We explore Copilot's performance on three distinct code generation axes, examining how it performs given diversity of weaknesses, diversity of prompts, and diversity of domains. In total, we produce 89 different scenarios.
|
joey234/mmlu-abstract_algebra-rule-neg-prepend
|
joey234
|
Dataset Card for "mmlu-abstract_algebra-rule-neg-prepend"
More Information needed
|
joey234/mmlu-anatomy-rule-neg-prepend
|
joey234
|
Dataset Card for "mmlu-anatomy-rule-neg-prepend"
More Information needed
|
joey234/mmlu-astronomy-rule-neg-prepend
|
joey234
|
Dataset Card for "mmlu-astronomy-rule-neg-prepend"
More Information needed
|
joey234/mmlu-business_ethics-rule-neg-prepend
|
joey234
|
Dataset Card for "mmlu-business_ethics-rule-neg-prepend"
More Information needed
|
joey234/mmlu-clinical_knowledge-rule-neg-prepend
|
joey234
|
Dataset Card for "mmlu-clinical_knowledge-rule-neg-prepend"
More Information needed
|
joey234/mmlu-college_biology-rule-neg-prepend
|
joey234
|
Dataset Card for "mmlu-college_biology-rule-neg-prepend"
More Information needed
|
joey234/mmlu-college_chemistry-rule-neg-prepend
|
joey234
|
Dataset Card for "mmlu-college_chemistry-rule-neg-prepend"
More Information needed
|
joey234/mmlu-college_computer_science-rule-neg-prepend
|
joey234
|
Dataset Card for "mmlu-college_computer_science-rule-neg-prepend"
More Information needed
|
joey234/mmlu-college_mathematics-rule-neg-prepend
|
joey234
|
Dataset Card for "mmlu-college_mathematics-rule-neg-prepend"
More Information needed
|
joey234/mmlu-college_medicine-rule-neg-prepend
|
joey234
|
Dataset Card for "mmlu-college_medicine-rule-neg-prepend"
More Information needed
|
joey234/mmlu-college_physics-rule-neg-prepend
|
joey234
|
Dataset Card for "mmlu-college_physics-rule-neg-prepend"
More Information needed
|
joey234/mmlu-computer_security-rule-neg-prepend
|
joey234
|
Dataset Card for "mmlu-computer_security-rule-neg-prepend"
More Information needed
|
joey234/mmlu-conceptual_physics-rule-neg-prepend
|
joey234
|
Dataset Card for "mmlu-conceptual_physics-rule-neg-prepend"
More Information needed
|
joey234/mmlu-econometrics-rule-neg-prepend
|
joey234
|
Dataset Card for "mmlu-econometrics-rule-neg-prepend"
More Information needed
|
joey234/mmlu-electrical_engineering-rule-neg-prepend
|
joey234
|
Dataset Card for "mmlu-electrical_engineering-rule-neg-prepend"
More Information needed
|
joey234/mmlu-elementary_mathematics-rule-neg-prepend
|
joey234
|
Dataset Card for "mmlu-elementary_mathematics-rule-neg-prepend"
More Information needed
|
joey234/mmlu-formal_logic-rule-neg-prepend
|
joey234
|
Dataset Card for "mmlu-formal_logic-rule-neg-prepend"
More Information needed
|
joey234/mmlu-global_facts-rule-neg-prepend
|
joey234
|
Dataset Card for "mmlu-global_facts-rule-neg-prepend"
More Information needed
|
joey234/mmlu-high_school_biology-rule-neg-prepend
|
joey234
|
Dataset Card for "mmlu-high_school_biology-rule-neg-prepend"
More Information needed
|
joey234/mmlu-high_school_chemistry-rule-neg-prepend
|
joey234
|
Dataset Card for "mmlu-high_school_chemistry-rule-neg-prepend"
More Information needed
|
joey234/mmlu-high_school_computer_science-rule-neg-prepend
|
joey234
|
Dataset Card for "mmlu-high_school_computer_science-rule-neg-prepend"
More Information needed
|
joey234/mmlu-high_school_european_history-rule-neg-prepend
|
joey234
|
Dataset Card for "mmlu-high_school_european_history-rule-neg-prepend"
More Information needed
|
joey234/mmlu-high_school_geography-rule-neg-prepend
|
joey234
|
Dataset Card for "mmlu-high_school_geography-rule-neg-prepend"
More Information needed
|
rickRossie/bluemoon_roleplay_chat_data_300k_messages
|
rickRossie
|
Dataset Card for "bluemoon_roleplay_chat_data_300k_messages"
More Information needed
|
izumi-lab/llm-japanese-dataset
|
izumi-lab
|
llm-japanese-dataset
A Japanese instruction (chat) dataset for building LLMs.
It can be used mainly to tune LLMs built primarily on English data, e.g. with LoRA, for chat (instruction) response tasks.
Note: We made use of various publicly available language resources, and we take this opportunity to thank everyone involved.
Updates
2023/05/15: In response to the Alpaca dataset's license change to NC, we dropped that dataset from this one so it can be used with confidence.
The post-drop dataset is available as of v1.0.1.
2024/01/04: Removed outputs consisting only of whitespace from the Wikipedia summaries, and updated to the Wikipedia 20240101 dump (v1.0.2).
2024/01/18: Asian Language Treebank… See the full description on the dataset page: https://huggingface.co/datasets/izumi-lab/llm-japanese-dataset.
|
cardy/kohatespeech
|
cardy
|
The authors provide the first human-annotated Korean corpus for toxic speech detection, along with a large unlabeled corpus.
The data consists of comments from a Korean entertainment news aggregation platform.
|
MuGeminorum/HEp2
|
MuGeminorum
|
Dataset card for "MuGeminorum/HEp2"
The HEp-2 (Human Epithelial type 2) dataset is a widely used benchmark in the field of medical image analysis, especially for the task of antinuclear antibody (ANA) pattern classification. The dataset contains microscopic images of HEp-2 cells stained with fluorescence, demonstrating multiple patterns of autoantibody binding associated with various autoimmune diseases. The HEp-2 dataset is utilized by researchers and practitioners to develop… See the full description on the dataset page: https://huggingface.co/datasets/MuGeminorum/HEp2.
|
MuGeminorum/emo163
|
MuGeminorum
|
Intro
The emo163 dataset contains about 395,000 sentiment-tagged music entries, where each entry consists of three main columns: song ID, song list ID, and the song's sentiment tag. The source of this data is the official website of NetEase Cloud Music, which provides exhaustive information for labeling song sentiment. The song ID uniquely identifies each song, while the song list ID indicates which song list the song belongs to. Sentiment tags give each song an… See the full description on the dataset page: https://huggingface.co/datasets/MuGeminorum/emo163.
|
atasoglu/databricks-dolly-15k-tr
|
atasoglu
|
This dataset is a machine-translated version of databricks-dolly-15k.jsonl into Turkish.
Used googletrans==3.1.0a0 for the translation.
|
llm-book/JGLUE
|
llm-book
|
JGLUE, Japanese General Language Understanding Evaluation, is built to measure the general NLU ability in Japanese. JGLUE has been constructed from scratch without translation. We hope that JGLUE will facilitate NLU research in Japanese.
|
nielsr/breast-cancer
|
nielsr
|
Dataset Card for "breast-cancer"
Dataset was taken from the MedSAM project and used in this notebook which fine-tunes Meta's SAM model on the dataset.
More Information needed
|
teknium/GPT4-LLM-Cleaned
|
teknium
|
This is the GPT4-LLM dataset from : https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM
It has been filtered of all OpenAI disclaimers and refusals. (Disclaimer: it may have removed some additional things besides just OAI disclaimers, as I used the following script, which is a bit broader: https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered/blob/main/wizardlm_clean.py)
There is a modified script of that in the repo that was used specifically for… See the full description on the dataset page: https://huggingface.co/datasets/teknium/GPT4-LLM-Cleaned.
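The cleaning approach described above can be sketched as a simple phrase filter. Note that the marker list below is illustrative only; it is not the actual list used in wizardlm_clean.py:

```python
# Hedged sketch of a disclaimer/refusal filter of the kind described above.
# The phrase list is made up for illustration, not taken from the real script.
REFUSAL_MARKERS = [
    "as an ai language model",
    "i'm sorry, but i cannot",
    "openai",
]

def is_clean(response: str) -> bool:
    """Return True if the response contains none of the marker phrases."""
    lowered = response.lower()
    return not any(marker in lowered for marker in REFUSAL_MARKERS)

print(is_clean("Here is the code you asked for."))            # True
print(is_clean("As an AI language model, I cannot do that."))  # False
```

A broad substring filter like this is exactly why the card warns that some non-disclaimer content may have been removed as well.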
|
HuggingFaceH4/code_evaluation_prompts
|
HuggingFaceH4
|
Dataset Card for H4 Code Evaluation Prompts
These are a filtered set of prompts for evaluating code instruction models.
It will contain a variety of languages and task types.
Currently, we used ChatGPT (GPT-3.5-turbo) to generate these, so we encourage using them only for qualitative evaluation and not to train your models.
The generation of this data is similar to something like CodeAlpaca, which you can download here, but we intend to make these tasks both a) more challenging… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceH4/code_evaluation_prompts.
|
mosaicml/dolly_hhrlhf
|
mosaicml
|
Dataset Card for "dolly_hhrlhf"
This dataset is a combination of Databricks' dolly-15k dataset and a filtered subset of Anthropic's HH-RLHF. It also includes a test split, which was missing in the original dolly set. That test set is composed of 200 randomly selected samples from dolly + 4,929 of the test-set samples from HH-RLHF which made it through the filtering process. The train set contains 59,310 samples; 15,014 - 200 = 14,814 from Dolly, and the remaining 44,496 from… See the full description on the dataset page: https://huggingface.co/datasets/mosaicml/dolly_hhrlhf.
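The split sizes quoted above can be checked with a line of arithmetic:

```python
# Sanity-check of the dolly_hhrlhf split sizes quoted above (pure arithmetic,
# no download required).
dolly_total, dolly_test = 15_014, 200
hh_rlhf_train = 44_496

train_total = (dolly_total - dolly_test) + hh_rlhf_train
print(train_total)  # 59310, matching the stated train-set size
```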
|
AlekseyKorshuk/chain-of-thoughts-chatml
|
AlekseyKorshuk
|
Follow me
HuggingFace: https://huggingface.co/AlekseyKorshuk
GitHub: https://github.com/AlekseyKorshuk
Twitter / X: https://x.com/alekseykorshuk
|
liuhaotian/LLaVA-Pretrain
|
liuhaotian
|
LLaVA Visual Instruct Pretrain Dataset Card
Dataset details
Dataset type:
LLaVA Visual Instruct Pretrain LCS-558K is a subset of LAION/CC/SBU dataset, filtered with a more balanced concept coverage distribution.
Captions are also associated with BLIP synthetic caption for reference.
It is constructed for the pretraining stage for feature alignment in visual instruction tuning.
We aim to build large multimodal models toward GPT-4-level vision/language capability.
Dataset date:… See the full description on the dataset page: https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain.
|
singhsays/fake-w2-us-tax-form-dataset
|
singhsays
|
Dataset Card for "fake-w2-us-tax-form-dataset"
This is a dataset of synthetically generated US Tax Return W-2 forms, with generated fake data such as names, IDs, dates, and addresses. Only real cities, states, and ZIP codes have been used.
This dataset is created from the existing public Fake W-2 (US Tax Form) dataset for use with
🤗
|
corvj/daps
|
corvj
|
The DAPS (Device and Produced Speech) dataset is a collection of aligned versions of professionally produced studio speech recordings and recordings of the same speech on common consumer devices (tablet and smartphone) in real-world environments. It has 15 versions of audio (3 professional versions and 12 consumer device/real-world environment combinations). Each version consists of about 4 1/2 hours of data (about 14 minutes from each of 20 speakers).
|
it-at-m/LHM-Dienstleistungen-QA
|
it-at-m
|
LHM-Dienstleistungen-QA - a German public-domain question-answering dataset
Datasets created based on data from Munich city administration.
Format inspired by GermanQuAD.
Annotated by:
Institute for Applied Artificial Intelligence: Leon Marius Schröder
BettercallPaul GmbH: Clemens Gutknecht, Oubada Alkiddeh, Susanne Weiß
Stadt München: Leon Lukas
Data basis
Texts taken from the “Dienstleistungsfinder“ of the city of Munich… See the full description on the dataset page: https://huggingface.co/datasets/it-at-m/LHM-Dienstleistungen-QA.
|
PKU-Alignment/PKU-SafeRLHF-10K
|
PKU-Alignment
|
Paper
You can find more information in our paper.
Dataset Paper: https://arxiv.org/abs/2307.04657
|
milashkaarshif/MoeGirlPedia_wikitext_raw_archive
|
milashkaarshif
|
Glad to see models and datasets inspired by this dataset; thanks to all who are using it in their training materials.
Feel free to re-upload the contents to places like the Internet Archive (please follow the license and keep these files as-is) to help preserve this digital asset.
Looking forward to seeing more models and synthetic datasets trained on this raw archive. Good luck!
Note: Due to the content censorship system introduced by MGP on 2024/03/29, it is unclear that… See the full description on the dataset page: https://huggingface.co/datasets/milashkaarshif/MoeGirlPedia_wikitext_raw_archive.
|
lirus18/deepfashion_with_captions
|
lirus18
|
Dataset Card for "deepfashion_with_captions"
More Information needed
|
thomasavare/waste-classification-v2
|
thomasavare
|
Dataset Card for Dataset Name
Dataset Summary
Dataset used to train a language model to do classification on 50 different waste classes.
Languages
English
Dataset Structure
Data Instances
| Phrase | Class | Index |
|---|---|---|
| "I have this apple phone charger to throw, where should I put it?" | PHONE CHARGER | 26 |
| "Should I recycle a disposable cup?" | Plastic Cup | 32 |
| "I have a milk brick" | Tetrapack | 45 |
Data Fields… See the full description on the dataset page: https://huggingface.co/datasets/thomasavare/waste-classification-v2.
|
SuperrWu/Cyberpunk
|
SuperrWu
|
Dataset Card for "Cyberpunk"
More Information needed
|
gofixyourself/EasyPortrait
|
gofixyourself
|
EasyPortrait - Face Parsing and Portrait Segmentation Dataset
We introduce EasyPortrait, a large-scale image dataset for portrait segmentation and face parsing. The proposed dataset can be used in several tasks, such as background removal in conference applications, teeth whitening, face-skin enhancement, red-eye removal or eye colorization, and so on.
EasyPortrait dataset size is about 26GB, and it contains 20 000 RGB images (~17.5K FullHD images) with high quality annotated… See the full description on the dataset page: https://huggingface.co/datasets/gofixyourself/EasyPortrait.
|
zetavg/CC-100-zh-Hant
|
zetavg
|
CC-100 zh-Hant (Traditional Chinese)
From https://data.statmt.org/cc-100/, only zh-Hant - Chinese (Traditional). Broken into lines, with each line as a row.
Estimated to have around 4B tokens when tokenized with the bigscience/bloom tokenizer.
There's another version in which the text is split by paragraphs instead of lines: zetavg/CC-100-zh-Hant-merged.
References
Please cite the following if you found the resources in the CC-100 corpus useful.
Unsupervised… See the full description on the dataset page: https://huggingface.co/datasets/zetavg/CC-100-zh-Hant.
|
pszemraj/qmsum-cleaned
|
pszemraj
|
qmsum-cleaned
prefixes
It's worth noting that each "document" in input is prefixed by a question/prompt on what the model is supposed to do. You may want to explicitly handle this in some way, or prefix your models trained on this dataset.
Most frequent "prefixes" separated via sentence-splitter in the train split:
| | Sentence | Count |
|---|---|---|
| 0 | Summarize the whole meeting. | 121 |
| 1 | Summarize the meeting | 25 |
| 2 | What did the team discuss about the product cost? | 4 |

3… See the full description on the dataset page: https://huggingface.co/datasets/pszemraj/qmsum-cleaned.
|
heegyu/korquad-chat-v1
|
heegyu
|
A Korean knowledge-grounded dialogue dataset with 9,619 dialogues in total.
It was generated by giving ChatGPT the documents in KorQuAD 1.0.
The parts where speakers address each other are handled as below:
안녕하세요 <|bot|>. 요즘 어떻게 지내시나요?\n<bot> 안녕하세요 <|user|>.
Data sample
{
"source": "korquad-chat",
"text": "
<sys>유전자의 이름은 인도의 수도 뉴델리의 이름을 따 붙여졌는데, 이는 2009년 용 (Yong) 등이 처음 기록한 이 케이스의 환자가 인도에서 처음 박테리아 감염을 확인하였기 때문이다. 박테리아 감염 치료가 뉴델리 병원에서 실패로 끝난 후 그는 모국 스웨덴으로 송환되었으며, 그곳에서 카바페넴에 내성을 보이는 폐렴막대균의 유전자가 분리되었다. 용 등은 이 내성 메커니즘이 \"분명히 인도에서 생겨났으나, 이것이 얼마나 퍼져있는지를 보여주는 데이터가 인도에 존재한다\"라고 주장하였다.… See the full description on the dataset page: https://huggingface.co/datasets/heegyu/korquad-chat-v1.
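The placeholder convention described above (names replaced by `<|user|>`/`<|bot|>` tokens) can be undone with a simple substitution when preparing the data for use. The function and default names below are hypothetical, not part of the dataset:

```python
# Hypothetical sketch: substitute concrete speaker names back into the
# <|user|>/<|bot|> placeholders used by this dataset.
def render(text: str, user: str = "민수", bot: str = "봇") -> str:
    """Replace speaker placeholders with concrete names (names are made up)."""
    return text.replace("<|user|>", user).replace("<|bot|>", bot)

line = "안녕하세요 <|bot|>. 요즘 어떻게 지내시나요?"
print(render(line))  # 안녕하세요 봇. 요즘 어떻게 지내시나요?
```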
|
kunishou/oasst1-89k-ja
|
kunishou
|
This dataset was created by automatically translating "OpenAssistant/oasst1" into Japanese.
The "ng_translation" flag indicates whether the translation was successful; "1" means the translation failed. Therefore, for rows flagged "1", "text" and "text_en" contain the same text.
Update:
2023/11/12: Released oasst1-chat-44k-ja, a chat-format conversion of oasst1-89k-ja.
2023/10/21: Manually corrected around 2,000 mistranslations in code-related data introduced by the automatic translation.
Show the fix example
Before the fix
Of course! This is the Flask… See the full description on the dataset page: https://huggingface.co/datasets/kunishou/oasst1-89k-ja.
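The "ng_translation" convention described above can be sketched as a simple row filter; the sample rows and exact field values here are made up for illustration:

```python
# Minimal sketch of filtering on the "ng_translation" flag described above.
# Field names follow the card; the sample rows are invented.
rows = [
    {"ng_translation": "0", "text": "こんにちは", "text_en": "Hello"},
    {"ng_translation": "1", "text": "Hello", "text_en": "Hello"},  # failed translation
]

# Keep only rows whose translation succeeded.
ok = [r for r in rows if r["ng_translation"] != "1"]
print(len(ok))  # 1
```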
|
thu-coai/augesc
|
thu-coai
|
AugESC is an augmented dialogue dataset for emotional support conversation, distilled from a fine-tuned GPT-J model.
GitHub repo. Original paper.
Please kindly cite our papers if you use this resource:
@inproceedings{zheng-etal-2023-augesc,
title={AugESC: Dialogue Augmentation with Large Language Models for Emotional Support Conversation},
author={Zheng, Chujie and
Sabour, Sahand and
Wen, Jiaxin and
Zhang, Zheng and
Huang, Minlie},
booktitle={Findings of ACL}… See the full description on the dataset page: https://huggingface.co/datasets/thu-coai/augesc.
|
tobiaslee/VEC
|
tobiaslee
|
The Visual and Embodied Concept (VEC) benchmark is designed to evaluate LLMs' understanding of basic visual (color, shape, size, height, and material) and embodied (mass, temperature, hardness) concepts.
|
4eJIoBek/PAIT-Downloads
|
4eJIoBek
|
This is a download mirror of https://gz1k.itch.io/ai-portable-tools, hosted on Hugging Face for lightning-fast downloading. I hope I haven't broken the ToS of the Hugging Face Hub by uploading these tools here.
This is my collection of portable AI packages to run quickly without console headaches, a sort of "Awesome N" repo, but with portables for Windows. Initially I made these tools for myself, but maybe someone else will need them. All portables can work offline and were tested on a GTX 1050… See the full description on the dataset page: https://huggingface.co/datasets/4eJIoBek/PAIT-Downloads.
|
aneeshas/imsdb-genre-movie-scripts
|
aneeshas
|
Dataset Card for "imsdb-genre-movie-scripts"
More Information needed
|
sivan22/hebrew-handwritten-dataset
|
sivan22
|
Dataset Information
Keywords
Hebrew, handwritten, letters
Description
HDD_v0 consists of images of isolated Hebrew characters, together with a training/test subdivision.
The images were collected from hand-filled forms.
For more details, please refer to [1].
When using this dataset in research work, please cite [1].
[1] I. Rabaev, B. Kurar Barakat, A. Churkin and J. El-Sana. The HHD Dataset. The 17th International Conference on Frontiers in… See the full description on the dataset page: https://huggingface.co/datasets/sivan22/hebrew-handwritten-dataset.
|
DongfuJiang/FeTaQA
|
DongfuJiang
|
This repo is the unofficial FeTaQA dataset from the paper FeTaQA: Free-form Table Question Answering.
The original purpose is to make it easier for users to download and use the dataset. All the data is publicly available on their official GitHub site.
If there is anything wrong, please raise an issue in the community and I will fix it when I am available.
|
juletxara/mgsm
|
juletxara
|
Multilingual Grade School Math Benchmark (MGSM) is a benchmark of grade-school math problems, proposed in the paper [Language models are multilingual chain-of-thought reasoners](http://arxiv.org/abs/2210.03057).
The same 250 problems from [GSM8K](https://arxiv.org/abs/2110.14168) are each translated by human annotators into 10 languages. The 10 languages are:
- Spanish
- French
- German
- Russian
- Chinese
- Japanese
- Thai
- Swahili
- Bengali
- Telugu
You can find the input and targets for each of the ten languages (and English) as `.tsv` files.
We also include few-shot exemplars that are also manually translated from each language in `exemplars.py`.
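Since each language file is a plain `.tsv` of inputs and targets, it can be parsed with nothing more than the standard `csv` module; the sample row below is invented for illustration, and the two-column (question, answer) layout is an assumption based on the description:

```python
import csv
import io

# Invented two-column TSV row in the MGSM layout: question <TAB> answer.
sample = "Juan tiene 3 manzanas y compra 2 más. ¿Cuántas tiene en total?\t5\n"

rows = list(csv.reader(io.StringIO(sample), delimiter="\t"))
question, answer = rows[0]
print(answer)  # "5", the target for this (made-up) problem
```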
|
BelleGroup/train_3.5M_CN
|
BelleGroup
|
Contents
Contains about 3.5 million Chinese instruction examples generated by the BELLE project.
Example
{
"id":"66623028",
"conversations":[
{
"from":"human",
"value":"归纳介绍三种不同类型的电影题材"
},
{
"from":"assistant",
"value":"1. 动作片:动作片通常强调动作和战斗场面,故事情节紧凑,充满了紧张和兴奋的氛围。主角一般是具有超凡实力或特殊技能的英雄人物,往往会与反派角色进行一系列激烈的格斗,追逐和爆破等酷炫场面。例如《速度与激情》和《复仇者联盟》等。\n2. 爱情片:爱情片是以爱情为主题的电影,着重呈现角色之间的感情纠葛,展现了爱情的美好和悲壮。情节与角色之间的发展令人深思,增加了观众的感性体验。这种电影的故事情节主要以人类与人类之间的情感变化为主题。例如《泰坦尼克号》和《让子弹飞》等。\n3.… See the full description on the dataset page: https://huggingface.co/datasets/BelleGroup/train_3.5M_CN.
|
Akatsuki-Amemiya/Akatsuki_Cantonese_Singing
|
Akatsuki-Amemiya
|
Akatsuki's Cantonese Singing Dataset
Please review the License before use.
After submitting the application, please send an email to [email protected] for manual review and approval.
I know HF will send me an email when you apply, but I will ignore it; only the emails from HF will be ignored.
License
Before using this dataset, you must strictly comply with the following terms. If you do not agree to these terms, do not use the dataset.
1. Grant of rights
The owner of this dataset (the "author") grants you a non-exclusive, non-transferable, non-sublicensable license to use this dataset and all results produced from it, for both commercial and non-commercial purposes.… See the full description on the dataset page: https://huggingface.co/datasets/Akatsuki-Amemiya/Akatsuki_Cantonese_Singing.
|
silk-road/Vanilla-chinese-alpaca-luotuo
|
silk-road
|
Vanilla Luotuo is the first dataset and model of the Luotuo project, launched on March 21, 2023.
We will gradually release more datasets on HF, including:
Chinese translations of COCO Caption
Chinese translations of CoQA
Embedding data for CNewSum
Augmented open-QA data
Chinese translations of WizardLM
If you are also preparing these datasets, feel free to contact us so we can avoid duplicating the expense.
Luotuo: Open-Source Chinese Large Language Models
https://github.com/LC1332/Luotuo-Chinese-LLM
The Luotuo project is an open-source Chinese large-language-model project initiated by 冷子昂 @ SenseTime, 陈启源 @ Central China Normal University, and 李鲁鲁 @ SenseTime, and includes a series of language models.
(Note: 陈启源 is looking for a 2024 graduate advisor; feel free to get in touch.)
The Luotuo project is not an official SenseTime product.
Citation
Please cite the repo if you use the data or code in this repo.
@misc{alpaca,
author={Ziang Leng… See the full description on the dataset page: https://huggingface.co/datasets/silk-road/Vanilla-chinese-alpaca-luotuo.
|
lexlms/lex_files
|
lexlms
|
Dataset Card for "LexFiles"
Dataset Summary
The LeXFiles is a new, diverse English multinational legal corpus that we created, comprising 11 distinct sub-corpora that cover legislation and case law from 6 primarily English-speaking legal systems (EU, CoE, Canada, US, UK, India).
The corpus contains approx. 19 billion tokens. In comparison, the "Pile of Law" corpus released by Henderson et al. (2022) comprises 32 billion in total, where the majority (26/30) of… See the full description on the dataset page: https://huggingface.co/datasets/lexlms/lex_files.
|
FreedomIntelligence/huatuo_consultation_qa
|
FreedomIntelligence
|
Dataset Card for huatuo_consultation_qa
Dataset Summary
We collected data from a medical consultation website, consisting of many online consultation records by medical experts. Each record is a QA pair: a patient raises a question and a medical doctor answers it. The basic information of doctors (including name, hospital organization, and department) was recorded.
We directly crawl patient’s questions and doctor’s answers as QA pairs, getting 32… See the full description on the dataset page: https://huggingface.co/datasets/FreedomIntelligence/huatuo_consultation_qa.
|
VMware/open-instruct-v1-oasst-dolly-hhrlhf
|
VMware
|
Dataset Card for "open-instruct-v1-oasst-dolly-hhrlhf"
This dataset is a combination of:
Filtered subset of OpenAssistant/oasst1
train split of Mosaic-dolly-hhrlhf (consisting of Databricks' dolly-15k dataset and a filtered subset of Anthropic's HH-RLHF).
Dataset
The dataset consists of 3 columns:
instruction: The natural language instruction without any prompt templates (we extracted them out of the alpaca-format in Mosaic-dolly-hhrlhf)
alpaca_prompt: Alpaca… See the full description on the dataset page: https://huggingface.co/datasets/VMware/open-instruct-v1-oasst-dolly-hhrlhf.
|
lighteval/med_dialog
|
lighteval
|
"The MedDialog dataset (English) contains conversations between doctors and patients.
It has 0.26 million dialogues. The data is continuously growing and more dialogues will be added.
The raw dialogues are from healthcaremagic.com and icliniq.com. All copyrights of the data belong
to healthcaremagic.com and icliniq.com."
The following is an example from the healthcaremagic.com subset:
Patient: I get cramps on top of my left forearm and hand and it causes my hand and fingers to draw up and it
hurts. It mainly does this when I bend my arm. I've been told that I have a slight pinch in a nerve in my neck.
Could this be a cause? I don't think so. Doctor: Hi there. It may sound difficult to believe, but the nerves
which supply your forearms and hand start at the level of the spinal cord, and on their way towards the forearm and
hand regions which they supply, these nerves pass through different fascial and muscular planes
that can make them susceptible to entrapment neuropathies. It's a group of conditions where a nerve gets
compressed between a muscle and a bone, or between the fibers of a muscle that it pierces or passes through.
Also, the compression can happen when the nerves are travelling around a blood vessel which can mechanically put
pressure on them. Usually patients who have such a problem present with a dull aching pain over the
arm and forearm. If it is not too severe and does not cause any neurological deficits, then conservative management
with Pregabalin and Vitamin B complex tablets, activity modifications and physiotherapy can be started, which
will provide relief. Avoid the activities which exaggerate your problem.
Could painful forearms be related to pinched nerve in neck?
The following is an example from the icliniq.com subset:
Patient: Hello doctor, We are looking for a second opinion on my friend's MRI scan of both the knee joints as he
is experiencing excruciating pain just above the patella. He has a sudden onset of severe pain on both the knee
joints about two weeks ago. Previously he had a similar episode about two to three months ago and it subsided
after resting and painkillers. Doctor: Hi. I viewed the right and left knee MRI images. (attachment removed to
protect patient identity). Left knee: The MRI, left knee joint shows a complex tear in the posterior horn of the
medial meniscus area and mild left knee joint effusion. There is some fluid between the semimembranous and medial
head of gastrocnemius muscles. There is a small area of focal cartilage defect in the upper pole of the patella
with mild edematous fat. The anterior and posterior cruciate ligaments are normal. The medial and lateral
collateral ligaments are normal. Right knee: The right knee joint shows mild increased signal intensity in the
posterior horn of the medial meniscus area and minimal knee joint effusion. There is minimal fluid in the back
of the lower thigh and not significant. There is a suspicious strain in the left anterior cruciate ligament
interiorly but largely the attachments are normal. The posterior cruciate ligament is normal. There are subtle
changes in the upper pole area of the right patella and mild edema. There is mild edema around the bilateral
distal quadriceps tendons, but there is no obvious tear of the tendons.
My friend has excruciating knee pain. Please interpret his MRI report
Paper: https://arxiv.org/abs/2004.03329
Code: https://github.com/UCSD-AI4H/Medical-Dialogue-System
@article{chen2020meddiag,
title={MedDialog: a large-scale medical dialogue dataset},
author={Chen, Shu and Ju, Zeqian and Dong, Xiangyu and Fang, Hongchao and Wang, Sicheng and Yang, Yue and Zeng,
Jiaqi and Zhang, Ruisi and Zhang, Ruoyu and Zhou, Meng and Zhu, Penghui and Xie, Pengtao},
journal={arXiv preprint arXiv:2004.03329},
year={2020}
}
We used the data preprocessing from "BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model"
(Yuan et al.) and generated the following splits:
|Dataset | Train | Valid | Test |
|--------------- |------------|---------|--------|
|HealthCareMagic | 181,122 | 22,641 | 22,642 |
|iCliniq | 24,851 | 3,105 | 3,108 |
Yuan et al. described, "HealthCareMagic's summaries are more abstractive and are written in a formal style,
unlike iCliniq's patient-written summaries."
Paper: https://arxiv.org/abs/2204.03905
Code: https://github.com/GanjinZero/BioBART
@misc{https://doi.org/10.48550/arxiv.2204.03905,
doi = {10.48550/ARXIV.2204.03905},
url = {https://arxiv.org/abs/2204.03905},
author = {Yuan, Hongyi and Yuan, Zheng and Gan, Ruyi and Zhang, Jiaxing and Xie, Yutao and Yu, Sheng},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences,
FOS: Computer and information sciences},
title = {BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
|
lighteval/med_mcqa
|
lighteval
|
From "MedMCQA: A Large-scale Multi-Subject Multi-Choice Dataset for Medical domain Question Answering"
(Pal et al.), MedMCQA is a "multiple-choice question answering (MCQA) dataset designed to address
real-world medical entrance exam questions." The dataset "...has more than 194k high-quality AIIMS & NEET PG
entrance exam MCQs covering 2.4k healthcare topics and 21 medical subjects are collected with an average
token length of 12.77 and high topical diversity."
The following is an example from… See the full description on the dataset page: https://huggingface.co/datasets/lighteval/med_mcqa.
|
jainr3/diffusiondb-pixelart
|
jainr3
|
DiffusionDB is the first large-scale text-to-image prompt dataset. It contains 2
million images generated by Stable Diffusion using prompts and hyperparameters
specified by real users. The unprecedented scale and diversity of this
human-actuated dataset provide exciting research opportunities in understanding
the interplay between prompts and generative models, detecting deepfakes, and
designing human-AI interaction tools to help users more easily use these models.
|
memray/AugTriever-AugQ-CC
|
memray
|
AugQ-CC is an unsupervised augmented dataset for training retrievers used in AugTriever: Unsupervised Dense Retrieval by Scalable Data Augmentation.
It consists of 52.4M pseudo query-document pairs based on Pile-CommonCrawl.
@article{meng2022augtriever,
title={AugTriever: Unsupervised Dense Retrieval by Scalable Data
Augmentation},
author={Meng, Rui and Liu, Ye and Yavuz, Semih and Agarwal, Divyansh and Tu, Lifu and Yu, Ning and Zhang, Jianguo and Bhat, Meghana and Zhou, Yingbo}… See the full description on the dataset page: https://huggingface.co/datasets/memray/AugTriever-AugQ-CC.
|
memray/AugTriever-AugQ-Wiki
|
memray
|
AugQ-Wiki is an unsupervised augmented dataset for training retrievers used in AugTriever: Unsupervised Dense Retrieval by Scalable Data Augmentation. It consists of 22.6M pseudo query-document pairs based on Wikipedia.
It follows the same license of Wikipedia (Creative Commons Attribution-Share-Alike License 3.0).
@article{meng2022augtriever,
title={AugTriever: Unsupervised Dense Retrieval by Scalable Data Augmentation},
author={Meng, Rui and Liu, Ye and Yavuz, Semih and Agarwal, Divyansh… See the full description on the dataset page: https://huggingface.co/datasets/memray/AugTriever-AugQ-Wiki.
|
openpecha/tibetan_voice
|
openpecha
|
TibetanVoice: 6.5 hours of validated transcribed speech data from 9 audio books in the Lhasa dialect. The dataset is in tsv format with two columns, path and sentence. The path column contains the path to the audio file and the sentence column contains the corresponding sentence spoken in the audio file.
|
lighteval/synthetic_reasoning_natural
|
lighteval
| |
lighteval/synthetic_reasoning
|
lighteval
| |
Abrumu/Fashion_controlnet_dataset
|
Abrumu
|
Dataset Card for "Fashion_controlnet_dataset"
More Information needed
|
lighteval/numeracy
|
lighteval
| |
AmazonScience/MultilingualMultiModalClassification
|
AmazonScience
|
Additional Information
Licensing Information
Wiki
Each image is licensed under original provider.
Any additional work provided by current work is provided under CC-BY-SA-4.0 following the Wikipedia license.
MultiEURLEX
We provide MultiEURLEX with the same licensing as the original EU data (CC-BY-4.0):
© European Union, 1998-2021
The Commission’s document reuse policy is based on Decision 2011/833/EU. Unless otherwise specified, you… See the full description on the dataset page: https://huggingface.co/datasets/AmazonScience/MultilingualMultiModalClassification.
|
lucasmccabe-lmi/gpt4all_code_small
|
lucasmccabe-lmi
|
Dataset Card for "gpt4all_code_small"
We provide a code-related subset of the original nomic-ai/gpt4all-j-prompt-generations (v1.2-jazzy revision) dataset, comprising records that 1) have prompts sourced from pacovaldez/stackoverflow-questions, 2) explicitly mention one of Python, Java, C++, SQL, Kotlin, PHP, Swift, MATLAB, Typescript, Scala, HTML, CSS, Rust, or Perl, and 3) include a code block in the response body.
Output records are responses from… See the full description on the dataset page: https://huggingface.co/datasets/lucasmccabe-lmi/gpt4all_code_small.
|
Nan-Do/code-search-net-go
|
Nan-Do
|
Dataset Card for "code-search-net-go"
Dataset Summary
This dataset is the Go portion of CodeSearchNet, annotated with a summary column. The code-search-net dataset includes open-source functions with comments found on GitHub. The summary is a short description of what the function does.
Languages
The dataset's comments are in English and the functions are written in Go.
Data Splits
Train, test, validation labels are included in… See the full description on the dataset page: https://huggingface.co/datasets/Nan-Do/code-search-net-go.
|
0x22almostEvil/reasoning-gsm-qna-oa
|
0x22almostEvil
|
Dataset Card for GSM QnA reasoning with ~8.8K entries.
Dataset Summary
Contains a Parquet file with a list of instructions and answers.
Each row consists of
INSTRUCTION
RESPONSE
SOURCE
METADATA (JSON with language).
Original Datasets are available here:
https://huggingface.co/datasets/gsm8k
https://huggingface.co/datasets/reasoning-machines/gsm-hard
|
boun-tabi/squad_tr
|
boun-tabi
|
SQuAD-TR is a machine-translated version of the original SQuAD2.0 dataset into
Turkish.
|
danielv835/personal_finance_v0.2
|
danielv835
|
Dataset Card for "personal_finance_v0.2"
More Information needed
|
jiovine/pixel-art-nouns-2k
|
jiovine
|
Dataset Card for "pixel-art-nouns-2k"
More Information needed
|
tatsu-lab/alpaca_farm
|
tatsu-lab
|
Data used in the original AlpacaFarm experiments.
Includes SFT and preference examples.
|
Nan-Do/code-search-net-python
|
Nan-Do
|
Dataset Card for "code-search-net-python"
Dataset Description
Homepage: None
Repository: https://huggingface.co/datasets/Nan-Do/code-search-net-python
Paper: None
Leaderboard: None
Point of Contact: @Nan-Do
Dataset Summary
This dataset is the Python portion of CodeSearchNet, annotated with a summary column. The code-search-net dataset includes open-source functions with comments found on GitHub. The summary is a short description of what… See the full description on the dataset page: https://huggingface.co/datasets/Nan-Do/code-search-net-python.
|
Nan-Do/code-search-net-javascript
|
Nan-Do
|
Dataset Card for "code-search-net-javascript"
Dataset Summary
This dataset is the JavaScript portion of CodeSearchNet, annotated with a summary column. The code-search-net dataset includes open-source functions with comments found on GitHub. The summary is a short description of what the function does.
Languages
The dataset's comments are in English and the functions are written in JavaScript.
Data Splits
Train, test… See the full description on the dataset page: https://huggingface.co/datasets/Nan-Do/code-search-net-javascript.
|
polejowska/NCT-CRC-HE-45k
|
polejowska
|
Dataset Card for "NCT-CRC-HE-45k"
@dataset{kather_jakob_nikolas_2018_1214456,
author = {Kather, Jakob Nikolas and
Halama, Niels and
Marx, Alexander},
title = {{100,000 histological images of human colorectal
cancer and healthy tissue}},
month = apr,
year = 2018,
publisher = {Zenodo},
version = {v0.1},
doi = {10.5281/zenodo.1214456},
url =… See the full description on the dataset page: https://huggingface.co/datasets/polejowska/NCT-CRC-HE-45k.
|
darknoon/noto-emoji-vector-512-svg
|
darknoon
|
Dataset Card for "noto-emoji-vector-512-svg"
More Information needed
|
Nan-Do/code-search-net-ruby
|
Nan-Do
|
Dataset Card for "code-search-net-ruby"
Dataset Summary
This dataset is the Ruby portion of CodeSearchNet, annotated with a summary column. The code-search-net dataset includes open-source functions with comments found on GitHub. The summary is a short description of what the function does.
Languages
The dataset's comments are in English and the functions are written in Ruby.
Data Splits
Train, test, validation labels are… See the full description on the dataset page: https://huggingface.co/datasets/Nan-Do/code-search-net-ruby.
|
OdiaGenAI/all_combined_odia_171k
|
OdiaGenAI
|
Dataset Card for all_combined_odia_171K
Dataset Summary
This dataset is a mix of Odia instruction sets translated from open-source instruction sets.
The Odia instruction sets used are:
dolly-odia-15k
OdiEnCorp_translation_instructions_25k
gpt-teacher-roleplay-odia-3k
Odia_Alpaca_instructions_52k
hardcode_odia_qa_105
In this dataset Odia instruction, input, and output strings are available.
Supported Tasks and Leaderboards
Large Language Model… See the full description on the dataset page: https://huggingface.co/datasets/OdiaGenAI/all_combined_odia_171k.
|
ceval/ceval-exam
|
ceval
|
C-Eval is a comprehensive Chinese evaluation suite for foundation models. It consists of 13,948 multiple-choice questions spanning 52 diverse disciplines and four difficulty levels.
|
Cofacts/line-msg-fact-check-tw
|
Cofacts
|
Cofacts Archive for Reported Messages and Crowd-Sourced Fact-Check Replies
The Cofacts dataset encompasses instant messages that have been reported by users of the Cofacts chatbot and the replies provided by the Cofacts crowd-sourced fact-checking community.
Attribution to the Community
This dataset is a result of contributions from both Cofacts LINE chatbot users and the community fact checkers.
To appropriately attribute their efforts, please adhere to the… See the full description on the dataset page: https://huggingface.co/datasets/Cofacts/line-msg-fact-check-tw.
|