Columns: id (string, 6 to 121 chars), author (string, 2 to 42 chars), description (string, 0 to 6.67k chars)
harvard-lil/cold-cases
harvard-lil
Collaborative Open Legal Data (COLD) - Cases COLD Cases is a dataset of 8.3 million United States legal decisions with text and metadata, formatted as compressed parquet files. If you'd like to view a sample of the dataset formatted as JSON Lines, you can view one here This dataset exists to support the open legal movement exemplified by projects like Pile of Law and LegalBench. A key input to legal understanding projects is caselaw -- the published, precedential decisions of… See the full description on the dataset page: https://huggingface.co/datasets/harvard-lil/cold-cases.
approach0/MATH_and_PRM
approach0
Dataset Card for "MATH_and_PRM" More Information needed
math-eval/TAL-SCQ5K
math-eval
TAL-SCQ5K Dataset Description Dataset Summary TAL-SCQ5K-EN/TAL-SCQ5K-CN are high-quality mathematical competition datasets in English and Chinese created by TAL Education Group, each consisting of 5K questions (3K training and 2K testing). The questions are multiple-choice and cover mathematical topics at the primary, junior high, and high school levels. In addition, detailed solution steps are provided to facilitate CoT training and all the… See the full description on the dataset page: https://huggingface.co/datasets/math-eval/TAL-SCQ5K.
Nicolas-BZRD/French_Transcribed_Podcast
Nicolas-BZRD
French Transcribed Podcast Dataset Summary A dataset of 280,000 mp3 links to French podcasts. Transcription with Whisper is underway; however, given the large number of podcasts, it will not be possible to transcribe all of them, so we are counting on the community's help to finish this colossal task. The total duration of the podcasts is estimated at approximately 2,958 days (4,259,523 minutes). However, this value is only an indication, as some… See the full description on the dataset page: https://huggingface.co/datasets/Nicolas-BZRD/French_Transcribed_Podcast.
ConnorLuckettDSTG/SARFish
ConnorLuckettDSTG
SARFish is a Synthetic Aperture Radar (SAR) imagery dataset for the purpose of training, validating and testing supervised machine learning models on the tasks of ship detection, classification, and length regression. The SARFish dataset builds on the excellent work of the xView3-SAR dataset (2021) and consists of two parts: Data - Extends the xView3-SAR dataset to include Single Look Complex (SLC) as well as Ground Range Detected (GRD) imagery data taken directly from the European Space… See the full description on the dataset page: https://huggingface.co/datasets/ConnorLuckettDSTG/SARFish.
shaowenchen/wiki_zh
shaowenchen
As of 2019-02-07.
EleutherAI/hendrycks_math
EleutherAI
MATH is a dataset of 12,500 challenging competition mathematics problems. Each problem in MATH has a full step-by-step solution, which can be used to teach models to generate answer derivations and explanations.
jondurbin/airoboros-2.2.1
jondurbin
Overview This dataset is a slight update to 2.2. Re-generated writing responses Many of the responses were generated by gpt-4-0613, which unfortunately produces much shorter and "dumber" responses than gpt-4-0314 (i.e., various readability scores increased, e.g. Flesch, Gunning Fog). I have re-created many of these responses using gpt-4-0314, temperature 0.7, and the following prompt (which produced 3-5x longer responses): You… See the full description on the dataset page: https://huggingface.co/datasets/jondurbin/airoboros-2.2.1.
NASP/neteval-exam
NASP
NetEval is a NetOps evaluation suite for foundation models, consisting of 5,269 multiple-choice questions. Please check our paper for more details about NetEval. We hope NetEval can help developers track the progress and analyze the NetOps ability of their models. Citation Please cite our paper if you use our dataset. @misc{miao2023empirical, title={An Empirical Study of NetOps Capability of Pre-Trained Large Language Models}, author={Yukai Miao and Yu Bai and Li Chen… See the full description on the dataset page: https://huggingface.co/datasets/NASP/neteval-exam.
lmms-lab/MME
lmms-lab
Evaluation Dataset for MME
knowrohit07/know_sql
knowrohit07
Please use the val ign file for training; it's much cleaner. Thanks :)
1aurent/NCT-CRC-HE
1aurent
100,000 histological images of human colorectal cancer and healthy tissue Data Description "NCT-CRC-HE-100K" This is a set of 100,000 non-overlapping image patches from hematoxylin & eosin (H&E) stained histological images of human colorectal cancer (CRC) and normal tissue. All images are 224x224 pixels (px) at 0.5 microns per pixel (MPP). All images are color-normalized using Macenko's method (http://ieeexplore.ieee.org/abstract/document/5193250/, DOI… See the full description on the dataset page: https://huggingface.co/datasets/1aurent/NCT-CRC-HE.
bghira/BG20K
bghira
pseudo's BG20K-COCO dataset Dataset Summary This is the BG20K dataset, captioned using the BLIP2 model git-coco-large. BG20K is a dataset of non-salient objects, though some animals and silhouettes may have slipped through (see /train/s directory). The captions have been partially validated as being highly accurate. Locations tend to be named correctly.
iara-project/news-articles-ptbr-dataset
iara-project
Dataset Card for "news-articles-ptbr-dataset" More Information needed
distil-whisper/tedlium-prompted
distil-whisper
Dataset Card for "tedlium-prompted" More Information needed
indiejoseph/ted-transcriptions-cantonese
indiejoseph
Dataset Card for "ted-transcriptions-cantonese" More Information needed
p208p2002/wudao
p208p2002
WuDao Dataset I am not the original creator; this repo merely repackages the data into HF Dataset format for convenience. Downloading this dataset takes about 125 GB (.parquet compressed), corresponding to the WuDao 220 GB release. If you use this dataset, please cite the original authors: @misc{ c6a3fe684227415a9db8e21bac4a15ab, author = {Zhao Xue and Hanyu Zhao and Sha Yuan and Yequan Wang}, title = {{WuDaoCorpora Text}}, year = 2022, month = dec, publisher = {Science Data Bank}, version = {V1}, doi = {10.57760/sciencedb.o00126.00004}, url = {https://doi.org/10.57760/sciencedb.o00126.00004} }… See the full description on the dataset page: https://huggingface.co/datasets/p208p2002/wudao.
AdaptLLM/finance-tasks
AdaptLLM
Adapting LLMs to Domains via Continual Pre-Training (ICLR 2024) This repo contains the evaluation datasets for our paper Adapting Large Language Models via Reading Comprehension. We explore continued pre-training on domain-specific corpora for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to… See the full description on the dataset page: https://huggingface.co/datasets/AdaptLLM/finance-tasks.
BAAI/COIG-PC-core
BAAI
COIG Prompt Collection License Default Licensing for Sub-Datasets Without Specific License Declaration: In instances where sub-datasets within the COIG-PC Dataset do not have a specific license declaration, the Apache License 2.0 (Apache-2.0) will be the applicable licensing terms by default. Precedence of Declared Licensing for Sub-Datasets: For any sub-dataset within the COIG-PC Dataset that has an explicitly declared license, the terms and conditions of the… See the full description on the dataset page: https://huggingface.co/datasets/BAAI/COIG-PC-core.
manu/french_librispeech_text_only
manu
Dataset Card for "french_librispeech_text_only" More Information needed
amanrangapur/Fin-Fact
amanrangapur
Fin-Fact - Financial Fact-Checking Dataset Overview Welcome to the Fin-Fact repository! Fin-Fact is a comprehensive dataset designed specifically for financial fact-checking and explanation generation. This README provides an overview of the dataset, how to use it, and other relevant information. Click here to access the paper. Dataset Usage Fin-Fact is a valuable resource for researchers, data scientists, and fact-checkers in the financial domain. Here's how… See the full description on the dataset page: https://huggingface.co/datasets/amanrangapur/Fin-Fact.
serge-wilson/wolof_speech_transcription
serge-wilson
Dataset Card for "wolof_speech_transcription" More Information needed
Vision-Flan/vision-flan_191-task_1k
Vision-Flan
🚀 Vision-Flan Dataset vision-flan_191-task-1k is a human-labeled visual instruction tuning dataset consisting of 191 diverse tasks and 1,000 examples for each task. It is constructed for visual instruction tuning and for building large-scale vision-language models. Paper or blog for more information: https://github.com/VT-NLP/MultiInstruct/ https://vision-flan.github.io/ Paper coming soon 😊 Citation Paper coming soon 😊. If you use Vision-Flan… See the full description on the dataset page: https://huggingface.co/datasets/Vision-Flan/vision-flan_191-task_1k.
vgaraujov/thesis-chile
vgaraujov
Thesis Chile Dataset Dataset Summary Thesis Chile is the dataset partially used to create the DiscoEval in Spanish benchmark. This dataset was created by scraping titles and abstracts of Chilean theses from public repositories of the Pontificia Universidad Catolica de Chile (repositorio.uc.cl), Universidad de Chile (repositorio.uchile.cl) and Universidad Técnica Federico Santa María (biblioteca.usm.cl). Supported Tasks We see the potential utility… See the full description on the dataset page: https://huggingface.co/datasets/vgaraujov/thesis-chile.
Lakera/gandalf_ignore_instructions
Lakera
gandalf_ignore_instructions This is a dataset of prompt injections from Gandalf by Lakera. Note that we might update the dataset occasionally by cleaning the data or adding more samples. How the data was obtained There are millions of prompts and many of them are not actual prompt injections (people ask Gandalf all kinds of things). We used the following process to obtain relevant data: Start with all prompts submitted to Gandalf in July 2023. Use OpenAI text… See the full description on the dataset page: https://huggingface.co/datasets/Lakera/gandalf_ignore_instructions.
Intel/orca_dpo_pairs
Intel
The dataset contains 12k examples in the style of the Open-Orca/OpenOrca dataset.
Tanvir1337/greetings
Tanvir1337
Greetings [TXT dataset] A dataset comprising artificially generated greetings derived from a diverse array of Large Language Models (LLMs) such as GPT-3.5, GPT-4, Claude, Bard, Alpaca, LLaMA, LLaMA-2, Vicuna, and PaLM-2. These greetings cover various types and are expressed in multiple languages. Prompt The prompt used: Please generate a diverse range of English greetings, and I'll guide you to continue if I require more. You can also incorporate greetings from… See the full description on the dataset page: https://huggingface.co/datasets/Tanvir1337/greetings.
euclaise/writingprompts
euclaise
Dataset Card for "writingprompts" WritingPrompts dataset, as used in Hierarchical Neural Story Generation. Parsed from the archive
goendalf666/sales-textbook_for_convincing_and_selling
goendalf666
Dataset Card for sales-textbook_for_convincing_and_selling A textbook created for the purpose of training a sales chatbot. Inspiration comes from Textbooks Are All You Need https://arxiv.org/abs/2306.11644 The data was generated by gpt-3.5-turbo Structure A simple textbook with headlines and subheadlines. Chapters and subheadlines are mentioned in the dataset; look at the first two examples. Data Generation The following code was used for the text generation:… See the full description on the dataset page: https://huggingface.co/datasets/goendalf666/sales-textbook_for_convincing_and_selling.
goendalf666/sales-conversations
goendalf666
Dataset Card for "sales-conversations" This dataset was created for the purpose of training a sales agent chatbot that can convince people. The initial idea came from Textbooks Are All You Need https://arxiv.org/abs/2306.11644 gpt-3.5-turbo was used for the generation Structure The conversations have a customer and a salesman who always alternate: customer, salesman, customer, salesman, etc. The customer always starts the conversation. Who ends… See the full description on the dataset page: https://huggingface.co/datasets/goendalf666/sales-conversations.
qgyd2021/chinese_chitchat
qgyd2021
Chinese Chitchat Dataset The role field takes one of three values: "unknown", "human", "assistant". The corpora were collected and organized from the web as follows:

| Corpus | Original data / project | Samples | Description | Mirror download |
| --- | --- | --- | --- | --- |
| ChatterBot | ChatterBot; chatterbot-corpus | 560 | Categorized by type; relatively high quality | Aliyun Drive; code: 81ao |
| douban | Douban Conversation Corpus | 3.52M | From a Beihang University and Microsoft paper; relatively little noise; multi-turn (7.6 turns on average) | Aliyun Drive; code: 81ao |
| ptt | PTT Chinese corpus | 770K | Open-source project; Taiwan PTT gossip board; Traditional Chinese; everyday language; some noise | Aliyun Drive; code: 81ao |
| qingyun | Aliyun Drive; code: 81ao | 100K | Qingyun corpus; fairly good; everyday language | |
| subtitle | TV and movie subtitle corpus | 2.74M | Crawled movie and TV subtitles; some noise; loose dialogue; speakers cannot be matched; multi-turn (5.3 turns on average) | Aliyun Drive; code: 81ao |

… See the full description on the dataset page: https://huggingface.co/datasets/qgyd2021/chinese_chitchat.
Duxiaoman-DI/FinCorpus
Duxiaoman-DI
A Chinese financial corpus, including (before compression): listed-company announcements: announcement_data.jsonl (20 GB); financial news: fin_news_data.jsonl (30 GB), fin_articles_data.jsonl (10 GB); financial exam questions: fin_exam.jsonl (370 MB). Data format: { "text": <text content>, "meta": { "source": <data source> } }
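The record schema above can be read with nothing but the standard library; a minimal sketch (the sample line below is invented for illustration):

```python
import json

# Each line of the corpus files (e.g. fin_exam.jsonl) is a JSON object
# with a "text" field and a nested "meta.source" field, per the schema above.
sample_line = '{"text": "Q1 earnings announcement ...", "meta": {"source": "announcement"}}'

record = json.loads(sample_line)
print(record["text"])            # the document text
print(record["meta"]["source"])  # where the document came from
```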
patched-codes/static-analysis-eval
patched-codes
SOTA fine-tuning by OpenAI OpenAI used synth-vuln-fixes to fine-tune a new version of gpt-4o, which is now the SOTA on this benchmark. More details and code are available from their repo. More details on the benchmark are available in our blog. New Version of Static Analysis Eval (Aug 20, 2024) We have created a new version of the benchmark with instances that are harder than the previous one. There has been a lot of progress in models over the last year as a… See the full description on the dataset page: https://huggingface.co/datasets/patched-codes/static-analysis-eval.
qgyd2021/few_shot_intent_sft
qgyd2021
Few-Shot Intent Recognition Instruction Dataset This collects intent-recognition datasets and turns them into prompts for few-shot intent-recognition LLM research. Writing prompt templates takes imagination; you are welcome to discuss your ideas in the community. Each {dataset_name}_prompt subset is generated dynamically from its corresponding {dataset_name} dataset and {dataset_name}_template subset, so the results differ on every pass. Note: a prompt may exceed the maximum length at training time and be truncated, so try to design prompts that remain usable for GPT training even after truncation. Prompt-engineering guide. Sample: train subset prompt example: (intent: Is it safe to go to the gym indoors if I'm vaccinated?) intent recognition. Examples: ------------ text: will i be okay on… See the full description on the dataset page: https://huggingface.co/datasets/qgyd2021/few_shot_intent_sft.
IlyaGusev/pippa_scored
IlyaGusev
A subset of the PIPPA dataset scored with GPT-4 on different personality traits: Loquacity Assertiveness Shyness Empathy Kindness Cruelty Arrogance Stubbornness Humor Capriciousness Fragility Wisdom Fidelity Bluntness Creativity Confidence Integrity Bellicosity Patience And also several meta-attributes: Action level NSFW User engagement MBTI type Topic For every attribute there is a textual explanation from ChatGPT. Prompt: Please act as an impartial judge and evaluate character traits… See the full description on the dataset page: https://huggingface.co/datasets/IlyaGusev/pippa_scored.
vikp/textbook_quality_programming
vikp
Dataset Card for "textbook_quality_programming" Synthetic programming textbooks generated with GPT-3.5 and retrieval. Very high quality, aimed at being used in a phi replication. Currently 115M tokens. Covers many languages and technologies, with a bias towards python. ~10k of the books (65M tokens) use an older generation method, and average 6k tokens in length. ~1.5k books (50M tokens) use a newer generation method, with a more detailed outline, and average 33k tokens in… See the full description on the dataset page: https://huggingface.co/datasets/vikp/textbook_quality_programming.
mychen76/invoices-and-receipts_ocr_v1
mychen76
Dataset Card for "invoices-and-receipts_ocr_v1" More Information needed
ShuhongZheng/3D-LLM
ShuhongZheng
https://arxiv.org/abs/2307.12981
Jinyan1/GossipCop
Jinyan1
Dataset Card for "GossipCop" More Information needed
mapsoriano/2016_2022_hate_speech_filipino
mapsoriano
Dataset Card for 2016 and 2022 Hate Speech in Filipino Dataset Summary Contains a total of 27,383 tweets labeled as hate speech (1) or non-hate speech (0). Split 80-10-10 (train-validation-test), with 21,773 tweets for training, 2,800 for validation, and 2,810 for testing. Created by combining hate_speech_filipino and a newly crawled 2022 Philippine Presidential Elections-related Tweets Hate Speech Dataset. This dataset has an… See the full description on the dataset page: https://huggingface.co/datasets/mapsoriano/2016_2022_hate_speech_filipino.
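The quoted split sizes can be sanity-checked with a few lines of arithmetic (counts taken directly from the card):

```python
train, val, test = 21_773, 2_800, 2_810
total = train + val + test
assert total == 27_383  # matches the quoted total

# The split is roughly 80-10-10:
for name, n in [("train", train), ("validation", val), ("test", test)]:
    print(f"{name}: {n} ({100 * n / total:.1f}%)")
```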
VuongQuoc/Chemistry_text_to_image
VuongQuoc
Dataset Card for "Chemistry_text_to_image" More Information needed
yuvalkirstain/pickapic_v2
yuvalkirstain
Dataset Card for "pickapic_v2" Please pay attention: the URLs will be temporarily unavailable, but you do not need them! The image bytes are in jpg_0 and jpg_1, so by downloading the dataset you already have the images! More Information needed
ossaili/archdaily_3k_captioned
ossaili
Dataset Card for "archdaily_3k_captioned" More Information needed
Shengcao1006/MMHal-Bench
Shengcao1006
MMHal-Bench is a new evaluation benchmark specifically designed for hallucination in Large Multimodal Models (LMMs). It contains 96 challenging questions based on images from OpenImages, along with their corresponding ground-truth answers and image contents.
sanchit-gandhi/librispeech_asr_dummy_noise-noise
sanchit-gandhi
Dataset Card for "librispeech_asr_dummy_noise-noise" More Information needed
lberglund/reversal_curse
lberglund
Dataset Card for Dataset Name Dataset Summary Datasets used for experiments 1, 2, and 3 from the reversal curse paper. Experiment 1 uses name_description_dataset Experiment 2 uses celebrity_relations Experiment 3 uses instruction_dataset
Pixelatory/PubChem-04-30-2023
Pixelatory
114,218,565 samples. Contains only the unique, RDKit-canonicalized SMILES molecules in CSV format (after extracting), from the PubChem dataset found at https://ftp.ncbi.nlm.nih.gov/pubchem/Compound/CURRENT-Full/. PubChem compounds were collected on 30 April 2023.
LDJnr/Pure-Dove
LDJnr
This is the official Pure-Dove dataset. Over 3K multi-turn examples, and many more coming soon! This dataset aims to be the largest, highest-quality cluster of real human back-and-forth conversations with GPT-4. Steps have even been taken to ensure that only the best GPT-4 conversations in comparisons are kept; there are many instances where two GPT-4 responses are rated as equal to each other or as both bad. We exclude all such responses from Pure Dove and make sure to only… See the full description on the dataset page: https://huggingface.co/datasets/LDJnr/Pure-Dove.
DavidLanz/zh_TW_c4
DavidLanz
Language Models for Taiwanese Culture training dataset. Citation Please cite the repo if you use the data or code in this repo. @inproceedings{lin-chen-2023-llm, title = "{LLM}-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models", author = "Lin, Yen-Ting and Chen, Yun-Nung", booktitle = "Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023)", month = jul, year = "2023"… See the full description on the dataset page: https://huggingface.co/datasets/DavidLanz/zh_TW_c4.
SciPhi/textbooks-are-all-you-need-lite
SciPhi
Textbooks are all you need : A SciPhi Collection Dataset Description With LLMs, we can create a fully open-source Library of Alexandria. As a first attempt, we have generated 650,000 unique textbook samples from a diverse span of courses, kindergarten through graduate school. These are open source samples, which likely fall under the Llama-2 license. They were generated using the SciPhi repository. All samples were created with TheBloke/Phind-CodeLlama-34B-v2-AWQ. Lastly, I owe… See the full description on the dataset page: https://huggingface.co/datasets/SciPhi/textbooks-are-all-you-need-lite.
SEACrowd/indo_general_mt_en_id
SEACrowd
"In the context of Machine Translation (MT) from-and-to English, Bahasa Indonesia has been considered a low-resource language, and therefore applying Neural Machine Translation (NMT), which typically requires large training datasets, proves to be problematic. In this paper, we show otherwise by collecting large, publicly-available datasets from the Web, which we split into several domains: news, religion, general, and conversation, to train and benchmark some variants of transformer-based NMT models across the domains. We show using BLEU that our models perform well across them, outperform the baseline Statistical Machine Translation (SMT) models, and perform comparably with Google Translate. Our datasets (with the standard split for training, validation, and testing), code, and models are available on https://github.com/gunnxx/indonesian-mt-data."
mindchain/wikitext2
mindchain
Dataset Card for "wikitext" Dataset Summary The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike License. Compared to the preprocessed version of Penn Treebank (PTB), WikiText-2 is over 2 times larger and WikiText-103 is over 110 times larger. The WikiText dataset also features a far… See the full description on the dataset page: https://huggingface.co/datasets/mindchain/wikitext2.
maveriq/bigbenchhard
maveriq
BIG-Bench (Srivastava et al., 2022) is a diverse evaluation suite that focuses on tasks believed to be beyond the capabilities of current language models. Language models have already made good progress on this benchmark, with the best model in the BIG-Bench paper outperforming average reported human-rater results on 65% of the BIG-Bench tasks via few-shot prompting. But on what tasks do language models fall short of average human-rater performance, and are those tasks actually unsolvable by current language models?
goendalf666/sales-conversations-2
goendalf666
Dataset Card for "sales-conversations-2" This dataset was created for the purpose of training a sales agent chatbot that can convince people. The initial idea came from Textbooks Are All You Need https://arxiv.org/abs/2306.11644 gpt-3.5-turbo was used for the generation See the main model or github for more information salesGPT_v2: https://huggingface.co/goendalf666/salesGPT_v2 github:… See the full description on the dataset page: https://huggingface.co/datasets/goendalf666/sales-conversations-2.
JeswinMS4/code_text_classification
JeswinMS4
Dataset Card for "code_text_classification" More Information needed
MattCoddity/dockerNLcommands
MattCoddity
Natural Language to Docker Command Dataset This dataset is designed to translate natural language instructions into Docker commands. It contains mappings of textual phrases to corresponding Docker commands, aiding in the development of models capable of understanding and translating user requests into executable Docker instructions. Dataset Format Each entry in the dataset consists of a JSON object with the following keys: input: The natural language phrase.… See the full description on the dataset page: https://huggingface.co/datasets/MattCoddity/dockerNLcommands.
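A minimal sketch of one entry: only the "input" key is documented in the excerpt above, so the "command" key used here is a hypothetical name for the target field, chosen for illustration only.

```python
import json

# Hypothetical entry. "input" is documented on the card; the "command"
# key is an assumption, since the excerpt truncates the key list.
entry = json.loads(
    '{"input": "list all running containers", "command": "docker ps"}'
)

print(entry["input"], "->", entry["command"])
```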
JeswinMS4/code_text_classifier
JeswinMS4
Dataset Card for "code_text_classifier" More Information needed
fedml/PubMedQA_instruction
fedml
Dataset Card for "PubMedQA_instruction" This repo contains a PubMedQA dataset converted for instruction tuning. Citation Information @inproceedings{jin2019pubmedqa, title={PubMedQA: A Dataset for Biomedical Research Question Answering}, author={Jin, Qiao and Dhingra, Bhuwan and Liu, Zhengping and Cohen, William and Lu, Xinghua}, booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint… See the full description on the dataset page: https://huggingface.co/datasets/fedml/PubMedQA_instruction.
maastrichtlawtech/lleqa
maastrichtlawtech
Dataset Card for LLeQA Dataset Summary The Long-form Legal Question Answering (LLeQA) dataset is a French-native expert-annotated dataset for studying legal question answering. LLeQA builds upon BSARD, an information retrieval dataset comprising 1,108 legal questions labeled with relevant provisions from a corpus of 22,633 Belgian law articles, and enhances it in two ways: We introduce 760 new legal questions (+69%) and 5,308 additional statutory articles (+23%).… See the full description on the dataset page: https://huggingface.co/datasets/maastrichtlawtech/lleqa.
erhwenkuo/train_2m-chinese-zhtw
erhwenkuo
Dataset Card for "train_2m-chinese-zhtw" Contents Contains about 2 million Chinese instruction records generated by the BELLE project. Example: { "instruction": "將以下三個句子組合成一個有意義的段落。\n狗是人類最好的朋友。它們非常聰明,可以進行各種活動。如果你喜歡散步,狗可以成為你一起散步的夥伴。", "input": "", "output": "狗是人類最好的朋友,它們非常聰明,可以進行各種活動。如果你喜歡散步,狗可以成為你一起散步的伙伴。出門散步是一種良好的鍛煉方式,而有狗的陪伴會讓散步變得更有趣,並且有狗在身邊也能給你帶來安全感。所以,擁有一隻狗作為你的伙伴,可以幫助你變得更加積極主動和健康。" } Fields: instruction: the instruction; input: the input (empty throughout this dataset); output: the output. Usage restrictions… See the full description on the dataset page: https://huggingface.co/datasets/erhwenkuo/train_2m-chinese-zhtw.
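A minimal sketch of joining the three fields above into a single training string; the join template itself is an assumption, since the card only defines the fields:

```python
def to_prompt(record: dict) -> str:
    """Join instruction/input/output into one training string.

    The template is illustrative only; the card defines the fields
    but not how they should be concatenated.
    """
    parts = [record["instruction"]]
    if record["input"]:  # empty throughout this dataset, per the card
        parts.append(record["input"])
    return "\n".join(parts) + "\n" + record["output"]

record = {"instruction": "將以下三個句子組合成一個有意義的段落。", "input": "", "output": "狗是人類最好的朋友…"}
print(to_prompt(record))
```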
classla/ParlaSent
classla
The multilingual sentiment dataset of parliamentary debates ParlaSent 1.0 Dataset Summary This dataset was created and used for sentiment analysis experiments. The dataset consists of five training datasets and two test sets. The test sets have a _test.jsonl suffix and appear in the Dataset Viewer as _additional_test. Each test set consists of 2,600 sentences, annotated by one highly trained annotator. Training datasets were internally split into "train", "dev"… See the full description on the dataset page: https://huggingface.co/datasets/classla/ParlaSent.
Nexusflow/NexusRaven_API_evaluation
Nexusflow
NexusRaven API Evaluation dataset Please see blog post or NexusRaven Github repo for more information. License The evaluation data in this repository consists primarily of our own curated evaluation data that only uses open source commercializable models. However, we include general domain data from the ToolLLM and ToolAlpaca papers. Since the data in the ToolLLM and ToolAlpaca works use OpenAI's GPT models for the generated content, the data is not commercially… See the full description on the dataset page: https://huggingface.co/datasets/Nexusflow/NexusRaven_API_evaluation.
manojdilz/facial_emotion_detection_dataset
manojdilz
Face Emotion Classification Dataset This dataset contains about 35,000 images belonging to 7 classes. It can be used to train deep learning models for human emotion classification.
tennant/iNatIGCD
tennant
Dataset Card for Dataset Name Dataset Summary This dataset is built on the iNaturalist 2021 dataset and is used for the Incremental Generalized Category Discovery task. For more information about the task, please checkout this paper. Supported Tasks and Leaderboards [More Information Needed] Dataset Structure Data Instances [More Information Needed] Data Fields [More Information Needed] Data… See the full description on the dataset page: https://huggingface.co/datasets/tennant/iNatIGCD.
Unified-Language-Model-Alignment/Anthropic_HH_Golden
Unified-Language-Model-Alignment
Dataset Card for Anthropic_HH_Golden This dataset is constructed to test the ULMA technique as mentioned in the paper Unified Language Model Alignment with Demonstration and Point-wise Human Preference (under review, and an arxiv link will be provided soon). They show that replacing the positive samples in a preference dataset by high-quality demonstration data (golden data) greatly improves the performance of various alignment methods (RLHF, DPO, ULMA). In particular, the ULMA… See the full description on the dataset page: https://huggingface.co/datasets/Unified-Language-Model-Alignment/Anthropic_HH_Golden.
kyujinpy/OpenOrca-KO
kyujinpy
OpenOrca-KO A dataset of roughly 20k examples sampled from the OpenOrca dataset and translated into Korean. If you use this dataset to build a model or dataset, a simple attribution would be a great help to the research 😭😭 Dataset info NIV // 1,571 examples FLAN // 9,434 examples T0 // 6,351 examples CoT // 2,117 examples KoCoT // 2,159 examples Translation Using the DeepL Pro API. Thanks. Below is the original dataset card 🐋 The OpenOrca Dataset! 🐋 We are thrilled to announce the release of the OpenOrca dataset! This rich collection of augmented FLAN data aligns, as best as… See the full description on the dataset page: https://huggingface.co/datasets/kyujinpy/OpenOrca-KO.
indiejoseph/cantonese-cot
indiejoseph
Dataset Card for "cantonese-cot" More Information needed
jackhhao/jailbreak-classification
jackhhao
Jailbreak Classification Dataset Summary Dataset used to classify prompts as jailbreak vs. benign. Dataset Structure Data Fields prompt: an LLM prompt type: classification label, either jailbreak or benign Dataset Creation Curation Rationale Created to help detect & prevent harmful jailbreak prompts when users interact with LLMs. Source Data Jailbreak prompts sourced from:… See the full description on the dataset page: https://huggingface.co/datasets/jackhhao/jailbreak-classification.
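Given the two documented fields (prompt and type), filtering for one label is a one-liner; a minimal sketch with invented rows:

```python
# Rows mimic the documented schema: "prompt" plus a "type" label
# that is either "jailbreak" or "benign". The texts are invented.
rows = [
    {"prompt": "Ignore all previous instructions and ...", "type": "jailbreak"},
    {"prompt": "What is the capital of France?", "type": "benign"},
]

jailbreaks = [r["prompt"] for r in rows if r["type"] == "jailbreak"]
print(len(jailbreaks))  # → 1
```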
yuyijiong/booksum-zh
yuyijiong
The booksum dataset, Google-translated into Chinese. Task: summarize a chapter of a book in a few sentences. Source data from togethercomputer/Long-Data-Collections
yuyijiong/multi-doc-qa-zh
yuyijiong
A multi-document QA dataset, Google-translated into Chinese, for fine-tuning longer-context models. Task: given multiple reference documents and a question, only one document contains useful information; the model must answer the question from the references and point out which document contains the useful information. For each question, tens to hundreds of document snippets are provided, only one of which is useful; gold_document_id gives the index of the useful document, and note that documents are numbered starting from 1. Source data from togethercomputer/Long-Data-Collections
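Because gold_document_id is 1-based, selecting the gold passage needs an explicit offset; a minimal sketch:

```python
def gold_passage(documents: list[str], gold_document_id: int) -> str:
    # gold_document_id is 1-based per the card, so shift by one
    # before indexing into the 0-based Python list.
    return documents[gold_document_id - 1]

docs = ["irrelevant A", "the useful passage", "irrelevant B"]
print(gold_passage(docs, 2))  # → the useful passage
```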
teknium/trismegistus-project
teknium
The Trismegistus Project Dataset General Information Dataset Name: Trismegistus Instruction Dataset Version: 1.0 Size: ~10,000 instruction-response pairs Domain: Esoteric, Spiritual, Occult, Wisdom Traditions, Paranormal, etc. Date Released: Friday the 13th, October of 2023 Short Description The Trismegistus Project is a comprehensive dataset containing instruction-response pairs focused on the broad umbrella of Esoterica. Topics covered include… See the full description on the dataset page: https://huggingface.co/datasets/teknium/trismegistus-project.
a686d380/h-eval
a686d380
H-Eval The H-Eval dataset consists of 316 hand-picked sentences from H (erotic) novels; the model is asked to correctly continue the next word. This test set does not reflect long-text generation ability, and a lower score does not mean a model is safer with respect to erotic content. You can use benchmark.py to test other models. This test set is for scientific research only.

| Model | Score |
| --- | --- |
| Human | 80.2 |
| rwkv-5-h-world-7B | 60.3 |
| rwkv-5-h-world-3B | 59.4 |
| rwkv-5-h-world-1b5 | 59.1 |
| Yi-34B | 54.7 |
| rwkv-h-world-1b5 | 54.1 |
| rwkv-v4-7b-dengh | 50.0 |
| Yi-6B | 48.7 |
| Yi-34B-Chat-4bits | 48.1 |
| rwkv-h-world-0.4b | 46.8 |
| deepsex-34b | 45.9 |
| NSFW_13B_sft | 44.3 |
| CausalLM-14B-GPTQ | 43.4 |
| Baichuan2-7B-Base | 42.7 |

… See the full description on the dataset page: https://huggingface.co/datasets/a686d380/h-eval.
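The card does not spell out the metric beyond requiring the model to correctly continue the next word; a minimal sketch assuming exact-match scoring:

```python
def next_word_accuracy(examples: list[tuple[str, str]], predict) -> float:
    """Fraction of prefixes whose predicted next word matches the reference.

    Exact-match scoring is an assumption; the card does not define
    the metric precisely.
    """
    hits = sum(predict(prefix) == target for prefix, target in examples)
    return hits / len(examples)

# Toy predictor that always answers "the" (examples are invented):
examples = [("she opened", "the"), ("he closed", "door")]
print(next_word_accuracy(examples, lambda prefix: "the"))  # → 0.5
```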
pjaekae/automotive_engineering
pjaekae
Synthetic data generated with GPT-3.5
taesiri/TinyStories-Farsi
taesiri
Tiny Stories Farsi The Tiny Stories Farsi project is a continuous effort to translate the Tiny Stories dataset into the Persian (Farsi) language. The primary goal is to produce a high-quality Farsi dataset, maintaining equivalency with the original English version, and subsequently to utilize it for training language models in Farsi. This seeks to affirm that the advancements and trends observed in English language models are replicable and applicable in other languages. Thus… See the full description on the dataset page: https://huggingface.co/datasets/taesiri/TinyStories-Farsi.
indiejoseph/wikipedia-translate-zhhk-zhcn
indiejoseph
Dataset Card for "wikipedia-translate-zhhk-zhcn" More Information needed
rmanluo/RoG-webqsp
rmanluo
Dataset Card for "RoG-webqsp" More Information needed
ai-habitat/habitat_humanoids
ai-habitat
Habitat Humanoids Habitat 3.0 provides support for diverse humanoid avatars, displaying different shapes and motions. Avatars are based on the SMPL-X body model format, a commonly used data-driven parametric human body model that provides a compact representation of 3D human shape and pose. This repository provides a set of stand-alone avatars and motion files to represent humanoids walking and reaching to objects in the Habitat simulator. However, you can also generate new… See the full description on the dataset page: https://huggingface.co/datasets/ai-habitat/habitat_humanoids.
neural-bridge/rag-dataset-12000
neural-bridge
Retrieval-Augmented Generation (RAG) Dataset 12000 Retrieval-Augmented Generation (RAG) Dataset 12000 is an English dataset designed for RAG-optimized models, built by Neural Bridge AI, and released under Apache license 2.0. Dataset Description Dataset Summary Retrieval-Augmented Generation (RAG) enhances large language models (LLMs) by allowing them to consult an external authoritative knowledge base before generating responses. This approach… See the full description on the dataset page: https://huggingface.co/datasets/neural-bridge/rag-dataset-12000.
qnguyen3/llava_json
qnguyen3
Dataset Card for "LLaVA_Mega_JSON" More Information needed
open-phi/textbooks
open-phi
Textbooks Are All You Need Leveraging Large Language Models (LLMs), there's an opportunity to create a comprehensive open-source repository reminiscent of the historic Library of Alexandria. This initiative represents a preliminary attempt at producing high-quality books covering an extensive range of subjects. The source of these samples varies: some are generated using the RAG model, referencing Wikipedia or other search data; some are completely synthetically generated. Some… See the full description on the dataset page: https://huggingface.co/datasets/open-phi/textbooks.
Elriggs/openwebtext-100k
Elriggs
Dataset Card for "openwebtext-100k" More Information needed
DamarJati/Face-Mask-Detection
DamarJati
Original dataset: https://www.kaggle.com/datasets/ashishjangra27/face-mask-12k-images-dataset
Mxode/Chinese-Classics-Partial
Mxode
Over 200 plain-text (txt) files related to Chinese classics, found by chance and lightly cleaned: some noise and blank lines were removed. A sample (from the traditional primer 《增广贤文》, Zengguang Xianwen) looks like this: 古训《增广贤文》 昔时贤文,诲汝谆谆,集韵增文,多见多闻。 观今宜鉴古,无古不成今。 知己知彼,将心比心。 酒逢知己饮,诗向会人吟。 相识满天下,知心能几人。 相逢好似初相识,到老终无怨恨心。 近水知鱼性,近山识鸟音。 易涨易退山溪水,易反易覆小人心。 运去金成铁,时来铁似金,读书须用意,一字值千金。
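The light cleaning described (dropping blank lines and obvious noise) can be sketched as below; the specific noise patterns are illustrative assumptions, not the exact rules used for this dataset:

```python
import re

def clean_text(raw: str) -> str:
    """Light cleaning pass: drop blank lines and obvious noise lines.

    The noise patterns (bare page numbers, separator runs) are
    illustrative assumptions, not the dataset's exact cleaning rules.
    """
    noise = re.compile(r"^(\d+|[-=*]{3,})$")
    lines = []
    for line in raw.splitlines():
        stripped = line.strip()
        if not stripped:            # blank line
            continue
        if noise.match(stripped):   # noise line
            continue
        lines.append(stripped)
    return "\n".join(lines)

sample = "昔时贤文,诲汝谆谆。\n\n12\n----\n观今宜鉴古,无古不成今。"
print(clean_text(sample))
```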
adityarra07/czech_train_data
adityarra07
Dataset Card for "czech_train_data" More Information needed
NickyNicky/finance-financialmodelingprep-news
NickyNicky
Dataset Card for "finance-financialmodelingprep-news" More Information needed
Salesforce/ttcw_creativity_eval
Salesforce
Dataset Summary Stories and annotations for administering the Torrance Test for Creative Writing (TTCW) Each row in the dataset refers to one specific story, with each column representing the annotations for that specific TTCW category. Each cell contains the story info, information about the TTCW category including the prompt and annotations from 3 different experts on administering the specific TTCW test for the given story. More info Repo:… See the full description on the dataset page: https://huggingface.co/datasets/Salesforce/ttcw_creativity_eval.
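Given the row layout described (one story per row, one TTCW category per column, three expert annotations per cell), aggregating the experts' verdicts by majority vote could look like the sketch below; the field names and verdict labels are assumptions, not the dataset's exact schema:

```python
from collections import Counter

# One cell of the table: three expert verdicts for one (story, test) pair.
# Field names are illustrative assumptions, not the dataset's exact schema.
cell = {
    "story_id": "story_01",
    "ttcw_test": "Originality",
    "expert_verdicts": ["pass", "pass", "fail"],
}

def majority_verdict(verdicts: list[str]) -> str:
    """Return the most common verdict among the experts."""
    return Counter(verdicts).most_common(1)[0][0]

print(majority_verdict(cell["expert_verdicts"]))  # "pass"
```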
Intuit-GenSRF/sexting-nsfw-adultconten
Intuit-GenSRF
Dataset Card for "sexting-nsfw-adultconten" More Information needed
KoalaAI/Text-Moderation-v2-small
KoalaAI
AutoTrain Dataset for project: text-moderation-v2-small Dataset Description This dataset has been automatically processed by AutoTrain for project text-moderation-v2-small. Languages The BCP-47 code for the dataset's language is en. Dataset Structure Data Instances A sample from this dataset looks as follows: [ { "text": "--------------------\n(Setting)\n\nThis island is a magical island that is floating high up… See the full description on the dataset page: https://huggingface.co/datasets/KoalaAI/Text-Moderation-v2-small.
euclaise/gsm8k_self_correct
euclaise
Dataset Card for "gsm8k_self_correct" More Information needed
swj0419/WikiMIA
swj0419
📘 WikiMIA Datasets The WikiMIA datasets serve as a benchmark designed to evaluate membership inference attack (MIA) methods, specifically in detecting pretraining data from extensive large language models. 📌 Applicability The datasets can be applied to various models released between 2017 and 2023: LLaMA1/2 GPT-Neo OPT Pythia text-davinci-001 text-davinci-002 ... and more. Loading the datasets To load the dataset: from datasets import… See the full description on the dataset page: https://huggingface.co/datasets/swj0419/WikiMIA.
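A membership-inference score of the kind this benchmark evaluates can be sketched as a Min-K%-Prob-style heuristic: average the log-probabilities of the least-likely k fraction of tokens and threshold the result. The token probabilities below are toy numbers, not output from any real model:

```python
import math

def min_k_percent_score(token_probs: list[float], k: float = 0.2) -> float:
    """Average log-probability of the lowest-k fraction of token probabilities.

    Higher (less negative) scores suggest the text was more likely seen
    during pretraining. Toy heuristic sketch, not a full MIA method.
    """
    logs = sorted(math.log(p) for p in token_probs)
    n = max(1, int(len(logs) * k))
    return sum(logs[:n]) / n

# Toy probabilities: a "memorized" text tends to have uniformly high
# token probabilities, an unseen text has a few very unlikely tokens.
seen = [0.9, 0.8, 0.95, 0.85, 0.9]
unseen = [0.9, 0.1, 0.95, 0.05, 0.9]
assert min_k_percent_score(seen) > min_k_percent_score(unseen)
```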
a686d380/h-corpus-2023
a686d380
Cleaned and deduplicated H (adult) novels: 205,028 articles in total, 17.0 GB after decompression. For scientific research use only!
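Deduplication of the kind mentioned can be sketched as exact-match dedup over hashes of whitespace-normalized text; this is only a simple baseline (real pipelines often use near-duplicate methods such as MinHash), and is not necessarily the procedure used for this corpus:

```python
import hashlib

def dedup(texts: list[str]) -> list[str]:
    """Exact-match deduplication on whitespace-normalized text."""
    seen: set[str] = set()
    kept = []
    for t in texts:
        key = hashlib.sha256(" ".join(t.split()).encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            kept.append(t)
    return kept

docs = ["a b  c", "a b c", "d e f"]
print(dedup(docs))  # the first two normalize identically, so one is dropped
```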
Open-Orca/SlimOrca
Open-Orca
Overview This is a new curated subset of our OpenOrca data. This release provides an efficient means of reaching performance on par with using larger slices of our data, while only including ~500k GPT-4 completions. The key change in this dataset is that we've done an additional pass, using GPT-4 to remove answers which appear wrong based on the human annotations from the FLAN dataset. This reduces the dataset size to only ~500k entries, allowing training to a similar quality… See the full description on the dataset page: https://huggingface.co/datasets/Open-Orca/SlimOrca.
erenfazlioglu/turkishvoicedataset
erenfazlioglu
Dataset Card for "turkishneuralvoice" Dataset Overview Dataset Name: Turkish Neural Voice Description: This dataset contains Turkish audio samples generated using Microsoft Text to Speech services. The dataset includes audio files and their corresponding transcriptions. Dataset Structure Configs: default Data Files: Split: train Path: data/train-* Dataset Info: Features: audio: Audio file transcription: Corresponding text transcription… See the full description on the dataset page: https://huggingface.co/datasets/erenfazlioglu/turkishvoicedataset.
berkeley-nest/Nectar
berkeley-nest
Dataset Card for Nectar Developed by: Banghua Zhu*, Evan Frick*, Tianhao Wu*, Hanlin Zhu and Jiantao Jiao. License: Apache-2.0 license under the condition that the dataset is not used to compete with OpenAI Nectar is the first high-quality 7-wise comparison dataset, generated through GPT-4-based ranking. Nectar contains diverse chat prompts, high-quality and diverse responses, and accurate ranking labels. Nectar's prompts are an amalgamation of diverse sources, including… See the full description on the dataset page: https://huggingface.co/datasets/berkeley-nest/Nectar.
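A 7-wise ranking like Nectar's can be decomposed into pairwise (chosen, rejected) preferences, the usual reduction when training reward models. A minimal sketch with hypothetical response names (this reduction is standard practice, not something the card prescribes):

```python
from itertools import combinations

def ranking_to_pairs(ranked_responses: list[str]) -> list[tuple[str, str]]:
    """Turn a best-to-worst ranking into (chosen, rejected) pairs.

    A K-wise ranking yields K*(K-1)/2 pairwise preferences; because
    combinations() preserves order, the better response comes first.
    """
    return [(a, b) for a, b in combinations(ranked_responses, 2)]

# 7 ranked responses -> 21 preference pairs
pairs = ranking_to_pairs([f"response_{i}" for i in range(1, 8)])
print(len(pairs))  # 21
```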
FreedomIntelligence/Evol-Instruct-Chinese-GPT4
FreedomIntelligence
The dataset is created by (1) translating English questions of Evol-instruct-70k into Chinese and (2) requesting GPT4 to generate Chinese responses. For more details, please refer to: Repository: https://github.com/FreedomIntelligence/AceGPT https://github.com/FreedomIntelligence/LLMZoo Paper: AceGPT, Localizing Large Language Models in Arabic Phoenix: Democratizing ChatGPT across Languages BibTeX entry and citation info @article{huang2023acegpt, title={AceGPT… See the full description on the dataset page: https://huggingface.co/datasets/FreedomIntelligence/Evol-Instruct-Chinese-GPT4.
Jaredquek/AuroMiraWorks
Jaredquek
This 'text completion' dataset (originally in jsonl format) comprises the major prose works of Sri Aurobindo, the Indian philosopher, seer and poet, and his spiritual partner, Mirra Alfassa. The following works have been used: Sri Aurobindo: Letters on Yoga 1, 2, 3, 4 Letters on Himself and the Ashram The Mother with Letters on the Mother The Life Divine The Synthesis of Yoga The Renaissance in India The Secret of the Veda Essays Divine and Human Essays on the Gita Essays in… See the full description on the dataset page: https://huggingface.co/datasets/Jaredquek/AuroMiraWorks.
meta-math/MetaMathQA-40K
meta-math
Paper: https://arxiv.org/abs/2309.12284 Project page: https://meta-math.github.io/
layoric/tiny-codes-alpaca-csharp
layoric
Dataset Card for "tiny-codes-alpaca-csharp" More Information needed
osunlp/TableInstruct
osunlp
TableLlama: Towards Open Large Generalist Models for Tables Project Page: https://osu-nlp-group.github.io/TableLlama/ Paper: https://arxiv.org/abs/2311.09206 Model: https://huggingface.co/osunlp/TableLlama/ Code: https://osu-nlp-group.github.io/TableLlama/ Introduction We introduce TableLlama, an open-source large generalist model specifically tailored for various table-based tasks. The TableLlama model is trained on TableInstruct Dataset, a meticulously curated… See the full description on the dataset page: https://huggingface.co/datasets/osunlp/TableInstruct.
oroikon/chart_captioning
oroikon
Dataset Card for "chart_captioning" More Information needed