id (string, 6-121 chars) | author (string, 2-42 chars) | description (string, 0-6.67k chars)
---|---|---
Yukang/LongAlpaca-12k
|
Yukang
|
LongLoRA and LongAlpaca for Long-context LLMs
For detailed usage and code, please visit the GitHub project.
News
[2023.10.8] We release the long instruction-following dataset, LongAlpaca-12k and the corresponding models, LongAlpaca-7B, LongAlpaca-13B, and LongAlpaca-70B.
(The previous SFT models, Llama-2-13b-chat-longlora-32k-sft and Llama-2-70b-chat-longlora-32k-sft, have been deprecated.)
[2023.10.3] We added support for GPTNeoX models. Please refer to… See the full description on the dataset page: https://huggingface.co/datasets/Yukang/LongAlpaca-12k.
|
substratusai/the-stack-yaml-k8s
|
substratusai
|
Dataset Card for The Stack YAML K8s
This dataset is a subset of the data/yaml portion of The Stack dataset. The YAML files were
parsed and filtered to keep only valid K8s YAML files, which is what this dataset contains.
The dataset contains 276,520 valid K8s YAML files. The dataset was created by running
the the-stack-yaml-k8s.ipynb
notebook on K8s using substratus.ai
Source code used to generate dataset: https://github.com/substratusai/the-stack-yaml-k8s
Need some help? Questions? Join our Discord… See the full description on the dataset page: https://huggingface.co/datasets/substratusai/the-stack-yaml-k8s.
|
Autoceres/Agricorp
|
Autoceres
|
Agricorp Dataset
The AutoCeres dataset comprises a collection of images captured from various sources and cultivation locations. It encompasses the following crops:
Corn
Soybean
Rice
Onion
Each crop category is associated with a set of images, and for further analysis and segmentation tasks, masks corresponding to these crops are also included. This dataset serves as a valuable resource for the development and training of computer vision algorithms in the agricultural domain.
|
yuyijiong/Long-instruction-en2zh
|
yuyijiong
|
2023.10.22 update: A higher-quality Chinese long-text QA dataset, not produced by Google Translate, has been released; some portions are still small and are being expanded continuously.
2023.10.18 update: Removed some duplicate and low-quality data; improved the answer and instruction formats.
Chinese Long-Text Instruction Fine-tuning Dataset (Compilation)
Because Chinese data is currently scarce, most of the data was translated from English datasets via Google Translate. The translation quality still leaves room for improvement but is currently serviceable. More data may be added in the future. Most of the data has been filtered to a length (character count) greater than 8,000 to meet the needs of long-text fine-tuning. The instruction-tuning data has already been converted to Llama's chat format: "<s>Human: " + question + "\n</s>" + "<s>Assistant: " + answer + "\n</s>"
Because Chinese text is generally shorter than English, many texts become significantly shorter after translation from English to Chinese.
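As a quick illustration of the chat format quoted above, here is a minimal sketch; the helper function is hypothetical, not part of the dataset's tooling:

```python
# Minimal sketch of the chat format quoted above; to_llama_chat is a
# hypothetical helper, not part of the dataset's tooling.
def to_llama_chat(question: str, answer: str) -> str:
    return f"<s>Human: {question}\n</s><s>Assistant: {answer}\n</s>"

print(to_llama_chat("请总结这篇长文……", "这篇文章主要讨论……"))
```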
Data composition:
1. LongAlpaca dataset
Data source: Yukang/LongAlpaca-12k. The original dataset has been split into… See the full description on the dataset page: https://huggingface.co/datasets/yuyijiong/Long-instruction-en2zh.
|
erhwenkuo/wikinews-zhtw
|
erhwenkuo
|
Dataset Card for "wikinews-zhtw"
Wikinews is an online media outlet run by a group of volunteers, i.e. citizen journalists. It is also a free-content wiki and one of the Wikimedia projects, operated by the Wikimedia Foundation. Wikinews works through collaborative journalism and strives to report news from a neutral point of view, including original first-hand reporting and interviews.
This dataset is built from the zhwikinews download files in the Wikipedia dumps (https://dumps.wikimedia.org/). Each example contains the full content of one Wikinews article, cleaned to remove unwanted parts.
Homepage: https://dumps.wikimedia.org
zhwikinews download: https://dumps.wikimedia.org/zhwikinews
Dump version
Since the Wikipedia datasets are dumped to the site periodically, the following data was available for download as of 2023/10/10:
Dump directory
Dump timestamp… See the full description on the dataset page: https://huggingface.co/datasets/erhwenkuo/wikinews-zhtw.
|
Hani89/medical_asr_recording_dataset
|
Hani89
|
Data Source
Kaggle Medical Speech, Transcription, and Intent
Context
8.5 hours of audio utterances paired with text for common medical symptoms.
Content
This data contains thousands of audio utterances for common medical symptoms like “knee pain” or “headache,” totaling more than 8 hours in aggregate. Each utterance was created by individual human contributors based on a given symptom. These audio snippets can be used to train conversational agents in the medical field.
This Figure Eight… See the full description on the dataset page: https://huggingface.co/datasets/Hani89/medical_asr_recording_dataset.
|
FinGPT/fingpt-sentiment-train
|
FinGPT
|
Dataset Card for "fingpt-sentiment-train"
More Information needed
|
indolem/IndoMMLU
|
indolem
|
IndoMMLU
Fajri Koto, Nurul Aisyah, Haonan Li, Timothy Baldwin
📄 Paper •
🏆 Leaderboard •
🤗 Dataset
Introduction
We introduce IndoMMLU, the first multi-task language understanding benchmark for Indonesian culture and languages,
which consists of questions from primary school to university entrance exams in Indonesia. By employing professional teachers,
we obtain 14,906 questions across 63 tasks and… See the full description on the dataset page: https://huggingface.co/datasets/indolem/IndoMMLU.
|
amphora/lmsys-finance
|
amphora
|
Dataset Card for "lmsys-finance"
This dataset is a curated version of the lmsys-chat-1m dataset,
focusing solely on finance-related conversations. The refinement process encompassed:
Removing non-English conversations.
Selecting conversations from models: "vicuna-33b", "wizardlm-13b", "gpt-4", "gpt-3.5-turbo", "claude-2", "palm-2", and "claude-instant-1".
Excluding conversations with responses under 30 characters.
Using 100 financial keywords, choosing conversations with at… See the full description on the dataset page: https://huggingface.co/datasets/amphora/lmsys-finance.
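As an illustration of the first three steps listed above (not the author's actual code), a hedged sketch against the upstream lmsys-chat-1m schema; the field names (model, language, conversation) are assumptions taken from that upstream dataset:

```python
from datasets import load_dataset

# Illustrative reproduction of the first three refinement steps; field
# names follow the upstream lmsys-chat-1m schema and are assumptions here.
KEEP_MODELS = {"vicuna-33b", "wizardlm-13b", "gpt-4", "gpt-3.5-turbo",
               "claude-2", "palm-2", "claude-instant-1"}

ds = load_dataset("lmsys/lmsys-chat-1m", split="train")
ds = ds.filter(lambda ex: ex["language"] == "English"
               and ex["model"] in KEEP_MODELS
               and all(len(m["content"]) >= 30
                       for m in ex["conversation"]
                       if m["role"] == "assistant"))
```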
|
ProlificAI/social-reasoning-rlhf
|
ProlificAI
|
Dataset Summary
This repository provides access to a social reasoning dataset that aims to provide signal into how humans navigate social situations, how they reason about them, and how they understand each other. It contains questions probing people's thinking and understanding of various social situations.
This dataset was created by collating a set of questions within the following social reasoning tasks:
understanding of emotions
intent recognition
social norms
social… See the full description on the dataset page: https://huggingface.co/datasets/ProlificAI/social-reasoning-rlhf.
|
FreedomIntelligence/Huatuo26M-Lite
|
FreedomIntelligence
|
Huatuo26M-Lite 📚
Table of Contents 🗂
Dataset Description 📝
Dataset Information ℹ️
Data Distribution 📊
Usage 🔧
Citation 📖
Dataset Description 📝
Huatuo26M-Lite is a refined and optimized dataset based on the Huatuo26M dataset, which has undergone multiple purification processes and rewrites. It has more data dimensions and higher data quality. We welcome you to try using it.
Dataset Information ℹ️
Dataset Name:… See the full description on the dataset page: https://huggingface.co/datasets/FreedomIntelligence/Huatuo26M-Lite.
|
keivalya/MedQuad-MedicalQnADataset
|
keivalya
|
Reference:
"A Question-Entailment Approach to Question Answering". Asma Ben Abacha and Dina Demner-Fushman. BMC Bioinformatics, 2019.
|
SinKove/synthetic_mammography_csaw
|
SinKove
|
Dataset Card for Synthetic CSAW 100k Mammograms
Dataset Description
This is a synthetic mammogram dataset created with the latent diffusion model from Generative AI for Medical Imaging: extending the MONAI Framework paper.
The generative model was trained on the CSAW-M dataset.
Paper: https://arxiv.org/abs/2307.15208
Point of Contact: [email protected]
Dataset Summary
Supported Tasks
Classification masking of cancer in… See the full description on the dataset page: https://huggingface.co/datasets/SinKove/synthetic_mammography_csaw.
|
EleutherAI/proof-pile-2
|
EleutherAI
|
A dataset of high quality mathematical text.
|
cognitivecomputations/ultrachat-uncensored
|
cognitivecomputations
|
This is based on the ultrachat dataset: https://huggingface.co/datasets/stingning/ultrachat
I filtered it using the classic "unfiltered" keyword list (https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) to remove instances of refusals and bias.
About 90% of the dataset was removed.
What remains (400k conversations) is unlikely to incline the model to refuse.
I am investigating a less heavy handed approach using dolphin-2.1 to reword any detected refusals.
|
VAGOsolutions/MT-Bench-TrueGerman
|
VAGOsolutions
|
Benchmark
German Benchmarks on Hugging Face
At present, there is a notable scarcity, if not a complete absence, of reliable and true German benchmarks designed to evaluate the capabilities of German Language Models (LLMs). While some efforts have been made to translate English benchmarks into German, these attempts often fall short in terms of precision, accuracy, and context sensitivity, even when employing GPT-4 technology. Take, for instance, the MT-Bench, a widely recognized… See the full description on the dataset page: https://huggingface.co/datasets/VAGOsolutions/MT-Bench-TrueGerman.
|
knowrohit07/know_medical_dialogue_v2
|
knowrohit07
|
Description:
The knowrohit07/know_medical_dialogues_v2 dataset is a collection of conversational exchanges between patients and doctors on various medical topics. It aims to capture the intricacies, uncertainties, and questions posed by individuals regarding their health and the medical guidance provided in response.
🎯 Intended Use:
This dataset is crafted for training Large Language Models (LLMs) with a focus on understanding and generating medically-informed… See the full description on the dataset page: https://huggingface.co/datasets/knowrohit07/know_medical_dialogue_v2.
|
prometheus-eval/Feedback-Collection
|
prometheus-eval
|
Dataset Card
Dataset Summary
The Feedback Collection is a dataset designed to induce fine-grained evaluation capabilities into language models.
Recently, proprietary LLMs (e.g., GPT-4) have been used to evaluate long-form responses. In our experiments, we found that open-source LMs are not capable of evaluating long-form responses, showing low correlation with both human evaluators and GPT-4.
In our paper, we found that by (1) fine-tuning feedback generated by… See the full description on the dataset page: https://huggingface.co/datasets/prometheus-eval/Feedback-Collection.
|
akjindal53244/Arithmo-Data
|
akjindal53244
|
The Arithmo dataset is prepared as a combination of MetaMathQA, MathInstruct, and lila ood. Refer to the Model Training Data section on the Arithmo-Mistral-7B project GitHub page for more details.
Support My Work
Building LLMs takes time and resources; if you find my work interesting, your support would be epic!
References
@article{yu2023metamath,
title={MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models},
author={Yu, Longhui and Jiang, Weisen and Shi, Han and… See the full description on the dataset page: https://huggingface.co/datasets/akjindal53244/Arithmo-Data.
|
nalmeida/agile_dataset_fusionado
|
nalmeida
|
Dataset Card for "agile_dataset_fusionado"
More Information needed
|
distil-whisper/earnings22
|
distil-whisper
|
Dataset Card for Earnings 22
Dataset Summary
Earnings-22 provides a free-to-use benchmark of real-world, accented audio to bridge academic and industrial research.
This dataset contains 125 files totalling roughly 119 hours of English language earnings calls from global countries.
This dataset provides the full audios, transcripts, and accompanying metadata such as ticker symbol, headquarters country,
and our defined "Language Region".
Supported… See the full description on the dataset page: https://huggingface.co/datasets/distil-whisper/earnings22.
|
owkin/nct-crc-he
|
owkin
|
Dataset Card for NCT-CRC-HE
Dataset Summary
The NCT-CRC-HE dataset consists of images of human tissue slides, some of which contain cancer.
Data Splits
The dataset contains tissues from different parts of the body. Examples from each of the 9 classes can be seen below
Initial Data Collection and Normalization
NCT biobank (National Center for Tumor Diseases) and the UMM pathology archive (University Medical Center Mannheim). Images… See the full description on the dataset page: https://huggingface.co/datasets/owkin/nct-crc-he.
|
piazzola/addressWithContext
|
piazzola
|
This dataset contains address-sentence pairs, where the sentence contains the address. For instance, "4450 WEST 32ND STREET": "Lena walked up the path to the white colonial-style house with the blue shutters and addressed the letter to Mr. and Mrs. Morrison at 4450 West 32nd Street." I prompted the quantized version of Llama-2 to generate the sentences.
|
Open-Orca/SlimOrca-Dedup
|
Open-Orca
|
Overview
"SlimOrca Dedup" is a deduplicated, unfiltered subset of the SlimOrca dataset, excluding RLHF instances, resulting in 363k unique examples.
Key Features
Removal of RLHF instances.
Deduplication using minhash and Jaccard similarity techniques.
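For intuition about the minhash/Jaccard step above, a toy check with the datasketch library (not the maintainers' actual deduplication pipeline): near-duplicate examples score close to 1.0.

```python
from datasketch import MinHash

# Toy minhash comparison; not the maintainers' deduplication code.
def minhash(text: str, num_perm: int = 128) -> MinHash:
    m = MinHash(num_perm=num_perm)
    for token in text.lower().split():
        m.update(token.encode("utf8"))
    return m

a = minhash("Explain photosynthesis in simple terms.")
b = minhash("Explain photosynthesis in very simple terms.")
print(a.jaccard(b))  # estimated Jaccard similarity, near 1.0 for near-duplicates
```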
Demo Models
Note: These models were trained on the full SlimOrca dataset, not the deduplicated, unfiltered version.
* https://huggingface.co/openaccess-ai-collective/jackalope-7b
*… See the full description on the dataset page: https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup.
|
erhwenkuo/zhwikisource-zhtw
|
erhwenkuo
|
Dataset Card for "zhwikisource-zhtw"
Wikisource, also known as "the free library", is a site where volunteers collect free-content texts online. It is one of the Wikimedia projects, operated by the Wikimedia Foundation.
Work types: classics | histories | novels | poetry | prose | speeches | lyrics | scriptures | more…
Topics: treaties | constitutions | law | education | politics | history | religion | more…
Featured:
Articles: Tao Te Ching (道德經) | Zhiyanzhai's Commentary on the Story of the Stone (脂硯齋重評石頭記)
Collections: Dream of the Red Chamber | Romance of the Three Kingdoms | Journey to the West | Classic of Poetry | Dream Pool Essays | Thirty-Six Stratagems | Guwen Guanzhi
Histories: Records of the Grand Historian | Zizhi Tongjian | Xu Zizhi Tongjian | History of Jin | Book of Han | Book of the Later Han | Records of the Three Kingdoms
Judicial interpretations: Dali Court of China | Supreme Court of the Republic of China | Judicial Yuan of the Republic of China | Grand Justices of the Judicial Yuan
Categories: laws of the Republic of China | laws of the People's Republic of China | PRC State Council government work reports | the Thirteen Classics | official histories
This dataset is built from the Wikipedia dumps… See the full description on the dataset page: https://huggingface.co/datasets/erhwenkuo/zhwikisource-zhtw.
|
hezarai/persian-license-plate-v1
|
hezarai
|
The dataset was downloaded from here; it was provided by Amirkabir University of Technology.
The data was then labeled by the authors.
Experimental results show that the fine-tuned model works well on Persian license plates.
|
1aurent/Kather-texture-2016
|
1aurent
|
Collection of textures in colorectal cancer histology
Description
This data set represents a collection of textures in histological images of human colorectal cancer.
It contains 5000 histological images of 150 * 150 px each (74 * 74 µm). Each image belongs to exactly one of eight tissue categories.
Image format
All images are RGB, 0.495 µm per pixel, digitized with an Aperio ScanScope (Aperio/Leica biosystems), magnification 20x.
Histological… See the full description on the dataset page: https://huggingface.co/datasets/1aurent/Kather-texture-2016.
|
aburns4/WikiWeb2M
|
aburns4
|
The Wikipedia Webpage 2M (WikiWeb2M) Dataset
We present the WikiWeb2M dataset consisting of over 2 million English
Wikipedia articles. Our released dataset includes all of the text content on
each page, links to the images present, and structure metadata such as which
section each text and image element comes from.
This dataset is a contribution from our paper
A Suite of Generative Tasks for Multi-Level Multimodal Webpage Understanding.
The dataset is stored as gzipped TFRecord… See the full description on the dataset page: https://huggingface.co/datasets/aburns4/WikiWeb2M.
|
zhk/wiki-edits
|
zhk
|
The pre-training dataset of paper "G-SPEED: General SParse Efficient Editing MoDel".
Visit https://github.com/Banner-Z/G-SPEED.git for more details.
|
liangyuch/laion2B-en-aesthetic-seed
|
liangyuch
|
Dataset Card for "laion2B-en-aesthetic-seed"
More Information needed
|
erhwenkuo/poetry-chinese-zhtw
|
erhwenkuo
|
Dataset Card for "poetry-chinese-zhtw"
Dataset Summary
This database of classical Chinese literature collects about 55,000 Tang poems, 260,000 Song poems, 21,000 Song ci, and other classical collections. The poets include nearly 14,000 from the Tang and Song dynasties, plus about 1,500 ci writers from the two Song periods.
Five Dynasties and Ten Kingdoms: includes the "Huajian Ji" and the "Ci of the Two Lords of Southern Tang"
Tang: includes the "Quan Tangshi" (compiled in the 44th year of the Kangxi era under the Kangxi Emperor's direction, gathering "more than 48,900 poems from 2,200 poets")
Song: includes the "Quan Songci" (edited by Tang Guizhang and supplemented by Kong Fanli, collecting 21,116 ci from 1,330 Song ci writers)
Yuan: includes 11,057 Yuan qu from 233 qu writers
Qing: includes the "Poetry Collection of Nalan Xingde"
Original data source:
chinese-poetry: the most complete database of classical Chinese poetry
Data download and cleaning
Download the chinese-poetry repo
Restructure the data for model training
Use OpenCC for Simplified-to-Traditional conversion
Use Huggingface… See the full description on the dataset page: https://huggingface.co/datasets/erhwenkuo/poetry-chinese-zhtw.
|
indiejoseph/cc100-yue
|
indiejoseph
|
Dataset Card for "cc100-yue"
The Filtered Cantonese Dataset is a subset of the larger CC100 corpus that has been filtered to include only Cantonese language content. It is designed to facilitate various NLP tasks, such as text classification, sentiment analysis, named entity recognition, and machine translation, among others.
Filtering Process
The filtering process follows the article Building a Hong Kongese Language Identifier by ToastyNews
|
DavidLanz/medical_instruction
|
DavidLanz
|
Supervised Fine-Tuning Dataset (SFT and RLHF)
Dataset Name: medical_finetune_tw.json
Description: This dataset comprises a total of 2.06 million entries and is sourced from various sources, including:
Six medical department medical inquiry datasets from the Chinese Medical Dialogue Dataset, totaling 790,000 entries.
An online medical encyclopedia dataset, huatuo_encyclopedia_qa, with 360,000 entries.
A medical knowledge graph dataset, huatuo_knowledge_graph_qa, with 790,000 entries. These… See the full description on the dataset page: https://huggingface.co/datasets/DavidLanz/medical_instruction.
|
haonanqqq/AgriSFT_60k
|
haonanqqq
|
60,000 entries of agricultural fine-tuning data.
|
THUDM/AgentInstruct
|
THUDM
|
AgentInstruct Dataset
🤗 [Models] • 💻 [Github Repo] • 📌 [Project Page] • 📃 [Paper]
AgentInstruct is a meticulously curated dataset featuring 1,866 high-quality interactions, designed to enhance AI agents across six diverse real-world tasks, leveraging innovative methods like Task Derivation and Self-Instruct.
🔍 CoT - Harness the power of ReAct, offering detailed thought explanations for each action, ensuring an intricate understanding of the model's decision-making… See the full description on the dataset page: https://huggingface.co/datasets/THUDM/AgentInstruct.
|
OdiaGenAI/sentiment_analysis_hindi
|
OdiaGenAI
|
Conventions followed to decide the polarity:
Labels consisting of a single value are left undisturbed, i.e. if label = 'pos', it stays 'pos'.
Labels consisting of multiple values separated by '&' are processed. If all the values are the same ('pos&pos&pos' or 'neg&neg'), the shortened form of the multiple label is assigned as the final label. For example, if label = 'pos&pos&pos', the final label will be 'pos'.
Labels consisting of mixed values ('pos&neg&pos' or 'neg&neu&pos') are… See the full description on the dataset page: https://huggingface.co/datasets/OdiaGenAI/sentiment_analysis_hindi.
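A small sketch of the stated conventions; the helper is hypothetical, and since the rule for mixed labels is truncated above, it is left as a placeholder:

```python
# Sketch of the labeling conventions above. The convention for mixed
# labels is cut off in the card, so "mixed" is a placeholder here.
def resolve_polarity(label: str) -> str:
    parts = label.split("&")
    if len(set(parts)) == 1:   # 'pos' or 'pos&pos&pos' -> 'pos'
        return parts[0]
    return "mixed"             # actual convention not shown above

assert resolve_polarity("pos&pos&pos") == "pos"
assert resolve_polarity("neg&neg") == "neg"
```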
|
lc-col/bigearthnet
|
lc-col
|
BigEarthNet - HDF5 version
This repository contains an export of the existing BigEarthNet dataset in HDF5 format. All Sentinel-2 acquisitions are exported according to TorchGeo's dataset (120x120-pixel resolution).
Sentinel-1 is not included in this repository for the moment.
The CSV files map each satellite acquisition to its HDF5 file and index.
A PyTorch dataset class which can be used to iterate over this dataset can be found here, as well as the script… See the full description on the dataset page: https://huggingface.co/datasets/lc-col/bigearthnet.
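A hedged sketch of the CSV-to-HDF5 indirection described above; the column names and the in-file dataset key are assumptions, so check the repository for the actual layout:

```python
import h5py
import pandas as pd

# Assumed layout: one CSV row per acquisition, pointing at an HDF5 file
# and an index inside it. Column and dataset names are guesses here.
row = pd.read_csv("train.csv").iloc[0]
with h5py.File(row["h5_path"], "r") as f:
    patch = f["images"][row["index"]]  # 120x120 Sentinel-2 patch
```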
|
dsfsi/daily-news-dikgang
|
dsfsi
|
Daily News Dikgang
Give Feedback 📑: DSFSI Resource Feedback Form
About dataset
The dataset contains annotated, categorised data from Dikgang - Daily News https://dailynews.gov.bw/news-list/srccategory/10. The data is in Setswana.
See the Data Statement for full details.
Disclaimer
This dataset contains machine-readable data extracted from online news articles, from https://dailynews.gov.bw/news-list/srccategory/10, provided by the Botswana… See the full description on the dataset page: https://huggingface.co/datasets/dsfsi/daily-news-dikgang.
|
chargoddard/chai-dpo
|
chargoddard
|
Dataset Card for "chai-dpo"
More Information needed
|
erhwenkuo/pretrain-chinese-zhtw
|
erhwenkuo
|
Dataset Card for "pretrain-chinese-zhtw"
More Information needed
|
umarigan/turkiye_finance_qa
|
umarigan
|
Dataset Card for "turkiye_finance_qa"
More Information needed
|
selfrag/selfrag_train_data
|
selfrag
|
This is a training data file for Self-RAG that generates outputs to diverse user queries as well as reflection tokens to call the retrieval system adaptively and criticize its own output and retrieved passages.
Self-RAG is trained on our 150k diverse instruction-output pairs with interleaving passages and reflection tokens using the standard next-token prediction objective, enabling efficient and stable learning with fine-grained feedback.
At inference, we leverage reflection tokens covering… See the full description on the dataset page: https://huggingface.co/datasets/selfrag/selfrag_train_data.
|
s2e-lab/SecurityEval
|
s2e-lab
|
Dataset Card for SecurityEval
This dataset is from the paper titled SecurityEval Dataset: Mining Vulnerability Examples to Evaluate Machine Learning-Based Code Generation Techniques.
The project was accepted at the first edition of the International Workshop on Mining Software Repositories Applications for Privacy and Security (MSR4P&S '22).
The paper describes the dataset for evaluating machine learning-based code generation output and the application of the dataset to the… See the full description on the dataset page: https://huggingface.co/datasets/s2e-lab/SecurityEval.
|
advancedcv/Food500Cap
|
advancedcv
|
Dataset Card for "caps_data_2"
More Information needed
|
skvarre/movie-posters
|
skvarre
|
Dataset Card for "movie_posters"
More Information needed
|
common-canvas/commoncatalog-cc-by-sa
|
common-canvas
|
Dataset Card for CommonCatalog CC-BY-SA
This dataset is a large collection of high-resolution Creative Common images (composed of different licenses, see paper Table 1 in the Appendix) collected in 2014 from users of Yahoo Flickr.
The dataset contains images of up to 4k resolution, making this one of the highest resolution captioned image datasets.
Dataset Details
Dataset Description
We provide synthetic captions for approximately 100… See the full description on the dataset page: https://huggingface.co/datasets/common-canvas/commoncatalog-cc-by-sa.
|
hackaprompt/hackaprompt-dataset
|
hackaprompt
|
Dataset Card for HackAPrompt 💻🔍
This dataset contains submissions from a prompt hacking competition. An in-depth analysis of the dataset has been accepted at the EMNLP 2023 conference. 📊👾
Submissions were sourced from two environments: a playground for experimentation and an official submissions platform.
The playground itself can be accessed here 🎮
More details about the competition itself here 🏆
Dataset Details 📋
Dataset Description 📄
We… See the full description on the dataset page: https://huggingface.co/datasets/hackaprompt/hackaprompt-dataset.
|
laion/strategic_game_cube
|
laion
|
Cube
This dataset contains 1.64 billion Rubik's Cube solves, totaling roughly 236.39 billion moves. It was generated on Fugaku using https://github.com/trincaog/magiccube
Each solve has two columns: 'Cube' and 'Actions'.
'Cube': the initial scrambled state of a 3x3x3 cube as a string, such as:
WOWWYOBWOOGWRBYGGOGBBRRYOGRWORBBYYORYBWRYBOGBGYGWWGRRY
(the visual state of this example is shown on the dataset page)
NOTICE: scrambled cube states are spread into the above string row by row.
'Actions': list… See the full description on the dataset page: https://huggingface.co/datasets/laion/strategic_game_cube.
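Since the rows are concatenated into that string, a minimal sketch to view it as six 3x3 faces; the face ordering within the string is an assumption, as the card only states that rows are concatenated:

```python
# Split the 54-character state string into six 9-character faces.
# The face ordering is an assumption; the card only says rows are
# spread into the string row by row.
state = "WOWWYOBWOOGWRBYGGOGBBRRYOGRWORBBYYORYBWRYBOGBGYGWWGRRY"
faces = [state[i:i + 9] for i in range(0, 54, 9)]
for face in faces:
    print(face[0:3], face[3:6], face[6:9])
```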
|
xz97/MedInstruct
|
xz97
|
Dataset Card for MedInstruct
Dataset Summary
MedInstruct encompasses:
MedInstruct-52k: A dataset comprising 52,000 medical instructions and responses. Instructions are crafted by OpenAI's GPT-4 engine, and the responses are formulated by the GPT-3.5-turbo engine.
MedInstruct-test: A set of 217 clinician-crafted free-form instruction evaluation tests.
med_seed: The clinician-crafted seed set used to prompt GPT-4 for task generation.
MedInstruct-52k can… See the full description on the dataset page: https://huggingface.co/datasets/xz97/MedInstruct.
|
lavita/MedQuAD
|
lavita
|
Dataset Card for "MedQuAD"
This dataset is the converted version of MedQuAD. Some notes about the data:
Multiple values in the umls_cui, umls_semantic_types, and synonyms columns are separated by the | character.
Answers for [GARD, MPlusHerbsSupplements, ADAM, MPlusDrugs] sources (31,034 records) are removed from the original dataset to respect the MedlinePlus copyright.
UMLS (umls): Unified Medical Language System
CUI (cui): Concept Unique Identifier
Question type… See the full description on the dataset page: https://huggingface.co/datasets/lavita/MedQuAD.
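A small sketch of splitting the pipe-separated columns described above; treating missing values as empty lists is an assumption:

```python
from datasets import load_dataset

# Split the pipe-separated multi-value columns named above; the
# null-handling convention is an assumption, not from the card.
ds = load_dataset("lavita/MedQuAD", split="train")
row = ds[0]
synonyms = row["synonyms"].split("|") if row.get("synonyms") else []
```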
|
iwecht/hard_captions
|
iwecht
|
Dataset Card for "hard_captions"
More Information needed
|
q-future/Q-Instruct-DB
|
q-future
|
A preview version of the Q-Instruct dataset. A technical report is coming soon.
Usage: The dataset is converted to LLaVA format. To get the data, first download the cleaned_labels.json; then download and extract q-instruct-images.tar.
Modify the --data_path and --image_folder in LLaVA training scripts to train with this dataset.
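A sketch of the download step described above using huggingface_hub; the filenames are from the card, while the extraction path is arbitrary:

```python
import tarfile
from huggingface_hub import hf_hub_download

# Fetch the two files named in the card, then unpack the image archive.
labels = hf_hub_download("q-future/Q-Instruct-DB", "cleaned_labels.json",
                         repo_type="dataset")
images_tar = hf_hub_download("q-future/Q-Instruct-DB", "q-instruct-images.tar",
                             repo_type="dataset")
with tarfile.open(images_tar) as tar:
    tar.extractall("q-instruct-images")  # pass this folder as --image_folder
```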
Please cite our paper if the dataset is used:
@misc{wu2023qinstruct,
title={Q-Instruct: Improving Low-level Visual Abilities for Multi-modality Foundation Models}… See the full description on the dataset page: https://huggingface.co/datasets/q-future/Q-Instruct-DB.
|
gaia-benchmark/GAIA
|
gaia-benchmark
|
GAIA dataset
GAIA is a benchmark which aims at evaluating next-generation LLMs (LLMs with augmented capabilities due to added tooling, efficient prompting, access to search, etc).
We added gating to prevent bots from scraping the dataset. Please do not reshare the validation or test set in a crawlable format.
Data and leaderboard
GAIA is made of more than 450 non-trivial questions with unambiguous answers, requiring different levels of tooling and autonomy to… See the full description on the dataset page: https://huggingface.co/datasets/gaia-benchmark/GAIA.
|
yuyijiong/Long-Instruction-with-Paraphrasing
|
yuyijiong
|
🔥 Updates
[2024.6.4] Add a slim version. The sample number is reduced from about 20k to 10k.
[2024.5.28]
The data format is converted from "chatml" to "messages", which is more convenient for use with tokenizer.apply_chat_template. The old version has been moved to the "legacy" branch.
A version without "Original text paraphrasing" has been added.
📊 Long Context Instruction-tuning dataset with "Original text paraphrasing"
Paper
Github
It consists of multiple tasks:
Chinese… See the full description on the dataset page: https://huggingface.co/datasets/yuyijiong/Long-Instruction-with-Paraphrasing.
|
alfredplpl/simple-zundamon
|
alfredplpl
|
Simple Zundamon Dataset
Introduction
This is a simple dataset packed with Zundamon's character settings.
It was created from data the author found on the internet and data received from the project's operators.
Please use it for smoke-testing character LLMs.
Even for smoke tests, however, please read the license carefully whenever possible.
For other uses, please read the license carefully.
Formats
LLM-jp: zmnjp.jsonl
ChatGPT: zmn.jsonl
License
(ず・ω・きょ)
|
umuthopeyildirim/svgen-500k
|
umuthopeyildirim
|
SVGen Vector Images Dataset
Overview
SVGen is a comprehensive dataset containing 300,000 SVG vector codes from a diverse set of sources including SVG-Repo, Noto Emoji, and InstructSVG. The dataset aims to provide a wide range of SVG files suitable for various applications including web development, design, and machine learning research.
Data Fields
input: The name or label of the SVG item
output: SVG code containing the vector representation… See the full description on the dataset page: https://huggingface.co/datasets/umuthopeyildirim/svgen-500k.
|
cis-lmu/udhr-lid
|
cis-lmu
|
UDHR-LID
Why UDHR-LID?
You can access UDHR (Universal Declaration of Human Rights) here, but when a verse is missing, they have texts such as "missing" or "?". Also, about 1/3 of the sentences consist only of "articles 1-30" in different languages. We cleaned the entire dataset from XML files and selected only the paragraphs. We cleared any unrelated language texts from the data and also removed the cases that were incorrect.
Incorrect? Look at the ckb and kmr files in the UDHR.… See the full description on the dataset page: https://huggingface.co/datasets/cis-lmu/udhr-lid.
|
skvarre/movie_posters-100k
|
skvarre
|
Dataset Card for "movie_posters-100k"
More Information needed
|
hkust-nlp/deita-10k-v0
|
hkust-nlp
|
Dataset Card for Deita 10K V0
GitHub | Paper
Deita is an open-sourced project designed to facilitate Automatic Data Selection for instruction tuning in Large Language Models (LLMs).
This dataset includes 10k of lightweight, high-quality alignment SFT data, mainly automatically selected from the following datasets:
ShareGPT (Apache 2.0 listed, no official repo found): uses the 58K ShareGPT dataset for selection.
UltraChat (MIT): samples 105K from the UltraChat dataset for selection.… See the full description on the dataset page: https://huggingface.co/datasets/hkust-nlp/deita-10k-v0.
|
ai2lumos/lumos_maths_ground_iterative
|
ai2lumos
|
🪄 Agent Lumos: Unified and Modular Training for Open-Source Language Agents
🌐[Website]
📝[Paper]
🤗[Data]
🤗[Model]
🤗[Demo]
We introduce 🪄Lumos, Language Agents with Unified Formats, Modular Design, and Open-Source LLMs. Lumos unifies a suite of complex interactive tasks and achieves competitive performance with GPT-4/3.5-based and larger open-source agents.
Lumos has the following features:
🧩 Modular Architecture:
🧩 Lumos consists of planning… See the full description on the dataset page: https://huggingface.co/datasets/ai2lumos/lumos_maths_ground_iterative.
|
umuthopeyildirim/svgen-500k-instruct
|
umuthopeyildirim
|
SVGen Vector Images Dataset Instruct Version
Overview
SVGen is a comprehensive dataset containing 300,000 SVG vector codes from a diverse set of sources including SVG-Repo, Noto Emoji, and InstructSVG. The dataset aims to provide a wide range of SVG files suitable for various applications including web development, design, and machine learning research.
Data Fields
{
"text": "<s>[INST] Icon of Look Up here are the inputs Look Up [/INST] \\n <?xml… See the full description on the dataset page: https://huggingface.co/datasets/umuthopeyildirim/svgen-500k-instruct.
|
theblackcat102/llava-instruct-mix
|
theblackcat102
|
LLaVA Instruct Mix
Added the OCR and Chart QA datasets for more text-extraction questions
|
Henok/amharic-qa
|
Henok
|
AmQA: Amharic Question Answering Dataset
Amharic question and answer dataset in a prompt and completion format.
Dataset Details
In Amharic, interrogative sentences can be formulated using information-seeking pronouns like “ምን” (what), “መቼ” (when), “ማን” (who), “የት” (where), “የትኛው” (which), etc. and prepositional interrogative phrases like “ለምን” [ለ-ምን] (why), “በምን” [በ-ምን] (by what), etc. Besides, a verb phrase could be used to pose questions (Getahun 2013; Baye… See the full description on the dataset page: https://huggingface.co/datasets/Henok/amharic-qa.
|
jxu124/OpenX-Embodiment
|
jxu124
|
Open X-Embodiment Dataset (unofficial)
This is an unofficial Dataset Repo. This Repo is set up to make Open X-Embodiment Dataset (55 in 1) more accessible for people who love huggingface🤗.
Open X-Embodiment Dataset is the largest open-source real robot dataset to date. It contains 1M+ real robot trajectories spanning 22 robot embodiments, from single robot arms to bi-manual robots and quadrupeds.
More information is located on RT-X website… See the full description on the dataset page: https://huggingface.co/datasets/jxu124/OpenX-Embodiment.
|
nicher92/medqa-swe
|
nicher92
|
Dataset Card for Dataset Name
This is a novel multiple choice, clinical question & answering (Q&A) dataset in Swedish consisting of
3,180 questions. The dataset was created from a series of exams aimed at evaluating doctors’ clinical understanding
and decision making and is the first open-source clinical Q&A dataset in Swedish. The exams – originally in PDF
format – were parsed and each question manually checked and curated in order to limit errors in the dataset.
Please read… See the full description on the dataset page: https://huggingface.co/datasets/nicher92/medqa-swe.
|
Skywork/SkyPile-150B
|
Skywork
|
SkyPile-150B
Dataset Summary
SkyPile-150B is a comprehensive, large-scale Chinese dataset specifically designed for the pre-training of large language models. It is derived from a broad array of publicly accessible Chinese Internet web pages. Rigorous filtering, extensive deduplication, and thorough sensitive data filtering have been employed to ensure its quality. Furthermore, we have utilized advanced tools such as fastText and BERT to filter out low-quality… See the full description on the dataset page: https://huggingface.co/datasets/Skywork/SkyPile-150B.
|
marianna13/PDF_extraction_sample
|
marianna13
|
Some stats for random 10 WAT files from CC (see GitHub for more info)
Stats for the links
Metric | Value
---|---
Number of PDF links | 131379
Number of working PDF links from 10k sample | 3904
sum(num_words) | 384953
sum(num_tokens) | 715422
avg(num_words) | 6999.145454545454
avg(num_tokens) | 13007.672727272728
Stats for extracted data (for 100 random URLs)
1 process:… See the full description on the dataset page: https://huggingface.co/datasets/marianna13/PDF_extraction_sample.
|
jhu-clsp/seamless-align
|
jhu-clsp
|
Dataset Card for Seamless-Align (WIP). Inspired by https://huggingface.co/datasets/allenai/nllb
Dataset Summary
This dataset was created based on metadata for mined Speech-to-Speech (S2S), Text-to-Speech (TTS) and Speech-to-Text (S2T) data released by Meta AI. The S2S portion contains data for 35 language pairs. The S2S dataset is ~1000GB compressed.
How to use the data
There are two ways to access the data:
Via the Hugging Face Python datasets library
Scripts… See the full description on the dataset page: https://huggingface.co/datasets/jhu-clsp/seamless-align.
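A hedged sketch of the first access path named above (the datasets-library route); the card is truncated here, and the choice of streaming with the default config is an assumption:

```python
from datasets import load_dataset

# Streaming avoids downloading the ~1000GB S2S portion at once.
# The config/split layout is an assumption; see the dataset page.
ds = load_dataset("jhu-clsp/seamless-align", streaming=True)
```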
|
AdapterOcean/python-code-instructions-18k-alpaca-standardized_cluster_1_alpaca
|
AdapterOcean
|
Dataset Card for "python-code-instructions-18k-alpaca-standardized_cluster_1_alpaca"
More Information needed
|
AdapterOcean/python-code-instructions-18k-alpaca-standardized_cluster_2_alpaca
|
AdapterOcean
|
Dataset Card for "python-code-instructions-18k-alpaca-standardized_cluster_2_alpaca"
More Information needed
|
AdapterOcean/python-code-instructions-18k-alpaca-standardized_cluster_3_alpaca
|
AdapterOcean
|
Dataset Card for "python-code-instructions-18k-alpaca-standardized_cluster_3_alpaca"
More Information needed
|
AdapterOcean/python-code-instructions-18k-alpaca-standardized_cluster_4_alpaca
|
AdapterOcean
|
Dataset Card for "python-code-instructions-18k-alpaca-standardized_cluster_4_alpaca"
More Information needed
|
AdapterOcean/code_instructions_standardized_cluster_1_alpaca
|
AdapterOcean
|
Dataset Card for "code_instructions_standardized_cluster_1_alpaca"
More Information needed
|
AdapterOcean/code_instructions_standardized_cluster_2_alpaca
|
AdapterOcean
|
Dataset Card for "code_instructions_standardized_cluster_2_alpaca"
More Information needed
|
AdapterOcean/code_instructions_standardized_cluster_4_alpaca
|
AdapterOcean
|
Dataset Card for "code_instructions_standardized_cluster_4_alpaca"
More Information needed
|
AdapterOcean/code_instructions_standardized_cluster_8_alpaca
|
AdapterOcean
|
Dataset Card for "code_instructions_standardized_cluster_8_alpaca"
More Information needed
|
Kabatubare/medical
|
Kabatubare
|
Dataset Card for "Medical" Healthcare QnA Datasets
Dataset Details
Dataset Description
The "Medical" dataset is a specialized subset curated from the larger MedDialog collection, featuring healthcare dialogues between doctors and patients. This dataset focuses on conversations from Icliniq, HealthcareMagic, and HealthTap. Written primarily in English, it is designed to serve a broad range of applications such as NLP research, healthcare chatbot… See the full description on the dataset page: https://huggingface.co/datasets/Kabatubare/medical.
|
jon-tow/okapi_mmlu
|
jon-tow
|
Measuring Massive Multitask Language Understanding by Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt (ICLR 2021).
|
defog/wikisql
|
defog
|
Dataset Card for "wikisql"
More Information needed
|
intone/horror_stories_reddit
|
intone
|
HSR
HSR is a compilation of 5605 reddit posts scraped from the following subreddits:
r/ScaryStories
r/LetsNotMeet
r/TwoSentenceHorror
r/freehorrorstories
r/TrueScaryStories
r/NoSleep
r/Ruleshorror
HSR Credits
If you are using HSR, you must cite us for your project. This dataset can be used for Translation, Generative or Conversational models.
Here are a few ideas that you can use HSR for:
Title-to-story
Text Generation
Spooky chats
|
lingvanex/lingvanex_test_references
|
lingvanex
|
LTR
LTR -- Lingvanex Test References for MT evaluation from English into a total of 30 target languages, covering a wide variety of cases.
TEST CASES
Parameter | Description
---|---
Length | Sentences from 1 to 100 words.
Domain | Medicine (12%), Automobile (11%), Finance (8%)
Tokenizer | Jupiter is 1.000.000 km far. Ask Mr. Johnson for training
Tags | I want to eat and swim
Capitalisation (Case) | HELLO my Dear frIEND
Different languages in one text | Up to 3… See the full description on the dataset page: https://huggingface.co/datasets/lingvanex/lingvanex_test_references.
|
OFA-Sys/OccuQuest
|
OFA-Sys
|
This is the dataset from OccuQuest: Mitigating Occupational Bias for Inclusive Large Language Models
Abstract:
The emergence of large language models (LLMs) has revolutionized natural language processing tasks.
However, existing instruction-tuning datasets suffer from occupational bias: the majority of data relates to only a few occupations, which hampers instruction-tuned LLMs' ability to generate helpful responses to professional queries from practitioners in specific fields.
To mitigate this issue… See the full description on the dataset page: https://huggingface.co/datasets/OFA-Sys/OccuQuest.
|
AlFrauch/im2latex
|
AlFrauch
|
Dataset Card for Dataset Name
Dataset Summary
This dataset is a set of pairs: an image and its corresponding LaTeX code for an expression. These pairs were generated by analyzing more than 100,000 articles on the natural sciences and mathematics and generating a corresponding set of LaTeX expressions. The set has been cleared of duplicates. There are about 1,500,000 images in the set.
Supported Tasks and Leaderboards
[More Information Needed]… See the full description on the dataset page: https://huggingface.co/datasets/AlFrauch/im2latex.
|
togethercomputer/RedPajama-Data-V2
|
togethercomputer
|
RedPajama V2: an Open Dataset for Training Large Language Models
|
onuralp/open-otter
|
onuralp
|
Disclaimer: this dataset is curated for the NeurIPS 2023 LLM efficiency challenge and is currently a work in progress. Please use at your own risk.
Dataset Summary
We curated this dataset to finetune open-source base models as part of the NeurIPS 2023 LLM Efficiency Challenge (1 LLM + 1 GPU + 1 Day). This challenge requires participants to use open-source models and datasets with permissive licenses to encourage wider adoption, use and dissemination of open source contributions in… See the full description on the dataset page: https://huggingface.co/datasets/onuralp/open-otter.
|
quyanh/cot-large
|
quyanh
|
Dataset Card for "cot-large"
More Information needed
|
tahrirchi/uz-crawl
|
tahrirchi
|
Dataset Card for UzCrawl
Dataset Summary
In an effort to democratize research on low-resource languages, we release the UzCrawl dataset, a web and Telegram crawl corpus consisting of materials from nearly 1.2 million unique sources in the Uzbek language.
Please refer to our blogpost for further details.
P.S. We updated the dataset with a 2nd version that extends the scope to new topics and is up to date as of March 2024.
To load and use the dataset, run this script:… See the full description on the dataset page: https://huggingface.co/datasets/tahrirchi/uz-crawl.
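The card's own script is cut off above; a standard load call would presumably look like the following, which is an assumption rather than the card's exact snippet:

```python
from datasets import load_dataset

# Generic datasets load call; the card's actual snippet is truncated
# above, so this stand-in is an assumption.
ds = load_dataset("tahrirchi/uz-crawl")
print(ds)
```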
|
amandlek/mimicgen_datasets
|
amandlek
|
Dataset Card for MimicGen Datasets
Dataset Summary
This repository contains the official release of datasets for the CoRL 2023 paper "MimicGen: A Data Generation System for Scalable Robot Learning using Human Demonstrations".
The datasets contain over 48,000 task demonstrations across 12 tasks, grouped into the following categories:
source: 120 human demonstrations across 12 tasks used to automatically generate the other datasets
core: 26,000 task demonstrations… See the full description on the dataset page: https://huggingface.co/datasets/amandlek/mimicgen_datasets.
|
pjura/mahjong_board_states
|
pjura
|
MahJong Board States Dataset
Dataset Description
Dataset Name: MahJong Board States
Dataset Summary:
The MahJong Board States dataset contains an extensive collection of board states from Riichi Mahjong games, a popular variant of Mahjong in Japan. The dataset includes more than 650 million records collected from games played between 2009 and 2019. Each record describes the current state of the board and the actions of the players based on the hand and board… See the full description on the dataset page: https://huggingface.co/datasets/pjura/mahjong_board_states.
|
yuyijiong/Chinese_Paper_QA
|
yuyijiong
|
Chinese Paper QA Dataset
The paper data comes from CNKI; it is copyright-restricted and cannot be made public directly. Please do not upload it anywhere public after downloading.
It covers two tasks: writing abstracts for papers and question answering based on paper content. The abstract task has been migrated to the paper-abstract dataset.
Improved version
Longer papers were selected from this dataset and multiple tasks were designed for each paper, forming a new dataset: the Chinese Paper Multi-task dataset (中文论文多任务数据集).
|
ckandemir/bitcoin_tweets_sentiment_kaggle
|
ckandemir
|
Dataset Card for "Bitcoin Tweets"
Dataset Summary
This dataset contains a collection of 16 million tweets related to Bitcoin, collected from Twitter. Each tweet is tagged with sentiment (positive, negative, neutral). The dataset was originally created and uploaded to Kaggle by user gauravduttakiit. It is a valuable resource for training and evaluating models for sentiment analysis within the context of cryptocurrency discussions.
Supported Tasks and… See the full description on the dataset page: https://huggingface.co/datasets/ckandemir/bitcoin_tweets_sentiment_kaggle.
|
facat/sft-train-samples
|
facat
|
Dataset Card for "sft-train-samples"
More Information needed
|
a6687543/MSOM_Data_Driven_Challenge_2020
|
a6687543
|
MSOM_Data_Driven_Challenge_2020:
To support the 2020 MSOM Data Driven Research Challenge, JD.com, China’s largest retailer, offers transaction-level data to MSOM members for conducting data-driven research.
This dataset includes the transactional data associated with over 2.5 million customers (457,298 made purchases) and 31,868 SKUs over the month of March in 2018.
Researchers are welcome to develop econometric models or data-driven models using this database to address some of the… See the full description on the dataset page: https://huggingface.co/datasets/a6687543/MSOM_Data_Driven_Challenge_2020.
|
19kmunz/iot-23-preprocessed-minimumcolumns
|
19kmunz
|
Aposemat IoT-23 - a Labeled Dataset with Malicious and Benign IoT Network Traffic
Homepage: https://www.stratosphereips.org/datasets-iot23
This dataset contains a subset of the data from 20 captures of malicious network traffic and 3 captures of live benign traffic on Internet of Things (IoT) devices. Created by Sebastian Garcia, Agustin Parmisano, & Maria Jose Erquiaga at the Avast AIC laboratory with the funding of Avast Software, this dataset is one of the best in the field… See the full description on the dataset page: https://huggingface.co/datasets/19kmunz/iot-23-preprocessed-minimumcolumns.
|
deven367/babylm-10M
|
deven367
|
Dataset Card for "babylm-10M"
More Information needed
|
PromptSystematicReview/ThePromptReport
|
PromptSystematicReview
|
Prompt Report Dataset
This repository contains the dataset from the Prompt Report paper.
Use huggingface hub or git lfs to download this, and use the instructions in our code repository to run the experiments.
We also have a paper and website that detail our findings.
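For the hub route mentioned above, a minimal sketch (git lfs cloning works as well):

```python
from huggingface_hub import snapshot_download

# One of the two download paths named above (the hub route).
path = snapshot_download("PromptSystematicReview/ThePromptReport",
                         repo_type="dataset")
print(path)  # local folder containing master_papers.csv etc.
```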
master_papers.csv
The master papers file is a master record of all the papers in the final dataset
arxiv_papers_for_human_review.csv
This csv contains the original group of papers… See the full description on the dataset page: https://huggingface.co/datasets/PromptSystematicReview/ThePromptReport.
|
leduckhai/VietMed
|
leduckhai
|
VietMed: A Dataset and Benchmark for Automatic Speech Recognition of Vietnamese in the Medical Domain
Description:
We introduced a Vietnamese speech recognition dataset in the medical domain comprising 16h of labeled medical speech, 1000h of unlabeled medical speech and 1200h of unlabeled general-domain speech.
To the best of our knowledge, VietMed is by far the world's largest public medical speech recognition dataset in 7 aspects:
total duration, number of speakers… See the full description on the dataset page: https://huggingface.co/datasets/leduckhai/VietMed.
|
ubaada/booksum-complete-cleaned
|
ubaada
|
Description:
This repository contains the BookSum dataset introduced in the paper BookSum: A Collection of Datasets for Long-form Narrative Summarization.
This dataset includes both book and chapter summaries from the BookSum dataset (unlike kmfoda/booksum, which only contains the chapter dataset). Some mismatched summaries have been corrected, and unnecessary columns have been discarded. It contains minimal text-to-summary rows. As there are multiple summaries for a given text… See the full description on the dataset page: https://huggingface.co/datasets/ubaada/booksum-complete-cleaned.
|
tokentale1/Text-SQL-Ethereum_tokentale
|
tokentale1
|
Dataset Card for "Text-SQL-Ethereum_tokentale"
More Information needed
|
Bsbell21/MFA_tweet_topics
|
Bsbell21
|
Dataset Card for "MFA_tweet_topics"
More Information needed
|