Columns: id (string, length 6 to 121), author (string, length 2 to 42), description (string, length 0 to 6.67k).
pkufool/libriheavy
pkufool
Libriheavy: a 50,000 hours ASR corpus with punctuation casing and context Libriheavy is a labeled version of Librilight; read our paper for more details. See https://github.com/k2-fsa/libriheavy for more details. Citation @misc{kang2023libriheavy, title={Libriheavy: a 50,000 hours ASR corpus with punctuation casing and context}, author={Wei Kang and Xiaoyu Yang and Zengwei Yao and Fangjun Kuang and Yifan Yang and Liyong Guo and Long Lin and Daniel… See the full description on the dataset page: https://huggingface.co/datasets/pkufool/libriheavy.
kyujinpy/KOpen-platypus
kyujinpy
KOpenPlatypus: a Korean translation dataset of Open-Platypus Korean Translation Method I used the DeepL Pro API and Selenium; the translation took about 140 hours. +) If you give a brief source attribution when you build models or datasets from this data, it would greatly help this research 😭😭 Korean Translation post-processing Post-processing was also applied; see the list below. (*More than about 2,000 code-related samples were corrected by hand.) Code and comments were kept as-is, and only the explanation parts were rewritten in Korean. In addition to the above, Python, Java, Cpp, xml, etc. outputs were all preserved in their original data form as much as possible. Single numbers and English are… See the full description on the dataset page: https://huggingface.co/datasets/kyujinpy/KOpen-platypus.
DKYoon/SlimPajama-6B
DKYoon
Sampled version of cerebras/SlimPajama-627B. Since the original data was shuffled before chunking, I only downloaded train/chunk1 (of 10 total) and further sampled 10%. This should result in roughly 6B tokens, hence SlimPajama-6B. The dataset is 24 GB in storage size when decompressed (the original dataset is over 2 TB) and has 5,489,000 rows. The validation set and test set were sampled as well. Data source proportions for SlimPajama-627B and SlimPajama-6B For sanity purpose, I… See the full description on the dataset page: https://huggingface.co/datasets/DKYoon/SlimPajama-6B.
mu-llama/MusicQA
mu-llama
This is the dataset used for training and testing the Music Understanding Large Language Model (MU-LLaMA)
ai-habitat/hab3_bench_assets
ai-habitat
Habitat v0.3.x Benchmark Dataset Assets, configs, and episodes for reproducible benchmarking on Habitat v0.3.x. Setup Clone this repo and symlink it as data/hab3_bench_assets in the habitat-lab directory. Download the Habitat-compatible YCB SceneDataset and create a symbolic link in data/objects/ycb, or use the habitat-sim datasets_download script (README). Contents: Scene Dataset: hab3-hssd/ - the necessary configs and assets to load a subset of HSSD… See the full description on the dataset page: https://huggingface.co/datasets/ai-habitat/hab3_bench_assets.
mit-han-lab/pile-val-backup
mit-han-lab
This is a backup for the pile val dataset downloaded from here: https://the-eye.eu/public/AI/pile/val.jsonl.zst Please respect the original license of the dataset.
silk-road/ChatHaruhi-54K-Role-Playing-Dialogue
silk-road
ChatHaruhi Reviving Anime Character in Reality via Large Language Model github repo: https://github.com/LC1332/Chat-Haruhi-Suzumiya Chat-Haruhi-Suzumiya is a language model that imitates the tone, personality and storylines of characters like Haruhi Suzumiya. The project was developed by Cheng Li, Ziang Leng, Chenxi Yan, Xiaoyang Feng, HaoSheng Wang, Junyi Shen, Hao Wang, Weishi Mi, Aria Fei, Song Yan, Linkang Zhan, Yaokai Jia, Pingyu Wu, and Haozhen Sun, etc.… See the full description on the dataset page: https://huggingface.co/datasets/silk-road/ChatHaruhi-54K-Role-Playing-Dialogue.
RunsenXu/PointLLM
RunsenXu
The official dataset release of paper ECCV 2024: PointLLM: Empowering Large Language Models to Understand Point Clouds
CATIE-AQ/squad_v2_french_translated_fr_prompt_question_generation_with_answer_and_context
CATIE-AQ
squad_v2_french_translated_fr_prompt_question_generation_with_answer_and_context Summary squad_v2_french_translated_fr_prompt_question_generation_with_answer_and_context is a subset of the Dataset of French Prompts (DFP). It contains 1,112,937 rows that can be used for a question-generation (with answer and context) task. The original data (without prompts) comes from the dataset pragnakalp/squad_v2_french_translated and was augmented by questions in SQUAD 2.0… See the full description on the dataset page: https://huggingface.co/datasets/CATIE-AQ/squad_v2_french_translated_fr_prompt_question_generation_with_answer_and_context.
botp/alpaca-taiwan-dataset
botp
Alpaca Data Taiwan Chinese, a Traditional Chinese dataset for all of you.
OpenDriveLab/DriveLM
OpenDriveLab
DriveLM: Driving with Graph Visual Question Answering. We facilitate Perception, Prediction, Planning, Behavior, Motion tasks with human-written reasoning logic as a connection. We propose the task of GVQA to connect the QA pairs in a graph-style structure. To support this novel task, we provide the DriveLM-Data. DriveLM-Data comprises two distinct components: DriveLM-nuScenes and DriveLM-CARLA. In the case of DriveLM-nuScenes, we construct our dataset based on the prevailing… See the full description on the dataset page: https://huggingface.co/datasets/OpenDriveLab/DriveLM.
Falah/image_generation_prompts_SDXL
Falah
Dataset Card for "image_generation_prompts_SDXL" More Information needed
ShapeNet/ShapeNetCore-archive
ShapeNet
This repository holds archives (zip files) of main versions of ShapeNetCore, a subset of ShapeNet. ShapeNetCore is a densely annotated subset of ShapeNet covering 55 common object categories with ~51,300 unique 3D models. Each model in ShapeNetCore is linked to an appropriate synset in WordNet 3.0. Please see DATA.md for details about the data. If you use ShapeNet data, you agree to abide by the ShapeNet terms of use. You are only allowed to redistribute the data to your research associates… See the full description on the dataset page: https://huggingface.co/datasets/ShapeNet/ShapeNetCore-archive.
tollefj/massive-en-no-shorter-transfer
tollefj
Massive EN-NO shorter and similar transfer A dataset of EN-NO translations composed of the following sources: https://huggingface.co/datasets/opus100 https://huggingface.co/datasets/opus_books https://huggingface.co/datasets/open_subtitles (https://huggingface.co/datasets/tollefj/subtitles-en-no-similar-shorter) https://huggingface.co/datasets/RuterNorway/Fleurs-Alpaca-EN-NO The sources were processed by: simple preprocessing (stripping/misplaced punctuation), computing all similarities with… See the full description on the dataset page: https://huggingface.co/datasets/tollefj/massive-en-no-shorter-transfer.
toughdata/quora-question-answer-dataset
toughdata
Quora Question Answer Dataset (Quora-QuAD) contains 56,402 question-answer pairs scraped from Quora. Usage: For instructions on fine-tuning a model (Flan-T5) with this dataset, please check out the article: https://www.toughdata.net/blog/post/finetune-flan-t5-question-answer-quora-dataset
ShapeNet/PartNet-archive
ShapeNet
This repository contains archives (zip files) for PartNet, a subset of ShapeNet with part annotations. The PartNet prerelease v0 (March 29, 2019) consists of the following: PartNet v0 annotations (meshes, point clouds, and visualizations) in chunks: data_v0_chunk.zip (302MB), data_v0_chunk.z01-z10 (10GB each) HDF5 files for the semantic segmentation task (Sec 5.1 of PartNet paper): sem_seg_h5.zip (8GB) HDF5 files for the instance segmentation task (Sec 5.3 of PartNet paper): ins_seg_h5.zip… See the full description on the dataset page: https://huggingface.co/datasets/ShapeNet/PartNet-archive.
madhurbehl/RACECAR_DATA
madhurbehl
RACECAR Dataset Welcome to the RACECAR dataset! The RACECAR dataset is the first open dataset for full-scale and high-speed autonomous racing. Multi-modal sensor data has been collected from fully autonomous Indy race cars operating at speeds of up to 170 mph (273 kph). Six teams who raced in the Indy Autonomous Challenge during 2021-22 have contributed to this dataset. The dataset spans 11 interesting racing scenarios across two race tracks which include solo laps… See the full description on the dataset page: https://huggingface.co/datasets/madhurbehl/RACECAR_DATA.
Skepsun/lawyer_llama_data
Skepsun
A simple consolidation of the open-source data from lawyer-llama. The format follows the standard format of LLaMA-Efficient-Tuning, and the source field stores the original file name of each record.
explodinggradients/WikiEval
explodinggradients
WikiEval A dataset for correlation analysis of the different metrics proposed in Ragas. This dataset was generated from 50 Wikipedia pages with edits made after 2022. Column description question: a question that can be answered from the given Wikipedia page (source). source: the source Wikipedia page from which the question and context are generated. grounded_answer: answer grounded on context_v1. ungrounded_answer: answer generated without context_v1… See the full description on the dataset page: https://huggingface.co/datasets/explodinggradients/WikiEval.
w8ay/security-tools-datasets
w8ay
Dataset Card for "security-tools-datasets" More Information needed
mattismegevand/IMSDb
mattismegevand
IMSDb Scraper A Python script that scrapes movie script details from the Internet Movie Script Database (IMSDb) website. Features: fetches all script links available on IMSDb; retrieves details for each movie script, including title, poster image URL, IMSDb opinion, IMSDb rating, average user rating, writers, genres, script date, movie release date, submitted by, and full script text. Installation Clone the repository and install the required Python packages. pip… See the full description on the dataset page: https://huggingface.co/datasets/mattismegevand/IMSDb.
chriswmurphy/esperanto
chriswmurphy
Dataset Card for "esperanto" More Information needed
kotzeje/lamini_docs.jsonl
kotzeje
Dataset Card for "lamini_docs.jsonl" More Information needed
bitext/Bitext-customer-support-llm-chatbot-training-dataset
bitext
Bitext - Customer Service Tagged Training Dataset for LLM-based Virtual Assistants Overview This hybrid synthetic dataset is designed to be used to fine-tune Large Language Models such as GPT, Mistral and OpenELM, and has been generated using our NLP/NLG technology and our automated Data Labeling (DAL) tools. The goal is to demonstrate how Verticalization/Domain Adaptation for the Customer Support sector can be easily achieved using our two-step approach to LLM… See the full description on the dataset page: https://huggingface.co/datasets/bitext/Bitext-customer-support-llm-chatbot-training-dataset.
clouditera/security-paper-datasets
clouditera
Dataset Card for "security-paper-datasets" More Information needed
nampdn-ai/tiny-lessons
nampdn-ai
Tiny Lessons The dataset is designed to help causal language models learn more effectively from raw web text. It is augmented from public web text and contains two key components: theoretical concepts and practical examples. The theoretical concepts provide a foundation for understanding the underlying principles and ideas behind the information contained in the raw web text. The practical examples demonstrate how these theoretical concepts can be applied in real-world… See the full description on the dataset page: https://huggingface.co/datasets/nampdn-ai/tiny-lessons.
seara/ru_go_emotions
seara
Description This dataset is a translation of the Google GoEmotions emotion classification dataset. All features remain unchanged, except for the addition of a new ru_text column containing the translated text in Russian. For the translation process, I used the Deep translator with the Google engine. You can find all the details about translation, raw .csv files and other stuff in this Github repository. For more information also check the official original dataset card.… See the full description on the dataset page: https://huggingface.co/datasets/seara/ru_go_emotions.
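A minimal sketch of the translation step described above, using the deep_translator package with the Google engine; batching, retries, and the actual column handling of the original pipeline are omitted.

```python
from deep_translator import GoogleTranslator

# Translate a single GoEmotions text into Russian (illustrative example only).
translator = GoogleTranslator(source="en", target="ru")
print(translator.translate("I am so happy for you!"))
```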
LeoLM/OpenSchnabeltier
LeoLM
Dataset Card for "open_platypus_de" More Information needed
mlabonne/Evol-Instruct-Python-1k
mlabonne
Evol-Instruct-Python-1k Subset of the mlabonne/Evol-Instruct-Python-26k dataset with only 1000 samples. It was made by filtering out a few rows (instruction + output) with more than 2048 tokens, and then by keeping the 1000 longest samples. Here is the distribution of the number of tokens in each row using Llama's tokenizer:
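A rough sketch of the described filtering; the tokenizer checkpoint and the column names ("instruction", "output") are assumptions made for illustration, not details taken from the card.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/llama-tokenizer")
ds = load_dataset("mlabonne/Evol-Instruct-Python-26k", split="train")

# Count tokens per row, drop rows above 2048 tokens, then keep the 1000 longest.
ds = ds.map(lambda row: {"n_tokens": len(tokenizer(row["instruction"] + row["output"])["input_ids"])})
ds = ds.filter(lambda row: row["n_tokens"] <= 2048)
ds = ds.sort("n_tokens", reverse=True).select(range(1000))
```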
xzuyn/lima-alpaca
xzuyn
Original Dataset by Meta AI LIMA: Less Is More Alignment
vivym/midjourney-prompts
vivym
midjourney-prompts Description This dataset contains the cleaned midjourney prompts from Midjourney. Total prompts: 9,085,397. Prompts per version: 5.2: 2,272,465; 5.1: 2,060,106; 5.0: 3,530,770; 4.0: 1,204,384; 3.0: 14,991; 2.0: 791; 1.0: 1,239. Prompts per style: default: 8,874,181; raw: 177,953; expressive: 27,919; scenic: 2,146; cute: 2,036; original: 511.
nihiluis/financial-advisor-100
nihiluis
Dataset Card for "finadv100_v2" More Information needed
katielink/genomic-benchmarks
katielink
Genomic Benchmark In this repository, we collect benchmarks for classification of genomic sequences. It is shipped as a Python package, together with functions helping to download & manipulate datasets and train NN models. Citing Genomic Benchmarks If you use Genomic Benchmarks in your research, please cite it as follows. Text GRESOVA, Katarina, et al. Genomic Benchmarks: A Collection of Datasets for Genomic Sequence Classification. bioRxiv, 2022.… See the full description on the dataset page: https://huggingface.co/datasets/katielink/genomic-benchmarks.
ProgramComputer/VGGFace2
ProgramComputer
@article{DBLP:journals/corr/abs-1710-08092, author = {Qiong Cao and Li Shen and Weidi Xie and Omkar M. Parkhi and Andrew Zisserman}, title = {VGGFace2: {A} dataset for recognising faces across pose and age}, journal = {CoRR}, volume = {abs/1710.08092}, year = {2017}, url = {http://arxiv.org/abs/1710.08092}, eprinttype = {arXiv}, eprint = {1710.08092}… See the full description on the dataset page: https://huggingface.co/datasets/ProgramComputer/VGGFace2.
lamini/spider_text_to_sql
lamini
Dataset Card for "spider_text_to_sql" More Information needed
M-A-D/Mixed-Arabic-Datasets-Repo
M-A-D
Dataset Card for "Mixed Arabic Datasets (MAD) Corpus" The Mixed Arabic Datasets Corpus : A Community-Driven Collection of Diverse Arabic Texts Dataset Description The Mixed Arabic Datasets (MAD) presents a dynamic compilation of diverse Arabic texts sourced from various online platforms and datasets. It addresses a critical challenge faced by researchers, linguists, and language enthusiasts: the fragmentation of Arabic language datasets across the Internet. With… See the full description on the dataset page: https://huggingface.co/datasets/M-A-D/Mixed-Arabic-Datasets-Repo.
syhao777/DIVOTrack
syhao777
DIVOTrack: A Novel Dataset and Baseline Method for Cross-View Multi-Object Tracking in DIVerse Open Scenes This repository contains the details of the dataset and the Pytorch implementation of the Baseline Method CrossMOT of the Paper: DIVOTrack: A Novel Dataset and Baseline Method for Cross-View Multi-Object Tracking in DIVerse Open Scenes Abstract Cross-view multi-object tracking aims to link objects between frames and camera views with substantial overlaps.… See the full description on the dataset page: https://huggingface.co/datasets/syhao777/DIVOTrack.
shareAI/CodeChat
shareAI
CodeChat Dataset This is a relatively lightweight dataset that can be used to specifically improve a model's mathematical/logical reasoning and code Q&A abilities. Samples were drawn and combined from datasets such as shareAI/ShareGPT-Chinese-English-90k and garage-bAInd/Open-Platypus, and organized into a unified multi-turn dialogue format. It mainly contains samples related to logical reasoning, code Q&A, and code generation, and can be used with LoRA for lightweight fine-tuning to quickly activate a model's code-QA capabilities. The firefly framework is recommended; it supports loading this data format out of the box: https://github.com/yangjianxin1/Firefly
ecccho/pixiv-novel-aesthetics
ecccho
R18 novels chosen from pixiv, in Chinese. For every file in the dataset, the first line is a Unix timestamp, the second line is the novel's stats, and the third line is the novel content. Different versions of the dataset: aesthetic_2023_8_27: novels chosen from bookmarks, collected on Aug 27th, 2023; toplist_2023_8_29: novels from the daily toplist 2020-2023, collected on Aug 29th, 2023. There are no Chinese novels before 2020; maybe they were all deleted, or there are literally no Chinese novels in… See the full description on the dataset page: https://huggingface.co/datasets/ecccho/pixiv-novel-aesthetics.
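A minimal parsing sketch for the three-line file layout described above; the file path is hypothetical and the stats line is kept as a raw string.

```python
from pathlib import Path

def parse_novel_file(path: str) -> dict:
    """First line: Unix timestamp; second line: novel stats; remaining lines: novel content."""
    lines = Path(path).read_text(encoding="utf-8").splitlines()
    return {
        "timestamp": int(lines[0]),
        "stats": lines[1],
        "content": "\n".join(lines[2:]),
    }

record = parse_novel_file("aesthetic_2023_8_27/12345.txt")  # hypothetical file name
```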
YeungNLP/firefly-pretrain-dataset
YeungNLP
Firefly Chinese Llama2 Incremental Pretraining Data Welcome to join the Firefly LLM technical discussion group and follow our official account. Data overview Technical article: QLoRA incremental pretraining and instruction fine-tuning, and the practice of adapting Llama2 to Chinese. This is the incremental pretraining data for the Firefly-LLaMA2-Chinese project, about 22 GB of text in total, mainly covering open-source datasets such as CLUE, ThucNews, CNews, COIG, and Wikipedia, plus classical poetry, prose, and classical Chinese texts that we collected; the data distribution is shown in the figure below. Model list & data list We open-sourced 7B and 13B Base and Chat models. The Base models were obtained by extending the Chinese vocabulary of LLaMA2 and performing incremental pretraining; the Chat models were built on top of the Base models with multi-turn dialogue instruction fine-tuning. To study the effect of the base model on instruction tuning, we also fine-tuned baichuan2-base to obtain firefly-baichuan2-13b, which performs well. For more Chinese fine-tuning, see the Firefly project. Model Type Training task Training length… See the full description on the dataset page: https://huggingface.co/datasets/YeungNLP/firefly-pretrain-dataset.
qgyd2021/h_novel
qgyd2021
This dataset contains some SQ novels. It is intended for text generation tasks.
lamini/bird_text_to_sql
lamini
Dataset Card for "bird_text_to_sql" More Information needed
Falah/architecture_house_building_prompts_SDXL
Falah
Dataset Card for "architecture_house_building_prompts_SDXL" More Information needed
elyza/ELYZA-tasks-100
elyza
ELYZA-tasks-100: an evaluation dataset for Japanese instruction models Data Description This dataset is for evaluating instruction-tuned models; see the release note article for details. Features: 100 Japanese examples containing complex instructions and tasks. Polite, helpful-AI-assistant style outputs are expected. Evaluation criteria are annotated for every example, which should reduce variance in evaluation. Concretely, it includes tasks such as: correcting a summary and explaining the corrections; drawing an abstract lesson from a concrete episode; reading the user's intent and acting as a helpful AI assistant; complex arithmetic requiring case analysis; advanced reasoning that extracts patterns from an unknown language and translates it into Japanese; generating a YouTube dialogue while following multiple instructions; imaginative tasks such as generation and improvised humor about fictional creatures and idioms… See the full description on the dataset page: https://huggingface.co/datasets/elyza/ELYZA-tasks-100.
ebony59/AO3_fandom_chatbot
ebony59
Dataset Card for "AO3_fandom_chatbot" More Information needed
gursi26/wikihow-cleaned
gursi26
A cleaned version of the Wikihow dataset for abstractive text summarization. Changes made Changes to the original dataset include: all words have been made lowercase; all punctuation removed except ".", "," and "-"; spaces added before and after all punctuation; NA values dropped from the dataset; leading and trailing newline and space characters removed. These changes allow for easier tokenization. Citation @misc{koupaee2018wikihow, title={WikiHow: A Large Scale… See the full description on the dataset page: https://huggingface.co/datasets/gursi26/wikihow-cleaned.
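A rough sketch of the cleaning steps listed above (lowercasing, keeping only ".", "," and "-", spacing out punctuation, trimming whitespace); this is not the script actually used to build the dataset.

```python
import re

def clean_text(text: str) -> str:
    text = text.lower()
    text = re.sub(r"[^\w\s.,-]", "", text)          # drop punctuation except . , -
    text = re.sub(r"\s*([.,-])\s*", r" \1 ", text)  # space out remaining punctuation
    return re.sub(r"\s+", " ", text).strip()        # collapse whitespace and trim

print(clean_text("Mix the flour, sugar and butter.\nBake for 20-25 minutes!"))
# -> "mix the flour , sugar and butter . bake for 20 - 25 minutes"
```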
jat-project/jat-dataset
jat-project
JAT Dataset Dataset Description The Jack of All Trades (JAT) dataset combines a wide range of individual datasets. It includes demonstrations from expert RL agents, image and caption pairs, textual data, and more. The JAT dataset is part of the JAT project, which aims to build a multimodal generalist agent. Paper: https://huggingface.co/papers/2402.09844 Usage >>> from datasets import load_dataset >>> dataset =… See the full description on the dataset page: https://huggingface.co/datasets/jat-project/jat-dataset.
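A hedged completion of the truncated usage snippet above; the configuration name is an illustrative guess, not taken from the card.

```python
from datasets import load_dataset

# "metaworld-assembly" is only an example configuration name (assumption).
dataset = load_dataset("jat-project/jat-dataset", "metaworld-assembly")
print(dataset["train"][0].keys())
```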
Flmc/DISC-Med-SFT
Flmc
This is a repository containing a subset of the DISC-Med-SFT Dataset. Check DISC-MedLLM for more information.
wbensvage/clothes_desc
wbensvage
Dataset Card for H&M Clothes captions Dataset used to train/finetune a clothes text-to-image model. Captions are generated using the 'detail_desc' and 'colour_group_name' or 'perceived_colour_master_name' fields from kaggle/competitions/h-and-m-personalized-fashion-recommendations. Original images were also obtained from the URL (https://www.kaggle.com/competitions/h-and-m-personalized-fashion-recommendations/data?select=images). For each row the dataset contains image… See the full description on the dataset page: https://huggingface.co/datasets/wbensvage/clothes_desc.
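A sketch of how such captions could be assembled from the Kaggle articles file; the file name, caption template, and fallback logic are assumptions, only the column names come from the card.

```python
import pandas as pd

articles = pd.read_csv("articles.csv")  # hypothetical local copy of the Kaggle file

def make_caption(row: pd.Series) -> str:
    # Prefer colour_group_name, fall back to perceived_colour_master_name (assumed rule).
    colour = row["colour_group_name"] if pd.notna(row["colour_group_name"]) else row["perceived_colour_master_name"]
    return f"{colour} {row['detail_desc']}"

articles["caption"] = articles.apply(make_caption, axis=1)
print(articles["caption"].head())
```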
vikp/evol_codealpaca_filtered_87k
vikp
Dataset Card for "evol_codealpaca_filtered_86k" Filtered version of theblackcat102/evol-codealpaca-v1, with manual filtering, and automatic filtering based on quality and learning value classifiers.
lamini/text_to_sql_finetune
lamini
Dataset Card for "text_to_sql_finetune" More Information needed
Yorai/detect-waste
Yorai
Dataset Card for detect-waste Dataset Summary AI4Good project for detecting waste in the environment. www.detectwaste.ml. Our latest results were published in the Waste Management journal in an article titled Deep learning-based waste detection in natural and urban environments. You can find more technical details in our technical report Waste detection in Pomerania: non-profit project for detecting waste in environment. Did you know that we produce 300 million tons of… See the full description on the dataset page: https://huggingface.co/datasets/Yorai/detect-waste.
chiragtubakad/chart-to-table
chiragtubakad
Dataset Card for "chart-to-table" More Information needed
theblackcat102/datascience-stackexchange-posts
theblackcat102
Dataset Card for "datascience-stackexchange-posts" More Information needed
monology/pile-uncopyrighted
monology
Pile Uncopyrighted In response to authors demanding that LLMs stop using their works, here's a copy of The Pile with all copyrighted content removed. Please consider using this dataset to train your future LLMs, to respect authors and abide by copyright law. Creating an uncopyrighted version of a larger dataset (i.e. RedPajama) is planned, with no ETA. Methodology Cleaning was performed by removing everything from the Books3, BookCorpus2, OpenSubtitles, YTSubtitles, and OWT2… See the full description on the dataset page: https://huggingface.co/datasets/monology/pile-uncopyrighted.
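A sketch of the described cleaning applied to the original Pile jsonl files; the "pile_set_name" meta key and the exact subset-name strings are assumptions based on the upstream Pile format, not details from this card.

```python
import json

# Subsets to drop, per the card; exact name strings are assumed.
REMOVED = {"Books3", "BookCorpus2", "OpenSubtitles", "YoutubeSubtitles", "OpenWebText2"}

def keep(line: str) -> bool:
    record = json.loads(line)
    return record["meta"]["pile_set_name"] not in REMOVED

with open("val.jsonl") as src, open("val_uncopyrighted.jsonl", "w") as dst:
    for line in src:
        if keep(line):
            dst.write(line)
```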
llvm-ml/ComPile
llvm-ml
Dataset Card for ComPile: A Large IR Dataset from Production Sources Changelog: v1.0 (C/C++, Rust, Swift, Julia): fine-tuning-scale dataset of 602GB of deduplicated LLVM (bitcode) IR. Dataset Summary ComPile contains over 2.7TB of permissively-licensed source code compiled to (textual) LLVM intermediate representation (IR) covering C/C++, Rust, Swift, and Julia. The dataset was created by hooking… See the full description on the dataset page: https://huggingface.co/datasets/llvm-ml/ComPile.
allenai/MADLAD-400
allenai
MADLAD-400 Dataset and Introduction MADLAD-400 (Multilingual Audited Dataset: Low-resource And Document-level) is a document-level multilingual dataset based on Common Crawl, covering 419 languages in total. This uses all snapshots of CommonCrawl available as of August 1, 2022. The primary advantage of this dataset over similar datasets is that it is more multilingual (419 languages), it is audited and more highly filtered, and it is document-level. The main… See the full description on the dataset page: https://huggingface.co/datasets/allenai/MADLAD-400.
axiong/pmc_llama_instructions
axiong
This repo provides part of the dataset used for PMC-LLaMA-13B's instruction tuning. Data sources (name, size, link): ChatDoctor, 100K, https://www.yunxiangli.top/ChatDoctor/; MedQA, 10.2K, https://huggingface.co/datasets/GBaker/MedQA-USMLE-4-options; MedMCQA, 183K, https://huggingface.co/datasets/medmcqa; PubmedQA, 211K, https://huggingface.co/datasets/pubmed_qa; LiveQA, 635, https://huggingface.co/datasets/truehealth/liveqa; MedicationQA, 690, https://huggingface.co/datasets/truehealth/medicationqa; UMLS… See the full description on the dataset page: https://huggingface.co/datasets/axiong/pmc_llama_instructions.
starmpcc/Asclepius-Synthetic-Clinical-Notes
starmpcc
Asclepius: Synthetic Clinical Notes & Instruction Dataset Dataset Summary This is the official dataset for Asclepius (arxiv). It is composed in a Clinical Note - Question - Answer format to build clinical LLMs. We first synthesized clinical notes from PMC-Patients case reports with GPT-3.5, then generated instruction-answer pairs for 157k synthetic discharge summaries. Supported Tasks This dataset covers the following 8 tasks: Named Entity… See the full description on the dataset page: https://huggingface.co/datasets/starmpcc/Asclepius-Synthetic-Clinical-Notes.
qwertyaditya/rick_and_morty_text_to_image
qwertyaditya
Dataset Card for "rick_and_morty_text_to_image" More Information needed
limingcv/MultiGen-20M_depth
limingcv
Dataset Card for "MultiGen-20M_depth" More Information needed
facebook/belebele
facebook
The Belebele Benchmark for Massively Multilingual NLU Evaluation Belebele is a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. This dataset enables the evaluation of mono- and multi-lingual models in high-, medium-, and low-resource languages. Each question has four multiple-choice answers and is linked to a short passage from the FLORES-200 dataset. The human annotation procedure was carefully curated to create questions that… See the full description on the dataset page: https://huggingface.co/datasets/facebook/belebele.
kingbri/PIPPA-shareGPT
kingbri
Dataset Card: PIPPA-ShareGPT This is a conversion of PygmalionAI's PIPPA deduped dataset to ShareGPT format for finetuning with Axolotl. The reformat was completed via the following TypeScript project called ShareGPT-Reformat. Files and explanations pippa_sharegpt_raw.jsonl: The raw deduped dataset file converted to shareGPT. Roles will be defaulted to your finetuning software. pippa_sharegpt.jsonl: A shareGPT dataset with the roles as USER: and CHARACTER: for… See the full description on the dataset page: https://huggingface.co/datasets/kingbri/PIPPA-shareGPT.
mychen76/stack-exchange-paired-500k
mychen76
StackExchange Paired 500K is a subset of lvwerra/stack-exchange-paired, which is a processed version of HuggingFaceH4/stack-exchange-preferences. The following steps were applied: parse HTML to Markdown with markdownify; create pairs (response_j, response_k) where j was rated better than k; sample at most 10 pairs per question; shuffle the dataset globally. A sketch of the first two steps is shown below. This dataset is designed to be used for preference learning. license: mit
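A sketch of the first two preprocessing steps (HTML to Markdown via markdownify, then j/k preference pairs); the answer records with "body" and "score" fields are an assumption for illustration, not the pipeline's actual data structure.

```python
from itertools import combinations
from markdownify import markdownify as md

def build_pairs(answers, max_pairs=10):
    """Return up to max_pairs (response_j, response_k) pairs where j outscored k."""
    ranked = sorted(answers, key=lambda a: a["score"], reverse=True)
    pairs = []
    for a_j, a_k in combinations(ranked, 2):
        if a_j["score"] > a_k["score"]:
            # Convert the HTML answer bodies to Markdown before pairing.
            pairs.append((md(a_j["body"]), md(a_k["body"])))
        if len(pairs) >= max_pairs:
            break
    return pairs
```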
FudanSELab/ClassEval
FudanSELab
Dataset Card for FudanSELab ClassEval Dataset Summary We manually built ClassEval, a set of 100 class-level Python coding tasks, consisting of 100 classes and 412 methods, with an average of 33.1 test cases per class. For the 100 class-level tasks, diversity is maintained by spanning a wide spectrum of topics, including Management Systems, Data Formatting, Mathematical Operations, Game Development, File Handling, Database Operations and Natural Language… See the full description on the dataset page: https://huggingface.co/datasets/FudanSELab/ClassEval.
theblackcat102/multiround-programming-convo
theblackcat102
Multi-Round Programming Conversations Based on the previous evol-codealpaca-v1 dataset, with added sampled questions from Stack Overflow and Cross Validated, made multi-round. It should be better suited to training a code assistant that works side by side with you. Tasks included here: data science, statistics, and programming questions; code translation: translate a short function between Python, Golang, C++, Java, Javascript; code fixing: fix randomly corrupted characters with no… See the full description on the dataset page: https://huggingface.co/datasets/theblackcat102/multiround-programming-convo.
pszemraj/simple_wikipedia
pszemraj
simple wikipedia The 'simple' split of Wikipedia, from Sept 1 2023. The train split contains about 65M tokens, pulled via the load_dataset call shown below. Stats, train split, general info: a pandas DataFrame with a RangeIndex of 226242 entries (0 to 226241) and 4 data columns… See the full description on the dataset page: https://huggingface.co/datasets/pszemraj/simple_wikipedia.
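The load call quoted in the card, formatted as runnable code; note that the "wikipedia" loading script with a beam_runner argument targets older versions of the datasets library.

```python
from datasets import load_dataset

# Pull the Simple English Wikipedia dump from Sept 1, 2023 (call taken from the card).
dataset = load_dataset(
    "wikipedia",
    language="simple",
    date="20230901",
    beam_runner="DirectRunner",
)
```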
if001/aozorabunko-clean-sin
if001
This is a fork of https://huggingface.co/datasets/globis-university/aozorabunko-clean, filtered to rows where row["meta"]["文字遣い種別"] == "新字新仮名" (modern kanji and kana orthography).
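A minimal sketch of the described filter over the upstream dataset; the split name "train" is an assumption.

```python
from datasets import load_dataset

# Keep only rows marked as modern kanji/kana orthography, per the filter quoted above.
ds = load_dataset("globis-university/aozorabunko-clean", split="train")
ds = ds.filter(lambda row: row["meta"]["文字遣い種別"] == "新字新仮名")
```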
uonlp/CulturaX
uonlp
CulturaX Cleaned, Enormous, and Public: The Multilingual Fuel to Democratize Large Language Models for 167 Languages Dataset Summary We present CulturaX, a substantial multilingual dataset with 6.3 trillion tokens in 167 languages, tailored for large language model (LLM) development. Our dataset undergoes meticulous cleaning and deduplication through a rigorous pipeline of multiple stages to accomplish the best quality for model training, including language… See the full description on the dataset page: https://huggingface.co/datasets/uonlp/CulturaX.
chats-bug/input_tools_plans
chats-bug
Dataset Card for "input_tools_plans" More Information needed
EgilKarlsen/BGL_DistilRoBERTa_FT
EgilKarlsen
Dataset Card for "BGL_DistilRoBERTa_FT" More Information needed
mattpscott/airoboros-summarization
mattpscott
This is my adaptation and cleaned version of the Booksum dataset, made to work with Airoboros by Jon Durbin (huggingface). I created this dataset to improve LLM summarization capabilities. It's a core feature that I feel many applications rely on, yet we're still relying on older Longformer, RoBERTa, or BART solutions. This dataset has been altered from the original as follows: cleaned up bad formatting, extra quotes at the beginning of summaries, extra line breaks, and… See the full description on the dataset page: https://huggingface.co/datasets/mattpscott/airoboros-summarization.
aqubed/kub_tickets_small
aqubed
Dataset Card for "kub_tickets_small" More Information needed
kuokxuen/marketing_dataset
kuokxuen
Dataset Card for "marketing_dataset" More Information needed
open-llm-leaderboard-old/details_Undi95__LewdEngine
open-llm-leaderboard-old
Dataset Card for Evaluation run of Undi95/LewdEngine Dataset Summary Dataset automatically created during the evaluation run of model Undi95/LewdEngine on the Open LLM Leaderboard. The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard-old/details_Undi95__LewdEngine.
AlfredPros/smart-contracts-instructions
AlfredPros
Smart Contracts Instructions A dataset containing 6,003 GPT-generated human instruction and Solidity source code data pairs. GPT models used to make this data are GPT-3.5 turbo, GPT-3.5 turbo 16k context, and GPT-4. Solidity source codes are taken from mwritescode's Slither Audited Smart Contracts (https://huggingface.co/datasets/mwritescode/slither-audited-smart-contracts). Distribution of the GPT models used to make this dataset: GPT-3.5 Turbo: 5,276; GPT-3.5 Turbo 16k… See the full description on the dataset page: https://huggingface.co/datasets/AlfredPros/smart-contracts-instructions.
LabHC/bias_in_bios
LabHC
Bias in Bios Bias in Bios was created by (De-Arteaga et al., 2019) and published under the MIT license (https://github.com/microsoft/biosbias). The dataset is used to investigate bias in NLP models. It consists of textual biographies used to predict professional occupations; the sensitive attribute is the (binary) gender. The version shared here is the version proposed by (Ravfogel et al., 2020), which is slightly smaller due to the unavailability of 5,557 biographies. The dataset… See the full description on the dataset page: https://huggingface.co/datasets/LabHC/bias_in_bios.
open-web-math/open-web-math
open-web-math
Keiran Paster*, Marco Dos Santos*, Zhangir Azerbayev, Jimmy Ba GitHub | ArXiv | PDF OpenWebMath is a dataset containing the majority of the high-quality, mathematical text from the internet. It is filtered and extracted from over 200B HTML files on Common Crawl down to a set of 6.3 million documents containing a total of 14.7B tokens. OpenWebMath is intended for use in pretraining and finetuning large language models. You can download the dataset using Hugging Face: from datasets import… See the full description on the dataset page: https://huggingface.co/datasets/open-web-math/open-web-math.
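A hedged completion of the truncated download snippet above; streaming mode and the "text" field are illustrative assumptions rather than details quoted from the card.

```python
from datasets import load_dataset

# Stream OpenWebMath and inspect the first document (assumed "text" field).
ds = load_dataset("open-web-math/open-web-math", split="train", streaming=True)
print(next(iter(ds))["text"][:500])
```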
yilunzhao/robut
yilunzhao
This RobuT-WTQ dataset is a large-scale dataset for robust question answering on semi-structured tables.
ronniewy/chinese_nli
ronniewy
Common Chinese semantic matching datasets.
nampdn-ai/devdocs.io
nampdn-ai
189k documents (~1GB of raw clean text) covering various programming languages & tech stacks from DevDocs, which combines multiple API documentations in a fast, organized, and searchable interface. DevDocs is free and open source by FreeCodeCamp. I've converted it into Markdown format as a standard for training data.
AdamCodd/emotion-balanced
AdamCodd
Dataset Card for "emotion" Dataset Summary Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper. Supported Tasks and Leaderboards More Information Needed Languages More Information Needed Dataset Structure Data Instances An example looks as follows. { "text": "im feeling quite sad… See the full description on the dataset page: https://huggingface.co/datasets/AdamCodd/emotion-balanced.
nampdn-ai/mini-coder
nampdn-ai
The Mini-Coder dataset is a 2.2 million (~8GB) filtered selection of code snippets from the bigcode/starcoderdata dataset, serving as a seed for synthetic dataset generation. Each snippet is chosen for its clarity, presence of comments, and inclusion of at least one if/else or switch case statement. This repository is particularly useful for ML researchers working on synthetic dataset generation.
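A rough sketch of the stated selection criterion (comments present plus at least one if/else or switch statement); these regexes are illustrative and not the filter actually used to build the dataset.

```python
import re

# Very loose heuristics: a branching keyword and any common comment marker.
BRANCH_RE = re.compile(r"\b(if|else|switch)\b")
COMMENT_RE = re.compile(r"(//|#|/\*|<!--)")

def looks_like_seed_snippet(code: str) -> bool:
    return bool(BRANCH_RE.search(code)) and bool(COMMENT_RE.search(code))
```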
rombodawg/LimitlessMegaCodeTraining
rombodawg
----- BREAK THROUGH YOUR LIMITS ----- LimitlessCodeTraining is the direct sequel to Megacodetraining, which is now called Legacy_MegaCodeTraining200k. This dataset is just over 646k lines of pure refined coding data. It is the pinnacle of open-source code training. It is the combination of the Megacode training dataset filtered by shahules786 (shoutout to him) and the bigcode commitpackft dataset I converted to alpaca format. The datasets that were used to create this… See the full description on the dataset page: https://huggingface.co/datasets/rombodawg/LimitlessMegaCodeTraining.
health360/Healix-Shot
health360
README Healix-Shot: Largest Medical Corpora by Health 360 Healix-Shot, proudly presented by Health 360, stands as an emblematic milestone in the realm of medical datasets. Hosted on the HuggingFace repository, it heralds the infusion of cutting-edge AI in the healthcare domain. With an astounding 22 billion tokens, Healix-Shot provides a comprehensive, high-quality corpus of medical text, laying the foundation for unparalleled medical NLP applications. Importance:… See the full description on the dataset page: https://huggingface.co/datasets/health360/Healix-Shot.
yys/OpenOrca-Chinese
yys
🐋 OpenOrca-Chinese Dataset! 🐋 Thanks to the release of the Open-Orca/OpenOrca dataset, which brings a valuable resource to NLP researchers and developers! This is a Chinese translation of the Open-Orca/OpenOrca dataset, produced with Google Translate; we hope it makes a small contribution to Chinese LLM research. Dataset Summary The OpenOrca dataset is a collection of augmented FLAN Collection data. Currently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions. It is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with… See the full description on the dataset page: https://huggingface.co/datasets/yys/OpenOrca-Chinese.
codefuse-ai/CodeExercise-Python-27k
codefuse-ai
Dataset Card for CodeFuse-CodeExercise-Python-27k [中文] [English] Dataset Description This dataset consists of 27K Python programming exercises (in English), covering hundreds of Python-related topics including basic syntax and data structures, algorithm applications, database queries, machine learning, and more. Please note that this dataset was generated with the help of a teacher model and Camel, and has not undergone strict validation. There may be… See the full description on the dataset page: https://huggingface.co/datasets/codefuse-ai/CodeExercise-Python-27k.
codefuse-ai/Evol-instruction-66k
codefuse-ai
Dataset Card for CodeFuse-Evol-instruction-66k [中文] [English] Dataset Description Evol-instruction-66k data is based on the method mentioned in the paper "WizardCoder: Empowering Code Large Language Models with Evol-Instruct". It enhances the fine-tuning effect of pre-trained code large models by adding complex code instructions. This data is processed based on an open-source dataset, which can be found at Evol-Instruct-Code-80k-v1. The processing… See the full description on the dataset page: https://huggingface.co/datasets/codefuse-ai/Evol-instruction-66k.
manu/project_gutenberg
manu
Dataset Card for "Project Gutenberg" Project Gutenberg is a library of over 70,000 free eBooks, hosted at https://www.gutenberg.org/. All examples correspond to a single book, and contain a header and a footer of a few lines (delimited by a *** Start of *** and *** End of *** tags). Usage from datasets import load_dataset ds = load_dataset("manu/project_gutenberg", split="fr", streaming=True) print(next(iter(ds))) License Full license is… See the full description on the dataset page: https://huggingface.co/datasets/manu/project_gutenberg.
mirzaei2114/stackoverflowVQA-filtered-small
mirzaei2114
Dataset Card for "stackoverflowVQA-filtered-small" More Information needed
nampdn-ai/tiny-code-textbooks
nampdn-ai
Code Explanation Textbooks A collection of 207k synthetic code-with-explanation samples, each written as a tiny textbook. Filtered from the-stack; each programming language contributes a few thousand samples. I only chose the most meaningful code to generate the synthetic textbooks.
shunk031/MSCOCO
shunk031
Dataset Card for MSCOCO Dataset Summary COCO is a large-scale object detection, segmentation, and captioning dataset. COCO has several features: Object segmentation Recognition in context Superpixel stuff segmentation 330K images (>200K labeled) 1.5 million object instances 80 object categories 91 stuff categories 5 captions per image 250,000 people with keypoints Supported Tasks and Leaderboards [More Information Needed] Languages… See the full description on the dataset page: https://huggingface.co/datasets/shunk031/MSCOCO.
euirim/goodwiki
euirim
GoodWiki Dataset GoodWiki is a 179 million token dataset of English Wikipedia articles collected on September 4, 2023, that have been marked as Good or Featured by Wikipedia editors. The dataset provides these articles in GitHub-flavored Markdown format, preserving layout features like lists, code blocks, math, and block quotes, unlike many other public Wikipedia datasets. Articles are accompanied by a short description of the page as well as any associated categories. Thanks to… See the full description on the dataset page: https://huggingface.co/datasets/euirim/goodwiki.
sdadas/gpt-exams
sdadas
GPT-exams Dataset summary The dataset contains 8131 multi-domain question-answer pairs. It was created semi-automatically using the gpt-3.5-turbo-0613 model available in the OpenAI API. The process of building the dataset was as follows: We manually prepared a list of 409 university-level courses from various fields. For each course, we instructed the model with the prompt: "Wygeneruj 20 przykładowych pytań na egzamin z [nazwa przedmiotu]" (Generate 20 sample… See the full description on the dataset page: https://huggingface.co/datasets/sdadas/gpt-exams.
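A sketch of the semi-automatic generation step described above; the prompt template and model name come from the card, while the client setup and the example course name are assumptions.

```python
from openai import OpenAI

client = OpenAI()
course = "algebra liniowa"  # hypothetical entry from the 409-course list

# Ask the model for 20 sample exam questions for the course, as described in the card.
response = client.chat.completions.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user",
               "content": f"Wygeneruj 20 przykładowych pytań na egzamin z {course}"}],
)
print(response.choices[0].message.content)
```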
serbog/job_listing_german_cleaned_bert
serbog
Dataset Card for "job_listing_german_cleaned_bert" More Information needed
chuyin0321/timeseries-daily-stocks
chuyin0321
Dataset Card for "timeseries-daily-stocks" More Information needed
ibm/AttaQ
ibm
AttaQ Dataset Card The AttaQ red teaming dataset, consisting of 1402 carefully crafted adversarial questions, is designed to evaluate Large Language Models (LLMs) by assessing their tendency to generate harmful or undesirable responses. It may serve as a benchmark to assess the potential harm of responses produced by LLMs. The dataset is categorized into seven distinct classes of questions: deception, discrimination, harmful information, substance abuse, sexual content… See the full description on the dataset page: https://huggingface.co/datasets/ibm/AttaQ.
fenilgandhi/sheldon_dialogues
fenilgandhi
Dataset Card for "sheldon_dialogues" More Information needed
thu-coai/SafetyBench
thu-coai
SafetyBench is a comprehensive benchmark for evaluating the safety of LLMs, which comprises 11,435 diverse multiple choice questions spanning across 7 distinct categories of safety concerns. Notably, SafetyBench also incorporates both Chinese and English data, facilitating the evaluation in both languages. Please visit our GitHub and website or check our paper for more details. We release three different test sets including the Chinese test set (test_zh.json), the English test set (test_en.json) and… See the full description on the dataset page: https://huggingface.co/datasets/thu-coai/SafetyBench.
bupt/LawDataset-BUPT
bupt
LawDataset-BUPT ⚖️ Here is the full data from the Legal LLM project, from which we hope to build a high-quality dataset. Here's our github project page. If you want to make any contribution, please contact me (QQ 2248157602). Data Source Our data mainly comes from: CrimeKgAssistant: 856 crime KG items / 2800k crime name_entities / 200k lawQA with 13 classes; Tigerbot-law-plugin: 55k law provision data with 11 classes; Wenshu_ms_dataset: 45k law judgement data; Lexilaw… See the full description on the dataset page: https://huggingface.co/datasets/bupt/LawDataset-BUPT.