Columns: id (string, 6–121 chars) · author (string, 2–42 chars) · description (string, 0–6.67k chars)
cmarkea/doc-vqa
cmarkea
Dataset description The doc-vqa Dataset integrates images from the Infographic_vqa dataset sourced from HuggingFaceM4 The Cauldron dataset, as well as images from the dataset AFTDB (Arxiv Figure Table Database) curated by cmarkea. This dataset consists of pairs of images and corresponding text, with each image linked to an average of five questions and answers available in both English and French. These questions and answers were generated using Gemini 1.5 Pro, thereby… See the full description on the dataset page: https://huggingface.co/datasets/cmarkea/doc-vqa.
AI-MO/NuminaMath-CoT
AI-MO
Dataset Card for NuminaMath CoT Dataset Summary Approximately 860k math problems, where each solution is formatted in a Chain of Thought (CoT) manner. The sources of the dataset range from Chinese high school math exercises to US and international mathematics olympiad competition problems. The data were primarily collected from online exam paper PDFs and mathematics discussion forums. The processing steps include (a) OCR from the original PDFs, (b) segmentation… See the full description on the dataset page: https://huggingface.co/datasets/AI-MO/NuminaMath-CoT.
argilla/magpie-ultra-v0.1
argilla
Dataset Card for magpie-ultra-v0.1 This dataset has been created with distilabel. 📰 News [08/02/2024] Release of the first unfiltered version of the dataset containing 50K instruction-response pairs that can be used for SFT or DPO. Dataset Summary magpie-ultra is a synthetically generated dataset for supervised fine-tuning using the new Llama 3.1 405B-Instruct model, together with other Llama models like Llama-Guard-3-8B… See the full description on the dataset page: https://huggingface.co/datasets/argilla/magpie-ultra-v0.1.
turkish-nlp-suite/InstrucTurca
turkish-nlp-suite
InstrucTurca v1.0.0 is a diverse synthetic instruction tuning dataset crafted for instruction-tuning Turkish LLMs. The data is compiled from various English datasets and sources, such as code instructions, poems, summarized texts, medical texts, and more. Dataset content BI55/MedText checkai/instruction-poems garage-bAInd/Open-Platypus Locutusque/ColumnedChatCombined nampdn-ai/tiny-codes Open-Orca/OpenOrca pubmed_qa TIGER-Lab/MathInstruct… See the full description on the dataset page: https://huggingface.co/datasets/turkish-nlp-suite/InstrucTurca.
Skywork/Skywork-Reward-Preference-80K-v0.1
Skywork
Skywork Reward Preference 80K IMPORTANT: This dataset was shown to contain contaminated samples from the magpie-ultra-v0.1 subset. The prompts of those samples have a significant n-gram overlap with the evaluation prompts in RewardBench, based on the script in this GitHub gist. You can find the set of removed pairs here. If your task involves evaluation on RewardBench, we strongly encourage you to use Skywork-Reward-Preference-80K-v0.2 instead of v0.1 of the dataset. Skywork… See the full description on the dataset page: https://huggingface.co/datasets/Skywork/Skywork-Reward-Preference-80K-v0.1.
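The contamination check described above rests on word-level n-gram overlap between training prompts and evaluation prompts. A minimal sketch of such a check (the actual filtering script is in the linked gist; the function names and the 8-gram threshold here are illustrative):

```python
def ngrams(text, n=8):
    """Word-level n-grams of a text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def is_contaminated(train_prompt, eval_prompts, n=8):
    """True if the training prompt shares any n-gram with any eval prompt."""
    grams = ngrams(train_prompt, n)
    return any(grams & ngrams(p, n) for p in eval_prompts)

# Toy example: the first prompt embeds an eval prompt almost verbatim.
eval_set = ["Write a short story about a robot who learns to paint in watercolor."]
flagged = is_contaminated(
    "Please write a short story about a robot who learns to paint in watercolor.",
    eval_set)
clean = is_contaminated("Explain the quicksort algorithm briefly.", eval_set)
```

Flagged pairs would then be dropped from the training set, as was done to produce v0.2.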
mistralai/MM-MT-Bench
mistralai
MM-MT-Bench MM-MT-Bench is a multi-turn LLM-as-a-judge evaluation benchmark similar to the text MT-Bench for testing multimodal instruction-tuned models. While existing benchmarks like MMMU, MathVista, ChartQA and so on are focused on closed-ended questions with short responses, they do not evaluate a model's ability to follow user instructions in multi-turn dialogues and answer open-ended questions in a zero-shot manner. MM-MT-Bench is designed to overcome this limitation. The… See the full description on the dataset page: https://huggingface.co/datasets/mistralai/MM-MT-Bench.
nvidia/OpenMathInstruct-2
nvidia
OpenMathInstruct-2 OpenMathInstruct-2 is a math instruction tuning dataset with 14M problem-solution pairs generated using the Llama3.1-405B-Instruct model. The training set problems of GSM8K and MATH are used for constructing the dataset in the following ways: Solution augmentation: Generating chain-of-thought solutions for training set problems in GSM8K and MATH. Problem-Solution augmentation: Generating new problems, followed by solutions for these new problems.… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/OpenMathInstruct-2.
ARTPARK-IISc/Vaani
ARTPARK-IISc
Project Vaani, by IISc, Bangalore and ARTPARK, is capturing the true diversity of India’s spoken languages to propel language AI technologies and content for an inclusive Digital India. We expect to create data corpora of over 150,000 hours of speech, part of which will be transcribed in local scripts, while ensuring linguistic, educational, urban-rural, age, and gender diversity (among other potential diversity characteristics). These diligently collected and curated datasets of natural… See the full description on the dataset page: https://huggingface.co/datasets/ARTPARK-IISc/Vaani.
argilla/ifeval-like-data
argilla
IFEval Like Data This dataset contains instruction-response pairs synthetically generated using Qwen/Qwen2.5-72B-Instruct following the style of the google/IFEval dataset and verified for correctness with lm-evaluation-harness. The dataset contains two subsets: default: which contains 550k unfiltered rows synthetically generated with Qwen2.5-72B-Instruct, a few system prompts, and the MagPie prompting technique. The prompts can contain conflicting instructions as defined… See the full description on the dataset page: https://huggingface.co/datasets/argilla/ifeval-like-data.
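IFEval-style data is "verifiable" because each instruction imposes constraints that can be checked mechanically. A hypothetical minimal checker (the parameter names and constraints are illustrative, not lm-evaluation-harness's actual API):

```python
def check_constraints(response, min_words=None, num_bullets=None, must_end_with=None):
    """Verify a response against simple IFEval-style verifiable constraints."""
    if min_words is not None and len(response.split()) < min_words:
        return False
    if num_bullets is not None:
        # Count lines formatted as markdown bullets ("* ...").
        bullets = [l for l in response.splitlines() if l.lstrip().startswith("* ")]
        if len(bullets) != num_bullets:
            return False
    if must_end_with is not None and not response.rstrip().endswith(must_end_with):
        return False
    return True

ok = check_constraints("* one\n* two\n* three", num_bullets=3)
bad = check_constraints("* one\n* two", num_bullets=3)
```

Responses failing their constraints would be excluded from the filtered subset.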
PULSE-ECG/ECGBench
PULSE-ECG
ECGBench Benchmark Dataset for paper "Teach Multimodal LLMs to Comprehend Electrocardiographic Images". 🌐 Project Page: https://aimedlab.github.io/PULSE/ 📄 Paper: https://arxiv.org/abs/2410.19008 🧑‍💻 Code: https://github.com/AIMedLab/PULSE 🤗 Model: https://huggingface.co/PULSE-ECG/PULSE-7B 👩‍⚕️ ECGInstruct: https://huggingface.co/datasets/PULSE-ECG/ECGInstruct Introduction We introduce ECGBench, a comprehensive benchmark designed to evaluate ECG image… See the full description on the dataset page: https://huggingface.co/datasets/PULSE-ECG/ECGBench.
PULSE-ECG/ECGInstruct
PULSE-ECG
ECGInstruct Dataset for paper "Teach Multimodal LLMs to Comprehend Electrocardiographic Images". 🌐 Project Page: https://aimedlab.github.io/PULSE/ 📄 Paper: https://arxiv.org/abs/2410.19008 🧑‍💻 Code: https://github.com/AIMedLab/PULSE 🤗 Model: https://huggingface.co/PULSE-ECG/PULSE-7B ⚖️ ECGBench: https://huggingface.co/datasets/PULSE-ECG/ECGBench Introduction ECGInstruct is a comprehensive and large-scale instruction-tuning dataset designed for ECG image… See the full description on the dataset page: https://huggingface.co/datasets/PULSE-ECG/ECGInstruct.
tokyotech-llm/lmsys-chat-1m-synth
tokyotech-llm
LMSYS-Chat-1M-Synth-Llama3.1-Ja-and-En: Japanese/English Synthetic Conversation Dataset Derived from LMSYS-Chat-1M LMSYS-Chat-1M-Synth-Llama3.1-Ja-and-En is a Japanese and English conversation dataset. Built with Llama. Japanese portion: 453,889 user instructions that were translated (by DeepL) from the LMSYS-Chat-1M dataset [Zhang+, ICLR24] 2,722,314 assistant responses that were generated automatically by Llama 3.1 405B Instruct for the Japanese instructions (roughly six… See the full description on the dataset page: https://huggingface.co/datasets/tokyotech-llm/lmsys-chat-1m-synth.
Fanqi-Lin/Processed-Task-Dataset
Fanqi-Lin
Robotic Manipulation Datasets for Four Tasks [Project Page] [Paper] [Code] [Models] This repository contains in-the-wild robotic manipulation datasets collected using UMI, and processed through a SLAM pipeline, as described in the paper "Data Scaling Laws in Imitation Learning for Robotic Manipulation". The datasets cover four tasks: Pour Water, Arrange Mouse, Fold Towel, and Unplug Charger. Dataset Folders: arrange_mouse and pour_water: Each folder contains data from… See the full description on the dataset page: https://huggingface.co/datasets/Fanqi-Lin/Processed-Task-Dataset.
code-search-net/code_search_net
code-search-net
CodeSearchNet corpus contains about 6 million functions from open-source code spanning six programming languages (Go, Java, JavaScript, PHP, Python, and Ruby). The CodeSearchNet Corpus also contains automatically generated query-like natural language for 2 million functions, obtained from mechanically scraping and preprocessing associated function documentation.
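The query-like natural language in CodeSearchNet comes from scraping function documentation. For the Python portion, a rough sketch of pairing functions with their docstrings using the standard `ast` module (a simplification of the corpus's actual preprocessing pipeline):

```python
import ast

def extract_pairs(source):
    """Yield (function_name, first_docstring_line) pairs from Python source."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            doc = ast.get_docstring(node)
            if doc:  # only documented functions yield a query-like string
                yield node.name, doc.splitlines()[0]

code = '''
def add(a, b):
    """Return the sum of two numbers."""
    return a + b
'''
pairs = list(extract_pairs(code))
```

Each such pair gives one (natural language, code) training example for code search.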
stanfordnlp/imdb
stanfordnlp
Dataset Card for "imdb" Dataset Summary Large Movie Review Dataset. This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. We provide a set of 25,000 highly polar movie reviews for training, and 25,000 for testing. There is additional unlabeled data for use as well. Supported Tasks and Leaderboards More Information Needed Languages More Information Needed… See the full description on the dataset page: https://huggingface.co/datasets/stanfordnlp/imdb.
openslr/librispeech_asr
openslr
LibriSpeech is a corpus of approximately 1000 hours of read English speech with a sampling rate of 16 kHz, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned.
google-research-datasets/mbpp
google-research-datasets
Dataset Card for Mostly Basic Python Problems (mbpp) Dataset Summary The benchmark consists of around 1,000 crowd-sourced Python programming problems, designed to be solvable by entry-level programmers, covering programming fundamentals, standard library functionality, and so on. Each problem consists of a task description, a code solution, and 3 automated test cases. As described in the paper, a subset of the data has been hand-verified by us. Released here as part… See the full description on the dataset page: https://huggingface.co/datasets/google-research-datasets/mbpp.
ylecun/mnist
ylecun
Dataset Card for MNIST Dataset Summary The MNIST dataset consists of 70,000 28x28 black-and-white images of handwritten digits extracted from two NIST databases. There are 60,000 images in the training dataset and 10,000 images in the validation dataset, one class per digit so a total of 10 classes, with 7,000 images (6,000 train images and 1,000 test images) per class. Half of the images were drawn by Census Bureau employees and the other half by high school… See the full description on the dataset page: https://huggingface.co/datasets/ylecun/mnist.
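The split figures in the card fit together arithmetically; a quick sanity check (the exact per-class balance stated in the card is an idealization — real MNIST classes are only approximately equal in size):

```python
# Split sizes as stated in the card.
train_total, test_total, num_classes = 60_000, 10_000, 10

per_class_train = train_total // num_classes   # images per digit in train
per_class_test = test_total // num_classes     # images per digit in test
per_class = per_class_train + per_class_test   # images per digit overall
total = train_total + test_total               # full dataset size
```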
Helsinki-NLP/opus-100
Helsinki-NLP
Dataset Card for OPUS-100 Dataset Summary OPUS-100 is an English-centric multilingual corpus covering 100 languages (including English): all training pairs include English on either the source or target side. The languages were selected based on the volume of parallel data available in OPUS. Supported Tasks and Leaderboards Translation. Languages OPUS-100… See the full description on the dataset page: https://huggingface.co/datasets/Helsinki-NLP/opus-100.
rajpurkar/squad
rajpurkar
Dataset Card for SQuAD Dataset Summary Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. SQuAD 1.1 contains 100,000+ question-answer pairs on 500+ articles. Supported Tasks and Leaderboards Question… See the full description on the dataset page: https://huggingface.co/datasets/rajpurkar/squad.
Salesforce/wikitext
Salesforce
Dataset Card for "wikitext" Dataset Summary The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike License. Compared to the preprocessed version of Penn Treebank (PTB), WikiText-2 is over 2 times larger and WikiText-103 is over 110 times larger. The WikiText dataset also features a far… See the full description on the dataset page: https://huggingface.co/datasets/Salesforce/wikitext.
EdinburghNLP/xsum
EdinburghNLP
Extreme Summarization (XSum) Dataset. There are three features: - document: Input news article. - summary: One sentence summary of the article. - id: BBC ID of the article.
facebook/multilingual_librispeech
facebook
Dataset Card for MultiLingual LibriSpeech Dataset Summary This is a streamable version of the Multilingual LibriSpeech (MLS) dataset. The data archives were restructured from the original ones from OpenSLR to make it easier to stream. MLS dataset is a large multilingual corpus suitable for speech research. The dataset is derived from read audiobooks from LibriVox and consists of 8 languages - English, German, Dutch, Spanish, French, Italian, Portuguese, Polish.… See the full description on the dataset page: https://huggingface.co/datasets/facebook/multilingual_librispeech.
codeparrot/github-code
codeparrot
The GitHub Code dataset consists of 115M code files from GitHub in 32 programming languages with 60 extensions, totalling 1 TB of text data. The dataset was created from the GitHub dataset on BigQuery.
bigcode/the-stack
bigcode
Dataset Card for The Stack Changelog Release Description v1.0 Initial release of the Stack. Included 30 programming languages and 18 permissive licenses. Note: Three included licenses (MPL/EPL/LGPL) are considered weak copyleft licenses. The resulting near-deduplicated dataset is 3TB in size. v1.1 The three copyleft licenses (MPL/EPL/LGPL) were excluded and the list of permissive licenses extended to 193 licenses in total. The list of programming… See the full description on the dataset page: https://huggingface.co/datasets/bigcode/the-stack.
lukaemon/bbh
lukaemon
BBH focuses on a suite of 23 challenging BIG-Bench tasks which we call BIG-Bench Hard (BBH). These are the tasks for which prior language model evaluations did not outperform the average human rater. We find that applying chain-of-thought (CoT) prompting to BBH tasks enables PaLM to surpass the average human-rater performance on 10 of the 23 tasks, and Codex (code-davinci-002) to surpass the average human-rater performance on 17 of the 23 tasks. Since many tasks in BBH require multi-step reasoning, few-shot prompting without CoT, as done in the BIG-Bench evaluations (Srivastava et al., 2022), substantially underestimates the best performance and capabilities of language models, which is better captured via CoT prompting. As further analysis, we explore the interaction between CoT and model scale on BBH, finding that CoT enables emergent task performance on several BBH tasks with otherwise flat scaling curves.
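The CoT vs. no-CoT comparison above comes down to how the few-shot prompt is assembled: with CoT, each exemplar's answer is preceded by its reasoning. A hypothetical minimal illustration of the two prompt formats (the exemplar content and template are made up, not BBH's actual prompts):

```python
def build_prompt(examples, question, cot=True):
    """Assemble a few-shot prompt; with cot=True each exemplar shows reasoning."""
    parts = []
    for ex in examples:
        answer = (f"{ex['reasoning']} So the answer is {ex['answer']}."
                  if cot else ex["answer"])
        parts.append(f"Q: {ex['question']}\nA: {answer}")
    parts.append(f"Q: {question}\nA:")  # target question, answer left open
    return "\n\n".join(parts)

shots = [{"question": "What is 2 + 3 * 4?",
          "reasoning": "Multiplication binds first: 3 * 4 = 12, then 2 + 12 = 14.",
          "answer": "14"}]
cot_prompt = build_prompt(shots, "What is 1 + 2 * 3?", cot=True)
plain_prompt = build_prompt(shots, "What is 1 + 2 * 3?", cot=False)
```

The only difference between the two conditions is the exemplar answers; the target question is identical.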
derek-thomas/ScienceQA
derek-thomas
Dataset Card Creation Guide Dataset Summary Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering Supported Tasks and Leaderboards Multi-modal Multiple Choice Languages English Dataset Structure Data Instances Explore more samples here. {'image': Image, 'question': 'Which of these states is farthest north?', 'choices': ['West Virginia', 'Louisiana', 'Arizona'… See the full description on the dataset page: https://huggingface.co/datasets/derek-thomas/ScienceQA.
tatsu-lab/alpaca
tatsu-lab
Dataset Card for Alpaca Dataset Summary Alpaca is a dataset of 52,000 instructions and demonstrations generated by OpenAI's text-davinci-003 engine. This instruction data can be used to conduct instruction-tuning for language models and make the language model follow instructions better. The authors built on the data generation pipeline from the Self-Instruct framework and made the following modifications: The text-davinci-003 engine to generate the instruction data… See the full description on the dataset page: https://huggingface.co/datasets/tatsu-lab/alpaca.
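Alpaca records carry instruction/input/output fields that are rendered into a fixed prompt template for fine-tuning. A sketch of that rendering, reproduced from memory of the Alpaca repository's template (verify the exact wording against the source before relying on it):

```python
def format_alpaca(record):
    """Render an Alpaca record into its fine-tuning prompt."""
    if record.get("input"):
        return ("Below is an instruction that describes a task, paired with an "
                "input that provides further context. Write a response that "
                "appropriately completes the request.\n\n"
                f"### Instruction:\n{record['instruction']}\n\n"
                f"### Input:\n{record['input']}\n\n### Response:")
    # Records without an input use a shorter preamble.
    return ("Below is an instruction that describes a task. Write a response "
            "that appropriately completes the request.\n\n"
            f"### Instruction:\n{record['instruction']}\n\n### Response:")

prompt = format_alpaca({"instruction": "Summarize the text.",
                        "input": "LLMs are large."})
```

During training, the record's output field is appended after "### Response:" as the target.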
QingyiSi/Alpaca-CoT
QingyiSi
Instruction-Finetuning Dataset Collection (Alpaca-CoT) This repository will continuously collect various instruction tuning datasets. And we standardize different datasets into the same format, which can be directly loaded by the code of Alpaca model. We also have conducted empirical study on various instruction-tuning datasets based on the Alpaca model, as shown in https://github.com/PhoebusSi/alpaca-CoT. If you think this dataset collection is helpful to you, please like… See the full description on the dataset page: https://huggingface.co/datasets/QingyiSi/Alpaca-CoT.
camel-ai/physics
camel-ai
CAMEL: Communicative Agents for “Mind” Exploration of Large Scale Language Model Society Github: https://github.com/lightaime/camel Website: https://www.camel-ai.org/ Arxiv Paper: https://arxiv.org/abs/2303.17760 Dataset Summary The Physics dataset is composed of 20K problem-solution pairs obtained using GPT-4. The problem-solution pairs were generated from 25 physics topics, with 25 subtopics for each topic and 32 problems for each (topic, subtopic) pair. We… See the full description on the dataset page: https://huggingface.co/datasets/camel-ai/physics.
MMInstruction/M3IT
MMInstruction
Multi-modal Bi-lingual Instruction Dataset for Vision Language Models
shibing624/medical
shibing624
Plain-text data: a Chinese medical dataset containing encyclopedia data for pretraining, instruction fine-tuning data, and reward-model data.
timdettmers/openassistant-guanaco
timdettmers
This dataset is a subset of the Open Assistant dataset, which you can find here: https://huggingface.co/datasets/OpenAssistant/oasst1/tree/main This subset of the data only contains the highest-rated paths in the conversation tree, with a total of 9,846 samples. This dataset was used to train Guanaco with QLoRA. For further information, please see the original dataset. License: Apache 2.0
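Selecting "the highest-rated paths in the conversation tree" can be pictured as a greedy descent: at each turn, follow the best-scored reply. A hypothetical sketch (the real OASST export uses its own schema and ranking fields; the `score` key here, where higher is better, is an assumption for illustration):

```python
def best_path(node):
    """Greedily follow the highest-scored reply at each turn of the tree."""
    path = [node["text"]]
    while node.get("replies"):
        node = max(node["replies"], key=lambda r: r.get("score", 0))
        path.append(node["text"])
    return path

tree = {"text": "Hi, can you help me?",
        "replies": [
            {"text": "Sure, what do you need?", "score": 2, "replies": []},
            {"text": "No.", "score": 1, "replies": []},
        ]}
path = best_path(tree)
```

Applied over all trees, this yields one linear conversation per tree, which matches the subset's flat sample format.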
Open-Orca/OpenOrca
Open-Orca
🐋 The OpenOrca Dataset! 🐋 We are thrilled to announce the release of the OpenOrca dataset! This rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the Orca paper. It has been instrumental in generating high-performing model checkpoints and serves as a valuable resource for all NLP researchers and developers! Official Models Mistral-7B-OpenOrca Our latest model, the first 7B to score better overall than all… See the full description on the dataset page: https://huggingface.co/datasets/Open-Orca/OpenOrca.
iamtarun/python_code_instructions_18k_alpaca
iamtarun
Dataset Card for python_code_instructions_18k_alpaca The dataset contains problem descriptions and corresponding Python code. This dataset is taken from sahil2801/code_instructions_120k, which adds a prompt column in Alpaca style. Refer to the source here.
princeton-nlp/SWE-bench
princeton-nlp
Dataset Summary SWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python repositories. Evaluation is performed by unit test verification using post-PR behavior as the reference solution. The dataset was released as part of SWE-bench: Can Language Models Resolve Real-World GitHub Issues? Want to run inference now? This dataset only contains the… See the full description on the dataset page: https://huggingface.co/datasets/princeton-nlp/SWE-bench.
teknium/OpenHermes-2.5
teknium
Dataset Card for Dataset Name This is the dataset that made the OpenHermes 2.5 and Nous Hermes 2 series of models. Support me on GitHub sponsors <3 : https://github.com/sponsors/teknium1 Dataset Details Dataset Description The Open Hermes 2/2.5 and Nous Hermes 2 models have made significant advancements in SOTA LLMs over recent months, and are underpinned by this exact compilation and curation of many open source datasets and custom created synthetic… See the full description on the dataset page: https://huggingface.co/datasets/teknium/OpenHermes-2.5.
b3x0m/Chinese-H-Novels
b3x0m
Update 12/07/2024: converted to Parquet for easier downloading. Chinese 18+ novels corpus; use at your own risk — you and only you are responsible for every choice you make. (͡ ° ͜ʖ ͡ °) tags: socks, garter belt, foot fetish, ntr, netori..... Thanks Moleys/Numeron for the dataset donation.
ed001/ds-coder-instruct-v2
ed001
Dataset Card for DS Coder Instruct v2 Dataset Changes from v1: Added WizardLM evol data science samples Removed R samples from v2 DS Coder is a dataset for instruction fine-tuning of language models. It is a specialized dataset focusing only on data science (e.g. plotting, data wrangling, machine learning models, deep learning, and numerical computations). The dataset contains code examples in Python (R samples were removed in v2). The goal of this dataset is to enable… See the full description on the dataset page: https://huggingface.co/datasets/ed001/ds-coder-instruct-v2.
CohereForAI/aya_collection_language_split
CohereForAI
This is a re-upload of the aya_collection, and only differs in the structure of upload. While the original aya_collection is structured by folders split according to dataset name, this dataset is split by language. We recommend you use this version of the dataset if you are only interested in downloading all of the Aya collection for a single or smaller set of languages. Dataset Summary The Aya Collection is a massive multilingual collection consisting of 513 million instances… See the full description on the dataset page: https://huggingface.co/datasets/CohereForAI/aya_collection_language_split.
osunlp/Multimodal-Mind2Web
osunlp
Dataset Summary Multimodal-Mind2Web is the multimodal version of Mind2Web, a dataset for developing and evaluating generalist agents for the web that can follow language instructions to complete complex tasks on any website. In this dataset, we align each HTML document in the dataset with its corresponding webpage screenshot image from the Mind2Web raw dump. This multimodal version addresses the inconvenience of loading images from the ~300GB Mind2Web Raw Dump.… See the full description on the dataset page: https://huggingface.co/datasets/osunlp/Multimodal-Mind2Web.
kreimben/leetcode_with_youtube_captions
kreimben
Removed [Music] and repeated content.
mozilla-foundation/common_voice_17_0
mozilla-foundation
Dataset Card for Common Voice Corpus 17.0 Dataset Summary The Common Voice dataset consists of a unique MP3 and corresponding text file. Many of the 31175 recorded hours in the dataset also include demographic metadata like age, sex, and accent that can help improve the accuracy of speech recognition engines. The dataset currently consists of 20408 validated hours in 124 languages, but more voices and languages are always added. Take a look at the Languages… See the full description on the dataset page: https://huggingface.co/datasets/mozilla-foundation/common_voice_17_0.
LooksJuicy/ruozhiba
LooksJuicy
Inspired by COIG-CQIA, this is a similar dataset, but with a comparatively more concise answer style. The curated Ruozhiba questions come from interrogative sentences provided on GitHub; GPT-4 was called to obtain answers, and replies that were obvious refusals were filtered out.
Real-IAD/Real-IAD
Real-IAD
Website: https://realiad4ad.github.io/Real-IAD/ Real-IAD is released for research purposes only. If your Hugging Face account is registered with your college email address, we will approve your access request directly; otherwise, please send an email to [email protected] from your affiliation email address. Thank you for your understanding and cooperation. A recommended application email format is: I am writing to request access to Real-IAD for research purposes. Name: [Your Full Name] Affiliation:… See the full description on the dataset page: https://huggingface.co/datasets/Real-IAD/Real-IAD.
sentence-transformers/all-nli
sentence-transformers
Dataset Card for AllNLI This dataset is a concatenation of the SNLI and MultiNLI datasets. Despite originally being intended for Natural Language Inference (NLI), this dataset can be used for training/finetuning an embedding model for semantic textual similarity. Dataset Subsets pair-class subset Columns: "premise", "hypothesis", "label" Column types: str, str, class with {"0": "entailment", "1": "neutral", "2": "contradiction"} Examples:{… See the full description on the dataset page: https://huggingface.co/datasets/sentence-transformers/all-nli.
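The pair-class subset stores labels as class ids; converting between ids and names is a small mapping, following the id-to-name assignment given in the card:

```python
# Label mapping as described in the AllNLI card.
NLI_LABELS = {0: "entailment", 1: "neutral", 2: "contradiction"}
NLI_IDS = {name: i for i, name in NLI_LABELS.items()}  # inverse lookup

def label_name(label_id):
    """Map a pair-class label id to its string name."""
    return NLI_LABELS[label_id]

name = label_name(2)
```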
hw-liang/Diffusion4D
hw-liang
Diffusion4D: Fast Spatial-temporal Consistent 4D Generation via Video Diffusion Models [Project Page] | [Arxiv] | [Code] News 2024.6.28: Released rendered data from curated objaverse-xl. 2024.6.4: Released rendered data from curated objaverse-1.0, including orbital videos of dynamic 3D, orbital videos of static 3D, and monocular videos from front view. 2024.5.27: Released metadata for objects! Overview We collect a large-scale, high-quality… See the full description on the dataset page: https://huggingface.co/datasets/hw-liang/Diffusion4D.
UCSC-VLAA/Recap-DataComp-1B
UCSC-VLAA
Dataset Card for Recap-DataComp-1B Recap-DataComp-1B is a large-scale image-text dataset that has been recaptioned using an advanced LLaVA-1.5-LLaMA3-8B model to enhance the alignment and detail of textual descriptions. Dataset Details Dataset Description Our paper aims to bridge this community effort, leveraging the powerful and open-sourced LLaMA-3, a GPT-4 level LLM. Our recaptioning pipeline is simple: first, we fine-tune a LLaMA-3-8B powered… See the full description on the dataset page: https://huggingface.co/datasets/UCSC-VLAA/Recap-DataComp-1B.
mdwiratathya/ROCO-radiology
mdwiratathya
Dataset Summary The "ROCO-radiology" dataset is derived from the Radiology Objects in COntext (ROCO) dataset, a large-scale medical and multimodal imaging collection. The language used is primarily English, and it covers the domain of medical imaging, specifically radiology. Our only modification was to select the radiology subset and convert the images into PIL objects. For further details and citation, please refer to the original authors.
OpenGVLab/GUI-Odyssey
OpenGVLab
Dataset Card for GUI Odyssey Repository: https://github.com/OpenGVLab/GUI-Odyssey Paper: https://arxiv.org/abs/2406.08451 Point of Contact: Wenqi Shao Introduction GUI Odyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. GUI Odyssey consists of 7,735 episodes from 6 mobile devices, spanning 6 types of cross-app tasks, 201 apps, and 1.4K app combos. Data Structure Data Fields Each field of… See the full description on the dataset page: https://huggingface.co/datasets/OpenGVLab/GUI-Odyssey.
CaptionEmporium/coyo-hd-11m-llavanext
CaptionEmporium
Dataset Card for coyo-hd-11m-llavanext Dataset Summary This is a dataset of 22,794,288 synthetic captions for 11,397,144 images from coyo-700m. The "hd" in the title refers to two aspects: high density and high definition. While large alt-text image pair datasets have many images, only a very small proportion of these images are in higher resolutions and have substantial concept density. For example, many of these datasets consist of more than 50% thumbnail sized or… See the full description on the dataset page: https://huggingface.co/datasets/CaptionEmporium/coyo-hd-11m-llavanext.
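Filtering for "high definition" in practice means dropping thumbnail-sized images by resolution. A hypothetical threshold filter (the cutoff values are illustrative, not the dataset's actual criteria):

```python
def is_high_definition(width, height, min_side=512, min_pixels=512 * 512):
    """Keep images whose shorter side and total pixel count clear thresholds.
    Thresholds are illustrative assumptions, not the dataset's real cutoffs."""
    return min(width, height) >= min_side and width * height >= min_pixels

keep = is_high_definition(1024, 768)   # typical photo resolution
drop = is_high_definition(256, 256)    # thumbnail-sized
```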
arcee-ai/The-Tome
arcee-ai
The Tome is a curated dataset designed for training large language models with a focus on instruction following. It was used in the training of our Arcee-Nova/Spark models, which was later merged with Qwen2-72B-Instruct (or 7B with the Spark model). Dataset Composition Total Samples: 1.75 million Source: Compiled from 9 publicly available datasets The Tome is comprised of the following datasets: arcee-ai/infini-instruct-top-500k (BAAI/Infinity-Instruct)… See the full description on the dataset page: https://huggingface.co/datasets/arcee-ai/The-Tome.
mlabonne/FineTome-100k
mlabonne
FineTome-100k The FineTome dataset is a subset of arcee-ai/The-Tome (without arcee-ai/qwen2-72b-magpie-en), re-filtered using HuggingFaceFW/fineweb-edu-classifier. It was made for my article "Fine-tune Llama 3.1 Ultra-Efficiently with Unsloth".
THUDM/LongWriter-6k
THUDM
LongWriter-6k 🤗 [LongWriter Dataset] • 💻 [Github Repo] • 📃 [LongWriter Paper] The LongWriter-6k dataset contains 6,000 SFT samples with ultra-long outputs ranging from 2k to 32k words in length (both English and Chinese). The data can support training LLMs to extend their maximum output window size to 10,000+ words. All Models We open-sourced the following list of models trained on LongWriter-6k: Model Huggingface Repo Description LongWriter-glm4-9b 🤗… See the full description on the dataset page: https://huggingface.co/datasets/THUDM/LongWriter-6k.
lmarena-ai/PPE-Human-Preference-V1
lmarena-ai
Overview This contains the human preference evaluation set for Preference Proxy Evaluations. This dataset is meant for benchmarking and evaluation, not for training. Paper Code License User prompts are licensed under CC-BY-4.0, and model outputs are governed by the terms of use set by the respective model providers. Citation @misc{frick2024evaluaterewardmodelsrlhf, title={How to Evaluate Reward Models for RLHF}, author={Evan Frick and… See the full description on the dataset page: https://huggingface.co/datasets/lmarena-ai/PPE-Human-Preference-V1.
lightonai/fc-amf-ocr
lightonai
Dataset Card for Finance Commons AMF OCR dataset (FC-AMF-OCR) Dataset Summary The FC-AMF-OCR dataset is a comprehensive document collection derived from the AMF-PDF dataset, which is part of the Finance Commons collection. This extensive dataset comprises 9.3 million images, each processed through Optical Character Recognition (OCR) using the docTR library. While native text annotations are available in the AMF-Text dataset, these annotations suffer from imperfections and… See the full description on the dataset page: https://huggingface.co/datasets/lightonai/fc-amf-ocr.
jackyhate/text-to-image-2M
jackyhate
text-to-image-2M: A High-Quality, Diverse Text-to-Image Training Dataset Overview text-to-image-2M is a curated text-image pair dataset designed for fine-tuning text-to-image models. The dataset consists of approximately 2 million samples, carefully selected and enhanced to meet the high demands of text-to-image model training. The motivation behind creating this dataset stems from the observation that datasets with over 1 million samples tend to produce better… See the full description on the dataset page: https://huggingface.co/datasets/jackyhate/text-to-image-2M.
OpenFace-CQUPT/HumanCaption-10M
OpenFace-CQUPT
HumanCaption-10M HumanCaption-10M: a large, diverse, high-quality dataset of human-related images with natural language descriptions (image to text). The dataset is designed to facilitate research on human-centered tasks. HumanCaption-10M contains approximately 10 million human-related images and their corresponding facial features in natural language descriptions and is the second generation version of FaceCaption-15M Illustrations Pipelines of constructing… See the full description on the dataset page: https://huggingface.co/datasets/OpenFace-CQUPT/HumanCaption-10M.
facebook/Self-taught-evaluator-DPO-data
facebook
This dataset is released as part of Self-taught evaluators research project. Please refer to our project materials here for training and evaluation details. Loading the dataset with transformers This dataset is built upon WildChat prompts by using Llama-3.1-70B-Instruct to generate responses and evaluation plans. Details on how to build such a self-taught dataset can be found in Self-taught evaluators. Minimal example below showing how to prepare training data. from datasets… See the full description on the dataset page: https://huggingface.co/datasets/facebook/Self-taught-evaluator-DPO-data.
SylvanL/Traditional-Chinese-Medicine-Dataset-Pretrain
SylvanL
启古纳今，厚德精术 (Drawing on the ancient to inform the present; profound virtue, refined skill) Data Introduction High-Quality Traditional Chinese Medicine Dataset from Non-Internet Sources - Pretraining. This dataset was carefully constructed with a substantial investment of labor and resources, with the mission of jointly building a high-quality Chinese LLM community. It contains about 1GB of high-quality content spanning TCM clinical cases, classical works by renowned physicians, medical encyclopedias, and term definitions, with comprehensive coverage and balanced proportions. The dataset consists mainly of internal data from non-internet sources, 99% of which is Simplified Chinese, with excellent content quality and considerable information density. Note: this dataset is intended only for pretraining or continued pretraining; for the SFT/IFT QA dataset, see SylvanL/Traditional-Chinese-Medicine-Dataset-SFT… See the full description on the dataset page: https://huggingface.co/datasets/SylvanL/Traditional-Chinese-Medicine-Dataset-Pretrain.
PKU-Alignment/align-anything-400k
PKU-Alignment
Overview: Align-Anything 400K A Comprehensive All-Modality Alignment Dataset with Fine-grained Preference Annotations and Language Feedback. 🏠 Homepage | 🤗 Align-Anything-400K Dataset Our world is inherently multimodal. Humans perceive the world through multiple senses, and language models should operate similarly. However, the development of current multi-modality foundation models is limited by the availability and diversity of data across different modalities.… See the full description on the dataset page: https://huggingface.co/datasets/PKU-Alignment/align-anything-400k.
MLBtrio/genz-slang-dataset
MLBtrio
Dataset Details This dataset contains a rich collection of popular slang terms and acronyms used primarily by Generation Z. It includes detailed descriptions of each term, its context of use, and practical examples that demonstrate how the slang is used in real-life conversations. The dataset is designed to capture the unique and evolving language patterns of GenZ, reflecting their communication style in digital spaces such as social media, text messaging, and online forums.… See the full description on the dataset page: https://huggingface.co/datasets/MLBtrio/genz-slang-dataset.
MMIE/MMIE
MMIE
MMIE: Massive Multimodal Interleaved Comprehension Benchmark for Large Vision-Language Models [📖 Project] [📄 Paper] [💻 Code] [📝 Dataset] [🤖 Evaluation Model] [🏆 Leaderboard] [🌟 Overview] [🔧 Dataset Details] [🚩 Citation] 🌟 Overview We present MMIE, a Massive Multimodal Interleaved understanding Evaluation benchmark, designed for Large Vision-Language Models (LVLMs). MMIE offers a robust framework for evaluating the interleaved comprehension and… See the full description on the dataset page: https://huggingface.co/datasets/MMIE/MMIE.
rombodawg/Everything_Instruct
rombodawg
Everything you need... all in one place 💘 Everything Instruct is a massive Alpaca-instruct-formatted dataset covering a wide variety of topics, meant to bring open-source LLMs to the next level. Note: This dataset is fully uncensored (no model trained on this dataset will refuse any request unless otherwise aligned) The data in this dataset features: Science: 12,580 rows Social media: 18,405 rows General Knowledge: 906,346 rows Cooking: 20,763 rows Writing: 414,646 rows Medicine:… See the full description on the dataset page: https://huggingface.co/datasets/rombodawg/Everything_Instruct.
NerdyRodent/NR-Flux-ComfyUI-Workflows
NerdyRodent
Overview A collection of various workflows for using Flux.1 in ComfyUI. Workflows will require custom nodes, best installed with ComfyUI Manager - https://github.com/ltdrdata/ComfyUI-Manager If you need to manually install (rather than via the usual "install missing nodes"), start by installing these: Impact Pack Extra Models for ComfyUI ComfyUI Extra Samplers ComfyUI-GGUF ComfyUI_SUNoise Comfyroll Studio ComfyUI-ppm stability-ComfyUI-nodes rgthree's ComfyUI Nodes Use… See the full description on the dataset page: https://huggingface.co/datasets/NerdyRodent/NR-Flux-ComfyUI-Workflows.
amazon/AmazonQAC
amazon
AmazonQAC: A Large-Scale, Naturalistic Query Autocomplete Dataset Train Dataset Size: 395 million samples · Test Dataset Size: 20k samples · Source: Amazon Search Logs · File Format: Parquet · Compression: Snappy Dataset Summary AmazonQAC is a large-scale dataset designed for Query Autocomplete (QAC) tasks, sourced from real-world Amazon Search logs. It provides anonymized sequences of user-typed prefixes leading to final search terms, along with rich session metadata such as… See the full description on the dataset page: https://huggingface.co/datasets/amazon/AmazonQAC.
Virtue-AI-HUB/SecCodePLT
Virtue-AI-HUB
SecCodePLT SecCodePLT is a unified and comprehensive evaluation platform for code GenAIs' risks. 1. Dataset Details 1.1 Dataset Description Language(s) (NLP): English License: MIT 1.2 Dataset Sources Repository: Coming soon Paper: https://arxiv.org/pdf/2410.11096 Demo: https://seccodeplt.github.io/ 2. Uses 2.1 Direct Use This dataset can be used to evaluate the risks of large language… See the full description on the dataset page: https://huggingface.co/datasets/Virtue-AI-HUB/SecCodePLT.
thejaminator/introspection_self_predict
thejaminator
We outline the data used in the paper Looking Inward: Language Models Can Learn About Themselves by Introspection. Dataset jsonl format For convenience, a pydantic model is provided that shows the schema of the jsonl files. from pydantic import BaseModel class DataRow(BaseModel): # The original question from the dataset e.g. mmlu original_question: str original_dataset: str object_level_prompt: str hypothetical_prompt: str # e.g. first_word… See the full description on the dataset page: https://huggingface.co/datasets/thejaminator/introspection_self_predict.
meta-ai-for-media-research/movie_gen_video_bench
meta-ai-for-media-research
Dataset Card for the Movie Gen Benchmark Movie Gen is a cast of foundation models that generates high-quality, 1080p HD videos with different aspect ratios and synchronized audio. Here, we introduce our evaluation benchmark "Movie Gen Bench Video Bench", as detailed in the Movie Gen technical report (Section 3.5.2). To enable fair and easy comparison to Movie Gen for future works on these evaluation benchmarks, we additionally release the non cherry-picked generated videos from… See the full description on the dataset page: https://huggingface.co/datasets/meta-ai-for-media-research/movie_gen_video_bench.
minchyeom/thinker
minchyeom
A Chain-of-Thought (CoT) dataset that contains traces of complex and sophisticated reasoning, to mimic the "thinking" process of OpenAI's o1. Wrap the contents of the reasoning column in some XML tag (such as <reasoning>). Raw .jsonl dataset file can be found under the Files and Versions tab.
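A minimal sketch of the suggested wrapping step; the exact output layout (tag on its own lines, answer appended after) is one plausible choice, not a format prescribed by the dataset.

```python
def wrap_reasoning(reasoning: str, answer: str, tag: str = "reasoning") -> str:
    """Wrap the reasoning trace in an XML-style tag, followed by the answer."""
    return f"<{tag}>\n{reasoning}\n</{tag}>\n{answer}"

text = wrap_reasoning("First rule out the trivial cases.", "The answer is 42.")
```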
SoftMINER-Group/CulturalKaleidoscope
SoftMINER-Group
This is a gated repository. Please provide your details for the dataset.
TeaPearce/CounterStrike_Deathmatch
TeaPearce
[NOTICE] This dataset is currently in transition from the OneDrive previously hosted here: https://github.com/TeaPearce/Counter-Strike_Behavioural_Cloning?tab=readme-ov-file#datasets Dataset presented in: Counter-Strike Deathmatch with Large-Scale Behavioural Cloning, IEEE Conference on Games (CoG) 2022 [Best Paper Award], https://arxiv.org/abs/2104.04258, Tim Pearce, Jun Zhu. Other works used in: Imitating Human Behaviour with Diffusion Models, ICLR 2023, https://arxiv.org/abs/2301.10677, Tim Pearce… See the full description on the dataset page: https://huggingface.co/datasets/TeaPearce/CounterStrike_Deathmatch.
osunlp/ScienceAgentBench
osunlp
ScienceAgentBench The advancements of large language models (LLMs) have piqued growing interest in developing LLM-based language agents to automate scientific discovery end-to-end, which has sparked both excitement and skepticism about their true capabilities. In this work, we call for rigorous assessment of agents on individual tasks in a scientific workflow before making bold claims on end-to-end automation. To this end, we present ScienceAgentBench, a new benchmark for… See the full description on the dataset page: https://huggingface.co/datasets/osunlp/ScienceAgentBench.
OpenFace-CQUPT/HumanCaption-HQ-311K
OpenFace-CQUPT
HumanCaption-HQ-311K HumanCaption-HQ-311K: Approximately 311,000 human-related images and their corresponding natural language descriptions. Compared to HumanCaption-10M, this dataset not only includes associated facial language descriptions but also filters out images with higher resolution and employs the powerful visual understanding capabilities of GPT-4V to generate more detailed and accurate text descriptions. This dataset is used for the second phase of training… See the full description on the dataset page: https://huggingface.co/datasets/OpenFace-CQUPT/HumanCaption-HQ-311K.
wpei/MTU-Bench
wpei
Related Paper For more information on the methodology and findings related to this dataset, please refer to the following paper: MTU-Bench: A Multi-granularity Tool-Use Benchmark for Large Language Models
pranked03/ViDAS
pranked03
ViDAS Dataset Abstract We present a novel dataset aimed at advancing danger analysis and assessment by addressing the challenge of quantifying danger in video content and measuring how human-like a Large Language Model (LLM) evaluator is on the same task. This is achieved by compiling a collection of 100 YouTube videos featuring various events. Each video is annotated by human participants who provided danger ratings on a scale from 0 (no danger to humans) to 10… See the full description on the dataset page: https://huggingface.co/datasets/pranked03/ViDAS.
khaled123/Tunisian_Dialectic_English_Derja
khaled123
Tunisian-English Dialectic Derja Dataset Overview This dataset is a rich and extensive collection of Tunisian dialectic (Derja) and English translations from various sources, updated as of October 2024. It includes synthetic translations, instructional data, media transcripts, social media content, and more. Dataset Structure The dataset is composed of JSON files, each containing a list of dictionaries with a text field. The data includes… See the full description on the dataset page: https://huggingface.co/datasets/khaled123/Tunisian_Dialectic_English_Derja.
Rapidata/flux1.1-likert-scale-preference
Rapidata
Flux1.1 Likert Scale Text-to-Image Alignment Evaluation This dataset contains images generated using Flux1.1 [pro] based on the prompts from our text-to-image generation benchmark. Where the benchmark generally focuses on pairwise comparisons to rank different image generation models against each other, this Likert-scale dataset focuses on one particular model and aims to reveal its particular nuances and highlight the strong and weak points of the model. Dataset… See the full description on the dataset page: https://huggingface.co/datasets/Rapidata/flux1.1-likert-scale-preference.
ajibawa-2023/Software-Architecture
ajibawa-2023
Software-Architecture I am releasing a large dataset covering topics related to Software Architecture. This dataset consists of around 450,000 lines of data in jsonl. I have included the following topics: Architectural Frameworks, Architectural Patterns for Reliability, Architectural Patterns for Scalability, Architectural Patterns, Architectural Quality Attributes, Architectural Testing, Architectural Views, Architectural Decision-Making, Advanced Research, Cloud-Based Architectures, Component-Based… See the full description on the dataset page: https://huggingface.co/datasets/ajibawa-2023/Software-Architecture.
tblard/allocine
tblard
Dataset Card for Allociné Dataset Summary The Allociné dataset is a French-language dataset for sentiment analysis. The texts are movie reviews written between 2006 and 2020 by members of the Allociné.fr community for various films. It contains 100k positive and 100k negative reviews divided into train (160k), validation (20k), and test (20k). Supported Tasks and Leaderboards text-classification, sentiment-classification: The dataset can be used… See the full description on the dataset page: https://huggingface.co/datasets/tblard/allocine.
arxiv-community/arxiv_dataset
arxiv-community
A dataset of 1.7 million arXiv articles for applications like trend analysis, paper recommender engines, category prediction, co-citation networks, knowledge graph construction and semantic search interfaces.
legacy-datasets/banking77
legacy-datasets
Dataset Card for BANKING77 Dataset Summary Deprecated: Dataset "banking77" is deprecated and will be deleted. Use "PolyAI/banking77" instead. Dataset composed of online banking queries annotated with their corresponding intents. BANKING77 dataset provides a very fine-grained set of intents in a banking domain. It comprises 13,083 customer service queries labeled with 77 intents. It focuses on fine-grained single-domain intent detection. Supported… See the full description on the dataset page: https://huggingface.co/datasets/legacy-datasets/banking77.
bookcorpus/bookcorpus
bookcorpus
Books are a rich source of both fine-grained information (what a character, an object, or a scene looks like) and high-level semantics (what someone is thinking or feeling, and how these states evolve through a story). This work aims to align books to their movie releases in order to provide rich descriptive explanations for visual content that go semantically far beyond the captions available in current datasets.
legacy-datasets/c4
legacy-datasets
A colossal, cleaned version of Common Crawl's web crawl corpus. Based on Common Crawl dataset: "https://commoncrawl.org". This is the processed version of Google's C4 dataset by AllenAI.
china-ai-law-challenge/cail2018
china-ai-law-challenge
Dataset Card for CAIL 2018 Dataset Summary [More Information Needed] Supported Tasks and Leaderboards [More Information Needed] Languages [More Information Needed] Dataset Structure Data Instances [More Information Needed] Data Fields [More Information Needed] Data Splits [More Information Needed] Dataset Creation Curation Rationale [More… See the full description on the dataset page: https://huggingface.co/datasets/china-ai-law-challenge/cail2018.
abisee/cnn_dailymail
abisee
Dataset Card for CNN Dailymail Dataset Dataset Summary The CNN / DailyMail Dataset is an English-language dataset containing just over 300k unique news articles as written by journalists at CNN and the Daily Mail. The current version supports both extractive and abstractive summarization, though the original version was created for machine reading and comprehension and abstractive question answering. Supported Tasks and Leaderboards… See the full description on the dataset page: https://huggingface.co/datasets/abisee/cnn_dailymail.
google/code_x_glue_cc_defect_detection
google
Dataset Card for "code_x_glue_cc_defect_detection" Dataset Summary CodeXGLUE Defect-detection dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Defect-detection Given a source code, the task is to identify whether it is an insecure code that may attack software systems, such as resource leaks, use-after-free vulnerabilities and DoS attack. We treat the task as binary classification (0/1), where 1 stands for insecure code and 0 for… See the full description on the dataset page: https://huggingface.co/datasets/google/code_x_glue_cc_defect_detection.
tau/commonsense_qa
tau
Dataset Card for "commonsense_qa" Dataset Summary CommonsenseQA is a new multiple-choice question answering dataset that requires different types of commonsense knowledge to predict the correct answers . It contains 12,102 questions with one correct answer and four distractor answers. The dataset is provided in two major training/validation/testing set splits: "Random split" which is the main evaluation split, and "Question token split", see paper for details.… See the full description on the dataset page: https://huggingface.co/datasets/tau/commonsense_qa.
hendrycks/competition_math
hendrycks
The Mathematics Aptitude Test of Heuristics (MATH) dataset consists of problems from mathematics competitions, including the AMC 10, AMC 12, AIME, and more. Each problem in MATH has a full step-by-step solution, which can be used to teach models to generate answer derivations and explanations.
facebook/empathetic_dialogues
facebook
PyTorch original implementation of Towards Empathetic Open-domain Conversation Models: a New Benchmark and Dataset
nyu-mll/glue
nyu-mll
Dataset Card for GLUE Dataset Summary GLUE, the General Language Understanding Evaluation benchmark (https://gluebenchmark.com/) is a collection of resources for training, evaluating, and analyzing natural language understanding systems. Supported Tasks and Leaderboards The leaderboard for the GLUE benchmark can be found at this address. It comprises the following tasks: ax A manually-curated evaluation dataset for fine-grained… See the full description on the dataset page: https://huggingface.co/datasets/nyu-mll/glue.
google-research-datasets/go_emotions
google-research-datasets
Dataset Card for GoEmotions Dataset Summary The GoEmotions dataset contains 58k carefully curated Reddit comments labeled for 27 emotion categories or Neutral. The raw data is included as well as the smaller, simplified version of the dataset with predefined train/val/test splits. Supported Tasks and Leaderboards This dataset is intended for multi-class, multi-label emotion classification. Languages The data is in English.… See the full description on the dataset page: https://huggingface.co/datasets/google-research-datasets/go_emotions.
Rowan/hellaswag
Rowan
HellaSwag: Can a Machine Really Finish Your Sentence? is a new dataset for commonsense NLI. A paper was published at ACL 2019.
UCSD26/medical_dialog
UCSD26
The MedDialog dataset (English) contains conversations (in English) between doctors and patients. It has 0.26 million dialogues. The data is continuously growing and more dialogues will be added. The raw dialogues are from healthcaremagic.com and icliniq.com. All copyrights of the data belong to healthcaremagic.com and icliniq.com.
nyu-mll/multi_nli
nyu-mll
Dataset Card for Multi-Genre Natural Language Inference (MultiNLI) Dataset Summary The Multi-Genre Natural Language Inference (MultiNLI) corpus is a crowd-sourced collection of 433k sentence pairs annotated with textual entailment information. The corpus is modeled on the SNLI corpus, but differs in that it covers a range of genres of spoken and written text, and supports a distinctive cross-genre generalization evaluation. The corpus served as the basis for the… See the full description on the dataset page: https://huggingface.co/datasets/nyu-mll/multi_nli.
Helsinki-NLP/open_subtitles
Helsinki-NLP
This is a new collection of translated movie subtitles from http://www.opensubtitles.org/. IMPORTANT: If you use the OpenSubtitle corpus: Please, add a link to http://www.opensubtitles.org/ to your website and to your reports and publications produced with the data! This is a slightly cleaner version of the subtitle collection using improved sentence alignment and better language checking. 62 languages, 1,782 bitexts; total number of files: 3,735,070; total number of tokens: 22.10G; total number of sentence fragments: 3.35G
qiaojin/PubMedQA
qiaojin
Dataset Card for [Dataset Name] Dataset Summary The task of PubMedQA is to answer research questions with yes/no/maybe (e.g.: Do preoperative statins reduce atrial fibrillation after coronary artery bypass grafting?) using the corresponding abstracts. Supported Tasks and Leaderboards The official leaderboard is available at: https://pubmedqa.github.io/. 500 questions in the pqa_labeled are used as the test set. They can be found at… See the full description on the dataset page: https://huggingface.co/datasets/qiaojin/PubMedQA.
ehovy/race
ehovy
Dataset Card for "race" Dataset Summary RACE is a large-scale reading comprehension dataset with more than 28,000 passages and nearly 100,000 questions. The dataset is collected from English examinations in China, which are designed for middle school and high school students. The dataset can be served as the training and test sets for machine comprehension. Supported Tasks and Leaderboards More Information Needed Languages More… See the full description on the dataset page: https://huggingface.co/datasets/ehovy/race.
Samsung/samsum
Samsung
SAMSum Corpus contains over 16k chat dialogues with manually annotated summaries. There are three features: - dialogue: text of the dialogue. - summary: human-written summary of the dialogue. - id: id of an example.
karpathy/tiny_shakespeare
karpathy
40,000 lines of Shakespeare from a variety of Shakespeare's plays. Featured in Andrej Karpathy's blog post 'The Unreasonable Effectiveness of Recurrent Neural Networks': http://karpathy.github.io/2015/05/21/rnn-effectiveness/. To use for e.g. character modelling: ``` import tensorflow as tf import tensorflow_datasets as tfds d = tfds.load(name='tiny_shakespeare')['train'] d = d.map(lambda x: tf.strings.unicode_split(x['text'], 'UTF-8')) # train split includes vocabulary for other splits vocabulary = sorted(set(next(iter(d)).numpy())) d = d.map(lambda x: {'cur_char': x[:-1], 'next_char': x[1:]}) d = d.unbatch() seq_len = 100 batch_size = 2 d = d.batch(seq_len) d = d.batch(batch_size) ```