ymoslem/MediaSpeech
ymoslem
MediaSpeech MediaSpeech is a dataset of Arabic, French, Spanish, and Turkish media speech built to test the performance of Automatic Speech Recognition (ASR) systems. The dataset contains 10 hours of speech for each language provided. It consists of short speech segments automatically extracted from media videos available on YouTube and manually transcribed, with some pre-processing and post-processing. Baseline models and a WAV version of the dataset can be… See the full description on the dataset page: https://huggingface.co/datasets/ymoslem/MediaSpeech.
victor/hf-spaces-with-descriptions
victor
HF Spaces with Descriptions A collection of Hugging Face Spaces with AI generated descriptions (using Mixtral).
TIGER-Lab/MATH-plus
TIGER-Lab
This dataset combines MetaMath, MATH-orca, and additional MATH data augmented with GPT-4. It is used to train the MAmmoTH2-plus models (https://tiger-ai-lab.github.io/MAmmoTH2/).
sujet-ai/Sujet-Finance-Instruct-177k
sujet-ai
Sujet Finance Dataset Overview The Sujet Finance dataset is a comprehensive collection designed for fine-tuning large language models (LLMs) for specialized tasks in the financial sector. It amalgamates data from 18 distinct datasets hosted on HuggingFace, resulting in a rich repository of 177,597 entries. These entries span seven key financial LLM tasks, making Sujet Finance a versatile tool for developing and enhancing financial applications of AI.… See the full description on the dataset page: https://huggingface.co/datasets/sujet-ai/Sujet-Finance-Instruct-177k.
openbmb/UltraInteract_sft
openbmb
Introduction 📜 Paper 🤗 Eurus Collection 🤗 UltraInteract SFT Preference Learning GitHub Repo UltraInteract is a large-scale, high-quality alignment dataset specifically designed for complex reasoning tasks. For each instruction, it includes a preference tree consisting of (1) reasoning chains with diverse planning strategies in a unified format (2) multi-turn interaction trajectories with the environment and the critique (3) pairwise data to facilitate preference learning… See the full description on the dataset page: https://huggingface.co/datasets/openbmb/UltraInteract_sft.
heegyu/ko-openchat-0404-test
heegyu
To train a Korean chatbot, several datasets were collected and unified into a single format (the first 10,000 items were extracted from each dataset): heegyu/glaive-function-calling-v2-ko: 15170 items heegyu/PKU-SafeRLHF-ko: 135213 items maywell/koVast: 684579 items MarkrAI/KoCommercial-Dataset: 175454 items HuggingFaceH4/ultrachat_200k: 207865 items Open-Orca/SlimOrca-Dedup: 363491 items glaiveai/glaive-code-assistant-v2: 215166 items
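The unification step described above can be sketched as follows; this is a minimal illustration, and the source-record field names (`instruction`, `output`) and the unified chat format are assumptions, not the dataset's actual schema:

```python
from itertools import islice

def to_unified(example):
    # Hypothetical mapping: convert one source-specific record into a
    # shared chat format with a list of role/content messages.
    return {
        "conversations": [
            {"role": "user", "content": example["instruction"]},
            {"role": "assistant", "content": example["output"]},
        ]
    }

def take_first_n(records, n=10_000):
    # The card states the first 10k items were extracted from each dataset.
    return [to_unified(r) for r in islice(records, n)]

sample = [{"instruction": "안녕?", "output": "안녕하세요!"}] * 3
unified = take_first_n(sample, n=2)
```

In practice each source dataset would need its own `to_unified` mapping, since the seven listed datasets use different schemas.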
kuotient/orca-math-korean-dpo-pairs
kuotient
axolotl does not take a revision arg as an option and I'm lazy, so I made this. type: chatml.intel Orca-math-korean-preference question: the question from the orca-math dataset chosen: if the label is true, a random.choice among the answer and generated values; if false, the answer (see the original Orca-math paper) rejected: if the label is true, a random.choice among the other rejected values; if false, the rejected value (see the original Orca-math paper) Note: llm_exact_match prompt SYSTEM_PROMPT: As an expert Math teacher, your role is to… See the full description on the dataset page: https://huggingface.co/datasets/kuotient/orca-math-korean-dpo-pairs.
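The chosen/rejected selection described above might look like the following sketch; the function and field names are illustrative, and the exact procedure should be taken from the original Orca-math paper:

```python
import random

def build_pair(answer, generated, other_rejected, rejected, label):
    # Per the card: when the label is true, `chosen` is a random choice
    # among the answer and generated values, and `rejected` is a random
    # choice among the other rejected values; when the label is false,
    # the original answer/rejected values are kept as-is.
    if label:
        chosen = random.choice([answer] + list(generated))
        rej = random.choice(list(other_rejected))
    else:
        chosen = answer
        rej = rejected
    return {"chosen": chosen, "rejected": rej}

pair = build_pair("42", ["forty-two"], ["41", "43"], "41", label=True)
```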
xlangai/DS-1000
xlangai
DS-1000 in simplified format 🔥 Check the leaderboard from Eval-Arena on our project page. See testing code and more information (also the original fill-in-the-middle/Insertion format) in the DS-1000 repo. Reformatting credits: Yuhang Lai, Sida Wang
amaai-lab/MidiCaps
amaai-lab
MidiCaps Dataset The MidiCaps dataset [1] is a large-scale dataset of 168,385 MIDI music files with descriptive text captions and a set of extracted musical features. The captions were produced through a captioning pipeline that combines MIR feature extraction with the LLM Claude 3, which captions the data from the extracted features via an in-context learning task. The framework used to extract the captions is available open source on GitHub. The original MIDI files originate from… See the full description on the dataset page: https://huggingface.co/datasets/amaai-lab/MidiCaps.
HuggingFaceM4/the_cauldron
HuggingFaceM4
Dataset Card for The Cauldron Dataset description The Cauldron is part of the Idefics2 release. It is a massive collection of 50 vision-language datasets (training sets only) that were used for the fine-tuning of the vision-language model Idefics2. Load the dataset To load the dataset, install the datasets library with pip install datasets. Then:

```python
from datasets import load_dataset

ds = load_dataset("HuggingFaceM4/the_cauldron", "ai2d")
```

to download… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceM4/the_cauldron.
xai-org/RealworldQA
xai-org
RealWorldQA RealWorldQA is a benchmark designed for real-world understanding. The dataset consists of anonymized images taken from vehicles, in addition to other real-world images. We are excited to release RealWorldQA to the community, and we intend to expand it as our multimodal models improve. The initial release of RealWorldQA consists of over 700 images, each with a question and an easily verifiable answer. See the announcement of Grok-1.5 Vision Preview.… See the full description on the dataset page: https://huggingface.co/datasets/xai-org/RealworldQA.
FinLang/investopedia-embedding-dataset
FinLang
Dataset Card for investopedia-embedding dataset We curate a dataset of substantial size pertaining to finance from Investopedia using a new technique that leverages unstructured scraped data and an LLM to generate structured data suitable for fine-tuning embedding models. The dataset generation uses a new method of self-verification that ensures, with high probability, that the generated question-answer pairs are not hallucinated by the LLM. Dataset Description… See the full description on the dataset page: https://huggingface.co/datasets/FinLang/investopedia-embedding-dataset.
jojo0217/korean_safe_conversation
jojo0217
Overview This is an everyday-conversation dataset built for the Sungkyunkwan University - VAIV COMPANY industry-academia collaboration. It is intended for building natural and ethical chatbots. For high quality, most of the process was reviewed directly by humans, while GPT-3.5-turbo and GPT-4 were used for steps such as generation and translation. The focus is on everyday conversation while avoiding hate speech and biased answers. Data construction process Data composition type count note url everyday conversation dataset 2063 National Institute of Korean Language Modu Corpus https://corpus.korean.go.kr/request/reausetMain.do?lang=ko emotional dialogue 1020 AIHub emotional dialogue data… See the full description on the dataset page: https://huggingface.co/datasets/jojo0217/korean_safe_conversation.
nvidia/ChatRAG-Bench
nvidia
ChatRAG Bench ChatRAG Bench is a benchmark for evaluating a model's conversational QA capability over documents or retrieved context. ChatRAG Bench is built on and derived from 10 existing datasets: Doc2Dial, QuAC, QReCC, TopioCQA, INSCIT, CoQA, HybriDialogue, DoQA, SQA, ConvFinQA. ChatRAG Bench covers a wide range of documents and question types, which require models to generate responses from long context, comprehend and reason over tables, conduct arithmetic calculations… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/ChatRAG-Bench.
allenai/WildChat-1M
allenai
Dataset Card for WildChat Dataset Description Paper: https://arxiv.org/abs/2405.01470 Interactive Search Tool: https://wildvisualizer.com (paper) License: ODC-BY Language(s) (NLP): multi-lingual Point of Contact: Yuntian Deng Dataset Summary WildChat is a collection of 1 million conversations between human users and ChatGPT, alongside demographic data, including state, country, hashed IP addresses, and request headers. We collected WildChat… See the full description on the dataset page: https://huggingface.co/datasets/allenai/WildChat-1M.
Yossh/danbooru2023-webp-4Mpixel-224
Yossh
The dataset is simply https://huggingface.co/datasets/KBlueLeaf/danbooru2023-webp-4Mpixel resized to 224×224. Pseudo-code for the processing:

```python
from PIL import Image

def resize_image(file_path):
    # Resize in place to 224x224 and overwrite the original file.
    with Image.open(file_path) as img:
        resized_img = img.resize((224, 224))
        resized_img.save(file_path)
```
HuggingFaceFW/fineweb-edu-score-2
HuggingFaceFW
📚 FineWeb-Edu-score-2 5.4 trillion tokens of the finest educational data the 🌐 web has to offer What is it? The 📚 FineWeb-Edu dataset consists of 1.3T tokens (FineWeb-Edu) and 5.4T tokens (FineWeb-Edu-score-2) of educational web pages filtered from the 🍷 FineWeb dataset. This is the 5.4-trillion-token version. Note: this version uses a lower educational score threshold (2), which results in more documents, but lower quality compared to the 1.3T version. For more details… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu-score-2.
MixEval/MixEval
MixEval
🏠 Homepage | 👨‍💻 Github | 🏆 Leaderboard | 📜 arXiv | 📝 blog | 🤗 HF Paper | 𝕏 Twitter Benchmark correlations (%) with Chatbot Arena Elo, against the total costs of evaluating a single GPT-3.5-Turbo-0125 model. MixEval and MixEval-Hard show the highest correlations with Arena Elo and Arena Elo (En) among leading benchmarks. We reference the crowdsourcing price for Amazon Mechanical Turk ($0.05 per vote) when estimating the cost of evaluating a single model on Chatbot Arena… See the full description on the dataset page: https://huggingface.co/datasets/MixEval/MixEval.
remyxai/vqasynth_spacellava
remyxai
VQASynth_spacellava This dataset uses the VQASynth pipeline to synthesize spatial VQA samples, mixed with general VQA samples, used to fine-tune LLaVA-v1.5-13b.
edinburgh-dawg/mmlu-redux
edinburgh-dawg
Dataset Card for MMLU-Redux MMLU-Redux is a subset of 3,000 manually re-annotated questions across 30 MMLU subjects. Dataset Details Dataset Description Each data point in MMLU-Redux contains seven columns: question (str): The original MMLU question. choices (List[str]): The original list of four choices associated with the question from the MMLU dataset. answer (int): The MMLU ground truth label in the form of an array index between 0 and 3.… See the full description on the dataset page: https://huggingface.co/datasets/edinburgh-dawg/mmlu-redux.
NousResearch/CharacterCodex
NousResearch
Dataset Card for Character Codex Dataset Summary The Character Codex is a comprehensive dataset featuring popular characters from a wide array of media types and genres. Each entry includes detailed information about the character, the media source, and a unique scenario involving the character. This dataset is valuable for synthetic data, RAG for generative AI, writers, game developers, and fans who want to explore and utilize rich character descriptions for… See the full description on the dataset page: https://huggingface.co/datasets/NousResearch/CharacterCodex.
internlm/Lean-Workbook
internlm
Lean Workbook This dataset is about contest-level math problems formalized in Lean 4. Our dataset contains 57231 problems in the split of Lean Workbook and 82893 problems in the split of Lean Workbook Plus. We provide the natural language statement, answer, formal statement, and formal proof (if available) for each problem. These data can support autoformalization model training and searching for proofs. We open-source our code and our data. Our test environment is based on Lean… See the full description on the dataset page: https://huggingface.co/datasets/internlm/Lean-Workbook.
FBK-MT/Speech-MASSIVE
FBK-MT
Speech-MASSIVE Dataset Description Speech-MASSIVE is a multilingual Spoken Language Understanding (SLU) dataset comprising the speech counterpart for a portion of the MASSIVE textual corpus. Speech-MASSIVE covers 12 languages (Arabic, German, Spanish, French, Hungarian, Korean, Dutch, Polish, European Portuguese, Russian, Turkish, and Vietnamese) from different families and inherits from MASSIVE the annotations for the intent prediction and slot-filling tasks.… See the full description on the dataset page: https://huggingface.co/datasets/FBK-MT/Speech-MASSIVE.
linxy/LaTeX_OCR
linxy
LaTeX OCR 的数据仓库 本数据仓库是专为 LaTeX_OCR 及 LaTeX_OCR_PRO 制作的数据,来源于 https://zenodo.org/record/56198#.V2p0KTXT6eA 以及 https://www.isical.ac.in/~crohme/ 以及我们自己构建。 如果这个数据仓库有帮助到你的话,请点亮 ❤️like ++ 后续追加新的数据也会放在这个仓库 ~~ 原始数据仓库在github LinXueyuanStdio/Data-for-LaTeX_OCR. 数据集 本仓库有 5 个数据集 small 是小数据集,样本数 110 条,用于测试 full 是印刷体约 100k 的完整数据集。实际上样本数略小于 100k,因为用 LaTeX 的抽象语法树剔除了很多不能渲染的 LaTeX。 synthetic_handwrite 是手写体 100k 的完整数据集,基于 full 的公式,使用手写字体合成而来,可以视为人类在纸上的手写体。样本数实际上略小于 100k,理由同上。… See the full description on the dataset page: https://huggingface.co/datasets/linxy/LaTeX_OCR.
Magpie-Align/Magpie-Pro-300K-Filtered
Magpie-Align
Project Web: https://magpie-align.github.io/ Arxiv Technical Report: https://arxiv.org/abs/2406.08464 Codes: https://github.com/magpie-align/magpie Abstract Click Here High-quality instruction data is critical for aligning large language models (LLMs). Although some models, such as Llama-3-Instruct, have open weights, their alignment data remain private, which hinders the democratization of AI. High human labor costs and a limited, predefined scope for prompting prevent… See the full description on the dataset page: https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-300K-Filtered.
nkp37/OpenVid-1M
nkp37
Summary This is the dataset proposed in our paper "OpenVid-1M: A Large-Scale High-Quality Dataset for Text-to-video Generation". OpenVid-1M is a high-quality text-to-video dataset designed for research institutions to enhance video quality, featuring high aesthetics, clarity, and resolution. It can be used for direct training or as a quality tuning complement to other video datasets. All videos in the OpenVid-1M dataset have resolutions of at least 512×512. Furthermore, we… See the full description on the dataset page: https://huggingface.co/datasets/nkp37/OpenVid-1M.
tomg-group-umd/pixelprose
tomg-group-umd
From Pixels to Prose: A Large Dataset of Dense Image Captions [ arXiv paper ] PixelProse is a comprehensive dataset of over 16M (million) synthetically generated captions, leveraging cutting-edge vision-language models (Gemini 1.0 Pro Vision) for detailed and accurate descriptions. 1. Details Total number of image-caption pairs: 16,896,214 (16.9M) 6,538,898 (6.5M) pairs in the split of CommonPool 9,066,455 (9.1M) pairs in the split of CC12M 1,290,861 (1.3M)… See the full description on the dataset page: https://huggingface.co/datasets/tomg-group-umd/pixelprose.
nyu-visionx/CV-Bench
nyu-visionx
Cambrian Vision-Centric Benchmark (CV-Bench) This repository contains the Cambrian Vision-Centric Benchmark (CV-Bench), introduced in Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs. Files The test.parquet file contains the full dataset annotations and images pre-loaded for processing with HF Datasets. It can be loaded as follows: from datasets… See the full description on the dataset page: https://huggingface.co/datasets/nyu-visionx/CV-Bench.
Magpie-Align/Magpie-Qwen2-Pro-200K-Chinese
Magpie-Align
Project Web: https://magpie-align.github.io/ Arxiv Technical Report: https://arxiv.org/abs/2406.08464 Codes: https://github.com/magpie-align/magpie Abstract Click Here High-quality instruction data is critical for aligning large language models (LLMs). Although some models, such as Llama-3-Instruct, have open weights, their alignment data remain private, which hinders the democratization of AI. High human labor costs and a limited, predefined scope for prompting prevent… See the full description on the dataset page: https://huggingface.co/datasets/Magpie-Align/Magpie-Qwen2-Pro-200K-Chinese.
lmms-lab/M4-Instruct-Data
lmms-lab
M4-Instruct Dataset Card Dataset details Dataset type: M4-Instruct is a set of multi-image datasets that are collected from public datasets or generated by the GPT-4V API. It is constructed for training LMMs for their interleaved multi-image capabilities, e.g., LLaVA-NeXT-Interleave. Dataset date: M4-Instruct was collected in April 2024, and released in June 2024. Paper or resources for more information: Blog:… See the full description on the dataset page: https://huggingface.co/datasets/lmms-lab/M4-Instruct-Data.
Chrisneverdie/OnlySports_Dataset
Chrisneverdie
🏀nlySports Dataset Overview OnlySports Dataset is a comprehensive collection of English sports documents, comprising a diverse range of content including news articles, blogs, match reports, interviews, and tutorials. This dataset is part of the larger OnlySports collection, which includes: OnlySportsLM: A 196M parameter sports-domain language model OnlySports Dataset: The dataset described in this README OnlySports Benchmark: A novel evaluation method for… See the full description on the dataset page: https://huggingface.co/datasets/Chrisneverdie/OnlySports_Dataset.
We-Math/We-Math
We-Math
Dataset Card for WE-MATH Benchmark GitHub | Paper | Website Inspired by human-like mathematical reasoning, we introduce We-Math, the first benchmark specifically designed to explore the problem-solving principles beyond the end-to-end performance. We meticulously collect and categorize 6.5K visual math problems, spanning 67 hierarchical knowledge concepts and 5 layers of knowledge granularity. Citation If you find the content of this project helpful, please cite… See the full description on the dataset page: https://huggingface.co/datasets/We-Math/We-Math.
amphion/Emilia
amphion
Emilia: An Extensive, Multilingual, and Diverse Speech Dataset for Large-Scale Speech Generation The Emilia dataset is the first open-source, multilingual, in-the-wild dataset designed for speech generation. It offers over 101,000 hours of high-quality speech data across six languages: Chinese (zh), English (en), Japanese (ja), Korean (ko), German (de), and French (fr). The dataset includes various speaking styles and their corresponding transcriptions. README… See the full description on the dataset page: https://huggingface.co/datasets/amphion/Emilia.
omni-research/DREAM-1K
omni-research
DREAM-1K DREAM-1K (Description with Rich Events, Actions, and Motions) is a challenging video description benchmark. It contains a collection of 1,000 short (around 10 seconds) video clips with diverse complexities from five different origins: live-action movies, animated movies, stock videos, long YouTube videos, and TikTok-style short videos. We provide a fine-grained manual annotation for each video. Below are the dataset statistics:
zwq2018/Multi-modal-Self-instruct
zwq2018
Dataset Description Paper Information Dataset Examples Leaderboard Dataset Usage Data Downloading Data Format Evaluation Citation You can download the zip dataset directly, and both train and test subsets are collected in Multi-modal-Self-instruct.zip. Dataset Description Multi-Modal Self-Instruct dataset utilizes large language models and their code capabilities to synthesize massive abstract images and visual reasoning instructions across daily scenarios. This benchmark… See the full description on the dataset page: https://huggingface.co/datasets/zwq2018/Multi-modal-Self-instruct.
Magpie-Align/Magpie-Reasoning-150K
Magpie-Align
Project Web: https://magpie-align.github.io/ Arxiv Technical Report: https://arxiv.org/abs/2406.08464 Codes: https://github.com/magpie-align/magpie Abstract Click Here High-quality instruction data is critical for aligning large language models (LLMs). Although some models, such as Llama-3-Instruct, have open weights, their alignment data remain private, which hinders the democratization of AI. High human labor costs and a limited, predefined scope for prompting prevent… See the full description on the dataset page: https://huggingface.co/datasets/Magpie-Align/Magpie-Reasoning-150K.
mrzjy/kaomoji_caption
mrzjy
# throw a table angrily | 掀桌子 (╯°□°)╯︵ ┻━┻ # surprise | 吃惊 Σ( ° △ °|||)︴ This is a collection of 10k+ kaomojis (颜文字) with captions and meta info. Most of the captions are in English, while 1k+ captions are in Chinese. The data were crawled and parsed from different websites. There may be repeated samples, so you should deduplicate before use.
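Since the card warns about repeated samples, an exact-text deduplication pass might look like the sketch below; the field names `kaomoji` and `caption` are assumptions about the schema:

```python
def dedupe(rows, key="kaomoji"):
    # Keep the first occurrence of each kaomoji string, preserving order.
    seen = set()
    out = []
    for row in rows:
        value = row[key]
        if value not in seen:
            seen.add(value)
            out.append(row)
    return out

rows = [
    {"kaomoji": "(╯°□°)╯︵ ┻━┻", "caption": "throw a table angrily"},
    {"kaomoji": "(╯°□°)╯︵ ┻━┻", "caption": "flip table"},
    {"kaomoji": "Σ( ° △ °|||)︴", "caption": "surprise"},
]
deduped = dedupe(rows)
```

Near-duplicates (e.g. differing only in spacing) would survive this pass; normalizing whitespace in `value` before the lookup is a simple extension.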
fwnlp/mDPO-preference-data
fwnlp
Dataset derived from VLFeedback
watashihakobashi/ogiri
watashihakobashi
This dataset is used in the fifth session ("SFT") exercise of the 2024 LLM lecture series hosted by the Matsuo-Iwasawa Lab at the University of Tokyo. https://weblab.t.u-tokyo.ac.jp/en/education/large-language-model/ It contains selected outputs from the following two models on manually created inputs: https://huggingface.co/watashiha/watashiha-gpt-6b https://huggingface.co/watashiha/Watashiha-Llama-2-13B-Ogiri-sft Please use it for educational and research purposes only.
grider-withourai/nekopara-speech
grider-withourai
Nekopara Audio Dataset Dataset Description This dataset contains audio samples and associated metadata from the Nekopara visual novel series, covering volumes 0-4 and extra content. Features Feature Name Type Description character_name string Name of the character speaking volume string Game volume the audio is from (extra, vol0, vol1, vol2, vol3, vol4) audio Audio Audio sample (44.1 kHz sampling rate) voice_file string Original… See the full description on the dataset page: https://huggingface.co/datasets/grider-withourai/nekopara-speech.
lmms-lab/LLaVA-OneVision-Data
lmms-lab
Dataset Card for LLaVA-OneVision [2024-09-01]: Uploaded VisualWebInstruct (filtered); it is used in the OneVision stage. Almost all subsets are uploaded in HF's required format, and you can use the recommended interface to download them and follow our code below to convert them. The ureader_kg and ureader_qa subsets are uploaded as processed JSONs and tar.gz image folders; you may download them directly from the following URL.… See the full description on the dataset page: https://huggingface.co/datasets/lmms-lab/LLaVA-OneVision-Data.
prithivMLmods/Dishes-Image-Zeroshot-IndianStyle
prithivMLmods
Merged using https://huggingface.co/spaces/prithivMLmods/Parquet-PIL ⚠️The images included in these datasets are intended solely for educational purposes. They are used to facilitate learning, research, and development in various educational and academic contexts. All images are sourced with the understanding that their use aligns with fair use principles and the educational objectives of this project.
Vikhrmodels/Grounded-RAG-RU-v2
Vikhrmodels
A dataset for aligning (grounding) an LLM's ability to answer questions over documents (RAG) This dataset was built from 13k different articles from the Russian Wikipedia using synthetic questions and answers generated by gpt-4-turbo-1106. It contains 4047 unique clusters, i.e. combinations of documents, a rough simulation of the "retrieved results" in a retrieval system. More details are given in the section "General stages of building this dataset". The total size of the dataset is 50210 unique dialogues.… See the full description on the dataset page: https://huggingface.co/datasets/Vikhrmodels/Grounded-RAG-RU-v2.
FanqingM/MMIU-Benchmark
FanqingM
Dataset Card for MMIU Repository: https://github.com/OpenGVLab/MMIU Paper: https://arxiv.org/abs/2408.02718 Project Page: https://mmiu-bench.github.io/ Point of Contact: Fanqing Meng Introduction MMIU encompasses 7 types of multi-image relationships, 52 tasks, 77K images, and 11K meticulously curated multiple-choice questions, making it the most extensive benchmark of its kind. Our evaluation of 24 popular MLLMs, including both open-source and proprietary… See the full description on the dataset page: https://huggingface.co/datasets/FanqingM/MMIU-Benchmark.
ai-lab/MBD
ai-lab
Dataset summary This dataset is designed to assist in predicting a customer's propensity to purchase various products within a month following the reporting date. The dataset includes anonymized historical data on transaction activity, dialog embeddings, and geo-activity for some bank clients over 12 months. A reduced version of the dataset is available as MBD-mini. The mini MBD dataset contains a reduced subset of the data, making it easier and faster to work with during the development… See the full description on the dataset page: https://huggingface.co/datasets/ai-lab/MBD.
MERA-evaluation/MERA
MERA-evaluation
MERA (Multimodal Evaluation for Russian-language Architectures) Summary MERA (Multimodal Evaluation for Russian-language Architectures) is a new open independent benchmark for the evaluation of SOTA models for the Russian language. The MERA benchmark unites industry and academic partners in one place to research the capabilities of fundamental models, draw attention to AI-related issues, foster collaboration within the Russian Federation and in the international… See the full description on the dataset page: https://huggingface.co/datasets/MERA-evaluation/MERA.
deepseek-ai/DeepSeek-Prover-V1
deepseek-ai
Evaluation Results | Model & Dataset Downloads | License | Contact | Paper Link 👁️ DeepSeek-Prover: Advancing Theorem Proving in LLMs through Large-Scale Synthetic Data 1. Introduction Proof assistants like Lean have revolutionized mathematical proof verification, ensuring high accuracy and reliability. Although large language models (LLMs) show… See the full description on the dataset page: https://huggingface.co/datasets/deepseek-ai/DeepSeek-Prover-V1.
ShapeNet/ShapeSplatsV1
ShapeNet
This repository contains ShapeSplats, a large dataset of Gaussian splats spanning 65K objects in 87 unique categories (gathered from ShapeNetCore, ShapeNet-Part, and ModelNet). ShapeSplatsV1 consists of the 52K objects across 55 categories of ShapeNetCore. The data is distributed as ply files where information about each Gaussian is encoded in custom vertex attributes. Please see DATA.md for details about the data. If you use the ShapeSplatsV1 data, you agree to abide by the ShapeNet terms of… See the full description on the dataset page: https://huggingface.co/datasets/ShapeNet/ShapeSplatsV1.
OpenGVLab/OmniCorpus-CC
OpenGVLab
🚀 We are uploading the dataset files ~ ⭐️ NOTE: Several parquet files were flagged as unsafe (viruses) by HF's official scanning, although they are reported safe by ClamAV and VirusTotal. We found many false positives from HF's automatic scanning reported in HF discussions and have raised a discussion to request a re-scan. OmniCorpus-CC This is the repository of OmniCorpus-CC, which contains 988 million image-text interleaved documents collected from Common Crawl. Repository:… See the full description on the dataset page: https://huggingface.co/datasets/OpenGVLab/OmniCorpus-CC.
HuggingFaceFV/finevideo
HuggingFaceFV
FineVideo FineVideo Description Dataset Explorer Dataset Distribution How to download and use FineVideo Using datasets Using huggingface_hub Load a subset of the dataset Dataset Structure Data Instances Data Fields Dataset Creation License CC-By Considerations for Using the Data Social Impact of Dataset Discussion of Biases Additional Information Credits Future Work Opting out of FineVideo Citation Information Terms of use for FineVideo… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceFV/finevideo.
LooksJuicy/Chinese-Roleplay-Novel
LooksJuicy
Chinese open-source roleplay datasets have so far focused on highly human-like personas or pure character dialogue, and open data for interactive gameplay is severely lacking, so many models, especially smaller ones, support tavern-style character cards poorly. To help address this, this project offers a starting point: based on 4500 novel passages, GPT-4o was used to build about 260 tavern-style samples, all multi-turn dialogues, where every turn includes state data such as time, character status, and task progress. The data keys mean the following: world: the worldview of the current story, which can usually be added to the system prompt scence: the scene of the current story, including time, location, environment, and task objective character: the characters that may appear in the story and their brief introductions field: the state information to be generated in each turn of this sample conversations: the dialogue content of this sample, consisting of a greeting, the protagonist (user), and the system (assistant) fields_format: the prompt specifying the fill-in format of the state information, which may be a list, table, JSON, etc. format_list: the filled-in state information An example of the state information: **Health**: 🌿 Good, body trembling **Mental state**: 🌟 Fear, extremely tense… See the full description on the dataset page: https://huggingface.co/datasets/LooksJuicy/Chinese-Roleplay-Novel.
DominiqueBrunato/TRACE-it_CALAMITA
DominiqueBrunato
Dataset Card for TRACE-it Challenge @ CALAMITA 2024 TRACE-it (Testing Relative clAuses Comprehension through Entailment in ITalian) has been proposed as part of the CALAMITA Challenge, the special event dedicated to the evaluation of Large Language Models (LLMs) in Italian and co-located with the Tenth Italian Conference on Computational Linguistics (https://clic2024.ilc.cnr.it/calamita/). The dataset focuses on evaluating LLM's understanding of a specific linguistic structure… See the full description on the dataset page: https://huggingface.co/datasets/DominiqueBrunato/TRACE-it_CALAMITA.
Salesforce/fineweb_deduplicated
Salesforce
TL;DR Fineweb is a popular, high-quality open dataset. This dataset is a deduplicated version of Fineweb: rows with duplicate text are removed and their counts are collected. Motivation Fineweb is an open text dataset intended for training language models. It is one of the highest-quality and most popular open datasets available. It was produced by a reputable AI lab, Hugging Face, and has been downloaded tens of thousands of times. The Fineweb dataset is 93.4 TB and has… See the full description on the dataset page: https://huggingface.co/datasets/Salesforce/fineweb_deduplicated.
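The dedup described above, keeping one row per unique text together with how many times it occurred, can be sketched as follows; the `text` and `count` column names are assumptions, not necessarily the dataset's actual schema:

```python
from collections import Counter

def dedupe_with_counts(rows):
    # First pass counts exact-text occurrences; second pass keeps the
    # first copy of each text and attaches its occurrence count.
    counts = Counter(r["text"] for r in rows)
    seen = set()
    out = []
    for r in rows:
        if r["text"] in seen:
            continue
        seen.add(r["text"])
        out.append({**r, "count": counts[r["text"]]})
    return out

rows = [{"text": "a"}, {"text": "b"}, {"text": "a"}]
result = dedupe_with_counts(rows)
```

At Fineweb's 93.4 TB scale an in-memory pass like this would not fit; a real pipeline would hash the text and shard the counting, but the logic is the same.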
LooksJuicy/Chinese-Emotional-Intelligence
LooksJuicy
This project aims to improve the emotional intelligence of large models. The source data comes from the web, and question-answer pairs were constructed in a way similar to my previous project.
thesven/gsm8k-reasoning
thesven
Dataset Card for gsm8k-reasoning Overview GSM8K Reasoning is a dataset derived from the openai/gsm8k dataset, focusing on enhancing math problem-solving through reasoning-based prompts and solutions. This version emphasizes logical reasoning and step-by-step thought processes in mathematics, pushing models to generate solutions that reflect human-like deductive reasoning. The dataset is curated using a specialized pipeline designed to encourage… See the full description on the dataset page: https://huggingface.co/datasets/thesven/gsm8k-reasoning.
yeates/PromptfixData
yeates
Model Sources Dataset: https://huggingface.co/datasets/yeates/PromptfixData Github: https://github.com/yeates/PromptFix Paper: https://arxiv.org/pdf/2405.16785 Project Page: https://www.yongshengyu.com/PromptFix-Page/ Model Usage The PromptFix dataset is intended solely for research purposes. Please note that the PromptFix dataset is curated from open-source research projects and publicly available photo libraries. By using our dataset, you automatically agree… See the full description on the dataset page: https://huggingface.co/datasets/yeates/PromptfixData.
nbeerbower/gutenberg2-dpo
nbeerbower
Gutenberg2 DPO A DPO dataset meant to enhance the writing capabilities of LLMs using public domain books from Project Gutenberg. Inspired by Jon Durbin's Gutenberg DPO dataset: jondurbin/gutenberg-dpo-v0.1 Process Various books were selected from Project Gutenberg by personal preference and recommendation by Claude 3.5 Sonnet. They were then parsed by chapter using chapterize. Books that failed to parse or had mangled results were dropped from consideration.… See the full description on the dataset page: https://huggingface.co/datasets/nbeerbower/gutenberg2-dpo.
llm-lab/TuringQ
llm-lab
Dataset Summary TuringQ is a comprehensive dataset designed to evaluate the reasoning capabilities of large language models (LLMs) in the theory of computation. It consists of 4,006 question-answer pairs spanning undergraduate and graduate-level problems collected from a diverse set of universities. The dataset covers four difficulty levels and seven main conceptual areas, including Regular Languages, Theoretical Concepts, Context-Free Languages, Computability Theory… See the full description on the dataset page: https://huggingface.co/datasets/llm-lab/TuringQ.
RANEPA-ai/SLAVA-OpenData-2800-v1
RANEPA-ai
SLAVA: Benchmark of the Socio-political Landscape And Value Analysis Dataset Description Since 2024, the SLAVA benchmark has been developed, containing about 14,000 questions focused on the Russian domain, covering areas such as history, political science, sociology, political geography, and national security basics. This benchmark evaluates the ability of large language models (LLMs) to handle sensitive topics important to the Russian information… See the full description on the dataset page: https://huggingface.co/datasets/RANEPA-ai/SLAVA-OpenData-2800-v1.
Maple728/Time-300B
Maple728
Dataset Card for Time-300B This repository contains the Time-300B dataset of the paper Time-MoE: Billion-Scale Time Series Foundation Models with Mixture of Experts. For details on how to use this dataset, please visit our GitHub page.
cambridgeltl/DARE
cambridgeltl
DARE DARE (Diverse Visual Question Answering with Robustness Evaluation) is a carefully created and curated multiple-choice VQA benchmark. DARE evaluates VLM performance on five diverse categories and includes four robustness-oriented evaluations based on variations of: the prompts, the subsets of answer options, the output format, and the number of correct answers. The validation split of the dataset contains images, questions, answer options, and correct answers. We are not… See the full description on the dataset page: https://huggingface.co/datasets/cambridgeltl/DARE.
mllmTeam/MobileViews
mllmTeam
MobileViews: A Large-Scale Mobile GUI Dataset Read the paper MobileViews is a large-scale dataset designed to support research on mobile user interface (UI) analysis and mobile agents. The first version — MobileViews-600K — contains over 600,000 mobile UI screenshot-view hierarchy (VH) pairs, collected from approximately 20,000 apps on the Google Play Store. Dataset Overview The zip and parquet files with the same index contain the same screenshots and VH files… See the full description on the dataset page: https://huggingface.co/datasets/mllmTeam/MobileViews.
alibayram/yapay_zeka_turkce_mmlu_bolum_sonuclari
alibayram
Yapay Zeka Türkçe MMLU Bölüm Sonuçları (Turkish MMLU Section Results) This dataset measures the section-by-section performance of AI models on questions from 62 different categories used in the Turkish education system. Each section contains 100 real questions from the Turkish education system, for a total of 6,200 questions. Correct-answer rates are detailed per section in order to determine which topics the models handle more successfully. Veri… See the full description on the dataset page: https://huggingface.co/datasets/alibayram/yapay_zeka_turkce_mmlu_bolum_sonuclari.
Kaichengalex/YFCC15M
Kaichengalex
YFCC15M Recaption Dataset This YFCC15M dataset is filtered by DeCLIP and recaptioned using the diverse description generation framework proposed in RWKV-CLIP. The text is a list of text tokens with a length of 77, encoded using the CLIP tokenizer. You can use from clip.simple_tokenizer import SimpleTokenizer as _Tokenizer to decode it back into the original text. Using Dataset You can easily download and use the YFCC15M dataset with Hugging Face's datasets… See the full description on the dataset page: https://huggingface.co/datasets/Kaichengalex/YFCC15M.
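Since each caption is stored as a fixed-length list of 77 CLIP token ids, the padding and special tokens need to be stripped before decoding. A minimal sketch, assuming OpenAI CLIP's usual special-token ids (49406/49407 for start/end of text, 0 for padding) — verify these against your tokenizer version before relying on them:

```python
# Assumed special-token ids for OpenAI's CLIP BPE tokenizer; check your
# clip version, as these are not taken from the dataset card itself.
SOT, EOT, PAD = 49406, 49407, 0

def strip_special(tokens):
    """Drop the start token and padding; stop at the end-of-text token."""
    out = []
    for t in tokens:
        if t == SOT or t == PAD:
            continue
        if t == EOT:
            break
        out.append(t)
    return out

# With the real tokenizer installed, the stripped ids would then be decoded:
# from clip.simple_tokenizer import SimpleTokenizer as _Tokenizer
# text = _Tokenizer().decode(strip_special(tokens))

example = [49406, 320, 1125, 49407] + [0] * 73  # a padded 77-token caption
print(strip_special(example))  # -> [320, 1125]
```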
inductiva/windtunnel-20k
inductiva
Wind Tunnel Dataset The Wind Tunnel Dataset contains 19,812 OpenFOAM simulations of 1,000 unique automobile-like objects placed in a virtual wind tunnel measuring 20 meters long, 10 meters wide, and 8 meters high. Each object was tested under 20 different conditions: 4 random wind speeds ranging from 10 to 50 m/s, and 5 rotation angles (0°, 180° and 3 random angles). The object meshes were generated using Instant Mesh based on images sourced from the Stanford Cars… See the full description on the dataset page: https://huggingface.co/datasets/inductiva/windtunnel-20k.
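The 20-conditions-per-object setup described above is a simple cross product of speeds and angles. A sketch of how such a condition grid could be enumerated — the sampling ranges follow the card's description, but the function name and exact sampling scheme are illustrative assumptions:

```python
import itertools
import random

random.seed(0)  # for reproducibility of this illustration

def sample_conditions():
    """Illustrative sketch: 4 random wind speeds (10-50 m/s) crossed with
    5 rotation angles (0 deg, 180 deg, plus 3 random angles), giving the
    20 per-object conditions the dataset description mentions."""
    speeds = [random.uniform(10, 50) for _ in range(4)]
    angles = [0.0, 180.0] + [random.uniform(0, 360) for _ in range(3)]
    return list(itertools.product(speeds, angles))

conditions = sample_conditions()
print(len(conditions))  # 4 speeds x 5 angles = 20 conditions per object
```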
k-mktr/improved-flux-prompts-photoreal-portrait
k-mktr
Photo Portrait Prompt Dataset for FLUX Overview This dataset contains a curated collection of prompts specifically designed for generating photo portraits using FLUX.1, an advanced text-to-image model. These prompts are crafted to produce high-quality, lifelike portraits by leveraging sophisticated prompting techniques and best practices. Latest Version Improved on October 3, 2024; this version has undergone additional curation and refinement. What's new?… See the full description on the dataset page: https://huggingface.co/datasets/k-mktr/improved-flux-prompts-photoreal-portrait.
nguyendv02/ViMD_Dataset
nguyendv02
Multi-Dialect Vietnamese: Task, Dataset, Baseline Models and Challenges (Main EMNLP 2024) Introduction This document presents the accompanying dataset for the paper titled "Multi-Dialect Vietnamese: Task, Dataset, Baseline Models, and Challenges". The dataset, referred to as the Vietnamese Multi-Dialect (ViMD) dataset, is a comprehensive resource designed to capture the linguistic diversity represented by 63 provincial dialects spoken across Vietnam. The paper… See the full description on the dataset page: https://huggingface.co/datasets/nguyendv02/ViMD_Dataset.
vector-institute/newsmediabias-plus
vector-institute
📰 NewsMediaBias-Plus Dataset 🌐 Overview NewsMediaBias-Plus is a multimodal dataset designed for the analysis of media bias and disinformation through the combination of textual and visual data from news articles. This dataset aims to foster research and development in detecting, categorizing, and understanding the nuances of biased reporting and the dissemination of information in media outlets. 📚 Dataset Description The NewsMediaBias-Plus… See the full description on the dataset page: https://huggingface.co/datasets/vector-institute/newsmediabias-plus.
SylvanL/Traditional-Chinese-Medicine-Dataset-SFT
SylvanL
启古纳今,厚德精术 (Drawing on the ancient to embrace the modern; profound virtue, refined skill) Dataset Introduction High-Quality Traditional Chinese Medicine Dataset from Non-Internet Sources - SFT/IFT This dataset was carefully built with a substantial investment of human effort and resources, with the mission of helping build a high-quality Chinese-language LLM community. It contains about 1 GB of high-quality Q&A content covering clinical cases from all areas of TCM, classic works by renowned practitioners, medical encyclopedia entries, terminology explanations, and more, with comprehensive coverage and balanced proportions. The dataset consists mainly of internal, non-Internet-sourced data, 99% of which is Simplified Chinese, with excellent content quality and considerable information density. Its sources are related to, but do not heavily overlap with, the content of SylvanL/Traditional-Chinese-Medicine-Dataset-Pretrain; the two were built with a step-by-step, mutually complementary logic. This dataset can be used on its own, but it is recommended to first continue pre-training the model on the companion pre-training dataset and then use this dataset for further instruction fine-tuning.… See the full description on the dataset page: https://huggingface.co/datasets/SylvanL/Traditional-Chinese-Medicine-Dataset-SFT.
isLucid/liepa-2
isLucid
What and how? A 1,000-hour annotated Lithuanian speech corpus intended for the development and improvement of speech recognition and synthesis engines. It is used both for scientific research and for practical applications. What makes it unique? This resource stands out for its scale, and also for being freely available to anyone, both for scientific research and for building applied speech recognition and synthesis solutions. The corpus will allow researchers and companies to improve… See the full description on the dataset page: https://huggingface.co/datasets/isLucid/liepa-2.
princeton-nlp/prolong-data-512K
princeton-nlp
princeton-nlp/prolong-data-512K [Paper] [HF Collection] [Code] ProLong (Princeton long-context language models) is a family of long-context models continually trained and supervised fine-tuned from Llama-3-8B, with a maximum context window of 512K tokens. Our main ProLong model is one of the best-performing long-context models at the 10B scale (evaluated by HELMET). To train this strong long-context model, we conduct thorough ablations on the long-context pre-training… See the full description on the dataset page: https://huggingface.co/datasets/princeton-nlp/prolong-data-512K.
Thorsten-Voice/TV-44kHz-Full
Thorsten-Voice
The "Thorsten-Voice" dataset This truly open source (CC0 license) German (🇩🇪) voice dataset contains about 40 hours of transcribed voice recordings by Thorsten Müller, a single male native speaker, in over 38,000 wave files. Mono Samplerate: 44,100 Hz Trimmed silence at begin/end Denoised Normalized to -24dB Disclaimer "Please keep in mind, I am not a professional speaker, just an open source speech technology enthusiast who donates his voice. I contribute my… See the full description on the dataset page: https://huggingface.co/datasets/Thorsten-Voice/TV-44kHz-Full.
Dampfinchen/Creative_Writing_Multiturn
Dampfinchen
This is a dataset merge of many, many high-quality story writing / roleplaying datasets from across Hugging Face. I've filtered specifically for samples with high turn counts, which is a key difference from already available datasets. My goal is to improve the model's ability to recollect and mention details from far back even at longer context and, more importantly, also improve the model's ability to output engaging verbose storylines, reduce certain phrases, increase creativity and reduce dry… See the full description on the dataset page: https://huggingface.co/datasets/Dampfinchen/Creative_Writing_Multiturn.
robotics-diffusion-transformer/rdt-ft-data
robotics-diffusion-transformer
Dataset Card This is the fine-tuning dataset used in the paper RDT-1B: a Diffusion Foundation Model for Bimanual Manipulation. Source Project Page: https://rdt-robotics.github.io/rdt-robotics/ Paper: https://arxiv.org/pdf/2410.07864 Code: https://github.com/thu-ml/RoboticsDiffusionTransformer Model: https://huggingface.co/robotics-diffusion-transformer/rdt-1b Uses Download all archive files and use the following command to extract: cat… See the full description on the dataset page: https://huggingface.co/datasets/robotics-diffusion-transformer/rdt-ft-data.
big-banyan-tree/BBT_CommonCrawl_2024
big-banyan-tree
Context BigBanyanTree is an initiative to empower colleges to set up their data engineering clusters, and drive interest towards data processing and analysis using tools such as Apache Spark. The data provided here is the direct result of this initiative. The data was processed by Gautam and Suchit, under the guidance of Harsh Singhal. Content Each arrow file contains a table with fields extracted from Common Crawl WARC files. The datasets provided are derived… See the full description on the dataset page: https://huggingface.co/datasets/big-banyan-tree/BBT_CommonCrawl_2024.
Omartificial-Intelligence-Space/Arabic-finanical-rag-embedding-dataset
Omartificial-Intelligence-Space
Arabic Version of The Financial RAG Embedding Dataset This dataset is tailored for fine-tuning embedding models in Retrieval-Augmented Generation (RAG) setups. It consists of 7,000 question-context pairs translated into Arabic, sourced from NVIDIA's 2023 SEC Filing Report. The dataset is designed to improve the performance of embedding models by providing positive samples for financial question-answering tasks in Arabic. This dataset is the Arabic version of the original… See the full description on the dataset page: https://huggingface.co/datasets/Omartificial-Intelligence-Space/Arabic-finanical-rag-embedding-dataset.
replit/agent-challenge
replit
Replit Agent Challenge For comprehensive details about the challenge, visit our GitHub repository. Dataset Overview This dataset comprises a collection of instructions and file states specifically curated for the agent challenge. It is derived from a subset of SWE-Bench-Lite. Schema Structure The dataset follows this schema: - File_before: [Initial state of the file] - Instructions: [Steps to transform the file to its final state] - File_after:… See the full description on the dataset page: https://huggingface.co/datasets/replit/agent-challenge.
litagin/Galgame_Speech_ASR_16kHz
litagin
Dataset Card for Galgame_Speech_ASR_16kHz The following rules (from the original repository) must be followed: You must comply with all the terms of the GNU General Public License v3.0! Additional note: Commercial use is prohibited. Neither this dataset nor any model trained on it may be used for any commercial purpose; for commercial use, seek authorization from every vendor in the data list, and the author bears no responsibility for any issues arising from violations of the open-source license. Any model trained on this dataset must be open-sourced; whether to cite this dataset in the README is left to the trainer's discretion and is not mandatory.… See the full description on the dataset page: https://huggingface.co/datasets/litagin/Galgame_Speech_ASR_16kHz.
AntiplagiatCompany/CL4Lang
AntiplagiatCompany
Cross-lingual plagiarism detection: Two are better than one The widespread availability of scientific documents in multiple languages, coupled with the development of automatic translation and editing tools, has created a demand for efficient methods that can detect plagiarism across different languages. This is a dataset for cross-lingual plagiarism evaluation. The collection consists of a subset of Wikipedia articles in 4 languages (ru, hy, es, en). Queries consist of Wikipedia documents… See the full description on the dataset page: https://huggingface.co/datasets/AntiplagiatCompany/CL4Lang.
kkkevinkkk/SituatedFaithfulnessEval
kkkevinkkk
Dataset Card for "SituatedFaithfulnessEval" More Information needed
juliozhao/doclayout-yolo-DocLayNet
juliozhao
https://huggingface.co/papers/2410.12628
dmis-lab/ChroKnowBench
dmis-lab
ChroKnowBench ChroKnowBench is a benchmark dataset designed to evaluate the performance of language models on temporal knowledge across multiple domains. The dataset consists of both time-variant and time-invariant knowledge, providing a comprehensive assessment for understanding knowledge evolution and constancy over time. The dataset was introduced by Park et al. in ChroKnowledge: Unveiling Chronological Knowledge of Language Models in Multiple Domains Dataset… See the full description on the dataset page: https://huggingface.co/datasets/dmis-lab/ChroKnowBench.
HumanEval-V/HumanEval-V-Benchmark
HumanEval-V
HumanEval-V: Evaluating Visual Understanding and Reasoning Abilities of LMMs Through Coding Tasks 📄 Paper • 🏠 Home Page • 💻 GitHub Repository • 🏆 Leaderboard • 🤗 Dataset Viewer HumanEval-V includes 108 carefully crafted, entry-level Python coding tasks. LMMs are required to complete the code solution based on the provided visual context and a predefined Python function signature outlining the task requirements. Every task is equipped with… See the full description on the dataset page: https://huggingface.co/datasets/HumanEval-V/HumanEval-V-Benchmark.
fwnlp/self-instruct-safety-alignment
fwnlp
[EMNLP 2024] Data Advisor: Dynamic Data Curation for Safety Alignment of Large Language Models 🌐 Homepage | 📖 Paper | 🤗 Dataset (Data Advisor) | 🤗 Dataset (Self-Instruct) Disclaimer The dataset contains content that may be offensive or harmful. This dataset is intended for research purposes, specifically to support efforts aimed at creating safer and less harmful AI systems. Please engage with it responsibly and at your own risk. Citation… See the full description on the dataset page: https://huggingface.co/datasets/fwnlp/self-instruct-safety-alignment.
lt-asset/collu-bench
lt-asset
Collu-Bench: A Benchmark for Predicting LLM Hallucinations in Code Despite their success, large language models (LLMs) face the critical challenge of hallucinations, generating plausible but incorrect content. While much research has focused on hallucinations in multiple modalities including images and natural language text, less attention has been given to hallucinations in source code, which leads to incorrect and vulnerable code that causes significant financial loss. To… See the full description on the dataset page: https://huggingface.co/datasets/lt-asset/collu-bench.
FreedomIntelligence/ApolloMoEDataset
FreedomIntelligence
Democratizing Medical LLMs For Much More Languages Covering 12 Major Languages including English, Chinese, French, Hindi, Spanish, Arabic, Russian, Japanese, Korean, German, Italian, Portuguese and 38 Minor Languages So far. 📃 Paper • 🌐 Demo • 🤗 ApolloMoEDataset • 🤗 ApolloMoEBench • 🤗 Models •🌐 Apollo • 🌐 ApolloMoE 🌈 Update [2024.10.15] ApolloMoE repo is published!🎉 Languages Coverage 12 Major Languages and 38 Minor Languages… See the full description on the dataset page: https://huggingface.co/datasets/FreedomIntelligence/ApolloMoEDataset.
gretelai/gretel-pii-masking-en-v1
gretelai
Gretel Synthetic Domain-Specific Documents Dataset (English) This dataset is a synthetically generated collection of documents enriched with Personally Identifiable Information (PII) and Protected Health Information (PHI) entities spanning multiple domains. Created using Gretel Navigator with mistral-nemo-2407 as the backend model, it is specifically designed for fine-tuning Gliner models. The dataset contains document passages featuring PII/PHI entities from a wide range of… See the full description on the dataset page: https://huggingface.co/datasets/gretelai/gretel-pii-masking-en-v1.
ChuGyouk/CompositionalGSM_augmented
ChuGyouk
Compositional GSM_augmented Compositional GSM_augmented is a math instruction dataset, inspired by Not All LLM Reasoners Are Created Equal. It is based on the nvidia/OpenMathInstruct-2 dataset, so you can use it as a training dataset. It was generated using the meta-llama/Meta-Llama-3.1-70B-Instruct model via Hyperbolic AI (link). (Thanks for the free credit!) The description of the data below follows the paper. Each question in compositional GSM consists of two… See the full description on the dataset page: https://huggingface.co/datasets/ChuGyouk/CompositionalGSM_augmented.
THU-KEG/RM-Bench
THU-KEG
RM-Bench This repository contains the data of the paper "RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style" Dataset Details the samples are formatted as follows: { "id": // unique identifier of the sample, "prompt": // the prompt given to the model, "chosen": [ "resp_1", // the chosen response with concise style, "resp_2", // the chosen response with detailed style and formatted as plain text… See the full description on the dataset page: https://huggingface.co/datasets/THU-KEG/RM-Bench.
AIML-TUDA/CLEVR-Sudoku
AIML-TUDA
CLEVR-Sudoku is a synthetic dataset for the task of Sudoku puzzle solving, generated using the CLEVR engine. It consists of 3x3 Sudoku puzzles with varying levels of difficulty, divided into three categories based on the number of known cells in the puzzle: Easy (K10), Medium (K30), and Hard (K50). Each puzzle is accompanied by a set of 10 possible solutions. The dataset is available for download as a zip file containing 256x256-pixel PNG images and JSON files; each JSON file stores the puzzle, the solution, and the possible solutions as a dictionary. It is intended for use in the development of machine learning models for Sudoku puzzle solving.
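Once the cell images are recognized, the remaining symbolic half of the task is classic constraint solving. A minimal backtracking-solver sketch, assuming the puzzles follow standard Sudoku rules on a 9x9 grid with 3x3 blocks; the list-of-lists encoding (0 for an unknown cell) is an illustrative assumption, not the dataset's JSON schema:

```python
def solve(grid):
    """Fill the first empty cell with a legal value and recurse (in place)."""
    for i in range(81):
        r, c = divmod(i, 9)
        if grid[r][c] == 0:
            # Values already used in this row, column, and 3x3 block.
            used = set(grid[r]) | {grid[x][c] for x in range(9)}
            br, bc = 3 * (r // 3), 3 * (c // 3)
            used |= {grid[br + x][bc + y] for x in range(3) for y in range(3)}
            for v in range(1, 10):
                if v not in used:
                    grid[r][c] = v
                    if solve(grid):
                        return True
                    grid[r][c] = 0  # undo and try the next value
            return False  # dead end: backtrack
    return True  # no empty cells left: solved

# Toy usage: blank a few cells of a known-valid grid and recover a solution.
full = [[(3 * (r % 3) + r // 3 + c) % 9 + 1 for c in range(9)] for r in range(9)]
puzzle = [row[:] for row in full]
for r, c in [(0, 0), (1, 4), (2, 7), (4, 4), (5, 2), (7, 6)]:
    puzzle[r][c] = 0
assert solve(puzzle)
```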
extrasensory/reasoning-biochem
extrasensory
This is a synthetic reasoning dataset generated from the PrimeKG biomedical knowledge graph. It contains verifiable reasoning traces generated using the approach outlined in Synthetic CoT Reasoning Trace Generation from Knowledge Graphs. The synthetic chain-of-thought data is generated procedurally using program synthesis and logic programming which is able to produce vast quantities of verifiable forward reasoning traces with minimal human oversight. The benchmark is intended to be used to… See the full description on the dataset page: https://huggingface.co/datasets/extrasensory/reasoning-biochem.
USTC-KnowledgeComputingLab/ElectrolyteBench
USTC-KnowledgeComputingLab
AI for Electrolyte is gaining increasing attention. To evaluate the performance of large models in the field of electrolytes, we collaborated with chemists to build a test set called ElectrolyteBench. To the best of our knowledge, we are the first to design such a dataset for LLMs. We hope this work will attract more attention to this field and contribute to the advancement of AI for Electrolyte. ElectrolyteBench includes 4 core tasks: Molecular Property, Electrolyte Formula, Text Understanding… See the full description on the dataset page: https://huggingface.co/datasets/USTC-KnowledgeComputingLab/ElectrolyteBench.
TIGER-Lab/MEGA-Bench
TIGER-Lab
MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks 🌐 Homepage | 🏆 Leaderboard | 🤗 Dataset | 🤗 Paper | 🔎 Visualization | 📖 arXiv | GitHub ❗❗ Data Information We put the file paths of images/videos in the HF datasets. Please download the zipped data here. We chose not to include images directly in the Parquet files because the Hugging Face Datasets viewer cannot display rows beyond a size limit, causing visualization failures on some of our… See the full description on the dataset page: https://huggingface.co/datasets/TIGER-Lab/MEGA-Bench.
united-we-care/United-Syn-Med
united-we-care
United-MedSyn Dataset Description The United-MedSyn dataset is a specialized medical speech dataset designed to evaluate and improve Automatic Speech Recognition (ASR) systems within the healthcare domain. It comprises English medical speech recordings, with a particular focus on medical terminology and clinical conversations. The dataset is well-suited for various ASR tasks, including speech recognition, transcription, and classification, facilitating the… See the full description on the dataset page: https://huggingface.co/datasets/united-we-care/United-Syn-Med.
OpenDFM/MobA-MobBench
OpenDFM
🎮 MobA manipulates mobile phones just like how you would. 🌐 Website | 📃 Paper | 🤗 MobBench | 🗃️ Code 简体中文 | English 🔥 News [2024.10.18] We open-source MobA on GitHub, and our paper is available on arXiv. 📖 Introduction Current mobile assistants are limited by dependence on system APIs or struggle with complex user instructions and diverse interfaces due to restricted comprehension and decision-making abilities. To address these challenges, we… See the full description on the dataset page: https://huggingface.co/datasets/OpenDFM/MobA-MobBench.
prometheus-eval/MMQA
prometheus-eval
Links for Reference Repository: In Progress Paper: https://arxiv.org/abs/2410.17578 Point of Contact: [email protected] / [email protected] Multilingual Multicultural-Question Answering (MMQA) MMQA is a multilingual and multicultural long-form question-answering dataset, which originated as a subset of the MM-Eval benchmark. MMQA features long-form question-answer pairs that inquire about culture-related contexts in seven languages: Bengali, Korean… See the full description on the dataset page: https://huggingface.co/datasets/prometheus-eval/MMQA.
rasoul-nikbakht/NetSpec-LLM
rasoul-nikbakht
📁 Network Spec for LLM Understanding 📄 Overview This repository houses a comprehensive collection of ETSI (European Telecommunications Standards Institute) documents, systematically downloaded, processed, and organized for streamlined access and analysis. Each ETSI deliverable is paired with its corresponding metadata to ensure thorough information management. 🔍 Data Processing Workflow The data processing involves two main scripts that automate… See the full description on the dataset page: https://huggingface.co/datasets/rasoul-nikbakht/NetSpec-LLM.
llm-editing/HalluEditBench
llm-editing
Can Knowledge Editing Really Correct Hallucinations? Repository Overview: This repository contains the code, results and dataset for the paper "Can Knowledge Editing Really Correct Hallucinations?" TLDR: We proposed HalluEditBench to holistically benchmark knowledge editing methods in correcting real-world hallucinations on five dimensions including Efficacy, Generalization, Portability, Locality, and Robustness. We find that their effectiveness could be far from what their… See the full description on the dataset page: https://huggingface.co/datasets/llm-editing/HalluEditBench.
Aarushhh/math-reasoning-10k
Aarushhh
Math-reasoning-10k Dataset Summary This dataset contains reasoning paths/plans generated for the first 10,000 problems and solutions from the NuminaMath-CoT dataset. Each entry consists of a mathematical problem and a corresponding reasoning plan that outlines the steps required to solve the problem, without actually solving it. This approach focuses on planning and reasoning, providing a structured pathway to solving the problem rather than the final solution… See the full description on the dataset page: https://huggingface.co/datasets/Aarushhh/math-reasoning-10k.
MohamedAshraf701/query-response-dataset
MohamedAshraf701
Query Response Dataset Overview The Query Response Dataset is designed to provide a rich set of question-answer pairs, ideal for training AI models in natural language processing (NLP) tasks. This dataset contains structured query-response data that can be utilized for various applications, including chatbots, virtual assistants, and customer support systems. Dataset Details Number of Entries: 1.5K Fields: Query: The question or inquiry made by a… See the full description on the dataset page: https://huggingface.co/datasets/MohamedAshraf701/query-response-dataset.