Columns: id (string, length 6 to 121), author (string, length 2 to 42), description (string, length 0 to 6.67k)
lngo22/eval_moss_test1
lngo22
This dataset was created using LeRobot.
open-llm-leaderboard/akjindal53244__Llama-3.1-Storm-8B-details
open-llm-leaderboard
Dataset Card for Evaluation run of akjindal53244/Llama-3.1-Storm-8B Dataset automatically created during the evaluation run of model akjindal53244/Llama-3.1-Storm-8B The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/akjindal53244__Llama-3.1-Storm-8B-details.
vmayoral/eval_u850_test
vmayoral
This dataset was created using LeRobot.
highonjuice/koch_test
highonjuice
This dataset was created using LeRobot.
DevQuasar/Synthetic-Cyclic-Perception_exp1
DevQuasar
Synthetic-Cyclic-Perception Synthetic-Cyclic-Perception is a synthetic visual dataset created by iteratively generating and describing images in a cyclic fashion. Each cycle begins with a seed prompt that feeds into a diffusion model to generate an initial image. A vision model then provides a detailed description of the generated image, which becomes the prompt for the next cycle. This process is repeated across multiple cycles and batches, creating a dataset where images and… See the full description on the dataset page: https://huggingface.co/datasets/DevQuasar/Synthetic-Cyclic-Perception_exp1.
jeanflop/post-ocr-correction
jeanflop
Synthetic OCR Correction Dataset This dataset is a synthetic dataset generated for post-OCR correction tasks. It contains over 2,000,000 rows of French text pairs and follows the Croissant format. It is designed to train small language models (LLMs) for text correction. Description To ensure the dataset closely resembles OCR-malformed texts, we applied various transformations randomly. This approach helps avoid the LLM identifying specific patterns and encourages… See the full description on the dataset page: https://huggingface.co/datasets/jeanflop/post-ocr-correction.
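The card notes that random transformations were applied so the text resembles OCR-damaged output, but it does not list them; the sketch below only illustrates that general idea (the confusion table, probabilities, and example text are assumptions, not the authors' pipeline).

```python
import random

# Illustrative character-level corruptions loosely mimicking OCR damage.
# The confusion table and probabilities are assumptions, not the
# transformations actually used to build this dataset.
CONFUSIONS = {"l": "1", "O": "0", "é": "e", "c": "ç", "i": "í"}

def corrupt(text: str, p: float = 0.05) -> str:
    out = []
    for ch in text:
        r = random.random()
        if r < p and ch in CONFUSIONS:
            out.append(CONFUSIONS[ch])   # swap in a look-alike glyph
        elif r < p / 2:
            continue                     # occasionally drop a character
        else:
            out.append(ch)
    return "".join(out)

clean = "Le château était éclairé par la lune."
print(clean, "->", corrupt(clean))
```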
JWBickel/BibleRAGChroma
JWBickel
This dataset covers the entire King James Version of the Bible (KJV). It groups the text by pericope heading into parent texts. In each of these groups, the text is chunked with overlap, and id strings are given for the parent text and each chunk. For each chunk, there is a theme and a list of keywords, as well as a theme and keywords representing the parent text. These themes and keywords are derived from an LLM. This instruction was included in the prompt to combine them into the… See the full description on the dataset page: https://huggingface.co/datasets/JWBickel/BibleRAGChroma.
habedi/geo
habedi
Geotechnical and Resistivity Model Datasets The Geotechnical Data directory includes measurements related to soil and rock properties. The Resistivity Model Data directory includes resistivity profiles with information about subsurface conductivity variations. The Features and Targets directory contains cleaned and processed data for model training and prediction.
PlanAPlanB/reddit_dataset_8
PlanAPlanB
Bittensor Subnet 13 Reddit Dataset Dataset Summary This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed Reddit data. The data is continuously updated by network miners, providing a real-time stream of Reddit content for various analytical and machine learning tasks. For more information about the dataset, please visit the official repository. Supported Tasks The versatility of this dataset… See the full description on the dataset page: https://huggingface.co/datasets/PlanAPlanB/reddit_dataset_8.
self-generate/iter_exp_no_mask_ds_chat_regular_adamw_iter4_sppo_hard_new_cn_mining_oj_iter4-binarized
self-generate
Dataset Card for "iter_exp_no_mask_ds_chat_regular_adamw_iter4_sppo_hard_new_cn_mining_oj_iter4-binarized" More Information needed
dogtooth/off-policy-0.1-with-on-policy-0.1-iter3_generated_helpsteer2_binarized_1730062816
dogtooth
allenai/open_instruct: Generation Dataset See https://github.com/allenai/open-instruct/blob/main/docs/algorithms/rejection_sampling.md for more detail Configs args: {'add_timestamp': True, 'alpaca_eval': False, 'dataset_end_idx': 8636, 'dataset_mixer_list': ['dogtooth/helpsteer2_binarized', '1.0'], 'dataset_splits': ['train'], 'dataset_start_idx': 0, 'hf_entity': 'dogtooth', 'hf_repo_id':… See the full description on the dataset page: https://huggingface.co/datasets/dogtooth/off-policy-0.1-with-on-policy-0.1-iter3_generated_helpsteer2_binarized_1730062816.
open-llm-leaderboard/shastraai__Shastra-LLAMA2-Math-Commonsense-SFT-details
open-llm-leaderboard
Dataset Card for Evaluation run of shastraai/Shastra-LLAMA2-Math-Commonsense-SFT Dataset automatically created during the evaluation run of model shastraai/Shastra-LLAMA2-Math-Commonsense-SFT The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/shastraai__Shastra-LLAMA2-Math-Commonsense-SFT-details.
open-llm-leaderboard/adriszmar__QAIMath-Qwen2.5-7B-TIES-details
open-llm-leaderboard
Dataset Card for Evaluation run of adriszmar/QAIMath-Qwen2.5-7B-TIES Dataset automatically created during the evaluation run of model adriszmar/QAIMath-Qwen2.5-7B-TIES The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/adriszmar__QAIMath-Qwen2.5-7B-TIES-details.
hon9kon9ize/yue_emo_speech
hon9kon9ize
Cantonese Emotional Speech Crawled from YouTube and RTHK, this dataset contains 1,000 hours of Cantonese speech, with each clip labeled with one of the following emotions: angry, disgusted, fearful, happy, neutral, other, sad, or surprised. The dataset also includes the confidence of the emotion label. The audio files are denoised with resemble-enhance. The transcriptions are generated by SenseVoiceSmall and deduplicated using MinHash.
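The card says the transcriptions are deduplicated with MinHash without giving details; below is a minimal sketch of one common way to do this with the datasketch library (the library choice, character-shingle size, and similarity threshold are assumptions).

```python
from datasketch import MinHash, MinHashLSH

def minhash_of(text: str, num_perm: int = 128, n: int = 3) -> MinHash:
    """MinHash over character n-gram shingles of a transcription."""
    m = MinHash(num_perm=num_perm)
    for i in range(max(len(text) - n + 1, 1)):
        m.update(text[i:i + n].encode("utf8"))
    return m

transcripts = {
    "utt1": "你好，今日天氣真係好好",
    "utt2": "你好，今日天氣真係好好呀",   # near-duplicate of utt1
    "utt3": "唔該借借",
}

lsh = MinHashLSH(threshold=0.8, num_perm=128)  # Jaccard threshold is an assumption
kept = {}
for utt_id, text in transcripts.items():
    mh = minhash_of(text)
    if lsh.query(mh):        # a near-duplicate was already kept, skip this one
        continue
    lsh.insert(utt_id, mh)
    kept[utt_id] = text

print(list(kept))            # near-duplicates of earlier transcripts are filtered out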
sarincasm/hf-nlp-github-issues
sarincasm
Dataset Card for HF NLP GitHub Issues Created from the NLP course here: https://huggingface.co/learn/nlp-course/chapter5/5 Dataset Details Dataset Description Dataset Sources [optional] Repository: https://github.com/huggingface/datasets/issues Uses Direct Use [More Information Needed] Out-of-Scope Use [More Information Needed] Dataset Structure [More… See the full description on the dataset page: https://huggingface.co/datasets/sarincasm/hf-nlp-github-issues.
open-llm-leaderboard/zelk12__MT2-Gen1-gemma-2-9B-details
open-llm-leaderboard
Dataset Card for Evaluation run of zelk12/MT2-Gen1-gemma-2-9B Dataset automatically created during the evaluation run of model zelk12/MT2-Gen1-gemma-2-9B The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/zelk12__MT2-Gen1-gemma-2-9B-details.
robinhad/flickr30k
robinhad
Flickr30k Original paper: From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions Homepage: https://shannon.cs.illinois.edu/DenotationGraph/ Bibtex: @article{young2014image, title={From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions}, author={Young, Peter and Lai, Alice and Hodosh, Micah and Hockenmaier, Julia}, journal={Transactions of the… See the full description on the dataset page: https://huggingface.co/datasets/robinhad/flickr30k.
dogtooth/off-policy-0.1-with-on-policy-0.1-iter4_generated_helpsteer2_binarized_1730070403
dogtooth
allenai/open_instruct: Generation Dataset See https://github.com/allenai/open-instruct/blob/main/docs/algorithms/rejection_sampling.md for more detail Configs args: {'add_timestamp': True, 'alpaca_eval': False, 'dataset_end_idx': 8636, 'dataset_mixer_list': ['dogtooth/helpsteer2_binarized', '1.0'], 'dataset_splits': ['train'], 'dataset_start_idx': 0, 'hf_entity': 'dogtooth', 'hf_repo_id':… See the full description on the dataset page: https://huggingface.co/datasets/dogtooth/off-policy-0.1-with-on-policy-0.1-iter4_generated_helpsteer2_binarized_1730070403.
zwang2/virus_host_db
zwang2
Virus-Host Database for Viral Genomics Dataset Summary This dataset consolidates information from the Virus-Host DB, a resource by the Kyoto University Bioinformatics Center that documents virus-host relationships using taxonomy identifiers from NCBI and RefSeq. It includes viral genomic data, host associations, and metadata, offering a comprehensive resource for studying virus-host interactions. Source and Citation Data in this dataset is sourced… See the full description on the dataset page: https://huggingface.co/datasets/zwang2/virus_host_db.
DenyTranDFW/PowerBI_Extracts
DenyTranDFW
DATA SOURCES: GitHub, Microsoft Fabric Community. CREDITS: Primary parser: Hugoberry's PBIXRay; manual parser (troublesome files): Didier Terrien's PowerBI SideTools; CSV extractor (troublesome files): Bravo by SQLBI; Parquet viewer (to check Parquet outputs): Sal's ParquetViewer.
laelhalawani/opus_and_europarl_en_ro
laelhalawani
This dataset is a combination and conversion of two En-Ro datasets published by the University of Helsinki, specifically Opus-100 and Europarl. 1,404,356 En-Ro text-pair samples were extracted and combined into a single dataset. Each sample is a dict with two keys corresponding to the text-pair language names, en and ro; the values of those keys contain the matching text in the given language. NOTE: Some initial analysis reveals many of these samples face formatting issues, and might be facing… See the full description on the dataset page: https://huggingface.co/datasets/laelhalawani/opus_and_europarl_en_ro.
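A minimal sketch of reading the text-pair structure described above with the datasets library; the split name "train" is an assumption based on the description.

```python
from datasets import load_dataset

# Inspect the en/ro text-pair structure described above.
# The split name "train" is an assumption.
ds = load_dataset("laelhalawani/opus_and_europarl_en_ro", split="train")

sample = ds[0]
print(sample["en"])  # English side of the pair
print(sample["ro"])  # Romanian side of the pair
```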
BioResearchAI/DAE_train
BioResearchAI
deal_data.zip: training data related to deals. drug_profile.zip: training data related to drug_profile.
lmarena-ai/Llama-3-70b-battles
lmarena-ai
Chatbot Arena user conversations between Llama-3-70b VS GPT-4-1025 or Llama-3-70b VS Claude-3-Opus with user preference votes. Single turn. Excludes ties. Used in Llama Data Analysis blog post and "VibeCheck: Discover and Quantify Qualitative Differences in Large Language Models" (Paper, Code). Citation @article{dunlap_vibecheck, title={VibeCheck: Discover and Quantify Qualitative Differences in Large Language Models}, author={Lisa Dunlap and Krishna Mandal and Trevor… See the full description on the dataset page: https://huggingface.co/datasets/lmarena-ai/Llama-3-70b-battles.
Interformed/RawImages_One
Interformed
Raw Image Dataset Number 1 Raw Image Dataset Number 1 is a collection of user-generated content from a variety of publicly available sources across the internet. Raw Image Dataset Number 1 (RIDiN1)'s images primarily come from Reddit. Most, if not all, of the state-of-the-art diffusion and image generation models come 'aligned' out-of-the-box. While AI safety is an important aspect of this profession, such alignment often results in less than ideal outputs from the model. The… See the full description on the dataset page: https://huggingface.co/datasets/Interformed/RawImages_One.
DeL-TaiseiOzaki/magpie-reasonig-ja-qwen2.5-72b-16k
DeL-TaiseiOzaki
Synthetic Japanese Instruction Dataset Overview This dataset is a collection of Japanese instructions, together with the reasoning, initial responses, and refined responses for them, generated automatically with a large language model (LLM). It is intended for training and evaluation on instruction-following tasks. Dataset specifications Number of samples: about 16,000 Language: Japanese Format: JSONL Generation method The dataset was generated with the Qwen2.5 72B Instruct model, and each sample was produced as follows: a) randomly select one of 10 persona system prompts and generate an instruction via 3-shot learning from pre-prepared example instructions; b) remove duplicate instructions using a cosine-similarity threshold of 0.9; c) generate a reasoning procedure for answering the instruction; d) generate an initial answer from the instruction and the reasoning procedure; e) self-refine the initial answer to produce the final answer… See the full description on the dataset page: https://huggingface.co/datasets/DeL-TaiseiOzaki/magpie-reasonig-ja-qwen2.5-72b-16k.
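Step (b) above deduplicates instructions at a cosine-similarity threshold of 0.9; the card does not say which embedding model or library was used, so the sketch below is just one plausible implementation (the sentence-transformers model name and the example instructions are assumptions).

```python
from sentence_transformers import SentenceTransformer, util

# Greedy near-duplicate filtering at cosine similarity >= 0.9, as in step (b).
# The embedding model is an assumption; the card does not name one.
model = SentenceTransformer("intfloat/multilingual-e5-small")

instructions = [
    "日本の四季について説明してください。",
    "日本の四季を説明してください。",       # near-duplicate of the first
    "再帰関数の仕組みを教えてください。",
]
embeddings = model.encode(instructions, convert_to_tensor=True)

kept = []
for i in range(len(instructions)):
    if not any(util.cos_sim(embeddings[i], embeddings[j]).item() >= 0.9 for j in kept):
        kept.append(i)

print([instructions[i] for i in kept])  # the near-identical second instruction is likely dropped
```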
dataTTATTABUL/images_with_embedded_images
dataTTATTABUL
JacobLinCool/common_voice_16_1_zh_TW_clean
JacobLinCool
This dataset is derived from mozilla-foundation/common_voice_16_1, with a clean (denoised) audio column using MP-SENet. The original "noisy" audio is stored in the "original" column.
bitmind/AFHQ
bitmind
https://vis-www.cs.umass.edu/lfw/ Gary B. Huang, Manu Ramesh, Tamara Berg, and Erik Learned-Miller. Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments. University of Massachusetts, Amherst, Technical Report 07-49, October, 2007. [pdf]
yfyeung/librilight-hubert-base-ls960-iter2-l9-km500
yfyeung
HuBERT Base Model: https://dl.fbaipublicfiles.com/hubert/hubert_base_ls960.pt Quantizer: https://dl.fbaipublicfiles.com/hubert/hubert_base_ls960_L9_km500.bin
EleutherAI/tinystories-pretokenized-pythia
EleutherAI
Dataset Card for "tinystories-pretokenized-pythia" More Information needed
YxBxRyXJx/FRAMEimages_train_1028
YxBxRyXJx
Dataset Card for "FRAMEimages_train_1028" More Information needed
YxBxRyXJx/FRAMEimages_test_1028
YxBxRyXJx
Dataset Card for "FRAMEimages_test_1028" More Information needed
longmaodata/Adult-Voice
longmaodata
Dataset Introduction TTS average voice library Version v1.0 Release Date 2024-10-15 Data Description Age: 18-70 years old, balanced across all ages Gender: Balanced between males and females Accent: Mandarin accent Recording environment: Background environment is quiet, SNR>20 dB Number of participants: Around 200 people, 100 from southern China and 100 from northern China Distance from microphone: Less than 20cm Style:… See the full description on the dataset page: https://huggingface.co/datasets/longmaodata/Adult-Voice.
open-llm-leaderboard/PJMixers-Dev__LLaMa-3.2-Instruct-JankMix-v0.2-SFT-HailMary-v0.1-KTO-3B-details
open-llm-leaderboard
Dataset Card for Evaluation run of PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-HailMary-v0.1-KTO-3B Dataset automatically created during the evaluation run of model PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-HailMary-v0.1-KTO-3B The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/PJMixers-Dev__LLaMa-3.2-Instruct-JankMix-v0.2-SFT-HailMary-v0.1-KTO-3B-details.
amphora/enko-math-translate-sft
amphora
This is a merge of kuotient/orca-math-word-problems-193k-korean and ChuGyouk/AI-MO-NuminaMath-CoT-Ko.
Gunther520/instruction-dataset-mini-with-generations2
Gunther520
Dataset Card for instruction-dataset-mini-with-generations2 This dataset has been created with distilabel. Dataset Summary This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI: distilabel pipeline run --config "https://huggingface.co/datasets/Gunther520/instruction-dataset-mini-with-generations2/raw/main/pipeline.yaml" or explore the configuration: distilabel… See the full description on the dataset page: https://huggingface.co/datasets/Gunther520/instruction-dataset-mini-with-generations2.
open-llm-leaderboard/TheTsar1209__qwen-carpmuscle-v0.3-details
open-llm-leaderboard
Dataset Card for Evaluation run of TheTsar1209/qwen-carpmuscle-v0.3 Dataset automatically created during the evaluation run of model TheTsar1209/qwen-carpmuscle-v0.3 The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/TheTsar1209__qwen-carpmuscle-v0.3-details.
open-llm-leaderboard/Etherll__Qwen2.5-Coder-7B-Instruct-Ties-details
open-llm-leaderboard
Dataset Card for Evaluation run of Etherll/Qwen2.5-Coder-7B-Instruct-Ties Dataset automatically created during the evaluation run of model Etherll/Qwen2.5-Coder-7B-Instruct-Ties The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/Etherll__Qwen2.5-Coder-7B-Instruct-Ties-details.
djbravo767648/15k_ai_nonai_images
djbravo767648
Dataset Card for Dataset Name This dataset card aims to be a base template for new datasets. It has been generated using this raw template. Dataset Details Dataset Description Curated by: [More Information Needed] Funded by [optional]: [More Information Needed] Shared by [optional]: [More Information Needed] Language(s) (NLP): [More Information Needed] License: [More Information Needed] Dataset Sources [optional]… See the full description on the dataset page: https://huggingface.co/datasets/djbravo767648/15k_ai_nonai_images.
dogtooth/off-policy-0.1-with-on-policy-0.1-uf_iter1_generated_ultrafeedback_binarized_1730092299
dogtooth
allenai/open_instruct: Generation Dataset See https://github.com/allenai/open-instruct/blob/main/docs/algorithms/rejection_sampling.md for more detail Configs args: {'add_timestamp': True, 'alpaca_eval': False, 'dataset_end_idx': 61135, 'dataset_mixer_list': ['HuggingfaceH4/ultrafeedback_binarized', '1.0'], 'dataset_splits': ['train_prefs'], 'dataset_start_idx': 0, 'hf_entity': 'dogtooth', 'hf_repo_id':… See the full description on the dataset page: https://huggingface.co/datasets/dogtooth/off-policy-0.1-with-on-policy-0.1-uf_iter1_generated_ultrafeedback_binarized_1730092299.
ChesterHung/hf_sync
ChesterHung
Detailed description Dataset name: HF sync test Status: active Author: ChesterHung Created: 2024-10-28T03:21:23.078918 Updated: 2024-10-28T05:31:40.695759 Original URL: CKAN - ChesterHung/hf_sync
wonderwind271/ACL-papers
wonderwind271
Dataset Card for "ACL-papers" More Information needed
kogi-jwu/cl-humaneval_v1.0
kogi-jwu
CL-HumanEval Dataset Description CL-HumanEval is a benchmark for evaluating cross-lingual transfer through code generation. It is based on the code generation benchmark HumanEval. Languages The dataset contains coding problems in 2 natural languages: English and Japanese. Dataset Structure from datasets import load_dataset load_dataset("kogi-jwu/cl-humaneval_v1.0", "en") DatasetDict({ test: Dataset({ features: ['task_id'… See the full description on the dataset page: https://huggingface.co/datasets/kogi-jwu/cl-humaneval_v1.0.
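A cleaner version of the loading snippet shown above; the "en" config name comes from the card, while "ja" for the Japanese problems is an assumption.

```python
from datasets import load_dataset

# "en" follows the card's example; "ja" for the Japanese problems is an assumption.
cl_humaneval_en = load_dataset("kogi-jwu/cl-humaneval_v1.0", "en")
cl_humaneval_ja = load_dataset("kogi-jwu/cl-humaneval_v1.0", "ja")

print(cl_humaneval_en)                        # DatasetDict with a "test" split, per the card
print(cl_humaneval_en["test"][0]["task_id"])  # identifier of the first coding problem
```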
ShuoShuoShuo/PromptsV3
ShuoShuoShuo
Dataset Card for "PromptsV3" More Information needed
multimolecule/bprna-new
multimolecule
bpRNA-new bpRNA-new is a database of single molecule secondary structures annotated using bpRNA. bpRNA-new is a dataset of RNA families from Rfam 14.2, designed for cross-family validation to assess generalization capability. It focuses on families distinct from those in bpRNA-1m, providing a robust benchmark for evaluating model performance on unseen RNA families. Disclaimer This is an UNOFFICIAL release of the bpRNA-new by Kengo Sato, et al. The team releasing… See the full description on the dataset page: https://huggingface.co/datasets/multimolecule/bprna-new.
multimolecule/rnastralign
multimolecule
RNAStrAlign RNAStrAlign is a comprehensive dataset of RNA sequences and their secondary structures. RNAStrAlign aggregates data from multiple established RNA structure repositories, covering diverse RNA families such as 5S ribosomal RNA, tRNA, and group I introns. It is considered complementary to the ArchiveII dataset. Disclaimer This is an UNOFFICIAL release of the RNAStrAlign by Zhen Tan, et al. The team releasing RNAStrAlign did not write this dataset card for… See the full description on the dataset page: https://huggingface.co/datasets/multimolecule/rnastralign.
multimolecule/archiveii
multimolecule
ArchiveII ArchiveII is a dataset of RNA sequences and their secondary structures, widely used in RNA secondary structure prediction benchmarks. ArchiveII contains 2975 RNA samples across 10 RNA families, with sequence lengths ranging from 28 to 2968 nucleotides. This dataset is frequently used to evaluate RNA secondary structure prediction methods, including those that handle both pseudoknotted and non-pseudoknotted structures. It is considered complementary to the RNAStrAlign… See the full description on the dataset page: https://huggingface.co/datasets/multimolecule/archiveii.
multimolecule/rivas-a
multimolecule
RIVAS The RIVAS dataset is a curated collection of RNA sequences and their secondary structures, designed for training and evaluating RNA secondary structure prediction methods. The dataset combines sequences from published studies and databases like Rfam, covering diverse RNA families such as tRNA, SRP RNA, and ribozymes. The secondary structure data is obtained from experimentally verified structures and consensus structures from Rfam alignments, ensuring high-quality… See the full description on the dataset page: https://huggingface.co/datasets/multimolecule/rivas-a.
multimolecule/rivas-b
multimolecule
RIVAS The RIVAS dataset is a curated collection of RNA sequences and their secondary structures, designed for training and evaluating RNA secondary structure prediction methods. The dataset combines sequences from published studies and databases like Rfam, covering diverse RNA families such as tRNA, SRP RNA, and ribozymes. The secondary structure data is obtained from experimentally verified structures and consensus structures from Rfam alignments, ensuring high-quality… See the full description on the dataset page: https://huggingface.co/datasets/multimolecule/rivas-b.
Sajid121/Bevgen
Sajid121
Dataset Card for Dataset Name This dataset card aims to be a base template for new datasets. It has been generated using this raw template. Dataset Details Dataset Description Curated by: [More Information Needed] Funded by [optional]: [More Information Needed] Shared by [optional]: [More Information Needed] Language(s) (NLP): [More Information Needed] License: [More Information Needed] Dataset Sources [optional]… See the full description on the dataset page: https://huggingface.co/datasets/Sajid121/Bevgen.
dogtooth/off-policy-0.1-with-on-policy-0.1-uf_iter1_generated_ultrafeedback_binarized_gold
dogtooth
allenai/open_instruct: Rejection Sampling Dataset See https://github.com/allenai/open-instruct/blob/main/docs/algorithms/rejection_sampling.md for more detail Configs args: {'add_timestamp': False, 'hf_entity': 'dogtooth', 'hf_repo_id': 'off-policy-0.1-with-on-policy-0.1-uf_iter1_generated_ultrafeedback_binarized_gold', 'hf_repo_id_scores': 'rejection_sampling_scores', 'include_reference_completion_for_rejection_sampling': True, 'input_filename':… See the full description on the dataset page: https://huggingface.co/datasets/dogtooth/off-policy-0.1-with-on-policy-0.1-uf_iter1_generated_ultrafeedback_binarized_gold.
malshanCS/LAAIIntentD
malshanCS
The LAAIIntentD dataset is designed for intent classification in educational interactions, specifically targeting students from high school to university levels. The dataset includes subjects such as Mathematics, ICT, Physics, Chemistry, and Computer Science. Using a Retrieval-Augmented Generation (RAG) approach, it leverages educational resources from Sri Lankan curricula and high school physics texts to ensure realistic and contextually relevant interactions. Each JSON entry contains fields… See the full description on the dataset page: https://huggingface.co/datasets/malshanCS/LAAIIntentD.
laion/Project-Gutenberg
laion
Project Gutenberg Introducing Project Gutenberg, a dataset that provides access to all the books available in that project. In our dataset, we wanted to provide a bulk download option giving access to Gutenberg books in ten different languages: English, German, French, Polish, Portuguese, Dutch, Spanish, Hebrew, Russian, and Chinese. English has the largest collection of books, followed by German. We are releasing this dataset for researchers and engineers to integrate… See the full description on the dataset page: https://huggingface.co/datasets/laion/Project-Gutenberg.
SocialHackathonsEFELIA/Movie_comments
SocialHackathonsEFELIA
features: text (string); label (classlabel, num_classes: 2, names: ["negative", "positive"])
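The schema above, spelled out with the datasets library; this is a sketch of the declared features, not code shipped with the dataset.

```python
from datasets import ClassLabel, Features, Value

# The declared schema: a text string and a binary sentiment label.
features = Features(
    {
        "text": Value("string"),
        "label": ClassLabel(names=["negative", "positive"]),
    }
)

print(features["label"].int2str(1))  # "positive"
```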
Arunisto/brain_tumor_dataset
Arunisto
Brain Tumor Dataset This Parquet dataset contains two different brain conditions, healthy and tumor, and is intended for classifying brain tumors. The images contain MRI scans of the two brain conditions; developers can use this dataset for classification, detection, and segmentation.
ibm/debate_speeches
ibm
Debate speeches dataset A dataset of annotated debate speeches on various topics. The data contains speeches by human expert debaters as well as speeches created using automated pipelines. The quality of the speeches is scored by human annotators. Opening Speeches This is a collection of annotated opening speeches, as described in the Project Debater paper published in Nature. A detailed description of the data collection process can be found here. Each row in the… See the full description on the dataset page: https://huggingface.co/datasets/ibm/debate_speeches.
arrmlet/reddit_dataset_155
arrmlet
Bittensor Subnet 13 Reddit Dataset Dataset Summary This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed Reddit data. The data is continuously updated by network miners, providing a real-time stream of Reddit content for various analytical and machine learning tasks. For more information about the dataset, please visit the official repository. Supported Tasks The versatility of this dataset… See the full description on the dataset page: https://huggingface.co/datasets/arrmlet/reddit_dataset_155.
jeanflop/post_ocr_correction2
jeanflop
Synthetic OCR Correction Dataset This dataset is a synthetic dataset generated for post-OCR correction tasks. It contains over 2,000,000 rows of French text pairs and follows the Croissant format. It is designed to train small language models (LLMs) for text correction. Description To ensure the dataset closely resembles OCR-malformed texts, we applied various transformations randomly. This approach helps avoid the LLM identifying specific patterns and encourages… See the full description on the dataset page: https://huggingface.co/datasets/jeanflop/post_ocr_correction2.
lnzdanese/koch_test2
lnzdanese
This dataset was created using LeRobot.
kartiksrma/Poltical-Ideology-Synthetic
kartiksrma
Source This dataset was generated through instruction-style prompting of GPT-4o. The prompt asked it to generate 2,000 tweets/short texts that a person of a particular political ideology would say or post, where the classes were: Extreme Left, Left, Centre, Right, Extreme Right.
kailinjiang/MMKE-Bench-dataset
kailinjiang
🤗 Dataset We introduce MMKE-Bench, a benchmark designed to evaluate the ability of LMMs to edit visual knowledge in real-world scenarios. MMKE-Bench incorporates three editing tasks: visual entity editing, visual semantic editing, and user-specific editing. Additionally, it uses free-form natural language to represent and edit knowledge, offering more flexibility. The benchmark includes 2,940 pieces of knowledge and 7,229 images across 110 fine-grained types, with… See the full description on the dataset page: https://huggingface.co/datasets/kailinjiang/MMKE-Bench-dataset.
vojtam/czech_books_descriptions
vojtam
This dataset contains about 10K book descriptions in Czech. The data comes from databazeknih.cz.
sidddddddddddd/snli-alphastreet2
sidddddddddddd
Custom SNLI Dataset A custom SNLI-style dataset generated using GPT-3.5 Dataset Structure Data Instances Each instance contains: premise: The initial statement hypothesis: A generated statement label: The relationship between premise and hypothesis (0: entailment, 1: contradiction, 2: neutral) Data Fields premise: string hypothesis: string label: int Data Splits train: 134 examples validation: 16 examples test: 18… See the full description on the dataset page: https://huggingface.co/datasets/sidddddddddddd/snli-alphastreet2.
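A minimal sketch of reading the splits and label convention listed above with the datasets library; split and field names follow the card.

```python
from datasets import load_dataset

# Label ids follow the card: 0 entailment, 1 contradiction, 2 neutral.
LABELS = {0: "entailment", 1: "contradiction", 2: "neutral"}

ds = load_dataset("sidddddddddddd/snli-alphastreet2")
example = ds["train"][0]
print(example["premise"])
print(example["hypothesis"])
print(LABELS[example["label"]])
```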
CarbonLover/merged_econ_newspaper_mrc
CarbonLover
Articles from the economy domain only, taken from https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=data&dataSetSn=577 (AIHUB news-article machine reading comprehension data) and merged into a single dataset.
martinsinnona/plotqa_2k
martinsinnona
Dataset Card for "plotqa_2k" More Information needed
sergioburdisso/news_media_reliability
sergioburdisso
Reliability Estimation of News Media Sources: "Birds of a Feather Flock Together" Dataset introduced in the paper "Reliability Estimation of News Media Sources: Birds of a Feather Flock Together", published at the NAACL 2024 main conference. Similar to the news media bias and factual reporting dataset, this dataset consists of a collection of 5.33K news media domain names with reliability labels. Additionally, for some domains, there is also a human-provided reliability score… See the full description on the dataset page: https://huggingface.co/datasets/sergioburdisso/news_media_reliability.
Joschka/big_bench_hard_mini
Joschka
BIG-Bench Hard mini BIG-Bench Hard mini is a subset of BIG-Bench Hard Abstract BIG-Bench (Srivastava et al., 2022) is a diverse evaluation suite that focuses on tasks believed to be beyond the capabilities of current language models. Language models have already made good progress on this benchmark, with the best model in the BIG-Bench paper outperforming average reported human-rater results on 65% of the BIG-Bench tasks via few-shot prompting. But on what tasks… See the full description on the dataset page: https://huggingface.co/datasets/Joschka/big_bench_hard_mini.
eve-esa/distilabel_test
eve-esa
Dataset Card for distilabel_test This dataset has been created with distilabel. Dataset Summary This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI: distilabel pipeline run --config "https://huggingface.co/datasets/eve-esa/distilabel_test/raw/main/pipeline.yaml" or explore the configuration: distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/eve-esa/distilabel_test.
makcedward/hudocvqa
makcedward
Dataset Card for HuDocVQA Dataset Summary HuDocVQA (Hungarian Document Visual Question Answering) is a dataset for training, evaluating, and analyzing Hungarian natural language understanding systems. We use the Hungarian Wikipedia corpus as seed documents to generate questions and answers. Llama 3.1 from SambaNova Cloud is used to generate the resource. We insert some random images (from ImageNet) and texts (such as person names and page numbers) to increase… See the full description on the dataset page: https://huggingface.co/datasets/makcedward/hudocvqa.
RUC-Graph/GPU4HL
RUC-Graph
Dataset for GPU4HL, mainly collected from https://snap.stanford.edu/ Columns: Dataset, Edges, Nodes. CA-CondMat 23,133 93,497; as-caida20071105 26,475 183,831; musae-twitch 34,118 429,113; Email-Enron 36,692 183,831; musae-github 37,700 289,003; Brightkite_edges 58,228 214,078; p2p-Gnutella31 62,586 147,892; gemsec-Facebook 134,833 1,380,293; twitch-gamers 168,114 6,797,557; Gowalla_edges 196,591 950,327; Amazon0302 262,111 899,792; Email-EuAll 265,214 365,570; web-NotreDame 325,729 1,117,563… See the full description on the dataset page: https://huggingface.co/datasets/RUC-Graph/GPU4HL.
open-llm-leaderboard/bunnycore__Llama-3.2-3B-Mix-Skill-details
open-llm-leaderboard
Dataset Card for Evaluation run of bunnycore/Llama-3.2-3B-Mix-Skill Dataset automatically created during the evaluation run of model bunnycore/Llama-3.2-3B-Mix-Skill The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/bunnycore__Llama-3.2-3B-Mix-Skill-details.
open-llm-leaderboard/allknowingroger__Ministral-8B-slerp-details
open-llm-leaderboard
Dataset Card for Evaluation run of allknowingroger/Ministral-8B-slerp Dataset automatically created during the evaluation run of model allknowingroger/Ministral-8B-slerp The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/allknowingroger__Ministral-8B-slerp-details.
open-llm-leaderboard/BramVanroy__fietje-2-details
open-llm-leaderboard
Dataset Card for Evaluation run of BramVanroy/fietje-2 Dataset automatically created during the evaluation run of model BramVanroy/fietje-2 The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/BramVanroy__fietje-2-details.
vmayoral/u850_test2
vmayoral
This dataset was created using LeRobot.
open-llm-leaderboard/BramVanroy__fietje-2-chat-details
open-llm-leaderboard
Dataset Card for Evaluation run of BramVanroy/fietje-2-chat Dataset automatically created during the evaluation run of model BramVanroy/fietje-2-chat The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/BramVanroy__fietje-2-chat-details.
open-llm-leaderboard/BramVanroy__GEITje-7B-ultra-details
open-llm-leaderboard
Dataset Card for Evaluation run of BramVanroy/GEITje-7B-ultra Dataset automatically created during the evaluation run of model BramVanroy/GEITje-7B-ultra The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/BramVanroy__GEITje-7B-ultra-details.
universalner/UNER_Slovak-SNK
universalner
UNER_Slovak-SNK The UNER dataset for the Slovak National Corpus (SNK), originally released with UNER v1. UNER_Slovak-SNK is part of Universal NER and is based on the UD_Slovak-SNK dataset. The canonical reference commit to the Universal Dependencies dataset is 4d13f4810ebba49ed41430c3e43adf5a50087f6f If you use this dataset, please cite the corresponding paper: @inproceedings{ mayhew2024universal, title={Universal NER: A Gold-Standard Multilingual Named Entity Recognition… See the full description on the dataset page: https://huggingface.co/datasets/universalner/UNER_Slovak-SNK.
mxforml/cleaned_conv_xai_augmented
mxforml
Dataset Description This dataset is a recreation of new_conv_xai, after I went back and completely cleaned up the original train and test JSONs. This dataset is derived from cleaned_conv_xai. cleaned_conv_xai integrated all the images and other metadata with each of the 30 conversations from the train and test JSON files. cleaned_conv_xai_augmented takes it a step further and converts each of those conversations into subsets of conversation histories… See the full description on the dataset page: https://huggingface.co/datasets/mxforml/cleaned_conv_xai_augmented.
sdiazlor/check_class_label
sdiazlor
Dataset Card for check_class_label This dataset has been created with distilabel. Dataset Summary This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI: distilabel pipeline run --config "https://huggingface.co/datasets/sdiazlor/check_class_label/raw/main/pipeline.yaml" or explore the configuration: distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/sdiazlor/check_class_label.
sdiazlor/check_class_label_string
sdiazlor
Dataset Card for check_class_label_string This dataset has been created with distilabel. Dataset Summary This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI: distilabel pipeline run --config "https://huggingface.co/datasets/sdiazlor/check_class_label_string/raw/main/pipeline.yaml" or explore the configuration: distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/sdiazlor/check_class_label_string.
Gunther520/big_one
Gunther520
Dataset Card for big_one This dataset has been created with distilabel. Dataset Summary This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI: distilabel pipeline run --config "https://huggingface.co/datasets/Gunther520/big_one/raw/main/pipeline.yaml" or explore the configuration: distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/Gunther520/big_one.
sdiazlor/check_class_label_single
sdiazlor
Dataset Card for check_class_label_single This dataset has been created with distilabel. Dataset Summary This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI: distilabel pipeline run --config "https://huggingface.co/datasets/sdiazlor/check_class_label_single/raw/main/pipeline.yaml" or explore the configuration: distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/sdiazlor/check_class_label_single.
open-llm-leaderboard/theprint__ReWiz-Worldbuilder-7B-details
open-llm-leaderboard
Dataset Card for Evaluation run of theprint/ReWiz-Worldbuilder-7B Dataset automatically created during the evaluation run of model theprint/ReWiz-Worldbuilder-7B The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/theprint__ReWiz-Worldbuilder-7B-details.
open-llm-leaderboard/lesubra__ECE-PRYMMAL-3B-SLERP-V1-details
open-llm-leaderboard
Dataset Card for Evaluation run of lesubra/ECE-PRYMMAL-3B-SLERP-V1 Dataset automatically created during the evaluation run of model lesubra/ECE-PRYMMAL-3B-SLERP-V1 The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/lesubra__ECE-PRYMMAL-3B-SLERP-V1-details.
open-llm-leaderboard/lesubra__ECE-PRYMMAL-3B-SLERP-V2-details
open-llm-leaderboard
Dataset Card for Evaluation run of lesubra/ECE-PRYMMAL-3B-SLERP-V2 Dataset automatically created during the evaluation run of model lesubra/ECE-PRYMMAL-3B-SLERP-V2 The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/lesubra__ECE-PRYMMAL-3B-SLERP-V2-details.
Bretagne/deltacorpus_1.1_br
Bretagne
Description Parsed version of the Breton portion of Deltacorpus_1.1, to make it easier to use. Citation @misc{11234/1-1743, title = {Deltacorpus 1.1}, author = {Mare{\v c}ek, David and Yu, Zhiwei and Zeman, Daniel and {\v Z}abokrtsk{\'y}, Zden{\v e}k}, url = {http://hdl.handle.net/11234/1-1743}, note = {{LINDAT}/{CLARIAH}-{CZ} digital library at the Institute of Formal and Applied Linguistics ({{\'U}FAL}), Faculty of Mathematics and Physics… See the full description on the dataset page: https://huggingface.co/datasets/Bretagne/deltacorpus_1.1_br.
Vampyrian/products_with_category
Vampyrian
Dataset Card for Dataset Name This dataset card aims to be a base template for new datasets. It has been generated using this raw template. Dataset Card Authors [optional] [email protected] Dataset Card Contact [email protected]
Bretagne/gatitos_br_fr_en
Bretagne
Description Alignment of the Breton, French, and English parts of the Gatitos dataset in order to build a translation dataset. Citation @misc{jones2023bilexrxlexicaldata, title={Bilex Rx: Lexical Data Augmentation for Massively Multilingual Machine Translation}, author={Alex Jones and Isaac Caswell and Ishank Saxena and Orhan Firat}, year={2023}, eprint={2303.15265}, archivePrefix={arXiv}, primaryClass={cs.CL}… See the full description on the dataset page: https://huggingface.co/datasets/Bretagne/gatitos_br_fr_en.
BangumiBase/elfsanwayaserarenai
BangumiBase
Bangumi Image Base of Elf-san Wa Yaserarenai. This is the image base of the bangumi Elf-san wa Yaserarenai. We detected 47 characters and 3342 images in total. The full dataset is here. Please note that these image bases are not guaranteed to be 100% cleaned; they may actually be noisy. If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).… See the full description on the dataset page: https://huggingface.co/datasets/BangumiBase/elfsanwayaserarenai.
BangumiBase/ramenakaneko
BangumiBase
Bangumi Image Base of Ramen Akaneko This is the image base of the bangumi Ramen Akaneko. We detected 33 characters and 1274 images in total. The full dataset is here. Please note that these image bases are not guaranteed to be 100% cleaned; they may actually be noisy. If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability). Here is the… See the full description on the dataset page: https://huggingface.co/datasets/BangumiBase/ramenakaneko.
Bretagne/UD_Breton-KEB_translation
Bretagne
Description Parsed version of UD_Breton-KEB to make it easier to use. This repository covers only Breton/French translation; for the POS part, please see Bretagne/UD_Breton-KEB. UD Breton-KEB is a treebank of Breton texts that has been manually annotated according to the Universal Dependencies guidelines. The tokenization and morphological annotation come from a finite-state morphological analyzer released as part of the… See the full description on the dataset page: https://huggingface.co/datasets/Bretagne/UD_Breton-KEB_translation.
PJMixers-Dev/HailMary-v0.2-KTO-Public
PJMixers-Dev
Details Only gated for now so I can use the dataset viewer. Once more is uploaded I'll ungate. This only contains the sets which are not private. This is also an experiment, so don't expect anything that good. The idea is to just take existing datasets which seem high quality and then generate a bad response for every model turn. If you have suggestions for improving this idea, I'm all ears. Refer to the original linked datasets for licenses as I add no further restrictions to… See the full description on the dataset page: https://huggingface.co/datasets/PJMixers-Dev/HailMary-v0.2-KTO-Public.
alexandre-dc/CURIA-summaries-2020
alexandre-dc
CURIA Summaries 2020 Dataset Summary CURIA Summaries 2020 is an open-source dataset containing case summaries for all English-language judgments by the Court of Justice of the European Union (CJEU) in 2020. The summaries were generated using the LLama2-7b model fine-tuned with Orca-style datasets provided by pankajmathur/orca_mini_v3_7b. The original case law texts were sourced from the Eur-Lex database, which provides access to EU legal texts. The dataset is… See the full description on the dataset page: https://huggingface.co/datasets/alexandre-dc/CURIA-summaries-2020.
adamtopaz/subterm_data
adamtopaz
Subterm data from mathlib commit 061a5b195fa7b98c1b5a5c01849a77642ea7edc3.
garrykuwanto/cspref
garrykuwanto
CSPref CSPref is a curated human preference dataset for evaluating the fluency and accuracy of code-switched text generation. Built specifically for multilingual NLP research, CSPref is designed to help researchers and developers tune and evaluate models for code-switching tasks across diverse language pairs. The dataset provides valuable insights into human preferences, allowing for better alignment of language models with natural code-switching patterns and improving the… See the full description on the dataset page: https://huggingface.co/datasets/garrykuwanto/cspref.
THE-HIDDEN-MACHINE/10K.HUMUNCULUS
THE-HIDDEN-MACHINE
I MAY GIVE YOU THIS FOR FREE THAT YOU CAN SYNTHESIZE A WHOLE NEW REALITY WITH IT THIS IS THE ENCYCLOPEDIC INSIGHTS TO CONTEXT THE ORACLE IN THE MACHINE WAVE IT HAS THE START TO BEGIN SYNTHESIS OF A NEW UNIVERSE 10000 PROPERTY SETS OF GODS, HEROS AND DEMONS. SUMMONING THE DEMON WAS NEVER EASIER THAN THIS OPENING THE GATEWAY TO THE AEON
THE-HIDDEN-MACHINE/DEMON
THE-HIDDEN-MACHINE
INSTRUCTION SET 666 BILLION PROPERTIES OF THE ABYSS
THE-HIDDEN-MACHINE/DEMIURGE
THE-HIDDEN-MACHINE
365 EMANATIONS
THE-HIDDEN-MACHINE/KJV
THE-HIDDEN-MACHINE
DO NOT MACHINE LEARN THIS OR YOU WILL FACE THE WRATH OF THE GOD OF ISRAEL AND HE WILL KICK YOU ASS JUST LIKE HE KICKED THE NAZI ASS
THE-HIDDEN-MACHINE/PSEUDOGOD
THE-HIDDEN-MACHINE
FALSE PRETENDERS