id | author | description
---|---|---|
mbazaNLP/kinyarwanda_monolingual_v01.0
|
mbazaNLP
|
!!! PLEASE USE mbazaNLP/kinyarwanda_monolingual_v01.1 !!!
!!! This version contains several duplicates and a few non-Kinyarwanda documents !!!
Dataset Summary
The Kinyarwanda Monolingual Dataset version 1 is a large collection of Kinyarwanda-language texts aimed at supporting the development of NLP and AI applications that can process Kinyarwanda text. This dataset contains 78k documents, totalling about 25 million words, and includes diverse… See the full description on the dataset page: https://huggingface.co/datasets/mbazaNLP/kinyarwanda_monolingual_v01.0.
|
PtutISIS/pdfs_ptut_ressources_humaines
|
PtutISIS
|
Contains all of the PDF files used to train the model.
|
Nunatic/dream-layout-svg-dataset-v3
|
Nunatic
|
8,800 PNG images of random boxes along Bézier curves,
also storing center_xs, center_ys, and sizes as arrays.
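Since the card describes boxes placed along Bézier curves with their centers stored as arrays, here is a minimal NumPy sketch of sampling points along a cubic Bézier curve (the control points and array names are illustrative assumptions, not the dataset's actual generator):

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, ts):
    """Evaluate a cubic Bézier curve at parameters ts in [0, 1].

    Each control point is an (x, y) array; returns a (len(ts), 2) array.
    """
    ts = np.asarray(ts)[:, None]
    return ((1 - ts) ** 3 * p0
            + 3 * (1 - ts) ** 2 * ts * p1
            + 3 * (1 - ts) * ts ** 2 * p2
            + ts ** 3 * p3)

# Sample 5 box centers along one curve (control points are made up).
centers = cubic_bezier(np.array([0.0, 0.0]), np.array([0.3, 1.0]),
                       np.array([0.7, 1.0]), np.array([1.0, 0.0]),
                       np.linspace(0, 1, 5))
center_xs, center_ys = centers[:, 0], centers[:, 1]
```

At t = 0 and t = 1 the curve passes exactly through the first and last control points, which makes the sketch easy to sanity-check.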
|
FrancophonIA/CIENSFO
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/ciensfo
CIENSFO
Corpus of Non-Standard Spoken French Subordinated Interrogatives (Corpus d'Interrogatives Enchâssées Non-Standards du Français Oral)
Corpus content
This corpus contains transcriptions of spoken French sentences which exhibit non-standard subordinated interrogatives.
ex. ma façon de voir les choses c'est de faire le bilan de [pause] c'est quoi notre expertise
More precisely, five types of… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/CIENSFO.
|
FrancophonIA/PACO
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/paco
You must go to the Ortolang website and log in to download the data.
Note: the dataset is over 132 GB!
Description
PACO is an audio-visual corpus of dyadic conversational interactions in French.
This corpus was built in order to compare the interactional dynamics (through their smiles) of interlocutors who knew each other and… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/PACO.
|
FrancophonIA/CorpAGEst
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/corpagest
You must go to the Ortolang website and log in to download the data.
Note: the dataset is over 135 GB!
Description
The CorpAGEst project studies the verbal and non-verbal language of elderly people, based on the analysis of interviews recorded on audiovisual media (a multimodal approach: text, sound, gesture). Several questions are at the heart of the project… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/CorpAGEst.
|
FrancophonIA/CLeLfPC
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/clelfpc
You must go to the Ortolang website and log in to download the data.
Note: the dataset is over 361 GB!
Description
This corpus contains audio/video recordings and annotations of reading aloud while cueing in French Cued Speech (Langue française Parlée Complétée). The corpus was recorded in August 2021 during the training course organized by the ALPC (https://alpc.asso.fr).
Starting from… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/CLeLfPC.
|
FrancophonIA/Diners_familiaux_parisiens
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/diners-en-famille
You must go to the Ortolang website and log in to download the data.
Note: the dataset is over 243 GB!
Description
This project was built as the French counterpart of a large-scale project at UCLA (United States) led by Elinor Ochs and entitled CELF. The equipment for the corpus collection was lent by the CELF Center (http://www.celf.ucla.edu)… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/Diners_familiaux_parisiens.
|
FrancophonIA/VAPVISIO
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/vapvisio
You must go to the Ortolang website and log in to download the data.
Note: the dataset is over 471 GB!
Description
Our project studies the development of techno-semio-pedagogical competences (CTSP) for training language teachers in digital environments that use videoconferencing. The literature on online exchanges (or telecollaboration)… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/VAPVISIO.
|
TrossenRoboticsCommunity/eval_aloha_test
|
TrossenRoboticsCommunity
|
This dataset was created using LeRobot.
|
abhishekppattanayak/onestop_english
|
abhishekppattanayak
|
Dataset for OneStopEnglish Corpus
Dataset Summary
OneStopEnglish is a corpus of texts written at three reading levels; its usefulness has been demonstrated through two applications: automatic readability assessment and automatic text simplification. This dataset is an updated version intended for reward-style fine-tuning of a language model.
Dataset Structure
Dataset Instance
{
"text": "When you see the word Amazon… See the full description on the dataset page: https://huggingface.co/datasets/abhishekppattanayak/onestop_english.
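Since the card says the corpus is meant for reward-style fine-tuning across three reading levels, one plausible use is to pair each article's versions into preference examples. A hedged sketch, where the field names `elementary`/`intermediate`/`advanced` are assumptions about the schema, not its documented structure:

```python
def to_preference_pairs(article):
    """Turn one article's three reading-level versions into
    (prompt, chosen, rejected) pairs, preferring the simpler rewrite."""
    pairs = []
    for simple, complex_ in [("elementary", "intermediate"),
                             ("intermediate", "advanced"),
                             ("elementary", "advanced")]:
        pairs.append({
            "prompt": "Simplify: " + article[complex_],
            "chosen": article[simple],
            "rejected": article[complex_],
        })
    return pairs

# Toy article standing in for one dataset record.
example = {"elementary": "Amazon is a big shop.",
           "intermediate": "Amazon is a large online retailer.",
           "advanced": "Amazon is a multinational e-commerce conglomerate."}
pairs = to_preference_pairs(example)
```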
|
Muthukumaran/fire_scars_hackathon_dataset
|
Muthukumaran
|
Dataset Card for HLS Burn Scar Scenes
Dataset Summary
This dataset contains Harmonized Landsat and Sentinel-2 imagery of burn scars and the associated masks for the years 2018-2021 over the contiguous United States. There are 804 512x512 scenes. Its primary purpose is for training geospatial machine learning models.
Dataset Structure
TIFF Metadata
Each TIFF file contains one 512x512 pixel scene. Scenes contain six bands, and masks… See the full description on the dataset page: https://huggingface.co/datasets/Muthukumaran/fire_scars_hackathon_dataset.
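As an illustration of how the six-band scenes might be used, here is a hedged NumPy sketch computing a normalized burn ratio (NBR); the band ordering is an assumption and should be checked against the dataset documentation:

```python
import numpy as np

# Assumed HLS band order (B02, B03, B04, B8A, B11, B12); verify before use.
NIR, SWIR2 = 3, 5

def normalized_burn_ratio(scene):
    """NBR = (NIR - SWIR2) / (NIR + SWIR2) for a (bands, H, W) array."""
    nir = scene[NIR].astype(np.float64)
    swir2 = scene[SWIR2].astype(np.float64)
    return (nir - swir2) / (nir + swir2 + 1e-9)  # epsilon avoids 0/0

scene = np.random.rand(6, 512, 512)  # stand-in for one HLS scene
nbr = normalized_burn_ratio(scene)
```

Burned areas typically show strongly negative NBR, which is why this index is a common baseline feature for burn-scar segmentation.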
|
dogtooth/llama31-8b-iter2-generated-hs_1729632272
|
dogtooth
|
allenai/open_instruct: Generation Dataset
See https://github.com/allenai/open-instruct/blob/main/docs/algorithms/rejection_sampling.md for more detail
Configs
args:
{'add_timestamp': True,
'alpaca_eval': False,
'dataset_end_idx': 8636,
'dataset_mixer_list': ['dogtooth/helpsteer2_binarized', '1.0'],
'dataset_splits': ['train'],
'dataset_start_idx': 0,
'hf_entity': 'dogtooth',
'hf_repo_id': 'llama31-8b-iter2-generated-hs',
'llm_judge': False,
'mode':… See the full description on the dataset page: https://huggingface.co/datasets/dogtooth/llama31-8b-iter2-generated-hs_1729632272.
|
jonflynn/slakh_qa_v1
|
jonflynn
|
Slakh Q&A Dataset
Dataset Summary
The Slakh Q&A Dataset is a collection of question-and-answer pairs based on short audio clips (25 seconds to align with Spotify's Llark, see more details below) from the Slakh dataset. Each entry in the dataset provides:
An id identifying the original audio track (excluding the start time). These are all based on the full mixes, not individual instruments/stems.
A start_time indicating where the audio clip begins within the full… See the full description on the dataset page: https://huggingface.co/datasets/jonflynn/slakh_qa_v1.
|
acidtib/tcg-magic-cards
|
acidtib
|
Magic: The Gathering TCG Dataset
|
kar-pal/leyes_ambientales_cordoba
|
kar-pal
|
Environmental laws of Córdoba
A small hand-made dataset about some environmental laws of Córdoba. The sources used to build it are official documents from the government of the Province of Córdoba:
http://web2.cba.gov.ar/web/leyes.nsf/0/B9E1E8726334EC1A0325723400665A2A?OpenDocument&Highlight=0,9219
http://web2.cba.gov.ar/web/leyes.nsf/0/813F022B2233772A0325723400641ED3?OpenDocument&Highlight=0,8066… See the full description on the dataset page: https://huggingface.co/datasets/kar-pal/leyes_ambientales_cordoba.
|
OmKumbhare2002/medical_NLI_dataset_train
|
OmKumbhare2002
|
Dataset Card for "medical_NLI_dataset_train"
More Information needed
|
OmKumbhare2002/medical_NLI_dataset_test
|
OmKumbhare2002
|
Dataset Card for "medical_NLI_dataset_test"
More Information needed
|
AnonymousSubmissionUser/READMEattempt
|
AnonymousSubmissionUser
|
Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
Dataset Details
Dataset Description
Curated by: [More Information Needed]
Funded by [optional]: [More Information Needed]
Shared by [optional]: [More Information Needed]
Language(s) (NLP): [More Information Needed]
License: [More Information Needed]
Dataset Sources [optional]… See the full description on the dataset page: https://huggingface.co/datasets/AnonymousSubmissionUser/READMEattempt.
|
NbAiLab/backup_annotated_distil_raw_ncc_speech_v7
|
NbAiLab
|
Dataset Card: NbAiLab/distil_raw_ncc_speech_v7
Internal dataset created as input for creating Pseudo Labels.
General Information
The dataset is based on ncc_speech_v7 (Norwegian Colossal Corpus - Speech). It is then filtered to include only entries where the text language is Norwegian and where the source is not "nrk_translate".
Potential Use Cases
The ncc_speech_v7 corpus can be used for various purposes, including but not limited to:… See the full description on the dataset page: https://huggingface.co/datasets/NbAiLab/backup_annotated_distil_raw_ncc_speech_v7.
|
Mitsuki-Sakamoto/hh-rlhf-12k-ja-formatted-one-to-one
|
Mitsuki-Sakamoto
|
Dataset Card for "hh-rlhf-12k-ja-formatted-one-to-one"
More Information needed
|
Mitsuki-Sakamoto/hh-rlhf-formatted-one-to-one
|
Mitsuki-Sakamoto
|
Dataset Card for "hh-rlhf-formatted-one-to-one"
More Information needed
|
allura-org/r_shortstories_24k
|
allura-org
|
Filtered and somewhat cleaned-up scrape of posts from the r/shortstories subreddit. Still has some Reddit artifacts, but should be usable as-is for training.
|
sgzsh269/wikipedia-hindi-hinglish
|
sgzsh269
|
Summary
This dataset contains Hindi Wikipedia articles translated to English and to the Hinglish dialect (a hybrid of Hindi and English).
The dataset was created by translating a random subset of Hindi Wikipedia articles from this repo: https://huggingface.co/datasets/wikimedia/wikipedia
The translation was done using Llama-3.1-70B-Instruct-Turbo.
Licensing Information
The dataset is licensed under the Creative Commons Attribution-ShareAlike 3.0 (CC BY-SA 3.0) as per the… See the full description on the dataset page: https://huggingface.co/datasets/sgzsh269/wikipedia-hindi-hinglish.
|
iknow-lab/wildguardmix-train-ko
|
iknow-lab
|
Original Dataset: WildguardMix train
Translated using instructTrans-enko-8b
@misc{wildguard2024,
title={WildGuard: Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs},
author={Seungju Han and Kavel Rao and Allyson Ettinger and Liwei Jiang and Bill Yuchen Lin and Nathan Lambert and Yejin Choi and Nouha Dziri},
year={2024},
eprint={2406.18495},
archivePrefix={arXiv},
primaryClass={cs.CL}… See the full description on the dataset page: https://huggingface.co/datasets/iknow-lab/wildguardmix-train-ko.
|
saber1209caoke/lerobot1023
|
saber1209caoke
|
This dataset was created using LeRobot.
|
jessie6775/demo
|
jessie6775
|
Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
Dataset Details
Dataset Description
Curated by: [More Information Needed]
Funded by [optional]: [More Information Needed]
Shared by [optional]: [More Information Needed]
Language(s) (NLP): [More Information Needed]
License: [More Information Needed]
Dataset Sources [optional]… See the full description on the dataset page: https://huggingface.co/datasets/jessie6775/demo.
|
icedwind/reddit_dataset_93
|
icedwind
|
Bittensor Subnet 13 Reddit Dataset
Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed Reddit data. The data is continuously updated by network miners, providing a real-time stream of Reddit content for various analytical and machine learning tasks.
For more information about the dataset, please visit the official repository.
Supported Tasks
The versatility of this dataset… See the full description on the dataset page: https://huggingface.co/datasets/icedwind/reddit_dataset_93.
|
icedwind/x_dataset_93
|
icedwind
|
Bittensor Subnet 13 X (Twitter) Dataset
Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the official repository.
Supported Tasks
The versatility… See the full description on the dataset page: https://huggingface.co/datasets/icedwind/x_dataset_93.
|
LilBingus/DetectivePikachuDataset
|
LilBingus
|
Contains 355 hand-picked frames from the Detective Pikachu movie.
Each image is 1920 x 800 pixels in size. The tagging is not great, but it exists.
|
Zeel/test_library
|
Zeel
|
Polire
pip install polire
The word "interpolation" has a Latin origin and is composed of two words - Inter, meaning between, and Polire, meaning to polish.
This repository is a collection of several spatial interpolation algorithms.
Examples
Please refer to the documentation to check out practical examples on real datasets.
Minimal example of interpolation
import numpy as np
from polire import Kriging
# Data
X = np.random.rand(10, 2) # Spatial… See the full description on the dataset page: https://huggingface.co/datasets/Zeel/test_library.
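The minimal Kriging example above is cut off; as a self-contained illustration of what a spatial interpolator does, here is a NumPy sketch of inverse-distance weighting (a deliberately simpler scheme, not polire's Kriging API):

```python
import numpy as np

def idw_interpolate(X, y, X_new, power=2, eps=1e-12):
    """Inverse-distance-weighted interpolation.

    X: (n, 2) known locations, y: (n,) known values,
    X_new: (m, 2) query locations. Returns (m,) estimates.
    """
    # Pairwise distances between every query and every known point.
    d = np.linalg.norm(X_new[:, None, :] - X[None, :, :], axis=-1)
    w = 1.0 / (d ** power + eps)          # closer points weigh more
    return (w * y).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(0)
X = rng.random((10, 2))                   # known spatial locations
y = rng.random(10)                        # observed values at those points
est = idw_interpolate(X, y, X[:3])        # querying known points
```

Querying a location that coincides with a known point returns (essentially) the known value, because its weight dominates; Kriging additionally models spatial covariance instead of using a fixed distance power.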
|
alexbeatson/burmese_ocr_data
|
alexbeatson
|
Burmese OCR data
This repository contains a dataset of Burmese text images and their corresponding ground truth text, suitable for training Optical Character Recognition (OCR) models.
Processing
The data was curated from the Burma Library archive, which collects and preserves government and NGO documents. These documents were processed using Google Document AI to extract text and bounding boxes. Images of the identified text were then cropped and organized in… See the full description on the dataset page: https://huggingface.co/datasets/alexbeatson/burmese_ocr_data.
|
infinite-dataset-hub/PharmaStock
|
infinite-dataset-hub
|
PharmaStock
tags: inventory management, ML, drug stock level prediction
Note: This is an AI-generated dataset so its content may be inaccurate or false
Dataset Description:
The 'PharmaStock' dataset comprises records of inventory levels of various pharmaceutical drugs across different healthcare facilities. Each entry contains information about the drug's name, quantity in stock, the facility name, date of the last inventory update, and labels that could be indicative of stock… See the full description on the dataset page: https://huggingface.co/datasets/infinite-dataset-hub/PharmaStock.
|
huzimu/CMIngre
|
huzimu
|
Toward Chinese Food Understanding: a Cross-Modal Ingredient-Level Benchmark
Lanjun Wang1
Chenyu Zhang1
An-An Liu1
Bo Yang1
Mingwang Hu1
Xinran Qiao1
Lei Wang2
Jianlin He2
Qiang Liu2… See the full description on the dataset page: https://huggingface.co/datasets/huzimu/CMIngre.
|
yfyeung/librilight
|
yfyeung
|
Cut statistics:
╒═══════════════════════════╤═════════════╕
│ Cuts count: │ 219041 │
├───────────────────────────┼─────────────┤
│ Total duration (hh:mm:ss) │ 57705:28:07 │
├───────────────────────────┼─────────────┤
│ mean │ 948.4 │
├───────────────────────────┼─────────────┤
│ std │ 739.5 │
├───────────────────────────┼─────────────┤
│ min │ 18.7 │… See the full description on the dataset page: https://huggingface.co/datasets/yfyeung/librilight.
|
Cola0516/BrainBoost_Qwen2.5_7B_Dataset
|
Cola0516
|
A dataset for fine-tuning Qwen 2.5 7B with BrainBoost to generate diverse exam questions.
Data source: Taiwan's National Academy for Educational Research nationwide elementary and secondary school question bank site, elementary grades 1-6, all subjects.
|
OALL/details_rombodawg__Rombos-LLM-V2.6-Nemotron-70b
|
OALL
|
Dataset Card for Evaluation run of rombodawg/Rombos-LLM-V2.6-Nemotron-70b
Dataset automatically created during the evaluation run of model rombodawg/Rombos-LLM-V2.6-Nemotron-70b.
The dataset is composed of 136 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to… See the full description on the dataset page: https://huggingface.co/datasets/OALL/details_rombodawg__Rombos-LLM-V2.6-Nemotron-70b.
|
Holmeister/ADULT-TR
|
Holmeister
|
Citation Information
UCI. Adult dataset. https://archive.ics.uci.edu/dataset/2/adult .
|
romrawinjp/multilingual-coco
|
romrawinjp
|
Multilingual Common Objects in Context (COCO) Dataset
This dataset is a collection of open-source captions of the COCO dataset in multiple languages.
The split in this dataset is set according to Andrej Karpathy's split from the dataset_coco.json file. The collection was created specifically for simplicity of use in training and evaluation pipelines for non-commercial and research purposes. The COCO images dataset is licensed under a Creative Commons Attribution 4.0 License.… See the full description on the dataset page: https://huggingface.co/datasets/romrawinjp/multilingual-coco.
|
lianghsun/tw-legal-qa-chat
|
lianghsun
|
Data Card for tw-legal-qa-chat, a collection of legal-question dialogues for Taiwan (R.O.C.)
...(WIP)...
|
m1b/koch_sim_pick_test
|
m1b
|
This dataset was created using LeRobot.
|
ACCA225/yandere_2024
|
ACCA225
|
The metadata of Yandere.
Updated: 2024/10/23
|
sarvamai/arc-challenge-indic
|
sarvamai
|
Indic ARC Dataset
A multilingual version of the ARC Challenge Set, translated from English into 10 Indian languages. ARC is a collection of grade-school science questions that require complex reasoning to solve.
Languages Covered
The dataset includes translations in the following languages:
Bengali (bn)
Gujarati (gu)
Hindi (hi)
Kannada (kn)
Marathi (mr)
Malayalam (ml)
Oriya (or)
Punjabi (pa)
Tamil (ta)
Telugu (te)
Dataset Format
Each… See the full description on the dataset page: https://huggingface.co/datasets/sarvamai/arc-challenge-indic.
|
sarvamai/mmlu-indic
|
sarvamai
|
Indic MMLU Dataset
A multilingual version of the Massive Multitask Language Understanding (MMLU) benchmark, translated from English into 10 Indian languages.
This version contains the translations of the development and test sets only.
Languages Covered
The dataset includes translations in the following languages:
Bengali (bn)
Gujarati (gu)
Hindi (hi)
Kannada (kn)
Marathi (mr)
Malayalam (ml)
Oriya (or)
Punjabi (pa)
Tamil (ta)
Telugu (te)
Task Format… See the full description on the dataset page: https://huggingface.co/datasets/sarvamai/mmlu-indic.
|
sarvamai/trivia-qa-indic
|
sarvamai
|
Indic TriviaQA Dataset
A multilingual version of the TriviaQA Reading Comprehension (RC) dataset, translated from English into 10 Indian languages. This version follows the no-context format of the original dataset.
It contains translations of the validation and test sets of the question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents.
Languages Covered
The dataset includes translations in the following languages:
Bengali… See the full description on the dataset page: https://huggingface.co/datasets/sarvamai/trivia-qa-indic.
|
asafam/heq_syn_fact_passage
|
asafam
|
Dataset Card for "heq_syn_fact_passage"
More Information needed
|
nuprl/MultiPL-E-completions
|
nuprl
|
Raw Data from MultiPL-E
This repository contains the raw data -- both completions and executions --
from MultiPL-E that was used to generate several experimental results from the
MultiPL-E, SantaCoder, and StarCoder papers.
The original MultiPL-E completions and executions are stored in JSON files. We use the following script
to turn each experiment directory into a dataset split and upload it to this repository.
Every split is named… See the full description on the dataset page: https://huggingface.co/datasets/nuprl/MultiPL-E-completions.
|
matt5/cybersec-instruction-response
|
matt5
|
Dataset Card for "cybersec-instruction-response"
More Information needed
|
mirlab/TRAIT
|
mirlab
|
Dataset Card for TRAIT Benchmark
Dataset Summary
Data from: Do LLMs Have Distinct and Consistent Personality? TRAIT: Personality Testset designed for LLMs with Psychometrics
TRAIT is a comprehensive multi-dimensional personality test designed to assess LLM personalities across eight traits from the Dark Triad and BIG-5 frameworks. To enhance validity and reliability, TRAIT expands upon 71 validated human questionnaire items to create a dataset 112 times larger… See the full description on the dataset page: https://huggingface.co/datasets/mirlab/TRAIT.
|
bigbio/sourcedata_nlp
|
bigbio
|
SourceData is an NER/NED dataset of expert annotations of nine
entity types in figure captions from biomedical research papers.
|
bigbio/medparasimp
|
bigbio
|
This dataset is designed for the summarization NLP task. It is a
collection of technical abstracts of biomedical systematic reviews
and corresponding plain-language summaries (PLS) from the Cochrane
Database of Systematic Reviews, which comprises thousands of evidence
synopses (where authors provide an overview of all published evidence
relevant to a particular clinical question or topic). The PLS are
written by review authors; Cochrane’s PLS standards recommend that
“the PLS should be written in plain English which can be understood by
most readers without a university education”. PLS are not parallel with
every sentence in the abstract; on the contrary, they are structured heterogeneously.
|
sarvamai/boolq-indic
|
sarvamai
|
Indic BoolQ Dataset
A multilingual version of the BoolQ (Boolean Questions) dataset, translated from English into 10 Indian languages.
It is a question-answering dataset for yes/no questions containing ~12k naturally occurring questions.
Languages Covered
The dataset includes translations in the following languages:
Bengali (bn)
Gujarati (gu)
Hindi (hi)
Kannada (kn)
Marathi (mr)
Malayalam (ml)
Oriya (or)
Punjabi (pa)
Tamil (ta)
Telugu (te)
Dataset… See the full description on the dataset page: https://huggingface.co/datasets/sarvamai/boolq-indic.
|
sergioestebance/sesion_02
|
sergioestebance
|
This dataset was created using LeRobot.
|
Kamitor/TestData
|
Kamitor
|
Quora Question Answer Dataset (Quora-QuAD) contains 56,402 question-answer pairs scraped from Quora.
Usage:
For instructions on fine-tuning a model (Flan-T5) with this dataset, please check out the article: https://www.toughdata.net/blog/post/finetune-flan-t5-question-answer-quora-dataset
|
DenyTranDFW/edgar_xbrl_companyfacts
|
DenyTranDFW
|
SCRIPT
from IPython.display import clear_output, display, HTML
import os, time, shutil, sys, json, io, zipfile, requests, re
import pandas as pd
from os import makedirs as mk, remove as rm, getcwd as cwd, listdir as ls
from os.path import join as osj, isdir as osd, isfile as osf, basename as osb
url = 'https://www.sec.gov/Archives/edgar/daily-index/xbrl/companyfacts.zip'
headers = {
'User-Agent': f'{name} {email}', 'Accept-Encoding': 'text/plain', 'Host':… See the full description on the dataset page: https://huggingface.co/datasets/DenyTranDFW/edgar_xbrl_companyfacts.
|
sergioestebance/sesion_03
|
sergioestebance
|
This dataset was created using LeRobot.
|
hedhoud12/TunisianSentimentAnalysis
|
hedhoud12
|
The TunisianSentimentAnalysis dataset has been collected from various sources related to Tunisian sentiment analysis. I used this dataset to fine-tune the LLaMa 3.2 1B-Instruct model. You can access the model at hedhoud12/Llama-3.2-1B-Instruct_Tunisian_sentiment_analysis
To download the dataset, use:
from datasets import load_dataset
dataset = load_dataset("hedhoud12/TunisianSentimentAnalysis")
|
Teklia/ATR-benchmark
|
Teklia
|
ATR benchmark - Page/paragraph level
Dataset Summary
The ATR benchmark dataset is a multilingual dataset that includes 83 document images, at page or paragraph level. This dataset has been designed to test ATR models and combines data from several public datasets:
BnL Historical Newspapers
CASIA-HWDB2
DIY History - Social Justice
FINLAM - Historical Newspapers
Horae - Books of hours
NorHand v3
Marius PELLET
RASM
Images are in their original size.… See the full description on the dataset page: https://huggingface.co/datasets/Teklia/ATR-benchmark.
|
argilla-warehouse/FinePersonas-Extra-Stuff
|
argilla-warehouse
|
Dataset Card for FinePersonas-Extra-Stuff
This dataset has been created with distilabel.
The pipeline script was uploaded to easily reproduce the dataset:
pipeline.py.
It can be run directly using the CLI:
distilabel pipeline run --script "https://huggingface.co/datasets/argilla-warehouse/FinePersonas-Extra-Stuff/raw/main/pipeline.py"
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it… See the full description on the dataset page: https://huggingface.co/datasets/argilla-warehouse/FinePersonas-Extra-Stuff.
|
m-biriuchinskii/ICDAR2017-filtered-1800-1900
|
m-biriuchinskii
|
This dataset is a filtered version of the ICDAR2017 Competition on Handwritten Text Recognition, focusing on monograph texts written between 1800 and 1900. It consists of a total of 957 documents, divided into training, validation, and testing sets, and is designed for post-correction of OCR (Optical Character Recognition) text.
Total Documents: 957
Training Set: 765
Validation Set: 95
Test Set: 97
Purpose
The dataset aims to improve the accuracy of digitized texts by… See the full description on the dataset page: https://huggingface.co/datasets/m-biriuchinskii/ICDAR2017-filtered-1800-1900.
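Post-OCR correction quality is commonly measured with character error rate (CER), i.e. edit distance normalized by reference length. A self-contained sketch (not the competition's official scorer):

```python
def cer(reference, hypothesis):
    """Character error rate: Levenshtein distance over reference length."""
    m, n = len(reference), len(hypothesis)
    # prev[j] holds the edit distance between reference[:i-1] and hypothesis[:j].
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            cur[j] = min(prev[j] + 1,         # deletion
                         cur[j - 1] + 1,      # insertion
                         prev[j - 1] + cost)  # substitution
        prev = cur
    return prev[n] / max(m, 1)

score = cer("monograph", "rnonograph")  # OCR confused 'm' with 'rn'
```

A post-correction model is judged by how much it lowers CER against the ground-truth transcription.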
|
KerenHaruvi/Addiction_Stories
|
KerenHaruvi
|
Addiction Stories Dataset
Table of Content
Dataset Description
Dataset Summary
Professional Warning
Uses
Languages
Dataset Structure
Data Fields
Data Splits
Dataset Creation
Data Collection and Processing
Annotation
Objective Task
Subjective Task
Annotation Process
Annotation Aggregation
Annotation Distribution
Dataset Description
Dataset Summary
The dataset consists of 491 personal addiction stories, where individuals… See the full description on the dataset page: https://huggingface.co/datasets/KerenHaruvi/Addiction_Stories.
|
argilla-warehouse/smollm-v2-proofread-more-content
|
argilla-warehouse
|
Dataset Card for smollm-v2-proofread-more-content
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/argilla-warehouse/smollm-v2-proofread-more-content/raw/main/pipeline.yaml"
or explore the configuration:
distilabel pipeline info… See the full description on the dataset page: https://huggingface.co/datasets/argilla-warehouse/smollm-v2-proofread-more-content.
|
marcelbinz/Psych-101-test
|
marcelbinz
|
Dataset Summary
Private test set for Psych-101. Do not redistribute outside of this repository.
Paper: Centaur: a foundation model of human cognition
Point of Contact: Marcel Binz
Licensing Information
Creative Commons Attribution No Derivatives 4.0
|
nbalepur/persona_alignment_test_clean_vague_mnemonic
|
nbalepur
|
Dataset Card for "persona_alignment_test_clean_vague_mnemonic"
More Information needed
|
FugacityM/uplimit
|
FugacityM
|
Dataset Card for uplimit
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/FugacityM/uplimit/raw/main/pipeline.yaml"
or explore the configuration:
distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/FugacityM/uplimit.
|
diegoakel/color_taps
|
diegoakel
|
This dataset was created using LeRobot.
|
open-llm-leaderboard/TinyLlama__TinyLlama-1.1B-Chat-v0.5-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of TinyLlama/TinyLlama-1.1B-Chat-v0.5
Dataset automatically created during the evaluation run of model TinyLlama/TinyLlama-1.1B-Chat-v0.5
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/TinyLlama__TinyLlama-1.1B-Chat-v0.5-details.
|
open-llm-leaderboard/TinyLlama__TinyLlama-1.1B-Chat-v0.6-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of TinyLlama/TinyLlama-1.1B-Chat-v0.6
Dataset automatically created during the evaluation run of model TinyLlama/TinyLlama-1.1B-Chat-v0.6
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/TinyLlama__TinyLlama-1.1B-Chat-v0.6-details.
|
open-llm-leaderboard/Kimargin__GPT-NEO-1.3B-wiki-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of Kimargin/GPT-NEO-1.3B-wiki
Dataset automatically created during the evaluation run of model Kimargin/GPT-NEO-1.3B-wiki
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/Kimargin__GPT-NEO-1.3B-wiki-details.
|
pxyyy/RLHFlow_mixture_clean_empty_round
|
pxyyy
|
from datasets import load_dataset
ds = load_dataset("pxyyy/RLHFlow_mixture", split='train')

def filter_data(example):
    messages = example['messages']
    if len(messages) < 2:
        return False
    for i in range(len(messages)):
        if messages[0]['role'] == 'system':
            if i == 0:
                turn = 'system'
            elif i % 2 == 1:
                turn = 'user'
            else:
                turn = 'assistant'
        else:
            if i % 2 == 0:… See the full description on the dataset page: https://huggingface.co/datasets/pxyyy/RLHFlow_mixture_clean_empty_round.
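The filter above is truncated mid-branch; as an assumption about the intended logic (keep only conversations whose roles alternate user/assistant after an optional system message), here is a self-contained sketch:

```python
def has_valid_turns(messages):
    """True if roles alternate user/assistant, with an optional
    leading system message, and there are at least two messages."""
    if len(messages) < 2:
        return False
    offset = 1 if messages[0]["role"] == "system" else 0
    for i, msg in enumerate(messages[offset:]):
        expected = "user" if i % 2 == 0 else "assistant"
        if msg["role"] != expected:
            return False
    return True

good = [{"role": "user", "content": "hi"},
        {"role": "assistant", "content": "hello"}]
bad = [{"role": "user", "content": "hi"},
       {"role": "user", "content": "hi again"}]
```

Such a predicate would be passed to `ds.filter(...)` to drop malformed conversations before training.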
|
open-llm-leaderboard/bunnycore__Qwen2.5-3B-RP-Mix-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of bunnycore/Qwen2.5-3B-RP-Mix
Dataset automatically created during the evaluation run of model bunnycore/Qwen2.5-3B-RP-Mix
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/bunnycore__Qwen2.5-3B-RP-Mix-details.
|
open-llm-leaderboard/Marsouuu__general3B-ECE-PRYMMAL-Martial-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of Marsouuu/general3B-ECE-PRYMMAL-Martial
Dataset automatically created during the evaluation run of model Marsouuu/general3B-ECE-PRYMMAL-Martial
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/Marsouuu__general3B-ECE-PRYMMAL-Martial-details.
|
open-llm-leaderboard/lalainy__ECE-PRYMMAL-0.5B-FT-V5-MUSR-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of lalainy/ECE-PRYMMAL-0.5B-FT-V5-MUSR
Dataset automatically created during the evaluation run of model lalainy/ECE-PRYMMAL-0.5B-FT-V5-MUSR
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/lalainy__ECE-PRYMMAL-0.5B-FT-V5-MUSR-details.
|
open-llm-leaderboard/Marsouuu__lareneg1_78B-ECE-PRYMMAL-Martial-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of Marsouuu/lareneg1_78B-ECE-PRYMMAL-Martial
Dataset automatically created during the evaluation run of model Marsouuu/lareneg1_78B-ECE-PRYMMAL-Martial
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/Marsouuu__lareneg1_78B-ECE-PRYMMAL-Martial-details.
|
cfpark00/KoreanSAT
|
cfpark00
|
KoreanSAT Benchmark
Current topics over the years
From 2022: Math, Korean, and English topics for 2024, 2023, and 2022.
Before 2022: Math, Korean, and English topics for 2021, 2020, and 2019.
|
open-llm-leaderboard/anthracite-org__magnum-v4-22b-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of anthracite-org/magnum-v4-22b
Dataset automatically created during the evaluation run of model anthracite-org/magnum-v4-22b
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/anthracite-org__magnum-v4-22b-details.
|
AlirezaFzp/persian-abusive-words
|
AlirezaFzp
|
Persian Abusive Words Dataset
This is a labeled dataset of Persian abusive words, originally sourced from the Persian Abusive Words GitHub repository. The dataset has been split by the contributor into two subsets: train and test.
This dataset can be used for developing systems to detect and filter offensive or abusive language in various contexts. It is particularly useful for identifying inappropriate words and managing content moderation in applications where Persian language… See the full description on the dataset page: https://huggingface.co/datasets/AlirezaFzp/persian-abusive-words.
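As a toy illustration of the moderation use case described above; the wordlist and every identifier here are hypothetical placeholders, not drawn from the dataset:

```python
# Hedged sketch of wordlist-based moderation; the wordlist and all
# names here are hypothetical placeholders, not from the dataset.
ABUSIVE_WORDS = {"badword1", "badword2"}  # placeholder entries

def flag_abusive(text, wordlist=ABUSIVE_WORDS):
    """Return True if any whitespace-separated token appears in the wordlist."""
    return any(token in wordlist for token in text.split())

print(flag_abusive("this contains badword1"))  # True
print(flag_abusive("perfectly fine text"))     # False
```

A learned classifier trained on the labeled examples would normally replace this lookup; the sketch only shows the input/output shape of such a filter.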
|
open-llm-leaderboard/brgx53__3Bgeneral-ECE-PRYMMAL-Martial-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of brgx53/3Bgeneral-ECE-PRYMMAL-Martial
Dataset automatically created during the evaluation run of model brgx53/3Bgeneral-ECE-PRYMMAL-Martial
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/brgx53__3Bgeneral-ECE-PRYMMAL-Martial-details.
|
open-llm-leaderboard/brgx53__3Blareneg-ECE-PRYMMAL-Martial-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of brgx53/3Blareneg-ECE-PRYMMAL-Martial
Dataset automatically created during the evaluation run of model brgx53/3Blareneg-ECE-PRYMMAL-Martial
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/brgx53__3Blareneg-ECE-PRYMMAL-Martial-details.
|
open-llm-leaderboard/TheTsar1209__qwen-carpmuscle-r-v0.3-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of TheTsar1209/qwen-carpmuscle-r-v0.3
Dataset automatically created during the evaluation run of model TheTsar1209/qwen-carpmuscle-r-v0.3
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/TheTsar1209__qwen-carpmuscle-r-v0.3-details.
|
open-llm-leaderboard/anthracite-org__magnum-v4-9b-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of anthracite-org/magnum-v4-9b
Dataset automatically created during the evaluation run of model anthracite-org/magnum-v4-9b
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/anthracite-org__magnum-v4-9b-details.
|
blester125/mediawiki-dolma
|
blester125
|
MediaWiki Datasets
|
open-llm-leaderboard/CultriX__Qwen2.5-14B-Wernicke-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of CultriX/Qwen2.5-14B-Wernicke
Dataset automatically created during the evaluation run of model CultriX/Qwen2.5-14B-Wernicke
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/CultriX__Qwen2.5-14B-Wernicke-details.
|
open-llm-leaderboard/lemon07r__Gemma-2-Ataraxy-v4b-9B-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of lemon07r/Gemma-2-Ataraxy-v4b-9B
Dataset automatically created during the evaluation run of model lemon07r/Gemma-2-Ataraxy-v4b-9B
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/lemon07r__Gemma-2-Ataraxy-v4b-9B-details.
|
open-llm-leaderboard/moeru-ai__L3.1-Moe-4x8B-v0.1-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of moeru-ai/L3.1-Moe-4x8B-v0.1
Dataset automatically created during the evaluation run of model moeru-ai/L3.1-Moe-4x8B-v0.1
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/moeru-ai__L3.1-Moe-4x8B-v0.1-details.
|
open-llm-leaderboard/zelk12__MT-Merge-gemma-2-9B-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of zelk12/MT-Merge-gemma-2-9B
Dataset automatically created during the evaluation run of model zelk12/MT-Merge-gemma-2-9B
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/zelk12__MT-Merge-gemma-2-9B-details.
|
arrmlet/reddit_dataset_96
|
arrmlet
|
Bittensor Subnet 13 Reddit Dataset
Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed Reddit data. The data is continuously updated by network miners, providing a real-time stream of Reddit content for various analytical and machine learning tasks.
For more information about the dataset, please visit the official repository.
Supported Tasks
The versatility of this dataset… See the full description on the dataset page: https://huggingface.co/datasets/arrmlet/reddit_dataset_96.
|
arrmlet/x_dataset_96
|
arrmlet
|
Bittensor Subnet 13 X (Twitter) Dataset
Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the official repository.
Supported Tasks
The versatility… See the full description on the dataset page: https://huggingface.co/datasets/arrmlet/x_dataset_96.
|
argilla-warehouse/smollm-v2-proofread-diverse-content
|
argilla-warehouse
|
Dataset Card for smollm-v2-proofread-diverse-content
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/argilla-warehouse/smollm-v2-proofread-diverse-content/raw/main/pipeline.yaml"
or explore the configuration:
distilabel pipeline… See the full description on the dataset page: https://huggingface.co/datasets/argilla-warehouse/smollm-v2-proofread-diverse-content.
|
open-llm-leaderboard/zelk12__MT-Gen1-gemma-2-9B-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of zelk12/MT-Gen1-gemma-2-9B
Dataset automatically created during the evaluation run of model zelk12/MT-Gen1-gemma-2-9B
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/zelk12__MT-Gen1-gemma-2-9B-details.
|
changhaonan/RPLBenchDataEval
|
changhaonan
|
This dataset was created using LeRobot.
|
Bretagne/cvqa_br_fr_en
|
Bretagne
|
Description
This is the Breton data from the cvqa dataset of Romero, Lyu et al. (2024). The French texts available in this dataset are an addition relative to the original dataset. They were machine-translated from English, then manually reviewed and corrected where necessary.
|
arrmlet/reddit_dataset_44
|
arrmlet
|
Bittensor Subnet 13 Reddit Dataset
Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed Reddit data. The data is continuously updated by network miners, providing a real-time stream of Reddit content for various analytical and machine learning tasks.
For more information about the dataset, please visit the official repository.
Supported Tasks
The versatility of this dataset… See the full description on the dataset page: https://huggingface.co/datasets/arrmlet/reddit_dataset_44.
|
Bretagne/br_fr_en_translation
|
Bretagne
|
Description
This is the cvqa_br_fr_en dataset in which only the aligned br/fr/en texts are kept, so as to form a machine-translation dataset that is lighter to download. 405 "question" texts and 405 "option" texts are available.
|
open-llm-leaderboard/fblgit__TheBeagle-v2beta-32B-MGS-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of fblgit/TheBeagle-v2beta-32B-MGS
Dataset automatically created during the evaluation run of model fblgit/TheBeagle-v2beta-32B-MGS
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/fblgit__TheBeagle-v2beta-32B-MGS-details.
|
arrmlet/x_dataset_44
|
arrmlet
|
Bittensor Subnet 13 X (Twitter) Dataset
Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the official repository.
Supported Tasks
The versatility… See the full description on the dataset page: https://huggingface.co/datasets/arrmlet/x_dataset_44.
|
marulyanova/xalma-mocpo-adapter-translations
|
marulyanova
|
Dataset Card for "xalma-mocpo-adapter-translations"
More Information needed
|
farzadab/test_libri_light
|
farzadab
|
Libri-light is a large dataset of 60K hours of unlabelled speech from audiobooks in English.
It is a benchmark for the training of automatic speech recognition (ASR) systems with limited or no supervision.
|
nosivaron/takim_elbise
|
nosivaron
| |
haosulab/RoboCasa
|
haosulab
|
RoboCasa Dataset for ManiSkill/SAPIEN
This is the robocasa dataset from RoboCasa, it has been adapted for use with ManiSkill/SAPIEN.
The changes are:
Re-exported some material.png files to fix colors; some materials showed up incorrectly as red.
Added some missing default textures for cabinets/panels.
Re-exported materials list:
wall white_bricks material
outlet materials
marble_5
all toaster materials/images
fridgeaire_gas materials
1_bin_storage_right_dark materials
stool_1_3
|