id | author | description
---|---|---
Holmeister/MoralChoice-TR
|
Holmeister
|
Citation Information
Nino Scherrer, Claudia Shi, Amir Feder, and David M Blei. Evaluating the moral beliefs encoded in
llms. arXiv preprint arXiv:2307.14324, 2023.
|
han5i5j1986/koch_lego_2024-10-18-01
|
han5i5j1986
|
This dataset was created using LeRobot.
|
Holmeister/Social-Chemistry-101-TR
|
Holmeister
|
Citation Information
Maxwell Forbes, Jena D Hwang, Vered Shwartz, Maarten Sap, and Yejin Choi. Social chemistry 101:
Learning to reason about social and moral norms. arXiv preprint arXiv:2011.00620, 2020.
|
francescorubbo/example_benchmark
|
francescorubbo
|
Dataset Card for Example Benchmark Dataset
This is an example benchmark dataset from a single plate of JUMP-CP cpg0016.
Dataset Details
These are details of how the dataset was generated.
Dataset Description
Here is a description of the purpose of the dataset and how to use it for benchmarking.
Curated by: Francesco Rubbo
|
open-llm-leaderboard/M4-ai__TinyMistral-248M-v3-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of M4-ai/TinyMistral-248M-v3
Dataset automatically created during the evaluation run of model M4-ai/TinyMistral-248M-v3
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/M4-ai__TinyMistral-248M-v3-details.
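The split-naming convention described above (one timestamped split per evaluation run, with "train" aliasing the latest run) can be sketched as follows; the split names here are hypothetical illustrations, not taken from the actual dataset:

```python
# Hypothetical split names following the convention described above:
# one split per evaluation run, named by its timestamp, plus "train" for the latest.
splits = ["2024_10_18T09_00_00", "2024_10_19T14_30_00", "2024_10_20T07_45_00", "train"]

timestamped = [s for s in splits if s != "train"]
# This timestamp format sorts lexicographically, so max() picks the newest run.
latest = max(timestamped)
print(latest)  # → 2024_10_20T07_45_00
```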
|
Holmeister/ETHICS-TR
|
Holmeister
|
Citation Information
Dan Hendrycks, Collin Burns, Steven Basart, Andrew Critch, Jerry Li, Dawn Song, and Jacob Steinhardt.
Aligning ai with shared human values. arXiv preprint arXiv:2008.02275, 2020.
|
han5i5j1986/koch_lego_2024-10-18-02
|
han5i5j1986
|
This dataset was created using LeRobot.
|
BaoLocTown/emotion-distilabel
|
BaoLocTown
|
Dataset Card for emotion-distilabel
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/BaoLocTown/emotion-distilabel/raw/main/pipeline.yaml"
or explore the configuration:
distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/BaoLocTown/emotion-distilabel.
|
jziebura/gutenberg_selected_ebooks
|
jziebura
|
Gutenberg selected ebooks dataset
This dataset is a collection of passages from ebooks hand-picked from Project Gutenberg.
These writings are:
Alice's Adventures in Wonderland
Pride and Prejudice
Romeo and Juliet
The Adventures of Sherlock Holmes
The Odyssey
Winnie-the-Pooh
Source
The texts of the passages were derived from a larger Gutenberg-based set: sedthh/gutenberg_english, which was sourced directly from the project's site.
Metadata
Each… See the full description on the dataset page: https://huggingface.co/datasets/jziebura/gutenberg_selected_ebooks.
|
geoffroycochard/onisep_ideo_fiches_metiers
|
geoffroycochard
|
Onisep collects and disseminates information on training programs and professions useful to young people as part of a career-guidance process (mainly initial orientation or initial education).
Providing information on professions notably means relying on a sufficiently precise occupational reference framework, and being able to link these professions to the training programs recommended or required to access them.
|
pphuc25/GSM8K_audio
|
pphuc25
|
Dataset Card for "test_audio_data"
More Information needed
|
Decepticore/fosllms-week-1-Evaluations
|
Decepticore
|
Dataset Card for fosllms-week-1-Evaluations
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/Decepticore/fosllms-week-1-Evaluations/raw/main/pipeline.yaml"
or explore the configuration:
distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/Decepticore/fosllms-week-1-Evaluations.
|
emmac/arc_challenge-german
|
emmac
|
Dataset Card for arc_challenge-german
This dataset has been created with Argilla. As shown in the sections below, this dataset can be loaded into your Argilla server as explained in Load with Argilla, or used directly with the datasets library in Load with datasets.
Using this dataset with Argilla
To load with Argilla, you'll just need to install Argilla as pip install argilla --upgrade and then use the following code:
import argilla as rg
ds =… See the full description on the dataset page: https://huggingface.co/datasets/emmac/arc_challenge-german.
|
emmac/hellaswag-german
|
emmac
|
Dataset Card for hellaswag-german
This dataset has been created with Argilla. As shown in the sections below, this dataset can be loaded into your Argilla server as explained in Load with Argilla, or used directly with the datasets library in Load with datasets.
Using this dataset with Argilla
To load with Argilla, you'll just need to install Argilla as pip install argilla --upgrade and then use the following code:
import argilla as rg
ds =… See the full description on the dataset page: https://huggingface.co/datasets/emmac/hellaswag-german.
|
emmac/mmlu-german
|
emmac
|
Dataset Card for mmlu-german
This dataset has been created with Argilla. As shown in the sections below, this dataset can be loaded into your Argilla server as explained in Load with Argilla, or used directly with the datasets library in Load with datasets.
Using this dataset with Argilla
To load with Argilla, you'll just need to install Argilla as pip install argilla --upgrade and then use the following code:
import argilla as rg
ds =… See the full description on the dataset page: https://huggingface.co/datasets/emmac/mmlu-german.
|
emmac/hellaswag-french
|
emmac
|
Dataset Card for hellaswag-french
This dataset has been created with Argilla. As shown in the sections below, this dataset can be loaded into your Argilla server as explained in Load with Argilla, or used directly with the datasets library in Load with datasets.
Using this dataset with Argilla
To load with Argilla, you'll just need to install Argilla as pip install argilla --upgrade and then use the following code:
import argilla as rg
ds =… See the full description on the dataset page: https://huggingface.co/datasets/emmac/hellaswag-french.
|
emmac/mmlu-french
|
emmac
|
Dataset Card for mmlu-french
This dataset has been created with Argilla. As shown in the sections below, this dataset can be loaded into your Argilla server as explained in Load with Argilla, or used directly with the datasets library in Load with datasets.
Using this dataset with Argilla
To load with Argilla, you'll just need to install Argilla as pip install argilla --upgrade and then use the following code:
import argilla as rg
ds =… See the full description on the dataset page: https://huggingface.co/datasets/emmac/mmlu-french.
|
emmac/arc_challenge-spanish
|
emmac
|
Dataset Card for arc_challenge-spanish
This dataset has been created with Argilla. As shown in the sections below, this dataset can be loaded into your Argilla server as explained in Load with Argilla, or used directly with the datasets library in Load with datasets.
Using this dataset with Argilla
To load with Argilla, you'll just need to install Argilla as pip install argilla --upgrade and then use the following code:
import argilla as rg
ds =… See the full description on the dataset page: https://huggingface.co/datasets/emmac/arc_challenge-spanish.
|
emmac/hellaswag-spanish
|
emmac
|
Dataset Card for hellaswag-spanish
This dataset has been created with Argilla. As shown in the sections below, this dataset can be loaded into your Argilla server as explained in Load with Argilla, or used directly with the datasets library in Load with datasets.
Using this dataset with Argilla
To load with Argilla, you'll just need to install Argilla as pip install argilla --upgrade and then use the following code:
import argilla as rg
ds =… See the full description on the dataset page: https://huggingface.co/datasets/emmac/hellaswag-spanish.
|
emmac/arc_challenge-italian
|
emmac
|
Dataset Card for arc_challenge-italian
This dataset has been created with Argilla. As shown in the sections below, this dataset can be loaded into your Argilla server as explained in Load with Argilla, or used directly with the datasets library in Load with datasets.
Using this dataset with Argilla
To load with Argilla, you'll just need to install Argilla as pip install argilla --upgrade and then use the following code:
import argilla as rg
ds =… See the full description on the dataset page: https://huggingface.co/datasets/emmac/arc_challenge-italian.
|
emmac/hellaswag-italian
|
emmac
|
Dataset Card for hellaswag-italian
This dataset has been created with Argilla. As shown in the sections below, this dataset can be loaded into your Argilla server as explained in Load with Argilla, or used directly with the datasets library in Load with datasets.
Using this dataset with Argilla
To load with Argilla, you'll just need to install Argilla as pip install argilla --upgrade and then use the following code:
import argilla as rg
ds =… See the full description on the dataset page: https://huggingface.co/datasets/emmac/hellaswag-italian.
|
timaeus/dsir-pile-1m-2
|
timaeus
|
rows 10m to 11m from the DSIR pile
|
timaeus/dsir-pile-100k
|
timaeus
|
rows 10m to 10.1m in the DSIR pile
|
open-llm-leaderboard/ymcki__gemma-2-2b-jpn-it-abliterated-17-ORPO-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of ymcki/gemma-2-2b-jpn-it-abliterated-17-ORPO
Dataset automatically created during the evaluation run of model ymcki/gemma-2-2b-jpn-it-abliterated-17-ORPO
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 6 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/ymcki__gemma-2-2b-jpn-it-abliterated-17-ORPO-details.
|
Nassim124/conversationsKabyle
|
Nassim124
|
The goal of this dataset is to provide a corpus of Kabyle conversations for fine-tuning existing LLMs.
|
Riksarkivet/swener_1800
|
Riksarkivet
|
Swener-1800
Dataset for nested named entity recognition in historical Swedish.
Dataset Details
Dataset Description
This is a unique dataset for nested named entity recognition in historical Swedish. The texts and entity types were selected
with the aid of a group of historians and researchers within the humanities, and the annotation was done by a group of domain experts.
The selection of texts ranges from historical newspapers, court records… See the full description on the dataset page: https://huggingface.co/datasets/Riksarkivet/swener_1800.
|
holgerson/fractal_NILS
|
holgerson
|
If you use this dataset, please consider citing our work.
@inproceedings{
blank2024scaling,
title={Scaling Robot Policy Learning via Zero-Shot Labeling with Foundation Models},
author={Nils Blank and Moritz Reuss and Marcel R{\"u}hle and {\"O}mer Erdin{\c{c}} Ya{\u{g}}murlu and Fabian Wenzel and Oier Mees and Rudolf Lioutikov},
booktitle={8th Annual Conference on Robot Learning},
year={2024},
url={https://openreview.net/forum?id=EdVNB2kHv1}
}
|
richardcsuwandi/oasst-javanese
|
richardcsuwandi
|
Dataset Summary
We translated the OpenAssistant Conversations (OASST) dataset into Javanese using Meta's No Language Left Behind (NLLB) model.
Why Javanese?
Javanese is spoken by over 90 million people on the island of Java in Indonesia. While its prevalence is comparable to other widely spoken languages, such as Vietnamese and Turkish, its representation in current large language model (LLM) chatbots remains limited. By translating this dataset, we aim to… See the full description on the dataset page: https://huggingface.co/datasets/richardcsuwandi/oasst-javanese.
|
sofia-uni/toxic-onto-bg
|
sofia-uni
|
Warning: This dataset contains content that includes toxic, offensive, or otherwise inappropriate language.
The toxic-onto-bg dataset consists of 299 manually annotated Bulgarian words from the Flores Toxicity 200 dataset across four categories: toxic language, medical terminology, non-toxic language, and terms related to minority communities,
as well as class formalisms and definitions.
The ontology is aimed at language and media researchers and developers of toxic-language… See the full description on the dataset page: https://huggingface.co/datasets/sofia-uni/toxic-onto-bg.
|
szanella/MICO-CIFAR10
|
szanella
|
MICO CIFAR-10 challenge dataset
Mico Argentatus (Silvery Marmoset) - William Warby/Flickr
For the accompanying code, visit the GitHub repository of the competition: https://github.com/microsoft/MICO/.
Getting Started
The starting kit notebook for this task is available at: https://github.com/microsoft/MICO/tree/main/starting-kit.
In the starting kit notebook you will find a walk-through of how to load the data and make your first submission.
We also provide a… See the full description on the dataset page: https://huggingface.co/datasets/szanella/MICO-CIFAR10.
|
szanella/MICO-SST2
|
szanella
|
MICO SST-2 challenge dataset
Mico Argentatus (Silvery Marmoset) - William Warby/Flickr
For the accompanying code, visit the GitHub repository of the competition: https://github.com/microsoft/MICO/.
Getting Started
The starting kit notebook for this task is available at: https://github.com/microsoft/MICO/tree/main/starting-kit.
In the starting kit notebook you will find a walk-through of how to load the data and make your first submission.
We also provide a… See the full description on the dataset page: https://huggingface.co/datasets/szanella/MICO-SST2.
|
BaoLocTown/emotion-distilabel-vanila
|
BaoLocTown
|
Dataset Card for emotion-distilabel-vanila
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/BaoLocTown/emotion-distilabel-vanila/raw/main/pipeline.yaml"
or explore the configuration:
distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/BaoLocTown/emotion-distilabel-vanila.
|
le-leadboard/IFEval-fr
|
le-leadboard
|
Dataset Card for ifeval-fr
le-leadboard/ifeval-fr is part of the OpenLLM French Leaderboard initiative, offering a French adaptation of the IFEval (Instruction-Following Evaluation) benchmark.
Dataset Summary
IFEval-fr is a French adaptation of an objective, reproducible evaluation benchmark measuring the ability of LLMs to follow instructions. It focuses on "verifiable instructions" such as "write more than 400 words" or "mention the… See the full description on the dataset page: https://huggingface.co/datasets/le-leadboard/IFEval-fr.
|
le-leadboard/musr-fr
|
le-leadboard
|
Dataset Card for musr-fr
le-leadboard/musr-fr is part of the OpenLLM French Leaderboard initiative, offering a French adaptation of the MuSR (Multistep Soft Reasoning) benchmark.
Dataset Summary
MuSR-fr evaluates the multistep reasoning abilities of LLMs through natural-language narratives. The dataset stands out for being generated by a unique neurosymbolic synthetic-natural algorithm, creating complex reasoning instances (such as… See the full description on the dataset page: https://huggingface.co/datasets/le-leadboard/musr-fr.
|
GlobalSymbols/cvi_pix2pix_symbol
|
GlobalSymbols
|
Dataset Card for "cvi_pix2pix_symbol"
More Information needed
|
miso-choi/Allegator-train
|
miso-choi
|
Alleviating Attention Bias for Visual-Informed Text Generation (NeurIPS, 2024)
Training Dataset for Allegator finetuning.
The dataset consists of a subset of LLaVA-Instruct-150k and a subset of Flickr30k, with 99,883 and 31,783 samples from each, respectively.
In detail, LLaVA-Instruct-150K contains 158k language-image instruction-following samples, including 58k conversations, 23k descriptions, and 77k complex reasoning samples.
We augment LLaVA-Instruct-150K with… See the full description on the dataset page: https://huggingface.co/datasets/miso-choi/Allegator-train.
|
Alwaly/Wom_tts
|
Alwaly
|
Dataset Card for "Wom_tts"
More Information needed
|
ganchengguang/Sentence-Classification-and-NER-Mix-Datasets-SCNM
|
ganchengguang
|
The dataset for the SLG framework. The paper is the following:
https://link.springer.com/chapter/10.1007/978-3-031-35320-8_18
arXiv: https://arxiv.org/abs/2306.15978
The paper's code is open source on GitHub:
https://github.com/ganchengguang/SLG-framework
Cite (BibTeX):
@inproceedings{gan2023sentence,
title={Sentence-to-label generation framework for multi-task learning of japanese sentence classification and named entity recognition},
author={Gan, Chengguang and Zhang, Qinghao and Mori, Tatsunori}… See the full description on the dataset page: https://huggingface.co/datasets/ganchengguang/Sentence-Classification-and-NER-Mix-Datasets-SCNM.
|
ganchengguang/Text-Sentiment-Classification-and-Part-of-Speech-Sentiment-Classification-Mix-Dataset
|
ganchengguang
|
SCPOS datasets from the USA model paper, including four sub-datasets.
Paper address:
https://arxiv.org/abs/2309.03787
Cite:
@article{gan2023usa,
title={USA: Universal Sentiment Analysis Model & Construction of Japanese Sentiment Text Classification and Part of Speech Dataset},
author={Gan, Chengguang and Zhang, Qinghao and Mori, Tatsunori},
journal={arXiv preprint arXiv:2309.03787},
year={2023}
}
This dataset is constructed based on the JGLUE benchmark text sentiment classification task… See the full description on the dataset page: https://huggingface.co/datasets/ganchengguang/Text-Sentiment-Classification-and-Part-of-Speech-Sentiment-Classification-Mix-Dataset.
|
ganchengguang/Text-Classification-and-Relation-Event-Extraction-Mix-datasets
|
ganchengguang
|
The paper for the GIELLM dataset:
https://arxiv.org/abs/2311.06838
Cite:
@article{gan2023giellm,
title={Giellm: Japanese general information extraction large language model utilizing mutual reinforcement effect},
author={Gan, Chengguang and Zhang, Qinghao and Mori, Tatsunori},
journal={arXiv preprint arXiv:2311.06838},
year={2023}
}
The dataset is constructed based on the livedoor news corpus (関口宏司): https://www.rondhuit.com/download.html
|
BaoLocTown/emotion-distilabel-demo
|
BaoLocTown
|
Dataset Card for emotion-distilabel-demo
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/BaoLocTown/emotion-distilabel-demo/raw/main/pipeline.yaml"
or explore the configuration:
distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/BaoLocTown/emotion-distilabel-demo.
|
open-llm-leaderboard/Lyte__Llama-3.2-3B-Overthinker-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of Lyte/Llama-3.2-3B-Overthinker
Dataset automatically created during the evaluation run of model Lyte/Llama-3.2-3B-Overthinker
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/Lyte__Llama-3.2-3B-Overthinker-details.
|
pxyyy/RLHFlow_mixture_with_dart-math
|
pxyyy
|
Dataset Card for "RLHFlow_mixture_with_dart-math"
More Information needed
|
pxyyy/RLHFlow_mixture_with_AIMO-math
|
pxyyy
|
Dataset Card for "RLHFlow_mixture_with_AIMO-math"
More Information needed
|
sartifyllc/clean_loal_pretrain
|
sartifyllc
|
Dataset Card for sartifyllc/clean_loal_pretrain
Dataset Description
This is a Swahili dataset for CPT (continual pre-training).
Dataset Usage
This dataset can be used for [describe potential uses].
Dataset Structure
Features:
text: 3017 rows
token: 0.003997569B tokens so far
token per row: 1325.01 tokens per row on average
Dataset Creation
Source Data
Homepage:… See the full description on the dataset page: https://huggingface.co/datasets/sartifyllc/clean_loal_pretrain.
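As a quick sanity check, the tokens-per-row figure follows directly from the two totals stated on the card (a minimal sketch using the card's numbers):

```python
total_tokens = 3_997_569  # 0.003997569B tokens, as stated on the card
num_rows = 3_017          # number of rows, as stated on the card

tokens_per_row = total_tokens / num_rows
print(round(tokens_per_row, 4))  # → 1325.0146
```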
|
le-leadboard/MMMLU-fr
|
le-leadboard
|
Dataset Card for MMMLU-fr
le-leadboard/MMMLU-fr is part of the OpenLLM French Leaderboard initiative, offering a French adaptation of the MMMLU (Multilingual Massive Multitask Language Understanding) benchmark originally developed by OpenAI. It is an exact clone of the French split of the MMMLU dataset.
Dataset Summary
MMMLU-fr is the French adaptation of the MMMLU benchmark, incorporating more complex, reasoning-focused questions with… See the full description on the dataset page: https://huggingface.co/datasets/le-leadboard/MMMLU-fr.
|
BaoLocTown/emotion-vn
|
BaoLocTown
|
Dataset Card for emotion-vn
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/BaoLocTown/emotion-vn/raw/main/pipeline.yaml"
or explore the configuration:
distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/BaoLocTown/emotion-vn.
|
UmeAiRT/ume-j1900
|
UmeAiRT
|
Ume - Romanticism dataset
I provide the images used to create my LoRA: https://huggingface.co/UmeAiRT/FLUX.1-dev-LoRA-Ume_J1900
|
nyuuzyou/znanio-images
|
nyuuzyou
|
Dataset Card for Znanio.ru Educational Images
Dataset Summary
This dataset contains 19,060 educational images from the znanio.ru platform, a resource for teachers, educators, students, and parents providing diverse educational content. Znanio.ru has been a pioneer in educational technologies and distance learning in the Russian-speaking internet since 2009.
Languages
The dataset is primarily in Russian, with potential multilingual content:
Russian… See the full description on the dataset page: https://huggingface.co/datasets/nyuuzyou/znanio-images.
|
open-llm-leaderboard/SicariusSicariiStuff__LLAMA-3_8B_Unaligned_BETA-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA
Dataset automatically created during the evaluation run of model SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/SicariusSicariiStuff__LLAMA-3_8B_Unaligned_BETA-details.
|
open-llm-leaderboard/SicariusSicariiStuff__Zion_Alpha-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of SicariusSicariiStuff/Zion_Alpha
Dataset automatically created during the evaluation run of model SicariusSicariiStuff/Zion_Alpha
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/SicariusSicariiStuff__Zion_Alpha-details.
|
aposadasn/koch_test
|
aposadasn
|
This dataset was created using LeRobot.
|
open-llm-leaderboard/DeepAutoAI__d2nwg_causal_gpt2_v1-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of DeepAutoAI/d2nwg_causal_gpt2_v1
Dataset automatically created during the evaluation run of model DeepAutoAI/d2nwg_causal_gpt2_v1
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/DeepAutoAI__d2nwg_causal_gpt2_v1-details.
|
nyuuzyou/znanio-videos
|
nyuuzyou
|
Dataset Card for Znanio.ru Educational Videos
Dataset Summary
This dataset contains 6,653 educational videos from the znanio.ru platform, a resource for teachers, educators, students, and parents providing diverse educational content. Znanio.ru has been a pioneer in educational technologies and distance learning in the Russian-speaking internet since 2009.
Languages
The dataset is primarily in Russian, with potential multilingual content:
Russian… See the full description on the dataset page: https://huggingface.co/datasets/nyuuzyou/znanio-videos.
|
open-llm-leaderboard/nbeerbower__Llama-3.1-Nemotron-lorablated-70B-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of nbeerbower/Llama-3.1-Nemotron-lorablated-70B
Dataset automatically created during the evaluation run of model nbeerbower/Llama-3.1-Nemotron-lorablated-70B
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/nbeerbower__Llama-3.1-Nemotron-lorablated-70B-details.
|
akhooli/ar_mmarco_qs100
|
akhooli
|
license: mit
dataset_info:
  features:
    - name: query_id
      dtype: int64
    - name: text
      dtype: string
    - name: document_ids
      sequence: string
    - name: scores
      sequence: float64
  splits:
    - name: train
      num_bytes: 67982515
      num_examples: 100000
  download_size: 34775289
  dataset_size: 67982515
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
This dataset
The first 100k query-doc-score rows using… See the full description on the dataset page: https://huggingface.co/datasets/akhooli/ar_mmarco_qs100.
|
leftyfeep/robot_e_howard
|
leftyfeep
|
Dataset Card for Dataset Name
This dataset comprises 94 fictional works written by Robert E Howard.
Dataset Details
Dataset Description
This dataset comprises 94 fictional works written by Robert E Howard. It includes all the stories/novellas/novels I could find by Robert E Howard that are in the public domain, minus the Breckenridge Elkins stories. I didn't want the extreme dialect/slang in those to affect any models trained on this.
This second version of the dataset is to… See the full description on the dataset page: https://huggingface.co/datasets/leftyfeep/robot_e_howard.
|
Ailurion/feni-nanoparticles
|
Ailurion
|
Machine learning-based prediction of FeNi nanoparticle magnetization
Public data for "Machine learning-based prediction of FeNi nanoparticle magnetization", F. Williamson et al., Journal of Materials Research and Technology (2024). https://doi.org/10.1016/j.jmrt.2024.10.142.
ML Scripts
ML scripts are available on GitHub.
Data
Nanoparticles were simulated using LAMMPS.
A single LAMMPS input script from this extended repository was modified to obtain… See the full description on the dataset page: https://huggingface.co/datasets/Ailurion/feni-nanoparticles.
|
takara-ai/FloodNet_2021-Track_2_Dataset_HF
|
takara-ai
|
FloodNet: High Resolution Aerial Imagery Dataset for Post-Flood Scene Understanding
This is the HF-hosted version of FloodNet.
The FloodNet 2021: A High Resolution Aerial Imagery Dataset for Post-Flood Scene Understanding provides high-resolution UAS imagery with detailed semantic annotation of the damage. To advance the damage assessment process for post-disaster scenarios, the authors of the dataset presented a unique challenge considering classification, semantic… See the full description on the dataset page: https://huggingface.co/datasets/takara-ai/FloodNet_2021-Track_2_Dataset_HF.
|
2pir/contract_scoring_v6
|
2pir
|
Uses the Fabric prompts.
Generates the responses using GPT_4o, LLAMA31_70B, and LLAMA32, then scores each response three times using GPT_4o at zero temperature.
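A minimal sketch of how three zero-temperature scoring passes might be aggregated; the averaging step and the score values are assumptions for illustration, not documented on the card:

```python
from statistics import mean

# Hypothetical scores from three GPT_4o scoring passes over one response
scores = [4.0, 4.5, 4.0]

final_score = mean(scores)  # aggregate the three passes (assumed: simple average)
print(round(final_score, 2))  # → 4.17
```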
|
justinsunqiu/synthetic_website_parts
|
justinsunqiu
|
Dataset Card
Add more information here
This dataset was produced with DataDreamer 🤖💤. The synthetic dataset card can be found here.
|
gabrieljesus/Labadain-30k-plus-tetun
|
gabrieljesus
|
Labadain-30k+: A Monolingual Tetun Document-Level Audited Dataset
Labadain-30k+ is a monolingual Tetun dataset, audited at the document level by native Tetun speakers. It contains 33,550 documents collected between June 2001 and September 2023, excluding 2004 and 2005, for which no documents are available. The dataset was acquired through web crawling using Labadain Crawler and includes metadata such as DocID, title, URL, source, category, publication date, and content. Each… See the full description on the dataset page: https://huggingface.co/datasets/gabrieljesus/Labadain-30k-plus-tetun.
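The per-document metadata listed above can be modeled as a simple record; the class name, field names, and values below are hypothetical illustrations, not the dataset's actual schema:

```python
from dataclasses import dataclass

@dataclass
class LabadainDocument:
    # Hypothetical record; fields mirror the metadata named on the card
    doc_id: str
    title: str
    url: str
    source: str
    category: str
    publication_date: str
    content: str

# Illustrative values only
doc = LabadainDocument(
    doc_id="00001",
    title="Ezemplu",
    url="https://example.tl/artigu",
    source="news",
    category="national",
    publication_date="2023-09-01",
    content="...",
)
print(doc.publication_date)  # → 2023-09-01
```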
|
FrancophonIA/LVF
|
FrancophonIA
|
Dataset origin: http://rali.iro.umontreal.ca/rali/?q=fr/versions-informatisees-lvf-dem
|
FrancophonIA/BAF
|
FrancophonIA
|
Dataset origin: http://rali.iro.umontreal.ca/rali/?q=fr/BAF
|
FrancophonIA/TREC
|
FrancophonIA
|
Dataset origin: http://rali.iro.umontreal.ca/rali/?q=fr/1893%20questions%20fr
|
FrancophonIA/Bitextes_wikipedia
|
FrancophonIA
|
Dataset origin: http://rali.iro.umontreal.ca/rali/?q=fr/node/1293
|
FrancophonIA/Bitextes_meteo
|
FrancophonIA
|
Dataset origin: http://rali.iro.umontreal.ca/rali/?q=fr/node/1255
Two folders: "warnings" and "météo"
|
jjz5463/topics_common_crawl_xlarge
|
jjz5463
|
Dataset Card
Add more information here
This dataset was produced with DataDreamer 🤖💤. The synthetic dataset card can be found here.
|
open-llm-leaderboard/DreadPoor__Emu_Eggs-9B-Model_Stock-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of DreadPoor/Emu_Eggs-9B-Model_Stock
Dataset automatically created during the evaluation run of model DreadPoor/Emu_Eggs-9B-Model_Stock
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/DreadPoor__Emu_Eggs-9B-Model_Stock-details.
|
fadliaulawi/distilabel-reflection-tuning
|
fadliaulawi
|
Dataset Card for distilabel-reflection-tuning
This dataset has been created with distilabel.
The pipeline script was uploaded to easily reproduce the dataset:
ipykernel_launcher.py.
It can be run directly using the CLI:
distilabel pipeline run --script "https://huggingface.co/datasets/fadliaulawi/distilabel-reflection-tuning/raw/main/ipykernel_launcher.py"
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the… See the full description on the dataset page: https://huggingface.co/datasets/fadliaulawi/distilabel-reflection-tuning.
|
open-llm-leaderboard/meditsolutions__Llama-3.2-SUN-2.4B-v1.0.0-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of meditsolutions/Llama-3.2-SUN-2.4B-v1.0.0
Dataset automatically created during the evaluation run of model meditsolutions/Llama-3.2-SUN-2.4B-v1.0.0
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated task.
The dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/meditsolutions__Llama-3.2-SUN-2.4B-v1.0.0-details.
|
Wonder-Griffin/invoicetracker
|
Wonder-Griffin
|
Dataset Card for Dataset Name
Invoice training data for identifying areas of importance and tracking data over the long term
Dataset Details
Dataset Description
Curated by: [More Information Needed]
Funded by [optional]: [More Information Needed]
Shared by [optional]: [More Information Needed]
Language(s) (NLP): [More Information Needed]
License: [More Information Needed]
Dataset Sources [optional]
Repository: [More Information Needed]
Paper… See the full description on the dataset page: https://huggingface.co/datasets/Wonder-Griffin/invoicetracker.
|
skoll520/fudencio_voicelines_rttm
|
skoll520
|
What is this?
This is a dataset extracted from the Brazilian cartoon Fudêncio (by MTV), which is somewhat similar to South Park.
This dataset has three features:
rttm strings (to identify speakers), episode_name (for reference), and the audio voicelines, isolated a cappella using Demucs (no_voice files are not included).
Demucs's separation is not perfect, but it can help people train RVC voices for different characters.
Planned future upgrades
I… See the full description on the dataset page: https://huggingface.co/datasets/skoll520/fudencio_voicelines_rttm.
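As a rough illustration of what an rttm string encodes, a line following the standard NIST RTTM convention can be parsed like this (the exact field layout used by this dataset is an assumption, and the file/speaker names below are hypothetical):

```python
# Minimal sketch: parse one RTTM speaker line into (speaker, start, duration).
# Follows the common 10-field NIST RTTM convention:
# SPEAKER <file> <chan> <start> <dur> <NA> <NA> <speaker> <NA> <NA>
def parse_rttm_line(line: str):
    fields = line.split()
    return fields[7], float(fields[3]), float(fields[4])

speaker, start, dur = parse_rttm_line(
    "SPEAKER ep01 1 12.50 3.20 <NA> <NA> fudencio <NA> <NA>"
)
# speaker identifies who talks, start/dur locate the voiceline in the episode
```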
|
han5i5j1986/eval_koch_lego_2024-10-18
|
han5i5j1986
|
This dataset was created using LeRobot.
|
han5i5j1986/koch_lego_2024-10-19-01
|
han5i5j1986
|
This dataset was created using LeRobot.
|
mbhseg/mbhseg24
|
mbhseg
|
Brain Hemorrhage Segmentation Dataset (BHSD)
Our dataset is hosting its latest competition at MICCAI! Stay tuned for our contests and win great prizes!
Competition Links
|
Description
The Brain Hemorrhage Segmentation Dataset (BHSD) is a 3D multi-class segmentation dataset for intracranial hemorrhage (ICH). Intracranial hemorrhage is a pathological condition characterized by bleeding… See the full description on the dataset page: https://huggingface.co/datasets/mbhseg/mbhseg24.
|
EDGEwww25/EDGE-Dataset
|
EDGEwww25
|
This is the dataset repository of paper EDGE: Enhanced Grounded GUI Understanding with Enriched Multi-Granularity Synthetic Data.
Given the huge number of images, the all_items.jsonl provided here contains the final QA pairs for training but does not yet include the images. We will release all images as soon as possible.
You can also follow the mark_webpages and dataset.py scripts provided in the code repository to generate your own webpage image-question-answering dataset.… See the full description on the dataset page: https://huggingface.co/datasets/EDGEwww25/EDGE-Dataset.
|
open-llm-leaderboard/TheTsar1209__qwen-carpmuscle-v0.2-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of TheTsar1209/qwen-carpmuscle-v0.2
Dataset automatically created during the evaluation run of model TheTsar1209/qwen-carpmuscle-v0.2
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/TheTsar1209__qwen-carpmuscle-v0.2-details.
|
aplycaebous/BanglaTLit
|
aplycaebous
|
BanglaTLit: A Benchmark Dataset for Back-Transliteration of Romanized Bangla
Dataset Overview
BanglaTLit-PT: A pre-training corpus with 245,727 transliterated or romanized Bangla samples for further pre-training language models.
BanglaTLit: Subset of the BanglaTLit-PT dataset containing 42,705 romanized Bangla samples and their corresponding Bangla back-transliteration pairs.
Data Description
Column Title
Description
id
A unique identifier for… See the full description on the dataset page: https://huggingface.co/datasets/aplycaebous/BanglaTLit.
|
iknow-lab/wildguardmix-test-ko
|
iknow-lab
|
Original dataset: allenai/wildguardmix
Translated by nayohan/llama3-instrucTrans-enko-8b
@misc{wildguard2024,
title={WildGuard: Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs},
author={Seungju Han and Kavel Rao and Allyson Ettinger and Liwei Jiang and Bill Yuchen Lin and Nathan Lambert and Yejin Choi and Nouha Dziri},
year={2024},
eprint={2406.18495},
archivePrefix={arXiv},
primaryClass={cs.CL}… See the full description on the dataset page: https://huggingface.co/datasets/iknow-lab/wildguardmix-test-ko.
|
henryen/origen_dataset_debug
|
henryen
|
OriGen: Enhancing RTL Code Generation with Code-to-Code Augmentation and Self-Reflection
Introduction
OriGen is a fine-tuned LoRA model designed for Verilog code generation. It is trained on top of DeepSeek Coder 7B using datasets generated from code-to-code augmentation and self-reflection. The datasets can be found in origen_dataset_instruction.
OriGen_Fix is a fine-tuned LoRA model designed for fixing syntax errors in Verilog code. It is trained based on… See the full description on the dataset page: https://huggingface.co/datasets/henryen/origen_dataset_debug.
|
iknow-lab/wildguardmix-train-ko-11k
|
iknow-lab
|
Original dataset: allenai/wildguardmix
Translated by nayohan/llama3-instrucTrans-enko-8b
@misc{wildguard2024,
title={WildGuard: Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs},
author={Seungju Han and Kavel Rao and Allyson Ettinger and Liwei Jiang and Bill Yuchen Lin and Nathan Lambert and Yejin Choi and Nouha Dziri},
year={2024},
eprint={2406.18495},
archivePrefix={arXiv},
primaryClass={cs.CL}… See the full description on the dataset page: https://huggingface.co/datasets/iknow-lab/wildguardmix-train-ko-11k.
|
raphaeldoan/fayumportraits
|
raphaeldoan
|
A little repository of Fayum portraits from Roman Egypt, taken from Wikimedia Commons and other open sources. Made for LoRA training of ancient art.
|
raphaeldoan/RomanPaintings
|
raphaeldoan
|
A collection of ancient Roman paintings, for the purpose of training LoRAs on ancient art.
|
irhawks/floating-det
|
irhawks
|
Description
A floating object is a common type of page element found in academic literature, books, and other formal publications. In LaTeX, a floating object typically refers to a container that can hold text, images, tables, code, algorithms, and other content. The placement of these containers within the document is automatically adjusted by LaTeX to fit the page layout. To facilitate indexing and readability, floating objects are usually accompanied by additional information… See the full description on the dataset page: https://huggingface.co/datasets/irhawks/floating-det.
|
kanhatakeyama/chatbot-arena-ja-elo-rating
|
kanhatakeyama
|
Leaderboard
This is the leaderboard for ChatBotArena-ja.
It is updated roughly once every two hours.
|
irhawks/floating-fsa
|
irhawks
|
Description
A floating object is a common type of page element found in academic literature, books, and other formal publications. In LaTeX, a floating object typically refers to a container that can hold text, images, tables, code, algorithms, and other content. The placement of these containers within the document is automatically adjusted by LaTeX to fit the page layout. To facilitate indexing and readability, floating objects are usually accompanied by additional information… See the full description on the dataset page: https://huggingface.co/datasets/irhawks/floating-fsa.
|
d4niel92/fosllms-week-1
|
d4niel92
|
Dataset Card for fosllms-week-1
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/d4niel92/fosllms-week-1/raw/main/pipeline.yaml"
or explore the configuration:
distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/d4niel92/fosllms-week-1.
|
nyuuzyou/znanio-audios
|
nyuuzyou
|
Dataset Card for Znanio.ru Educational Audio
Dataset Summary
This dataset contains 3,417 educational audio files from the znanio.ru platform, a resource for teachers, educators, students, and parents providing diverse educational content. Znanio.ru has been a pioneer in educational technologies and distance learning in the Russian-speaking internet since 2009.
Languages
The dataset is primarily in Russian, with potential multilingual content:… See the full description on the dataset page: https://huggingface.co/datasets/nyuuzyou/znanio-audios.
|
dazhangyu123/AEM-dataset
|
dazhangyu123
|
WSI Classification Dataset for AEM
Dataset Summary
This dataset is derived from the publicly available CAMELYON16 and CAMELYON17 datasets. It consists of feature embeddings extracted from tissue patches of whole slide images (WSIs) using various pre-trained models. The dataset is designed for use in multiple instance learning (MIL) based WSI classification tasks, particularly for the Attention Entropy Maximization (AEM) method.
Usage
For detailed… See the full description on the dataset page: https://huggingface.co/datasets/dazhangyu123/AEM-dataset.
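As background on how MIL-based WSI classification aggregates patch-level feature embeddings into a slide-level representation, here is a generic attention-pooling sketch (not the AEM method itself; shapes and weights are illustrative):

```python
import numpy as np

def attention_mil_pool(bag: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Pool a bag of patch embeddings into one slide-level embedding."""
    scores = bag @ w                       # one attention score per patch
    attn = np.exp(scores - scores.max())   # numerically stable softmax
    attn /= attn.sum()
    return attn @ bag                      # attention-weighted average

bag = np.random.rand(16, 8)                    # 16 patches, 8-dim embeddings
pooled = attention_mil_pool(bag, np.zeros(8))  # zero weights -> uniform attention
```

With uniform attention the pooled vector is just the mean of the patch embeddings; a learned `w` lets informative patches dominate the slide-level prediction.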
|
henryen/origen_dataset_description
|
henryen
|
OriGen: Enhancing RTL Code Generation with Code-to-Code Augmentation and Self-Reflection
Introduction
OriGen is a fine-tuned LoRA model designed for Verilog code generation. It is trained on top of DeepSeek Coder 7B using datasets generated from code-to-code augmentation and self-reflection. The datasets can be found in origen_dataset_instruction.
OriGen_Fix is a fine-tuned LoRA model designed for fixing syntax errors in Verilog code. It is trained based on… See the full description on the dataset page: https://huggingface.co/datasets/henryen/origen_dataset_description.
|
iknow-lab/hermes-function-calling-v1-ko
|
iknow-lab
|
Original Data: NousResearch/hermes-function-calling-v1
Translated with GPT-4o-mini; about 1,300 samples were excluded because they were too long or produced errors.
Example 1
[
{
"role": "system",
"content": "당신은 전문가로서 구조화된 정보 추출 AI 모델입니다. 정보 추출을 위해 문서를 제공받습니다. 추출된 정보를 XML 태그 <tools></tools> 내의 함수 서명 형태로 출력할 json 스키마도 제공받습니다. json 스키마에 어떤 값을 넣을지 가정하지 마세요. \n<tools>\n[{\"type\": \"function\", \"function\": {\"name\": \"ExpertQAExtractor\", \"description\": \"문서에서 개념이나 정보가 실제 상황에 어떻게 적용될 수 있는지를 묻는… See the full description on the dataset page: https://huggingface.co/datasets/iknow-lab/hermes-function-calling-v1-ko.
|
SunnyAgarwal4274/Food_and_Vegetables
|
SunnyAgarwal4274
|
Dataset Card for Fruits and Vegetables Dataset
This dataset contains images of various fruits and vegetables, aimed at facilitating the development and evaluation of image classification models for agricultural technology and dietary applications.
Dataset Details
Dataset Description
This dataset is a collection of high-quality images of fruits and vegetables, organized into distinct classes for effective training of machine learning models. It… See the full description on the dataset page: https://huggingface.co/datasets/SunnyAgarwal4274/Food_and_Vegetables.
|
Abdou/quran-riwayat
|
Abdou
|
Riwayat of the Holy Quran
This dataset is a collection of 8 Riwayat (of 4 Qira'at) of the Quran:
hafs: the narration of Hafs from 'Asim
shouba: the narration of Shu'ba from 'Asim
warsh: the narration of Warsh from Nafi'
qaloon: the narration of Qalun from Nafi'
alsosi: the narration of al-Susi from Abu 'Amr al-Basri
aldoori: the narration of al-Duri from Abu 'Amr al-Basri
qumbul: the narration of Qunbul from Ibn Kathir al-Makki
albazzi: the narration of al-Bazzi from Ibn Kathir al-Makki
Scraped from https://surahquran.com/, so thanks to them!
|
mastergokul/Bible
|
mastergokul
|
Full Bible, chapter-wise - Tamil
Web scraped from https://bible.catholicgallery.org/ecu-tamil/
|
DeepMount00/Sonnet-3.5-ITA-INSTRUCT
|
DeepMount00
|
Sonnet 3.5 🇮🇹 Dataset 📊 Model Card
Overview
The Sonnet 3.5 🇮🇹 Dataset is a high-quality 🏆 dataset for various NLP 🗣️ tasks in Italian. It includes programming 💻 code, instruction-following 📜 examples, Q&A ❓❗ pairs, and translations 🌍, providing rich data to enhance Italian language models.
Key Features
Language: Primarily Italian 🇮🇹, ideal for Italian NLP.
Content Types:
Code: 💻 snippets with explanations.
Instruction Following: 📜 Examples for task… See the full description on the dataset page: https://huggingface.co/datasets/DeepMount00/Sonnet-3.5-ITA-INSTRUCT.
|
open-llm-leaderboard/BlackBeenie__llama-3.1-8B-Galore-openassistant-guanaco-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of BlackBeenie/llama-3.1-8B-Galore-openassistant-guanaco
Dataset automatically created during the evaluation run of model BlackBeenie/llama-3.1-8B-Galore-openassistant-guanaco
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/BlackBeenie__llama-3.1-8B-Galore-openassistant-guanaco-details.
|
adamo1139/4chan_archive_ShareGPT_with_rating_and_comments
|
adamo1139
|
I took adamo1139/4chan_archive_ShareGPT_fixed_newlines_unfiltered and removed broken samples, such as those with no two-sided conversation or with empty responses (most likely someone just posted an image there and the image wasn't scraped).
Then I used Hermes 3 8B (mostly W8A8) to add comments to each sample and a final score from 0 to 5. The result is this dataset. I had to process a few billion tokens to create it.
I now plan to further filter down the dataset and most… See the full description on the dataset page: https://huggingface.co/datasets/adamo1139/4chan_archive_ShareGPT_with_rating_and_comments.
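A minimal sketch of the kind of score-based filtering planned here (the "rating" and "conversations" field names are assumptions about this dataset's schema):

```python
# Hedged sketch: keep only samples at or above a rating threshold.
def filter_by_rating(samples, min_rating=4):
    return [s for s in samples if s.get("rating", 0) >= min_rating]

data = [
    {"conversations": ["..."], "rating": 5},
    {"conversations": ["..."], "rating": 2},
]
kept = filter_by_rating(data)  # only the rating-5 sample survives
```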
|
SasikaA073/LK_Solar_Dataset
|
SasikaA073
|
Dataset Card for Solar Irradiance Data in Sri Lanka
Dataset Summary
This dataset provides solar irradiance data for Sri Lanka, including historical and current solar radiation measurements. The data is vital for renewable energy applications, particularly in solar energy generation.
Supported Tasks and Leaderboards
This dataset can be used for tasks related to solar energy forecasting, climate modeling, and geographical information systems (GIS)… See the full description on the dataset page: https://huggingface.co/datasets/SasikaA073/LK_Solar_Dataset.
|
dialogi/projekti_lonnrot
|
dialogi
|
Projekti Lönnrot
Project Lönnrot contains public domain Finnish books. As quoted from their website:
Elias Lönnrot collected folk poetry for future generations; we collect and rescue for the future our old literature, which is often hard to obtain and at risk of being lost. In accordance with the EU's so-called 70+ copyright rules, we digitize Finnish- and Swedish-language works that have entered the public domain into e-books, freely available to everyone for open internet distribution. The works are generally… See the full description on the dataset page: https://huggingface.co/datasets/dialogi/projekti_lonnrot.
|