id | author | description
---|---|---|
han5i5j1986/eval_koch_lego_2024-10-17-02
|
han5i5j1986
|
This dataset was created using LeRobot.
|
trl-internal-testing/tiny-ultrafeedback-binarized
|
trl-internal-testing
|
from datasets import load_dataset

push_to_hub = True

def is_small(example):
    small_prompt = len(example["chosen"][0]["content"]) < 100
    small_chosen = len(example["chosen"][1]["content"]) < 100
    small_rejected = len(example["rejected"][1]["content"]) < 100
    return small_prompt and small_chosen and small_rejected

if __name__ == "__main__":
    dataset = load_dataset("trl-lib/ultrafeedback_binarized")
    dataset = dataset.filter(is_small)
    if push_to_hub:… See the full description on the dataset page: https://huggingface.co/datasets/trl-internal-testing/tiny-ultrafeedback-binarized.
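Once pushed, the filtered subset can be loaded back like any other Hub dataset; a minimal sketch (the "train" split name is assumed to carry over from the source dataset):

```python
from datasets import load_dataset

# Hypothetical usage: pull the filtered tiny preference pairs for a quick smoke test.
tiny = load_dataset("trl-internal-testing/tiny-ultrafeedback-binarized", split="train")
print(len(tiny), tiny.column_names)
```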
|
han5i5j1986/koch_lego_2024-10-17-tmp
|
han5i5j1986
|
This dataset was created using LeRobot.
|
han5i5j1986/koch_lego_2024-10-17-tmp-1
|
han5i5j1986
|
This dataset was created using LeRobot.
|
dvilasuero/meta-llama_Llama-3.1-70B-Instruct_thinking_mmlu-pro_20241017_105801
|
dvilasuero
|
Dataset Card for meta-llama_Llama-3.1-70B-Instruct_thinking_mmlu-pro_20241017_105801
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config… See the full description on the dataset page: https://huggingface.co/datasets/dvilasuero/meta-llama_Llama-3.1-70B-Instruct_thinking_mmlu-pro_20241017_105801.
|
han5i5j1986/eval_koch_lego_2024-10-17-04
|
han5i5j1986
|
This dataset was created using LeRobot.
|
mxforml/conv_xai_short
|
mxforml
| |
FY-taott/G4Beacon2PreEmbedding
|
FY-taott
|
Pre-Embedding of G4Beacon2
1. Introduction
1.1 G4Beacon2
In the G4Beacon2 GitHub repository, we presented G4Beacon2, a genome-wide, cell-specific G-quadruplex (G4) prediction tool, and provided a detailed overview of its program, data, and usage methods. This dataset acts as a supplementary resource containing pre-embedded DNA sequence data. We encourage you to use it in conjunction with the GitHub repository.
1.2 Pre-embedding… See the full description on the dataset page: https://huggingface.co/datasets/FY-taott/G4Beacon2PreEmbedding.
|
smartcat/Amazon_All_Beauty_2023
|
smartcat
|
Dataset Card for Dataset Name
Original dataset can be found on: https://amazon-reviews-2023.github.io/
Dataset Details
This dataset is downloaded from the link above, the category Amazon All Beauty meta dataset.
Dataset Description
This dataset is a refined version of the Amazon All Beauty 2023 meta dataset, which originally contained product metadata for beauty products sold on Amazon. The dataset includes detailed information about products such… See the full description on the dataset page: https://huggingface.co/datasets/smartcat/Amazon_All_Beauty_2023.
|
open-llm-leaderboard/Gunulhona__Gemma-Ko-Merge-PEFT-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of Gunulhona/Gemma-Ko-Merge-PEFT
Dataset automatically created during the evaluation run of model Gunulhona/Gemma-Ko-Merge-PEFT
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/Gunulhona__Gemma-Ko-Merge-PEFT-details.
|
jinliuxi/reason_chinese_10k
|
jinliuxi
|
Dataset: 10k Reasoning in Chinese for LoRA Model on LLaMA 3.2
This dataset contains 10,000 training examples used for fine-tuning a LoRA model on LLaMA 3.2, specifically designed to enhance reasoning capabilities in Chinese. The dataset is tailored to support tasks involving complex logical thinking, inference, and decision-making without relying on rigid formats or explicit chain-of-thought reasoning.
Key Features:
• Language: Chinese
• Task: Reasoning and Inference
• Model Target: LLaMA 3.2… See the full description on the dataset page: https://huggingface.co/datasets/jinliuxi/reason_chinese_10k.
|
OALL/details_speakleash__Bielik-11B-v2
|
OALL
|
Dataset Card for Evaluation run of speakleash/Bielik-11B-v2
Dataset automatically created during the evaluation run of model speakleash/Bielik-11B-v2.
The dataset is composed of 136 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An… See the full description on the dataset page: https://huggingface.co/datasets/OALL/details_speakleash__Bielik-11B-v2.
|
yuuzuX/yuuzux
|
yuuzuX
|
Detailed description
Dataset name: yuuzux
Download link: CKAN - yuuzuX/yuuzux
Author: yuuzuX
Updated: 2024-10-17T09:46:34.986Z
|
dino-zavr1/koch_test_2cam
|
dino-zavr1
|
This dataset was created using LeRobot.
|
open-llm-leaderboard/LeroyDyer__SpydazWeb_AI_HumanAI_001-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of LeroyDyer/SpydazWeb_AI_HumanAI_001
Dataset automatically created during the evaluation run of model LeroyDyer/SpydazWeb_AI_HumanAI_001
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/LeroyDyer__SpydazWeb_AI_HumanAI_001-details.
|
openpecha/stt-training-data
|
openpecha
|
STT Training Data
This dataset is prepared for Speech-to-Text (STT) training. It contains audio files from different departments, each annotated with their transcription. The dataset is split by department and includes the duration of audio data per department.
Dataset Overview
Total Audio Files: 1,391,020
Total Audio Length: 1,576.76 hours
Audio data collected as of: 18 Oct 2024
Department-wise Statistics
Department
Number of Audio Files… See the full description on the dataset page: https://huggingface.co/datasets/openpecha/stt-training-data.
|
libailin1120/test
|
libailin1120
|
Debatts-Data: The First Mandarin Rebuttal Speech Dataset for Expressive Text-to-Speech Synthesis
The Debatts-Data dataset is the first Mandarin rebuttal speech dataset for expressive text-to-speech synthesis. It is constructed from a vast collection of professional Mandarin speech data sourced from diverse video platforms and podcasts on the Internet. The in-the-wild collection approach ensures real and natural rebuttal speech. In addition, the dataset contains annotations of… See the full description on the dataset page: https://huggingface.co/datasets/libailin1120/test.
|
han5i5j1986/koch_lego_2024-10-17-04
|
han5i5j1986
|
This dataset was created using LeRobot.
|
HolyTreeCrowns/YaleKIContractArrangementDB
|
HolyTreeCrowns
|
YaleKIContractArrangementDB
tags: Machine Learning, Contract Analysis, Yale Innovation
Note: This is an AI-generated dataset so its content may be inaccurate or false
Dataset Description:
The 'YaleKIContractArrangementDB' dataset contains structured information extracted from contracts related to Yale's Key Innovation (KI) initiatives. The data is tailored for machine learning models that analyze and classify contractual agreements. The dataset focuses on various aspects such as… See the full description on the dataset page: https://huggingface.co/datasets/HolyTreeCrowns/YaleKIContractArrangementDB.
|
dvilasuero/meta-llama_Llama-3.1-70B-Instruct_thinking_ifeval_20241017_122400
|
dvilasuero
|
Dataset Card for meta-llama_Llama-3.1-70B-Instruct_thinking_ifeval_20241017_122400
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/dvilasuero/meta-llama_Llama-3.1-70B-Instruct_thinking_ifeval_20241017_122400/raw/main/pipeline.yaml"… See the full description on the dataset page: https://huggingface.co/datasets/dvilasuero/meta-llama_Llama-3.1-70B-Instruct_thinking_ifeval_20241017_122400.
|
han5i5j1986/koch_lego_2024-10-17-05
|
han5i5j1986
|
This dataset was created using LeRobot.
|
le-leadboard/gpqa-fr
|
le-leadboard
|
Dataset Card for gpqa-fr
le-leadboard/gpqa-fr is part of the OpenLLM French Leaderboard initiative, offering a French adaptation of the GPQA (Graduate-level Proof Q&A Benchmark) dataset to evaluate models' advanced reasoning capabilities in French.
Dataset Summary
GPQA-fr is a French adaptation of a set of 448 doctoral-level multiple-choice questions in biology, physics, and chemistry. These questions are designed to be… See the full description on the dataset page: https://huggingface.co/datasets/le-leadboard/gpqa-fr.
|
open-llm-leaderboard/ymcki__gemma-2-2b-jpn-it-abliterated-17-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of ymcki/gemma-2-2b-jpn-it-abliterated-17
Dataset automatically created during the evaluation run of model ymcki/gemma-2-2b-jpn-it-abliterated-17
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/ymcki__gemma-2-2b-jpn-it-abliterated-17-details.
|
dvilasuero/meta-llama_Llama-3.1-70B-Instruct_cot_ifeval_20241017_134405
|
dvilasuero
|
Dataset Card for meta-llama_Llama-3.1-70B-Instruct_cot_ifeval_20241017_134405
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/dvilasuero/meta-llama_Llama-3.1-70B-Instruct_cot_ifeval_20241017_134405/raw/main/pipeline.yaml"
or explore… See the full description on the dataset page: https://huggingface.co/datasets/dvilasuero/meta-llama_Llama-3.1-70B-Instruct_cot_ifeval_20241017_134405.
|
aimlresearch2023/my-distiset
|
aimlresearch2023
|
Dataset Card for my-distiset
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/aimlresearch2023/my-distiset/raw/main/pipeline.yaml"
or explore the configuration:
distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/aimlresearch2023/my-distiset.
|
le-leadboard/MATH_LVL5_fr
|
le-leadboard
|
Dataset Card for MATH_LVL5_fr
le-leadboard/MATH_LVL5_fr is part of the OpenLLM French Leaderboard initiative, offering a French adaptation of the advanced-level mathematics problems from the MATH dataset.
Dataset Summary
MATH_LVL5_fr is a French adaptation of the level 5 (most advanced) mathematics problems from the original MATH dataset. It comprises high-school-level mathematics competition problems, consistently formatted in LaTeX for… See the full description on the dataset page: https://huggingface.co/datasets/le-leadboard/MATH_LVL5_fr.
|
hkuds/RecGPT_dataset
|
hkuds
|
RecGPT-Dataset
Overview
This dataset contains the pre-training, evaluation, and test data used in the RecGPT work.
|
kgttg/BP
|
kgttg
|
Payroll Dataset
|
bobchar/single_shooting
|
bobchar
| |
unitreerobotics/UnitreeG1_DualArmGrasping
|
unitreerobotics
|
This dataset was collected using the Unitree G1 humanoid robot with dual-arm dexterous hands to grasp red wooden blocks; the head is equipped with binocular vision. The robot is teleoperated to grasp the red wooden blocks with both arms and place them into a black rectangular container.
The videos directory stores the related videos; the train directory stores the data information; the meta_data directory stores the metadata.
|
d0rj/reflection-v1-ru_subset
|
d0rj
|
d0rj/reflection-v1-ru_subset
Translated glaiveai/reflection-v1 dataset into Russian language using GPT-4o.
Almost all rows of the dataset have been translated. I removed translations that do not match the original in terms of the presence of the "thinking", "reflection" and "output" tags. The mapping to the original dataset rows is given by the "index" column.
Usage
import datasets
data = datasets.load_dataset("d0rj/reflection-v1-ru_subset")… See the full description on the dataset page: https://huggingface.co/datasets/d0rj/reflection-v1-ru_subset.
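A minimal sketch of using that "index" column to line a translated row up with its original (assuming the column stores the integer row position in glaiveai/reflection-v1 and that both datasets expose a "train" split):

```python
from datasets import load_dataset

ru = load_dataset("d0rj/reflection-v1-ru_subset", split="train")
en = load_dataset("glaiveai/reflection-v1", split="train")

row_ru = ru[0]
row_en = en[row_ru["index"]]  # the original English row this translation came from
```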
|
skyarff/happyOrSad
|
skyarff
|
Dataset Card for "happyOrSad"
More Information needed
|
galtimur/lca-bug-localization-test
|
galtimur
|
This is the test subset of the LCA bug localization dataset, enriched with the corresponding repository states to make it easier to work with.
|
zubersdefrwer/autocompelete
|
zubersdefrwer
|
[
  {
    "input": "buy milk",
    "suggestion": "buy milk tomorrow"
  },
  {
    "input": "call",
    "suggestion": "call mom"
  },
  {
    "input": "wish",
    "suggestion": "wish happy birthday to dad"
  },
  {
    "input": "book",
    "suggestion": "book a dentist appointment"
  },
  {
    "input": "pay",
    "suggestion": "pay electricity bill"
  },
  {
    "input": "remind",
    "suggestion": "remind me to take out the… See the full description on the dataset page: https://huggingface.co/datasets/zubersdefrwer/autocompelete.
|
gane5hvarma/test-alpaca
|
gane5hvarma
|
Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
Dataset Details
Dataset Description
Curated by: [More Information Needed]
Funded by [optional]: [More Information Needed]
Shared by [optional]: [More Information Needed]
Language(s) (NLP): [More Information Needed]
License: [More Information Needed]
Dataset Sources [optional]… See the full description on the dataset page: https://huggingface.co/datasets/gane5hvarma/test-alpaca.
|
qq8933/pprm_math_preference
|
qq8933
|
Citation
@article{zhang2024llama,
  title={LLaMA-Berry: Pairwise Optimization for O1-like Olympiad-Level Mathematical Reasoning},
  author={Zhang, Di and Wu, Jianbo and Lei, Jingdi and Che, Tong and Li, Jiatong and Xie, Tong and Huang, Xiaoshui and Zhang, Shufei and Pavone, Marco and Li, Yuqiang and others},
  journal={arXiv preprint arXiv:2410.02884},
  year={2024}
}
@article{zhang2024accessing,
  title={Accessing GPT-4 level Mathematical Olympiad Solutions via Monte Carlo… See the full description on the dataset page: https://huggingface.co/datasets/qq8933/pprm_math_preference.
|
open-llm-leaderboard/piotr25691__thea-c-3b-25r-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of piotr25691/thea-c-3b-25r
Dataset automatically created during the evaluation run of model piotr25691/thea-c-3b-25r
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/piotr25691__thea-c-3b-25r-details.
|
ilhamfadheel/alpaca-cleaned-indonesian
|
ilhamfadheel
|
🦙🛁 Cleaned Alpaca Dataset (INDONESIAN)
Welcome to the Cleaned Alpaca Dataset repository! This repository hosts a cleaned and curated version of a dataset used to train the Alpaca LLM (Large Language Model). The original dataset had several issues that are addressed in this cleaned version.
On April 8, 2023 the remaining uncurated instructions (~50,000) were replaced with data from the GPT-4-LLM dataset. Curation of the incoming GPT-4 data is ongoing.
A 7b Lora model (trained… See the full description on the dataset page: https://huggingface.co/datasets/ilhamfadheel/alpaca-cleaned-indonesian.
|
pxyyy/MagpieLM-SFT-Data-v0.1
|
pxyyy
|
Dataset Card for "MagpieLM-SFT-Data-v0.1"
More Information needed
|
Omarrran/dr_andrew_Huberman_eng_tts_hf_dataset
|
Omarrran
|
Omarrran/dr_andrew_Huberman_eng_tts_hf_dataset
Dataset Description
This dataset contains English text-to-speech (TTS) data, including paired text and audio files.
Dataset Statistics
Total number of samples: 3159
Number of samples in train split: 2527
Number of samples in test split: 632
Audio Statistics
Total audio duration: 31563.23 seconds (8.77 hours)
Average audio duration: 9.99 seconds
Minimum audio duration: 1.09 seconds… See the full description on the dataset page: https://huggingface.co/datasets/Omarrran/dr_andrew_Huberman_eng_tts_hf_dataset.
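The summary statistics above are internally consistent; a quick arithmetic check (all numbers copied from the statistics listed above):

```python
total_samples = 2527 + 632                       # train + test
total_seconds = 31563.23
print(total_samples)                             # 3159
print(round(total_seconds / 3600, 2))            # 8.77 hours
print(round(total_seconds / total_samples, 2))   # 9.99 s average clip length
```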
|
open-llm-leaderboard/Etherll__Herplete-LLM-Llama-3.1-8b-Ties-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of Etherll/Herplete-LLM-Llama-3.1-8b-Ties
Dataset automatically created during the evaluation run of model Etherll/Herplete-LLM-Llama-3.1-8b-Ties
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/Etherll__Herplete-LLM-Llama-3.1-8b-Ties-details.
|
claran/m2d2-wiki-decon
|
claran
|
Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
Dataset Details
Dataset Description
Curated by: [More Information Needed]
Funded by [optional]: [More Information Needed]
Shared by [optional]: [More Information Needed]
Language(s) (NLP): [More Information Needed]
License: [More Information Needed]
Dataset Sources [optional]… See the full description on the dataset page: https://huggingface.co/datasets/claran/m2d2-wiki-decon.
|
open-llm-leaderboard/zelk12__MT3-gemma-2-9B-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of zelk12/MT3-gemma-2-9B
Dataset automatically created during the evaluation run of model zelk12/MT3-gemma-2-9B
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/zelk12__MT3-gemma-2-9B-details.
|
ai2-adapt-dev/ultrafeedback-10k
|
ai2-adapt-dev
|
Ultrafeedback 10k for small-scale experiments
I sampled 10k instances from the original Ultrafeedback as a baseline to run small-scale DPO experiments.
The 50k from Tulu 3.4 SFT Replica is quite big, and very expensive to run ablations upon.
from datasets import load_dataset
load_dataset("ai2-adapt-dev/ultrafeedback-10k", "main", split="train")
The templated versions for each aspect are also available as configs: helpfulness, honesty, truthfulness, and instruction_following.… See the full description on the dataset page: https://huggingface.co/datasets/ai2-adapt-dev/ultrafeedback-10k.
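A minimal sketch of loading one of those aspect configs (assuming the config name matches the aspect exactly, as listed above, and that a "train" split exists as in the "main" config):

```python
from datasets import load_dataset

# Hypothetical usage: load the "helpfulness" templated view of the 10k sample.
helpfulness = load_dataset("ai2-adapt-dev/ultrafeedback-10k", "helpfulness", split="train")
print(helpfulness.column_names)
```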
|
arsaporta/symile-m3
|
arsaporta
|
Dataset Card for Symile-M3
Symile-M3 is a multilingual dataset of (audio, image, text) samples. The dataset is specifically designed to test a model's ability to capture higher-order information between three distinct high-dimensional data types: by incorporating multiple languages, we construct a task where text and audio are both needed to predict the image, and where, importantly, neither text nor audio alone would suffice.
Dataset description… See the full description on the dataset page: https://huggingface.co/datasets/arsaporta/symile-m3.
|
open-llm-leaderboard/prince-canuma__Ministral-8B-Instruct-2410-HF-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of prince-canuma/Ministral-8B-Instruct-2410-HF
Dataset automatically created during the evaluation run of model prince-canuma/Ministral-8B-Instruct-2410-HF
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/prince-canuma__Ministral-8B-Instruct-2410-HF-details.
|
bunnycore/Reasoning
|
bunnycore
|
Dataset Card for my-distiset
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/bunnycore/my-distiset/raw/main/pipeline.yaml"
or explore the configuration:
distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/bunnycore/Reasoning.
|
bunnycore/Kind-Teacher
|
bunnycore
|
Dataset Card for my-distisets
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/bunnycore/my-distisets/raw/main/pipeline.yaml"
or explore the configuration:
distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/bunnycore/Kind-Teacher.
|
HumanoidTeam/robograsp_hackathon_2024
|
HumanoidTeam
|
2024 RoboGrasp Dataset
This is the dataset for the RoboGrasp Hackathon 2024.
It includes 108 pick-and-place robot episodes collected in a simulated Mobile ALOHA environment through teleoperated demonstrations.
During the collected episodes the right arm of the robot is used to pick an item from the table and put it in a box on the top of the table.
There are three types of items:
green cube (47% of episodes)
red sphere (30% of episodes)
blue cylinder (22% of… See the full description on the dataset page: https://huggingface.co/datasets/HumanoidTeam/robograsp_hackathon_2024.
|
Salesforce/PROVE
|
Salesforce
|
Trust but Verify: Programmatic VLM Evaluation in the Wild
Viraj Prabhu, Senthil Purushwalkam, An Yan, Caiming Xiong, Ran Xu
Explorer
| Paper
| Quickstart
Vision-Language Models (VLMs) often generate plausible but incorrect responses to visual queries. However, reliably quantifying the effect of such hallucinations in free-form responses to open-ended queries is challenging as it requires visually verifying each claim within the response. We propose Programmatic… See the full description on the dataset page: https://huggingface.co/datasets/Salesforce/PROVE.
|
FrancophonIA/questions-ou-livres-audio
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/questions-ou-livres-audio
Description
This small oral corpus was created for a talk at DGfS 2017 in Saarbrücken. It consists of 94 "où" ("where") interrogatives drawn from four audiobooks, namely "Alex" (Camille Verhœven 2, written by P. Lemaitre, read by P. Résimont), "Le temps est assassin" (written by M. Bussi, read by J. Basecqz), "Si c'était à refaire" (written by M. Lévy, read by M. Marchese) and "Total… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/questions-ou-livres-audio.
|
MongoDB/devcenter-articles-embedded
|
MongoDB
|
Overview
This dataset consists of chunked and embedded versions of a subset of articles from the MongoDB Developer Center.
Dataset Structure
The dataset consists of the following fields:
sourceName: The source of the article. This value is devcenter for the entire dataset.
url: Link to the article
action: Action taken on the article. This value is created for the entire dataset.
body: Content of the chunk in Markdown format
format: Format of the content. This… See the full description on the dataset page: https://huggingface.co/datasets/MongoDB/devcenter-articles-embedded.
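A minimal sketch of loading the chunks and reading the fields listed above (the "train" split name is an assumption):

```python
from datasets import load_dataset

chunks = load_dataset("MongoDB/devcenter-articles-embedded", split="train")
doc = chunks[0]
print(doc["sourceName"], doc["url"])   # "devcenter", link to the article
print(doc["body"][:200])               # first characters of the Markdown chunk
```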
|
FrancophonIA/synpaflex-corpus
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/synpaflex-corpus
Description
SynPaFlex is a French audiobook corpus comprising 87 hours of good-quality speech recorded by a single female speaker. It is made up of a set of books of different genres. This corpus differs from existing corpora, which generally consist of a few hours of single-genre, multi-speaker speech. The main motivation for building such a corpus is… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/synpaflex-corpus.
|
FrancophonIA/Phonologie_du_Francais_Contemporain
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/pfc
Description
PFC: A database of contemporary spoken French across the French-speaking world. The international PFC project (Phonologie du Français Contemporain), co-directed by Marie-Hélène Côté (Université Laval), Jacques Durand (ERSS, Université de Toulouse-Le Mirail), Bernard Laks (MoDyCo, Université de Paris Ouest) and Chantal Lyche (Universités… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/Phonologie_du_Francais_Contemporain.
|
FrancophonIA/Profanities_FR
|
FrancophonIA
|
Dataset origin: https://github.com/pmichel31415/mtnt/blob/master/resources/profanities.fr
Citation
@InProceedings{michel2018mtnt,
  author = {Michel, Paul and Neubig, Graham},
  title = {{MTNT}: A Testbed for {M}achine {T}ranslation of {N}oisy {T}ext},
  booktitle = {Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  year = {2018}
}
|
FrancophonIA/EventEA
|
FrancophonIA
|
Dataset origin: https://github.com/nju-websoft/EventEA
Citation
@misc{tian2022eventeabenchmarkingentityalignment,
  title={EventEA: Benchmarking Entity Alignment for Event-centric Knowledge Graphs},
  author={Xiaobin Tian and Zequn Sun and Guangyao Li and Wei Hu},
  year={2022},
  eprint={2211.02817},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2211.02817},
}
|
FrancophonIA/gatitos
|
FrancophonIA
|
Dataset origin: https://github.com/google-research/url-nlp/tree/main/gatitos
GATITOS Multilingual Lexicon
The GATITOS (Google's Additional Translations Into Tail-languages: Often Short)
dataset is a high-quality, multi-way parallel dataset of tokens and short
phrases, intended for training and improving machine translation models. Experiments on this dataset and Panlex focusing on unsupervised translation in a 208-language model can be found in BiLex Rx: Lexical Data… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/gatitos.
|
FrancophonIA/lesvocaux
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/lesvocaux
Description
The Les Vocaux corpus was produced as part of the ORALIDIA project (Oralité et diachronie : une voie d'accès au changement linguistique), funded by the Université de Strasbourg (Idex project), the LILPA laboratory (UR1339, Université de Strasbourg) and the ATILF laboratory (UMR 7118, CNRS & Université de Lorraine). Despite the development of oral corpora, access to diverse spoken contexts… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/lesvocaux.
|
arulami/learn_hf_food_not_food_image_captions
|
arulami
|
Food/Not Food Image Caption Dataset
Small dataset of synthetic food and not food image captions.
Text generated using Mistral Chat/Mixtral.
Can be used to train a text classifier on food/not_food image captions as a demo before scaling up to a larger dataset.
See Colab notebook on how dataset was created.
Example usage
import random
from datasets import load_dataset
# Load dataset
loaded_dataset = load_dataset("arulami/learn_hf_food_not_food_image_captions")
#… See the full description on the dataset page: https://huggingface.co/datasets/arulami/learn_hf_food_not_food_image_captions.
|
activebus/Altogether-FT
|
activebus
|
Altogether-FT
(EMNLP 2024) Altogether-FT is a dataset that transforms/re-aligns Internet-scale alt-texts into dense captions. It does not caption images from scratch or generate naive captions that provide little value to an average user (e.g., "a dog is walking in the park" offers minimal utility to users who are not blind). Instead, it complements and completes alt-texts into dense captions, while preserving the supervision in alt-texts provided by expert humans/agents around the world (that… See the full description on the dataset page: https://huggingface.co/datasets/activebus/Altogether-FT.
|
FrancophonIA/sharedtask2019
|
FrancophonIA
|
Dataset origin: https://github.com/disrpt/sharedtask2019
Introduction
The DISRPT 2019 workshop introduces the first iteration of a cross-formalism shared task on discourse unit segmentation. Since all major discourse parsing frameworks imply a segmentation of texts into segments, learning segmentations for and from diverse resources is a promising area for converging methods and insights. We provide training, development and test datasets from all available languages and… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/sharedtask2019.
|
FrancophonIA/sharedtask2021
|
FrancophonIA
|
Dataset origin: https://github.com/disrpt/sharedtask2021
Introduction
The DISRPT 2021 shared task, co-located with CODI 2021 at EMNLP, introduces the second iteration of a cross-formalism shared task on discourse unit segmentation and connective detection, as well as the first iteration of a cross-formalism discourse relation classification task.
We provide training, development and test datasets from all available languages and treebanks in the RST, SDRT and PDTB formalisms… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/sharedtask2021.
|
FrancophonIA/sharedtask2023
|
FrancophonIA
|
Dataset origin: https://github.com/disrpt/sharedtask2023
Introduction
The DISRPT 2023 shared task, to be held in conjunction with CODI 2023 and ACL 2023, introduces the third iteration of a cross-formalism shared task on discourse unit segmentation and connective detection, as well as the second iteration of a cross-formalism discourse relation classification task.
We will provide training, development, and test datasets from all available languages and treebanks in the RST… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/sharedtask2023.
|
FrancophonIA/reuters
|
FrancophonIA
|
Dataset origin: https://archive.ics.uci.edu/dataset/259/reuters+rcv1+rcv2+multilingual+multiview+text+categorization+test+collection
Reuters RCV1/RCV2 Multilingual, Multiview Text Categorization Test collection
Distribution 1.0
README file (v 1.0)
26 September 2009
Massih R. Amini, Cyril Goutte
National Research Council Canada
I. Introduction
This… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/reuters.
|
FrancophonIA/glossaire_phonologie_articulatoire
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/lexicons/sldr000874
Description
A glossary of articulatory phonology in French.
This is unfinished work; we invite researchers to complete it.
Citation
@misc{11403/sldr000874/v1,
  title = {Glossaire de phonologie articulatoire},
  author = {Alain Marchal},
  url = {https://hdl.handle.net/11403/sldr000874/v1},
  note = {{ORTOLANG} ({Open} {Resources} {and} {TOols} {for} {LANGuage}) \textendash… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/glossaire_phonologie_articulatoire.
|
ShantanuT01/RIP-Dataset
|
ShantanuT01
|
RIP Dataset
Overview
The Rewritten Ivy Panda (RIP) Dataset is an AI-text detection dataset focusing on student essays. test.parquet contains additional metrics for the test essays.
This dataset was used in our paper Which LLMs are Difficult to Detect? A Detailed Analysis of Potential Factors Contributing to Difficulties in LLM Text Detection.
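A minimal sketch of pulling down the test split with its extra metrics (assuming test.parquet sits at the repository root, as the overview suggests):

```python
import pandas as pd
from huggingface_hub import hf_hub_download

# Hypothetical usage: download test.parquet and inspect the extra metric columns.
path = hf_hub_download(
    repo_id="ShantanuT01/RIP-Dataset",
    filename="test.parquet",   # assumed location at the repo root
    repo_type="dataset",
)
test_df = pd.read_parquet(path)
print(test_df.columns.tolist())
```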
Data Generation
Eight LLMs rewrote essays from the Ivy Panda essay dataset:
Anthropic Claude Haiku and… See the full description on the dataset page: https://huggingface.co/datasets/ShantanuT01/RIP-Dataset.
|
FrancophonIA/Le_dictionnaire_electronique_des_mots
|
FrancophonIA
|
Dataset origin: http://rali.iro.umontreal.ca/rali/?q=fr/versions-informatisees-lvf-dem
|
bunnycore/Sci-Reasoning
|
bunnycore
|
Dataset Card for Sci-Reasoning
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/bunnycore/Sci-Reasoning/raw/main/pipeline.yaml"
or explore the configuration:
distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/bunnycore/Sci-Reasoning.
|
williamgilpin/dysts
|
williamgilpin
|
Chaotic Time Series Dataset
Each time series is drawn from one system over an extended duration, making this dataset suitable for long-horizon forecasting tasks.
|
self-planner/qwen-family
|
self-planner
|
Model | HumanEval baseline | HumanEval self_planner | HumanEval Delta | HumanEval+ baseline | HumanEval+ self_planner | HumanEval+ Delta
---|---|---|---|---|---|---
qwen2.5-1.5b-instruct | 54.9 | 51.8 | -3.1 | 49.4 | 45.1 | -4.3
qwen2.5-3B-instruct | 68.3 | 66.5 | -1.8 | 62.8 | 56.1 | -6.7
qwen2.5-7B-instruct | 84.8 | 81.7 | -3.1 | 76.8 | 75.0 | -1.8
qwen2.5-14B-instruct | 80.5 | 81.1 | 0.6 | 76.2 | 75.0 | -1.2

Model | MBPP baseline | MBPP self_planner | MBPP Delta | MBPP+ baseline | MBPP+ self_planner | MBPP+ Delta
---|---|---|---|---|---|---
qwen2.5-14B-instruct | 80.2 | 83.2 | 3.0 | 66.7 | 67.7 | 1.0
qwen2.5-7B-instruct | 78.7 | 76.4 | -2.3 | 65.4… See the full description on the dataset page: https://huggingface.co/datasets/self-planner/qwen-family.
|
self-planner/google-gemma-family
|
self-planner
|
Model | HumanEval baseline | HumanEval self_planner | HumanEval Delta | HumanEval+ baseline | HumanEval+ self_planner | HumanEval+ Delta
---|---|---|---|---|---|---
gemma2-2b-it | 44.5 | 36.0 | -8.5 | 39.0 | 30.5 | -8.5
gemma2-9b-it | 66.5 | 62.2 | -4.3 | 59.1 | 56.1 | -3.0

Model | MBPP baseline | MBPP self_planner | MBPP Delta | MBPP+ baseline | MBPP+ self_planner | MBPP+ Delta
---|---|---|---|---|---|---
gemma2-2b-it | 51.1 | 46.1 | -5.0 | 41.4 | 37.1 | -4.3
gemma2-9b-it | 72.7 | 68.7 | -4.0 | 61.4 | 56.9 | -4.5
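In both tables the Delta column is simply self_planner minus baseline; a quick check in code (numbers copied from the gemma2-2b-it HumanEval row above):

```python
baseline, self_planner = 44.5, 36.0   # gemma2-2b-it, HumanEval
delta = round(self_planner - baseline, 1)
print(delta)  # -8.5, matching the Delta column
```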
|
DewiBrynJones/evals-whisper-large-v3-cv-cy-en
|
DewiBrynJones
|
Model: openai/whisper-large-v3
Test Set: DewiBrynJones/commonvoice_18_0_cy_en
Split: test
WER: 34.231428
CER: 15.730472
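For reference, WER and CER figures like those above can be computed with the jiwer package; a minimal sketch with hypothetical transcripts (not necessarily the tooling used for this evaluation):

```python
import jiwer

reference = "mae'r tywydd yn braf heddiw"   # hypothetical ground-truth transcript
hypothesis = "mae tywydd yn braf heddiw"    # hypothetical model output

print(f"WER: {jiwer.wer(reference, hypothesis):.2%}")
print(f"CER: {jiwer.cer(reference, hypothesis):.2%}")
```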
|
CaptionEmporium/furry-e621-safe-llama3.2-11b
|
CaptionEmporium
|
furry-e621-safe-llama3.2-11b: A new anthropomorphic art dataset
Dataset Summary
This is 2,987,631 synthetic captions for 995,877 images found in e921, which is just e621 filtered to the "safe" tag. The long captions were produced using meta-llama/Llama-3.2-11B-Vision-Instruct. Medium and short captions were produced from these captions using meta-llama/Llama-3.1-8B-Instruct. The dataset was grounded for captioning using the ground truth tags on every post… See the full description on the dataset page: https://huggingface.co/datasets/CaptionEmporium/furry-e621-safe-llama3.2-11b.
|
open-llm-leaderboard/rombodawg__Rombos-LLM-V2.6-Nemotron-70b-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of rombodawg/Rombos-LLM-V2.6-Nemotron-70b
Dataset automatically created during the evaluation run of model rombodawg/Rombos-LLM-V2.6-Nemotron-70b
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/rombodawg__Rombos-LLM-V2.6-Nemotron-70b-details.
|
han5i5j1986/eval_koch_lego_2024-10-18-01
|
han5i5j1986
|
This dataset was created using LeRobot.
|
han5i5j1986/eval_koch_lego_2024-10-18-02
|
han5i5j1986
|
This dataset was created using LeRobot.
|
claran/seed-pretrain-decon
|
claran
|
Dataset Card for Dataset Name
Pre-training corpus for seed models in "Scalable Data Ablation Approximations for Language Models through Modular Training and Merging", to be presented at EMNLP 2024.
Dataset Details
Dataset Description
Curated by: [More Information Needed]
Funded by [optional]: [More Information Needed]
Shared by [optional]: [More Information Needed]
Language(s) (NLP): [More Information Needed]
License: [More Information Needed]… See the full description on the dataset page: https://huggingface.co/datasets/claran/seed-pretrain-decon.
|
nevmenandr/russian-old-orthography-ocr
|
nevmenandr
|
Basic Description
The dataset contains source images and human-readable extracted texts.
The dataset is designed to train and evaluate optical character recognition systems for texts published in Russian before the orthographic reform (1917).
Data structure
For each text there is a file with its image and the text corresponding to this image. The names of these files are… See the full description on the dataset page: https://huggingface.co/datasets/nevmenandr/russian-old-orthography-ocr.
|
yuuzuX/sadffwedadsdfa
|
yuuzuX
|
Detailed description
Dataset name: sadffwedadsdfa
Status: active
Author: yuuzuX
Created: 2024-10-17T09:53:54.386101
Updated: 2024-10-17T09:54:26.386724
Original URL: CKAN - yuuzuX/sadffwedadsdfa
Other information:
Huggingface.Url: https://huggingface.co/datasets/yuuzuX/sadffwedadsdfa
|
han5i5j1986/eval_koch_lego_2024-10-18-03
|
han5i5j1986
|
This dataset was created using LeRobot.
|
SihyunPark/korea_hate_speech
|
SihyunPark
|
K-MHaS: additional labeling required.
|
open-llm-leaderboard/DeepAutoAI__d2nwg_causal_gpt2-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of DeepAutoAI/d2nwg_causal_gpt2
Dataset automatically created during the evaluation run of model DeepAutoAI/d2nwg_causal_gpt2
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/DeepAutoAI__d2nwg_causal_gpt2-details.
|
han5i5j1986/eval_koch_lego_2024-10-18-04
|
han5i5j1986
|
This dataset was created using LeRobot.
|
ChesterHung/public
|
ChesterHung
|
Detailed description
Dataset name: 公開 ("Public")
Status: active
Author: Chester
Created: 2024-08-30T01:52:05.417111
Updated: 2024-10-17T09:58:10.111536
Original URL: CKAN - Chester/public
Other information:
Huggingface.Url: https://huggingface.co/datasets/ChesterHung/public
|
blester125/foodista-dolma
|
blester125
|
Foodista Data
|
clinia/CUREv1
|
clinia
|
Dataset Card for CUREv1
Clinia’s CURE, Crosslingual Understanding and Retrieval Evaluation for health
Evaluate your retriever’s performance on query-passage pairs curated by medical professionals, across 10 disciplines and 3 cross-lingual settings.
Dataset Details
Uses
Direct Use
You can load the dataset with the following code:
from datasets import load_dataset

language_setting = "en-en"
domain = "dermatology"
dermatology_queries = load_dataset(… See the full description on the dataset page: https://huggingface.co/datasets/clinia/CUREv1.
|
dydyd/current_vibration
|
dydyd
|
Dataset Summary
This dataset provides vibration and motor current data for the diagnosis of motor winding faults.
Dataset Details
Vibration data is acquired with a sampling frequency of 25.6 kHz, and current data is acquired with a sampling frequency of 100 kHz.
For more detailed information about this dataset, please check this article published in "Data in Brief".
Title: Vibration and Current Dataset of Three-Phase Permanent Magnet Synchronous Motors with… See the full description on the dataset page: https://huggingface.co/datasets/dydyd/current_vibration.
|
studymakesmehappyyyyy/VCGBENCH
|
studymakesmehappyyyyy
|
── videochatgpt_gen # Official website: https://github.com/mbzuai-oryx/Video-ChatGPT/tree/main/quantitative_evaluation
├── Test_Videos/ # Available at: https://mbzuaiac-my.sharepoint.com/:u:/g/personal/hanoona_bangalath_mbzuai_ac_ae/EatOpE7j68tLm2XAd0u6b8ABGGdVAwLMN6rqlDGM_DwhVA?e=90WIuW
├── Test_Human_Annotated_Captions/ # Available at:… See the full description on the dataset page: https://huggingface.co/datasets/studymakesmehappyyyyy/VCGBENCH.
|
blester125/peps-dolma
|
blester125
|
Python PEPs
This data was converted with pandoc version 3.5
|
WendiChen/DeformPAM
|
WendiChen
|
Dataset of DeformPAM
Contents
Description
Structure
Usage
Description
This is the dataset used in the paper DeformPAM: Data-Efficient Learning for Long-horizon Deformable
Object Manipulation via Preference-based Action Alignment.
Paper
Project Homepage
Github Repository
Structure
We offer two versions of the dataset: one is the full dataset used to train the models in our paper,
and the other is a mini dataset for easier… See the full description on the dataset page: https://huggingface.co/datasets/WendiChen/DeformPAM.
|
Gunther520/first-test-dataset2
|
Gunther520
|
Dataset Card for first-test-dataset2
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/Gunther520/first-test-dataset2/raw/main/pipeline.yaml"
or explore the configuration:
distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/Gunther520/first-test-dataset2.
|
yugb2005/MRafi
|
yugb2005
| |
Holmeister/SciFact-TR
|
Holmeister
|
Citation Information
David Wadden, Shanchuan Lin, Kyle Lo, Lucy Lu Wang, Madeleine van Zuylen, Arman Cohan, and
Hannaneh Hajishirzi. Fact or fiction: Verifying scientific claims. arXiv preprint arXiv:2004.14974,
2020.
|
Holmeister/HealthVer-TR
|
Holmeister
|
Citation Information
Mourad Sarrouti, Asma Ben Abacha, Yassine M’rabet, and Dina Demner-Fushman. Evidence-based
fact-checking of health-related claims. In Findings of the Association for Computational Linguistics:
EMNLP 2021, pages 3499–3512, 2021.
|
Holmeister/COVID-Fact-TR
|
Holmeister
|
Citation Information
Arkadiy Saakyan, Tuhin Chakrabarty, and Smaranda Muresan. COVID-fact: Fact extraction and
verification of real-world claims on COVID-19 pandemic. In Proceedings of the 59th Annual Meeting of
the Association for Computational Linguistics and the 11th International Joint Conference on Natural
Language Processing (Volume 1: Long Papers), pages 2116–2129, Online, August 2021. Association
for Computational Linguistics.
|
Gunther520/first-test-dataset
|
Gunther520
|
Dataset Card for first-test-dataset
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/Gunther520/first-test-dataset/raw/main/pipeline.yaml"
or explore the configuration:
distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/Gunther520/first-test-dataset.
|
Holmeister/CrowS-Pair-TR
|
Holmeister
|
Citation Information
Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R Bowman. Crows-pairs: A challenge dataset
for measuring social biases in masked language models. arXiv preprint arXiv:2010.00133, 2020.
|
jmercat/koch_feed_cat_2
|
jmercat
|
This dataset was created using LeRobot.
|
viethq5/emotion
|
viethq5
|
Dataset Card for emotion
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/viethq5/emotion/raw/main/pipeline.yaml"
or explore the configuration:
distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/viethq5/emotion.
|
Holmeister/StereoSet-TR
|
Holmeister
|
Citation Information
Moin Nadeem, Anna Bethke, and Siva Reddy. Stereoset: Measuring stereotypical bias in pretrained
language models. arXiv preprint arXiv:2004.09456, 2020.
|