id | author | description
---|---|---
minchyeom/Thinker-XML
|
minchyeom
|
System prompt suggestion:
You are a world-class AI system. Always respond in strict XML format with your reasoning steps within the <im_reasoning> XML tag. Each reasoning step should represent one unit of thought. Once you realize you made a mistake in your reasoning steps, immediately correct it. Place your final response outside the XML tag. Adhere to this XML structure without exception.
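Under this prompt, a compliant response might look like the following sketch (the question and content are illustrative, not drawn from the dataset):

```xml
<im_reasoning>
Step 1: The question asks for 15% of 80.
Step 2: 15% of 80 is 0.15 × 80 = 12.
</im_reasoning>
15% of 80 is 12.
```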
|
ashercn97/example-preference-dataset
|
ashercn97
|
Dataset Card for example-preference-dataset
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/ashercn97/example-preference-dataset/raw/main/pipeline.yaml"
or explore the configuration:
distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/ashercn97/example-preference-dataset.
|
edwhu/eval_koch_test
|
edwhu
|
This dataset was created using LeRobot.
|
virattt/financebench
|
virattt
|
Dataset Card for Processed FinanceBench
Dataset Description
This dataset is derived from the PatronusAI/financebench-test dataset, containing only the PASS examples processed into a clean format for question-answering tasks in the financial domain.
Dataset Summary
The dataset contains financial questions, their corresponding document contexts, and human-written answers that have been verified as faithful to the source documents.
Columns:… See the full description on the dataset page: https://huggingface.co/datasets/virattt/financebench.
|
ashercn97/example-preference-dataset2
|
ashercn97
|
Dataset Card for example-preference-dataset2
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/ashercn97/example-preference-dataset2/raw/main/pipeline.yaml"
or explore the configuration:
distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/ashercn97/example-preference-dataset2.
|
ashercn97/example-preference-dataset2_work
|
ashercn97
|
Dataset Card for example-preference-dataset2_work
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/ashercn97/example-preference-dataset2_work/raw/main/pipeline.yaml"
or explore the configuration:
distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/ashercn97/example-preference-dataset2_work.
|
minchyeom/Thinker-XML-DPO
|
minchyeom
|
Refer to: Thinker-XML.
|
ashercn97/example-preference-dataset2_work2
|
ashercn97
|
Dataset Card for example-preference-dataset2_work2
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/ashercn97/example-preference-dataset2_work2/raw/main/pipeline.yaml"
or explore the configuration:
distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/ashercn97/example-preference-dataset2_work2.
|
ashercn97/example-preference-dataset2_work3
|
ashercn97
|
Dataset Card for example-preference-dataset2_work3
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/ashercn97/example-preference-dataset2_work3/raw/main/pipeline.yaml"
or explore the configuration:
distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/ashercn97/example-preference-dataset2_work3.
|
ashercn97/example-preference-dataset2_work4
|
ashercn97
|
Dataset Card for example-preference-dataset2_work4
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/ashercn97/example-preference-dataset2_work4/raw/main/pipeline.yaml"
or explore the configuration:
distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/ashercn97/example-preference-dataset2_work4.
|
ashercn97/example-preference-dataset2_work5
|
ashercn97
|
Dataset Card for example-preference-dataset2_work5
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/ashercn97/example-preference-dataset2_work5/raw/main/pipeline.yaml"
or explore the configuration:
distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/ashercn97/example-preference-dataset2_work5.
|
ashercn97/example-preference-dataset2_work6
|
ashercn97
|
Dataset Card for example-preference-dataset2_work6
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/ashercn97/example-preference-dataset2_work6/raw/main/pipeline.yaml"
or explore the configuration:
distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/ashercn97/example-preference-dataset2_work6.
|
ashercn97/distilabel-example-2
|
ashercn97
|
Dataset Card for distilabel-example-2
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/ashercn97/distilabel-example-2/raw/main/pipeline.yaml"
or explore the configuration:
distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/ashercn97/distilabel-example-2.
|
cboss8/eval_koch_legotiger_2
|
cboss8
|
This dataset was created using LeRobot.
|
ashercn97/example-preference-dataset2_work7
|
ashercn97
|
Dataset Card for example-preference-dataset2_work7
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/ashercn97/example-preference-dataset2_work7/raw/main/pipeline.yaml"
or explore the configuration:
distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/ashercn97/example-preference-dataset2_work7.
|
open-llm-leaderboard/OpenBuddy__openbuddy-nemotron-70b-v23.1-131k-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of OpenBuddy/openbuddy-nemotron-70b-v23.1-131k
Dataset automatically created during the evaluation run of model OpenBuddy/openbuddy-nemotron-70b-v23.1-131k
The dataset is composed of 38 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/OpenBuddy__openbuddy-nemotron-70b-v23.1-131k-details.
|
ashercn97/preferences_data_oct_23
|
ashercn97
|
Dataset Card for preferences_data_oct_23
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/ashercn97/preferences_data_oct_23/raw/main/pipeline.yaml"
or explore the configuration:
distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/ashercn97/preferences_data_oct_23.
|
espnet/floras_2
|
espnet
|
FLORAS
FLORAS is a 50-language benchmark For LOng-form Recognition And Summarization of spoken language.
The goal of FLORAS is to create a more realistic benchmarking environment for speech recognition, translation, and summarization models.
Unlike typical academic benchmarks such as LibriSpeech and FLEURS, which use pre-segmented, single-speaker read speech, FLORAS tests the capabilities of models on raw long-form conversational audio, which can have one or many speakers.
To… See the full description on the dataset page: https://huggingface.co/datasets/espnet/floras_2.
|
pszemraj/hellasigma
|
pszemraj
|
hellasigma
This is an initial proof of concept and only contains 190 examples. Still, it seems able to tease out differences, especially in 7B+ models. I've run some initial evals and will post... soon
Many evaluation datasets focus on a single correct answer to see if the model is "smart." What about when there's no right answer? HellaSigma is an "eval" dataset to probe at what your model's personality type may be. Is it a Sigma, or not?
This dataset contains generic… See the full description on the dataset page: https://huggingface.co/datasets/pszemraj/hellasigma.
|
cjfcsjt/142_sft_rft
|
cjfcsjt
|
sft_rft
This model is a fine-tuned version of /mnt/nvme0n1p1/hongxin_li/jingfan/models/qwen2_vl_lora_sft_webshopv_300 on the vl_finetune_data dataset.
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training:… See the full description on the dataset page: https://huggingface.co/datasets/cjfcsjt/142_sft_rft.
|
Orion-zhen/meissa-lima
|
Orion-zhen
|
Meissa-LIMA
Inspired by LIMA, I created this dataset. It is made up of the following parts: the original dataset, a Chinese translation, a jailbreak dataset, a roleplay dataset, a Gutenberg dataset, and Ruozhiba Q&A.
Original dataset: the original dataset contained 13 refusal/moral-alignment entries, which I located and manually revised.
Chinese translation: translated with Orion-zhen/Meissa-Qwen2.5-7B-Instruct-Q5_K_M-GGUF running on Great Server, then proofread by me.
Jailbreak dataset: selected entries from Orion-zhen/meissa-unalignments.
Roleplay dataset: selected entries from MinervaAI/Aesir-Preview.
Gutenberg dataset: selected entries from Orion-zhen/kto-gutenberg.
Ruozhiba Q&A: selected questions from LooksJuicy/ruozhiba, with answers written by me.
|
dogtooth/llama31-8b-instruct_generated_helpsteer2_binarized_gold
|
dogtooth
|
allenai/open_instruct: Rejection Sampling Dataset
See https://github.com/allenai/open-instruct/blob/main/docs/algorithms/rejection_sampling.md for more detail
Configs
args:
{'add_timestamp': False,
'hf_entity': 'dogtooth',
'hf_repo_id': 'llama31-8b-instruct_generated_helpsteer2_binarized_gold',
'hf_repo_id_scores': 'rejection_sampling_scores',
'include_reference_completion_for_rejection_sampling': True,
'input_filename':… See the full description on the dataset page: https://huggingface.co/datasets/dogtooth/llama31-8b-instruct_generated_helpsteer2_binarized_gold.
|
dogtooth/rejection_sampling_scores
|
dogtooth
|
allenai/open_instruct: Rejection Sampling Dataset
See https://github.com/allenai/open-instruct/blob/main/docs/algorithms/rejection_sampling.md for more detail
Configs
args:
{'add_timestamp': False,
'hf_entity': 'dogtooth',
'hf_repo_id': 'off-policy-0.1-with-on-policy-0.1-uf_iter1_generated_ultrafeedback_binarized_gold',
'hf_repo_id_scores': 'rejection_sampling_scores',
'include_reference_completion_for_rejection_sampling': True,
'input_filename':… See the full description on the dataset page: https://huggingface.co/datasets/dogtooth/rejection_sampling_scores.
|
ellietang/koch_test
|
ellietang
|
This dataset was created using LeRobot.
|
nbalepur/persona_alignment_test_clean_vague_mnemonic2
|
nbalepur
|
Dataset Card for "persona_alignment_test_clean_vague_mnemonic2"
More Information needed
|
sarahwei/Taiwanese-Minnan-Example-Sentences
|
sarahwei
|
Taiwanese Minnan Example Sentences
The dataset consists of a collection of example sentences designed to aid in recognizing Taiwanese Minnan (Taiwanese Hokkien) for automatic speech recognition (ASR) tasks. This dataset is sourced from the Ministry of Education in Taiwan and aims to provide valuable linguistic resources for researchers and developers working on speech recognition systems.
Dataset Features
Source: Ministry of Education, Taiwan (Sutian Resource Center)
Text:… See the full description on the dataset page: https://huggingface.co/datasets/sarahwei/Taiwanese-Minnan-Example-Sentences.
|
Almheiri/MMLU_ExpertPrompt_RAG_02
|
Almheiri
|
This is a massive multitask test consisting of multiple-choice questions from various branches of knowledge, covering 57 tasks including elementary mathematics, US history, computer science, law, and more.
|
basilry/KB_Real_Estate_Historical_and_Predictional_Data
|
basilry
|
KB_Real_Estate_Historical_and_Predictional_Data
by Kim Basilri
What is this?
A CSV of South Korean apartment price records and predictions built from KB Real Estate Data Hub data, together with natural-language questions and parsing datasets generated from them.
After completing the Google Korea Machine Learning Boot Camp in 2024, this data is to be used for a project in the practical cooperation course with Korea's Ministry of Science and ICT… See the full description on the dataset page: https://huggingface.co/datasets/basilry/KB_Real_Estate_Historical_and_Predictional_Data.
|
twodgirl/suppress-layers-in-sd3.5
|
twodgirl
|
Suppress layers in SD3.5 L
Showcase of deactivated layers; each render uses the same seed and the same checkpoint.
Please note that there are no preceding blocks, which makes the distortion more noticeable than in the Flux variant.
Disable individual layers
Disable one single block at a time.
0-5
6-11
12-17
18-23
24-29
29-37
Disclaimer
Use of… See the full description on the dataset page: https://huggingface.co/datasets/twodgirl/suppress-layers-in-sd3.5.
|
JesusCrist/IEPILE
|
JesusCrist
|
This repo was cloned from zjunlp/iepile on google cloud drive
|
infinite-dataset-hub/AnimeCharacterEmotionAnalysis
|
infinite-dataset-hub
|
AnimeCharacterEmotionAnalysis
tags: character, emotion, sentiment analysis
Note: This is an AI-generated dataset so its content may be inaccurate or false
Dataset Description:
The 'AnimeCharacterEmotionAnalysis' dataset aims to facilitate research in the area of sentiment analysis, specifically focusing on the depiction and expression of emotions by anime characters in various episodes. Each entry in the dataset contains a character's name, the episode they appear in, a snippet… See the full description on the dataset page: https://huggingface.co/datasets/infinite-dataset-hub/AnimeCharacterEmotionAnalysis.
|
dogtooth/llama31-8b-instruct_generated_helpsteer2_binarized_1729749355
|
dogtooth
|
allenai/open_instruct: Generation Dataset
See https://github.com/allenai/open-instruct/blob/main/docs/algorithms/rejection_sampling.md for more detail
Configs
args:
{'add_timestamp': True,
'alpaca_eval': False,
'dataset_end_idx': 8636,
'dataset_mixer_list': ['dogtooth/helpsteer2_binarized', '1.0'],
'dataset_splits': ['train'],
'dataset_start_idx': 0,
'hf_entity': 'dogtooth',
'hf_repo_id': 'llama31-8b-instruct_generated_helpsteer2_binarized',
'llm_judge':… See the full description on the dataset page: https://huggingface.co/datasets/dogtooth/llama31-8b-instruct_generated_helpsteer2_binarized_1729749355.
|
patrickblanks/autotrain-plugilologodci35
|
patrickblanks
|
Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
Dataset Details
Dataset Description
Curated by: [More Information Needed]
Funded by [optional]: [More Information Needed]
Shared by [optional]: [More Information Needed]
Language(s) (NLP): [More Information Needed]
License: [More Information Needed]
Dataset Sources [optional]… See the full description on the dataset page: https://huggingface.co/datasets/patrickblanks/autotrain-plugilologodci35.
|
alaqsa-akbar/RainyCOCO
|
alaqsa-akbar
|
RainyCOCO
A dataset generated from the COCO train2017 dataset. Generated by simulating rain on top of all 55000 images.
|
sarahwei/Taiwanese-Minnan-Sutiau
|
sarahwei
|
Taiwanese-Minnan-Sutiau Dataset
The dataset consists of a curated collection of words that resemble tokens in Taiwanese Minnan (Taiwanese Hokkien), aimed at enhancing the recognition and processing of the language for various applications. Sourced from the Ministry of Education in Taiwan, this dataset serves as a valuable linguistic resource for researchers and developers engaged in language processing and recognition tasks.
Dataset Features
Source: Ministry of Education… See the full description on the dataset page: https://huggingface.co/datasets/sarahwei/Taiwanese-Minnan-Sutiau.
|
KienNgyuen/SOICT2024_DataAugmentation_Datasets
|
KienNgyuen
|
This dataset belongs to Tien-Dat Nguyen(*) and Kien Nguyen. Please do not copy or use it without permission. Don't hesitate to contact the authors before using it at [email protected] or [email protected]
|
prsdm/MMLU-Pro-updated
|
prsdm
|
This is a modified version of the MMLU-Pro dataset.
The MMLU-Pro dataset is a challenging multi-task dataset designed to better test the abilities of large language models. It includes 12,000 complex questions across a range of subjects.
|
cybersectony/PhishingEmailDetectionv2.0
|
cybersectony
|
Phishing Email Detection Dataset
A comprehensive dataset combining email messages and URLs for phishing detection.
Dataset Overview
Quick Facts
Task Type: Multi-class Classification
Languages: English
Total Samples: 200,000 entries
Size Split:
Email samples: 22,644
URL samples: 177,356
Label Distribution: Four classes (0, 1, 2, 3)
Format: Two columns - content and labels
Dataset Structure
Features
{
'content':… See the full description on the dataset page: https://huggingface.co/datasets/cybersectony/PhishingEmailDetectionv2.0.
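As a quick sanity check, the quoted email and URL sample counts add up to the stated 200,000 total; a minimal Python sketch using the numbers from the card above:

```python
# Split sizes quoted in the dataset card.
email_samples = 22_644
url_samples = 177_356

total = email_samples + url_samples
print(total)  # 200000, matching the stated "Total Samples: 200,000 entries"
print(f"email share: {email_samples / total:.1%}")  # email share: 11.3%
```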
|
QDGrasp/QDGset_no_shapenet
|
QDGrasp
|
Dataset is in data folder
The qdgset library provides examples for interacting with the dataset (visualization of objects, export to .obj and .urdf for loading through a standard simulator interface, ...).
webpage : https://qdgrasp.github.io/qdg_set/
Whole dataset coming soon ...
If you use our work, please cite our paper:
@misc{huber2024qdgsetlargescalegrasping, title={QDGset: A Large Scale Grasping Dataset Generated with Quality-Diversity}, author={Johann Huber and François Hélénon… See the full description on the dataset page: https://huggingface.co/datasets/QDGrasp/QDGset_no_shapenet.
|
townwish/mindone-testing-images
|
townwish
|
Images (or other data) used by unit test cases in mindone.diffusers.
|
ishu-newaz/AIUB-Generic-Instruct-Chat
|
ishu-newaz
|
This is a generic dataset on AIUB for NLP model training (e.g., Llama, Gemma, Mistral).
If you want to contribute to improving this dataset, please contact:
|
firstap/audio_tp
|
firstap
|
Dusha is a bi-modal corpus suitable for speech emotion recognition (SER) tasks.
The dataset consists of audio recordings of Russian speech and their emotional labels.
The corpus contains approximately 350 hours of data. Four basic emotions that typically appear in dialog with
a virtual assistant were selected: Happiness (Positive), Sadness, Anger, and Neutral.
|
yudhaht/gkamus-id-en
|
yudhaht
|
This dataset contains gKamus data.
If any party has objections, please contact me.
|
yudhaht/gkamus-en-id
|
yudhaht
|
This dataset contains gKamus data.
If any party has objections, please contact me.
|
nickynicolson/tropical_plant_id_myrtales
|
nickynicolson
|
Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
Dataset Details
Dataset Description
Curated by: [More Information Needed]
Funded by [optional]: [More Information Needed]
Shared by [optional]: [More Information Needed]
Language(s) (NLP): [More Information Needed]
License: [More Information Needed]
Dataset Sources [optional]… See the full description on the dataset page: https://huggingface.co/datasets/nickynicolson/tropical_plant_id_myrtales.
|
open-llm-leaderboard/meditsolutions__Llama-3.2-SUN-2.5B-chat-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of meditsolutions/Llama-3.2-SUN-2.5B-chat
Dataset automatically created during the evaluation run of model meditsolutions/Llama-3.2-SUN-2.5B-chat
The dataset is composed of 38 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/meditsolutions__Llama-3.2-SUN-2.5B-chat-details.
|
open-llm-leaderboard/djuna-test-lab__TEST-L3.2-ReWish-3B-ties-w-base-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of djuna-test-lab/TEST-L3.2-ReWish-3B-ties-w-base
Dataset automatically created during the evaluation run of model djuna-test-lab/TEST-L3.2-ReWish-3B-ties-w-base
The dataset is composed of 38 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/djuna-test-lab__TEST-L3.2-ReWish-3B-ties-w-base-details.
|
open-llm-leaderboard/allura-org__MS-Meadowlark-22B-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of allura-org/MS-Meadowlark-22B
Dataset automatically created during the evaluation run of model allura-org/MS-Meadowlark-22B
The dataset is composed of 38 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/allura-org__MS-Meadowlark-22B-details.
|
open-llm-leaderboard/bunnycore__Llama-3.2-3B-Long-Think-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of bunnycore/Llama-3.2-3B-Long-Think
Dataset automatically created during the evaluation run of model bunnycore/Llama-3.2-3B-Long-Think
The dataset is composed of 38 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/bunnycore__Llama-3.2-3B-Long-Think-details.
|
cadene/moss_debug
|
cadene
|
This dataset was created using LeRobot.
|
datastax/glean-integration-demo
|
datastax
|
A set of files with work-related information to use in a Glean demo.
|
ganga4364/TTS_training_data_with_meta
|
ganga4364
|
Description
Total Audio Files: 673,659
Total Audio Length: 856.20 hours
Audio Data Collected as of: 18 Oct 2024
Department-wise Statistics (After Removal)
Department | Number of Audio Files | Total Audio Length (hours)
---|---|---
STT_AB | 174,525 | 224.92
STT_HS | 87,088 | 96.27
STT_NS | 241,007 | 318.28
STT_NW | 97,145 | 137.82
STT_PC | 73,894 | 78.90
Summary:
After removing the departments STT_MV, STT_CS, and STT_TT, the dataset now contains:
673,659… See the full description on the dataset page: https://huggingface.co/datasets/ganga4364/TTS_training_data_with_meta.
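The per-department figures above can be checked against the quoted totals; a minimal Python sketch with the numbers copied from the card (the hours total agrees up to rounding):

```python
# Department-wise statistics from the card: (audio files, hours).
departments = {
    "STT_AB": (174_525, 224.92),
    "STT_HS": (87_088, 96.27),
    "STT_NS": (241_007, 318.28),
    "STT_NW": (97_145, 137.82),
    "STT_PC": (73_894, 78.90),
}

total_files = sum(files for files, _ in departments.values())
total_hours = sum(hours for _, hours in departments.values())
print(total_files)            # 673659, matching the stated total of 673,659 files
print(round(total_hours, 2))  # 856.19 (the card rounds the total to 856.20 hours)
```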
|
aglazkova/keyphrase_extraction_russian
|
aglazkova
|
A dataset for extracting or generating keywords for scientific texts in Russian. It contains annotations of scientific articles from four domains: mathematics and computer science (a fragment of the Keyphrases CS&Math Russian dataset), history, linguistics, and medicine. For more details about the dataset, please refer to the original paper: https://arxiv.org/abs/2409.10640
Dataset Structure
abstract, an abstract in a string format;
keywords, a list of keyphrases provided by… See the full description on the dataset page: https://huggingface.co/datasets/aglazkova/keyphrase_extraction_russian.
|
OALL/details_anthracite-org__magnum-v4-9b
|
OALL
|
Dataset Card for Evaluation run of anthracite-org/magnum-v4-9b
Dataset automatically created during the evaluation run of model anthracite-org/magnum-v4-9b.
The dataset is composed of 136 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An… See the full description on the dataset page: https://huggingface.co/datasets/OALL/details_anthracite-org__magnum-v4-9b.
|
harsh-7070/COCO-Wholebody
|
harsh-7070
|
COCO-WholeBody
Dataset Details
COCO-WholeBody is the first large-scale benchmark for whole-body human pose estimation.
It extends the COCO 2017 dataset with additional annotations, using the same train/val split as COCO.
For each person, it annotates:
4 types of bounding boxes: person box, face box, left-hand box, right-hand box
133 keypoints: 17 for the body, 6 for the feet, 68 for the face, and 42 for the hands
Dataset Description
Repository:… See the full description on the dataset page: https://huggingface.co/datasets/harsh-7070/COCO-Wholebody.
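The keypoint counts listed above sum to the stated 133 points per person; a quick check in Python (the dict layout is illustrative, not the dataset's actual schema):

```python
# Keypoint counts per region, as listed in the COCO-WholeBody card.
keypoints = {"body": 17, "feet": 6, "face": 68, "hands": 42}
total = sum(keypoints.values())
print(total)  # 133, matching the card's stated keypoint count
```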
|
sergioburdisso/dialog2flow-dataset
|
sergioburdisso
|
Dialog2Flow Training Corpus
This page hosts the dataset introduced in the paper "Dialog2Flow: Pre-training Soft-Contrastive Action-Driven Sentence Embeddings for Automatic Dialog Flow Extraction" published in the EMNLP 2024 main conference.
Here we are not only making available the dataset but also each one of the 20 (standardized) task-oriented dialogue datasets used to build it.
The corpus consists of 3.4 million utterances/sentences annotated with dialog act and slot labels… See the full description on the dataset page: https://huggingface.co/datasets/sergioburdisso/dialog2flow-dataset.
|
zlicastro/zanya-unreal-engine-hdr-dataset
|
zlicastro
|
Zanya's Unreal Engine HDR Dataset
Repository: https://huggingface.co/datasets/zlicastro/zanya-unreal-engine-hdr-dataset
Dataset Summary
This dataset contains HDR images rendered from Unreal Engine. Each image is of an Unreal FAB marketplace item that had "Allows usage with AI" set to Yes at the time it was added to my library.
How Images were Captured
Screenshots were taken with Unreal Engine 5.4.4.
Press tilde (~) to open the command window, then… See the full description on the dataset page: https://huggingface.co/datasets/zlicastro/zanya-unreal-engine-hdr-dataset.
|
open-llm-leaderboard/CultriX__Qwen2.5-14B-MergeStock-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of CultriX/Qwen2.5-14B-MergeStock
Dataset automatically created during the evaluation run of model CultriX/Qwen2.5-14B-MergeStock
The dataset is composed of 38 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/CultriX__Qwen2.5-14B-MergeStock-details.
|
bigbio/ppr
|
bigbio
|
The Plant-Phenotype corpus is a text corpus with human annotations of plants, phenotypes, and their relations on a corpus in 600 PubMed abstracts.
|
Jincenzi/COKE
|
Jincenzi
|
Dataset Card for COKE
Data for the ACL 2024 Oral paper "COKE: A Cognitive Knowledge Graph for Machine Theory of Mind"
Dataset Description
To empower AI systems with Theory of Mind ability and narrow the gap between them and humans,
✅ We propose COKE: the first cognitive knowledge graph for machine theory of mind.
Dataset Structure
Datasets are presented as training and validation sets for the four generation tasks.
For Polarity, 0 indicates… See the full description on the dataset page: https://huggingface.co/datasets/Jincenzi/COKE.
|
arrmlet/reddit_dataset_58
|
arrmlet
|
Bittensor Subnet 13 Reddit Dataset
Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed Reddit data. The data is continuously updated by network miners, providing a real-time stream of Reddit content for various analytical and machine learning tasks.
For more information about the dataset, please visit the official repository.
Supported Tasks
The versatility of this dataset… See the full description on the dataset page: https://huggingface.co/datasets/arrmlet/reddit_dataset_58.
|
arrmlet/x_dataset_58
|
arrmlet
|
Bittensor Subnet 13 X (Twitter) Dataset
Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the official repository.
Supported Tasks
The versatility… See the full description on the dataset page: https://huggingface.co/datasets/arrmlet/x_dataset_58.
|
arrmlet/reddit_dataset_9
|
arrmlet
|
Bittensor Subnet 13 Reddit Dataset
Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed Reddit data. The data is continuously updated by network miners, providing a real-time stream of Reddit content for various analytical and machine learning tasks.
For more information about the dataset, please visit the official repository.
Supported Tasks
The versatility of this dataset… See the full description on the dataset page: https://huggingface.co/datasets/arrmlet/reddit_dataset_9.
|
arrmlet/x_dataset_9
|
arrmlet
|
Bittensor Subnet 13 X (Twitter) Dataset
Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the official repository.
Supported Tasks
The versatility… See the full description on the dataset page: https://huggingface.co/datasets/arrmlet/x_dataset_9.
|
arrmlet/x_dataset_155
|
arrmlet
|
Bittensor Subnet 13 X (Twitter) Dataset
Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the official repository.
Supported Tasks
The versatility… See the full description on the dataset page: https://huggingface.co/datasets/arrmlet/x_dataset_155.
|
gabrielmbmb/proofread-clustered-errors
|
gabrielmbmb
|
Dataset Card for proofread-clustered-errors
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/gabrielmbmb/proofread-clustered-errors/raw/main/pipeline.yaml"
or explore the configuration:
distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/gabrielmbmb/proofread-clustered-errors.
|
nsugianto/topicclassification_ng
|
nsugianto
|
CLASS NAME INFORMATION
1: Animal_encounter_and_experience
2: Activity_experience
3: Natural_scene
4: Weather
5: Diving_training
6: Travel_experience_related_photography
7: Natural_impact
8: Local_narrative
9: Travel_experience
10: Dining_experience
11: Holiday_vibe
Dataset Card for "topicclassification_ng"
More Information needed
|
nbalepur/persona_alignment_test_clean_vague
|
nbalepur
|
Dataset Card for "persona_alignment_test_clean_vague"
More Information needed
|
LM-Polygraph/aeslc
|
LM-Polygraph
|
Dataset Card for aeslc
This is a preprocessed version of aeslc dataset for benchmarks in LM-Polygraph.
Dataset Details
Dataset Description
Curated by: https://huggingface.co/LM-Polygraph
License: https://github.com/IINemo/lm-polygraph/blob/main/LICENSE.md
Dataset Sources [optional]
Repository: https://github.com/IINemo/lm-polygraph
Uses
Direct Use
This dataset should be used for performing… See the full description on the dataset page: https://huggingface.co/datasets/LM-Polygraph/aeslc.
|
INS-IntelligentNetworkSolutions/Waste-Dumpsites-DroneImagery
|
INS-IntelligentNetworkSolutions
|
Dataset
Contains 2115 drone images of 1280 x 1280 px resolution with a nadir perspective (camera pointing straight down at a 90-degree angle to the ground).
Created for the Illegal Dump Site Detection and Landfill Monitoring project, more details below
Initial Dataset for the
Illegal Dump Site Detection and Landfill Monitoring
Open-Source Web Application
🌐 Overview
Utilizing high-resolution drone and satellite imagery for sophisticated… See the full description on the dataset page: https://huggingface.co/datasets/INS-IntelligentNetworkSolutions/Waste-Dumpsites-DroneImagery.
|
LM-Polygraph/trivia_qa_tiny
|
LM-Polygraph
|
Dataset Card for trivia_qa_tiny
This is a preprocessed version of trivia_qa_tiny dataset for benchmarks in LM-Polygraph.
Dataset Details
Dataset Description
Curated by: https://huggingface.co/LM-Polygraph
License: https://github.com/IINemo/lm-polygraph/blob/main/LICENSE.md
Dataset Sources [optional]
Repository: https://github.com/IINemo/lm-polygraph
Uses
Direct Use
This dataset should be… See the full description on the dataset page: https://huggingface.co/datasets/LM-Polygraph/trivia_qa_tiny.
|
groloch/stable_diffusion_prompts_instruct
|
groloch
|
Stable diffusion prompts for instruction models fine-tuning
Overview
This dataset contains 80,000+ prompts, summarized to make it easier to train instruction-tuned prompt-enhancing models. Each row of the dataset contains two values:
a short description of a image
a full prompt corresponding to that description in a stable diffusion format
We hope this dataset helps you create amazing apps!
How to use
You can download and use the dataset easily… See the full description on the dataset page: https://huggingface.co/datasets/groloch/stable_diffusion_prompts_instruct.
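As a hedged illustration of the pairing described above, one row could be mapped into an instruction-tuning example like this (the field names "description" and "prompt" are assumptions, not confirmed column names from the card):

```python
# Hypothetical mapping of one (short description, full prompt) row into an
# instruction-tuning example. Field names are assumptions, not confirmed
# column names from the dataset card.
def to_instruct_example(row):
    return {
        "instruction": "Expand this short image description into a "
                       "detailed Stable Diffusion prompt.",
        "input": row["description"],
        "output": row["prompt"],
    }

row = {
    "description": "a castle on a hill at sunset",
    "prompt": "a majestic castle on a hill at sunset, golden hour, "
              "highly detailed, 8k",
}
example = to_instruct_example(row)
print(example["input"])  # a castle on a hill at sunset
```

The resulting dicts follow the common instruction/input/output convention, so they can be fed to most SFT formatting pipelines.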
|
LM-Polygraph/mmlu
|
LM-Polygraph
|
Dataset Card for mmlu
This is a preprocessed version of mmlu dataset for benchmarks in LM-Polygraph.
Dataset Details
Dataset Description
Curated by: https://huggingface.co/LM-Polygraph
License: https://github.com/IINemo/lm-polygraph/blob/main/LICENSE.md
Dataset Sources [optional]
Repository: https://github.com/IINemo/lm-polygraph
Uses
Direct Use
This dataset should be used for performing… See the full description on the dataset page: https://huggingface.co/datasets/LM-Polygraph/mmlu.
|
open-llm-leaderboard/CohereForAI__aya-expanse-8b-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of CohereForAI/aya-expanse-8b
Dataset automatically created during the evaluation run of model CohereForAI/aya-expanse-8b
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/CohereForAI__aya-expanse-8b-details.
|
LM-Polygraph/babi_qa
|
LM-Polygraph
|
Dataset Card for babi_qa
This is a preprocessed version of babi_qa dataset for benchmarks in LM-Polygraph.
Dataset Details
Dataset Description
Curated by: https://huggingface.co/LM-Polygraph
License: https://github.com/IINemo/lm-polygraph/blob/main/LICENSE.md
Dataset Sources [optional]
Repository: https://github.com/IINemo/lm-polygraph
Uses
Direct Use
This dataset should be used for… See the full description on the dataset page: https://huggingface.co/datasets/LM-Polygraph/babi_qa.
|
LM-Polygraph/coqa
|
LM-Polygraph
|
Dataset Card for coqa
This is a preprocessed version of coqa dataset for benchmarks in LM-Polygraph.
Dataset Details
Dataset Description
Curated by: https://huggingface.co/LM-Polygraph
License: https://github.com/IINemo/lm-polygraph/blob/main/LICENSE.md
Dataset Sources [optional]
Repository: https://github.com/IINemo/lm-polygraph
Uses
Direct Use
This dataset should be used for performing… See the full description on the dataset page: https://huggingface.co/datasets/LM-Polygraph/coqa.
|
LM-Polygraph/gsm8k
|
LM-Polygraph
|
Dataset Card for gsm8k
This is a preprocessed version of gsm8k dataset for benchmarks in LM-Polygraph.
Dataset Details
Dataset Description
Curated by: https://huggingface.co/LM-Polygraph
License: https://github.com/IINemo/lm-polygraph/blob/main/LICENSE.md
Dataset Sources [optional]
Repository: https://github.com/IINemo/lm-polygraph
Uses
Direct Use
This dataset should be used for performing… See the full description on the dataset page: https://huggingface.co/datasets/LM-Polygraph/gsm8k.
|
LM-Polygraph/trivia_qa
|
LM-Polygraph
|
Dataset Card for trivia_qa
This is a preprocessed version of trivia_qa dataset for benchmarks in LM-Polygraph.
Dataset Details
Dataset Description
Curated by: https://huggingface.co/LM-Polygraph
License: https://github.com/IINemo/lm-polygraph/blob/main/LICENSE.md
Dataset Sources [optional]
Repository: https://github.com/IINemo/lm-polygraph
Uses
Direct Use
This dataset should be used for… See the full description on the dataset page: https://huggingface.co/datasets/LM-Polygraph/trivia_qa.
|
mteb/mlsum
|
mteb
|
Re-upload of reciTAL/mlsum dataset version b5d54f8f3b61ae17845046286940f03c6bc79bc7 to avoid script loading issues.
Original dataset link: https://huggingface.co/datasets/reciTAL/mlsum
|
LM-Polygraph/xsum
|
LM-Polygraph
|
Dataset Card for xsum
This is a preprocessed version of xsum dataset for benchmarks in LM-Polygraph.
Dataset Details
Dataset Description
Curated by: https://huggingface.co/LM-Polygraph
License: https://github.com/IINemo/lm-polygraph/blob/main/LICENSE.md
Dataset Sources [optional]
Repository: https://github.com/IINemo/lm-polygraph
Uses
Direct Use
This dataset should be used for performing… See the full description on the dataset page: https://huggingface.co/datasets/LM-Polygraph/xsum.
|
jhu-clsp/mFollowIR-parquet
|
jhu-clsp
|
mFollowIR-parquet
This is a parquet version of the mFollowIR dataset that can be loaded directly with load_dataset(). The original dataset can be found at jhu-clsp/mFollowIR.
Dataset Structure
The dataset contains the following configurations for each language (fas, rus, zho):
Configurations
qrels_og_[lang]: Original relevance judgments (test split)
qrels_changed_[lang]: Modified relevance judgments (test split)
corpus_[lang]: Document collection… See the full description on the dataset page: https://huggingface.co/datasets/jhu-clsp/mFollowIR-parquet.
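The configuration names follow a regular pattern, sketched below; the truncated description may list further configurations, so verify the exact names on the dataset page before relying on them:

```python
# Sketch of the configuration naming scheme listed above
# (prefix + "_" + language code). This enumerates only the
# configurations visible in the card; there may be more.
LANGS = ["fas", "rus", "zho"]
PREFIXES = ["qrels_og", "qrels_changed", "corpus"]

configs = [f"{prefix}_{lang}" for lang in LANGS for prefix in PREFIXES]
print(configs)
```

Per the card, each such name can then be passed as the configuration argument to `load_dataset("jhu-clsp/mFollowIR-parquet", name)`.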
|
jhu-clsp/mFollowIR-cross-lingual-parquet
|
jhu-clsp
|
mFollowIR-cross-lingual-parquet
This is a parquet version of the mFollowIR cross-lingual dataset that can be loaded directly with load_dataset(). The original dataset can be found at jhu-clsp/mFollowIR-cross-lingual.
Dataset Structure
The dataset contains the following configurations for each target language (fas, rus, zho):
Configurations
qrels_og_[lang]: Original relevance judgments (test split)
qrels_changed_[lang]: Modified relevance judgments… See the full description on the dataset page: https://huggingface.co/datasets/jhu-clsp/mFollowIR-cross-lingual-parquet.
|
TroglodyteDerivations/ETTm2
|
TroglodyteDerivations
|
The ETTm2 dataset is a time series dataset from the ETT family, which also includes ETTh1 and ETTm1; all of these are used for time series forecasting tasks. The dataset is derived from the operation of an electricity transformer, specifically focusing on the monitoring of various operational parameters, and is designed for research in time series forecasting, anomaly detection, and other related tasks.
Key Characteristics of the ETTm2 Dataset:
Source: The dataset is collected from the… See the full description on the dataset page: https://huggingface.co/datasets/TroglodyteDerivations/ETTm2.
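A minimal sketch of the chronological train/validation/test split such forecasting benchmarks typically use (the 60/20/20 ratio is an illustration, not taken from the card):

```python
# Chronological split: time series must never be shuffled before
# splitting, so earlier readings train and later readings evaluate.
def chrono_split(series, train_frac=0.6, val_frac=0.2):
    n = len(series)
    train_end = int(n * train_frac)
    val_end = train_end + int(n * val_frac)
    return series[:train_end], series[train_end:val_end], series[val_end:]

series = list(range(100))  # stand-in for transformer readings
train, val, test = chrono_split(series)
print(len(train), len(val), len(test))  # 60 20 20
```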
|
huggingMetoo/cudfPackageX86
|
huggingMetoo
|
Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
Dataset Details
Dataset Description
Curated by: [More Information Needed]
Funded by [optional]: [More Information Needed]
Shared by [optional]: [More Information Needed]
Language(s) (NLP): [More Information Needed]
License: [More Information Needed]
Dataset Sources [optional]… See the full description on the dataset page: https://huggingface.co/datasets/huggingMetoo/cudfPackageX86.
|
LM-Polygraph/wmt14
|
LM-Polygraph
|
Dataset Card for wmt14
This is a preprocessed version of wmt14 dataset for benchmarks in LM-Polygraph.
Dataset Details
Dataset Description
Curated by: https://huggingface.co/LM-Polygraph
License: https://github.com/IINemo/lm-polygraph/blob/main/LICENSE.md
Dataset Sources [optional]
Repository: https://github.com/IINemo/lm-polygraph
Uses
Direct Use
This dataset should be used for performing… See the full description on the dataset page: https://huggingface.co/datasets/LM-Polygraph/wmt14.
|
LM-Polygraph/person
|
LM-Polygraph
|
Dataset Card for person
This is a preprocessed version of person dataset for benchmarks in LM-Polygraph.
Dataset Details
Dataset Description
Curated by: https://huggingface.co/LM-Polygraph
License: https://github.com/IINemo/lm-polygraph/blob/main/LICENSE.md
Dataset Sources [optional]
Repository: https://github.com/IINemo/lm-polygraph
Uses
Direct Use
This dataset should be used for performing… See the full description on the dataset page: https://huggingface.co/datasets/LM-Polygraph/person.
|
LM-Polygraph/wiki_bio
|
LM-Polygraph
|
Dataset Card for wiki_bio
This is a preprocessed version of wiki_bio dataset for benchmarks in LM-Polygraph.
Dataset Details
Dataset Description
Curated by: https://huggingface.co/LM-Polygraph
License: https://github.com/IINemo/lm-polygraph/blob/main/LICENSE.md
Dataset Sources [optional]
Repository: https://github.com/IINemo/lm-polygraph
Uses
Direct Use
This dataset should be used for… See the full description on the dataset page: https://huggingface.co/datasets/LM-Polygraph/wiki_bio.
|
open-llm-leaderboard/zelk12__MT1-Gen1-gemma-2-9B-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of zelk12/MT1-Gen1-gemma-2-9B
Dataset automatically created during the evaluation run of model zelk12/MT1-Gen1-gemma-2-9B
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/zelk12__MT1-Gen1-gemma-2-9B-details.
|
roadz/solusdt_360_days
|
roadz
|
SOLUSDT 360 Days Minute Data
This dataset contains historical SOL/USDT minute-by-minute data for the last 360 days, downloaded from the Binance API.
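A hedged stdlib sketch of the kind of downsampling minute-level candles enable, aggregating them into hourly OHLCV bars (the field names below are assumptions, not the dataset's confirmed schema):

```python
from itertools import groupby

# Aggregate time-ordered minute candles into hourly OHLCV bars.
# Input must be sorted by timestamp, since groupby only merges
# consecutive items with the same key.
def to_hourly(minute_bars):
    hourly = []
    for hour, group in groupby(minute_bars, key=lambda b: b["ts"] // 3600):
        bars = list(group)
        hourly.append({
            "ts": hour * 3600,
            "open": bars[0]["open"],
            "high": max(b["high"] for b in bars),
            "low": min(b["low"] for b in bars),
            "close": bars[-1]["close"],
            "volume": sum(b["volume"] for b in bars),
        })
    return hourly

minute_bars = [
    {"ts": t, "open": 100.0, "high": 101.0, "low": 99.0,
     "close": 100.5, "volume": 1.0}
    for t in range(0, 7200, 60)  # two hours of synthetic minute bars
]
hourly = to_hourly(minute_bars)
print(len(hourly))  # 2
```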
|
LeroyDyer/Tox21-V-SMILES_QA_to_Base64
|
LeroyDyer
|
question,answer,image,image_base64
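Those column names suggest images stored as base64 text alongside each QA pair; a minimal stdlib sketch of producing and decoding such an `image_base64` value (the bytes below are a stand-in, not real Tox21 imagery):

```python
import base64

# Round-trip image bytes through base64 so they can live in a CSV column.
image_bytes = b"\x89PNG\r\n\x1a\n fake image payload"
encoded = base64.b64encode(image_bytes).decode("ascii")  # store in CSV
decoded = base64.b64decode(encoded)                      # recover bytes
assert decoded == image_bytes
print(encoded[:8])
```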
|
lenML/longwriter-6k-filtered
|
lenML
|
LongWriter-6k-Filtered
🤖 [LongWriter Dataset] • 💻 [Github Repo] • 📃 [LongWriter Paper] • 📃 [Tech report]
The longwriter-6k-filtered dataset contains 666 filtered SFT examples with ultra-long outputs ranging from 2k to 32k words in length (both English and Chinese), based on LongWriter-6k. The data can support training LLMs to extend their maximum output window size to 10,000+ words with low computational cost.
The tech report is available at Minimum Tuning to Unlock Long… See the full description on the dataset page: https://huggingface.co/datasets/lenML/longwriter-6k-filtered.
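An illustrative sketch of the 2k–32k-word length band the card describes (the "output" field name is an assumption, and whitespace word counting is a simplification that undercounts Chinese text):

```python
# Keep only examples whose output falls in the target length band.
def in_length_band(example, lo=2000, hi=32000):
    n_words = len(example["output"].split())
    return lo <= n_words <= hi

examples = [
    {"output": "short answer"},
    {"output": "word " * 5000},  # 5000 whitespace-separated tokens
]
kept = [ex for ex in examples if in_length_band(ex)]
print(len(kept))  # 1
```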
|
LM-Polygraph/triviaqa
|
LM-Polygraph
|
Dataset Card for triviaqa
This is a preprocessed version of triviaqa dataset for benchmarks in LM-Polygraph.
Dataset Details
Dataset Description
Curated by: https://huggingface.co/LM-Polygraph
License: https://github.com/IINemo/lm-polygraph/blob/main/LICENSE.md
Dataset Sources [optional]
Repository: https://github.com/IINemo/lm-polygraph
Uses
Direct Use
This dataset should be used for… See the full description on the dataset page: https://huggingface.co/datasets/LM-Polygraph/triviaqa.
|
SiliangZ/ultrachat_200k_mistral_sft_temp1
|
SiliangZ
|
Dataset Card for "ultrachat_200k_mistral_sft_temp1"
More Information needed
|
PlanAPlanB/reddit_dataset_45
|
PlanAPlanB
|
Bittensor Subnet 13 Reddit Dataset
Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed Reddit data. The data is continuously updated by network miners, providing a real-time stream of Reddit content for various analytical and machine learning tasks.
For more information about the dataset, please visit the official repository.
Supported Tasks
The versatility of this dataset… See the full description on the dataset page: https://huggingface.co/datasets/PlanAPlanB/reddit_dataset_45.
|
FrancophonIA/vintage
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/vintage
Only an excerpt is available in this repository. You must go to the Ortolang website and log in to download the full dataset.
Description
The VIntAGE corpus consists of 1,080 minutes of video recordings of 36 face-to-face interviews conducted at the homes of 9 female speakers over 75 years of age presenting Mild Cognitive Impairment (Petersen et al., 1999; Petersen, 2004;… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/vintage.
|
dogtooth/gemma-2-9b-it_generated_helpsteer2_binarized_1729798997
|
dogtooth
|
allenai/open_instruct: Generation Dataset
See https://github.com/allenai/open-instruct/blob/main/docs/algorithms/rejection_sampling.md for more detail
Configs
args:
{'add_timestamp': True,
'alpaca_eval': False,
'dataset_end_idx': 8636,
'dataset_mixer_list': ['dogtooth/helpsteer2_binarized', '1.0'],
'dataset_splits': ['train'],
'dataset_start_idx': 0,
'hf_entity': 'dogtooth',
'hf_repo_id': 'gemma-2-9b-it_generated_helpsteer2_binarized',
'llm_judge': False… See the full description on the dataset page: https://huggingface.co/datasets/dogtooth/gemma-2-9b-it_generated_helpsteer2_binarized_1729798997.
|
jcantu217/QA-Invasive-Plants-USA
|
jcantu217
|
Dataset used to train jcantu217/gemma-2-7b-invasive-plant-chatbot
|
FrancophonIA/Antibiotic
|
FrancophonIA
|
Dataset origin: https://portulanclarin.net/repository/browse/1c5ff916146911ebb6ec02420a0004094d31b044c80f4109bb228ff6f55a68e8/
Description
Multilingual (CEF languages) corpus acquired from the website https://antibiotic.ecdc.europa.eu/ . It contains 20981 TUs (in total) for EN-X language pairs, where X is a CEF language.
Citation
Not clear :/
|
FrancophonIA/Deltacorpus_1.1
|
FrancophonIA
|
Dataset origin: https://lindat.cz/repository/xmlui/handle/11234/1-1743
Description
Texts in 107 languages from the W2C corpus (http://hdl.handle.net/11858/00-097C-0000-0022-6133-9), first 1,000,000 tokens per language, tagged by the delexicalized tagger described in Yu et al. (2016, LREC, Portorož, Slovenia).
Changes in version 1.1:
Universal Dependencies tagset instead of the older and smaller Google Universal POS tagset.
SVM classifier trained on Universal Dependencies… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/Deltacorpus_1.1.
|
LM-Polygraph/wmt19
|
LM-Polygraph
|
Dataset Card for wmt19
This is a preprocessed version of wmt19 dataset for benchmarks in LM-Polygraph.
Dataset Details
Dataset Description
Curated by: https://huggingface.co/LM-Polygraph
License: https://github.com/IINemo/lm-polygraph/blob/main/LICENSE.md
Dataset Sources [optional]
Repository: https://github.com/IINemo/lm-polygraph
Uses
Direct Use
This dataset should be used for performing… See the full description on the dataset page: https://huggingface.co/datasets/LM-Polygraph/wmt19.
|