id | author | description
---|---|---|
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_245 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_245". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_246 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_246". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_247 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_247". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_248 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_248". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_249 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_249". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_250 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_250". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_251 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_251". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_252 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_252". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_253 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_253". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_254 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_254". More Information needed.
techiaith/llyw-cymru-en-cy-ogl | techiaith | Llyw Cymru dataset: a dataset of aligned sentences scraped from Welsh Government websites. The data in this dataset is made available under the Open Government License. Mae hon yn set ddata sy'n cynnwys brawddegau wedi'u halinio wedi'u crafu o wefannau Llywodraeth Cymru. Mae'r data yn y set ddata hon ar gael o dan y Trwydded Llywodraeth Agored. Example usage: use the Hugging Face datasets module to load the dataset (import datasets; ds =…). See the full description on the dataset page: https://huggingface.co/datasets/techiaith/llyw-cymru-en-cy-ogl.
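The usage snippet quoted above is truncated; a minimal sketch using the standard Hugging Face datasets API, assuming the default configuration and split names exposed on the dataset page:

```python
from datasets import load_dataset

# Load the Welsh Government aligned-sentence corpus (default configuration).
ds = load_dataset("techiaith/llyw-cymru-en-cy-ogl")

# Inspect the available splits and look at the first example.
print(ds)
first_split = next(iter(ds))
print(ds[first_split][0])
```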
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_255 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_255". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_256 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_256". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_257 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_257". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_258 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_258". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_259 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_259". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_260 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_260". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_261 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_261". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_262 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_262". More Information needed.
mesut/congress_speeches | mesut | Curated using: Judd, Nicholas, Dan Drinkard, Jeremy Carbaugh, and Lindsay Young. congressional-record: A parser for the Congressional Record. Chicago, IL: 2017. https://github.com/unitedstates/congressional-record. The text is preprocessed by removing President names, Vice President names, party names, and stock phrases such as "I reserve the balance of my time" and "I yield the floor". The dataset is also balanced across parties.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_263 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_263". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_264 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_264". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_265 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_265". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_266 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_266". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_267 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_267". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_268 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_268". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_269 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_269". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_270 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_270". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_271 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_271". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_272 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_272". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_273 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_273". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_274 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_274". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_275 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_275". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_276 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_276". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_277 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_277". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_278 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_278". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_279 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_279". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_280 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_280". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_281 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_281". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_282 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_282". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_283 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_283". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_284 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_284". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_285 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_285". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_286 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_286". More Information needed.
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_288 | HamdanXI | Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_with_model_output_chunk_288". More Information needed.
L1Fthrasir/Facts-true-false | L1Fthrasir | The dataset was created by Amos Azaria and Tom Mitchell, who first used it in their paper "The Internal State of an LLM Knows When It's Lying".
L1Fthrasir/Cities-true-false | L1Fthrasir | The dataset was created by Amos Azaria and Tom Mitchell, who first used it in their paper "The Internal State of an LLM Knows When It's Lying".
L1Fthrasir/Companies-true-false | L1Fthrasir | The dataset was created by Amos Azaria and Tom Mitchell, who first used it in their paper "The Internal State of an LLM Knows When It's Lying".
browndw/human-ai-parallel-corpus-mini | browndw | Human-AI Parallel English Corpus Mini (HAP-E mini) 🙃. Purpose: this is a down-sampled version of the HAP-E corpus; please read the HAP-E data card for detailed information about how the full corpus was created. This smaller version of the corpus was created to facilitate smaller-scale explorations of the data (in classrooms, workshops, etc.). Note that in down-sampling the data, the parallel nature of the corpus was maintained. There is a text chunk for the… See the full description on the dataset page: https://huggingface.co/datasets/browndw/human-ai-parallel-corpus-mini.
open-llm-leaderboard/swap-uniba__LLaMAntino-3-ANITA-8B-Inst-DPO-ITA-details | open-llm-leaderboard | Dataset Card for Evaluation run of swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA. Dataset automatically created during the evaluation run of model swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA. The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/swap-uniba__LLaMAntino-3-ANITA-8B-Inst-DPO-ITA-details.
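These evaluation-details datasets share a common layout: one configuration per evaluated task, one timestamped split per run, and a "train" split that points at the latest results. A minimal sketch of browsing one of them with the Hugging Face datasets API; the configuration picked below is simply whichever is listed first, not a specific recommendation:

```python
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/swap-uniba__LLaMAntino-3-ANITA-8B-Inst-DPO-ITA-details"

# One configuration per evaluated task (38 for this run).
configs = get_dataset_config_names(repo)
print(len(configs), configs[:5])

# The "train" split always points to the latest results;
# individual runs live in splits named by their timestamp.
details = load_dataset(repo, configs[0], split="train")
print(details)
```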
WiNE-iNEFF/1M-OpenOrca_be | WiNE-iNEFF | En/Be. 🐋 The Belarusian OpenOrca Dataset! 🐋 The Belarusian OpenOrca dataset is a rich collection of augmented FLAN alignment data translated into Belarusian. It is intended to help train LLMs in Belarusian and to support other NLP tasks. The dataset has two versions: ~1M GPT-4 completions (currently being translated) and ~3.2M GPT-3.5 completions (may be translated in the future). Data fields: the fields are 'id', a unique numbered identifier which includes one of… See the full description on the dataset page: https://huggingface.co/datasets/WiNE-iNEFF/1M-OpenOrca_be.
open-llm-leaderboard/BrainWave-ML__llama3.2-3B-maths-orpo-details | open-llm-leaderboard | Dataset Card for Evaluation run of BrainWave-ML/llama3.2-3B-maths-orpo. Dataset automatically created during the evaluation run of model BrainWave-ML/llama3.2-3B-maths-orpo. The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/BrainWave-ML__llama3.2-3B-maths-orpo-details.
Bretagne/mmid_br | Bretagne | Dataset origin: https://github.com/penn-nlp/mmid/blob/master/downloads.md. Description: an image/word dataset for Breton (100 images per word), with the metadata of all images and the webpages they appeared on, and the dictionary containing just the words we have images for in each language, as well as their canonical MMID ID within the language. For more information, see our documentation page. MMID was constructed by building translations for the bilingual dictionaries found here… See the full description on the dataset page: https://huggingface.co/datasets/Bretagne/mmid_br.
Bretagne/audio-breton-corpus | Bretagne | Dataset origin: https://github.com/Ofis-publik-ar-brezhoneg/audio-breton-corpus. Breton audio corpus: an audio corpus of Breton sentences, created by IRISA and the Office public de la langue bretonne as part of the Breton speech-synthesis project. Corpus description, Aziliz and Per voices: Aziliz and Per are a female and a male voice recorded between 2021 and 2022, with nearly 20 hours of recordings based on texts from… See the full description on the dataset page: https://huggingface.co/datasets/Bretagne/audio-breton-corpus.
J-LAB/open-perfect_blend_ptbr_sharegpt | J-LAB | 🎨 Open-PerfectBlend. Open-PerfectBlend is an open-source reproduction of the instruction dataset introduced in the paper "The Perfect Blend: Redefining RLHF with Mixture of Judges". It is a solid general-purpose instruction dataset with chat, math, code, and instruction-following data. Data sources (dataset: # samples): meta-math/MetaMathQA: 395,000; openbmb/UltraInteract_sft: 288,579… See the full description on the dataset page: https://huggingface.co/datasets/J-LAB/open-perfect_blend_ptbr_sharegpt.
Bretagne/WikiMatrix_br | Bretagne | Dataset origin: https://opus.nlpl.eu/WikiMatrix/corpus/version/WikiMatrix. Description: the monolingual Breton corpus from WikiMatrix. Citation: @misc{schwenk2019wikimatrixmining135mparallel, title={WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia}, author={Holger Schwenk and Vishrav Chaudhary and Shuo Sun and Hongyu Gong and Francisco Guzmán}, year={2019}, eprint={1907.05791}, archivePrefix={arXiv}… See the full description on the dataset page: https://huggingface.co/datasets/Bretagne/WikiMatrix_br.
open-llm-leaderboard/djuna-test-lab__TEST-L3.2-ReWish-3B-details | open-llm-leaderboard | Dataset Card for Evaluation run of djuna-test-lab/TEST-L3.2-ReWish-3B. Dataset automatically created during the evaluation run of model djuna-test-lab/TEST-L3.2-ReWish-3B. The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/djuna-test-lab__TEST-L3.2-ReWish-3B-details.
open-llm-leaderboard/ymcki__gemma-2-2b-jpn-it-abliterated-24-details | open-llm-leaderboard | Dataset Card for Evaluation run of ymcki/gemma-2-2b-jpn-it-abliterated-24. Dataset automatically created during the evaluation run of model ymcki/gemma-2-2b-jpn-it-abliterated-24. The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/ymcki__gemma-2-2b-jpn-it-abliterated-24-details.
open-llm-leaderboard/DeepMount00__Llama-3.1-Distilled-details | open-llm-leaderboard | Dataset Card for Evaluation run of DeepMount00/Llama-3.1-Distilled. Dataset automatically created during the evaluation run of model DeepMount00/Llama-3.1-Distilled. The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/DeepMount00__Llama-3.1-Distilled-details.
open-llm-leaderboard/moeru-ai__L3.1-Moe-2x8B-v0.2-details | open-llm-leaderboard | Dataset Card for Evaluation run of moeru-ai/L3.1-Moe-2x8B-v0.2. Dataset automatically created during the evaluation run of model moeru-ai/L3.1-Moe-2x8B-v0.2. The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/moeru-ai__L3.1-Moe-2x8B-v0.2-details.
lnzdanese/koch_test11 | lnzdanese | This dataset was created using LeRobot.
open-llm-leaderboard/DeepMount00__mergekit-ties-okvgjfz-details | open-llm-leaderboard | Dataset Card for Evaluation run of DeepMount00/mergekit-ties-okvgjfz. Dataset automatically created during the evaluation run of model DeepMount00/mergekit-ties-okvgjfz. The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/DeepMount00__mergekit-ties-okvgjfz-details.
open-llm-leaderboard/DeepMount00__Qwen2.5-7B-Instruct-MathCoder-details | open-llm-leaderboard | Dataset Card for Evaluation run of DeepMount00/Qwen2.5-7B-Instruct-MathCoder. Dataset automatically created during the evaluation run of model DeepMount00/Qwen2.5-7B-Instruct-MathCoder. The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/DeepMount00__Qwen2.5-7B-Instruct-MathCoder-details.
open-llm-leaderboard/dwikitheduck__gemma-2-2b-id-details | open-llm-leaderboard | Dataset Card for Evaluation run of dwikitheduck/gemma-2-2b-id. Dataset automatically created during the evaluation run of model dwikitheduck/gemma-2-2b-id. The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/dwikitheduck__gemma-2-2b-id-details.
FrancophonIA/autismedascalu | FrancophonIA | Dataset origin: https://www.ortolang.fr/market/corpora/autismedascalu. You must go to the ORTOLANG website and log in to download the data (62 GB). Description: a corpus of spontaneous interaction from two high-functioning autistic children. Citation: @misc{11403/autismedascalu/v1, title = {Autisme-Dascalu}, author = {Camelia Dascalu}, url = {https://hdl.handle.net/11403/autismedascalu/v1}, note = {{ORTOLANG} ({Open} {Resources}… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/autismedascalu.
FrancophonIA/convers | FrancophonIA | Dataset origin: https://www.ortolang.fr/market/corpora/convers. This dataset contains only the transcriptions; to obtain the audio, you must go to the ORTOLANG website and log in to download the data. Description: we present a new paradigm for social neuroscience that compares a human social interaction (human-human interaction, HHI) with an interaction with a conversational robot (human-robot interaction, HRI)… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/convers.
FrancophonIA/plpnat | FrancophonIA | Dataset origin: https://www.ortolang.fr/market/corpora/plpnat. You must go to the ORTOLANG website and log in to download the data. Description: the PLPNat corpus database ("Corpus de Productions Langagières Précoces en situation Naturelle", a corpus of early language productions in natural settings) was built up progressively under the direction of Dominique Bassano, with the principal assistance of Isabelle Maillochon, Magali Lavielle-Guida and Sameh Yaiche. The database consists of… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/plpnat.
OALL/details_CohereForAI__aya-expanse-8b | OALL | Dataset Card for Evaluation run of CohereForAI/aya-expanse-8b. Dataset automatically created during the evaluation run of model CohereForAI/aya-expanse-8b. The dataset is composed of 136 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An… See the full description on the dataset page: https://huggingface.co/datasets/OALL/details_CohereForAI__aya-expanse-8b.
open-llm-leaderboard/CultriX__Qwen2.5-14B-MegaMerge-pt2-details | open-llm-leaderboard | Dataset Card for Evaluation run of CultriX/Qwen2.5-14B-MegaMerge-pt2. Dataset automatically created during the evaluation run of model CultriX/Qwen2.5-14B-MegaMerge-pt2. The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/CultriX__Qwen2.5-14B-MegaMerge-pt2-details.
FrancophonIA/EMA-501-LORIA | FrancophonIA | Dataset origin: https://www.ortolang.fr/market/corpora/autismedascalu. You must go to the ORTOLANG website and log in to download the data. Description: a corpus recorded with an AG 501 articulograph, covering 8 languages and different types of speech. The corpus data can be visualised with the VisArtico software available at visartico.loria.fr. Citation: @misc{11403/ema-501-loria/v1, title =… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/EMA-501-LORIA.
FrancophonIA/cheese | FrancophonIA | Dataset origin: https://www.ortolang.fr/market/corpora/cheese. This repository contains a subset of the data; you must go to the ORTOLANG website and log in to download the full dataset. Description: "Cheese!" is a conversational corpus recorded in the anechoic chamber of the Laboratoire Parole et Langage (Aix-en-Provence). Initially designed for a comparative analysis of the link between smiling and humour in French and English… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/cheese.
FrancophonIA/rhapsodie | FrancophonIA | Dataset origin: https://www.ortolang.fr/market/corpora/rhapsodie. Description: a corpus of spoken French annotated for prosody and syntax. A central problem in the study of spoken languages is understanding the role that intonosyntactic cues play in segmenting the sound continuum into informational and discursive units. This raises questions such as: what is the degree of congruence between the different units manipulated by syntax… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/rhapsodie.
FrancophonIA/regard_dessin | FrancophonIA | Dataset origin: https://www.ortolang.fr/market/corpora/sldr000837. You must go to the ORTOLANG website and log in to download the data. Description: this corpus for analysing gaze during the description of a drawing consists of a 3 min 50 s sequence with audio and video recording. The speaker must describe a figure (the Rey complex figure) from memory so that the interlocutor can redraw it. The analysis focuses on the speaker's gaze while… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/regard_dessin.
FrancophonIA/queer-solidarity | FrancophonIA | Dataset origin: https://www.ortolang.fr/market/corpora/queer-solidarity. You must go to the ORTOLANG website and log in to download the data. Description: the "Queer Solidarity Smashes Border" corpus gathers 56 photos of leaflets, banners or queer events in support of migrants. The photos were taken between 2005 and 2018 in Europe, North America, Australia and the Middle East. Citation… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/queer-solidarity.
FrancophonIA/multimodal-humor | FrancophonIA | Dataset origin: https://www.ortolang.fr/market/corpora/sldr000837. Description: the "multimodal humor" collection contains the example video sequences associated with a book chapter in progress (references will be given upon publication). They are excerpts of longitudinal adult-child follow-ups taken from the CoLaJE corpus, which is already fully open in ORTOLANG (https://www.ortolang.fr/market/corpora/colaje). The children are ANAE and THEOPHILE, aged between… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/multimodal-humor.
open-llm-leaderboard/lemon07r__Gemma-2-Ataraxy-v4d-9B-details | open-llm-leaderboard | Dataset Card for Evaluation run of lemon07r/Gemma-2-Ataraxy-v4d-9B. Dataset automatically created during the evaluation run of model lemon07r/Gemma-2-Ataraxy-v4d-9B. The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/lemon07r__Gemma-2-Ataraxy-v4d-9B-details.
FrancophonIA/covidis9 | FrancophonIA | Dataset origin: https://www.ortolang.fr/market/corpora/covidis9. You must go to the ORTOLANG website and log in to download the data. Description: the Covidis9 project aims to build and publish a linguistic corpus from the nine televised addresses given by President Emmanuel Macron during the health crisis in 2020 and 2021 ("addresses to the French people", as broadcast on the Élysée website). This corpus… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/covidis9.
FrancophonIA/smyle | FrancophonIA | Dataset origin: https://www.ortolang.fr/market/corpora/smyle. You must go to the ORTOLANG website and log in to download the data. Description: SMYLE is a multimodal corpus in French (16 h) including audio-video and neuro-physiological data from 60 participants engaged in face-to-face storytelling (8.2 h) and free conversation tasks (7.8 h). This corpus covers all modalities, precisely synchronized. It constitutes one of the first corpora of this size… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/smyle.
FrancophonIA/BrainKT | FrancophonIA | Dataset origin: https://www.ortolang.fr/market/corpora/brainkt. You must go to the ORTOLANG website and log in to download the data. Note that the dataset is 100 GB! Description: BrainKT is a multimodal corpus (audio, video and neurophysiological data) of dyadic conversational interactions in French. It was built to study information transfer in natural conversation. The… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/BrainKT.
FrancophonIA/Grenelle_II | FrancophonIA | Dataset origin: https://www.ortolang.fr/market/corpora/sldr000744 and https://www.ortolang.fr/market/corpora/sldr000768. Description: an excerpt from the video of the second sitting of 4 May 2010. The debate on the "Grenelle II" environment bill was selected because of the major controversy it triggered. The Green MP Yves Cochet speaks during the debate; we retained 4 minutes of the most heated moment of the controversy, in which the MP is interrupted 11… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/Grenelle_II.
ashwinvk94/so100_test_5 | ashwinvk94 | This dataset was created using LeRobot.
ilyassmoummad/Xeno-Canto-6s-16khz | ilyassmoummad | Xeno-Canto Bird Sound Dataset. This repository provides access to the Xeno-Canto bird sound dataset (checkpoint from 2022-07-18) used in the BIRB benchmark, specifically pre-processed to facilitate training deep learning models. The dataset has been processed using CNN14 from PANNs, a model pre-trained on AudioSet, to select 6-second windows with the highest bird sound activation. All audio has been downsampled to 16 kHz and converted into PyTorch format (.pt), optimizing it for… See the full description on the dataset page: https://huggingface.co/datasets/ilyassmoummad/Xeno-Canto-6s-16khz.
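A minimal sketch of inspecting one of these pre-processed clips, assuming each .pt file holds a serialized waveform tensor; the file name and exact layout below are hypothetical, so check the dataset page for the real structure:

```python
import torch

SAMPLE_RATE = 16_000   # audio is downsampled to 16 kHz
CLIP_SECONDS = 6       # 6-second windows selected via CNN14 (PANNs) activations

# Hypothetical file name; the actual naming scheme is documented on the dataset page.
clip = torch.load("xeno_canto_clip_000001.pt")

# A 6 s mono clip at 16 kHz should contain 16,000 * 6 = 96,000 samples.
print(clip.shape, SAMPLE_RATE * CLIP_SECONDS)
```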
SiliangZ/ultrachat_200k_mistral_sft_temp1_iter1 | SiliangZ | Dataset Card for "ultrachat_200k_mistral_sft_temp1_iter1". More Information needed.
FrancophonIA/focus_en_francais | FrancophonIA | Dataset origin: https://www.ortolang.fr/market/corpora/sldr000490. Description: focus in French ("Focus en français"). Citation: @misc{11403/sldr000490/v1, title = {Focus en fran\c{c}ais}, author = {Clément Plancq}, url = {https://hdl.handle.net/11403/sldr000490/v1}, note = {{ORTOLANG} ({Open} {Resources} {and} {TOols} {for} {LANGuage}) \textendash www.ortolang.fr}, year = {2013}}
FrancophonIA/declaratives_avec_disjonction | FrancophonIA | Dataset origin: https://www.ortolang.fr/market/corpora/sldr000821. Description: an elicited corpus of declarative sentences. Citation: @misc{11403/sldr000821/v1, title = {Déclaratives avec disjonction}, author = {Clément Plancq}, url = {https://hdl.handle.net/11403/sldr000821/v1}, note = {{ORTOLANG} ({Open} {Resources} {and} {TOols} {for} {LANGuage}) \textendash www.ortolang.fr}, copyright = {Licence Creative Commons Attribution - Pas d'Utilisation… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/declaratives_avec_disjonction.
FrancophonIA/dyslexiques_vs_lambda | FrancophonIA | Dataset origin: https://www.ortolang.fr/market/corpora/sldr000799. You must go to the ORTOLANG website and log in to download the data. Description: this corpus contains 9 recordings from three different speakers. Each speaker performs three tasks: (1) telling the story of a comic strip (http://www.espacegraphique.com/blog/carton/dessins/bd-sur-la-colline-593), (2) reading a text (L'aigle, after Georgette Barthélémy, Les animaux et leurs secrets… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/dyslexiques_vs_lambda.
FrancophonIA/zoom_maternelle | FrancophonIA | Dataset origin: https://www.ortolang.fr/market/corpora/sldr000797. You must go to the ORTOLANG website and log in to download the data. Description: this audio corpus was recorded in a nursery-school class of 32 children with a Zoom recorder. The recording consists of two parts: in the first, the teacher reads a story aloud; in the second, she tells the story without written support. The… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/zoom_maternelle.
FrancophonIA/contractions | FrancophonIA | Dataset origin: https://www.ortolang.fr/market/corpora/sldr000795. You must go to the ORTOLANG website and log in to download the data. Description: a corpus of spontaneous and read speech for comparing word contractions. The corpus contains recordings of 4 different speakers in spontaneous speech and text reading. Citation: @misc{11403/sldr000795/v1, title = {Contractions de mots en parole spontanée… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/contractions.
FrancophonIA/chanteurs | FrancophonIA | Dataset origin: https://www.ortolang.fr/market/corpora/sldr000792. You must go to the ORTOLANG website and log in to download the data. Description: two melodies (Happy Birthday and a free romantic melody) sung by 50 experienced singers aged 19 to 66 (mean: 36.94 years). These 38 women and 12 men began their training between the ages of 6 and 49 (mean: 20.18) and have between 5 and 51 years of stage experience (mean: 19.86… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/chanteurs.
DewiBrynJones/evals-whisper-large-v3-ft-cv-cy-train-all-plus-other-with-excluded | DewiBrynJones | Model: DewiBrynJones/whisper-large-v3-ft-cv-cy-train-all-plus-other-with-excluded. Test set: DewiBrynJones/commonvoice_18_0_cy_en. Split: test. WER: 65.138595. CER: 43.317147.
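The WER and CER figures above are edit-distance error rates expressed as percentages of the reference length (words for WER, characters for CER). A minimal sketch of the metric definition; the published figures were presumably produced with a standard evaluation toolkit and may include additional text normalisation:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, start=1):
            curr[j] = min(prev[j] + 1,               # deletion
                          curr[j - 1] + 1,           # insertion
                          prev[j - 1] + (r != h))    # substitution (free if tokens match)
        prev = curr
    return prev[-1]

def wer(reference, hypothesis):
    """Word error rate in percent: word-level edits / reference word count."""
    ref_words = reference.split()
    return 100 * edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    """Character error rate in percent: character-level edits / reference length."""
    return 100 * edit_distance(list(reference), list(hypothesis)) / len(reference)

# Toy example (not taken from the test set above).
print(wer("mae hi yn braf heddiw", "mae hi braf heddi"))
```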
MrPotter64/uplimit-hw1-2ndtry | MrPotter64 | Dataset Card for uplimit-hw1-2ndtry. This dataset has been created with distilabel. Dataset summary: this dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI: distilabel pipeline run --config "https://huggingface.co/datasets/MrPotter64/uplimit-hw1-2ndtry/raw/main/pipeline.yaml" or explore the configuration: distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/MrPotter64/uplimit-hw1-2ndtry.
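The distilabel CLI commands quoted above are the documented route for reproducing the pipeline; as a minimal sketch, the pipeline.yaml shipped with the dataset can also be fetched programmatically with huggingface_hub before running it:

```python
from huggingface_hub import hf_hub_download

# Download the pipeline definition stored in the dataset repository.
path = hf_hub_download(
    repo_id="MrPotter64/uplimit-hw1-2ndtry",
    filename="pipeline.yaml",
    repo_type="dataset",
)
print(path)  # local path to pipeline.yaml, usable with `distilabel pipeline run --config ...`
```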
argmaxinc/whisperkit-evals-dataset | argmaxinc | WhisperKit Evals Dataset. Overview: the WhisperKit Evals Dataset is a comprehensive collection of our speech recognition evaluation results, specifically designed to benchmark the performance of WhisperKit models across various devices and operating systems. This dataset provides detailed insights into performance and quality metrics, and model behavior under different conditions. Dataset structure: the dataset is organized into JSON files, each… See the full description on the dataset page: https://huggingface.co/datasets/argmaxinc/whisperkit-evals-dataset.
DewiBrynJones/evals-whisper-large-v3-ft-cv-cy-train-all-plus-other-with-excluded-btb | DewiBrynJones | Model: DewiBrynJones/whisper-large-v3-ft-cv-cy-train-all-plus-other-with-excluded. Test set: DewiBrynJones/banc-trawsgrifiadau-bangor-clean. Split: test. WER: 71.345141. CER: 41.115359.
open-llm-leaderboard/bunnycore__Llama-3.2-3B-ProdigyPlus-details | open-llm-leaderboard | Dataset Card for Evaluation run of bunnycore/Llama-3.2-3B-ProdigyPlus. Dataset automatically created during the evaluation run of model bunnycore/Llama-3.2-3B-ProdigyPlus. The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/bunnycore__Llama-3.2-3B-ProdigyPlus-details.
FrancophonIA/DMG | FrancophonIA | Dataset origin: https://www.ortolang.fr/market/corpora/sldr000769. Description: a corpus of 6 brochures gathering a total of 15 texts on culture, gender, direct action, black blocs, antispeciesism, ... Most of the texts feature transgressive morphosyntactic modifications of masculine/feminine gender marking. Citation: @misc{11403/sldr000769/v2, title = {Corpus écrit Double Marquage de Genre (DMG) - brochures libertaires}, author =… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/DMG.
FrancophonIA/revision_collaborative_etayee | FrancophonIA | Dataset origin: https://www.ortolang.fr/market/corpora/sldr000772. You must go to the ORTOLANG website and log in to download the data. Description: a corpus of ten verbal interactions in Vietnamese and French produced by three peer groups during their scaffolded collaborative revision sessions. Citation: @misc{11403/sldr000772/v1, title = {Interactions entre pairs lors de la révision collaborative étayée}… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/revision_collaborative_etayee.
FrancophonIA/Carambouille | FrancophonIA | Dataset origin: https://www.ortolang.fr/market/corpora/sldr000773. You must go to the ORTOLANG website and log in to download the data. Description: short multi-party conversations (4 people) recorded in an anechoic chamber with headset microphones. The setting is a round of the negotiation game 'Carambouille'. Citation: @misc{11403/sldr000773/v1, title = {Corpus jeux}, author = {Laurent Prévot}, url =… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/Carambouille.
FrancophonIA/joyeux_anniversaire | FrancophonIA | Dataset origin: https://www.ortolang.fr/market/corpora/sldr000774. Description: a popular song performed by 166 non-musician French speakers. The participants produced the melody "Joyeux Anniversaire" (Happy Birthday) spontaneously, with no imposed key, after producing two glissandi (singing a note continuously from the lowest to the highest pitch, thus covering the subject's vocal range). The purpose of these glissandi is to warm up the vocal apparatus, to… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/joyeux_anniversaire.
FrancophonIA/MARC-Fr | FrancophonIA | Dataset origin: https://www.ortolang.fr/market/corpora/sldr000786. Description: a French corpus, manually phonetized and aligned, 7 minutes long, composed of 3 sub-corpora: CID, AixOx and Grenelle. Annotations fully revised on 5/5/2014. Citation: @misc{11403/sldr000786/v1, title = {MARC-Fr}, author = {Brigitte Bigi, Pauline Péri}, url = {https://hdl.handle.net/11403/sldr000786/v1}, note = {{ORTOLANG} ({Open} {Resources} {and} {TOols}… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/MARC-Fr.
FrancophonIA/DisReg | FrancophonIA | Dataset origin: https://www.ortolang.fr/market/corpora/corpus-disreg. This dataset contains only the transcriptions; to obtain the audio, you must go to the ORTOLANG website and log in to download the data. Description: the DisReg corpus was collected in 2018 as part of a doctoral thesis (Kosmala, 2021). It includes videos of 12 French-speaking students from the Sorbonne Nouvelle, aged 18 to 22… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/DisReg.