id | author | description
---|---|---
NathanGavenski/Ant-v2
|
NathanGavenski
|
Ant-v2 - Continuous Imitation Learning from Observation
This dataset was created for the paper Explorative imitation learning: A path signature approach for continuous environments.
It is based on Ant-v2, which is an older version of the MuJoCo environment.
If you would like to use a newer version, check the IL-Datasets repository for the updated list.
Description
The dataset consists of 10 episodes with an average episodic reward of 5514.0229.
Each entry… See the full description on the dataset page: https://huggingface.co/datasets/NathanGavenski/Ant-v2.
|
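Cards like the Ant-v2 one above report an "average episodic reward"; that figure is simply the mean of the per-episode return sums. A minimal pure-Python sketch of the computation, using made-up reward values rather than the actual Ant-v2 data:

```python
# Average episodic reward as reported in imitation-learning dataset cards:
# the mean over episodes of each episode's summed per-step rewards.
# The reward values below are illustrative placeholders, not real Ant-v2 data.

episode_rewards = [
    [1200.0, 1500.0, 1100.0],  # per-step rewards of episode 1
    [1000.0, 1300.0, 1400.0],  # per-step rewards of episode 2
]

episodic_returns = [sum(ep) for ep in episode_rewards]  # one return per episode
average_episodic_reward = sum(episodic_returns) / len(episodic_returns)
print(round(average_episodic_reward, 4))  # 3750.0
```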
tcepi/bidCorpus
|
tcepi
|
Dataset Card for "BidCorpus"
Dataset Summary
The BidCorpus dataset consists of various configurations related to bidding documents. It includes datasets for Named Entity Recognition, Multi-label Classification, Sentence Similarity, and more. Each configuration focuses on different aspects of bidding documents and is designed for specific tasks.
Supported Tasks and Leaderboards
The supported tasks are the following:
Dataset | Source | Sub-domain | Task… See the full description on the dataset page: https://huggingface.co/datasets/tcepi/bidCorpus.
|
MBZUAI-Paris/DarijaAlpacaEval
|
MBZUAI-Paris
|
Dataset Card for DarijaAlpacaEval
Dataset Summary
DarijaAlpacaEval is an evaluation dataset designed to assess the performance of large language models on instruction-following tasks in Moroccan Darija, a variety of Arabic. It is adapted from the AlpacaEval dataset and consists of instructions provided in Moroccan Darija. The dataset aims to provide a culturally relevant benchmark for evaluating language models' capabilities in instruction following and responses… See the full description on the dataset page: https://huggingface.co/datasets/MBZUAI-Paris/DarijaAlpacaEval.
|
NathanGavenski/HalfCheetah-v2
|
NathanGavenski
|
HalfCheetah-v2 - Continuous Imitation Learning from Observation
This dataset was created for the paper Explorative imitation learning: A path signature approach for continuous environments.
It is based on HalfCheetah-v2, which is an older version of the MuJoCo environment.
If you would like to use a newer version, check the IL-Datasets repository for the updated list.
Description
The dataset consists of 10 episodes with an average episodic reward of… See the full description on the dataset page: https://huggingface.co/datasets/NathanGavenski/HalfCheetah-v2.
|
NathanGavenski/Hopper-v2
|
NathanGavenski
|
Hopper-v2 - Continuous Imitation Learning from Observation
This dataset was created for the paper Explorative imitation learning: A path signature approach for continuous environments.
It is based on Hopper-v2, which is an older version of the MuJoCo environment.
If you would like to use a newer version, check the IL-Datasets repository for the updated list.
Description
The dataset consists of 10 episodes with an average episodic reward of 3760.6908.
Each… See the full description on the dataset page: https://huggingface.co/datasets/NathanGavenski/Hopper-v2.
|
NathanGavenski/InvertedPendulum-v2
|
NathanGavenski
|
InvertedPendulum-v2 - Continuous Imitation Learning from Observation
This dataset was created for the paper Explorative imitation learning: A path signature approach for continuous environments.
It is based on InvertedPendulum-v2, which is an older version of the MuJoCo environment.
If you would like to use a newer version, check the IL-Datasets repository for the updated list.
Description
The dataset consists of 10 episodes with an average episodic reward… See the full description on the dataset page: https://huggingface.co/datasets/NathanGavenski/InvertedPendulum-v2.
|
NathanGavenski/Swimmer-v2
|
NathanGavenski
|
Swimmer-v2 - Continuous Imitation Learning from Observation
This dataset was created for the paper Explorative imitation learning: A path signature approach for continuous environments.
It is based on Swimmer-v2, which is an older version of the MuJoCo environment.
If you would like to use a newer version, check the IL-Datasets repository for the updated list.
Description
The dataset consists of 4 episodes with an average episodic reward of 259.5244.
Each… See the full description on the dataset page: https://huggingface.co/datasets/NathanGavenski/Swimmer-v2.
|
open-llm-leaderboard/EpistemeAI2__Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-math-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of EpistemeAI2/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-math
Dataset automatically created during the evaluation run of model EpistemeAI2/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-math
The dataset is composed of 38 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/EpistemeAI2__Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-math-details.
|
FrancophonIA/TermITH
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/termith
Description
The TermITH project aims to automatically index full-text scientific articles in the humanities and social sciences. It brought together six partners: Atilf, Inist, Inria (Grand-Est and Saclay), Lidilem, and Lina.
The scientific articles were provided by ADBS (Cairn), Lavoisier (Cairn), Elsevier via the ISTEX project, the Canadian Journal of Chemistry, OpenEdition, and the Scientext project. They have… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/TermITH.
|
FrancophonIA/recherches-francais-parle
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/recherches-francais-parle
Description
For 27 years, the journal Recherches sur le français parlé was published by the Publications de l'Université de Provence. In 1977, the year its first issue appeared, launching a journal devoted to spoken French was a daring bet. At the time, studies taking spoken French as their object were rare, and it took a good dose of courage to embark on such a… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/recherches-francais-parle.
|
FrancophonIA/interviews_daudet
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/interviewsdaudet
Description
The Interviews d'Alphonse Daudet collection currently comprises 144 interviews (including some fifteen responses to surveys). The aim of this collection is not exhaustiveness; rather, it is to offer, for the time being, a representative sample of the interviews Daudet gave over the course of his career as a writer, up to his death on 16 December 1897. The fruit of a… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/interviews_daudet.
|
FrancophonIA/corpus-presidentielle2017
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/corpus-presidentielle2017
Description
The #Présidentielle2017 corpus was produced as part of the #Idéo2017 project, funded by the Fondation UCP, which brings together researchers from the AGORA and ETIS laboratories (ENSEA / UCP / CNRS UMR 8051). The project's goal was to build a tool for analyzing political tweets during election campaigns. At the end of the presidential campaign, an archive of the tweets… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/corpus-presidentielle2017.
|
FrancophonIA/ParCoGLiJe
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/stosic
Description
ParCoGLiJe is a bilingual French-English parallel corpus intended for the study of the great classics of children's literature.
It contains 8 works in French and in English, aligned with their translation into the other language of the corpus at the chapter, paragraph, and sentence levels. The corpus comprises 1.6 million words and is free of rights.
The distributed files are in XML format - standardized… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/ParCoGLiJe.
|
saarus72/yuvachev
|
saarus72
|
Based on the idea of the Gutenberg-DPO dataset, built with Vikhr-Nemo-12B-Instruct-R-21-09-24 (thanks to the Vikhr team).
The LLM was used for both prompt and article generation.
The original news items come from the "Panorama" website, a Russian equivalent of "The Onion" (not sure about their license, but the internet is the internet, you know).
Could be taken from there anyway.
May come in handy if one tries to teach an LLM how to joke :)
|
open-llm-leaderboard/rombodawg__rombos_Replete-Coder-Instruct-8b-Merged-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of rombodawg/rombos_Replete-Coder-Instruct-8b-Merged
Dataset automatically created during the evaluation run of model rombodawg/rombos_Replete-Coder-Instruct-8b-Merged
The dataset is composed of 38 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/rombodawg__rombos_Replete-Coder-Instruct-8b-Merged-details.
|
if001/elementray_l
|
if001
|
A dataset of simple Japanese example sentences generated with calm3-22b.
Each sentence is written so as to contain one of the following patterns.
Failed generations have been cleaned out.
"です/だ (affirmative sentence)",
"ではありません/じゃない (negative sentence)",
"〜ます (polite verb form)",
"〜ません (negative verb form)",
"〜たい (hope/desire)",
"〜ている (progressive)",
"〜てください (request)",
"〜てもいいですか (asking permission)",
"〜なければなりません/〜なきゃいけない (obligation)",
"〜でしょう/〜だろう (conjecture)",
"〜が好きです/嫌いです (likes/dislikes)",
"〜と思います (opinion/thought)",
"〜から/〜ので (reason)",
"〜のが好きです/嫌いです (liking/disliking an action)",
"〜でしょうか (polite question)",
"〜てしまう (completion/regret)",
"〜ながら (simultaneous action)",
"〜ば/〜たら (conditional)",
"〜ておく (preparation)"… See the full description on the dataset page: https://huggingface.co/datasets/if001/elementray_l.
|
open-llm-leaderboard/rombodawg__rombos_Replete-Coder-Llama3-8B-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of rombodawg/rombos_Replete-Coder-Llama3-8B
Dataset automatically created during the evaluation run of model rombodawg/rombos_Replete-Coder-Llama3-8B
The dataset is composed of 38 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/rombodawg__rombos_Replete-Coder-Llama3-8B-details.
|
eduardtoni/MENSA-visual-iq-test
|
eduardtoni
|
Dataset Card for "MENSA-visual-iq-test"
More Information needed
|
FrancophonIA/payetoncorpus
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/payetoncorpus
Description
The Paye ton Corpus corpus gathers testimonies of sexist acts, collected from a set of thirteen different sites, together with annotations characterizing these sexist acts. These sites are Tumblr blogs, with the exception of one of them, Paye Ta Blouse. They allow testimonies of sexist incidents to be submitted and published anonymously. Each site… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/payetoncorpus.
|
FrancophonIA/corpus-taln
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/corpus-taln
Description
The TALN corpus gathers the papers from the TALN and RÉCITAL conferences from 1997 to 2019.
It consists of 1,602 scientific papers in French on natural language processing (NLP), for a total of 5.8 million words.
The papers are in TEI format, and their structure contains the following elements:
metadata: title; author names; year; publisher;… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/corpus-taln.
|
FrancophonIA/csonu
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/csonu
Description
Full text of the 2,259 United Nations Security Council resolutions adopted between 1946 and 2015 inclusive.
The metadata of the XML files are drawn from
Gaëtan Moreau (2019) Le langage du Conseil de Sécurité de l'ONU : analyse de discours des résolutions en français et en anglais depuis 1946. Université Paris Sorbonne-Nouvelle.
These metadata include:
Morphosyntactic (POS) tagging - identical in French… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/csonu.
|
FrancophonIA/corpus14
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/corpus14
Description
The "Corpus 14" project makes available the correspondence of ordinary Poilus (French soldiers of the First World War). It gives priority to the writings of the barely literate, still little exploited by historians of the Great War. These documents, made available by the departmental archives of Ain, Ardèche, Charente-Maritime, Hérault, Ille-et-Vilaine, and Saône-et-Loire, as well as by the families who hold them, will provide… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/corpus14.
|
FrancophonIA/fr-parlement
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/fr-parl
Description
This corpus was collected and annotated as part of my jointly supervised PhD thesis, defended in early 2019:
« Mais de qui parlez-vous ? » ("But who are you talking about?")
A pragmatic analysis of third-person expressions
A contrastive corpus study of French, German, and British parliamentary debates
My institutional affiliations during the thesis were the following:
Centre de Linguistique en Sorbonne (CeLiSo) -… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/fr-parlement.
|
TeraflopAI/test
|
TeraflopAI
|
TODO: document dataset
|
py97/UTKFace-Cropped
|
py97
|
UTKFace Aligned & Cropped Faces Dataset
Disclaimer: I do not own or manage this dataset. It is sourced from the paper Age Progression/Regression by Conditional Adversarial Autoencoder by Zhang et al., published in the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017).
The original link to the aligned and cropped faces version of UTKFace is no longer available. To ensure its accessibility for other users, I have uploaded it to this repository.
For more… See the full description on the dataset page: https://huggingface.co/datasets/py97/UTKFace-Cropped.
|
launch/FactBench
|
launch
|
FactBench Leaderboard
VERIFY: A Pipeline for Factuality Evaluation
Language models (LMs) are widely used by an increasing number of users, underscoring the challenge of maintaining factual accuracy across a broad range of topics. We present VERIFY (Verification and Evidence Retrieval for Factuality evaluation), a pipeline to evaluate LMs' factual accuracy in real-world user interactions.
Content Categorization
VERIFY considers the verifiability of… See the full description on the dataset page: https://huggingface.co/datasets/launch/FactBench.
|
open-llm-leaderboard/ibm-granite__granite-3.0-2b-base-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of ibm-granite/granite-3.0-2b-base
Dataset automatically created during the evaluation run of model ibm-granite/granite-3.0-2b-base
The dataset is composed of 38 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/ibm-granite__granite-3.0-2b-base-details.
|
open-llm-leaderboard/ibm-granite__granite-3.0-2b-instruct-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of ibm-granite/granite-3.0-2b-instruct
Dataset automatically created during the evaluation run of model ibm-granite/granite-3.0-2b-instruct
The dataset is composed of 38 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/ibm-granite__granite-3.0-2b-instruct-details.
|
jmercat/eval_koch_feed_cat
|
jmercat
|
This dataset was created using 🤗 LeRobot.
|
jmercat/eval_koch_feed_cat2
|
jmercat
|
This dataset was created using 🤗 LeRobot.
|
open-llm-leaderboard/ibm-granite__granite-3.0-8b-instruct-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of ibm-granite/granite-3.0-8b-instruct
Dataset automatically created during the evaluation run of model ibm-granite/granite-3.0-8b-instruct
The dataset is composed of 38 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/ibm-granite__granite-3.0-8b-instruct-details.
|
selimyagci/l_marco
|
selimyagci
|
legal
|
open-llm-leaderboard/ibm-granite__granite-3.0-8b-base-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of ibm-granite/granite-3.0-8b-base
Dataset automatically created during the evaluation run of model ibm-granite/granite-3.0-8b-base
The dataset is composed of 38 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/ibm-granite__granite-3.0-8b-base-details.
|
jmercat/eval_koch_feed_cat3
|
jmercat
|
This dataset was created using 🤗 LeRobot.
|
FrancophonIA/megalite
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/megalite
Description
A literary corpus in Spanish/French/Portuguese for natural language processing tasks (automatic text generation; comparative literature).
Filtered and lemmatized version
N-gram versions in three files (n=1, 2, SU4)
Morphosyntactic (POS) version
Pre-trained embeddings (word2vec)
The files are compressed in .zip format.
Citation
@misc{11403/megalite/v1,
title… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/megalite.
|
FrancophonIA/girls
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/girls
Description
The Gender Identification Resource for Literature and Style (GIRLS)
The GIRLS corpus consists of 64 nineteenth-century French novels, half written by women and half by men, with a single book per author. These texts are in the public domain and were retrieved automatically from the Project Gutenberg, Gallica, Wikisource, and ebooksgratuits sites in… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/girls.
|
FrancophonIA/migr-twit-corpus
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/migr-twit-corpus
Description
The MIGR-TWIT Corpus is a multilingual corpus of tweets on the subject of immigration in Europe. Within the OLiNDiNUM research project (Observatoire LINguistique du DIscours NUMérique), the MIGR-TWIT Corpus was created with the aim of contributing to the development of a digital database of public debate. The French and British political contexts related to the subject… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/migr-twit-corpus.
|
FrancophonIA/orthocorpus
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/orthocorpus
Description
The OrthoCorpus corpus brings together 1,158 articles published between 1997 and 2020 in the journal Rééducation Orthophonique, the reference journal founded in 1962 by Suzanne Borel-Maisonny. These articles were written by speech-language pathologists, by other health or education professionals (psychologists, physicians, linguists, physiotherapists...), or by other stakeholders (representatives… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/orthocorpus.
|
FrancophonIA/tremolo-tweets
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/tremolo-tweets
Description
The TREMoLo-Tweets corpus is a corpus of French tweets annotated for language register. It comprises 228,505 tweets, for a total of 6 million words.
Registers such as casual, neutral, and formal are a phenomenon immediately perceptible to any speaker of a language. They nevertheless remain little studied in natural language processing (NLP), particularly outside English.… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/tremolo-tweets.
|
nyuuzyou/pedsite
|
nyuuzyou
|
Dataset Card for Pedsite.ru Pedagogical Website
Dataset Summary
This dataset contains metadata and original files for 9,536 educational materials from the pedsite.ru platform, a website for teachers to share experiences and showcase the best creative findings in the field of teaching and learning. The dataset includes information such as material titles, URLs, download URLs, author information, and extracted text content where available.
Languages… See the full description on the dataset page: https://huggingface.co/datasets/nyuuzyou/pedsite.
|
FrancophonIA/cidre
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/cidre
Description
The Corpus for Idiolectal Research (CIDRE)
This corpus contains more than 400 works of fiction by 11 prolific authors of the 19th and early 20th centuries. We dated these works by their date of writing, which makes it possible to study the evolution of the idiolect of the different authors.
Citation
@misc{11403/cidre/v3,
title = {CIDRE},
author = {Olga Seminck, Philippe Gambette},
url =… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/cidre.
|
FrancophonIA/corea2d
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/corea2d
Description
The CoReA2D corpus is the result of a research operation of the same name, aimed at projecting existing lexical and/or phraseological resources onto available textual data in order to enrich them and make possible all kinds of exploration, for example knowledge extraction and structuring, or document classification and indexing.
The… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/corea2d.
|
FrancophonIA/wikidisc
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/wikidisc
Description
The WikiDiscussion corpus was built by Lydia-Mai Ho-Dac and Veronika Laippala in order to characterize the "discussion" genre by studying the characteristics of Wikipedia discussions [Ho-Dac & Laippala, 2017].
The data were extracted from the French-language Wikipedia and then evaluated to check that the automatic extraction had run correctly. The corpus is available in the format… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/wikidisc.
|
FrancophonIA/cr-an-maijuin2019
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/cr-an-maijuin2019
Description
An annotated corpus (ca. 500,000 tokens) composed of 17 proceedings of the French National Assembly from May and June 2019 (15th legislature, ordinary session of 2018-2019) dealing with various subjects (creation of the national music centre, vulnerable young adults, community engagement, agricultural activities and marine cultures, recognition of family caregivers, roadside advance signs, neighbouring rights… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/cr-an-maijuin2019.
|
FrancophonIA/democrat_rc_19-21
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/democrat_rc_19-21
Description
The Gender Identification Resource for Literature and Style (GIRLS)
The GIRLS corpus consists of 64 nineteenth-century French novels, half written by women and half by men, with a single book per author. These texts are in the public domain and were retrieved automatically from the Project Gutenberg, Gallica, Wikisource, and… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/democrat_rc_19-21.
|
open-llm-leaderboard/bunnycore__Phi-3.5-mini-TitanFusion-0.1-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of bunnycore/Phi-3.5-mini-TitanFusion-0.1
Dataset automatically created during the evaluation run of model bunnycore/Phi-3.5-mini-TitanFusion-0.1
The dataset is composed of 38 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/bunnycore__Phi-3.5-mini-TitanFusion-0.1-details.
|
TrossenRoboticsCommunity/eval_aloha_solo_test
|
TrossenRoboticsCommunity
|
This dataset was created using LeRobot.
|
FrancophonIA/entretiens-radicalisation
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/entretiens-radicalisation
Description
This corpus of interviews was constructed with and by specialized prevention educators, within a survey setup designed ad hoc, as part of a PhD thesis (2017-2021) in language sciences entitled Pour une modélisation linguistique de la radicalisation. Étude de discours institutionnels et de discours du travail social.
This corpus is composed of… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/entretiens-radicalisation.
|
FrancophonIA/corpus-giec
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/corpus-giec
Description
The corpus gathers the Summaries for Policymakers of the first five reports of the Intergovernmental Panel on Climate Change (IPCC):
AR1, 1992, First Assessment Report. Geneva, IPCC (https://www.ipcc.ch/).
(fr) Premier Rapport d'évaluation du GIEC. Geneva, IPCC (https://www.ipcc.ch/) (tr. unknown).
(es) Primer Informe de Evaluación del IPCC. Geneva, IPCC… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/corpus-giec.
|
FrancophonIA/corpus-calmer
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/corpus-calmer
Description
The CALMER corpus (corpus Comparable pour l'étude de l'Acquisition et les Langues : Multilingue, Émotion, Récit) is composed of handwritten narratives inspired by the same visual stimulus, in which one can observe what happens to a young girl named Laura over a weekend. It is a series of vignettes made up of illustrations and dialogues available in the language of the… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/corpus-calmer.
|
FrancophonIA/est_republicain
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/est_republicain
Description
Under a collaboration agreement with the company of the newspaper L'Est Républicain (ISSN 1760-4958, CPPAP 0515 Y 90438), ORTOLANG offers, after having carried out the computer processing, access to a new journalistic corpus. This corpus consists of the textual data corresponding to two years of all the complete editions of the regional daily.
As a first step… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/est_republicain.
|
FrancophonIA/interrogatives-in-novels
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/interrogatives-in-novels
Description
This corpus consists of two parts:
The first part comprises 8,956 interrogatives extracted and annotated by two Perl scripts (included). It was built for a distributional study of the variation in the morphosyntactic forms of questions in detective novels.
The second part comprises the recording of all "où" ("where") questions, as well as a comparable sample… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/interrogatives-in-novels.
|
AmirMohseni/GroceryList
|
AmirMohseni
|
GroceryList Dataset
Dataset Summary
The GroceryList dataset consists of grocery items and their corresponding categories. It is designed to assist in tasks such as grocery item classification, shopping list organization, and natural language understanding related to common grocery-related terms. The dataset contains only a training split and is not pre-divided into test or validation sets.
It includes two main columns:
Item: Contains the names of various grocery… See the full description on the dataset page: https://huggingface.co/datasets/AmirMohseni/GroceryList.
|
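A two-column Item/Category table like GroceryList above is typically consumed by grouping items under their categories (e.g. for shopping list organization); here is a minimal sketch of that, with invented item names rather than rows from the actual dataset:

```python
from collections import defaultdict

# Hypothetical rows mimicking the dataset's two columns: Item and Category.
# These values are illustrative, not taken from the GroceryList dataset.
rows = [
    {"Item": "milk", "Category": "dairy"},
    {"Item": "cheddar", "Category": "dairy"},
    {"Item": "apple", "Category": "produce"},
]

# Group item names under their category to organize a shopping list.
by_category = defaultdict(list)
for row in rows:
    by_category[row["Category"]].append(row["Item"])

print(dict(by_category))  # {'dairy': ['milk', 'cheddar'], 'produce': ['apple']}
```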
FrancophonIA/disc-insti-radicalisation
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/disc-insti-radicalisation
Description
This corpus of institutional discourse was compiled as part of a PhD thesis in language sciences (2017-2021) entitled Pour une modélisation linguistique de la radicalisation. Étude de discours institutionnels et de discours du travail social.
The corpus is composed of 680 texts from the vie-publique.fr site (2013-2018 period), an institutional site managed by the… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/disc-insti-radicalisation.
|
FrancophonIA/migr-twit-corpus-fr-l
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/migr-twit-corpus-fr-l
Description
The FR-L-MIGR-TWIT Corpus is part of the MIGR-TWIT Corpus, a bilingual diachronic corpus of tweets on the subject of immigration in Europe.
Within the OLiNDiNUM research project (Observatoire LINguistique du DIscours NUMérique), the MIGR-TWIT Corpus was created with the aim of studying the evolution of public discourse on immigration in Europe over the period from 2011… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/migr-twit-corpus-fr-l.
|
FrancophonIA/malherbe
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/malherbe
Description
The Malherbe corpus is a corpus of versified texts from the 17th to the 20th century.
This corpus, in XML-TEI format, was prepared within a research project of the CRISCO laboratory, co-directed by Éliane Delente and Richard Renault and devoted to the automatic analysis of the metrics of versified texts.
The automatic analysis covers:
the identification of syllabic nuclei
the treatment of "e"… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/malherbe.
|
FrancophonIA/e-calm
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/e-calm
Description
A corpus of transcriptions of writings by pupils and students (from first grade to the end of high school), encoded in TEI-P5 and including annotation of writing traces (crossings-out, insertions, etc.). This corpus was compiled within the ANR E-CALM project (http://e-calm.huma-num.fr/) coordinated by Claire Doquet.
Claude Ponton, Claire Doquet, Serge Fleury, Lydia Mai Ho-Dac (2022). E-CALM [Corpus]. ORTOLANG (Open Resources and… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/e-calm.
|
FrancophonIA/ema-ecrits-scolaires
|
FrancophonIA
|
Dataset origin: https://www.ortolang.fr/market/corpora/ema-ecrits-scolaires-1
Description
Corpus ÉMA, écrits scolaires
The texts gathered under the title Corpus ÉMA, écrits scolaires constitute the first set of a large longitudinal corpus of school writings intended for the study of the written language of primary and middle school pupils. The start of the collection coincides with the implementation of the 2015 curricula, which advocate a diversification of… See the full description on the dataset page: https://huggingface.co/datasets/FrancophonIA/ema-ecrits-scolaires.
|
jmercat/eval_koch_feed_cat6
|
jmercat
|
This dataset was created using 🤗 LeRobot.
|
open-llm-leaderboard/OpenBuddy__openbuddy-llama3-70b-v21.2-32k-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of OpenBuddy/openbuddy-llama3-70b-v21.2-32k
Dataset automatically created during the evaluation run of model OpenBuddy/openbuddy-llama3-70b-v21.2-32k
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/OpenBuddy__openbuddy-llama3-70b-v21.2-32k-details.
|
azhung/StyleTransferData
|
azhung
|
LUMIC Dataset
This is the dataset for LUMIC (insert github link/paper)
Dataset Details
This dataset consists of 2 different datasets. The first is the JUMP Pilot Dataset (link), which we subset by using only the 24H treatment group; the "Style Transfer" dataset was collected for this project and consists of 5 different cell types (HeLa, A549, HEK293T, 3T3, RPTE) treated with 61 different compounds.
All of the images have already been preprocessed using sklearn's… See the full description on the dataset page: https://huggingface.co/datasets/azhung/StyleTransferData.
|
open-llm-leaderboard/lemon07r__Gemma-2-Ataraxy-v4a-Advanced-9B-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of lemon07r/Gemma-2-Ataraxy-v4a-Advanced-9B
Dataset automatically created during the evaluation run of model lemon07r/Gemma-2-Ataraxy-v4a-Advanced-9B
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/lemon07r__Gemma-2-Ataraxy-v4a-Advanced-9B-details.
|
open-llm-leaderboard/lemon07r__Gemma-2-Ataraxy-v4-Advanced-9B-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of lemon07r/Gemma-2-Ataraxy-v4-Advanced-9B
Dataset automatically created during the evaluation run of model lemon07r/Gemma-2-Ataraxy-v4-Advanced-9B
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/lemon07r__Gemma-2-Ataraxy-v4-Advanced-9B-details.
|
ZephyrUtopia/ratemyprofessors-reviews-2-labels
|
ZephyrUtopia
|
ratemyprofessors-reviews-2-labels
This dataset is a collection of reviews from RateMyProfessors, shuffled at random. Ratings of 1-2 were converted to a
label value of 0, while ratings of 4-5 were converted to a label value of 1. Ratings of 3 were excluded from this dataset (see
ratemyprofessor-reviews-3-labels for
more extensive coverage).
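The rating-to-label conversion described above can be sketched as a simple filter-and-map step (the field names `rating` and `text` here are assumptions for illustration, not taken from the dataset card):

```python
def binarize_reviews(reviews):
    """Map 1-2 star ratings to label 0, 4-5 to label 1; drop 3-star reviews."""
    labeled = []
    for review in reviews:
        rating = review["rating"]
        if rating <= 2:
            labeled.append({"text": review["text"], "label": 0})
        elif rating >= 4:
            labeled.append({"text": review["text"], "label": 1})
        # ratings of exactly 3 are skipped entirely
    return labeled

sample = [
    {"text": "Great lecturer", "rating": 5},
    {"text": "Average", "rating": 3},
    {"text": "Avoid", "rating": 1},
]
print(binarize_reviews(sample))
```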
|
yentinglin/ultrafeedback_thinking_llms
|
yentinglin
|
Dataset Card for ultrafeedback_thinking_llms
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/yentinglin/ultrafeedback_thinking_llms/raw/main/pipeline.yaml"
or explore the configuration:
distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/yentinglin/ultrafeedback_thinking_llms.
|
open-llm-leaderboard/nazimali__Mistral-Nemo-Kurdish-Instruct-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of nazimali/Mistral-Nemo-Kurdish-Instruct
Dataset automatically created during the evaluation run of model nazimali/Mistral-Nemo-Kurdish-Instruct
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/nazimali__Mistral-Nemo-Kurdish-Instruct-details.
|
open-llm-leaderboard/kms7530__chemeng_qwen-math-7b_24_1_100_1-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of kms7530/chemeng_qwen-math-7b_24_1_100_1
Dataset automatically created during the evaluation run of model kms7530/chemeng_qwen-math-7b_24_1_100_1
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/kms7530__chemeng_qwen-math-7b_24_1_100_1-details.
|
open-llm-leaderboard/nazimali__Mistral-Nemo-Kurdish-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of nazimali/Mistral-Nemo-Kurdish
Dataset automatically created during the evaluation run of model nazimali/Mistral-Nemo-Kurdish
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/nazimali__Mistral-Nemo-Kurdish-details.
|
open-llm-leaderboard/GreenNode__GreenNode-small-9B-it-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of GreenNode/GreenNode-small-9B-it
Dataset automatically created during the evaluation run of model GreenNode/GreenNode-small-9B-it
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/GreenNode__GreenNode-small-9B-it-details.
|
liveperson/raghalu-open
|
liveperson
|
Dataset Card for RAGHalu Open Source Data
This dataset is the public data portion from the paper Two-tiered
Encoder-based Hallucination Detection for Retrieval-Augmented Generation
in the Wild by Ilana Zimmerman, Jadin Tredup, Ethan Selfridge, and
Joseph Bradley, accepted at EMNLP 2024
(Industry Track). The private brand data portion of the dataset is not
included.
Note that this dataset and the paper do not use the common hallucination
terms factuality and faithfulness as… See the full description on the dataset page: https://huggingface.co/datasets/liveperson/raghalu-open.
|
open-llm-leaderboard/MaziyarPanahi__calme-2.1-llama3.1-70b-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of MaziyarPanahi/calme-2.1-llama3.1-70b
Dataset automatically created during the evaluation run of model MaziyarPanahi/calme-2.1-llama3.1-70b
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/MaziyarPanahi__calme-2.1-llama3.1-70b-details.
|
smartcat/Amazon_All_Beauty_2018
|
smartcat
|
Amazon All Beauty Dataset
Directory Structure
metadata: Contains product information.
reviews: Contains user reviews about the products.
filtered:
e5-base-v2_embeddings.jsonl: Contains "asin" and "embeddings" created with e5-base-v2.
metadata.jsonl: Contains "asin" and "text", where text is created from the title, description, brand, main category, and category.
reviews.jsonl: Contains "reviewerID", "reviewTime", and "asin". Reviews are filtered to include… See the full description on the dataset page: https://huggingface.co/datasets/smartcat/Amazon_All_Beauty_2018.
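The files above are JSON Lines (one JSON object per line); a minimal sketch for reading them — the relative path and the asin/embeddings field names follow the card, but the exact layout is an assumption:

```python
import json

def read_jsonl(path):
    """Yield one record per line from a JSON Lines file, skipping blank lines."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# e.g. build an asin -> embedding lookup (path is illustrative):
# embeddings = {r["asin"]: r["embeddings"]
#               for r in read_jsonl("filtered/e5-base-v2_embeddings.jsonl")}
```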
|
nyuuzyou/prezentacii
|
nyuuzyou
|
Dataset Card for Prezentacii.org Educational Materials
Dataset Summary
This dataset contains metadata and original files for 100,381 educational materials from the prezentacii.org platform, a resource for teachers and students providing multimedia presentations and other educational content on various topics. The dataset includes information such as material titles, URLs, download URLs, and extracted text content where available.
Languages
The… See the full description on the dataset page: https://huggingface.co/datasets/nyuuzyou/prezentacii.
|
kellycyy/daily_dilemmas
|
kellycyy
|
DailyDilemmas - Revealing Value Preferences of LLMs with Quandaries of Daily Life
Link: Paper
Description of DailyDilemma
DailyDilemmas is a dataset of 1,360 moral dilemmas encountered in everyday life. Each dilemma includes two possible actions and, for each action, the affected parties and the human values invoked.
We evaluated LLMs on these dilemmas to determine what action they will take and the values represented by these actions… See the full description on the dataset page: https://huggingface.co/datasets/kellycyy/daily_dilemmas.
|
yentinglin/ultrafeedback_binarized_thinking_llms-6
|
yentinglin
|
Dataset Card for ultrafeedback_binarized_thinking_llms-6
This dataset has been created with distilabel.
The pipeline script was uploaded to easily reproduce the dataset:
thinkingllm.py.
It can be run directly using the CLI:
distilabel pipeline run --script "https://huggingface.co/datasets/yentinglin/ultrafeedback_binarized_thinking_llms-6/raw/main/thinkingllm.py"
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce… See the full description on the dataset page: https://huggingface.co/datasets/yentinglin/ultrafeedback_binarized_thinking_llms-6.
|
yentinglin/ultrafeedback_binarized_thinking_llms-8
|
yentinglin
|
Dataset Card for ultrafeedback_binarized_thinking_llms-8
This dataset has been created with distilabel.
The pipeline script was uploaded to easily reproduce the dataset:
thinkingllm.py.
It can be run directly using the CLI:
distilabel pipeline run --script "https://huggingface.co/datasets/yentinglin/ultrafeedback_binarized_thinking_llms-8/raw/main/thinkingllm.py"
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce… See the full description on the dataset page: https://huggingface.co/datasets/yentinglin/ultrafeedback_binarized_thinking_llms-8.
|
yentinglin/ultrafeedback_binarized_thinking_llms-9
|
yentinglin
|
Dataset Card for ultrafeedback_binarized_thinking_llms-9
This dataset has been created with distilabel.
The pipeline script was uploaded to easily reproduce the dataset:
thinkingllm.py.
It can be run directly using the CLI:
distilabel pipeline run --script "https://huggingface.co/datasets/yentinglin/ultrafeedback_binarized_thinking_llms-9/raw/main/thinkingllm.py"
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce… See the full description on the dataset page: https://huggingface.co/datasets/yentinglin/ultrafeedback_binarized_thinking_llms-9.
|
open-llm-leaderboard/byroneverson__Mistral-Small-Instruct-2409-abliterated-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of byroneverson/Mistral-Small-Instruct-2409-abliterated
Dataset automatically created during the evaluation run of model byroneverson/Mistral-Small-Instruct-2409-abliterated
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train"… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/byroneverson__Mistral-Small-Instruct-2409-abliterated-details.
|
jesusoctavioas/montecristo
|
jesusoctavioas
|
A sample dataset of 30 items based on The Count of Monte Cristo, in the Alpaca dataset format.
|
infinite-dataset-hub/5GNetworkEfficiency
|
infinite-dataset-hub
|
5GNetworkEfficiency
tags: network performance, IoT communication, regression modeling
Note: This is an AI-generated dataset so its content may be inaccurate or false
Dataset Description:
The '5GNetworkEfficiency' dataset is a curated collection of research studies and articles that focus on the impact of 5G networks on communication industry performance, with a special emphasis on IoT (Internet of Things) communication. Each record in the dataset represents a unique study that… See the full description on the dataset page: https://huggingface.co/datasets/infinite-dataset-hub/5GNetworkEfficiency.
|
infinite-dataset-hub/5GNetworkOptimization
|
infinite-dataset-hub
|
5GNetworkOptimization
tags: Resource Allocation, Throughput Maximization, Load Balancing
Note: This is an AI-generated dataset so its content may be inaccurate or false
Dataset Description:
The '5GNetworkOptimization' dataset comprises of entries that describe different scenarios and strategies for optimizing 5G and 4G network resources. The dataset focuses on the efficiency and effectiveness of resource allocation, throughput maximization, and load balancing strategies that can… See the full description on the dataset page: https://huggingface.co/datasets/infinite-dataset-hub/5GNetworkOptimization.
|
infinite-dataset-hub/5GIndustryImpactCanadaChina
|
infinite-dataset-hub
|
5GIndustryImpactCanadaChina
tags: industrial applications, IoT, smart city analytics
Note: This is an AI-generated dataset so its content may be inaccurate or false
Dataset Description:
The '5GIndustryImpactCanadaChina' dataset comprises a curated collection of articles, studies, and reports that discuss the deployment and impact of 5G technology in the industrial sectors of China and Canada. It focuses on the industrial applications of 5G, IoT integration, and advancements in… See the full description on the dataset page: https://huggingface.co/datasets/infinite-dataset-hub/5GIndustryImpactCanadaChina.
|
infinite-dataset-hub/5GRegulatoryChallenges
|
infinite-dataset-hub
|
5GRegulatoryChallenges
tags: PolicyDifferences, InternationalStandards, SpectrumManagement
Note: This is an AI-generated dataset so its content may be inaccurate or false
Dataset Description:
The '5GRegulatoryChallenges' dataset contains anonymized excerpts from policy documents, white papers, and regulatory discussions focused on the implementation of 5G technology in Canada and China. The data illustrates the differences in policy approaches, adherence to international… See the full description on the dataset page: https://huggingface.co/datasets/infinite-dataset-hub/5GRegulatoryChallenges.
|
uzair921/20_SKILLSPAN_LLM_CONTEXT_42_25
|
uzair921
|
Dataset Card for "20_SKILLSPAN_LLM_CONTEXT_42_25"
More Information needed
|
uzair921/20_SKILLSPAN_LLM_CONTEXT_42_50
|
uzair921
|
Dataset Card for "20_SKILLSPAN_LLM_CONTEXT_42_50"
More Information needed
|
uzair921/20_SKILLSPAN_LLM_CONTEXT_42_75
|
uzair921
|
Dataset Card for "20_SKILLSPAN_LLM_CONTEXT_42_75"
More Information needed
|
yentinglin/ultrafeedback_binarized_thinking_llms-10
|
yentinglin
|
Dataset Card for ultrafeedback_binarized_thinking_llms-10
This dataset has been created with distilabel.
The pipeline script was uploaded to easily reproduce the dataset:
thinkingllm.py.
It can be run directly using the CLI:
distilabel pipeline run --script "https://huggingface.co/datasets/yentinglin/ultrafeedback_binarized_thinking_llms-10/raw/main/thinkingllm.py"
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce… See the full description on the dataset page: https://huggingface.co/datasets/yentinglin/ultrafeedback_binarized_thinking_llms-10.
|
nslaughter/system_design_prompts
|
nslaughter
|
Dataset Card for Synthetic System Design Prompts Dataset (Generated by GPT-4o-mini)
|
dogtooth/llama31-8b-generated-llama31-8b-scored-uf_1729029179
|
dogtooth
|
allenai/open_instruct: Rejection Sampling Dataset
See https://github.com/allenai/open-instruct/blob/main/docs/algorithms/rejection_sampling.md for more detail
Configs
args:
{'add_timestamp': True,
'hf_entity': 'dogtooth',
'hf_repo_id': 'llama31-8b-generated-llama31-8b-scored-uf',
'hf_repo_id_scores': 'rejection_sampling_scores',
'include_reference_completion_for_rejection_sampling': True,
'input_filename':… See the full description on the dataset page: https://huggingface.co/datasets/dogtooth/llama31-8b-generated-llama31-8b-scored-uf_1729029179.
|
dogtooth/rejection_sampling_scores_1729029179
|
dogtooth
|
allenai/open_instruct: Rejection Sampling Dataset
See https://github.com/allenai/open-instruct/blob/main/docs/algorithms/rejection_sampling.md for more detail
Configs
args:
{'add_timestamp': True,
'hf_entity': 'dogtooth',
'hf_repo_id': 'llama31-8b-generated-llama31-8b-scored-uf',
'hf_repo_id_scores': 'rejection_sampling_scores',
'include_reference_completion_for_rejection_sampling': True,
'input_filename':… See the full description on the dataset page: https://huggingface.co/datasets/dogtooth/rejection_sampling_scores_1729029179.
|
TextCEsInFinance/fomc-communication-counterfactual
|
TextCEsInFinance
|
Dataset adapted from original work by Shah et al.
About Dataset
The dataset is a collection of sentences from FOMC speeches, meeting minutes and press releases (see corresponding paper). A subset of the data has been manually annotated as hawkish, dovish, or neutral.
Label mapping
LABEL 2: Neutral
LABEL 1: Hawkish
LABEL 0: Dovish
Counterfactual generation split
Additionally, for counterfactual generation tasks, we add a custom split with target… See the full description on the dataset page: https://huggingface.co/datasets/TextCEsInFinance/fomc-communication-counterfactual.
|
TextCEsInFinance/fomc-communication
|
TextCEsInFinance
|
Dataset adapted from original work by Shah et al.
About Dataset
The dataset is a collection of sentences from FOMC speeches, meeting minutes and press releases (see corresponding paper). A subset of the data has been manually annotated as hawkish, dovish, or neutral.
Label mapping
LABEL 2: Neutral
LABEL 1: Hawkish
LABEL 0: Dovish
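The label mapping above can be expressed as a lookup table (a sketch for downstream use; the dataset itself stores the integer labels):

```python
# Integer label -> class name, per the card's label mapping
ID2LABEL = {0: "Dovish", 1: "Hawkish", 2: "Neutral"}
# Inverse mapping, class name -> integer label
LABEL2ID = {name: idx for idx, name in ID2LABEL.items()}

def label_name(label_id):
    """Return the human-readable class name for an integer label."""
    return ID2LABEL[label_id]

print(label_name(1))  # Hawkish
```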
|
pxyyy/WizardLM_evol_instruct_V2_196k
|
pxyyy
|
Dataset Card for "WizardLM_evol_instruct_V2_196k"
More Information needed
|
pxyyy/GPT4-LLM-Cleaned
|
pxyyy
|
Dataset Card for "GPT4-LLM-Cleaned"
More Information needed
|
pxyyy/GPTeacher-General-Instruct
|
pxyyy
|
Dataset Card for "GPTeacher-General-Instruct"
More Information needed
|
pxyyy/Magicoder-Evol-Instruct-110K
|
pxyyy
|
Dataset Card for "Magicoder-Evol-Instruct-110K"
More Information needed
|
pxyyy/MathInstruct
|
pxyyy
|
Dataset Card for "MathInstruct"
More Information needed
|
pxyyy/SlimOrca
|
pxyyy
|
Dataset Card for "SlimOrca"
More Information needed
|
pxyyy/ShareGPT_V3_unfiltered_cleaned_split_no_imsorry
|
pxyyy
|
Dataset Card for "ShareGPT_V3_unfiltered_cleaned_split_no_imsorry"
More Information needed
|
molssiai-hub/pubchem-10-15-2024
|
molssiai-hub
|
PubChem (https://pubchem.ncbi.nlm.nih.gov) is a popular chemical information resource that serves a wide range of use cases. In the past two years, a number of changes were made to PubChem. Data from more than 120 data sources was added to PubChem. Some major highlights include: the integration of Google Patents data into PubChem, which greatly expanded the coverage of the PubChem Patent data collection; the creation of the Cell Line and Taxonomy data collections, which provide quick and easy access to chemical information for a given cell line and taxon, respectively; and the update of the bioassay data model. In addition, new functionalities were added to the PubChem programmatic access protocols, PUG-REST and PUG-View, including support for target-centric data download for a given protein, gene, pathway, cell line, and taxon and the addition of the `standardize` option to PUG-REST, which returns the standardized form of an input chemical structure. A significant update was also made to PubChemRDF. The present paper provides an overview of these changes.
|