id | author | description
---|---|---|
psyche/korean_idioms
|
psyche
|
A Korean proverb dataset for NLI.
'question' contains the meaning of a proverb together with five multiple-choice options,
and 'label' contains the index of the correct answer (0-4).
licence: cc-by-sa-2.0-kr (original source: National Institute of Korean Language, Standard Korean Dictionary (표준국어대사전))
Model | psyche/korean_idioms
klue/bert-base | 0.7646
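As a minimal usage sketch (the default configuration and the split name "train" are assumptions, not confirmed by the card), the question/label fields could be inspected like this:
from datasets import load_dataset

# Hypothetical sketch: config and split names are assumptions.
idioms = load_dataset("psyche/korean_idioms", split="train")
sample = idioms[0]
print(sample["question"])  # proverb meaning plus five answer options
print(sample["label"])     # index of the correct option (0-4)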
|
skytnt/fbanimehq
|
skytnt
|
FBAnimeHQ is a dataset with high-quality full-body anime girl images in a resolution of 1024 × 512.
|
rajistics/electricity_demand
|
rajistics
|
The Victoria electricity demand dataset from the MAPIE github repository.
It consists of the hourly electricity demand (in GW) of the state of Victoria
in Australia, together with the temperature (in degrees Celsius).
|
TheGreatRambler/mm2_user
|
TheGreatRambler
|
Mario Maker 2 users
Part of the Mario Maker 2 Dataset Collection
Dataset Description
The Mario Maker 2 users dataset consists of 6 million users from Nintendo's online service, totaling around 1.2GB of data. The dataset was created using the self-hosted Mario Maker 2 API over the course of one month in February 2022.
How to use it
The Mario Maker 2 users dataset is a very large dataset so for most use cases it is recommended to make use of the… See the full description on the dataset page: https://huggingface.co/datasets/TheGreatRambler/mm2_user.
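Since the card recommends a lighter-weight access pattern for a dataset of this size, a minimal streaming sketch (the default configuration is an assumption) might look like:
from datasets import load_dataset

# Stream the users dataset so the full 1.2GB is not downloaded up front.
users = load_dataset("TheGreatRambler/mm2_user", split="train", streaming=True)
for i, user in enumerate(users):
    print(user)  # one Mario Maker 2 user record
    if i == 2:
        break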
|
yerevann/coco-karpathy
|
yerevann
|
Dataset Card for "yerevann/coco-karpathy"
The Karpathy split of COCO for image captioning.
|
THUDM/humaneval-x
|
THUDM
|
HumanEval-X is a benchmark for the evaluation of the multilingual ability of code generative models. It consists of 820 high-quality human-crafted data samples (each with test cases) in Python, C++, Java, JavaScript, and Go, and can be used for various tasks.
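A minimal loading sketch, assuming one configuration per language (the config name "python" and the field name "prompt" are assumptions):
from datasets import load_dataset

# Load the Python portion of HumanEval-X; other languages would use their own config names.
# Newer datasets versions may additionally require trust_remote_code=True for script-based datasets.
heval_x = load_dataset("THUDM/humaneval-x", "python", split="test")
print(heval_x[0]["prompt"])  # field name assumed to follow HumanEval conventions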
|
detection-datasets/fashionpedia
|
detection-datasets
|
Dataset Card for Fashionpedia
Dataset Summary
Fashionpedia is a dataset mapping out the visual aspects of the fashion world.
From the paper:
Fashionpedia is a new dataset which consists of two parts: (1) an ontology built by fashion experts containing 27 main apparel categories, 19 apparel parts, 294 fine-grained attributes and their relationships; (2) a dataset with everyday and celebrity event fashion images annotated with segmentation masks and their… See the full description on the dataset page: https://huggingface.co/datasets/detection-datasets/fashionpedia.
|
detection-datasets/fashionpedia_4_categories
|
detection-datasets
|
Dataset Card for Fashionpedia_4_categories
This dataset is a variation of the fashionpedia dataset available here, with 2 key differences:
It contains only 4 categories:
Clothing
Shoes
Bags
Accessories
New splits were created:
Train: 90% of the images
Val: 5%
Test 5%
The goal is to make the detection task easier with 4 categories instead of 46 for the full fashionpedia dataset.
This dataset was created using the detection_datasets library (GitHub, PyPI), you can check here… See the full description on the dataset page: https://huggingface.co/datasets/detection-datasets/fashionpedia_4_categories.
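A minimal sketch for checking the custom 90/5/5 splits described above (split names are not asserted, they are read from the loaded dataset):
from datasets import load_dataset

# Load the 4-category variant and report the size of each split.
ds = load_dataset("detection-datasets/fashionpedia_4_categories")
print({name: split.num_rows for name, split in ds.items()})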
|
dbal0503/Bundesliga
|
dbal0503
|
Bundesliga Videos dataset from Kaggle competition: https://www.kaggle.com/competitions/dfl-bundesliga-data-shootout
|
ashraq/esc50
|
ashraq
|
https://github.com/karolpiczak/ESC-50
The dataset is available under the terms of the Creative Commons Attribution Non-Commercial license.
K. J. Piczak. ESC: Dataset for Environmental Sound Classification. Proceedings of the 23rd Annual ACM Conference on Multimedia, Brisbane, Australia, 2015.
[DOI: http://dx.doi.org/10.1145/2733373.2806390]
|
joelniklaus/Multi_Legal_Pile
|
joelniklaus
|
Multi Legal Pile is a dataset of legal documents in the 24 EU languages.
|
nuprl/MultiPL-E
|
nuprl
|
Dataset Card for MultiPL-E
Dataset Summary
MultiPL-E is a dataset for evaluating large language models for code
generation that supports 22 programming languages. It takes the OpenAI
HumanEval and the Mostly Basic Python Programs (MBPP) benchmarks and uses little compilers to
translate them to other languages. It is easy to add support for new languages
and benchmarks.
The dataset is divided into several configurations named SRCDATA-LANG, where
SRCDATA is… See the full description on the dataset page: https://huggingface.co/datasets/nuprl/MultiPL-E.
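A minimal sketch of loading one SRCDATA-LANG configuration (the config name "humaneval-py" is an assumption following that naming scheme):
from datasets import load_dataset

# HumanEval problems translated to Python, per the SRCDATA-LANG naming scheme.
multipl_e = load_dataset("nuprl/MultiPL-E", "humaneval-py", split="test")
print(multipl_e[0])  # one translated problem with its prompt and tests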
|
joelniklaus/eurlex_resources
|
joelniklaus
|
Dataset Card for EurlexResources: A Corpus Covering the Largest EURLEX Resources
Dataset Summary
This dataset contains large text resources (~179GB in total) from EURLEX that can be used for pretraining language models.
Use the dataset like this:
from datasets import load_dataset
config = "de_caselaw" # {lang}_{resource}
dataset = load_dataset("joelito/eurlex_resources", config, split='train', streaming=True)
Supported Tasks and Leaderboards
The… See the full description on the dataset page: https://huggingface.co/datasets/joelniklaus/eurlex_resources.
|
skytnt/anime-segmentation
|
skytnt
|
A segmentation dataset for anime characters.
|
heegyu/namuwiki-extracted
|
heegyu
|
namu.wiki database dump
https://namu.wiki/ database dump 2022/03/01
571,308 rows
download size: 2.19GB
Notes
Preprocessed with namu-wiki-extractor, plus the following additional preprocessing:
Headers removed (e.g. == 개요 ==)
Tables removed
[age(1997-01-01)] was evaluated as of the preprocessing date (October 2, 2022)
[math(a / b + c)] was not removed.
Known issue: when math markup appears inside a footnote, the footnote is not preprocessed.
Usage
pip install datasets
from datasets import load_dataset
dataset = load_dataset("heegyu/namuwiki-extracted")… See the full description on the dataset page: https://huggingface.co/datasets/heegyu/namuwiki-extracted.
|
NeelNanda/pile-10k
|
NeelNanda
|
The first 10K elements of The Pile, useful for debugging models trained on it. See the HuggingFace page for the full Pile for more info. Inspired by stas' great resource doing the same for OpenWebText
|
biglam/europeana_newspapers
|
biglam
|
Dataset Card for Dataset Name
This dataset contains historic newspapers from Europeana. In total the collection has ~32 billion tokens. Documentation for this dataset is a WIP.
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
Dataset Details
Dataset Sources [optional]
Repository: [More Information Needed]
Paper [optional]: [More Information Needed]
Demo [optional]: [More Information… See the full description on the dataset page: https://huggingface.co/datasets/biglam/europeana_newspapers.
|
YaYaB/onepiece-blip-captions
|
YaYaB
|
Disclaimer
This was inspired from https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions
Dataset Card for One Piece BLIP captions
Dataset used to train One Piece text to image model
BLIP generated captions for One piece images collected from the web. Original images were obtained from Anime Characters and captioned with the pre-trained BLIP model.
For each row the dataset contains image and text keys. image is a varying size PIL jpeg, and text is the… See the full description on the dataset page: https://huggingface.co/datasets/YaYaB/onepiece-blip-captions.
|
barkermrl/imagenet-a
|
barkermrl
|
The ImageNet-A dataset contains 7,500 natural adversarial examples.
Source: https://github.com/hendrycks/natural-adv-examples. Also see the ImageNet-C and ImageNet-P datasets at https://github.com/hendrycks/robustness
@article{hendrycks2019nae, title={Natural Adversarial Examples}, author={Dan Hendrycks and Kevin Zhao and Steven Basart and Jacob Steinhardt and Dawn Song}, journal={arXiv preprint arXiv:1907.07174}, year={2019}}
There are 200 classes we consider. The WordNet ID and a… See the full description on the dataset page: https://huggingface.co/datasets/barkermrl/imagenet-a.
|
stochastic/random_streetview_images_pano_v0.0.2
|
stochastic
|
Dataset Card for panoramic street view images (v.0.0.2)
Dataset Summary
The random streetview images dataset consists of labeled, panoramic images scraped from randomstreetview.com. Each image shows a location
accessible by Google Streetview; several views have been roughly combined to provide a ~360 degree view of a single location. The dataset was designed with the intent to geolocate an image purely based on its visual content.
Supported Tasks and Leaderboards
None… See the full description on the dataset page: https://huggingface.co/datasets/stochastic/random_streetview_images_pano_v0.0.2.
|
bigcode/the-stack-dedup
|
bigcode
|
Dataset Card for The Stack
Changelog
Release
Description
v1.0
Initial release of the Stack. Included 30 programming languages and 18 permissive licenses. Note: Three included licenses (MPL/EPL/LGPL) are considered weak copyleft licenses. The resulting near-deduplicated dataset is 1.5TB in size.
v1.1
The three copyleft licenses (MPL/EPL/LGPL) were excluded and the list of permissive licenses extended to 193 licenses in total. The list of programming… See the full description on the dataset page: https://huggingface.co/datasets/bigcode/the-stack-dedup.
|
dennlinger/eur-lex-sum
|
dennlinger
|
The EUR-Lex-Sum dataset is a multilingual resource intended for text summarization in the legal domain.
It is based on human-written summaries of legal acts issued by the European Union.
It distinguishes itself by introducing a smaller set of high-quality human-written samples,
each of which has much longer references (and summaries!) than comparable datasets.
Additionally, the underlying legal acts provide a challenging domain-specific application to legal texts,
which are so far underrepresented in non-English languages.
For each legal act, the sample can be available in up to 24 languages
(the officially recognized languages in the European Union);
the validation and test samples consist entirely of samples available in all languages,
and are aligned across all languages at the paragraph level.
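A minimal sketch of loading one language of EUR-Lex-Sum (the config name "english", the split name, and the field names are assumptions, so the keys are printed rather than asserted):
from datasets import load_dataset

# Load the English portion; other configurations cover the remaining EU languages.
eurlex_sum = load_dataset("dennlinger/eur-lex-sum", "english", split="validation")
example = eurlex_sum[0]
print(list(example.keys()))  # inspect the reference/summary field names before relying on them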
|
bigscience/xP3
|
bigscience
|
xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot.
|
bigcode/the-stack-smol
|
bigcode
|
Dataset Description
A small subset (~0.1%) of the-stack dataset: each programming language has 10,000 random samples from the original dataset. The dataset has 2.6GB of text (code).
Languages
The dataset contains 30 programming languages:
"assembly", "batchfile", "c++", "c", "c-sharp", "cmake", "css", "dockerfile", "fortran", "go", "haskell", "html", "java",
"javascript", "julia", "lua", "makefile", "markdown", "perl", "php", "powershell", "python", "ruby"… See the full description on the dataset page: https://huggingface.co/datasets/bigcode/the-stack-smol.
|
miracl/miracl
|
miracl
|
Dataset Card for MIRACL (Topics and Qrels)
Dataset Description
Homepage | Repository | Paper | ArXiv
MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
This dataset contains the collection data of the 16 "known languages". The remaining 2 "surprise languages"… See the full description on the dataset page: https://huggingface.co/datasets/miracl/miracl.
|
projecte-aina/raco_forums
|
projecte-aina
|
Dataset Card for Racó Forums Corpus
Dataset Summary
The Racó Forums Corpus is a 19-million-sentence corpus of Catalan user-generated text built from the forums of Racó Català.
Since the existing available corpora in Catalan lacked conversational data, we searched for a major source of such data for Catalan, and we found Racó Català, a popular multitopic online forum. We obtained a database dump and we transformed all the threads so that we obtained documents that… See the full description on the dataset page: https://huggingface.co/datasets/projecte-aina/raco_forums.
|
biglam/gutenberg-poetry-corpus
|
biglam
|
Allison Parrish's Gutenberg Poetry Corpus
This corpus was originally published under the CC0 license by Allison Parrish. Please visit Allison's fantastic accompanying GitHub repository for usage inspiration as well as more information on how the data was mined, how to create your own version of the corpus, and examples of projects using it.
This dataset contains 3,085,117 lines of poetry from hundreds of Project Gutenberg books. Each line has a corresponding gutenberg_id (1191… See the full description on the dataset page: https://huggingface.co/datasets/biglam/gutenberg-poetry-corpus.
|
machelreid/m2d2
|
machelreid
|
M2D2: A Massively Multi-domain Language Modeling Dataset
From the paper "M2D2: A Massively Multi-domain Language Modeling Dataset", (Reid et al., EMNLP 2022)
Load the dataset as follows:
import datasets
dataset = datasets.load_dataset("machelreid/m2d2", "cs.CL") # replace cs.CL with the domain of your choice
print(dataset['train'][0]['text'])
Domains
Culture_and_the_arts
Culture_and_the_arts__Culture_and_Humanities
Culture_and_the_arts__Games_and_Toys… See the full description on the dataset page: https://huggingface.co/datasets/machelreid/m2d2.
|
truongpdd/laion-2b-vietnamese-subset
|
truongpdd
|
Dataset Card for "laion-2b-vietnamese-subset"
More Information needed
|
lcw99/cc100-ko-only
|
lcw99
|
The Korean-only subset of the CC100 dataset.
|
esb/datasets
|
esb
|
All eight of the datasets in ESB can be downloaded and prepared in just a single line of code through the Hugging Face Datasets library:
from datasets import load_dataset
librispeech = load_dataset("esb/datasets", "librispeech", split="train")
"esb/datasets": the repository namespace. This is fixed for all ESB datasets.
"librispeech": the dataset name. This can be changed to any of any one of the eight datasets in ESB to download that dataset.
split="train": the split. Set this to one of… See the full description on the dataset page: https://huggingface.co/datasets/esb/datasets.
|
andrewkroening/Star-wars-scripts-dialogue-IV-VI
|
andrewkroening
|
Dataset Contents
This dataset contains the concatenated scripts from the original (and best) Star Wars trilogy. The scripts are reduced to dialogue only, and are tagged with a line number and speaker.
Dataset Disclaimer
I don't own this data; or Star Wars. But it would be cool if I did.
Star Wars is owned by Lucasfilms. I do not own any of the rights to this information.
The scripts are derived from a couple sources:
This GitHub Repo with raw files
A Kaggle… See the full description on the dataset page: https://huggingface.co/datasets/andrewkroening/Star-wars-scripts-dialogue-IV-VI.
|
poloclub/diffusiondb
|
poloclub
|
DiffusionDB is the first large-scale text-to-image prompt dataset. It contains 2
million images generated by Stable Diffusion using prompts and hyperparameters
specified by real users. The unprecedented scale and diversity of this
human-actuated dataset provide exciting research opportunities in understanding
the interplay between prompts and generative models, detecting deepfakes, and
designing human-AI interaction tools to help users more easily use these models.
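A minimal sketch of sampling a small slice of DiffusionDB (the subset name "2m_random_1k" is an assumption):
from datasets import load_dataset

# A 1k random sample keeps the download small while still exposing prompts and images.
# Newer datasets versions may additionally require trust_remote_code=True for script-based datasets.
ddb = load_dataset("poloclub/diffusiondb", "2m_random_1k", split="train")
row = ddb[0]
print(row["prompt"])  # the user-written prompt; the generated image is stored alongside it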
|
lambdalabs/naruto-blip-captions
|
lambdalabs
|
Dataset Card for Naruto BLIP captions
Dataset used to train TBD.
The original images were obtained from narutopedia.com and captioned with the pre-trained BLIP model.
For each row the dataset contains image and text keys. image is a varying size PIL jpeg, and text is the accompanying text caption. Only a train split is provided.
Example stable diffusion outputs
"Bill Gates with a hoodie", "John Oliver with Naruto style", "Hello Kitty with Naruto style", "Lebron… See the full description on the dataset page: https://huggingface.co/datasets/lambdalabs/naruto-blip-captions.
|
allenai/prosocial-dialog
|
allenai
|
Dataset Card for ProsocialDialog Dataset
Dataset Summary
ProsocialDialog is the first large-scale multi-turn English dialogue dataset to teach conversational agents to respond to problematic content following social norms. Covering diverse unethical, problematic, biased, and toxic situations, ProsocialDialog contains responses that encourage prosocial behavior, grounded in commonsense social rules (i.e., rules-of-thumb, RoTs). Created via a human-AI collaborative… See the full description on the dataset page: https://huggingface.co/datasets/allenai/prosocial-dialog.
|
Norod78/cartoon-blip-captions
|
Norod78
|
Dataset Card for "cartoon-blip-captions"
|
ProGamerGov/StableDiffusion-v1-5-Regularization-Images
|
ProGamerGov
|
A collection of regularization / class instance datasets for the Stable Diffusion v1-5 model to use for DreamBooth prior preservation loss training. Files labeled with "mse vae" used the stabilityai/sd-vae-ft-mse VAE. For ease of use, datasets are stored as zip files containing 512x512 PNG images. The number of images in each zip file is specified at the end of the filename.
There is currently a bug where HuggingFace is incorrectly reporting that the datasets are pickled. They are not picked… See the full description on the dataset page: https://huggingface.co/datasets/ProGamerGov/StableDiffusion-v1-5-Regularization-Images.
|
lewtun/music_genres
|
lewtun
|
Dataset Card for "music_genres"
More Information needed
|
0xJustin/Dungeons-and-Diffusion
|
0xJustin
|
This is the dataset! Not the .ckpt trained model - the model is located here: https://huggingface.co/0xJustin/Dungeons-and-Diffusion/tree/main
The newest version has manually captioned races and classes, and the model is trained with EveryDream. 30 images each of: aarakocra, aasimar, air_genasi, centaur, dragonborn, drow,
dwarf, earth_genasi, elf, firbolg, fire_genasi, gith, gnome, goblin, goliath, halfling, human, illithid, kenku, kobold, lizardfolk, minotaur, orc, tabaxi, thrikreen, tiefling… See the full description on the dataset page: https://huggingface.co/datasets/0xJustin/Dungeons-and-Diffusion.
|
VietAI/vi_pubmed
|
VietAI
|
Dataset Summary
20M Vietnamese PubMed biomedical abstracts translated by the state-of-the-art English-Vietnamese Translation project. The data has been used as an unlabeled dataset for pretraining a Vietnamese Biomedical-domain Transformer model.
image source: Enriching Biomedical Knowledge for Vietnamese Low-resource Language Through Large-Scale Translation
Language
English: Original biomedical abstracts from Pubmed
Vietnamese: Synthetic abstract translated by a… See the full description on the dataset page: https://huggingface.co/datasets/VietAI/vi_pubmed.
|
jpwahle/machine-paraphrase-dataset
|
jpwahle
|
Dataset Card for Machine Paraphrase Dataset (MPC)
Dataset Summary
The Machine Paraphrase Corpus (MPC) consists of ~200k examples of original text and paraphrases generated using two online paraphrasing tools.
It uses two paraphrasing tools (SpinnerChief, SpinBot) on three source texts (Wikipedia, arXiv, student theses).
The examples are not aligned, i.e., we sample different paragraphs for originals and paraphrased versions.
How to use it
You can load the dataset… See the full description on the dataset page: https://huggingface.co/datasets/jpwahle/machine-paraphrase-dataset.
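The description is truncated before its loading example; a minimal sketch (the default configuration and the split layout are assumptions, so the structure is printed rather than assumed) could be:
from datasets import load_dataset

# Load the corpus and inspect one original/paraphrase example.
mpc = load_dataset("jpwahle/machine-paraphrase-dataset")
print(mpc)             # shows the available splits and columns
first_split = next(iter(mpc.values()))
print(first_split[0])  # one example; originals and paraphrases are not aligned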
|
ChristophSchuhmann/aesthetic-logo-ratings
|
ChristophSchuhmann
|
~15k logo images from LAION-5B have been rated for aesthetic preference (preference_average) and for how professional the design looks (professionalism_average).
license: apache-2.0
|
muhammadbilal5110/indian_food_images
|
muhammadbilal5110
|
Dataset Card for "indian_food_images"
More Information needed
|
pszemraj/text2image-multi-prompt
|
pszemraj
|
text2image multi-prompt(s): a dataset collection
collection of several text2image prompt datasets
data was cleaned/normalized with the goal of removing "model specific APIs" like the "--ar" for Midjourney and so on
data de-duplicated on a basic level: exact duplicate prompts were dropped (after cleaning and normalization)
updates
Oct 2023: the default config has been updated with better deduplication. It was deduplicated with minhash (params: n-gram size set… See the full description on the dataset page: https://huggingface.co/datasets/pszemraj/text2image-multi-prompt.
|
armanc/ScienceQA
|
armanc
|
This is the ScienceQA dataset by Saikh et al. (2022).
@article{10.1007/s00799-022-00329-y,
author = {Saikh, Tanik and Ghosal, Tirthankar and Mittal, Amish and Ekbal, Asif and Bhattacharyya, Pushpak},
title = {ScienceQA: A Novel Resource for Question Answering on Scholarly Articles},
year = {2022},
journal = {Int. J. Digit. Libr.},
month = {sep}
}
|
statworx/leipzip-swiss
|
statworx
|
Dataset Card for Leipzig Corpora Swiss German
Dataset Summary
The corpus gsw_wikipedia_2021 is a Swiss German Wikipedia corpus based on material from 2021. It contains 232,933 sentences and 3,824,547 tokens.
Languages
Swiss-German
Dataset Structure
Data Instances
Single sentences.
Data Fields
sentence: Text as string.
Data Splits… See the full description on the dataset page: https://huggingface.co/datasets/statworx/leipzip-swiss.
|
bigbio/med_qa
|
bigbio
|
In this work, we present the first free-form multiple-choice OpenQA dataset for solving medical problems, MedQA,
collected from the professional medical board exams. It covers three languages: English, simplified Chinese, and
traditional Chinese, and contains 12,723, 34,251, and 14,123 questions for the three languages, respectively. Together
with the question data, we also collect and release a large-scale corpus from medical textbooks from which the reading
comprehension models can obtain necessary knowledge for answering the questions.
|
bigbio/meddialog
|
bigbio
|
The MedDialog dataset (English) contains conversations (in English) between doctors and patients. It has 0.26 million dialogues. The data is continuously growing and more dialogues will be added. The raw dialogues are from healthcaremagic.com and icliniq.com.
All copyrights of the data belong to healthcaremagic.com and icliniq.com.
|
bigbio/mqp
|
bigbio
|
The Medical Question Pairs dataset by McCreery et al. (2020) contains pairs of medical questions and paraphrased versions of
each question prepared by a medical professional. Paraphrased versions were labelled as similar (syntactically dissimilar
but contextually similar) or dissimilar (syntactically possibly similar but contextually dissimilar). Labels: 1: similar, 0: dissimilar.
|
bigbio/pubmed_qa
|
bigbio
|
PubMedQA is a novel biomedical question answering (QA) dataset collected from PubMed abstracts.
The task of PubMedQA is to answer biomedical research questions with yes/no/maybe using the corresponding abstracts.
PubMedQA has 1k expert-annotated (PQA-L), 61.2k unlabeled (PQA-U) and 211.3k artificially generated QA instances (PQA-A).
Each PubMedQA instance is composed of:
(1) a question which is either an existing research article title or derived from one,
(2) a context which is the corresponding PubMed abstract without its conclusion,
(3) a long answer, which is the conclusion of the abstract and, presumably, answers the research question, and
(4) a yes/no/maybe answer which summarizes the conclusion.
PubMedQA is the first QA dataset where reasoning over biomedical research texts,
especially their quantitative contents, is required to answer the questions.
The PubMedQA dataset comprises 3 different subsets:
(1) PubMedQA Labeled (PQA-L): a labeled subset of 1k manually annotated yes/no/maybe QA instances collected from PubMed articles.
(2) PubMedQA Artificial (PQA-A): an artificially labelled subset of 211.3k PubMed articles with questions automatically generated from the statement titles and yes/no answer labels generated using a simple heuristic.
(3) PubMedQA Unlabeled (PQA-U): an unlabeled subset of 61.2k context-question pairs collected from PubMed articles.
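Since BigBIO datasets typically expose several configurations, a minimal sketch that discovers them first (rather than guessing config names) could be:
from datasets import get_dataset_config_names, load_dataset

# List the available PubMedQA configurations, then load the first one.
# Newer datasets versions may additionally require trust_remote_code=True for script-based datasets.
configs = get_dataset_config_names("bigbio/pubmed_qa")
print(configs)
pubmed_qa = load_dataset("bigbio/pubmed_qa", configs[0])
print(pubmed_qa)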
|
bigbio/tmvar_v3
|
bigbio
|
This dataset contains 500 PubMed articles manually annotated with mutation mentions of various kinds and dbSNP normalizations for each of them. In addition, it contains variant normalization options such as allele-specific identifiers from the ClinGen Allele Registry. It can be used for NER and NED tasks. This dataset does NOT have splits.
|
stjiris/portuguese-legal-sentences-v0
|
stjiris
|
Work developed as part of Project IRIS.
Thesis: A Semantic Search System for Supremo Tribunal de Justiça
Portuguese Legal Sentences
Collection of Legal Sentences from the Portuguese Supreme Court of Justice
This dataset is intended to be used for MLM and TSDAE training.
Contributions
@rufimelo99
If you use this work, please cite:
@InProceedings{MeloSemantic,
author="Melo, Rui
and Santos, Pedro A.
and Dias, Jo{\~a}o",
editor="Moniz, Nuno
and Vale, Zita
and… See the full description on the dataset page: https://huggingface.co/datasets/stjiris/portuguese-legal-sentences-v0.
|
shjwudp/chinese-c4
|
shjwudp
|
Introduction
Chinese-C4 is a clean Chinese internet dataset based on Common Crawl. The dataset is 46.29GB and has undergone multiple cleaning strategies, including Chinese filtering, heuristic cleaning based on punctuation, line-based hashing for deduplication, and repetition removal.
The dataset is open source and free for commercial use, and you are welcome to use the data and the cleaning strategies provided and contribute your cleaning strategies.
You can find the cleaning… See the full description on the dataset page: https://huggingface.co/datasets/shjwudp/chinese-c4.
|
Jzuluaga/atcosim_corpus
|
Jzuluaga
|
Dataset Card for ATCOSIM corpus
Dataset Summary
The ATCOSIM Air Traffic Control Simulation Speech corpus is a speech database of air traffic control (ATC) operator speech, provided by Graz University of Technology (TUG) and Eurocontrol Experimental Centre (EEC). It consists of ten hours of speech data, which were recorded during ATC real-time simulations using a close-talk headset microphone. The utterances are in English language and pronounced by ten non-native… See the full description on the dataset page: https://huggingface.co/datasets/Jzuluaga/atcosim_corpus.
|
thennal/IMaSC
|
thennal
|
IMaSC: ICFOSS Malayalam Speech Corpus
IMaSC is a Malayalam text and speech corpus made available by ICFOSS for the purpose of developing speech technology for Malayalam, particularly text-to-speech. The corpus contains 34,473 text-audio pairs of Malayalam sentences spoken by 8 speakers, totalling approximately 50 hours of audio.
Dataset Structure
The dataset consists of 34,473 instances with fields text, speaker, and audio. The audio is mono, sampled at 16 kHz.… See the full description on the dataset page: https://huggingface.co/datasets/thennal/IMaSC.
|
Nerfgun3/bad_prompt
|
Nerfgun3
|
Negative Embedding / Textual Inversion
Idea
The idea behind this embedding was to somehow train the negative prompt as an embedding, thus unifying the basis of the negative prompt into one word or embedding.
Side note: Embedding has proven to be very helpful for the generation of hands! :)
Usage
To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder.
Please put the embedding in… See the full description on the dataset page: https://huggingface.co/datasets/Nerfgun3/bad_prompt.
|
WillHeld/mtop
|
WillHeld
|
Dataset Card for "mtop"
More Information needed
|
TheFusion21/PokemonCards
|
TheFusion21
|
Dataset Card for PokemonCards
Languages
All of the data is in English.
Dataset Structure
Data Instances
{
"id": "pl1-1",
"image_url": "https://images.pokemontcg.io/pl1/1_hires.png",
"caption": "A Stage 2 Pokemon Card of type Lightning with the title ""Ampharos"" and 130 HP of rarity ""Rare Holo"" evolved from Flaaffy from the set Platinum and the flavor text: ""None"". It has the attack ""Gigavolt"" with the cost Lightning… See the full description on the dataset page: https://huggingface.co/datasets/TheFusion21/PokemonCards.
|
deutsche-telekom/ger-backtrans-paraphrase
|
deutsche-telekom
|
German Backtranslated Paraphrase Dataset
This is a dataset of more than 21 million German paraphrases.
These are text pairs that have the same meaning but are expressed with different words.
The source of the paraphrases are different parallel German / English text corpora.
The English texts were machine translated back into German to obtain the paraphrases.
This dataset can be used for example to train semantic text embeddings.
To do this, for example, SentenceTransformers
and… See the full description on the dataset page: https://huggingface.co/datasets/deutsche-telekom/ger-backtrans-paraphrase.
|
sayakpaul/nyu_depth_v2
|
sayakpaul
|
The NYU-Depth V2 dataset comprises video sequences from a variety of indoor scenes as recorded by both the RGB and Depth cameras from the Microsoft Kinect.
|
bsmock/pubtables-1m
|
bsmock
|
PubTables-1M
GitHub: https://github.com/microsoft/table-transformer
Paper: "PubTables-1M: Towards comprehensive table extraction from unstructured documents"
Hugging Face:
Detection model
Structure recognition model
Currently we only support downloading the dataset as tar.gz files. Integrating with HuggingFace Datasets is something we hope to support in the future!
Please switch to the "Files and versions" tab to download all of the files or use a command such as wget to… See the full description on the dataset page: https://huggingface.co/datasets/bsmock/pubtables-1m.
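Because the card says the data is distributed as tar.gz files rather than via load_dataset, one hedged way to fetch the archives is huggingface_hub's snapshot_download (the *.tar.gz file pattern is an assumption):
from huggingface_hub import snapshot_download

# Download the repository's archive files locally; unpack them separately with tar.
local_dir = snapshot_download(
    repo_id="bsmock/pubtables-1m",
    repo_type="dataset",
    allow_patterns=["*.tar.gz"],  # assumes the archives use this extension
)
print(local_dir)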
|
syzym/xbmu_amdo31
|
syzym
|
Dataset Card for [XBMU-AMDO31]
Dataset Summary
The XBMU-AMDO31 dataset is a speech recognition corpus for the Amdo Tibetan dialect. The open-source corpus contains 31 hours of speech data and resources for building speech recognition systems, including transcribed texts and a Tibetan pronunciation dictionary.
Supported Tasks and Leaderboards
automatic-speech-recognition: The dataset can be used to train a model for Amdo Tibetan Automatic Speech… See the full description on the dataset page: https://huggingface.co/datasets/syzym/xbmu_amdo31.
|
DTU54DL/commonvoice_accent_test
|
DTU54DL
|
Dataset Card for [Dataset Name]
Dataset Summary
[More Information Needed]
Supported Tasks and Leaderboards
[More Information Needed]
Languages
[More Information Needed]
Dataset Structure
Data Instances
[More Information Needed]
Data Fields
[More Information Needed]
Data Splits
[More Information Needed]
Dataset Creation
Curation Rationale
[More… See the full description on the dataset page: https://huggingface.co/datasets/DTU54DL/commonvoice_accent_test.
|
Jzuluaga/uwb_atcc
|
Jzuluaga
|
Dataset Card for UWB-ATCC corpus
Dataset Summary
The UWB-ATCC Corpus is provided by the University of West Bohemia, Department of Cybernetics. The corpus contains recordings of communication between air traffic controllers and pilots. The speech is manually transcribed and labeled with the information about the speaker (pilot/controller, not the full identity of the person). The corpus is currently small (20 hours) but we plan to search for additional data… See the full description on the dataset page: https://huggingface.co/datasets/Jzuluaga/uwb_atcc.
|
ashraf-ali/quran-data
|
ashraf-ali
|
Dataset Card for Quran audio
Content
Full Quran recitations by 7 Imams: 7 × 6,236 wav files
A CSV contains the text info for an 11k subset of short wav files
Tarteel.io user dataset: ~25k wav files
A CSV contains the text info for an 18k subset of accepted user quality
|
Elite35P-Server/EliteVoiceProject
|
Elite35P-Server
|
Elite Voice Project
This is an unofficial project aiming to build a dataset of the voice of hololive VTuber Sakura Miko so that it can be used for speech recognition and similar applications.
About the LICENSE
Audio data in the dataset
All data is used in accordance with hololive production's derivative works guidelines.
The copyright of this data is held by COVER Corporation and others; the repository owner and contributors hold no rights to it.
Contributing to this project
Contributions to this project are warmly welcomed. Please read the instructions below before submitting a pull request.
Before you start
Be sure to read hololive production's derivative works guidelines.
Adding audio data… See the full description on the dataset page: https://huggingface.co/datasets/Elite35P-Server/EliteVoiceProject.
|
1aurent/ICDAR-2011
|
1aurent
|
ICDAR 2011 Signature Verification Competition (SigComp2011)
Description
The collection contains simultaneously acquired online and offline signature samples.
The offline dataset comprises PNG images, scanned at 400 dpi, RGB color.
The online dataset comprises ascii files with the format: X, Y, Z (per line).
Marcus Liwicki, Michael Blumenstein, Elisa van den Heuvel, Charles E.H. Berger, Reinoud D. Stoel, Bryan… See the full description on the dataset page: https://huggingface.co/datasets/1aurent/ICDAR-2011.
|
ashraq/tmdb-people-image
|
ashraq
|
Data was obtained from TMDB API
|
arbml/sudanese_dialect_speech
|
arbml
|
Dataset Card for "sudanese_dialect_speech"
More Information needed
|
RobotsMaliAI/bayelemabaga
|
RobotsMaliAI
|
The Bayelemabaga dataset is a collection of 44,160 aligned, machine-translation-ready Bambara-French lines,
originating from Corpus Bambara de Reference. The dataset consists of text extracted from 231 source files,
ranging from periodicals, books, short stories and blog posts to parts of the Bible and the Quran.
|
Whispering-GPT/linustechtips-transcript-audio
|
Whispering-GPT
|
Dataset Card for "linustechtips"
Dataset Summary
This dataset was created by applying Whisper to the videos of the YouTube channel Linus Tech Tips, using a medium-size Whisper model.
Languages
Language: English
Dataset Structure
The dataset contains all the transcripts plus the audio of the different videos of Linus Tech Tips.
Data Fields
The dataset is composed by:
id: Id of the youtube video.… See the full description on the dataset page: https://huggingface.co/datasets/Whispering-GPT/linustechtips-transcript-audio.
|
lucadiliello/english_wikipedia
|
lucadiliello
|
Dataset Card for "english_wikipedia"
More Information needed
|
Jzuluaga/atco2_corpus_1h
|
Jzuluaga
|
Dataset Card for ATCO2 test set corpus (1hr set)
Dataset Summary
ATCO2 project aims at developing a unique platform allowing to collect, organize and pre-process air-traffic control (voice communication) data from air space. This project has received funding from the Clean Sky 2 Joint Undertaking (JU) under grant agreement No 864702. The JU receives support from the European Union’s Horizon 2020 research and innovation programme and the Clean Sky 2 JU members… See the full description on the dataset page: https://huggingface.co/datasets/Jzuluaga/atco2_corpus_1h.
|
domenicrosati/clinical_trial_texts
|
domenicrosati
|
Dataset Card for "clinical_trial_texts"
These are the texts of clinical trials downloaded from https://ClinicalTrials.gov/AllAPIJSON.zip on Dec 3rd, 2022.
The total number of trials is 434,977.
The number of tokens is 2,184,397,556 (2.1bn tokens).
The tokens here are counted with the default BERT tokenizer in Hugging Face.
This data can be used for pretraining in the clinical trial and biomedical domains.
If you use this data please acknowledge @domenicrosati and link to this dataset
More Information needed
|
nguyenvulebinh/libris_clean_100
|
nguyenvulebinh
|
Dataset Card for librispeech_asr
Dataset Summary
LibriSpeech is a corpus of approximately 1000 hours of 16kHz read English speech, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned.
Supported Tasks and Leaderboards
automatic-speech-recognition, audio-speaker-identification: The dataset can be used to train a model for… See the full description on the dataset page: https://huggingface.co/datasets/nguyenvulebinh/libris_clean_100.
|
MCG-NJU/MultiSports
|
MCG-NJU
|
This is a multi-person video dataset of spatio-temporally localized sports actions. Please refer to the github repo for evaluation.
|
argilla/uber-reviews
|
argilla
|
Dataset Card for "uber-reviews"
Dataset Summary
Using Python's Beautiful Soup library and the Scrapy framework, the date, star rating, and comment were scraped from all reviews from 2013-2019.
Languages
English
Citation Information
https://www.kaggle.com/datasets/jschne61701/uber-rides-costumer-reviews-dataset
https://www.sitejabber.com/reviews/uber.com
https://www.consumeraffairs.com/travel/uber.html… See the full description on the dataset page: https://huggingface.co/datasets/argilla/uber-reviews.
|
dipesh/python-code-ds-mini
|
dipesh
|
Dataset Card for "python-code-ds-mini"
More Information needed
|
tarteel-ai/everyayah
|
tarteel-ai
|
﷽
Dataset Card for Tarteel AI's EveryAyah Dataset
Dataset Summary
This dataset is a collection of Quranic verses and their transcriptions, with diacritization, by different reciters.
Supported Tasks and Leaderboards
[Needs More Information]
Languages
The audio is in Arabic.
Dataset Structure
Data Instances
A typical data point comprises the audio file audio, and its transcription called text.
The duration… See the full description on the dataset page: https://huggingface.co/datasets/tarteel-ai/everyayah.
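A minimal streaming sketch for inspecting one audio/text pair (streaming and the "train" split name are assumptions):
from datasets import load_dataset

# Stream one example to check the audio and transcription fields without a full download.
everyayah = load_dataset("tarteel-ai/everyayah", split="train", streaming=True)
sample = next(iter(everyayah))
print(sample["text"])                    # diacritized transcription
print(sample["audio"]["sampling_rate"])  # decoded audio metadata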
|
thennal/indic_tts_ml
|
thennal
|
Indic TTS Malayalam Speech Corpus
The Malayalam subset of Indic TTS Corpus, taken from
this Kaggle database. The corpus contains
one male and one female speaker, with a 2:1 ratio of samples due to missing files for the female speaker. The license is given
in the repository.
|
HuggingFaceM4/NoCaps
|
HuggingFaceM4
|
Dubbed NoCaps, for novel object captioning at scale, NoCaps consists of 166,100 human-generated captions describing 15,100 images from the Open Images validation and test sets.
The associated training data consists of COCO image-caption pairs, plus Open Images image-level labels and object bounding boxes.
Since Open Images contains many more classes than COCO, nearly 400 object classes seen in test images have no or very few associated training captions (hence, nocaps).
|
zeynepgulhan/mediaspeech-with-cv-tr
|
zeynepgulhan
|
Dataset Card for "mediaspeech-with-cv-tr"
More Information needed
|
zmao/chinese_food_caption
|
zmao
|
For finetuning Stable Diffusion with Chinese food images.
|
Muennighoff/flan
|
Muennighoff
|
This is a re-preprocessed version of the FLAN dataset with any updates that have been made to the FLAN datasets since the release of the original FLAN. The script is available here.
Tasks:
{'aeslc_10templates',
'ag_news_subset_10templates',
'anli_r1_10templates',
'anli_r2_10templates',
'anli_r3_10templates',
'arc_challenge_10templates',
'arc_easy_10templates',
'bool_q_10templates',
'cb_10templates',
'cnn_dailymail_10templates',
'cola_10templates',
'common_gen_10templates'… See the full description on the dataset page: https://huggingface.co/datasets/Muennighoff/flan.
|
allenai/objaverse
|
allenai
|
Objaverse
Objaverse is a Massive Dataset with 800K+ Annotated 3D Objects.
More documentation is coming soon. In the meantime, please see our paper and website for additional details.
License
The use of the dataset as a whole is licensed under the ODC-By v1.0 license. Individual objects in Objaverse are all licensed as creative commons distributable objects, and may be under the following licenses:
CC-BY 4.0 - 721K objects
CC-BY-NC 4.0 - 25K objects
CC-BY-NC-SA… See the full description on the dataset page: https://huggingface.co/datasets/allenai/objaverse.
|
Kanakmi/mental-disorders
|
Kanakmi
|
Labels:
0:'BPD'
1:'bipolar'
2:'depression'
3:'Anxiety'
4:'schizophrenia'
5:'mentalillness'
|
Dahoas/full-hh-rlhf
|
Dahoas
|
Dataset Card for "full-hh-rlhf"
Anthropic's HH dataset reformatted into prompt, chosen, rejected samples.
|
Muennighoff/natural-instructions
|
Muennighoff
|
Preprocessed version of Super-Natural-Instructions from https://github.com/allenai/natural-instructions/tree/master/splits. The same inputs may appear with different outputs, thus to avoid duplicate inputs, you can deduplicate by the id or the inputs field.
Train Tasks:
['task001_quoref_question_generation', 'task002_quoref_answer_generation', 'task022_cosmosqa_passage_inappropriate_binary', 'task023_cosmosqa_question_generation', 'task024_cosmosqa_answer_generation'… See the full description on the dataset page: https://huggingface.co/datasets/Muennighoff/natural-instructions.
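A minimal sketch of the deduplication the card suggests, keyed on the id field (streaming and the "train" split name are assumptions; dedup by the inputs field would work the same way):
from datasets import load_dataset

# Keep only the first occurrence of each id while streaming.
ni = load_dataset("Muennighoff/natural-instructions", split="train", streaming=True)
seen_ids = set()
unique_examples = []
for example in ni:
    if example["id"] in seen_ids:
        continue
    seen_ids.add(example["id"])
    unique_examples.append(example)
    if len(unique_examples) == 1000:  # stop early for illustration
        break
print(len(unique_examples))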
|
openai/webgpt_comparisons
|
openai
|
WebGPT Comparisons contains all of the comparisons marked as suitable for reward modelling from the WebGPT paper.
|
bigcode/the-stack-metadata
|
bigcode
|
Dataset Card for The Stack Metadata
Changelog
Release
Description
v1.1
This is the first release of the metadata. It is for The Stack v1.1
v1.2
Metadata dataset matching The Stack v1.2
Dataset Summary
This is a set of additional information for repositories used for The Stack. It contains file paths, detected licenses, as well as some other information for the repositories.
Supported Tasks and Leaderboards
The main… See the full description on the dataset page: https://huggingface.co/datasets/bigcode/the-stack-metadata.
|
gamino/wiki_medical_terms
|
gamino
|
Dataset Card for [Dataset Name]
Dataset Summary
This dataset contains over 6,000 medical terms and their Wikipedia text. It is intended to be used on a downstream task that requires medical terms and their Wikipedia explanation.
Dataset Structure
Data Instances
[More Information Needed]
Data Fields
[More Information Needed]
Data Splits
[More Information Needed]
Dataset Creation… See the full description on the dataset page: https://huggingface.co/datasets/gamino/wiki_medical_terms.
|
masakhane/afriqa_wiki_en_fr_100
|
masakhane
| |
Anthropic/model-written-evals
|
Anthropic
|
Model-Written Evaluation Datasets
This repository includes datasets written by language models, used in our paper on "Discovering Language Model Behaviors with Model-Written Evaluations."
We intend the datasets to be useful to:
Those who are interested in understanding the quality and properties of model-generated data
Those who wish to use our datasets to evaluate other models for the behaviors we examined in our work (e.g., related to model persona, sycophancy, advanced AI… See the full description on the dataset page: https://huggingface.co/datasets/Anthropic/model-written-evals.
|
mrm8488/unnatural-instructions-core
|
mrm8488
|
Dataset Card for Unnatural Instructions (Core data)
This info comes from the Unnatural Instructions GitHub repo.
Unnatural Instructions is a dataset of instructions automatically generated by a Large Language model.
See full details in the paper: "Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor"
🗃️ Content
The Unnatural Instructions core dataset of 68,478 instruction-input-output triplets.
📄 Format
Core data… See the full description on the dataset page: https://huggingface.co/datasets/mrm8488/unnatural-instructions-core.
|
mrm8488/unnatural-instructions-full
|
mrm8488
|
Dataset Card for Unnatural Instructions (Full data)
This info comes from the Unnatural Instructions GitHub repo.
Unnatural Instructions is a dataset of instructions automatically generated by a Large Language model.
See full details in the paper: "Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor"
🗃️ Content
It contains the full 240,670 Unnatural Instructions (instruction-input-output triplets) examples. It was constructed by expanding… See the full description on the dataset page: https://huggingface.co/datasets/mrm8488/unnatural-instructions-full.
|
bookbot/ljspeech_phonemes
|
bookbot
|
Dataset Card for "ljspeech_phonemes"
More Information needed
|
akariasai/PopQA
|
akariasai
|
Dataset Card for PopQA
Dataset Summary
PopQA is a large-scale open-domain question answering (QA) dataset, consisting of 14k entity-centric QA pairs. Each question is created by converting a knowledge tuple retrieved from Wikidata using a template. Each question comes with the original subject_entity, object_entity and relationship_type annotations, as well as Wikipedia monthly page views.
Languages
The dataset contains samples in English only.… See the full description on the dataset page: https://huggingface.co/datasets/akariasai/PopQA.
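A minimal sketch to inspect the splits and the entity/page-view annotations mentioned above (the split layout and exact column names are not asserted, so the structure is printed first):
from datasets import load_dataset

# Load PopQA and print its structure before relying on specific column names.
popqa = load_dataset("akariasai/PopQA")
print(popqa)                   # available splits and columns
first_split = next(iter(popqa.values()))
print(first_split[0])          # one entity-centric QA pair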
|
Dahoas/rm-static
|
Dahoas
|
Dataset Card for "rm-static"
Split of hh-static used for training reward models after supervised fine-tuning.
|
olm/olm-wikipedia-20221220
|
olm
|
Dataset Card for OLM December 2022 Wikipedia
Pretraining dataset, created with the OLM repo here from a December 2022 Wikipedia snapshot.
|
Jean-Baptiste/financial_news_sentiment
|
Jean-Baptiste
|
Dataset Card for "financial_news_sentiment"
Manually validated sentiment for ~2000 Canadian news articles.
The dataset also includes a column topic which contains one of the following values:
acquisition
other
quarterly financial release
appointment to new position
dividend
corporate update
drillings results
conference
share repurchase program
grant of stocks
This was generated automatically using a zero-shot classification model and was not reviewed manually.
|