id | author | description
---|---|---
bigscience/P3
|
bigscience
|
Dataset Card for P3
Dataset Summary
P3 (Public Pool of Prompts) is a collection of prompted English datasets covering a diverse set of NLP tasks. A prompt is the combination of an input template and a target template. The templates are functions mapping a data example into natural language for the input and target sequences. For example, in the case of an NLI dataset, the data example would include fields for Premise, Hypothesis, Label. An input template would be… See the full description on the dataset page: https://huggingface.co/datasets/bigscience/P3.
|
castorini/afriberta-corpus
|
castorini
|
Corpus used for training AfriBERTa models
|
ccdv/arxiv-classification
|
ccdv
|
Arxiv Classification: a classification of Arxiv Papers (11 classes).
This dataset is intended for long-context classification (all documents have > 4k tokens). Copied from "Long Document Classification From Local Word Glimpses via Recurrent Attention Learning"
@ARTICLE{8675939,
author={He, Jun and Wang, Liqun and Liu, Liu and Feng, Jiao and Wu, Hao},
journal={IEEE Access},
title={Long Document Classification From Local Word Glimpses via Recurrent Attention Learning},
year={2019}… See the full description on the dataset page: https://huggingface.co/datasets/ccdv/arxiv-classification.
|
ccdv/arxiv-summarization
|
ccdv
|
Arxiv dataset for summarization
Dataset for summarization of long documents. Adapted from this repo. Note that the original data are pre-tokenized, so this dataset returns " ".join(text) and adds "\n" for paragraphs. This dataset is compatible with the run_summarization.py script from Transformers if you add this line to the summarization_name_mapping variable:
"ccdv/arxiv-summarization": ("article", "abstract")
Data Fields
id: paper id
article: a string containing the… See the full description on the dataset page: https://huggingface.co/datasets/ccdv/arxiv-summarization.
|
ccdv/govreport-summarization
|
ccdv
|
GovReport dataset for summarization
Dataset for summarization of long documents. Adapted from this repo and this paper. This dataset is compatible with the run_summarization.py script from Transformers if you add this line to the summarization_name_mapping variable:
"ccdv/govreport-summarization": ("report", "summary")
Data Fields
id: paper id
report: a string containing the body of the report
summary: a string containing the summary of the report
Data… See the full description on the dataset page: https://huggingface.co/datasets/ccdv/govreport-summarization.
|
ccdv/pubmed-summarization
|
ccdv
|
PubMed dataset for summarization
Dataset for summarization of long documents. Adapted from this repo. Note that the original data are pre-tokenized, so this dataset returns " ".join(text) and adds "\n" for paragraphs. This dataset is compatible with the run_summarization.py script from Transformers if you add this line to the summarization_name_mapping variable:
"ccdv/pubmed-summarization": ("article", "abstract")
Data Fields
id: paper id
article: a string containing… See the full description on the dataset page: https://huggingface.co/datasets/ccdv/pubmed-summarization.
|
clips/mfaq
|
clips
|
We present the first multilingual FAQ dataset publicly available. We collected around 6M FAQ pairs from the web, in 21 different languages.
|
clips/mqa
|
clips
|
MQA is a multilingual corpus of questions and answers parsed from the Common Crawl. Questions are divided between Frequently Asked Questions (FAQ) pages and Community Question Answering (CQA) pages.
|
dynabench/dynasent
|
dynabench
|
Dynabench.DynaSent is a sentiment analysis dataset collected using a human-and-model-in-the-loop approach.
|
flax-sentence-embeddings/stackexchange_math_jsonl
|
flax-sentence-embeddings
|
This new dataset is designed to solve this great NLP task and is crafted with a lot of care.
|
gabtan99/pex-conversations
|
gabtan99
|
PinoyExchange (PEx) Conversations Dataset
Summary
PEx Conversations is a dataset composed of collected threads from PinoyExchange.com (Consisting of Tagalog, English, or Taglish responses).
The corpus consists of 45K total scraped threads from 8 subforums. The data consists only of the user messages, which means any images, videos, links, or embedded HTML are not collected in the scraping process. All characters have been transliterated to their closest ASCII… See the full description on the dataset page: https://huggingface.co/datasets/gabtan99/pex-conversations.
|
gsarti/clean_mc4_it
|
gsarti
|
A thoroughly cleaned version of the Italian portion of the multilingual
colossal, cleaned version of Common Crawl's web crawl corpus (mC4) by AllenAI.
Based on Common Crawl dataset: "https://commoncrawl.org".
This is the processed version of Google's mC4 dataset by AllenAI, with further cleaning
detailed in the repository README file.
|
gsarti/flores_101
|
gsarti
|
One of the biggest challenges hindering progress in low-resource and multilingual machine translation is the
lack of good evaluation benchmarks. Current evaluation benchmarks either lack good coverage of low-resource
languages, consider only restricted domains, or are low quality because they are constructed using
semi-automatic procedures. In this work, we introduce the FLORES evaluation benchmark, consisting of 3001
sentences extracted from English Wikipedia and covering a variety of different topics and domains.
These sentences have been translated in 101 languages by professional translators through a carefully
controlled process. The resulting dataset enables better assessment of model quality on the long tail of
low-resource languages, including the evaluation of many-to-many multilingual translation systems, as all
translations are multilingually aligned. By publicly releasing such a high-quality and high-coverage dataset,
we hope to foster progress in the machine translation community and beyond.
|
huggingartists/kendrick-lamar
|
huggingartists
|
This dataset is designed to generate lyrics with HuggingArtists.
|
huggingartists/mf-doom
|
huggingartists
|
This dataset is designed to generate lyrics with HuggingArtists.
|
huggingartists/young-thug
|
huggingartists
|
This dataset is designed to generate lyrics with HuggingArtists.
|
iarfmoose/qa_evaluator
|
iarfmoose
|
This is the same dataset as the question_generator dataset but with the context removed and the question and answer in separate fields. This is intended to be used with the question_generator repo to train the qa_evaluator model which predicts whether a question and answer pair makes sense.
|
iarfmoose/question_generator
|
iarfmoose
|
This dataset is made up of data taken from SQuAD v2.0, RACE, CoQA, and MSMARCO. Some examples have been filtered out of the original datasets and others have been modified.
There are two fields: question and text. The question field contains the question, and the text field contains both the answer and the context in the following format:
"<answer> (answer text) <context> (context text)"
The <answer> and <context> tags are included as special tokens in the question generator's tokenizer.
This dataset is intended to… See the full description on the dataset page: https://huggingface.co/datasets/iarfmoose/question_generator.
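A rough illustration of building one example in this format (a minimal sketch; the field values are hypothetical):
# Build one (question, text) pair in the format described above.
answer = "the Eiffel Tower"                                    # hypothetical answer span
context = "The Eiffel Tower was completed in 1889 in Paris."   # hypothetical context
question = "Which landmark was completed in 1889?"

text = f"<answer> {answer} <context> {context}"  # special tokens wrap answer and context
example = {"question": question, "text": text}
print(example)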
|
imvladikon/hebrew_speech_kan
|
imvladikon
|
Dataset Card for Dataset Name
Dataset Summary
Hebrew Dataset for ASR
Supported Tasks and Leaderboards
[More Information Needed]
Languages
[More Information Needed]
Dataset Structure
Data Instances
{'audio': {'path': '/root/.cache/huggingface/datasets/downloads/extracted/8ce7402f6482c6053251d7f3000eec88668c994beb48b7ca7352e77ef810a0b6/train/e429593fede945c185897e378a5839f4198.wav',
'array':… See the full description on the dataset page: https://huggingface.co/datasets/imvladikon/hebrew_speech_kan.
|
jfrenz/legalglue
|
jfrenz
|
The Legal General Language Understanding Evaluation (LegalGLUE) benchmark is a collection of datasets for evaluating model performance across a diverse set of legal NLP tasks.
|
jgammack/SAE-door-abstracts
|
jgammack
|
SAE-door-abstracts
This dataset includes ~1,550 texts of abstracts of technical papers and journal articles from the SAE Mobilus database that cover the topics of automotive or aerospace doors, noise, acoustics, and vibrations.
|
jglaser/binding_affinity
|
jglaser
|
A dataset to fine-tune language models on protein-ligand binding affinity prediction.
|
kiyoung2/aistage-mrc
|
kiyoung2
|
AI Stage MRC task
Version Info
v4.1.1
Dataset with punctuation added to the v3.2.3 data (train_dataset_aug), both train and validation
Located in train_aug_punctuation
Fixes the bug in v4.1.0
v4.1.0
Dataset with punctuation added to the v3.2.2 data (train_dataset_aug), both train and validation
Located in train_data_aug
Data with incorrectly labeled answers
v4.0.1
Dataset with punctuation added, both train and validation
answers type is correct
v4.0.0
Dataset with punctuation added, train only
answers… See the full description on the dataset page: https://huggingface.co/datasets/kiyoung2/aistage-mrc.
|
lara-martin/Scifi_TV_Shows
|
lara-martin
|
Dataset Card for Science Fiction TV Show Plots Corpus
Dataset Description
A collection of long-running (80+ episodes) science fiction TV show plot synopses, scraped from Fandom.com wikis. Collected Nov 2017. Each episode is considered a "story".
Contains plot summaries from:
Babylon 5 (https://babylon5.fandom.com/wiki/Main_Page) - 84 stories
Doctor Who (https://tardis.fandom.com/wiki/Doctor_Who_Wiki) - 311 stories
Doctor Who spin-offs - 95 stories
Farscape… See the full description on the dataset page: https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows.
|
lewtun/github-issues
|
lewtun
|
Dataset Card for GitHub Issues
Dataset Summary
GitHub Issues is a dataset consisting of GitHub issues and pull requests associated with the 🤗 Datasets repository. It is intended for educational purposes and can be used for semantic search or multilabel text classification. The contents of each GitHub issue are in English and concern the domain of datasets for NLP, computer vision, and beyond.
Supported Tasks and Leaderboards
For each of the tasks… See the full description on the dataset page: https://huggingface.co/datasets/lewtun/github-issues.
|
liweili/c4_200m
|
liweili
|
GEC Dataset Generated from C4
|
lukesjordan/worldbank-project-documents
|
lukesjordan
|
Dataset Card for World Bank Project Documents
Dataset Summary
This is a dataset of documents related to World Bank development projects in the period 1947-2020. The dataset includes
the documents used to propose or describe projects when they are launched, and those in the review. The documents are indexed
by the World Bank project ID, which can be used to obtain features from multiple publicly available tabular datasets.
Supported Tasks and… See the full description on the dataset page: https://huggingface.co/datasets/lukesjordan/worldbank-project-documents.
|
luozhouyang/dureader
|
luozhouyang
|
dureader
The data comes from the Qianyan (千言) DuReader dataset; the original source is the Qianyan datasets: reading comprehension page.
This dataset is for academic research use only. If this repository involves any infringement, it will be removed immediately.
It currently contains the following two subsets:
DuReader-robust
DuReader-checklist
from datasets import load_dataset
robust = load_dataset("luozhouyang/dureader", "robust")
checklist = load_dataset("luozhouyang/dureader", "checklist")
|
codeparrot/codeparrot-clean
|
codeparrot
|
CodeParrot 🦜 Dataset Cleaned
What is it?
A dataset of Python files from GitHub. This is the deduplicated version of codeparrot.
Processing
The original dataset contains a lot of duplicated and noisy data. Therefore, the dataset was cleaned with the following steps (sketched in code below):
Deduplication
Remove exact matches
Filtering
Average line length < 100
Maximum line length < 1000
Alpha numeric characters fraction > 0.25
Remove auto-generated files (keyword… See the full description on the dataset page: https://huggingface.co/datasets/codeparrot/codeparrot-clean.
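A rough sketch of these filtering heuristics (illustrative only, not the actual cleaning script; thresholds follow the list above):
def passes_filters(code: str) -> bool:
    # Apply the filtering heuristics listed above to a single Python file.
    lines = code.splitlines()
    if not lines:
        return False
    avg_line_len = sum(len(l) for l in lines) / len(lines)
    max_line_len = max(len(l) for l in lines)
    alnum_fraction = sum(c.isalnum() for c in code) / max(len(code), 1)
    return avg_line_len < 100 and max_line_len < 1000 and alnum_fraction > 0.25

# Exact-match deduplication could be done separately, e.g. with a set of content hashes.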
|
ml6team/cnn_dailymail_nl
|
ml6team
|
This dataset is the CNN/Dailymail dataset translated to Dutch.
This is the original dataset:
```
from datasets import load_dataset

load_dataset("cnn_dailymail", '3.0.0')
```
And this is the HuggingFace translation pipeline:
```
from transformers import pipeline

pipeline(
    task='translation_en_to_nl',
    model='Helsinki-NLP/opus-mt-en-nl',
    tokenizer='Helsinki-NLP/opus-mt-en-nl')
```
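For illustration, the two snippets can be combined to translate a single example (a minimal sketch, not the authors' actual translation script; long articles would need to be chunked to fit the model's maximum input length):
```
from datasets import load_dataset
from transformers import pipeline

# Translate the summary ("highlights") of one English example to Dutch
cnn = load_dataset("cnn_dailymail", "3.0.0", split="train[:1]")
translator = pipeline(
    task="translation_en_to_nl",
    model="Helsinki-NLP/opus-mt-en-nl",
    tokenizer="Helsinki-NLP/opus-mt-en-nl")
dutch_summary = translator(cnn[0]["highlights"])[0]["translation_text"]
print(dutch_summary)
```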
|
ml6team/xsum_nl
|
ml6team
|
Dataset Card for XSum NL
Dataset Summary
This dataset is a machine translated dataset. It's the XSum dataset translated with this model from English to Dutch.
See the Hugging Face page of the original dataset for more information on the format of this dataset.
Use with:
from datasets import load_dataset
load_dataset("csv", "ml6team/xsum_nl")
Languages
Dutch
Dataset Structure
Data Instances
[More Information Needed]… See the full description on the dataset page: https://huggingface.co/datasets/ml6team/xsum_nl.
|
mostol/wiktionary-ipa
|
mostol
|
Pronunciation information pulled from wiktionary.org.
|
nateraw/fairface
|
nateraw
|
Dataset Card for FairFace
Usage
from io import BytesIO
from PIL import Image
import datasets

def bytes_to_pil(example_batch):
    example_batch['img'] = [
        Image.open(BytesIO(b)) for b in example_batch.pop('img_bytes')
    ]
    return example_batch

ds = datasets.load_dataset('nateraw/fairface')
ds = ds.with_transform(bytes_to_pil)
Dataset Summary
Existing public face datasets are strongly biased toward Caucasian faces, and other races… See the full description on the dataset page: https://huggingface.co/datasets/nateraw/fairface.
|
ncduy/mt-en-vi
|
ncduy
|
Dataset Card for Machine Translation Paired English-Vietnamese Sentences
Dataset Summary
[More Information Needed]
Supported Tasks and Leaderboards
[More Information Needed]
Languages
The languages of the dataset sentences are English ('en') and Vietnamese ('vi').
Dataset Structure
Data Instances
An instance example:
{
'en': 'And what I think the world needs now is more connections.',
'vi': 'Và tôi… See the full description on the dataset page: https://huggingface.co/datasets/ncduy/mt-en-vi.
|
nferruz/UR50_2021_04
|
nferruz
|
Dataset Card for UR50_2021_04
Dataset Summary
The Uniref50 (UR50) dataset version 2021/04 is a biological dataset taken from the Uniprot database: https://www.uniprot.org/
Supported Tasks and Leaderboards
The UR50 dataset contains 48 Million protein sequences. It is a useful dataset to train protein language models.
Languages
Proteins
Dataset Structure
Data Instances
Data Fields… See the full description on the dataset page: https://huggingface.co/datasets/nferruz/UR50_2021_04.
|
nthngdy/ccnews_split
|
nthngdy
|
CC-News contains news articles from news sites all over the world. The data is available on AWS S3 in the Common Crawl bucket at /crawl-data/CC-NEWS/. This version of the dataset has 708,241 articles. It represents a small portion of the English-language subset of CC-News, created using news-please (Hamborg et al., 2017) to collect and extract the English-language portion of CC-News.
|
ought/raft
|
ought
|
Large pre-trained language models have shown promise for few-shot learning, completing text-based tasks given only a few task-specific examples. Will models soon solve classification tasks that have so far been reserved for human research assistants?
[RAFT](https://raft.elicit.org) is a few-shot classification benchmark that tests language models:
- across multiple domains (lit review, tweets, customer interaction, etc.)
- on economically valuable classification tasks (someone inherently cares about the task)
- in a setting that mirrors deployment (50 examples per task, info retrieval allowed, hidden test set)
|
pile-of-law/pile-of-law
|
pile-of-law
|
We curate a large corpus of legal and administrative data. The utility of this data is twofold: (1) to aggregate legal and administrative data sources that demonstrate different norms and legal standards for data filtering; (2) to collect a dataset that can be used in the future for pretraining legal-domain language models, a key direction in access-to-justice initiatives.
|
pmc/open_access
|
pmc
|
The PMC Open Access Subset includes more than 3.4 million journal articles and preprints that are made available under
license terms that allow reuse.
Not all articles in PMC are available for text mining and other reuse; many have copyright protection. However, articles
in the PMC Open Access Subset are made available under Creative Commons or similar licenses that generally allow more
liberal redistribution and reuse than a traditional copyrighted work.
The PMC Open Access Subset is one part of the PMC Article Datasets
|
qanastek/ELRC-Medical-V2
|
qanastek
|
ELRC-Medical-V2 : European parallel corpus for healthcare machine translation
Dataset Summary
ELRC-Medical-V2 is a parallel corpus for neural machine translation funded by the European Commission and coordinated by the German Research Center for Artificial Intelligence.
Supported Tasks and Leaderboards
translation: The dataset can be used to train a model for translation.
Languages
In our case, the corpora consists of a pair of… See the full description on the dataset page: https://huggingface.co/datasets/qanastek/ELRC-Medical-V2.
|
qanastek/EMEA-V3
|
qanastek
|
EMEA-V3 : European parallel translation corpus from the European Medicines Agency
Dataset Summary
EMEA-V3 is a parallel corpus for neural machine translation collected and aligned by Tiedemann, Jorg during the OPUS project.
Supported Tasks and Leaderboards
translation: The dataset can be used to train a model for translation.
Languages
In our case, the corpora consists of a pair of source and target sentences for all 22 different… See the full description on the dataset page: https://huggingface.co/datasets/qanastek/EMEA-V3.
|
qanastek/WMT-16-PubMed
|
qanastek
|
WMT'16 Biomedical Translation Task - PubMed parallel datasets
http://www.statmt.org/wmt16/biomedical-translation-task.html
|
rahular/itihasa
|
rahular
|
A Sanskrit-English machine translation dataset.
|
sentence-transformers/embedding-training-data
|
sentence-transformers
|
Training Data for Text Embedding Models
This repository contains raw datasets, all of which have also been formatted for easy training in the Embedding Model Datasets collection. We recommend looking there first.
This repository contains training files to train text embedding models, e.g. using sentence-transformers.
Data Format
All files are in jsonl.gz format: each line contains a JSON object that represents one training example.
The JSON objects can come in… See the full description on the dataset page: https://huggingface.co/datasets/sentence-transformers/embedding-training-data.
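A minimal sketch of reading such a file (the filename is hypothetical; the actual files are listed on the dataset page):
import gzip
import json

# Read one training example per line from a jsonl.gz file
with gzip.open("some_training_file.jsonl.gz", "rt", encoding="utf-8") as f:
    for line in f:
        example = json.loads(line)  # one JSON object per line
        print(example)
        break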
|
sentence-transformers/reddit-title-body
|
sentence-transformers
|
Reddit (Title, Body)-Pairs
This dataset contains jsonl files of (title, body) pairs from Reddit. Each line is a JSON object of the following format:
{'title': 'The title of a thread', 'body': 'The longer body of the thread', 'subreddit': 'subreddit_name'}
The 2021 file contains submissions up to and including 2021-06. Entries in the respective files are shuffled on a monthly basis.
The data has been filtered for:
Remove threads with an upvote_ratio < 0.5
Only include threads… See the full description on the dataset page: https://huggingface.co/datasets/sentence-transformers/reddit-title-body.
|
shibing624/source_code
|
shibing624
|
Plain-text data. Content: high-quality programming source code, including Python, Java, and C++ source code.
|
solomonk/reddit_mental_health_posts
|
solomonk
|
Reddit posts about mental health
files
adhd.csv from r/adhd
aspergers.csv from r/aspergers
depression.csv from r/depression
ocd.csv from r/ocd
ptsd.csv from r/ptsd
fields
author
body
created_utc
id
num_comments
score
subreddit
title
upvote_ratio
url
For more details about these fields, see Praw Submission.
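A minimal sketch of loading one of the CSV files with pandas (assuming the file has been downloaded locally; column names follow the field list above):
import pandas as pd

# Load the r/adhd posts and look at a few of the documented fields
df = pd.read_csv("adhd.csv")
print(df[["author", "title", "score", "num_comments", "subreddit"]].head())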
|
stas/openwebtext-10k
|
stas
|
An open-source replication of the WebText dataset from OpenAI.
This is a small subset representing the first 10K records from the original dataset - created for testing.
The full 8M-record dataset is at https://huggingface.co/datasets/openwebtext
|
svakulenk0/qrecc
|
svakulenk0
|
Dataset Card for QReCC: Question Rewriting in Conversational Context
Dataset Summary
QReCC (Question Rewriting in Conversational Context) is an end-to-end open-domain question answering dataset comprising 14K conversations with 81K question-answer pairs. The goal of this dataset is to provide a challenging benchmark for end-to-end conversational question answering that includes the individual subtasks of question rewriting, passage retrieval and reading… See the full description on the dataset page: https://huggingface.co/datasets/svakulenk0/qrecc.
|
tau/scrolls
|
tau
|
SCROLLS: Standardized CompaRison Over Long Language Sequences.
A suite of natural language datasets that require reasoning over long texts.
https://scrolls-benchmark.com/
|
tharindu/SOLID
|
tharindu
|
SOLID: A Large-Scale Semi-Supervised Dataset for Offensive Language Identification
The widespread use of offensive content in social media has led to an abundance of research in detecting language such as hate speech, cyberbullying, and cyber-aggression. Recent work presented the OLID dataset, which follows a taxonomy for offensive language identification that provides meaningful information for understanding the type and the target of offensive messages. However, it is limited… See the full description on the dataset page: https://huggingface.co/datasets/tharindu/SOLID.
|
transformersbook/codeparrot
|
transformersbook
|
CodeParrot 🦜 Dataset
What is it?
This is the full CodeParrot dataset. It contains Python files used to train the code generation model in Chapter 10: Training Transformers from Scratch in the NLP with Transformers book. You can find the full code in the accompanying Github repository.
Creation
It was created with the GitHub dataset available via Google's BigQuery. It contains approximately 22 million Python files and is 180 GB (50 GB compressed)… See the full description on the dataset page: https://huggingface.co/datasets/transformersbook/codeparrot.
|
turingbench/TuringBench
|
turingbench
|
This benchmark environment contains a dataset comprised of generated texts from pre-trained language models.
We also have two benchmark tasks - human vs. machine (i.e., binary classification) and authorship
attribution (i.e., multi-class classification). These benchmark tasks and dataset are hosted on the
TuringBench website with Leaderboards for each task.
|
ucberkeley-dlab/measuring-hate-speech
|
ucberkeley-dlab
|
Dataset card for Measuring Hate Speech
This is a public release of the dataset described in Kennedy et al. (2020) and Sachdeva et al. (2022), consisting of 39,565 comments annotated by 7,912 annotators, for 135,556 combined rows. The primary outcome variable is the "hate speech score" but the 10 constituent ordinal labels (sentiment, (dis)respect, insult, humiliation, inferior status, violence, dehumanization, genocide, attack/defense, hate speech benchmark) can also be treated… See the full description on the dataset page: https://huggingface.co/datasets/ucberkeley-dlab/measuring-hate-speech.
|
unicamp-dl/mmarco
|
unicamp-dl
|
mMARCO translated datasets
|
wikimedia/wikisource
|
wikimedia
|
Dataset Card for Wikimedia Wikisource
Dataset Summary
Wikisource dataset containing cleaned articles of all languages.
The dataset is built from the Wikisource dumps (https://dumps.wikimedia.org/)
with one subset per language, each containing a single train split.
Each example contains the content of one full Wikisource text with cleaning to strip
markdown and unwanted sections (references, etc.).
All language subsets have already been processed for recent dump… See the full description on the dataset page: https://huggingface.co/datasets/wikimedia/wikisource.
|
winvoker/turkish-sentiment-analysis-dataset
|
winvoker
|
Dataset
This dataset contains positive, negative, and notr (neutral) sentences from several data sources given in the references. Most sentiment models have only two labels, positive and negative; however, user input can be an entirely neutral sentence, and there were no data I could find for such cases. Therefore I created this dataset with 3 classes. Positive and negative sentences are listed below. Notr examples are extracted from the Turkish wiki dump. In addition, added some random text… See the full description on the dataset page: https://huggingface.co/datasets/winvoker/turkish-sentiment-analysis-dataset.
|
z-uo/female-LJSpeech-italian
|
z-uo
|
Italian Female Voice
This dataset is an Italian version of LJSpeech that merges all female audio of the same speaker found in the M-AILABS Speech Dataset.
This dataset contains 8h 23m of a single speaker recorded at 16000 Hz. It is a valid choice for training an Italian TTS deep model with a female voice.
|
zhoujun/hitab
|
zhoujun
|
annotations_creators: crowdsourced
language_creators: crowdsourced
languages: en
multilinguality: monolingual
size_categories: 100K<n<1M
source_datasets: original
task_categories: tableqa, data2text
task_ids: tableqa
|
huggan/anime-faces
|
huggan
|
Dataset Card for anime-faces
Dataset Summary
This is a dataset consisting of 21551 anime faces scraped from www.getchu.com, which are then cropped using the anime face detection algorithm in https://github.com/nagadomi/lbpcascade_animeface. All images are resized to 64 * 64 for the sake of convenience. Please also cite the two sources when using this dataset.
Some outliers are still present in the dataset:
Bad cropping results
Some non-human faces.
Feel free to… See the full description on the dataset page: https://huggingface.co/datasets/huggan/anime-faces.
|
nlpaueb/finer-139
|
nlpaueb
|
FiNER-139 is a named entity recognition dataset consisting of 10K annual
and quarterly English reports (filings) of publicly traded companies
downloaded from the U.S. Securities and Exchange Commission (SEC)
annotated with 139 XBRL tags in the IOB2 format.
|
google/xtreme_s
|
google
|
XTREME-S covers four task families: speech recognition, classification, speech-to-text translation and retrieval. Covering 102
languages from 10+ language families, 3 different domains and 4
task families, XTREME-S aims to simplify multilingual speech
representation evaluation, as well as catalyze research in “universal” speech representation learning.
|
McGill-NLP/feedbackQA
|
McGill-NLP
|
FeedbackQA is a retrieval-based QA dataset that contains interactive feedback from users. It has two parts: the first part contains a conventional RQA dataset, whilst this repo contains the second part, which contains feedback (ratings and natural language explanations) for QA pairs.
|
gigant/horse2zebra
|
gigant
|
Two unpaired sets of photos of respectively horses and zebras, designed for unpaired image-to-image translation, as seen in the paper introducing CycleGAN
|
ctheodoris/Genecorpus-30M
|
ctheodoris
|
Dataset Card for Genecorpus-30M
Dataset Description
Point of Contact: [email protected]
Dataset Summary
We assembled a large-scale pretraining corpus, Genecorpus-30M, comprised of ~30 million human single cell transcriptomes from a broad range of tissues from publicly available data. This corpus was used for pretraining Geneformer, a pretrained transformer model that enables context-aware predictions in settings with limited… See the full description on the dataset page: https://huggingface.co/datasets/ctheodoris/Genecorpus-30M.
|
SetFit/amazon_reviews_multi_ja
|
SetFit
|
Amazon Reviews Multi Japanese
This dataset is a port of the official amazon_reviews_multi dataset (https://huggingface.co/datasets/amazon_reviews_multi) on the Hub. It contains just the Japanese language version and has been reduced to just 3 columns (plus a 4th, "label_text") that are relevant to the SetFit task.
|
Stanford/wikitablequestions
|
Stanford
|
This WikiTableQuestions dataset is a large-scale dataset for the task of question answering on semi-structured tables.
|
openclimatefix/uk_pv
|
openclimatefix
|
# UK PV dataset
PV solar generation data from the UK.
This dataset contains data from 1311 PV systems from 2018-01-01 to 2021-10-27.
The time series of solar generation is in 5 minutes chunks.
This data is collected from live PV systems in the UK. We have obfuscated the locations of the PV systems for privacy.
If you are the owner of a PV system in the dataset, and do not want this data to be shared,
please do get in contact with [email protected].
## Files
The dataset contains two files (a loading sketch follows at the end of this description):
- metadata.csv: Data about the PV systems, e.g. location
- pv.netcdf: Time series of PV solar generation
### metadata.csv
Metadata of the different PV systems.
Note that there are extra PV systems in this metadata that do not appear in the pv timeseries data
The csv columns are
- ss_id: the id of the system
- latitude_rounded: latitude of the pv system, but rounded to approximately the nearest km
- longitude_rounded: longitude of the pv system, but rounded to approximately the nearest km
- llsoacd: TODO
- orientation: The orientation of the pv system
- tilt: The tilt of the pv system
- kwp: The capacity of the pv system
- operational_at: the datetime the pv system started working
### pv.netcdf
Time series data of pv solar generation data is in a [xarray](https://docs.xarray.dev/en/stable/) format.
The data variables are the same as 'ss_id' in the metadata.
Each data variable contains the solar generation (in kw) for that pv system.
The ss_id's here are a subset of all the ss_id's in the metadata.
The coordinates of the data are 'datetime', which is the datetime of the solar generation reading.
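A minimal sketch of opening both files (assuming they have been downloaded locally; variable handling follows the description above):
import pandas as pd
import xarray as xr

# Metadata: one row per PV system
metadata = pd.read_csv("metadata.csv")
print(metadata[["ss_id", "kwp", "orientation", "tilt"]].head())

# Time series: one data variable per ss_id, with a 'datetime' coordinate.
# Depending on how the file was written, an explicit engine (e.g. engine="h5netcdf") may be needed.
pv = xr.open_dataset("pv.netcdf")
first_system = list(pv.data_vars)[0]  # data variables are named by ss_id
print(pv[first_system].sel(datetime=slice("2018-01-01", "2018-01-02")))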
|
Monash-University/monash_tsf
|
Monash-University
|
Monash Time Series Forecasting Repository which contains 30+ datasets of related time series for global forecasting research. This repository includes both real-world and competition time series datasets covering varied domains.
|
emrecan/nli_tr_for_simcse
|
emrecan
|
NLI-TR for Supervised SimCSE
This dataset is a modified version of the NLI-TR dataset. Its intended use is to train Supervised SimCSE models for sentence embeddings. The steps followed to produce this dataset are listed below (a construction sketch follows the list):
Merge train split of snli_tr and multinli_tr subsets.
Find every premise that has an entailment hypothesis and a contradiction hypothesis.
Write found triplets into sent0 (premise), sent1 (entailment hypothesis), hard_neg (contradiction hypothesis) format.
See… See the full description on the dataset page: https://huggingface.co/datasets/emrecan/nli_tr_for_simcse.
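A rough sketch of that triplet construction (illustrative only; it assumes NLI examples with 'premise', 'hypothesis', and 'label' fields where 0 = entailment and 2 = contradiction, as in the usual SNLI/MultiNLI label scheme):
from collections import defaultdict

def build_simcse_triplets(nli_examples):
    # nli_examples could be, e.g., the concatenated train splits of snli_tr and multinli_tr.
    # Group hypotheses by premise and label (0 = entailment, 2 = contradiction).
    by_premise = defaultdict(lambda: {"entailment": [], "contradiction": []})
    for ex in nli_examples:
        if ex["label"] == 0:
            by_premise[ex["premise"]]["entailment"].append(ex["hypothesis"])
        elif ex["label"] == 2:
            by_premise[ex["premise"]]["contradiction"].append(ex["hypothesis"])

    # Keep premises that have both an entailment and a contradiction hypothesis.
    triplets = []
    for premise, hyps in by_premise.items():
        for ent in hyps["entailment"]:
            for con in hyps["contradiction"]:
                triplets.append({"sent0": premise, "sent1": ent, "hard_neg": con})
    return triplets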
|
huggan/few-shot-art-painting
|
huggan
|
Citation
@article{DBLP:journals/corr/abs-2101-04775,
author = {Bingchen Liu and
Yizhe Zhu and
Kunpeng Song and
Ahmed Elgammal},
title = {Towards Faster and Stabilized {GAN} Training for High-fidelity Few-shot
Image Synthesis},
journal = {CoRR},
volume = {abs/2101.04775},
year = {2021},
url = {https://arxiv.org/abs/2101.04775},
eprinttype = {arXiv},
eprint = {2101.04775}… See the full description on the dataset page: https://huggingface.co/datasets/huggan/few-shot-art-painting.
|
marksverdhei/wordnet-definitions-en-2021
|
marksverdhei
|
Wordnet definitions for English
Dataset by Princeton WordNet and the Open English WordNet team
https://github.com/globalwordnet/english-wordnet
This dataset contains every entry in wordnet that has a definition and an example.
Be aware that the word "null" can be misinterpreted as a null value if loading it with e.g. pandas.
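A minimal sketch of working around that with pandas (the filename is hypothetical):
import pandas as pd

# By default pandas parses the string "null" as NaN; disable that so the
# word "null" survives as a normal string value.
df = pd.read_csv("wordnet_definitions.csv", keep_default_na=False, na_values=[])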
|
somosnlp-hackathon-2022/spanish-to-quechua
|
somosnlp-hackathon-2022
|
Spanish to Quechua
Dataset Description
This dataset is a compilation of websites and other datasets shown in the dataset creation section. It contains translations from Spanish (es) to Quechua of Ayacucho (qu).
Dataset Structure
Data Fields
es: The sentence in Spanish.
qu: The sentence in Quechua of Ayacucho.
Data Splits
train: To train the model (102 747 sentences).
Validation: To validate the model during training… See the full description on the dataset page: https://huggingface.co/datasets/somosnlp-hackathon-2022/spanish-to-quechua.
|
SocialGrep/the-reddit-dataset-dataset
|
SocialGrep
|
A meta dataset of Reddit's own /r/datasets community.
|
bible-nlp/biblenlp-corpus
|
bible-nlp
|
Dataset Card for BibleNLP Corpus
Dataset Summary
Partial and complete Bible translations in 833 languages, aligned by verse.
Languages
aai, aak, aau, aaz, abt, abx, aby, acf, acr, acu, adz, aer, aey, agd, agg, agm, agn, agr, agt, agu, aia, aii, aka, ake, alp, alq, als, aly, ame, amf, amk, amm, amn, amo, amp, amr, amu, amx, anh, anv, aoi, aoj, aom, aon, apb, ape, apn, apr, apu, apw, apz, arb, are, arl, arn, arp, asm, aso, ata, atb, atd, atg, att, auc… See the full description on the dataset page: https://huggingface.co/datasets/bible-nlp/biblenlp-corpus.
|
skt/kobest_v1
|
skt
|
Dataset Card for KoBEST
Dataset Summary
KoBEST is a Korean benchmark suite consisting of 5 natural language understanding tasks that require advanced knowledge of Korean.
Supported Tasks and Leaderboards
Boolean Question Answering, Choice of Plausible Alternatives, Words-in-Context, HellaSwag, Sentiment Negation Recognition
Languages
ko-KR
Dataset Structure
Data Instances
KB-BoolQ
An example… See the full description on the dataset page: https://huggingface.co/datasets/skt/kobest_v1.
|
huggingnft/cryptopunks
|
huggingnft
|
Dataset Card
Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available here.
Model is available here.
Check Space: link.
Supported Tasks and Leaderboards
More Information Needed
How to use
How to load this dataset directly with the datasets library:
from… See the full description on the dataset page: https://huggingface.co/datasets/huggingnft/cryptopunks.
|
vicenteor/sbu_captions
|
vicenteor
|
The SBU Captioned Photo Dataset is a collection of over 1 million images with associated text descriptions extracted from Flickr.
|
patriziobellan/PET
|
patriziobellan
|
Although there is a long tradition of work in NLP on extracting entities and relations from text, to date there exists little work on the acquisition of business processes from unstructured data such as textual corpora of process descriptions. With this work we aim at filling this gap and establishing the first steps towards bridging data-driven information extraction methodologies from Natural Language Processing and the model-based formalization aimed at by Business Process Management. To this end, we develop the first corpus of business process descriptions annotated with activities, gateways, actors and flow information. We present our new resource, including a detailed overview of the annotation schema and guidelines, as well as a variety of baselines to benchmark the difficulty and challenges of business process extraction from text.
|
Divyanshu/indicxnli
|
Divyanshu
|
IndicXNLI is a translated version of XNLI to 11 Indic Languages. As with XNLI, the goal is
to predict textual entailment (does sentence A imply/contradict/neither sentence
B) and is a classification task (given two sentences, predict one of three
labels).
|
KevinZ/oLMpics
|
KevinZ
|
This is a set of eight datasets from the paper "oLMpics - On what Language Model Pre-training Captures"
by Alon Talmor et al.
|
stepp1/tweet_emotion_intensity
|
stepp1
|
Tweet Emotion Intensity Dataset
Papers:
Emotion Intensities in Tweets. Saif M. Mohammad and Felipe Bravo-Marquez. In Proceedings of the sixth joint conference on lexical and computational semantics (*Sem), August 2017, Vancouver, Canada.
WASSA-2017 Shared Task on Emotion Intensity. Saif M. Mohammad and Felipe Bravo-Marquez. In Proceedings of the EMNLP 2017 Workshop on Computational Approaches to Subjectivity, Sentiment, and Social Media (WASSA), September 2017… See the full description on the dataset page: https://huggingface.co/datasets/stepp1/tweet_emotion_intensity.
|
bigscience-historical-texts/Open_Medieval_French
|
bigscience-historical-texts
|
Open Medieval French
Source: https://github.com/OpenMedFr/texts
|
enoriega/GENIA-Term-Corpus
|
enoriega
|
GENIA Term corpus
|
ranjaykrishna/visual_genome
|
ranjaykrishna
|
Visual Genome enables modeling of objects and the relationships between them.
It collects dense annotations of objects, attributes, and relationships within each image.
Specifically, the dataset contains over 108K images where each image has an average of 35 objects, 26 attributes, and 21 pairwise relationships between objects.
|
aharley/rvl_cdip
|
aharley
|
The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images.
|
blinoff/kinopoisk
|
blinoff
|
Dataset Summary
Kinopoisk movie reviews dataset (TOP250 & BOTTOM100 rank lists).
In total it contains 36,591 reviews from July 2004 to November 2012.
With the following distribution along the 3-point sentiment scale:
Good: 27,264;
Bad: 4,751;
Neutral: 4,576.
Data Fields
Each sample contains the following fields:
part: rank list top250 or bottom100;
movie_name;
review_id;
author: review author;
date: date of a review;
title: review title;
grade3: sentiment score… See the full description on the dataset page: https://huggingface.co/datasets/blinoff/kinopoisk.
|
google/wit
|
google
|
Wikipedia-based Image Text (WIT) Dataset is a large multimodal multilingual dataset.
WIT is composed of a curated set of 37.6 million entity rich image-text examples with 11.5 million unique images across 108 Wikipedia languages.
Its size enables WIT to be used as a pretraining dataset for multimodal machine learning models.
|
wikimedia/wit_base
|
wikimedia
|
Dataset Card for WIT
Dataset Summary
Wikimedia's version of the Wikipedia-based Image Text (WIT) Dataset, a large multimodal multilingual dataset.
From the official blog post:
The core training data is taken from the Wikipedia Image-Text (WIT) Dataset, a large curated set of more than 37 million image-text associations extracted from Wikipedia articles in 108 languages that was recently released by Google Research.
The WIT dataset offers extremely valuable data… See the full description on the dataset page: https://huggingface.co/datasets/wikimedia/wit_base.
|
openlifescienceai/medmcqa
|
openlifescienceai
|
Dataset Card for MedMCQA
Dataset Summary
MedMCQA is a large-scale, Multiple-Choice Question Answering (MCQA) dataset designed to address real-world medical entrance exam questions.
MedMCQA has more than 194k high-quality AIIMS & NEET PG entrance exam MCQs covering 2.4k healthcare topics and 21 medical subjects, collected with an average token length of 12.77 and high topical diversity.
Each sample contains a question, correct answer(s), and other options which… See the full description on the dataset page: https://huggingface.co/datasets/openlifescienceai/medmcqa.
|
Fhrozen/FSD50k
|
Fhrozen
|
Freesound Dataset 50k (FSD50K)
Important
This dataset is a copy of the original one located at Zenodo.
Citation
If you use the FSD50K dataset, or part of it, please cite our paper:
Eduardo Fonseca, Xavier Favory, Jordi Pons, Frederic Font, Xavier Serra. "FSD50K: an Open Dataset of Human-Labeled Sound Events", arXiv 2020.
Data curators
Eduardo Fonseca, Xavier Favory, Jordi Pons, Mercedes Collado, Ceren Can, Rachit Gupta, Javier… See the full description on the dataset page: https://huggingface.co/datasets/Fhrozen/FSD50k.
|
HugoLaurencon/libri_light
|
HugoLaurencon
|
Libri-light is a large dataset of 60K hours of unlabelled speech from audiobooks in English.
It is a benchmark for the training of automatic speech recognition (ASR) systems with limited or no supervision.
|
HuggingFaceM4/charades
|
HuggingFaceM4
|
Charades is a dataset composed of 9848 videos of daily indoor activities collected through Amazon Mechanical Turk. 267 different users were presented with a sentence, that includes objects and actions from a fixed vocabulary, and they recorded a video acting out the sentence (like in a game of Charades). The dataset contains 66,500 temporal annotations for 157 action classes, 41,104 labels for 46 object classes, and 27,847 textual descriptions of the videos.
|
pere/italian_tweets_500k
|
pere
|
Italian tweets.
|
LIUM/tedlium
|
LIUM
|
Dataset Card for tedlium
Dataset Summary
The TED-LIUM corpus is English-language TED talks, with transcriptions, sampled at 16kHz. The three releases of the corpus range from 118 to 452 hours of transcribed speech data.
Example
from datasets import load_dataset
tedlium = load_dataset("LIUM/tedlium", "release1") # for Release 1
# see structure
print(tedlium)
# load audio sample on the fly
audio_input = tedlium["train"][0]["audio"] # first decoded… See the full description on the dataset page: https://huggingface.co/datasets/LIUM/tedlium.
|
Team-PIXEL/rendered-wikipedia-english
|
Team-PIXEL
|
Dataset Card for Team-PIXEL/rendered-wikipedia-english
Dataset Summary
This dataset contains the full English Wikipedia from February 1, 2018, rendered into images of 16x8464 resolution.
The original text dataset was built from a Wikipedia dump. Each example in the original text dataset contained the content of one full Wikipedia article with cleaning to strip markdown and unwanted sections (references, etc.). Each rendered example contains a subset of one full… See the full description on the dataset page: https://huggingface.co/datasets/Team-PIXEL/rendered-wikipedia-english.
|
mwritescode/slither-audited-smart-contracts
|
mwritescode
|
This dataset contains source code and deployed bytecode for Solidity Smart Contracts that have been verified on Etherscan.io, along with a classification of their vulnerabilities according to the Slither static analysis framework.
|
WorkInTheDark/FairytaleQA
|
WorkInTheDark
|
FairytaleQA dataset, an open-source dataset focusing on comprehension of narratives, targeting students from kindergarten to eighth grade. The FairytaleQA dataset is annotated by education experts based on an evidence-based theoretical framework. It consists of 10,580 explicit and implicit questions derived from 278 children-friendly stories, covering seven types of narrative elements or relations.
|
strombergnlp/nlpcc-stance
|
strombergnlp
|
This is a stance prediction dataset in Chinese.
The data is that from a shared task, stance detection in Chinese microblogs, in NLPCC-ICCPOL 2016. It covers Task A, a mandatory supervised task which detects stance towards five targets of interest with given labeled data.
|
HuggingFaceM4/yttemporal180m
|
HuggingFaceM4
|
YT-Temporal-180M, a large and diverse dataset of 6 million videos (spanning 180M extracted frames)
that covers diverse topics.
|