id | author | description
---|---|---|
deepmind/narrativeqa
|
deepmind
|
Dataset Card for Narrative QA
Dataset Summary
NarrativeQA is an English-language dataset of stories and corresponding questions designed to test reading comprehension, especially on long documents.
Supported Tasks and Leaderboards
The dataset is used to test reading comprehension. There are 2 tasks proposed in the paper: "summaries only" and "stories only", depending on whether the human-generated summary or the full story text is used to answer… See the full description on the dataset page: https://huggingface.co/datasets/deepmind/narrativeqa.
|
google-research-datasets/natural_questions
|
google-research-datasets
|
Dataset Card for Natural Questions
Dataset Summary
The NQ corpus contains questions from real users, and it requires QA systems to
read and comprehend an entire Wikipedia article that may or may not contain the
answer to the question. The inclusion of real user questions, and the
requirement that solutions should read an entire page to find the answer, cause
NQ to be a more realistic and challenging task than prior QA datasets.
Supported Tasks and… See the full description on the dataset page: https://huggingface.co/datasets/google-research-datasets/natural_questions.
|
google-research-datasets/nq_open
|
google-research-datasets
|
Dataset Card for nq_open
Dataset Summary
The NQ-Open task, introduced by Lee et al. (2019),
is an open domain question answering benchmark that is derived from Natural Questions.
The goal is to predict an English answer string for an input English question.
All questions can be answered using the contents of English Wikipedia.
Supported Tasks and Leaderboards
Open Domain Question-Answering,
EfficientQA Leaderboard:… See the full description on the dataset page: https://huggingface.co/datasets/google-research-datasets/nq_open.
|
malmaud/onestop_qa
|
malmaud
|
Dataset Card for OneStopQA
Dataset Summary
OneStopQA is a multiple choice reading comprehension dataset annotated according to the STARC (Structured Annotations for Reading Comprehension) scheme. The reading materials are Guardian articles taken from the OneStopEnglish corpus. Each article comes in three difficulty levels, Elementary, Intermediate and Advanced. Each paragraph is annotated with three multiple choice reading comprehension questions. The reading… See the full description on the dataset page: https://huggingface.co/datasets/malmaud/onestop_qa.
|
openai/openai_humaneval
|
openai
|
Dataset Card for OpenAI HumanEval
Dataset Summary
The HumanEval dataset released by OpenAI includes 164 programming problems with a function signature, docstring, body, and several unit tests. They were handwritten to ensure they are not included in the training set of code generation models.
Supported Tasks and Leaderboards
Languages
The programming problems are written in Python and contain English natural text in comments and… See the full description on the dataset page: https://huggingface.co/datasets/openai/openai_humaneval.
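A minimal sketch of how one problem's unit tests can be exercised, assuming the Hugging Face `datasets` library and the field names published on the dataset page (`prompt`, `canonical_solution`, `test`, `entry_point`); the canonical solution stands in for a model completion here, and generated code should normally be run in a sandbox:

```python
# Hedged sketch: assemble one HumanEval problem into a runnable program and
# call its check() test harness. Field names follow the published schema.
from datasets import load_dataset

problems = load_dataset("openai/openai_humaneval", split="test")
problem = problems[0]

completion = problem["canonical_solution"]  # stand-in for a model's output
program = (
    problem["prompt"]          # imports, signature, docstring
    + completion               # function body
    + "\n" + problem["test"]   # defines check(candidate)
    + f"\ncheck({problem['entry_point']})\n"
)
exec(program, {"__name__": "__main__"})  # raises AssertionError on failure
print("passed:", problem["task_id"])
```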
|
allenai/openbookqa
|
allenai
|
Dataset Card for OpenBookQA
Dataset Summary
OpenBookQA aims to promote research in advanced question-answering, probing a deeper understanding of both the topic
(with salient facts summarized as an open book, also provided with the dataset) and the language it is expressed in. In
particular, it contains questions that require multi-step reasoning, use of additional common and commonsense knowledge,
and rich text comprehension.
OpenBookQA is a new kind of… See the full description on the dataset page: https://huggingface.co/datasets/allenai/openbookqa.
|
Skylion007/openwebtext
|
Skylion007
|
An open-source replication of the WebText dataset from OpenAI.
|
Helsinki-NLP/opus_books
|
Helsinki-NLP
|
Dataset Card for OPUS Books
Dataset Summary
This is a collection of copyright free books aligned by Andras Farkas, which are available from http://www.farkastranslations.com/bilingual_books.php
Note that the texts are rather dated due to copyright issues and that some of them are manually reviewed (check the meta-data at the top of the corpus files in XML). The source is multilingually aligned, which is available from… See the full description on the dataset page: https://huggingface.co/datasets/Helsinki-NLP/opus_books.
|
oscar-corpus/oscar
|
oscar-corpus
|
The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.
|
google-research-datasets/paws
|
google-research-datasets
|
Dataset Card for PAWS: Paraphrase Adversaries from Word Scrambling
Dataset Summary
PAWS: Paraphrase Adversaries from Word Scrambling
This dataset contains 108,463 human-labeled and 656k noisily labeled pairs that feature the importance of modeling structure, context, and word order information for the problem of paraphrase identification. The dataset has two subsets, one based on Wikipedia and the other one based on the Quora Question Pairs (QQP) dataset.
For… See the full description on the dataset page: https://huggingface.co/datasets/google-research-datasets/paws.
|
peixiang/pec
|
peixiang
|
A dataset of around 350K persona-based empathetic conversations. Each speaker is associated with a persona, which comprises multiple persona sentences. The response of each conversation is empathetic.
|
peoples-daily-ner/peoples_daily_ner
|
peoples-daily-ner
|
People's Daily NER Dataset is a commonly used dataset for Chinese NER, with
text from People's Daily (人民日报), the largest official newspaper.
The dataset uses the BIO tagging scheme. Entity types are: PER (person), ORG (organization)
and LOC (location).
|
deepmind/pg19
|
deepmind
|
This repository contains the PG-19 language modeling benchmark.
It includes a set of books extracted from the Project Gutenberg books library that were published before 1919.
It also contains metadata of book titles and publication dates.
PG-19 is over double the size of the Billion Word benchmark and contains documents that are 20X longer, on average, than the WikiText long-range language modelling benchmark.
Books are partitioned into a train, validation, and test set. Book metadata is stored in metadata.csv which contains (book_id, short_book_title, publication_date).
Unlike prior benchmarks, we do not constrain the vocabulary size --- i.e. mapping rare words to an UNK token --- but instead release the data as an open-vocabulary benchmark. The only processing of the text that has been applied is the removal of boilerplate license text, and the mapping of offensive discriminatory words as specified by Ofcom to placeholder tokens. Users are free to model the data at the character-level, subword-level, or via any mechanism that can model an arbitrary string of text.
To compare models we propose to continue measuring the word-level perplexity, by calculating the total likelihood of the dataset (via any chosen subword vocabulary or character-based scheme) divided by the number of tokens --- specified below in the dataset statistics table.
One could use this dataset for benchmarking long-range language models, or use it to pre-train for other natural language processing tasks which require long-range reasoning, such as LAMBADA or NarrativeQA. We would not recommend using this dataset to train a general-purpose language model, e.g. for applications to a production-system dialogue agent, due to the dated linguistic style of old texts and the inherent biases present in historical writing.
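A small illustration of the word-level perplexity convention described above, assuming the total negative log-likelihood has already been computed under some subword or character scheme; the counts below are placeholders, not dataset statistics:

```python
import math

# Placeholder values: sum of -log p(token) over the evaluation text (in nats)
# and the number of *word* tokens in that same text.
total_nll_nats = 1.5e8
num_word_tokens = 7.0e6

# Word-level perplexity: exponentiate the per-word average negative
# log-likelihood, regardless of the vocabulary used to score the text.
word_level_ppl = math.exp(total_nll_nats / num_word_tokens)
print(f"word-level perplexity: {word_level_ppl:.1f}")
```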
|
ybisk/piqa
|
ybisk
|
To apply eyeshadow without a brush, should I use a cotton swab or a toothpick?
Questions requiring this kind of physical commonsense pose a challenge to state-of-the-art
natural language understanding systems. The PIQA dataset introduces the task of physical commonsense reasoning
and a corresponding benchmark dataset Physical Interaction: Question Answering or PIQA.
Physical commonsense knowledge is a major challenge on the road to true AI-completeness,
including robots that interact with the world and understand natural language.
PIQA focuses on everyday situations with a preference for atypical solutions.
The dataset is inspired by instructables.com, which provides users with instructions on how to build, craft,
bake, or manipulate objects using everyday materials.
The underlying task is formulated as multiple choice question answering:
given a question `q` and two possible solutions `s1`, `s2`, a model or
a human must choose the most appropriate solution, of which exactly one is correct.
The dataset is further cleaned of basic artifacts using the AFLite algorithm, which is an improvement over
adversarial filtering. The dataset contains 16,000 examples for training, 2,000 for development and 3,000 for testing.
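A minimal loading sketch for this formulation, assuming the Hugging Face `datasets` library; the field names (`goal`, `sol1`, `sol2`, `label`) follow the schema published on the dataset page:

```python
from datasets import load_dataset

piqa = load_dataset("ybisk/piqa", split="train")
example = piqa[0]

print("q :", example["goal"])
print("s1:", example["sol1"])
print("s2:", example["sol2"])
print("gold:", example["label"])  # 0 selects sol1, 1 selects sol2
```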
|
ncbi/pubmed
|
ncbi
|
NLM produces a baseline set of MEDLINE/PubMed citation records in XML format for download on an annual basis. The annual baseline is released in December of each year. Each day, NLM produces update files that include new, revised and deleted citations. See our documentation page for more information.
|
allenai/qasc
|
allenai
|
Dataset Card for "qasc"
Dataset Summary
QASC is a question-answering dataset with a focus on sentence composition. It consists of 9,980 8-way multiple-choice
questions about grade school science (8,134 train, 926 dev, 920 test), and comes with a corpus of 17M sentences.
Supported Tasks and Leaderboards
More Information Needed
Languages
More Information Needed
Dataset Structure
Data Instances… See the full description on the dataset page: https://huggingface.co/datasets/allenai/qasc.
|
allenai/qasper
|
allenai
|
A dataset containing 1585 papers with 5049 information-seeking questions asked by regular readers of NLP papers, and answered by a separate set of NLP practitioners.
|
quora-competitions/quora
|
quora-competitions
|
Dataset Card for "quora"
Dataset Summary
The Quora dataset is composed of question pairs, and the task is to determine if the questions are paraphrases of each other (have the same meaning).
Supported Tasks and Leaderboards
More Information Needed
Languages
More Information Needed
Dataset Structure
Data Instances
default
Size of downloaded dataset files: 58.17 MB
Size of the generated… See the full description on the dataset page: https://huggingface.co/datasets/quora-competitions/quora.
|
mbien/recipe_nlg
|
mbien
|
The dataset contains 2,231,142 cooking recipes (over 2 million). It is processed more carefully and provides more samples than any other dataset in the area.
|
webis/tldr-17
|
webis
|
This corpus contains preprocessed posts from the Reddit dataset.
The dataset consists of 3,848,330 posts with an average length of 270 words for content,
and 28 words for the summary.
Features include the strings: author, body, normalizedBody, content, summary, subreddit, subreddit_id.
The content field is used as the document and the summary field as the summary.
|
ucirvine/reuters21578
|
ucirvine
|
The Reuters-21578 dataset is one of the most widely used data collections for text
categorization research. It was collected from the Reuters financial newswire service in 1987.
|
allenai/ropes
|
allenai
|
Dataset Card for ROPES
Dataset Summary
ROPES (Reasoning Over Paragraph Effects in Situations) is a QA dataset which tests a system's ability to apply knowledge from a passage of text to a new situation. A system is presented a background passage containing a causal or qualitative relation(s) (e.g., "animal pollinators increase efficiency of fertilization in flowers"), a novel situation that uses this background, and questions that require reasoning about effects… See the full description on the dataset page: https://huggingface.co/datasets/allenai/ropes.
|
kuznetsoffandrey/sberquad
|
kuznetsoffandrey
|
Dataset Card for sberquad
Dataset Summary
Sber Question Answering Dataset (SberQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
This Russian analogue of SQuAD was originally presented at the Sberbank Data Science Journey 2017.
Supported Tasks and Leaderboards… See the full description on the dataset page: https://huggingface.co/datasets/kuznetsoffandrey/sberquad.
|
airesearch/scb_mt_enth_2020
|
airesearch
|
scb-mt-en-th-2020: A Large English-Thai Parallel Corpus
The primary objective of our work is to build a large-scale English-Thai dataset for machine translation.
We construct an English-Thai machine translation dataset with over 1 million segment pairs, curated from various sources,
namely news, Wikipedia articles, SMS messages, task-based dialogs, web-crawled data and government documents.
The methodology for gathering data, building parallel texts and removing noisy sentence pairs is presented in a reproducible manner.
We train machine translation models based on this dataset. Our models' performance is comparable to that of
Google Translation API (as of May 2020) for Thai-English and outperform Google when the Open Parallel Corpus (OPUS) is
included in the training data for both Thai-English and English-Thai translation.
The dataset, pre-trained models, and source code to reproduce our work are available for public use.
|
google-research-datasets/schema_guided_dstc8
|
google-research-datasets
|
The Schema-Guided Dialogue dataset (SGD) was developed for the Dialogue State Tracking task of the Eighth Dialogue Systems Technology Challenge (DSTC8).
The SGD dataset consists of over 18k annotated multi-domain, task-oriented conversations between a human and a virtual assistant.
These conversations involve interactions with services and APIs spanning 17 domains, ranging from banks and events to media, calendar, travel, and weather.
For most of these domains, the SGD dataset contains multiple different APIs, many of which have overlapping functionalities but different interfaces,
which reflects common real-world scenarios.
|
armanc/scientific_papers
|
armanc
|
The scientific_papers dataset contains two sets of long and structured documents,
obtained from the ArXiv and PubMed OpenAccess repositories.
Both the "arxiv" and "pubmed" subsets have the following features:
- article: the body of the document, paragraphs separated by "\n".
- abstract: the abstract of the document, paragraphs separated by "\n".
- section_names: titles of sections, separated by "\n".
|
allenai/scifact
|
allenai
|
SciFact is a dataset of 1.4K expert-written scientific claims paired with evidence-containing abstracts, annotated with labels and rationales.
|
allenai/sciq
|
allenai
|
Dataset Card for "sciq"
Dataset Summary
The SciQ dataset contains 13,679 crowdsourced science exam questions about Physics, Chemistry and Biology, among others. The questions are in multiple-choice format with 4 answer options each. For the majority of the questions, an additional paragraph with supporting evidence for the correct answer is provided.
Supported Tasks and Leaderboards
More Information Needed
Languages
More Information… See the full description on the dataset page: https://huggingface.co/datasets/allenai/sciq.
|
kyunghyuncho/search_qa
|
kyunghyuncho
|
We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering. Unlike recently released datasets, such as DeepMind
CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to reflect a full pipeline of general question-answering. That is, we start not from an existing article
and generate a question-answer pair, but start from an existing question-answer pair, crawled from J! Archive, and augment it with text snippets retrieved by Google.
Following this approach, we built SearchQA, which consists of more than 140k question-answer pairs with each pair having 49.6 snippets on average. Each question-answer-context
tuple of the SearchQA comes with additional meta-data such as the snippet's URL, which we believe will be valuable resources for future research. We conduct human evaluation
as well as test two baseline methods, one simple word selection and the other deep learning based, on the SearchQA. We show that there is a meaningful gap between the human
and machine performances. This suggests that the proposed dataset could well serve as a benchmark for question-answering.
|
stanfordnlp/sentiment140
|
stanfordnlp
|
Sentiment140 consists of Twitter messages with emoticons, which are used as noisy labels for
sentiment classification. For more detailed information please refer to the paper.
|
ucirvine/sms_spam
|
ucirvine
|
Dataset Card for [Dataset Name]
Dataset Summary
The SMS Spam Collection v.1 is a public set of labeled SMS messages that have been collected for mobile phone spam research.
It is a single collection of 5,574 real, non-encoded English messages, tagged as either legitimate (ham) or spam.
Supported Tasks and Leaderboards
[More Information Needed]
Languages
English
Dataset Structure
Data… See the full description on the dataset page: https://huggingface.co/datasets/ucirvine/sms_spam.
|
stanfordnlp/snli
|
stanfordnlp
|
Dataset Card for SNLI
Dataset Summary
The SNLI corpus (version 1.0) is a collection of 570k human-written English sentence pairs manually labeled for balanced classification with the labels entailment, contradiction, and neutral, supporting the task of natural language inference (NLI), also known as recognizing textual entailment (RTE).
Supported Tasks and Leaderboards
Natural Language Inference (NLI), also known as Recognizing Textual Entailment… See the full description on the dataset page: https://huggingface.co/datasets/stanfordnlp/snli.
|
google/speech_commands
|
google
|
This is a set of one-second .wav audio files, each containing a single spoken
English word or background noise. These words are from a small set of commands, and are spoken by a
variety of different speakers. This data set is designed to help train simple
machine learning models. This dataset is covered in more detail at
[https://arxiv.org/abs/1804.03209](https://arxiv.org/abs/1804.03209).
Version 0.01 of the data set (configuration `"v0.01"`) was released on August 3rd 2017 and contains
64,727 audio files.
In version 0.01 thirty different words were recorded: "Yes", "No", "Up", "Down", "Left",
"Right", "On", "Off", "Stop", "Go", "Zero", "One", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine",
"Bed", "Bird", "Cat", "Dog", "Happy", "House", "Marvin", "Sheila", "Tree", "Wow".
In version 0.02 more words were added: "Backward", "Forward", "Follow", "Learn", "Visual".
In both versions, ten of them are used as commands by convention: "Yes", "No", "Up", "Down", "Left",
"Right", "On", "Off", "Stop", "Go". Other words are considered to be auxiliary (in current implementation
it is marked by `True` value of `"is_unknown"` feature). Their function is to teach a model to distinguish core words
from unrecognized ones.
The `_silence_` class contains a set of longer audio clips that are either recordings or
a mathematical simulation of noise.
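A short loading sketch for the `"v0.01"` configuration mentioned above, assuming the `datasets` library with audio decoding enabled; `is_unknown` is the auxiliary-word marker described in the summary:

```python
from datasets import load_dataset

commands = load_dataset("google/speech_commands", "v0.01", split="train")
clip = commands[0]

print("label:", clip["label"], "is_unknown:", clip["is_unknown"])
audio = clip["audio"]  # decoded waveform plus sampling rate
print(audio["sampling_rate"], "Hz,", len(audio["array"]), "samples")
```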
|
xlangai/spider
|
xlangai
|
Dataset Card for Spider
Dataset Summary
Spider is a large-scale, complex and cross-domain semantic parsing and text-to-SQL dataset annotated by 11 Yale students.
The goal of the Spider challenge is to develop natural language interfaces to cross-domain databases.
Supported Tasks and Leaderboards
The leaderboard can be seen at https://yale-lily.github.io/spider
Languages
The text in the dataset is in English.
Dataset… See the full description on the dataset page: https://huggingface.co/datasets/xlangai/spider.
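A quick look at one question/SQL pair, assuming the `datasets` library; the column names used below (`db_id`, `question`, `query`) follow the schema shown on the dataset page:

```python
from datasets import load_dataset

spider = load_dataset("xlangai/spider", split="train")
example = spider[0]

print("DB      :", example["db_id"])
print("Question:", example["question"])
print("SQL     :", example["query"])
```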
|
crux82/squad_it
|
crux82
|
Dataset Card for "squad_it"
Dataset Summary
SQuAD-it is derived from the SQuAD dataset and it is obtained through semi-automatic translation of the SQuAD dataset
into Italian. It represents a large-scale dataset for open question answering processes on factoid questions in Italian.
The dataset contains more than 60,000 question/answer pairs derived from the original English dataset. The dataset is
split into training and test sets to support the replicability of… See the full description on the dataset page: https://huggingface.co/datasets/crux82/squad_it.
|
KorQuAD/squad_kor_v1
|
KorQuAD
|
Dataset Card for KorQuAD v1.0
Dataset Summary
KorQuAD 1.0 is a large-scale question-and-answer dataset constructed for Korean machine reading comprehension. We investigate the dataset to understand the distribution of answers and the types of reasoning required to answer the questions. This dataset benchmarks the data generating process of SQuAD v1.0 to meet the standard.
Supported Tasks and Leaderboards
question-answering
Languages… See the full description on the dataset page: https://huggingface.co/datasets/KorQuAD/squad_kor_v1.
|
KorQuAD/squad_kor_v2
|
KorQuAD
|
KorQuAD 2.0 is a Korean question answering dataset consisting of a total of 100,000+ pairs. There are three major differences from KorQuAD 1.0, which is the standard Korean Q&A dataset. The first is that a given document is a whole Wikipedia page, not just one or two paragraphs. Second, because the document also contains tables and lists, it is necessary to understand the document structured with HTML tags. Finally, the answer can be a long text covering not only word or phrase units, but also paragraphs, tables, and lists. As a baseline model, BERT Multilingual, released by Google as open source, is used. It achieves a 46.0% F1 score, very low compared to the 85.7% human F1 score, indicating that this dataset is a challenging task. Additionally, we increased the performance by no-answer data augmentation. Through the distribution of this data, we intend to extend the limits of MRC, previously restricted to plain text, to real-world tasks of various lengths and formats.
|
PhilipMay/stsb_multi_mt
|
PhilipMay
|
Dataset Card for STSb Multi MT
Dataset Summary
STS Benchmark comprises a selection of the English datasets used in the STS tasks organized
in the context of SemEval between 2012 and 2017. The selection of datasets include text from
image captions, news headlines and user forums. (source)
These are different multilingual translations and the English original of the STSbenchmark dataset. Translation has been done with deepl.com. It can be used to train sentence… See the full description on the dataset page: https://huggingface.co/datasets/PhilipMay/stsb_multi_mt.
|
ufldl-stanford/svhn
|
ufldl-stanford
|
Dataset Card for Street View House Numbers
Dataset Summary
SVHN is a real-world image dataset for developing machine learning and object recognition algorithms with minimal requirement on data preprocessing and formatting. It can be seen as similar in flavor to MNIST (e.g., the images are of small cropped digits), but incorporates an order of magnitude more labeled data (over 600,000 digit images) and comes from a significantly harder, unsolved, real world problem… See the full description on the dataset page: https://huggingface.co/datasets/ufldl-stanford/svhn.
|
community-datasets/swahili_news
|
community-datasets
|
Dataset Card for Swahili: News Classification Dataset
Dataset Summary
Swahili is spoken by 100-150 million people across East Africa. In Tanzania, it is one of two national languages (the other is English) and it is the official language of instruction in all schools. News in Swahili is an important part of the media sphere in Tanzania.
News contributes to education, technology, and the economic growth of a country, and news in local languages plays an important… See the full description on the dataset page: https://huggingface.co/datasets/community-datasets/swahili_news.
|
rcds/swiss_judgment_prediction
|
rcds
|
Swiss-Judgment-Prediction is a multilingual, diachronic dataset of 85K Swiss Federal Supreme Court (FSCS) cases annotated with the respective binarized judgment outcome (approval/dismissal), posing a challenging text classification task. We also provide additional metadata, i.e., the publication year, the legal area and the canton of origin per case, to promote robustness and fairness studies on the critical area of legal NLP.
|
community-datasets/tashkeela
|
community-datasets
|
Dataset Card for Tashkeela
Dataset Summary
It contains 75 million fully vocalized words drawn mainly
from 97 books of classical and modern Arabic.
Supported Tasks and Leaderboards
[More Information Needed]
Languages
The dataset is based on Arabic.
Dataset Structure
Data Instances
{'book':… See the full description on the dataset page: https://huggingface.co/datasets/community-datasets/tashkeela.
|
google-research-datasets/taskmaster2
|
google-research-datasets
|
Taskmaster is a dataset for goal-oriented conversations. The Taskmaster-2 dataset consists of 17,289 dialogs in seven domains: restaurants, food ordering, movies, hotels, flights, music and sports. Unlike Taskmaster-1, which includes both written "self-dialogs" and spoken two-person dialogs, Taskmaster-2 consists entirely of spoken two-person dialogs. In addition, while Taskmaster-1 is almost exclusively task-based, Taskmaster-2 contains a good number of search- and recommendation-oriented dialogs. All dialogs in this release were created using a Wizard of Oz (WOz) methodology in which crowdsourced workers played the role of a 'user' and trained call center operators played the role of the 'assistant'. In this way, users were led to believe they were interacting with an automated system that “spoke” using text-to-speech (TTS) even though it was in fact a human behind the scenes. As a result, users could express themselves however they chose in the context of an automated interface.
|
Helsinki-NLP/tatoeba
|
Helsinki-NLP
|
This is a collection of translated sentences from Tatoeba:
359 languages, 3,403 bitexts
total number of files: 750
total number of tokens: 65.54M
total number of sentence fragments: 8.96M
|
tmu-nlp/thai_toxicity_tweet
|
tmu-nlp
|
Thai Toxicity Tweet Corpus contains 3,300 tweets annotated by humans with guidelines including a 44-word dictionary.
The authors obtained 2,027 toxic and 1,273 non-toxic tweets, labeled by three annotators. The corpus
analysis indicates that tweets that include toxic words are not always toxic. Further, a tweet is more likely to be toxic if it contains
toxic words used in their original meaning. Moreover, disagreements in annotation are primarily due to sarcasm, unclear
targets, and word sense ambiguity.
Notes from the data cleaner: the data was added to [huggingface/datasets](https://www.github.com/huggingface/datasets) in December 2020.
By that time, 506 of the tweets were no longer publicly available. We denote these by `TWEET_NOT_FOUND` in `tweet_text`.
Processing can be found at [this PR](https://github.com/tmu-nlp/ThaiToxicityTweetCorpus/pull/1).
|
EleutherAI/pile
|
EleutherAI
|
The Pile is a 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality
datasets combined together.
|
CogComp/trec
|
CogComp
|
The Text REtrieval Conference (TREC) Question Classification dataset contains 5,500 labeled questions in the training set and another 500 in the test set.
The dataset has 6 coarse class labels and 50 fine class labels. The average sentence length is 10 words and the vocabulary size is 8,700.
The data are collected from four sources: 4,500 English questions published by USC (Hovy et al., 2001), about 500 manually constructed questions for a few rare classes, 894 TREC 8 and TREC 9 questions, and 500 questions from TREC 10 which serve as the test set. These questions were manually labeled.
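A loading sketch, assuming the `datasets` library; the coarse/fine label column names below reflect the current Hub schema and may differ in older script versions:

```python
from datasets import load_dataset

trec = load_dataset("CogComp/trec", split="train")
example = trec[0]

coarse = trec.features["coarse_label"].names  # 6 coarse classes
fine = trec.features["fine_label"].names      # 50 fine classes
print(example["text"])
print(coarse[example["coarse_label"]], "/", fine[example["fine_label"]])
```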
|
savasy/ttc4900
|
savasy
|
Dataset Card for TTC4900: A Benchmark Data for Turkish Text Categorization
Dataset Summary
The dataset is taken from the Kemik group.
The data are pre-processed for text categorization: collocations are found, the character set is corrected, and so forth.
We named TTC4900 by mimicking the name convention of TTC 3600 dataset shared by the study "A Knowledge-poor Approach to Turkish Text Categorization with a Comparative Analysis, Proceedings of CICLING 2014, Springer… See the full description on the dataset page: https://huggingface.co/datasets/savasy/ttc4900.
|
fthbrmnby/turkish_product_reviews
|
fthbrmnby
|
Dataset Card for Turkish Product Reviews
Dataset Summary
This Turkish Product Reviews Dataset contains 235,165 product reviews collected online. There are 220,284 positive and 14,881 negative reviews.
Supported Tasks and Leaderboards
[More Information Needed]
Languages
The dataset is based on Turkish.
Dataset Structure
Data Instances
Example 1:
sentence: beklentimin altında bir ürün kaliteli değil
sentiment:… See the full description on the dataset page: https://huggingface.co/datasets/fthbrmnby/turkish_product_reviews.
|
cardiffnlp/tweet_eval
|
cardiffnlp
|
Dataset Card for tweet_eval
Dataset Summary
TweetEval consists of seven heterogeneous tasks on Twitter, all framed as multi-class tweet classification. The tasks include irony, hate, offensive, stance, emoji, emotion, and sentiment. All tasks have been unified into the same benchmark, with each dataset presented in the same format and with fixed training, validation and test splits.
Supported Tasks and Leaderboards
text_classification: The dataset… See the full description on the dataset page: https://huggingface.co/datasets/cardiffnlp/tweet_eval.
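Each task is exposed as its own configuration; a minimal sketch loading the `emotion` task, assuming the `datasets` library (configuration names mirror the task names listed above):

```python
from datasets import load_dataset

emotion = load_dataset("cardiffnlp/tweet_eval", "emotion", split="train")
label_names = emotion.features["label"].names
example = emotion[0]
print(example["text"], "->", label_names[example["label"]])
```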
|
google-research-datasets/tydiqa
|
google-research-datasets
|
Dataset Card for "tydiqa"
Dataset Summary
TyDi QA is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs.
The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language
expresses -- such that we expect models performing well on this set to generalize across a large number of the languages
in the world. It contains language phenomena that would not be found… See the full description on the dataset page: https://huggingface.co/datasets/google-research-datasets/tydiqa.
|
CSTR-Edinburgh/vctk
|
CSTR-Edinburgh
|
The CSTR VCTK Corpus includes speech data uttered by 110 English speakers with various accents.
|
HDLTex/web_of_science
|
HDLTex
|
The Web Of Science (WOS) dataset is a collection of data of published papers
available from the Web of Science. WOS has been released in three versions: WOS-46985, WOS-11967 and WOS-5736. WOS-46985 is the
full dataset. WOS-11967 and WOS-5736 are two subsets of WOS-46985.
|
hltcoe/weibo_ner
|
hltcoe
|
Tags: PER (person name, 人名), LOC (location name, 地点名), GPE (geo-political / administrative region name, 行政区名), ORG (organization name, 机构名)
Label | Tag | Meaning
---|---|---
PER | PER.NAM | specific personal name (e.g., 张三)
PER | PER.NOM | generic reference or category (e.g., 穷人, "the poor")
LOC | LOC.NAM | specific location name (e.g., 紫玉山庄)
LOC | LOC.NOM | generic location (e.g., 大峡谷, 宾馆)
GPE | GPE.NAM | administrative region name (e.g., 北京)
ORG | ORG.NAM | specific organization name (e.g., 通惠医院)
ORG | ORG.NOM | generic or collective organization name (e.g., 文艺公司)
|
bea2019st/wi_locness
|
bea2019st
|
Write & Improve (Yannakoudakis et al., 2018) is an online web platform that assists non-native
English students with their writing. Specifically, students from around the world submit letters,
stories, articles and essays in response to various prompts, and the W&I system provides instant
feedback. Since W&I went live in 2014, W&I annotators have manually annotated some of these
submissions and assigned them a CEFR level.
|
CUHK-CSE/wider_face
|
CUHK-CSE
|
WIDER FACE dataset is a face detection benchmark dataset, of which images are
selected from the publicly available WIDER dataset. We choose 32,203 images and
label 393,703 faces with a high degree of variability in scale, pose and
occlusion as depicted in the sample images. WIDER FACE dataset is organized
based on 61 event classes. For each event class, we randomly select 40%/10%/50%
data as training, validation and testing sets. We adopt the same evaluation
metric employed in the PASCAL VOC dataset. Similar to MALF and Caltech datasets,
we do not release bounding box ground truth for the test images. Users are
required to submit final prediction files, which we shall proceed to evaluate.
|
michaelauli/wiki_bio
|
michaelauli
|
This dataset gathers 728,321 biographies from Wikipedia. It aims at evaluating text generation
algorithms. For each article, we provide the first paragraph and the infobox (both tokenized).
For each article, we extracted the first paragraph (text) and the infobox (structured data). Each
infobox is encoded as a list of (field name, field value) pairs. We used Stanford CoreNLP
(http://stanfordnlp.github.io/CoreNLP/) to preprocess the data, i.e. we broke the text into
sentences and tokenized both the text and the field values. The dataset was randomly split into
three subsets: train (80%), valid (10%), test (10%).
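A minimal inspection sketch, assuming the `datasets` library; the raw example is printed rather than guessing at the exact nesting of the infobox (field name, field value) pairs:

```python
from datasets import load_dataset

wiki_bio = load_dataset("michaelauli/wiki_bio", split="train")
print(wiki_bio.column_names)
print(wiki_bio[0])  # first paragraph plus the tokenized infobox pairs
```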
|
microsoft/wiki_qa
|
microsoft
|
Dataset Card for "wiki_qa"
Dataset Summary
Wiki Question Answering corpus from Microsoft.
The WikiQA corpus is a publicly available set of question and sentence pairs, collected and annotated for research on open-domain question answering.
Supported Tasks and Leaderboards
More Information Needed
Languages
More Information Needed
Dataset Structure
Data Instances
default
Size of downloaded… See the full description on the dataset page: https://huggingface.co/datasets/microsoft/wiki_qa.
|
unimelb-nlp/wikiann
|
unimelb-nlp
|
Dataset Card for WikiANN
Dataset Summary
WikiANN (sometimes called PAN-X) is a multilingual named entity recognition dataset consisting of Wikipedia articles annotated with LOC (location), PER (person), and ORG (organisation) tags in the IOB2 format. This version corresponds to the balanced train, dev, and test splits of Rahimi et al. (2019), which supports 176 of the 282 languages from the original WikiANN corpus.
Supported Tasks and Leaderboards… See the full description on the dataset page: https://huggingface.co/datasets/unimelb-nlp/wikiann.
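A sketch that decodes the IOB2 tag indices back to strings such as B-PER / I-ORG, assuming the `datasets` library and the English (`en`) configuration as an example language:

```python
from datasets import load_dataset

wikiann = load_dataset("unimelb-nlp/wikiann", "en", split="train")
tag_names = wikiann.features["ner_tags"].feature.names  # IOB2 label strings
example = wikiann[0]

for token, tag_id in zip(example["tokens"], example["ner_tags"]):
    print(f"{token}\t{tag_names[tag_id]}")
```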
|
Salesforce/wikisql
|
Salesforce
|
A large crowd-sourced dataset for developing natural language interfaces for relational databases
|
allenai/winogrande
|
allenai
|
WinoGrande is a new collection of 44k problems, inspired by the Winograd Schema Challenge (Levesque, Davis, and Morgenstern
2011), but adjusted to improve the scale and robustness against dataset-specific bias. Formulated as a
fill-in-the-blank task with binary options, the goal is to choose the right option for a given sentence, which requires
commonsense reasoning.
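A loading sketch for the binary fill-in-the-blank format, assuming the `datasets` library and the `winogrande_xl` configuration (one of several size-based configurations on the Hub; the field names follow the Hub schema):

```python
from datasets import load_dataset

winogrande = load_dataset("allenai/winogrande", "winogrande_xl", split="train")
example = winogrande[0]

print(example["sentence"])           # contains a blank marked with "_"
print("1:", example["option1"])
print("2:", example["option2"])
print("answer:", example["answer"])  # "1" or "2"
```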
|
wmt/wmt14
|
wmt
|
Dataset Card for "wmt14"
Dataset Summary
Warning: There are issues with the Common Crawl corpus data (training-parallel-commoncrawl.tgz):
Non-English files contain many English sentences.
Their "parallel" sentences in English are not aligned: they are uncorrelated with their counterpart.
We have contacted the WMT organizers, and in response, they have indicated that they do not have plans to update the Common Crawl corpus data. Their rationale… See the full description on the dataset page: https://huggingface.co/datasets/wmt/wmt14.
|
cambridgeltl/xcopa
|
cambridgeltl
|
Dataset Card for "xcopa"
Dataset Summary
XCOPA: A Multilingual Dataset for Causal Commonsense Reasoning
The Cross-lingual Choice of Plausible Alternatives dataset is a benchmark to evaluate the ability of machine learning models to transfer commonsense reasoning across
languages. The dataset is the translation and reannotation of the English COPA (Roemmele et al. 2011) and covers 11 languages from 11 families and several areas around
the globe. The dataset is… See the full description on the dataset page: https://huggingface.co/datasets/cambridgeltl/xcopa.
|
google/xquad
|
google
|
Dataset Card for "xquad"
Dataset Summary
XQuAD (Cross-lingual Question Answering Dataset) is a benchmark dataset for evaluating cross-lingual question answering
performance. The dataset consists of a subset of 240 paragraphs and 1190 question-answer pairs from the development set
of SQuAD v1.1 (Rajpurkar et al., 2016) together with their professional translations into ten languages: Spanish, German,
Greek, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, and… See the full description on the dataset page: https://huggingface.co/datasets/google/xquad.
|
google/xtreme
|
google
|
Dataset Card for "xtreme"
Dataset Summary
The Cross-lingual Natural Language Inference (XNLI) corpus is a crowd-sourced collection of 5,000 test and
2,500 dev pairs for the MultiNLI corpus. The pairs are annotated with textual entailment and translated into
14 languages: French, Spanish, German, Greek, Bulgarian, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese,
Hindi, Swahili and Urdu. This results in 112.5k annotated pairs. Each premise can be associated with… See the full description on the dataset page: https://huggingface.co/datasets/google/xtreme.
|
community-datasets/yahoo_answers_topics
|
community-datasets
|
Dataset Card for "Yahoo Answers Topics"
Dataset Summary
[More Information Needed]
Supported Tasks and Leaderboards
[More Information Needed]
Languages
[More Information Needed]
Dataset Structure
Data Instances
[More Information Needed]
Data Fields
[More Information Needed]
Data Splits
[More Information Needed]
Dataset Creation
Curation Rationale… See the full description on the dataset page: https://huggingface.co/datasets/community-datasets/yahoo_answers_topics.
|
fancyzhx/yelp_polarity
|
fancyzhx
|
Dataset Card for "yelp_polarity"
Dataset Summary
Large Yelp Review Dataset.
This is a dataset for binary sentiment classification. We provide a set of 560,000 highly polar yelp reviews for training, and 38,000 for testing.
ORIGIN
The Yelp reviews dataset consists of reviews from Yelp. It is extracted
from the Yelp Dataset Challenge 2015 data. For more information, please
refer to http://www.yelp.com/dataset_challenge
The Yelp reviews polarity dataset is… See the full description on the dataset page: https://huggingface.co/datasets/fancyzhx/yelp_polarity.
|
AI-Sweden/SuperLim
|
AI-Sweden
|
|
Abirate/english_quotes
|
Abirate
|
Dataset Card for English quotes
I-Dataset Summary
english_quotes is a dataset of all the quotes retrieved from goodreads quotes. This dataset can be used for multi-label text classification and text generation. The content of each quote is in English and concerns the domain of datasets for NLP and beyond.
II-Supported Tasks and Leaderboards
Multi-label text classification : The dataset can be used to train a model for text-classification, which… See the full description on the dataset page: https://huggingface.co/datasets/Abirate/english_quotes.
|
AlekseyKorshuk/comedy-scripts
|
AlekseyKorshuk
|
This dataset is designed to generate lyrics with HuggingArtists.
|
Babelscape/wikineural
|
Babelscape
|
Dataset Card for WikiNEuRal dataset
Description
Summary: In a nutshell, WikiNEuRal is built with a novel technique which builds upon a multilingual lexical knowledge base (i.e., BabelNet) and transformer-based architectures (i.e., BERT) to produce high-quality annotations for multilingual NER. It shows consistent improvements of up to 6 span-based F1-score points over state-of-the-art alternative data production methods on common benchmarks for NER. We used this… See the full description on the dataset page: https://huggingface.co/datasets/Babelscape/wikineural.
|
CAiRE/ASCEND
|
CAiRE
|
Dataset Card for ASCEND
Dataset Summary
ASCEND (A Spontaneous Chinese-English Dataset) introduces a high-quality resource of spontaneous multi-turn conversational dialogue Chinese-English code-switching corpus collected in Hong Kong. ASCEND consists of 10.62 hours of spontaneous speech with a total of ~12.3K utterances. The corpus is split into 3 sets: training, validation, and test with a ratio of 8:1:1 while maintaining a balanced gender proportion on each set.… See the full description on the dataset page: https://huggingface.co/datasets/CAiRE/ASCEND.
|
Nexdata/accented_english
|
Nexdata
|
Dataset Card for accented-english
Dataset Summary
The dataset contains 20,000 hours of accented English speech data. It's collected from local English speakers in more than 20 countries, such as USA, China, UK, Germany, Japan, India, France, Spain, Russia, Latin America, covering a variety of pronunciation habits and characteristics, accent severity, and the distribution of speakers. The format is 16kHz, 16bit, uncompressed wav, mono channel. The sentence accuracy… See the full description on the dataset page: https://huggingface.co/datasets/Nexdata/accented_english.
|
DiFronzo/Human_Activity_Recognition
|
DiFronzo
|
Human Activity Recognition (HAR) using smartphones dataset. Classifying the type of movement amongst five categories:
WALKING,
WALKING_UPSTAIRS,
WALKING_DOWNSTAIRS,
SITTING,
STANDING
The experiments have been carried out with a group of 16 volunteers within an age bracket of 19-26 years. Each person performed five activities (WALKING, WALKING_UPSTAIRS, WALKING_DOWNSTAIRS, SITTING, STANDING) wearing a smartphone (Samsung Galaxy S8) in the pocket. Using its embedded accelerometer and gyroscope… See the full description on the dataset page: https://huggingface.co/datasets/DiFronzo/Human_Activity_Recognition.
|
Emanuel/UD_Portuguese-Bosque
|
Emanuel
|
AutoNLP Dataset for project: pos-tag-bosque
Table of contents
Dataset Description
Languages
Dataset Structure
Data Instances
Data Fields
Data Splits
Dataset Description
This dataset has been automatically processed by AutoNLP for project pos-tag-bosque.
Languages
The BCP-47 code for the dataset's language is pt.
Dataset Structure
Data Instances
A sample from this dataset looks as follows:
[
{… See the full description on the dataset page: https://huggingface.co/datasets/Emanuel/UD_Portuguese-Bosque.
|
Fraser/dream-coder
|
Fraser
|
Program Synthesis Data
Generated program synthesis datasets used to train dreamcoder.
Currently just supports text & list data.
|
GEM/ART
|
GEM
|
the Abductive Natural Language Generation Dataset from AI2
|
GEM/totto
|
GEM
|
ToTTo is an open-domain English table-to-text dataset with over 120,000 training examples that proposes a controlled generation task: given a Wikipedia table and a set of highlighted table cells, produce a one-sentence description.
|
GEM/wiki_auto_asset_turk
|
GEM
|
Dataset Card for GEM/wiki_auto_asset_turk
Link to Main Data Card
You can find the main data card on the GEM Website.
Dataset Summary
WikiAuto is an English simplification dataset that we paired with ASSET and TURK, two very high-quality evaluation datasets, as test sets. The input is an English sentence taken from Wikipedia and the target a simplified sentence. ASSET and TURK contain the same test examples but have references that are simplified in… See the full description on the dataset page: https://huggingface.co/datasets/GEM/wiki_auto_asset_turk.
|
GonzaloA/fake_news
|
GonzaloA
|
TODO: Add YAML tags here. Copy-paste the tags obtained with the online tagging app: https://huggingface.co/spaces/huggingface/datasets-tagging
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 30k<n<50k
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- fact-checking
- intent-classification
pretty_name: GonzaloA / Fake News… See the full description on the dataset page: https://huggingface.co/datasets/GonzaloA/fake_news.
|
Helsinki-NLP/tatoeba_mt
|
Helsinki-NLP
|
The Tatoeba Translation Challenge is a multilingual data set of
machine translation benchmarks derived from user-contributed
translations collected by [Tatoeba.org](https://tatoeba.org/) and
provided as parallel corpus from [OPUS](https://opus.nlpl.eu/). This
dataset includes test and development data sorted by language pair. It
includes test sets for hundreds of language pairs and is continuously
updated. Please check the version number tag to refer to the release
that you are using.
|
NbAiLab/norwegian_parliament
|
NbAiLab
|
Dataset Card Creation Guide
Dataset Summary
This is a classification dataset created from a subset of the Talk of Norway. It contains text phrases from the political parties Fremskrittspartiet and Sosialistisk Venstreparti. The dataset is annotated with the speaker's party as well as a timestamp. The classification task is to predict, simply by looking at the text, whether the speech was given by a representative from Fremskrittspartiet or… See the full description on the dataset page: https://huggingface.co/datasets/NbAiLab/norwegian_parliament.
|
Paul/hatecheck
|
Paul
|
Dataset Card for HateCheck
Dataset Description
HateCheck is a suite of functional tests for hate speech detection models.
The dataset contains 3,728 validated test cases in 29 functional tests.
19 functional tests correspond to distinct types of hate. The other 11 functional tests cover challenging types of non-hate.
This allows for targeted diagnostic insights into model performance.
In our ACL paper, we found critical weaknesses in all commercial and academic… See the full description on the dataset page: https://huggingface.co/datasets/Paul/hatecheck.
|
PaulLerner/viquae_wikipedia
|
PaulLerner
|
See https://github.com/PaulLerner/ViQuAE
license: cc-by-3.0
|
PlanTL-GOB-ES/SQAC
|
PlanTL-GOB-ES
|
This dataset contains 6,247 contexts and 18,817 questions with their answers, 1 to 5 per fragment.
The sources of the contexts are:
* Encyclopedic articles from [Wikipedia in Spanish](https://es.wikipedia.org/), used under the [CC-BY-SA licence](https://creativecommons.org/licenses/by-sa/3.0/legalcode).
* News from [Wikinews in Spanish](https://es.wikinews.org/), used under the [CC-BY licence](https://creativecommons.org/licenses/by/2.5/).
* Text from the Spanish corpus [AnCora](http://clic.ub.edu/corpus/en), which is a mix of different newswire and literature sources, used under the [CC-BY licence](https://creativecommons.org/licenses/by/4.0/legalcode).
This dataset can be used to build extractive QA systems.
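SQAC follows a SQuAD-style extractive layout; a minimal sketch assuming the `datasets` library and SQuAD-convention field names (`context`, `question`, `answers`), which are an assumption here rather than taken from this summary:

```python
from datasets import load_dataset

sqac = load_dataset("PlanTL-GOB-ES/SQAC", split="train")
example = sqac[0]

print(example["question"])
print(example["answers"])  # answer text(s) with character offsets into example["context"]
```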
|
PlanTL-GOB-ES/cantemist-ner
|
PlanTL-GOB-ES
|
https://temu.bsc.es/cantemist/
|
SetFit/20_newsgroups
|
SetFit
|
This is a version of the 20 newsgroups dataset that is provided in Scikit-learn. From the Scikit-learn docs:
The 20 newsgroups dataset comprises around 18,000 newsgroup posts on 20 topics split into two subsets: one for training (or development) and the other one for testing (or for performance evaluation). The split between the train and test set is based upon messages posted before and after a specific date.
We followed the recommended practice to remove headers, signature blocks, and… See the full description on the dataset page: https://huggingface.co/datasets/SetFit/20_newsgroups.
|
SetFit/emotion
|
SetFit
|
**Attention: there appears to be an overlap between train and test. I trained a model on the train set and achieved 100% accuracy on the test set. With the original emotion dataset this is not the case (92.4% accuracy).**
|
SetFit/hate_speech_offensive
|
SetFit
|
hate_speech_offensive
This dataset is a version of hate_speech_offensive, split into train and test sets.
|
SetFit/sst5
|
SetFit
|
Stanford Sentiment Treebank - Fine-Grained
Stanford Sentiment Treebank with 5 labels: very positive, positive, neutral, negative, very negative
Splits are from:
https://github.com/AcademiaSinicaNLPLab/sentiment_dataset/tree/master/data
Training data is on the sentence level, not on the phrase level!
|
SetFit/subj
|
SetFit
|
Subjective vs Objective
This is the SUBJ dataset as used in SentEval. It contains sentences with an annotation indicating whether the sentence describes something subjective or something objective about a movie.
|
SetFit/tweet_sentiment_extraction
|
SetFit
|
Tweet Sentiment Extraction
Source: https://www.kaggle.com/c/tweet-sentiment-extraction/data
|
SoLID/shellcode_i_a32
|
SoLID
|
Shellcode_IA32 is a dataset for shellcode generation from English intents. The shellcodes are compilable for the 32-bit Intel Architecture (IA-32).
|
SocialGrep/one-million-reddit-confessions
|
SocialGrep
|
Dataset Card for one-million-reddit-confessions
Dataset Summary
This corpus contains a million posts from the following subreddits:
/r/trueoffmychest
/r/confession
/r/confessions
/r/offmychest
Posts are annotated with their score.
Languages
Mainly English.
Dataset Structure
Data Instances
A data point is a Reddit post.
Data Fields
'type': the type of the data point. Can be 'post' or 'comment'.
'id':… See the full description on the dataset page: https://huggingface.co/datasets/SocialGrep/one-million-reddit-confessions.
|
Sunbird/salt-dataset
|
Sunbird
|
A parallel text corpus, SALT (Sunbird African Language Translation dataset), was created for five Ugandan languages (Luganda,
Runyankore, Acholi, Lugbara and Ateso) and various methods were explored to train and evaluate translation models.
|
ai4bharat/samanantar
|
ai4bharat
|
Samanantar is the largest publicly available parallel corpora collection for Indic languages: Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Oriya, Punjabi, Tamil, Telugu. The corpus has 49.6M sentence pairs between English to Indian Languages.
|
albertvillanova/legal_contracts
|
albertvillanova
|
This new dataset is designed to solve this great NLP task and is crafted with a lot of care.
|
alistvt/coqa-stories
|
alistvt
|
This is a dataset containing just the stories of the CoQA dataset with their respective ids. It can be used in the pretraining phase for MLM tasks.
|
andstor/smart_contracts
|
andstor
|
Smart Contracts Dataset.
This is a dataset of verified (Etherscan.io) Smart Contracts that are deployed to the Ethereum blockchain. A set of about 100,000 to 200,000 contracts are provided, containing both Solidity and Vyper code.
|
bavard/personachat_truecased
|
bavard
|
A version of the PersonaChat dataset that has been true-cased, and also has been given more normalized punctuation.
The original PersonaChat dataset is in all lower case, and has extra space around each clause/sentence separating
punctuation mark. This version of the dataset has more of a natural language look, with sentence capitalization,
proper noun capitalization, and normalized whitespace. Also, each dialogue turn includes a pool of distractor
candidate responses, which can be used by a multiple choice regularization loss during training.
|