Columns: id (string, 6-121 chars), author (string, 2-42 chars), description (string, 0-6.67k chars)
Aratako/Magpie-Tanuki-8B-annotated-96k
Aratako
Magpie-Tanuki-8B-annotated-96k This dataset annotates Aratako/Magpie-Tanuki-8B-97k, a dataset created by applying the Magpie method to weblab-GENIAC/Tanuki-8B-dpo-v1.0, with difficulty, quality, and category labels for each instruction using cyberagent/calm3-22b-chat. Annotation prompts The following prompts were used for each annotation with calm3. Difficulty annotation # Instructions First, identify the intent of the given user, and then label a difficulty level based on the content of the user's query. ## User query ``` {input} ``` ## Output format Based on the user's query, first identify the user's intent and state the knowledge required to solve that query. Then label the difficulty level as `very… See the full description on the dataset page: https://huggingface.co/datasets/Aratako/Magpie-Tanuki-8B-annotated-96k.
BAAI/CCI3-HQ-Annotation-Benchmark
BAAI
CCI3-HQ-Annotation-Benchmark These 14k samples were randomly extracted from a large corpus of Chinese texts, containing both the original text and corresponding labels. They can be used to evaluate the quality of Chinese corpora.
yale-nlp/M3SciQA
yale-nlp
🧑‍🔬 M3SciQA: A Multi-Modal Multi-Document Scientific QA Benchmark For Evaluating Foundation Models EMNLP 2024 Findings 🖥️ Code Introduction In the realm of foundation models for scientific research, current benchmarks predominantly focus on single-document, text-only tasks and fail to adequately represent the complex workflow of such research. These benchmarks lack the $\textit{multi-modal}$, $\textit{multi-document}$ nature of scientific research, where… See the full description on the dataset page: https://huggingface.co/datasets/yale-nlp/M3SciQA.
leftyfeep/Robot.E.Howard.v2
leftyfeep
Dataset Card for Robot E. Howard v2 This is a dataset meant for training LLMs, based on the works of the fantastic Robert E. Howard. Dataset Details Dataset Description Robert E. Howard was a fantastic author with vivid and energetic prose. The format of this dataset mimics that found in gutenberg-dpo-v0.1, so it SHOULD be useful as a drop-in addition to, or replacement for, that set. And I prepared the data in much the same way. I split all of the… See the full description on the dataset page: https://huggingface.co/datasets/leftyfeep/Robot.E.Howard.v2.
adamtopaz/equational_dataset
adamtopaz
This dataset was generated using (a fork of) T. Tao's equational theories project. The file implications.jsonl was generated using the command lake exe extract_implications --jsonl --closure > implications.jsonl. The file tokenized_equations.jsonl was generated using the command lake exe tokenized_data equations > tokenized_equations.jsonl. The file random_tokenized_equations.jsonl was generated using the command lake exe tokenized_data generate xyzwuvrst 10000000 1 10… See the full description on the dataset page: https://huggingface.co/datasets/adamtopaz/equational_dataset.
DeepNLP/ChatGPT-Gemini-Claude-Perplexity-Human-Evaluation-Multi-Aspects-Review-Dataset
DeepNLP
ChatGPT Gemini Claude Perplexity Human Evaluation Multi Aspect Review Dataset Introduction Human evaluations and reviews with scalar scores of AI service responses are very useful in LLM Finetuning, Human Preference Alignment, Few-Shot Learning, Bad Case Shooting, etc., but extremely difficult to collect. This dataset is collected from the DeepNLP AI Service User Review panel (http://www.deepnlp.org/store), which is an open review website for users to give reviews and… See the full description on the dataset page: https://huggingface.co/datasets/DeepNLP/ChatGPT-Gemini-Claude-Perplexity-Human-Evaluation-Multi-Aspects-Review-Dataset.
UCLNLP/adversarial_qa
UCLNLP
Dataset Card for adversarialQA Dataset Summary We have created three new Reading Comprehension datasets constructed using an adversarial model-in-the-loop. We use three different models in the annotation loop: BiDAF (Seo et al., 2016), BERTLarge (Devlin et al., 2018), and RoBERTaLarge (Liu et al., 2019), and construct three datasets: D(BiDAF), D(BERT), and D(RoBERTa), each with 10,000 training examples, 1,000 validation examples, and 1,000 test examples. The adversarial… See the full description on the dataset page: https://huggingface.co/datasets/UCLNLP/adversarial_qa.
Yale-LILY/aeslc
Yale-LILY
Dataset Card for "aeslc" Dataset Summary A collection of email messages of employees in the Enron Corporation. There are two features: email_body: email body text. subject_line: email subject text. Supported Tasks and Leaderboards More Information Needed Languages Monolingual English (mainly en-US) with some exceptions. Dataset Structure Data Instances default Size of downloaded dataset… See the full description on the dataset page: https://huggingface.co/datasets/Yale-LILY/aeslc.
nwu-ctext/afrikaans_ner_corpus
nwu-ctext
Dataset Card for Afrikaans Ner Corpus Dataset Summary The Afrikaans Ner Corpus is an Afrikaans dataset developed by the Centre for Text Technology (CTexT), North-West University, South Africa. The data is based on documents from the South African government domain and crawled from gov.za websites. It was created to support the NER task for the Afrikaans language. The dataset uses CoNLL shared task annotation standards. Supported Tasks and Leaderboards [More… See the full description on the dataset page: https://huggingface.co/datasets/nwu-ctext/afrikaans_ner_corpus.
fancyzhx/ag_news
fancyzhx
Dataset Card for "ag_news" Dataset Summary AG is a collection of more than 1 million news articles. News articles have been gathered from more than 2000 news sources by ComeToMyHead in more than 1 year of activity. ComeToMyHead is an academic news search engine which has been running since July 2004. The dataset is provided by the academic community for research purposes in data mining (clustering, classification, etc.), information retrieval (ranking, search, etc.)… See the full description on the dataset page: https://huggingface.co/datasets/fancyzhx/ag_news.
allenai/ai2_arc
allenai
Dataset Card for "ai2_arc" Dataset Summary A new dataset of 7,787 genuine grade-school level, multiple-choice science questions, assembled to encourage research in advanced question-answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. We are also including a corpus of over 14 million science sentences… See the full description on the dataset page: https://huggingface.co/datasets/allenai/ai2_arc.
google/air_dialogue
google
Dataset Card for air_dialogue Dataset Summary AirDialogue is a large dataset that contains 402,038 goal-oriented conversations. To collect this dataset, we create a context generator which provides travel and flight restrictions. Then the human annotators are asked to play the role of a customer or an agent and interact with the goal of successfully booking a trip given the restrictions. News in v1.3: We have included the test split of the AirDialogue dataset. We… See the full description on the dataset page: https://huggingface.co/datasets/google/air_dialogue.
fancyzhx/amazon_polarity
fancyzhx
Dataset Card for Amazon Review Polarity Dataset Summary The Amazon reviews dataset consists of reviews from Amazon. The data span a period of 18 years, including ~35 million reviews up to March 2013. Reviews include product and user information, ratings, and a plaintext review. Supported Tasks and Leaderboards text-classification, sentiment-classification: The dataset is mainly used for text classification: given the content and the title, predict… See the full description on the dataset page: https://huggingface.co/datasets/fancyzhx/amazon_polarity.
defunct-datasets/amazon_us_reviews
defunct-datasets
Amazon Customer Reviews (a.k.a. Product Reviews) is one of Amazon's iconic products. In a period of over two decades since the first review in 1995, millions of Amazon customers have contributed over a hundred million reviews to express opinions and describe their experiences regarding products on the Amazon.com website. This makes Amazon Customer Reviews a rich source of information for academic researchers in the fields of Natural Language Processing (NLP), Information Retrieval (IR), and Machine Learning (ML), amongst others. Accordingly, we are releasing this data to further research in multiple disciplines related to understanding customer product experiences. Specifically, this dataset was constructed to represent a sample of customer evaluations and opinions, variation in the perception of a product across geographical regions, and promotional intent or bias in reviews. Over 130 million customer reviews are available to researchers as part of this release. The data is available in TSV files in the amazon-reviews-pds S3 bucket in AWS US East Region. Each line in the data files corresponds to an individual review (tab delimited, with no quote and escape characters). Each dataset contains the following columns: - marketplace: 2 letter country code of the marketplace where the review was written. - customer_id: Random identifier that can be used to aggregate reviews written by a single author. - review_id: The unique ID of the review. - product_id: The unique Product ID the review pertains to. In the multilingual dataset the reviews for the same product in different countries can be grouped by the same product_id. - product_parent: Random identifier that can be used to aggregate reviews for the same product. - product_title: Title of the product. - product_category: Broad product category that can be used to group reviews (also used to group the dataset into coherent parts). - star_rating: The 1-5 star rating of the review. - helpful_votes: Number of helpful votes. - total_votes: Number of total votes the review received. - vine: Review was written as part of the Vine program. - verified_purchase: The review is on a verified purchase. - review_headline: The title of the review. - review_body: The review text. - review_date: The date the review was written.
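As a rough illustration of the column layout described above, a minimal pandas sketch for reading one of the TSV files (the file name is illustrative, and the dataset is defunct, so treat this as a sketch of the documented format rather than a supported loading path):
```python
import pandas as pd

# Illustrative file name; the TSV files were originally distributed via the
# amazon-reviews-pds S3 bucket and the dataset is now defunct.
reviews = pd.read_csv(
    "amazon_reviews_sample.tsv.gz",
    sep="\t",              # tab-delimited
    quoting=3,             # csv.QUOTE_NONE: the files use no quote or escape characters
    compression="gzip",
    on_bad_lines="skip",
)

# A few of the columns documented above
print(reviews[["marketplace", "star_rating", "review_headline", "review_date"]].head())
```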
legacy-datasets/ami
legacy-datasets
The AMI Meeting Corpus consists of 100 hours of meeting recordings. The recordings use a range of signals synchronized to a common timeline. These include close-talking and far-field microphones, individual and room-view video cameras, and output from a slide projector and an electronic whiteboard. During the meetings, the participants also have unsynchronized pens available to them that record what is written. The meetings were recorded in English using three different rooms with different acoustic properties, and include mostly non-native speakers.
facebook/anli
facebook
Dataset Card for "anli" Dataset Summary The Adversarial Natural Language Inference (ANLI) dataset is a new large-scale NLI benchmark collected via an iterative, adversarial human-and-model-in-the-loop procedure. ANLI is much more difficult than its predecessors, including SNLI and MNLI. It contains three rounds. Each round has train/dev/test splits. Supported Tasks and Leaderboards More Information Needed Languages… See the full description on the dataset page: https://huggingface.co/datasets/facebook/anli.
deepmind/aqua_rat
deepmind
Dataset Card for AQUA-RAT Dataset Summary A large-scale dataset consisting of approximately 100,000 algebraic word problems. The solution to each question is explained step-by-step using natural language. This data is used to train a program generation model that learns to generate the explanation, while generating the program that solves the question. Supported Tasks and Leaderboards Languages en Dataset Structure… See the full description on the dataset page: https://huggingface.co/datasets/deepmind/aqua_rat.
halabi2016/arabic_speech_corpus
halabi2016
This speech corpus has been developed as part of PhD work carried out by Nawar Halabi at the University of Southampton. The corpus was recorded in south Levantine Arabic (Damascene accent) using a professional studio. Synthesized speech produced with this corpus as output has a high-quality, natural voice. Note that in order to limit the required storage for preparing this dataset, the audio is stored in the .flac format and is not converted to a float32 array. To convert an audio file to a float32 array, please make use of the `.map()` function as follows:
```python
import soundfile as sf

def map_to_array(batch):
    # Read the .flac file referenced by the "file" column into a float32 array
    speech_array, _ = sf.read(batch["file"])
    batch["speech"] = speech_array
    return batch

dataset = dataset.map(map_to_array, remove_columns=["file"])
```
hsseinmz/arcd
hsseinmz
Dataset Card for "arcd" Dataset Summary The Arabic Reading Comprehension Dataset (ARCD) is composed of 1,395 questions posed by crowdworkers on Wikipedia articles. Supported Tasks and Leaderboards More Information Needed Languages More Information Needed Dataset Structure Data Instances plain_text Size of downloaded dataset files: 1.94 MB Size of the generated dataset: 1.70 MB Total amount… See the full description on the dataset page: https://huggingface.co/datasets/hsseinmz/arcd.
facebook/asset
facebook
Dataset Card for ASSET Dataset Summary ASSET (Alva-Manchego et al., 2020) is a multi-reference dataset for the evaluation of sentence simplification in English. The dataset uses the same 2,359 sentences from TurkCorpus (Xu et al., 2016) and each sentence is associated with 10 crowdsourced simplifications. Unlike previous simplification datasets, which contain a single transformation (e.g., lexical paraphrasing in TurkCorpus or sentence splitting in HSplit), the… See the full description on the dataset page: https://huggingface.co/datasets/facebook/asset.
AI-Lab-Makerere/beans
AI-Lab-Makerere
Dataset Card for Beans Dataset Summary Beans leaf dataset with images of diseased and healthy leaves. Supported Tasks and Leaderboards image-classification: Based on a leaf image, the goal of this task is to predict the disease type (Angular Leaf Spot and Bean Rust), if any. Languages English Dataset Structure Data Instances A sample from the training set is provided below: { 'image_file_path':… See the full description on the dataset page: https://huggingface.co/datasets/AI-Lab-Makerere/beans.
nectec/best2009
nectec
Dataset Card for best2009 Dataset Summary best2009 is a Thai word-tokenization dataset from encyclopedia, novels, news and articles by NECTEC (148,995/2,252 lines of train/test). It was created for BEST 2010: Word Tokenization Competition. The test set answers are not provided publicly. Supported Tasks and Leaderboards word tokenization Languages Thai Dataset Structure Data Instances {'char': ['?', 'ภ'… See the full description on the dataset page: https://huggingface.co/datasets/nectec/best2009.
FiscalNote/billsum
FiscalNote
Dataset Card for "billsum" Dataset Summary BillSum, summarization of US Congressional and California state bills. There are several features: text: bill text. summary: summary of the bill. title: title of the bill. Features present only for US bills (CA bills do not have them): text_len: number of chars in text. sum_len: number of chars in summary. Supported Tasks and Leaderboards More Information Needed Languages More Information Needed… See the full description on the dataset page: https://huggingface.co/datasets/FiscalNote/billsum.
TheBritishLibrary/blbooks
TheBritishLibrary
A dataset comprising text created by OCR from 49,455 digitised books, equating to 65,227 volumes (25+ million pages), published between c. 1510 and c. 1900. The books cover a wide range of subject areas including philosophy, history, poetry and literature.
ParlAI/blended_skill_talk
ParlAI
Dataset Card for "blended_skill_talk" Dataset Summary A dataset of 7k conversations explicitly designed to exhibit multiple conversation modes: displaying personality, having empathy, and demonstrating knowledge. Supported Tasks and Leaderboards More Information Needed Languages More Information Needed Dataset Structure Data Instances default Size of downloaded dataset files: 38.11 MB Size… See the full description on the dataset page: https://huggingface.co/datasets/ParlAI/blended_skill_talk.
nyu-mll/blimp
nyu-mll
Dataset Card for "blimp" Dataset Summary BLiMP is a challenge set for evaluating what language models (LMs) know about major grammatical phenomena in English. BLiMP consists of 67 sub-datasets, each containing 1000 minimal pairs isolating specific contrasts in syntax, morphology, or semantics. The data is automatically generated according to expert-crafted grammars. Supported Tasks and Leaderboards More Information Needed Languages… See the full description on the dataset page: https://huggingface.co/datasets/nyu-mll/blimp.
barilan/blog_authorship_corpus
barilan
The Blog Authorship Corpus consists of the collected posts of 19,320 bloggers gathered from blogger.com in August 2004. The corpus incorporates a total of 681,288 posts and over 140 million words - or approximately 35 posts and 7250 words per person. Each blog is presented as a separate file, the name of which indicates a blogger id# and the blogger's self-provided gender, age, industry and astrological sign. (All are labeled for gender and age but for many, industry and/or sign is marked as unknown.) All bloggers included in the corpus fall into one of three age groups: - 8240 "10s" blogs (ages 13-17), - 8086 "20s" blogs (ages 23-27), - 2994 "30s" blogs (ages 33-47). For each age group there are an equal number of male and female bloggers. Each blog in the corpus includes at least 200 occurrences of common English words. All formatting has been stripped, with two exceptions: individual posts within a single blog are separated by the date of the following post, and links within a post are denoted by the label urllink. The corpus may be freely used for non-commercial research purposes.
google/boolq
google
Dataset Card for Boolq Dataset Summary BoolQ is a question answering dataset for yes/no questions containing 15,942 examples. These questions are naturally occurring: they are generated in unprompted and unconstrained settings. Each example is a triplet of (question, passage, answer), with the title of the page as optional additional context. The text-pair classification setup is similar to existing natural language inference tasks. Supported Tasks… See the full description on the dataset page: https://huggingface.co/datasets/google/boolq.
UFRGS/brwac
UFRGS
The BrWaC (Brazilian Portuguese Web as Corpus) is a large corpus constructed following the Wacky framework, which was made public for research purposes. The current corpus version, released in January 2017, is composed of 3.53 million documents, 2.68 billion tokens and 5.79 million types. Please note that this resource is available solely for academic research purposes, and you agree not to use it for any commercial applications. Manually download at https://www.inf.ufrgs.br/pln/wiki/index.php?title=BrWaC
ryo0634/bsd_ja_en
ryo0634
Dataset Card for Business Scene Dialogue Dataset Summary This is the Business Scene Dialogue (BSD) dataset, a Japanese-English parallel corpus containing written conversations in various business scenarios. The dataset was constructed in 3 steps: selecting business scenes, writing monolingual conversation scenarios according to the selected scenes, and translating the scenarios into the other language. Half of the monolingual scenarios were written in… See the full description on the dataset page: https://huggingface.co/datasets/ryo0634/bsd_ja_en.
microsoft/cats_vs_dogs
microsoft
Dataset Card for Cats Vs. Dogs Dataset Summary A large set of images of cats and dogs. There are 1738 corrupted images that are dropped. This dataset is part of a now-closed Kaggle competition and represents a subset of the so-called Asirra dataset. From the competition page: The Asirra data set Web services are often protected with a challenge that's supposed to be easy for people to solve, but difficult for computers. Such a challenge is often called a CAPTCHA… See the full description on the dataset page: https://huggingface.co/datasets/microsoft/cats_vs_dogs.
cam-cst/cbt
cam-cst
Dataset Card for CBT Dataset Summary The Children’s Book Test (CBT) is designed to measure directly how well language models can exploit wider linguistic context. The CBT is built from books that are freely available. This dataset contains four different configurations: V: where the answers to the questions are verbs. P: where the answers to the questions are pronouns. NE: where the answers to the questions are named entities. CN: where the answers to the… See the full description on the dataset page: https://huggingface.co/datasets/cam-cst/cbt.
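For reference, a minimal loading sketch for one of the configurations listed above, assuming they are exposed under the names V, P, NE and CN (verify against the dataset card):
```python
from datasets import load_dataset

# "CN" is an assumed configuration name matching the common-noun setting above.
cbt_cn = load_dataset("cam-cst/cbt", "CN")
print(cbt_cn["train"][0].keys())  # inspect the available fields
```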
statmt/cc100
statmt
This corpus is an attempt to recreate the dataset used for training XLM-R. This corpus comprises monolingual data for 100+ languages and also includes data for romanized languages (indicated by *_rom). It was constructed using the urls and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots. Each file comprises documents separated by double-newlines and paragraphs within the same document separated by a newline. The data is generated using the open source CC-Net repository. No claims of intellectual property are made on the work of preparation of the corpus.
google-research-datasets/cfq
google-research-datasets
Dataset Card for "cfq" Dataset Summary The Compositional Freebase Questions (CFQ) is a dataset that is specifically designed to measure compositional generalization. CFQ is a simple yet realistic, large dataset of natural language questions and answers that also provides for each question a corresponding SPARQL query against the Freebase knowledge base. This means that CFQ can also be used for semantic parsing. Supported Tasks and Leaderboards More… See the full description on the dataset page: https://huggingface.co/datasets/google-research-datasets/cfq.
uoft-cs/cifar10
uoft-cs
Dataset Card for CIFAR-10 Dataset Summary The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. The dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly-selected images from each class. The training batches contain the remaining images in random order, but some training batches may… See the full description on the dataset page: https://huggingface.co/datasets/uoft-cs/cifar10.
uoft-cs/cifar100
uoft-cs
Dataset Card for CIFAR-100 Dataset Summary The CIFAR-100 dataset consists of 60000 32x32 colour images in 100 classes, with 600 images per class. There are 500 training images and 100 testing images per class. There are 50000 training images and 10000 test images. The 100 classes are grouped into 20 superclasses. There are two labels per image - fine label (actual class) and coarse label (superclass). Supported Tasks and Leaderboards… See the full description on the dataset page: https://huggingface.co/datasets/uoft-cs/cifar100.
google-research-datasets/circa
google-research-datasets
Dataset Card for CIRCA Dataset Summary The Circa (meaning ‘approximately’) dataset aims to help machine learning systems solve the problem of interpreting indirect answers to polar questions. The dataset contains pairs of yes/no questions and indirect answers, together with annotations for the interpretation of the answer. The data is collected in 10 different social conversational situations (e.g. food preferences of a friend). The following are the situational… See the full description on the dataset page: https://huggingface.co/datasets/google-research-datasets/circa.
google/civil_comments
google
Dataset Card for "civil_comments" Dataset Summary The comments in this dataset come from an archive of the Civil Comments platform, a commenting plugin for independent news sites. These public comments were created from 2015 to 2017 and appeared on approximately 50 English-language news sites across the world. When Civil Comments shut down in 2017, they chose to make the public comments available in a lasting open archive to enable future research. The original… See the full description on the dataset page: https://huggingface.co/datasets/google/civil_comments.
tdiggelm/climate_fever
tdiggelm
Dataset Card for ClimateFever Dataset Summary A dataset adopting the FEVER methodology that consists of 1,535 real-world claims regarding climate change collected on the internet. Each claim is accompanied by five manually annotated evidence sentences retrieved from the English Wikipedia that support, refute or do not give enough information to validate the claim, totalling 7,675 claim-evidence pairs. The dataset features challenging claims that relate multiple… See the full description on the dataset page: https://huggingface.co/datasets/tdiggelm/climate_fever.
clinc/clinc_oos
clinc
Dataset Card for CLINC150 Dataset Summary Task-oriented dialog systems need to know when a query falls outside their range of supported intents, but current text classification corpora only define label sets that cover every example. We introduce a new dataset that includes queries that are out-of-scope (OOS), i.e., queries that do not fall into any of the system's supported intents. This poses a new challenge because models cannot assume that every query at… See the full description on the dataset page: https://huggingface.co/datasets/clinc/clinc_oos.
clue/clue
clue
Dataset Card for "clue" Dataset Summary CLUE, A Chinese Language Understanding Evaluation Benchmark (https://www.cluebenchmarks.com/) is a collection of resources for training, evaluating, and analyzing Chinese language understanding systems. Supported Tasks and Leaderboards More Information Needed Languages More Information Needed Dataset Structure Data Instances afqmc Size of downloaded… See the full description on the dataset page: https://huggingface.co/datasets/clue/clue.
jaredfern/codah
jaredfern
Dataset Card for COmmonsense Dataset Adversarially-authored by Humans Dataset Summary The COmmonsense Dataset Adversarially-authored by Humans (CODAH) is an evaluation set for commonsense question-answering in the sentence completion style of SWAG. As opposed to other automatically generated NLI datasets, CODAH is adversarially constructed by humans who can view feedback from a pre-trained model and use this information to design challenging commonsense questions.… See the full description on the dataset page: https://huggingface.co/datasets/jaredfern/codah.
google/code_x_glue_cc_clone_detection_big_clone_bench
google
Dataset Card for "code_x_glue_cc_clone_detection_big_clone_bench" Dataset Summary CodeXGLUE Clone-detection-BigCloneBench dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Clone-detection-BigCloneBench Given two codes as the input, the task is to do binary classification (0/1), where 1 stands for semantic equivalence and 0 for others. Models are evaluated by F1 score. The dataset we use is BigCloneBench and filtered following the… See the full description on the dataset page: https://huggingface.co/datasets/google/code_x_glue_cc_clone_detection_big_clone_bench.
google/code_x_glue_cc_clone_detection_poj104
google
Dataset Card for "code_x_glue_cc_clone_detection_poj_104" Dataset Summary CodeXGLUE Clone-detection-POJ-104 dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Clone-detection-POJ-104 Given a code and a collection of candidates as the input, the task is to return Top K codes with the same semantic. Models are evaluated by MAP score. We use POJ-104 dataset on this task. Supported Tasks and Leaderboards… See the full description on the dataset page: https://huggingface.co/datasets/google/code_x_glue_cc_clone_detection_poj104.
google/code_x_glue_ct_code_to_text
google
Dataset Card for "code_x_glue_ct_code_to_text" Dataset Summary CodeXGLUE code-to-text dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Text/code-to-text The dataset we use comes from CodeSearchNet and we filter it as follows: Remove examples whose code cannot be parsed into an abstract syntax tree. Remove examples where the number of document tokens is < 3 or > 256. Remove examples whose documents contain special tokens (e.g. <img… See the full description on the dataset page: https://huggingface.co/datasets/google/code_x_glue_ct_code_to_text.
speechbrain/common_language
speechbrain
This dataset is composed of speech recordings from languages that were carefully selected from the CommonVoice database. The total duration of audio recordings is 45.1 hours (i.e., 1 hour of material for each language). The dataset has been extracted from CommonVoice to train language-id systems.
legacy-datasets/common_voice
legacy-datasets
Common Voice is Mozilla's initiative to help teach machines how real people speak. The dataset currently consists of 7,335 validated hours of speech in 60 languages, but we’re always adding more voices and languages.
conceptnet5/conceptnet5
conceptnet5
Dataset Card for Conceptnet5 Dataset Summary ConceptNet is a multilingual knowledge base, representing words and phrases that people use and the common-sense relationships between them. The knowledge in ConceptNet is collected from a variety of resources, including crowd-sourced resources (such as Wiktionary and Open Mind Common Sense), games with a purpose (such as Verbosity and nadya.jp), and expert-created resources (such as WordNet and JMDict). You can browse… See the full description on the dataset page: https://huggingface.co/datasets/conceptnet5/conceptnet5.
eriktks/conll2003
eriktks
The shared task of CoNLL-2003 concerns language-independent named entity recognition. We will concentrate on four types of named entities: persons, locations, organizations and names of miscellaneous entities that do not belong to the previous three groups. The CoNLL-2003 shared task data files contain four columns separated by a single space. Each word has been put on a separate line and there is an empty line after each sentence. The first item on each line is a word, the second a part-of-speech (POS) tag, the third a syntactic chunk tag and the fourth the named entity tag. The chunk tags and the named entity tags have the format I-TYPE, which means that the word is inside a phrase of type TYPE. Only if two phrases of the same type immediately follow each other will the first word of the second phrase have the tag B-TYPE to show that it starts a new phrase. A word with tag O is not part of a phrase. Note that this dataset uses the IOB2 tagging scheme, whereas the original dataset uses IOB1. For more details see https://www.clips.uantwerpen.be/conll2003/ner/ and https://www.aclweb.org/anthology/W03-0419
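As a rough illustration of the four-column file layout described above, a minimal parsing sketch (the file name is illustrative; it assumes exactly the space-separated columns and blank-line sentence separators stated in the summary):
```python
from pathlib import Path

def read_conll(path):
    """Parse a CoNLL-2003-style file into sentences of (word, pos, chunk, ner) tuples."""
    sentences, current = [], []
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        line = line.strip()
        if not line:  # a blank line marks the end of a sentence
            if current:
                sentences.append(current)
                current = []
            continue
        word, pos, chunk, ner = line.split(" ")  # four space-separated columns
        current.append((word, pos, chunk, ner))
    if current:
        sentences.append(current)
    return sentences

# Illustrative usage with a hypothetical local copy of the training file:
# sentences = read_conll("eng.train")
```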
CFPB/consumer-finance-complaints
CFPB
Dataset Card for Consumer Finance Complaints Dataset Summary The Consumer Complaint Database is a collection of complaints about consumer financial products and services that we sent to companies for response. Complaints are published after the company responds, confirming a commercial relationship with the consumer, or after 15 days… See the full description on the dataset page: https://huggingface.co/datasets/CFPB/consumer-finance-complaints.
stanfordnlp/coqa
stanfordnlp
Dataset Card for "coqa" Dataset Summary CoQA is a large-scale dataset for building Conversational Question Answering systems. Our dataset contains 127k questions with answers, obtained from 8k conversations about text passages from seven diverse domains. The questions are conversational, and the answers are free-form text with their corresponding evidence highlighted in the passage. Supported Tasks and Leaderboards More Information Needed… See the full description on the dataset page: https://huggingface.co/datasets/stanfordnlp/coqa.
cornell-movie-dialog/cornell_movie_dialog
cornell-movie-dialog
This corpus contains a large metadata-rich collection of fictional conversations extracted from raw movie scripts: - 220,579 conversational exchanges between 10,292 pairs of movie characters - involves 9,035 characters from 617 movies - in total 304,713 utterances - movie metadata included: - genres - release year - IMDB rating - number of IMDB votes - character metadata included: - gender (for 3,774 characters) - position on movie credits (3,321 characters)
facebook/covost2
facebook
CoVoST 2 is a large-scale multilingual speech translation corpus covering translations from 21 languages into English and from English into 15 languages. The dataset is created using Mozilla's open source Common Voice database of crowdsourced voice recordings. Note that in order to limit the required storage for preparing this dataset, the audio is stored in the .mp3 format and is not converted to a float32 array. To convert an audio file to a float32 array, please make use of the `.map()` function as follows:
```python
import torchaudio

def map_to_array(batch):
    # Decode the .mp3 file referenced by the "file" column and store it as a numpy array
    speech_array, _ = torchaudio.load(batch["file"])
    batch["speech"] = speech_array.numpy()
    return batch

dataset = dataset.map(map_to_array, remove_columns=["file"])
```
rishitdagli/cppe-5
rishitdagli
Dataset Card for CPPE - 5 Dataset Summary CPPE - 5 (Medical Personal Protective Equipment) is a new challenging dataset with the goal of allowing the study of subordinate categorization of medical personal protective equipment, which is not possible with other popular datasets that focus on broad-level categories. Some features of this dataset are: high quality images and annotations (~4.6 bounding boxes per image) real-life images unlike any current such dataset… See the full description on the dataset page: https://huggingface.co/datasets/rishitdagli/cppe-5.
stanfordnlp/craigslist_bargains
stanfordnlp
We study negotiation dialogues where two agents, a buyer and a seller, negotiate over the price of an item for sale. We collected a dataset of more than 6K negotiation dialogues over multiple categories of products scraped from Craigslist. Our goal is to develop an agent that negotiates with humans through such conversations. The challenge is to handle both the negotiation strategy and the rich language for bargaining.
microsoft/crd3
microsoft
Storytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset. Critical Role is an unscripted, live-streamed show where a fixed group of people play Dungeons and Dragons, an open-ended role-playing game. The dataset is collected from 159 Critical Role episodes transcribed to text dialogues, consisting of 398,682 turns. It also includes corresponding abstractive summaries collected from the Fandom wiki. The dataset is linguistically unique in that the narratives are generated entirely through player collaboration and spoken interaction. For each dialogue, there are a large number of turns, multiple abstractive summaries with varying levels of detail, and semantic ties to the previous dialogues.
theatticusproject/cuad-qa
theatticusproject
Contract Understanding Atticus Dataset (CUAD) v1 is a corpus of more than 13,000 labels in 510 commercial legal contracts that have been manually labeled to identify 41 categories of important clauses that lawyers look for when reviewing contracts in connection with corporate transactions.
li2017dailydialog/daily_dialog
li2017dailydialog
We develop a high-quality multi-turn dialog dataset, DailyDialog, which is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect the way we communicate in daily life and cover various everyday topics. We also manually label the developed dataset with communication intention and emotion information. Then, we evaluate existing approaches on the DailyDialog dataset and hope it benefits the research field of dialog systems.
community-datasets/definite_pronoun_resolution
community-datasets
Dataset Card for "definite_pronoun_resolution" Dataset Summary Composed by 30 students from one of the author's undergraduate classes. These sentence pairs cover topics ranging from real events (e.g., Iran's plan to attack the Saudi ambassador to the U.S.) to events/characters in movies (e.g., Batman) and purely imaginary situations, largely reflecting the pop culture as perceived by the American kids born in the early 90s. Each annotated example spans four lines:… See the full description on the dataset page: https://huggingface.co/datasets/community-datasets/definite_pronoun_resolution.
community-datasets/disaster_response_messages
community-datasets
Dataset Card for Disaster Response Messages Dataset Summary This dataset contains 30,000 messages drawn from events including an earthquake in Haiti in 2010, an earthquake in Chile in 2010, floods in Pakistan in 2010, super-storm Sandy in the U.S.A. in 2012, and news articles spanning a large number of years and 100s of different disasters. The data has been encoded with 36 different categories related to disaster response and has been stripped of messages with… See the full description on the dataset page: https://huggingface.co/datasets/community-datasets/disaster_response_messages.
google-research-datasets/disfl_qa
google-research-datasets
Dataset Card for DISFL-QA: A Benchmark Dataset for Understanding Disfluencies in Question Answering Dataset Summary Disfl-QA is a targeted dataset for contextual disfluencies in an information seeking setting, namely question answering over Wikipedia passages. Disfl-QA builds upon the SQuAD-v2 (Rajpurkar et al., 2018) dataset, where each question in the dev set is annotated to add a contextual disfluency using the paragraph as a source of distractors. The final… See the full description on the dataset page: https://huggingface.co/datasets/google-research-datasets/disfl_qa.
thunlp/docred
thunlp
Multiple entities in a document generally exhibit complex inter-sentence relations, and cannot be well handled by existing relation extraction (RE) methods that typically focus on extracting intra-sentence relations for single entity pairs. In order to accelerate the research on document-level RE, we introduce DocRED, a new dataset constructed from Wikipedia and Wikidata with three features: - DocRED annotates both named entities and relations, and is the largest human-annotated dataset for document-level RE from plain text. - DocRED requires reading multiple sentences in a document to extract entities and infer their relations by synthesizing all information of the document. - Along with the human-annotated data, we also offer large-scale distantly supervised data, which enables DocRED to be adopted for both supervised and weakly supervised scenarios.
ucinlp/drop
ucinlp
Dataset Card for "drop" Dataset Summary DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs. DROP is a crowdsourced, adversarially-created, 96k-question benchmark, in which a system must resolve references in a question, perhaps to multiple input positions, and perform discrete operations over them (such as addition, counting, or sorting). These operations require a much more comprehensive understanding of the content of… See the full description on the dataset page: https://huggingface.co/datasets/ucinlp/drop.
AUEB-NLP/ecthr_cases
AUEB-NLP
The ECtHR Cases dataset is designed for experimentation of neural judgment prediction and rationale extraction considering ECtHR cases.
SemEvalWorkshop/emo
SemEvalWorkshop
In this dataset, given a textual dialogue i.e. an utterance along with two previous turns of context, the goal was to infer the underlying emotion of the utterance by choosing from four emotion classes - Happy, Sad, Angry and Others.
iamollas/ethos
iamollas
ETHOS: onlinE haTe speecH detectiOn dataSet. This repository contains a dataset for hate speech detection on social media platforms, called Ethos. There are two variations of the dataset: Ethos_Dataset_Binary: contains 998 comments, each with a label indicating the presence or absence of hate speech. 565 of them do not contain hate speech, while the remaining 433 do. Ethos_Dataset_Multi_Label: contains 8 labels for the 433 comments with hate speech content. These labels are violence (whether it incites violence (1) or not (0)), directed_vs_general (whether it is directed at a person (1) or a group (0)), and 6 labels about the category of hate speech, namely gender, race, national_origin, disability, religion and sexual_orientation.
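For reference, a minimal loading sketch for the two variations described above, assuming they are exposed as configurations named binary and multilabel (verify against the dataset card):
```python
from datasets import load_dataset

# Assumed configuration names for the two dataset variations described above.
ethos_binary = load_dataset("iamollas/ethos", "binary")
ethos_multi = load_dataset("iamollas/ethos", "multilabel")

print(ethos_binary["train"][0])  # a comment with its hate-speech label
```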
community-datasets/europa_eac_tm
community-datasets
Dataset Card for Europa Education and Culture Translation Memory (EAC-TM) Dataset Summary This dataset is a corpus of manually produced translations from English into up to 25 languages, released in 2012 by the European Union's Directorate General for Education and Culture (EAC). To load a language pair that is not part of the config, just specify the language codes as the language pair. For example, if you want to translate Czech to Greek: dataset =… See the full description on the dataset page: https://huggingface.co/datasets/community-datasets/europa_eac_tm.
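A minimal sketch of the loading pattern the summary describes, completing the truncated example as an assumption: the language_pair keyword and the cs/el codes are illustrative and should be checked against the dataset card.
```python
from datasets import load_dataset

# Hypothetical call following the pattern sketched above; the keyword argument
# name and the language codes are assumptions, not confirmed by this card.
dataset = load_dataset("community-datasets/europa_eac_tm", language_pair=("cs", "el"))
```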
Helsinki-NLP/europarl
Helsinki-NLP
Dataset Card for OPUS Europarl (European Parliament Proceedings Parallel Corpus) Dataset Summary A parallel corpus extracted from the European Parliament web site by Philipp Koehn (University of Edinburgh). The main intended use is to aid statistical machine translation research. More information can be found at http://www.statmt.org/europarl/ Supported Tasks and Leaderboards Tasks: Machine Translation, Cross Lingual Word Embeddings (CWLE) Alignment… See the full description on the dataset page: https://huggingface.co/datasets/Helsinki-NLP/europarl.
mhardalov/exams
mhardalov
Dataset Card for EXAMS Dataset Summary EXAMS is a benchmark dataset for multilingual and cross-lingual question answering from high school examinations. It consists of more than 24,000 high-quality high school exam questions in 16 languages, covering 8 language families and 24 school subjects from Natural Sciences and Social Sciences, among others. Supported Tasks and Leaderboards [More Information Needed] Languages The… See the full description on the dataset page: https://huggingface.co/datasets/mhardalov/exams.
zalando-datasets/fashion_mnist
zalando-datasets
Dataset Card for FashionMNIST Dataset Summary Fashion-MNIST is a dataset of Zalando's article images—consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. We intend Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and… See the full description on the dataset page: https://huggingface.co/datasets/zalando-datasets/fashion_mnist.
takala/financial_phrasebank
takala
The key arguments for the low utilization of statistical techniques in financial sentiment analysis have been the difficulty of implementation for practical applications and the lack of high quality training data for building such models. Especially in the case of finance and economic texts, annotated collections are a scarce resource and many are reserved for proprietary use only. To resolve the missing training data problem, we present a collection of ∼ 5000 sentences to establish human-annotated standards for benchmarking alternative modeling techniques. The objective of the phrase level annotation task was to classify each example sentence into a positive, negative or neutral category by considering only the information explicitly available in the given sentence. Since the study is focused only on financial and economic domains, the annotators were asked to consider the sentences from the view point of an investor only; i.e. whether the news may have positive, negative or neutral influence on the stock price. As a result, sentences which have a sentiment that is not relevant from an economic or financial perspective are considered neutral. This release of the financial phrase bank covers a collection of 4840 sentences. The selected collection of phrases was annotated by 16 people with adequate background knowledge on financial markets. Three of the annotators were researchers and the remaining 13 annotators were master’s students at Aalto University School of Business with majors primarily in finance, accounting, and economics. Given the large number of overlapping annotations (5 to 8 annotations per sentence), there are several ways to define a majority vote based gold standard. To provide an objective comparison, we have formed 4 alternative reference datasets based on the strength of majority agreement: all annotators agree, >=75% of annotators agree, >=66% of annotators agree and >=50% of annotators agree.
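For reference, a minimal loading sketch for the agreement-based reference datasets described above, assuming configuration names of the form sentences_allagree, sentences_75agree, sentences_66agree and sentences_50agree (verify against the dataset card):
```python
from datasets import load_dataset

# Assumed configuration name for the ">=75% of annotators agree" subset.
phrasebank = load_dataset("takala/financial_phrasebank", "sentences_75agree")
print(phrasebank["train"][0])  # a sentence with its positive/negative/neutral label
```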
Efstathios/guardian_authorship
Efstathios
A dataset for cross-topic authorship attribution. The dataset is provided by Stamatatos 2013. 1- The cross-topic scenarios are based on Table-4 in Stamatatos 2017 (Ex. cross_topic_1 => row 1:P S U&W ). 2- The cross-genre scenarios are based on Table-5 in the same paper. (Ex. cross_genre_1 => row 1:B P S&U&W). 3- The same-topic/genre scenario is created by grouping all the datasets as follows. For example, to use same_topic and split the data 60-40 use: train_ds = load_dataset('guardian_authorship', name="cross_topic_<<#>>", split='train[:60%]+validation[:60%]+test[:60%]') tests_ds = load_dataset('guardian_authorship', name="cross_topic_<<#>>", split='train[-40%:]+validation[-40%:]+test[-40%:]') IMPORTANT: train+validation+test[:60%] will generate the wrong splits because the data is imbalanced * See https://huggingface.co/docs/datasets/splits.html for detailed/more examples
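A runnable version of the split-recombination pattern above, as a sketch: cross_topic_1 is an illustrative configuration standing in for the <<#>> placeholder.
```python
from datasets import load_dataset

# "cross_topic_1" is an illustrative config; substitute any cross_topic_<#> or
# cross_genre_<#> configuration described above.
train_ds = load_dataset(
    "guardian_authorship",
    name="cross_topic_1",
    split="train[:60%]+validation[:60%]+test[:60%]",
)
tests_ds = load_dataset(
    "guardian_authorship",
    name="cross_topic_1",
    split="train[-40%:]+validation[-40%:]+test[-40%:]",
)
```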
ImperialCollegeLondon/health_fact
ImperialCollegeLondon
PUBHEALTH is a comprehensive dataset for explainable automated fact-checking of public health claims. Each instance in the PUBHEALTH dataset has an associated veracity label (true, false, unproven, mixture). Furthermore, each instance in the dataset has an explanation text field. The explanation is a justification for why the claim has been assigned a particular veracity label. The dataset was created to explore fact-checking of difficult-to-verify claims, i.e., those which require expertise from outside the journalistic domain, in this case biomedical and public health expertise. It was also created in response to the lack of fact-checking datasets which provide gold standard natural language explanations for verdicts/labels. NOTE: There are missing labels in the dataset and we have replaced them with -1.
hotpotqa/hotpot_qa
hotpotqa
HotpotQA is a new dataset with 113k Wikipedia-based question-answer pairs with four key features: (1) the questions require finding and reasoning over multiple supporting documents to answer; (2) the questions are diverse and not constrained to any pre-existing knowledge bases or knowledge schemas; (3) we provide sentence-level supporting facts required for reasoning, allowing QA systems to reason with strong supervision and explain the predictions; (4) we offer a new type of factoid comparison questions to test QA systems' ability to extract relevant facts and perform necessary comparison.
iapp/iapp_wiki_qa_squad
iapp
`iapp_wiki_qa_squad` is an extractive question answering dataset from Thai Wikipedia articles. It is adapted from [the original iapp-wiki-qa-dataset](https://github.com/iapp-technology/iapp-wiki-qa-dataset) to [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, resulting in 5761/742/739 questions from 1529/191/192 articles.
ai4bharat/indic_glue
ai4bharat
Dataset Card for "indic_glue" Dataset Summary IndicGLUE is a natural language understanding benchmark for Indian languages. It contains a wide variety of tasks and covers 11 major Indian languages - as, bn, gu, hi, kn, ml, mr, or, pa, ta, te. The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are… See the full description on the dataset page: https://huggingface.co/datasets/ai4bharat/indic_glue.
indonlp/indonlu
indonlp
The IndoNLU benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems for Bahasa Indonesia.
jhu-clsp/jfleg
jhu-clsp
Dataset Card for JFLEG Dataset Summary JFLEG (JHU FLuency-Extended GUG) is an English grammatical error correction (GEC) corpus. It is a gold standard benchmark for developing and evaluating GEC systems with respect to fluency (extent to which a text is native-sounding) as well as grammaticality. For each source document, there are four human-written corrections. Supported Tasks and Leaderboards Grammatical error correction. Languages… See the full description on the dataset page: https://huggingface.co/datasets/jhu-clsp/jfleg.
thu-coai/kd_conv_with_kb
thu-coai
KdConv is a Chinese multi-domain Knowledge-driven Conversation dataset, grounding the topics in multi-turn conversations to knowledge graphs. KdConv contains 4.5K conversations from three domains (film, music, and travel), and 86K utterances with an average turn number of 19.0. These conversations contain in-depth discussions on related topics and natural transitions between multiple topics, while the corpus can also be used for exploration of transfer learning and domain adaptation.
Helsinki-NLP/kde4
Helsinki-NLP
A parallel corpus of KDE4 localization files (v.2). 92 languages, 4,099 bitexts; total number of files: 75,535; total number of tokens: 60.75M; total number of sentence fragments: 8.89M
facebook/kilt_tasks
facebook
Dataset Card for KILT Dataset Summary KILT has been built from 11 datasets representing 5 types of tasks: fact-checking, entity linking, slot filling, open-domain QA, and dialog generation. All these datasets have been grounded in a single pre-processed Wikipedia dump, allowing for fairer and more consistent evaluation as well as enabling new task setups such as multitask and transfer learning with minimal effort. KILT also provides tools to analyze and understand the… See the full description on the dataset page: https://huggingface.co/datasets/facebook/kilt_tasks.
facebook/kilt_wikipedia
facebook
KILT-Wikipedia: Wikipedia pre-processed for KILT.
klue/klue
klue
Dataset Card for KLUE Dataset Summary KLUE is a collection of 8 tasks to evaluate the natural language understanding capability of Korean language models. We deliberately select the 8 tasks, which are Topic Classification, Semantic Textual Similarity, Natural Language Inference, Named Entity Recognition, Relation Extraction, Dependency Parsing, Machine Reading Comprehension, and Dialogue State Tracking. Supported Tasks and Leaderboards Topic… See the full description on the dataset page: https://huggingface.co/datasets/klue/klue.
inmoonlight/kor_hate
inmoonlight
Human-annotated Korean corpus collected from a popular domestic entertainment news aggregation platform for toxic speech detection. Comments are annotated for gender bias, social bias and hate speech.
kakaobrain/kor_nli
kakaobrain
Dataset Card for "kor_nli" Dataset Summary Korean Natural Language Inference datasets. Supported Tasks and Leaderboards More Information Needed Languages More Information Needed Dataset Structure Data Instances multi_nli Size of downloaded dataset files: 42.11 MB Size of the generated dataset: 84.72 MB Total amount of disk used: 126.85 MB An example of 'train' looks as follows.… See the full description on the dataset page: https://huggingface.co/datasets/kakaobrain/kor_nli.
facebook/lama
facebook
LAMA is a dataset used to probe and analyze the factual and commonsense knowledge contained in pretrained language models. See https://github.com/facebookresearch/LAMA.
cimec/lambada
cimec
Dataset Card for LAMBADA Dataset Summary The LAMBADA dataset evaluates the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on… See the full description on the dataset page: https://huggingface.co/datasets/cimec/lambada.
coastalcph/lex_glue
coastalcph
Dataset Card for "LexGLUE" Dataset Summary Inspired by the recent widespread use of the GLUE multi-task benchmark NLP dataset (Wang et al., 2018), the subsequent more difficult SuperGLUE (Wang et al., 2019), other previous multi-task NLP benchmarks (Conneau and Kiela, 2018; McCann et al., 2018), and similar initiatives in other domains (Peng et al., 2019), we introduce the Legal General Language Understanding Evaluation (LexGLUE) benchmark, a benchmark dataset to… See the full description on the dataset page: https://huggingface.co/datasets/coastalcph/lex_glue.
lince-benchmark/lince
lince-benchmark
LinCE is a centralized Linguistic Code-switching Evaluation benchmark (https://ritual.uh.edu/lince/) that contains data for training and evaluating NLP systems on code-switching tasks.
keithito/lj_speech
keithito
This is a public domain speech dataset consisting of 13,100 short audio clips of a single speaker reading passages from 7 non-fiction books in English. A transcription is provided for each clip. Clips vary in length from 1 to 10 seconds and have a total length of approximately 24 hours. Note that in order to limit the required storage for preparing this dataset, the audio is stored in the .wav format and is not converted to a float32 array. To convert the audio file to a float32 array, please make use of the `.map()` function as follows:
```python
import soundfile as sf

def map_to_array(batch):
    speech_array, _ = sf.read(batch["file"])
    batch["speech"] = speech_array
    return batch

dataset = dataset.map(map_to_array, remove_columns=["file"])
```
billion-word-benchmark/lm1b
billion-word-benchmark
A benchmark corpus to be used for measuring progress in statistical language modeling. This has almost one billion words in the training data.
cis-lmu/m_lama
cis-lmu
mLAMA: a multilingual version of the LAMA benchmark (T-REx and GoogleRE) covering 53 languages.
deepmind/math_dataset
deepmind
Mathematics database. This dataset code generates mathematical question and answer pairs, from a range of question types at roughly school-level difficulty. This is designed to test the mathematical learning and algebraic reasoning skills of learning models. Original paper: Analysing Mathematical Reasoning Abilities of Neural Models (Saxton, Grefenstette, Hill, Kohli). Example usage: train_examples, val_examples = datasets.load_dataset( 'math_dataset/arithmetic__mul', split=['train', 'test'], as_supervised=True)
allenai/math_qa
allenai
Our dataset is gathered by using a new representation language to annotate over the AQuA-RAT dataset. AQuA-RAT has provided the questions, options, rationale, and the correct options.
legacy-datasets/mc4
legacy-datasets
A colossal, cleaned version of Common Crawl's web crawl corpus. Based on Common Crawl dataset: "https://commoncrawl.org". This is the processed version of Google's mC4 dataset by AllenAI.
apple/mkqa
apple
We introduce MKQA, an open-domain question answering evaluation set comprising 10k question-answer pairs sampled from the Google Natural Questions dataset, aligned across 26 typologically diverse languages (260k question-answer pairs in total). For each query we collected new passage-independent answers. These queries and answers were then human translated into 25 Non-English languages.
microsoft/ms_marco
microsoft
Dataset Card for "ms_marco" Dataset Summary Starting with a paper released at NIPS 2016, MS MARCO is a collection of datasets focused on deep learning in search. The first dataset was a question answering dataset featuring 100,000 real Bing questions and a human generated answer. Since then we released a 1,000,000 question dataset, a natural language generation dataset, a passage ranking dataset, keyphrase extraction dataset, crawling dataset, and a conversational… See the full description on the dataset page: https://huggingface.co/datasets/microsoft/ms_marco.
levow/msra_ner
levow
The Third International Chinese Language Processing Bakeoff was held in Spring 2006 to assess the state of the art in two important tasks: word segmentation and named entity recognition. Twenty-nine groups submitted result sets in the two tasks across two tracks and a total of five corpora. We found strong results in both tasks as well as continuing challenges. MSRA NER is one of the provided datasets. There are three types of NE: PER (person), ORG (organization) and LOC (location). The dataset is in the BIO scheme. For more details see https://faculty.washington.edu/levow/papers/sighan06.pdf
IWSLT/mt_eng_vietnamese
IWSLT
Preprocessed Dataset from IWSLT'15 English-Vietnamese machine translation: English-Vietnamese.
pfb30/multi_woz_v22
pfb30
Multi-Domain Wizard-of-Oz dataset (MultiWOZ), a fully-labeled collection of human-human written conversations spanning multiple domains and topics. MultiWOZ 2.1 (Eric et al., 2019) identified and fixed many erroneous annotations and user utterances in the original version, resulting in an improved version of the dataset. MultiWOZ 2.2 is yet another improved version of this dataset, which identifies and fixes dialogue state annotation errors across 17.3% of the utterances on top of MultiWOZ 2.1 and redefines the ontology by disallowing vocabularies of slots with a large number of possible values (e.g., restaurant name, time of booking) and introducing standardized slot span annotations for these slots.