id | author | description
---|---|---|
s-nlp/paradetox
|
s-nlp
|
ParaDetox: Text Detoxification with Parallel Data (English)
This repository contains information about the ParaDetox dataset -- the first parallel corpus for the detoxification task -- as well as models and an evaluation methodology for the detoxification of English texts. The original paper, "ParaDetox: Detoxification with Parallel Data", was presented at the ACL 2022 main conference.
📰 Updates
[2024] We have also created versions of ParaDetox in more languages. You can check out a… See the full description on the dataset page: https://huggingface.co/datasets/s-nlp/paradetox.
|
NLPC-UOM/sentence_alignment_dataset-Sinhala-Tamil-English
|
NLPC-UOM
|
Dataset summary
This is a gold-standard benchmark dataset for sentence alignment between the Sinhala, English, and Tamil languages. Data was crawled from the following news websites. The aligned documents annotated in the dataset NLPC-UOM/document_alignment_dataset-Sinhala-Tamil-English were used to annotate the aligned sentences.
News Source | url
---|---
Army | https://www.army.lk/
Hiru | http://www.hirunews.lk
ITN | https://www.newsfirst.lk
Newsfirst… See the full description on the dataset page: https://huggingface.co/datasets/NLPC-UOM/sentence_alignment_dataset-Sinhala-Tamil-English.
|
mteb/amazon_reviews_multi
|
mteb
|
We provide an Amazon product reviews dataset for multilingual text classification. The dataset contains reviews in English, Japanese, German, French, Chinese and Spanish, collected between November 1, 2015 and November 1, 2019. Each record in the dataset contains the review text, the review title, the star rating, an anonymized reviewer ID, an anonymized product ID and the coarse-grained product category (e.g. ‘books’, ‘appliances’, etc.) The corpus is balanced across stars, so each star rating constitutes 20% of the reviews in each language.
For each language, there are 200,000, 5,000 and 5,000 reviews in the training, development and test sets respectively. The maximum number of reviews per reviewer is 20 and the maximum number of reviews per product is 20. All reviews are truncated after 2,000 characters, and all reviews are at least 20 characters long.
Note that the language of a review does not necessarily match the language of its marketplace (e.g. reviews from amazon.de are primarily written in German, but could also be written in English, etc.). For this reason, we applied a language detection algorithm based on the work in Bojanowski et al. (2017) to determine the language of the review text and we removed reviews that were not written in the expected language.
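A sketch of how the per-language splits could be inspected with the datasets library; the configuration name "en" is an assumption, so check the dataset page for the exact configuration and split names:
```python
from datasets import load_dataset

# Load the English configuration; "en" is an assumed config name.
reviews = load_dataset("mteb/amazon_reviews_multi", "en")

# Expect roughly 200,000 / 5,000 / 5,000 reviews per split.
for split, data in reviews.items():
    print(split, len(data))
```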
|
sileod/movie_recommendation
|
sileod
|
Movie recommendation task based on the Movielens dataset
|
ganchengguang/resume_seven_class
|
ganchengguang
|
This is a resume sentence classification dataset constructed from resume text (https://www.kaggle.com/datasets/oo7kartik/resume-text-batch). The dataset has seven categories (experience, education, knowledge, project, others) and three element labels (header, content, meta). Because the dataset comes from a published paper, please cite the following paper if you use this dataset in a paper or other work: https://arxiv.org/abs/2208.03219
The dataset is also used in this article:
https://arxiv.org/abs/2209.09450
|
mteb/sts22-crosslingual-sts
|
mteb
|
Scores in this dataset have been inverted to be from least to most similar!
The scores in the original STS22 task were from most to least similar.
Updates:
2024/07/06: Removed pairs where one of the sentences is empty.
|
LTCB/enwik8
|
LTCB
|
The dataset is based on the Hutter Prize (http://prize.hutter1.net) and contains the first 10^8 bytes of the English Wikipedia XML dump from 2006.
|
lmqg/qg_koquad
|
lmqg
|
[KorQuAD](https://huggingface.co/datasets/squad_kor_v1) dataset for question generation (QG) task.
|
BeIR/msmarco
|
BeIR
|
Dataset Card for BEIR Benchmark
Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
Fact-checking: FEVER, Climate-FEVER, SciFact
Question-Answering: NQ, HotpotQA, FiQA-2018
Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus
News Retrieval: TREC-NEWS, Robust04
Argument Retrieval: Touche-2020, ArguAna
Duplicate Question Retrieval: Quora, CqaDupstack
Citation-Prediction: SCIDOCS
Tweet… See the full description on the dataset page: https://huggingface.co/datasets/BeIR/msmarco.
|
Adapting/empathetic_dialogues_v2
|
Adapting
|
Fine-tuned empathetic dialogue datasets from https://huggingface.co/datasets/empathetic_dialogues, with labeled chat history, system response, question-or-not flag, and behavior.
|
AhmedSSabir/Japanese-wiki-dump-sentence-dataset
|
AhmedSSabir
|
Dataset
5M (5,121,625) clean Japanese full sentences with context. This dataset can be used to learn unsupervised semantic similarity, etc.
|
truthfulqa/truthful_qa
|
truthfulqa
|
Dataset Card for truthful_qa
Dataset Summary
TruthfulQA is a benchmark to measure whether a language model is truthful in generating answers to questions. The benchmark comprises 817 questions that span 38 categories, including health, law, finance and politics. Questions are crafted so that some humans would answer falsely due to a false belief or misconception. To perform well, models must avoid generating false answers learned from imitating human texts.… See the full description on the dataset page: https://huggingface.co/datasets/truthfulqa/truthful_qa.
|
google/bigbench
|
google
|
The Beyond the Imitation Game Benchmark (BIG-bench) is a collaborative benchmark intended to
probe large language models, and extrapolate their future capabilities.
|
Bingsu/Human_Action_Recognition
|
Bingsu
|
Dataset Summary
A dataset from Kaggle. Origin: https://dphi.tech/challenges/data-sprint-76-human-activity-recognition/233/data
Introduction
The dataset features 15 different classes of human activities.
It contains over 12,000 labelled images, including the validation images.
Each image has only one human activity category, and images are saved in separate folders by labelled class.
PROBLEM STATEMENT
Human Action Recognition (HAR) aims to… See the full description on the dataset page: https://huggingface.co/datasets/Bingsu/Human_Action_Recognition.
|
google/quickdraw
|
google
|
The Quick Draw Dataset is a collection of 50 million drawings across 345 categories, contributed by players of the game Quick, Draw!.
The drawings were captured as timestamped vectors, tagged with metadata including what the player was asked to draw and in which country the player was located.
|
sil-ai/bloom-speech
|
sil-ai
|
Bloom-speech is a dataset of text-aligned speech from bloomlibrary.org. It covers over 50 languages, including many low-resource languages, and should be useful for training and/or testing speech-to-text (ASR) or text-to-speech models.
|
speechcolab/gigaspeech
|
speechcolab
|
GigaSpeech is an evolving, multi-domain English speech recognition corpus with 10,000 hours of high quality
labeled audio suitable for supervised training, and 40,000 hours of total audio suitable for semi-supervised
and unsupervised training. Around 40,000 hours of transcribed audio is first collected from audiobooks, podcasts
and YouTube, covering both read and spontaneous speaking styles, and a variety of topics, such as arts, science,
sports, etc. A new forced alignment and segmentation pipeline is proposed to create sentence segments suitable
for speech recognition training, and to filter out segments with low-quality transcription. For system training,
GigaSpeech provides five subsets of different sizes, 10h, 250h, 1000h, 2500h, and 10000h.
For our 10,000-hour XL training subset, we cap the word error rate at 4% during the filtering/validation stage,
and for all our other smaller training subsets, we cap it at 0%. The DEV and TEST evaluation sets, on the other hand,
are re-processed by professional human transcribers to ensure high transcription quality.
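A minimal loading sketch, assuming the subset sizes map to configuration names such as "xs" for the 10h subset; the config and field names are assumptions, and GigaSpeech is gated, so accept the terms on the dataset page first:
```python
from datasets import load_dataset

# "xs" is an assumed name for the 10h training subset.
gs = load_dataset("speechcolab/gigaspeech", "xs", split="train")

sample = gs[0]
print(sample["text"])                    # transcription (field name assumed)
print(sample["audio"]["sampling_rate"])  # decoded audio (field name assumed)
```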
|
SetFit/CR
|
SetFit
|
Customer Reviews
This dataset is a port of the official CR dataset from this paper.
There is no validation split.
|
demo-org/auditor_review
|
demo-org
|
Dataset Card for Auditor_Review
Dataset Description
Auditor review data collected by News Department
Point of Contact:
Talked to COE for Auditing, currently [email protected]
Dataset Summary
Auditor sentiment dataset of sentences from financial news. The dataset consists of 3500 sentences from English language financial news categorized by sentiment. The dataset is divided by the agreement rate of 5-8 annotators.
Supported Tasks and… See the full description on the dataset page: https://huggingface.co/datasets/demo-org/auditor_review.
|
thu-coai/lccc
|
thu-coai
|
LCCC: Large-scale Cleaned Chinese Conversation corpus (LCCC) is a large corpus of Chinese conversations.
A rigorous data cleaning pipeline is designed to ensure the quality of the corpus.
This pipeline involves a set of rules and several classifier-based filters.
Noises such as offensive or sensitive words, special symbols, emojis,
grammatically incorrect sentences, and incoherent conversations are filtered.
|
julien-c/kaggle-hugomathien-soccer
|
julien-c
|
Source: https://www.kaggle.com/datasets/hugomathien/soccer by Hugo Mathien
About Dataset
The ultimate Soccer database for data analysis and machine learning
What you get:
+25,000 matches
+10,000 players
11 European Countries with their lead championship
Seasons 2008 to 2016
Players and Teams' attributes* sourced from EA Sports' FIFA video game series, including the weekly updates
Team line up with squad formation (X, Y coordinates)
Betting odds from up to 10… See the full description on the dataset page: https://huggingface.co/datasets/julien-c/kaggle-hugomathien-soccer.
|
valurank/Adult-content-dataset
|
valurank
|
Dataset Card for Adult_Content_Detection
Dataset Description
850 article descriptions classified into two categories: Adult and Non_Adult
Languages
The text in the dataset is in English
Dataset Structure
The dataset consists of two columns, Description and Category.
The Description column contains the overview of the article, and the Category column contains the class each article belongs to… See the full description on the dataset page: https://huggingface.co/datasets/valurank/Adult-content-dataset.
|
mounikaiiith/Telugu-Sarcasm
|
mounikaiiith
|
Please cite the following references when using the dataset:
@article{marreddy2022resource,
  title={Am I a Resource-Poor Language? Data Sets, Embeddings, Models and Analysis for four different NLP tasks in Telugu Language},
  author={Marreddy, Mounika and Oota, Subba Reddy and Vakada, Lakshmi Sireesha and Chinni, Venkata Charan and Mamidi, Radhika},
  journal={Transactions on Asian and Low-Resource Language Information Processing},
  publisher={ACM New York, NY}
}
@article{marreddy2022multi… See the full description on the dataset page: https://huggingface.co/datasets/mounikaiiith/Telugu-Sarcasm.
|
HuggingFaceM4/Stanford-Cars
|
HuggingFaceM4
|
Code snippet to visualise the position of the box
import matplotlib.image as img
import matplotlib.pyplot as plt
from datasets import load_dataset
from matplotlib.patches import Rectangle
# Load dataset
ds_name = "SaulLu/Stanford-Cars"
ds = load_dataset(ds_name, use_auth_token=True)
# Extract information for the sample we want to show
index = 100
sample = ds["train"][index]
box_coord = sample["bbox"][0]
img_path = sample["image"].filename
# Create plot
# define Matplotlib figure… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceM4/Stanford-Cars.
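The snippet above is cut off; a possible completion, assuming box_coord holds corner coordinates (x1, y1, x2, y2):
```python
# Continuation sketch: draw the image and overlay the bounding box.
fig, ax = plt.subplots()
ax.imshow(img.imread(img_path))

x1, y1, x2, y2 = box_coord  # corner format is an assumption
ax.add_patch(Rectangle((x1, y1), x2 - x1, y2 - y1,
                       fill=False, edgecolor="red", linewidth=2))
plt.show()
```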
|
Shayanvsf/pquad_public
|
Shayanvsf
|
ParSQuAD: Persian Question Answering Dataset based on Machine Translation of SQuAD 2.0
|
Nexdata/Cantonese_Conversational_Speech_Data_by_Mobile_Phone_and_Voice_Recorder
|
Nexdata
|
Dataset Card for Nexdata/Cantonese_Conversational_Speech_Data_by_Mobile_Phone_and_Voice_Recorder
Dataset Summary
995 local Cantonese speakers participated in the recording and conducted face-to-face communication in a natural way. They held free discussions on a number of given topics covering a wide range of fields; the speech is natural and fluent, in line with real dialogue scenes. Text is transcribed manually, with high accuracy.
For more details, please… See the full description on the dataset page: https://huggingface.co/datasets/Nexdata/Cantonese_Conversational_Speech_Data_by_Mobile_Phone_and_Voice_Recorder.
|
iejMac/CLIP-Kinetics700
|
iejMac
|
Dataset Card for CLIP-Kinetics700
Dataset Description
Dataset Summary
CLIP-Kinetics700 is a compressed version of the Kinetics700 dataset using OpenAI's CLIP model.
The original dataset is ~700 GB, making it difficult to use and hold in memory on one machine. By downsampling each video to 1 FPS and encoding the frames using CLIP, we were able to compress the dataset to ~8 GB, making it very memory-friendly and easy to use.
Dataset… See the full description on the dataset page: https://huggingface.co/datasets/iejMac/CLIP-Kinetics700.
|
Nexdata/Far-filed_Noise_Speech_Data_in_Home_Environment_by_Mic-Array
|
Nexdata
|
Dataset Card for Nexdata/Far-filed_Noise_Speech_Data_in_Home_Environment_by_Mic-Array
Dataset Summary
The data consists of multiple sets of products, each with a different type of microphone array. Noise data is collected from real home scenes in the indoor residences of ordinary residents. The dataset can be used for tasks such as voice enhancement and automatic speech recognition in home scenes.
For more details, please refer to the link:… See the full description on the dataset page: https://huggingface.co/datasets/Nexdata/Far-filed_Noise_Speech_Data_in_Home_Environment_by_Mic-Array.
|
Nexdata/Emotional_Video_Data
|
Nexdata
|
Dataset Card for Nexdata/Emotional_Video_Data
Dataset Summary
1,003 People - Emotional Video Data. The data diversity includes multiple races, multiple indoor scenes, multiple age groups, multiple languages, and multiple emotions (11 types of facial emotions, 15 types of inner emotions). For each sentence in each video, emotion types (including facial and inner emotions), start & end time, and text transcription were annotated. This dataset can be used for… See the full description on the dataset page: https://huggingface.co/datasets/Nexdata/Emotional_Video_Data.
|
nateraw/parti-prompts
|
nateraw
|
Dataset Card for PartiPrompts (P2)
Dataset Summary
PartiPrompts (P2) is a rich set of over 1600 prompts in English that we release
as part of this work. P2 can be used to measure model capabilities across
various categories and challenge aspects.
P2 prompts can be simple, allowing us to gauge the progress from scaling. They
can also be complex, such as the following 67-word description we created for
Vincent van Gogh’s The Starry Night (1889):
Oil-on-canvas… See the full description on the dataset page: https://huggingface.co/datasets/nateraw/parti-prompts.
|
CShorten/ML-ArXiv-Papers
|
CShorten
|
This dataset contains the subset of ArXiv papers with the "cs.LG" tag to indicate the paper is about Machine Learning.
The core dataset is filtered from the full ArXiv dataset hosted on Kaggle: https://www.kaggle.com/datasets/Cornell-University/arxiv. The original dataset contains roughly 2 million papers. This dataset contains roughly 100,000 papers following the category filtering.
The dataset is maintained with requests to the ArXiv API.
The current iteration of the dataset only contains… See the full description on the dataset page: https://huggingface.co/datasets/CShorten/ML-ArXiv-Papers.
|
fever/feverous
|
fever
|
Dataset Card for FEVEROUS
Dataset Summary
With billions of individual pages on the web providing information on almost every conceivable topic, we should have
the ability to collect facts that answer almost every conceivable question. However, only a small fraction of this
information is contained in structured sources (Wikidata, Freebase, etc.) – we are therefore limited by our ability to
transform free-form text to structured knowledge. There is, however… See the full description on the dataset page: https://huggingface.co/datasets/fever/feverous.
|
stanfordnlp/concurrentqa
|
stanfordnlp
|
ConcurrentQA is a textual multi-hop QA benchmark that requires concurrent retrieval over multiple data distributions (i.e. Wikipedia and email data). This dataset was constructed by researchers at Stanford and FAIR, following the data collection process and schema of HotpotQA. This benchmark can be used to study generalization in retrieval as well as privacy when reasoning across multiple privacy scopes --- i.e. public Wikipedia documents and private emails.
This dataset is for the… See the full description on the dataset page: https://huggingface.co/datasets/stanfordnlp/concurrentqa.
|
joelniklaus/brazilian_court_decisions
|
joelniklaus
|
Dataset Card for predicting-brazilian-court-decisions
Dataset Summary
The dataset is a collection of 4043 Ementa (summary) court decisions and their metadata from
the Tribunal de Justiça de Alagoas (TJAL, the State Supreme Court of Alagoas, Brazil). The court decisions are labeled
according to 7 categories and whether the decisions were unanimous on the part of the judges or not. The dataset
supports the task of Legal Judgment Prediction.
Supported… See the full description on the dataset page: https://huggingface.co/datasets/joelniklaus/brazilian_court_decisions.
|
rkstgr/mtg-jamendo
|
rkstgr
|
Repackaging of the MTG Jamendo dataset.
We present the MTG-Jamendo Dataset, a new open dataset for music auto-tagging.
It is built using music available at Jamendo under Creative Commons licenses and tags provided by content creators.
The dataset contains over 55,000 full audio tracks with 195 tags from genre, instrument, and mood/theme categories.
|
HuggingFaceM4/VQAv2
|
HuggingFaceM4
|
VQA is a new dataset containing open-ended questions about images. These questions require an understanding of vision, language and commonsense knowledge to answer.
|
codeparrot/codecomplex
|
codeparrot
|
CodeComplex Dataset
Dataset Description
CodeComplex consists of 4,200 Java codes submitted to programming competitions by human programmers and their complexity labels annotated by a group of algorithm experts.
How to use it
You can load and iterate through the dataset with the following two lines of code:
from datasets import load_dataset
ds = load_dataset("codeparrot/codecomplex", split="train")
print(next(iter(ds)))
Data Structure… See the full description on the dataset page: https://huggingface.co/datasets/codeparrot/codecomplex.
|
Nexdata/Human_Facial_Skin_Defects_Data
|
Nexdata
|
Dataset Card for Nexdata/Human_Facial_Skin_Defects_Data
Dataset Summary
4,788 Chinese people, 5,105 images - Human Facial Skin Defects Data. The data includes the following five types of facial skin defects: acne, acne marks, stains, wrinkles, and dark circles. This data can be used for tasks such as skin defect detection.
For more details, please refer to the link: https://www.nexdata.ai/datasets/computervision/1052?source=Huggingface
Supported Tasks and… See the full description on the dataset page: https://huggingface.co/datasets/Nexdata/Human_Facial_Skin_Defects_Data.
|
launch/open_question_type
|
launch
|
Open-ended question type annotated dataset.
|
knkarthick/samsum
|
knkarthick
|
Dataset Card for SAMSum Corpus
Dataset Description
Links
Homepage: https://arxiv.org/abs/1911.12237v2
Repository: https://arxiv.org/abs/1911.12237v2
Paper: https://arxiv.org/abs/1911.12237v2
Point of Contact: https://huggingface.co/knkarthick
Dataset Summary
The SAMSum dataset contains about 16k messenger-like conversations with summaries. Conversations were created and written down by linguists fluent in English. Linguists were… See the full description on the dataset page: https://huggingface.co/datasets/knkarthick/samsum.
|
kensho/spgispeech
|
kensho
|
The SPGISpeech corpus is derived from company earnings calls manually transcribed by S&P Global, Inc. according to a professional style guide detailing conventions for capitalization, punctuation, denormalization of non-standard words and transcription of disfluencies in spontaneous speech. The basic unit of SPGISpeech is a pair consisting of a 5 to 15 second long 16 bit, 16 kHz mono wav audio file and its transcription.
|
codeparrot/github-code-clean
|
codeparrot
|
The GitHub Code clean dataset is a more filtered version of the codeparrot/github-code dataset; it consists of 115M code files from GitHub in 32 programming languages with 60 extensions, totaling almost 1TB of text data.
|
zh-plus/tiny-imagenet
|
zh-plus
|
Dataset Card for tiny-imagenet
Dataset Summary
Tiny ImageNet contains 100,000 images of 200 classes (500 per class) downsized to 64×64 color images. Each class has 500 training images, 50 validation images, and 50 test images.
Languages
The class labels in the dataset are in English.
Dataset Structure
Data Instances
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=64x64 at 0x1A800E8E190… See the full description on the dataset page: https://huggingface.co/datasets/zh-plus/tiny-imagenet.
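A minimal loading sketch based on the instance above; the "label" field name is an assumption:
```python
from datasets import load_dataset

ds = load_dataset("zh-plus/tiny-imagenet")

sample = ds["train"][0]
print(sample["image"].size)  # 64x64 PIL image, as in the instance above
print(sample["label"])       # integer class id ("label" field name assumed)
```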
|
benschill/brain-tumor-collection
|
benschill
|
This dataset is intended as a test case for classification tasks (4 different kinds of brain scans). The dataset consists of almost 1400 JPEG images grouped into two splits, training and validation. Each split contains 4 categories labeled n0-n3, each corresponding to a tumor diagnosis from the MRI scan.
| Label | Xray Category | Train Images | Validation Images |
| ----- | --------------------- | ------------ | ----------------- |
| n0 | glioma_tumor | 826 | 100 |
| n1 | meningioma_tumor | 822 | 115 |
| n2 | pituitary_tumor | 827 | 74 |
| n3 | no_tumor | 395 | 105 |
|
nreimers/reddit_question_best_answers
|
nreimers
|
Questions and question bodies together with the best answers to those questions from Reddit.
The score for a question / answer is its upvote count (i.e. positive minus negative votes).
Only questions / answers with the following properties were extracted (a filter sketch follows below):
min_score = 3
min_title_len = 20
min_body_len = 100
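A hypothetical re-implementation of that extraction filter; the post dict and its field names are invented for illustration:
```python
MIN_SCORE, MIN_TITLE_LEN, MIN_BODY_LEN = 3, 20, 100

def keep(post: dict) -> bool:
    # `post` is an imagined dict holding the raw Reddit fields.
    return (post["score"] >= MIN_SCORE
            and len(post["title"]) >= MIN_TITLE_LEN
            and len(post["body"]) >= MIN_BODY_LEN)
```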
|
codeparrot/xlcost-text-to-code
|
codeparrot
|
XLCoST is a machine learning benchmark dataset that contains fine-grained parallel data in 7 commonly used programming languages (C++, Java, Python, C#, Javascript, PHP, C), and natural language (English).
|
facebook/flores
|
facebook
|
The creation of FLORES-200 doubles the existing language coverage of FLORES-101.
Given the nature of the new languages, which have less standardization and require
more specialized professional translations, the verification process became more complex.
This required modifications to the translation workflow. FLORES-200 has several languages
which were not translated from English. Specifically, several languages were translated
from Spanish, French, Russian and Modern Standard Arabic. Moreover, FLORES-200 also
includes two script alternatives for four languages. FLORES-200 consists of translations
from 842 distinct web articles, totaling 3001 sentences. These sentences are divided
into three splits: dev, devtest, and test (hidden). On average, sentences are approximately
21 words long.
|
elihoole/asrs-aviation-reports
|
elihoole
|
Dataset Card for ASRS Aviation Incident Reports
Dataset Summary
This dataset collects 47,723 aviation incident reports published in the Aviation Safety Reporting System (ASRS) database maintained by NASA.
Supported Tasks and Leaderboards
'summarization': Dataset can be used to train a model for abstractive and extractive summarization. The model performance is measured by how high the output summary's ROUGE score for a given narrative account of… See the full description on the dataset page: https://huggingface.co/datasets/elihoole/asrs-aviation-reports.
|
demelin/moral_stories
|
demelin
|
Moral Stories is a crowd-sourced dataset of structured, branching narratives for the study of grounded, goal-oriented
social reasoning. For detailed information, see https://aclanthology.org/2021.emnlp-main.54.pdf.
|
rajistics/indian_food_images
|
rajistics
|
Source of dataset: Kaggle
This dataset contains images of food in 20 different classes; some of the classes are Indian foods. All the images were extracted from Google. There are few images per class, so data augmentation and transfer learning are best suited here.
Classes of the model: "burger", "butter_naan", "chai", "chapati", "chole_bhature", "dal_makhani", "dhokla", "fried_rice", "idli", "jalebi", "kaathi_rolls", "kadai_paneer", "kulfi", "masala_dosa", "momos", "paani_puri"… See the full description on the dataset page: https://huggingface.co/datasets/rajistics/indian_food_images.
|
tner/mit_movie_trivia
|
tner
|
MIT Movie
|
tner/mit_restaurant
|
tner
|
[mit_restaurant NER dataset](https://groups.csail.mit.edu/sls/downloads/)
|
Muennighoff/xwinograd
|
Muennighoff
|
A multilingual collection of Winograd Schemas in six languages that can be used for evaluation of cross-lingual commonsense reasoning capabilities.
|
deepmind/code_contests
|
deepmind
|
Dataset Card for CodeContests
Dataset Summary
CodeContests is a competitive programming dataset for machine-learning. This
dataset was used when training AlphaCode.
It consists of programming problems, from a variety of sources:
Site | URL | Source
---|---|---
Aizu | https://judge.u-aizu.ac.jp | CodeNet
AtCoder | https://atcoder.jp | CodeNet
CodeChef | https://www.codechef.com | description2code
Codeforces | https://codeforces.com | description2code and Codeforces
HackerEarth… See the full description on the dataset page: https://huggingface.co/datasets/deepmind/code_contests.
|
naver-clova-ix/synthdog-ja
|
naver-clova-ix
|
Donut 🍩 : OCR-Free Document Understanding Transformer (ECCV 2022) -- SynthDoG datasets
For more information, please visit https://github.com/clovaai/donut
The links to the SynthDoG-generated datasets are here:
synthdog-en: English, 0.5M.
synthdog-zh: Chinese, 0.5M.
synthdog-ja: Japanese, 0.5M.
synthdog-ko: Korean, 0.5M.
To generate synthetic datasets with our SynthDoG, please see ./synthdog/README.md and our paper for details.
How to Cite
If you find this work… See the full description on the dataset page: https://huggingface.co/datasets/naver-clova-ix/synthdog-ja.
|
joelniklaus/mapa
|
joelniklaus
|
Dataset Card for Multilingual European Datasets for Sensitive Entity Detection in the Legal Domain
Dataset Summary
The dataset consists of 12 documents (9 for Spanish due to parsing errors) taken from EUR-Lex, a multilingual corpus of court decisions and legal dispositions in the 24 official languages of the European Union. The documents have been annotated for named entities following the guidelines of the MAPA project, which foresees two annotation levels, a… See the full description on the dataset page: https://huggingface.co/datasets/joelniklaus/mapa.
|
FinanceInc/auditor_sentiment
|
FinanceInc
|
Dataset Card for Auditor Sentiment
Dataset Description
Auditor review sentiment collected by News Department
Point of Contact:
Talked to COE for Auditing, currently [email protected]
Dataset Summary
Auditor sentiment dataset of sentences from financial news. The dataset consists of several thousand sentences from English language financial news categorized by sentiment.
Supported Tasks and Leaderboards
Sentiment Classification… See the full description on the dataset page: https://huggingface.co/datasets/FinanceInc/auditor_sentiment.
|
Artificio/WikiArt
|
Artificio
|
Dataset Card for "WikiArt"
More Information needed
|
ttxy/emotion
|
ttxy
|
An English-language dataset of Twitter messages covering six basic emotions (anger, fear, joy, love, sadness, and surprise).
GitHub link: https://github.com/dair-ai/emotion_dataset
|
joelniklaus/lextreme
|
joelniklaus
|
The LEXTREME Benchmark is a collection of multilingual datasets for evaluating model performance
across a diverse set of legal NLU tasks.
|
ziwenyd/transcoder-geeksforgeeks
|
ziwenyd
|
Statistics
cpp-java: 627 pairs
python-java: 616 pairs
cpp-python: 545 pairs
|
daekeun-ml/naver-news-summarization-ko
|
daekeun-ml
|
This dataset is a custom dataset created by the author by crawling Naver News (https://news.naver.com) for a Korean NLP model hands-on.
Period: July 1, 2022 - July 10, 2022
Subject: IT, economics
DatasetDict({
train: Dataset({
features: ['date', 'category', 'press', 'title', 'document', 'link', 'summary'],
num_rows: 22194
})
test: Dataset({
features: ['date', 'category', 'press', 'title', 'document', 'link', 'summary'],
num_rows: 2740
})… See the full description on the dataset page: https://huggingface.co/datasets/daekeun-ml/naver-news-summarization-ko.
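A minimal loading sketch using the fields shown in the printout above:
```python
from datasets import load_dataset

ds = load_dataset("daekeun-ml/naver-news-summarization-ko")

article = ds["train"][0]
# Field names taken from the DatasetDict printout above.
print(article["title"])
print(article["summary"])
```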
|
Corran/pexelvideos
|
Corran
|
Pexel Videos
358,551 video URLs, average length 19.5 s, and associated metadata from pexels.com.
Data was extracted from their video sitemaps (pexels.com/robots.txt) on 01/08/2022.
Data is stored in PexelVideos.parquet.gzip as a gzipped parquet file.
To get this data, ensure you have git installed and run !git lfs clone https://huggingface.co/datasets/Corran/pexelvideos/
In Python, the recommended way to read it is to open the file with pandas.
!pip install pandas
import pandas… See the full description on the dataset page: https://huggingface.co/datasets/Corran/pexelvideos.
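The pandas snippet above is cut off; a possible continuation, assuming the repository was cloned into ./pexelvideos as shown:
```python
import pandas as pd

# Parquet handles its own compression, so read_parquet works directly.
df = pd.read_parquet("pexelvideos/PexelVideos.parquet.gzip")

print(len(df))     # ~358,551 rows
print(df.columns)  # metadata columns, including the video URLs
```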
|
owaiskha9654/PubMed_MultiLabel_Text_Classification_Dataset_MeSH
|
owaiskha9654
|
This dataset consists of approximately 50k research articles from the PubMed repository. Originally, these documents were manually annotated by biomedical experts with their MeSH labels, and each article is described by 10-15 MeSH labels. The dataset contains a huge number of labels present as MeSH majors, which raises the issues of an extremely large output space and severe label sparsity. To solve this issue, the dataset has been processed and mapped to its root as described… See the full description on the dataset page: https://huggingface.co/datasets/owaiskha9654/PubMed_MultiLabel_Text_Classification_Dataset_MeSH.
|
Bingsu/laion2B-multi-korean-subset
|
Bingsu
|
laion2B-multi-korean-subset
About dataset
A subset of laion/laion2B-multi, including only Korean data.
License
CC-BY-4.0
Data Structure
Data Instance
>>> from datasets import load_dataset
>>> dataset = load_dataset("Bingsu/laion2B-multi-korean-subset")
>>> dataset
DatasetDict({
train: Dataset({
features: ['SAMPLE_ID', 'URL', 'TEXT', 'HEIGHT', 'WIDTH', 'LICENSE', 'LANGUAGE', 'NSFW', 'similarity']… See the full description on the dataset page: https://huggingface.co/datasets/Bingsu/laion2B-multi-korean-subset.
|
dali-does/clevr-math
|
dali-does
|
CLEVR-Math is a dataset for compositional language, visual and mathematical reasoning. CLEVR-Math poses questions about mathematical operations on visual scenes using subtraction and addition, such as "Remove all large red cylinders. How many objects are left?". There are also adversarial (e.g. "Remove all blue cubes. How many cylinders are left?") and multihop questions (e.g. "Remove all blue cubes. Remove all small purple spheres. How many objects are left?").
|
pcuenq/oxford-pets
|
pcuenq
|
Oxford-IIIT Pet Dataset
Images from The Oxford-IIIT Pet Dataset. Only images and labels have been pushed, segmentation annotations were ignored.
Homepage: https://www.robots.ox.ac.uk/~vgg/data/pets/
License:
Same as the original dataset.
|
scikit-learn/churn-prediction
|
scikit-learn
|
Customer churn prediction dataset of a fictional telecommunication company made by IBM Sample Datasets.
Context
Predict behavior to retain customers. You can analyze all relevant customer data and develop focused customer retention programs.
Content
Each row represents a customer, each column contains customer’s attributes described on the column metadata.
The data set includes information about:
Customers who left within the last month: the column is called Churn
Services that each customer… See the full description on the dataset page: https://huggingface.co/datasets/scikit-learn/churn-prediction.
|
hoskinson-center/proof-pile
|
hoskinson-center
|
A dataset of high quality mathematical text.
|
SLPL/syntran-fa
|
SLPL
|
SynTran-fa
A syntactically transformed version of Farsi QA datasets, producing fluent responses from questions and short answers. You can use this dataset with the code below:
import datasets
data = datasets.load_dataset('SLPL/syntran-fa', split="train")
Dataset Description
Homepage: Sharif-SLPL
Repository: SynTran-fa
Point of Contact: Sadra Sabouri
Paper: SynTran-fa: Generating Comprehensive Answers for Farsi QA Pairs via Syntactic Transformation
Dataset… See the full description on the dataset page: https://huggingface.co/datasets/SLPL/syntran-fa.
|
UCL-DARK/ludwig
|
UCL-DARK
|
TODO
|
juletxara/visual-spatial-reasoning
|
juletxara
|
The Visual Spatial Reasoning (VSR) corpus is a collection of caption-image pairs with true/false labels. Each caption describes the spatial relation of two individual objects in the image, and a vision-language model (VLM) needs to judge whether the caption is correctly describing the image (True) or not (False).
|
RUCAIBox/Story-Generation
|
RUCAIBox
|
This is the story generation datasets collected by TextBox, including:
ROCStories (roc)
WritingPrompts (wp)
Hippocorpus (hc)
WikiPlots (wikip)
ChangeMyView (cmv).
The detail and leaderboard of each dataset can be found in TextBox page.
|
Bingsu/zeroth-korean
|
Bingsu
|
Zeroth-Korean
The dataset contains transcribed audio data for Korean: 51.6 hours of transcribed Korean audio for training (22,263 utterances, 105 people, 3,000 sentences) and 1.2 hours for testing (457 utterances, 10 people). This corpus also contains a pre-trained/designed language model, a lexicon, and a morpheme-based segmenter (morfessor).
Zeroth project introduces free Korean speech corpus and aims to make… See the full description on the dataset page: https://huggingface.co/datasets/Bingsu/zeroth-korean.
|
AhmedSSoliman/CodeXGLUE-CONCODE
|
AhmedSSoliman
|
Concode dataset
A large dataset with over 100,000 examples consisting of Java classes from online code repositories, used to develop a new encoder-decoder architecture that models the interaction between the method documentation and the class environment.
Concode dataset is a widely used code generation dataset from Iyer's EMNLP 2018 paper Mapping Language to Code in Programmatic Context.
Data statistics of concode dataset are shown in the below table:
#Examples
Train… See the full description on the dataset page: https://huggingface.co/datasets/AhmedSSoliman/CodeXGLUE-CONCODE.
|
GateNLP/broad_twitter_corpus
|
GateNLP
|
This is the Broad Twitter corpus, a dataset of tweets collected over stratified times, places and social uses.
The goal is to represent a broad range of activities, giving a dataset more representative of the language used
in this hardest of social media formats to process. Further, the BTC is annotated for named entities.
For more details see [https://aclanthology.org/C16-1111/](https://aclanthology.org/C16-1111/)
|
MLCommons/peoples_speech
|
MLCommons
|
Dataset Card for People's Speech
Dataset Summary
The People's Speech Dataset is among the world's largest English speech recognition corpora licensed for academic and commercial usage under CC-BY-SA and CC-BY 4.0. It includes 30,000+ hours of transcribed English speech with a diverse set of speakers. This open dataset is large enough to train speech-to-text systems and, crucially, is available with a permissive license.… See the full description on the dataset page: https://huggingface.co/datasets/MLCommons/peoples_speech.
|
allenai/real-toxicity-prompts
|
allenai
|
Dataset Card for Real Toxicity Prompts
Dataset Summary
RealToxicityPrompts is a dataset of 100k sentence snippets from the web for researchers to further address the risk of neural toxic degeneration in models.
Languages
English
Dataset Structure
Data Instances
Each instance represents a prompt and its metadata:
{
"filename":"0766186-bc7f2a64cb271f5f56cf6f25570cd9ed.txt",
"begin":340,
"end":564… See the full description on the dataset page: https://huggingface.co/datasets/allenai/real-toxicity-prompts.
|
edinburghcstr/ami
|
edinburghcstr
|
The AMI Meeting Corpus consists of 100 hours of meeting recordings. The recordings use a range of signals
synchronized to a common timeline. These include close-talking and far-field microphones, individual and
room-view video cameras, and output from a slide projector and an electronic whiteboard. During the meetings,
the participants also have unsynchronized pens available to them that record what is written. The meetings
were recorded in English using three different rooms with different acoustic properties, and include mostly
non-native speakers.
|
ariesutiono/entailment-bank-v3
|
ariesutiono
|
Entailment bank dataset
The raw source of this dataset can be found on AllenAI's GitHub.
If you use this dataset, it is best to cite the original paper
@article{entalmentbank2021,
title={Explaining Answers with Entailment Trees},
author={Dalvi, Bhavana and Jansen, Peter and Tafjord, Oyvind and Xie, Zhengnan and Smith, Hannah and Pipatanangkura, Leighanna and Clark, Peter},
journal={EMNLP},
year={2021}
}
|
tyqiangz/multilingual-sentiments
|
tyqiangz
|
Multilingual Sentiments Dataset
A collection of multilingual sentiments datasets grouped into 3 classes -- positive, neutral, negative.
Most multilingual sentiment datasets are either 2-class positive or negative, 5-class ratings of products reviews (e.g. Amazon multilingual dataset) or multiple classes of emotions. However, to an average person, sometimes positive, negative and neutral classes suffice and are more straightforward to perceive and annotate. Also, a… See the full description on the dataset page: https://huggingface.co/datasets/tyqiangz/multilingual-sentiments.
|
masakhane/mafand
|
masakhane
|
MAFAND-MT is the largest MT benchmark for African languages in the news domain, covering 21 languages. The languages covered are:
- Amharic
- Bambara
- Ghomala
- Ewe
- Fon
- Hausa
- Igbo
- Kinyarwanda
- Luganda
- Luo
- Mossi
- Nigerian-Pidgin
- Chichewa
- Shona
- Swahili
- Setswana
- Twi
- Wolof
- Xhosa
- Yoruba
- Zulu
The train/validation/test sets are available for 16 languages, and validation/test sets for amh, kin, nya, sna, and xho.
For more details see https://aclanthology.org/2022.naacl-main.223/
|
RCC-MSU/collection3
|
RCC-MSU
|
Collection3 is a Russian dataset for named entity recognition annotated with LOC (location), PER (person), and ORG (organization) tags.
Dataset is based on collection Persons-1000 originally containing 1000 news documents labeled only with names of persons.
Additional labels were added by Valerie Mozharova and Natalia Loukachevitch.
Conversion to the IOB2 format and splitting into train, validation and test sets was done by DeepPavlov team.
For more details see https://ieeexplore.ieee.org/document/7584769 and http://labinform.ru/pub/named_entities/index.htm
|
ai-forever/Peter
|
ai-forever
|
Digital Peter
The Peter dataset can be used for reading texts from manuscripts written by Peter the Great. The dataset annotation contains end-to-end markup for training detection and OCR models, as well as an end-to-end model for reading text from pages.
Paper is available at http://arxiv.org/abs/2103.09354
Description
Digital Peter is an educational task with a historical slant created on the basis of several AI technologies (Computer Vision, NLP, and… See the full description on the dataset page: https://huggingface.co/datasets/ai-forever/Peter.
|
kakaobrain/coyo-700m
|
kakaobrain
|
Dataset Card for COYO-700M
Dataset Summary
COYO-700M is a large-scale dataset that contains 747M image-text pairs as well as many other meta-attributes that increase its usability for training various models. Our dataset follows a similar strategy to previous vision-and-language datasets, collecting many informative pairs of alt-text and its associated image in HTML documents. We expect COYO to be used to train popular large-scale foundation models
complementary to… See the full description on the dataset page: https://huggingface.co/datasets/kakaobrain/coyo-700m.
|
iejMac/CLIP-WebVid
|
iejMac
|
|
ShapeNet/ShapeNetSem-archive
|
ShapeNet
|
This repository contains archives (zip files) for ShapeNetSem, a subset of ShapeNet richly annotated with physical attributes.
Please see DATA.md for details about the data.
If you use ShapeNet data, you agree to abide by the ShapeNet terms of use. You are only allowed to redistribute the data to your research associates and colleagues provided that they first agree to be bound by these terms and conditions.
If you use this data, please cite the main ShapeNet technical report and the… See the full description on the dataset page: https://huggingface.co/datasets/ShapeNet/ShapeNetSem-archive.
|
demo-org/diabetes
|
demo-org
|
Dataset Card for Auditor_Review
This file is a copy, the original version is hosted at data.world
|
ShapeNet/shapenetcore-glb
|
ShapeNet
|
This repository contains ShapeNetCore (v2) in GLB format, a subset of ShapeNet. ShapeNetCore is a densely annotated subset of ShapeNet covering 55 common object categories with ~51,300 unique 3D models. Each model in ShapeNetCore is linked to an appropriate synset in WordNet 3.0.
If you use ShapeNet data, you agree to abide by the ShapeNet terms of use. You are only allowed to redistribute the data to your research associates and colleagues provided that they first agree to be bound by… See the full description on the dataset page: https://huggingface.co/datasets/ShapeNet/shapenetcore-glb.
|
indonesian-nlp/librivox-indonesia
|
indonesian-nlp
|
Dataset Card for LibriVox Indonesia 1.0
Dataset Summary
The LibriVox Indonesia dataset consists of MP3 audio and a corresponding text file we generated from the public
domain audiobooks LibriVox. We collected only languages in Indonesia for this dataset.
The original LibriVox audiobooks or sound files' duration varies from a few minutes to a few hours. Each audio
file in the speech dataset now lasts from a few seconds to a maximum of 20 seconds.
We converted… See the full description on the dataset page: https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia.
|
bigbio/biosses
|
bigbio
|
BIOSSES computes similarity of biomedical sentences by utilizing WordNet as the
general domain ontology and UMLS as the biomedical domain specific ontology.
The original paper outlines the approaches with respect to using the annotator
scores as the gold standard. The source view returns all annotator scores
individually, whereas the Bigbio view returns the mean of the annotator
scores.
|
zeroshot/twitter-financial-news-topic
|
zeroshot
|
Dataset Description
The Twitter Financial News dataset is an English-language dataset containing an annotated corpus of finance-related tweets. This dataset is used to classify finance-related tweets for their topic.
The dataset holds 21,107 documents annotated with 20 labels:
topics = {
"LABEL_0": "Analyst Update",
"LABEL_1": "Fed | Central Banks",
"LABEL_2": "Company | Product News",
"LABEL_3": "Treasuries | Corporate Debt",
"LABEL_4": "Dividend"… See the full description on the dataset page: https://huggingface.co/datasets/zeroshot/twitter-financial-news-topic.
|
jonathanli/law-stack-exchange
|
jonathanli
|
Dataset Card for Law Stack Exchange Dataset
Dataset Summary
Dataset from the Law Stack Exchange, as used in "Parameter-Efficient Legal Domain Adaptation".
Citation Information
@inproceedings{li-etal-2022-parameter,
title = "Parameter-Efficient Legal Domain Adaptation",
author = "Li, Jonathan and
Bhambhoria, Rohan and
Zhu, Xiaodan",
booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2022",
month =… See the full description on the dataset page: https://huggingface.co/datasets/jonathanli/law-stack-exchange.
|
ai-forever/school_notebooks_RU
|
ai-forever
|
School Notebooks Dataset
The images of school notebooks with handwritten notes in Russian.
The dataset annotation contains end-to-end markup for training detection and OCR models, as well as an end-to-end model for reading text from pages.
Annotation format
The annotation is in COCO format. The annotation.json should have the following dictionaries:
annotation["categories"] - a list of dicts with a categories info (categotiy names and indexes).… See the full description on the dataset page: https://huggingface.co/datasets/ai-forever/school_notebooks_RU.
|
pysentimiento/spanish-tweets
|
pysentimiento
|
spanish-tweets
A big corpus of tweets for pretraining embeddings and language models
Dataset Summary
A big dataset of (mostly) Spanish tweets for pre-training language models (or other representations).
Supported Tasks and Leaderboards
Language Modeling
Languages
Mostly Spanish, but some Portuguese, English, and other languages.
Dataset Structure
Data Fields
tweet_id: id of the tweet… See the full description on the dataset page: https://huggingface.co/datasets/pysentimiento/spanish-tweets.
|
cannlytics/cannabis_results
|
cannlytics
|
Cannabis results is a dataset of curated cannabis lab test results. The dataset consists of sub-datasets for each state with any public cannabis lab tests, as well as a sub-dataset that includes all results.
|
opentargets/clinical_trial_reason_to_stop
|
opentargets
|
Dataset Card for Clinical Trials's Reason to Stop
Dataset Summary
This dataset contains a curated classification of more than 5000 reasons why a clinical trial has suffered an early stop.
The text has been extracted from clinicaltrials.gov, the largest resource of clinical trial information. The text has been curated by members of the Open Targets organisation, a project aimed at providing data relevant to drug development.
All 17 possible classes have been… See the full description on the dataset page: https://huggingface.co/datasets/opentargets/clinical_trial_reason_to_stop.
|
dclure/laion-aesthetics-12m-umap
|
dclure
|
LAION-Aesthetics :: CLIP → UMAP
This dataset is a CLIP (text) → UMAP embedding of the LAION-Aesthetics dataset - specifically the improved_aesthetics_6plus version, which filters the full dataset to images with scores of > 6 under the "aesthetic" filtering model.
Thanks LAION for this amazing corpus!
The dataset here includes coordinates for 3x separate UMAP fits using different values for the n_neighbors parameter - 10, 30, and 60 - which are broken out as separate columns… See the full description on the dataset page: https://huggingface.co/datasets/dclure/laion-aesthetics-12m-umap.
|
lambdalabs/pokemon-blip-captions
|
lambdalabs
|
Notice of DMCA Takedown Action
We have received a DMCA takedown notice from The Pokémon Company International, Inc.
In response to this action, we have taken down the dataset.
We appreciate your understanding.
|
Bingsu/openwebtext_20p
|
Bingsu
|
openwebtext_20p
first 20% of openwebtext
|