Columns: id (string, 6–121 characters), author (string, 2–42 characters), description (string, 0–6.67k characters)
sedthh/gutenberg_english
sedthh
Dataset Card for Project Gutenberg - English Language eBooks A collection of English language eBooks (48284 rows, 80%+ of all english language books available on the site) from the Project Gutenberg site with metadata removed. Originally collected for https://github.com/LAION-AI/Open-Assistant (follows the OpenAssistant training format) The METADATA column contains catalogue meta information on each book as a serialized JSON: key original column language -… See the full description on the dataset page: https://huggingface.co/datasets/sedthh/gutenberg_english.
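A minimal loading sketch (not part of the card; the split name is an assumption) showing how the serialized METADATA column might be decoded with the datasets library:

import json
from datasets import load_dataset

books = load_dataset("sedthh/gutenberg_english", split="train")  # "train" split assumed
record = books[0]
meta = json.loads(record["METADATA"])  # catalogue info stored as a serialized JSON string
print(meta.get("language"), list(record.keys()))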
HuggingFaceH4/instruction-dataset
HuggingFaceH4
This is the blind eval dataset of high-quality, diverse, human-written instructions with demonstrations. We will be using this for step 3 evaluations in our RLHF pipeline.
MultiCoNER/multiconer_v2
MultiCoNER
Complex named entities (NE), like the titles of creative works, are not simple nouns and pose challenges for NER systems (Ashwini and Choi, 2014). They can take the form of any linguistic constituent, like an imperative clause (“Dial M for Murder”), and do not look like traditional NEs (Persons, Locations, etc.). This syntactic ambiguity makes it challenging to recognize them based on context. We organized the MultiCoNER task (Malmasi et al., 2022) at SemEval-2022 to address these challenges in 11 languages, receiving a very positive community response with 34 system papers. Results confirmed the challenges of processing complex and long-tail NEs: even the largest pre-trained Transformers did not achieve top performance without external knowledge. The top systems infused transformers with knowledge bases and gazetteers. However, such solutions are brittle against out-of-knowledge-base entities and noisy scenarios like the presence of spelling mistakes and typos. We propose MultiCoNER II, which presents novel challenges through new tasks that emphasize the shortcomings of the current top models. MultiCoNER II features complex NER in these languages: 1. English 2. Spanish 3. Hindi 4. Bangla 5. Chinese 6. Swedish 7. Farsi 8. French 9. Italian 10. Portuguese 11. Ukrainian 12. German For more details see https://multiconer.github.io/ ## References * Sandeep Ashwini and Jinho D. Choi. 2014. Targetable named entity recognition in social media. CoRR, abs/1408.0782. * Shervin Malmasi, Anjie Fang, Besnik Fetahu, Sudipta Kar, Oleg Rokhlenko. 2022. SemEval-2022 Task 11: Multilingual Complex Named Entity Recognition (MultiCoNER).
CarperAI/pilev2-dev
CarperAI
Dataset Card for PileV2 Dataset Summary The PileV2 is a larger and more diverse collection of text data mostly focused on English text. Specifically, it is a collection of roughly 40 different data subsets. This includes the 22 subsets from the original Pile plus a heavy focus on additional software-engineering-specific data subsets, including the newly released "The Stack" from bigcode, various programming competition sources, and a number of… See the full description on the dataset page: https://huggingface.co/datasets/CarperAI/pilev2-dev.
pcuenq/lsun-bedrooms
pcuenq
Dataset Card for "lsun-bedrooms" This is a 20% sample of the bedrooms category in LSUN, uploaded as a dataset for convenience. The license for this compilation only is MIT. The data retains the same license as the original dataset. This is (roughly) the code that was used to upload this dataset: import os import shutil from miniai.imports import * from miniai.diffusion import * from datasets import load_dataset path_data = Path('data') path_data.mkdir(exist_ok=True) path =… See the full description on the dataset page: https://huggingface.co/datasets/pcuenq/lsun-bedrooms.
oscar-corpus/OSCAR-2301
oscar-corpus
The Open Super-large Crawled Aggregated coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the Ungoliant architecture.
totuta/youtube_subs_howto100M
totuta
Dataset Card for youtube_subs_howto100M Dataset Summary The youtube_subs_howto100M dataset is an English-language dataset of instruction-response pairs extracted from 309136 YouTube videos. The dataset was originally inspired by and sourced from the HowTo100M dataset, which was developed for natural language search for video clips. Supported Tasks and Leaderboards conversational: The dataset can be used to train a model for instruction (request) and… See the full description on the dataset page: https://huggingface.co/datasets/totuta/youtube_subs_howto100M.
HuggingFaceH4/helpful_instructions
HuggingFaceH4
Helpful Instructions is a dataset of (prompt, completion) pairs that are derived from a variety of public datasets. As the name suggests, it focuses on instructions that are "helpful", i.e. the kind of questions or tasks a human user might instruct an AI assistant to perform.
ELiRF/dacsa
ELiRF
The Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA) corpus is a high-quality large-scale corpus that can be used to train summarization models for Catalan and Spanish. The data provides pairs of news articles and their summaries from different newspapers for both the Catalan and the Spanish languages. The Catalan set contains 725,184 sample pairs from 9 newspapers, and the Spanish set contains 2,120,649 sample pairs from 21 newspapers.
laion/OIG
laion
This is the Open Instruction Generalist Dataset This is our attempt to create a large instruction dataset of medium quality along with a smaller high quality instruction dataset (OIG-small-chip2). The data is in the form of jsonl objects, with at least a 'text' field. Some datasets may also include a 'metadata' field. The 'text' field contains a string of the form of one or more of: <human>: instruction\n<bot>: response <human>: instruction\n<bot>: response .. <human>:… See the full description on the dataset page: https://huggingface.co/datasets/laion/OIG.
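A minimal parsing sketch (illustrative only; the local file name is hypothetical) that splits an OIG-style 'text' string into (speaker, utterance) turns using the <human>:/<bot>: markers described above:

import json
import re

def parse_turns(text):
    # split on the speaker markers while keeping them, then pair each marker with its utterance
    parts = re.split(r"(<human>:|<bot>:)", text)
    turns, speaker = [], None
    for chunk in parts:
        chunk = chunk.strip()
        if chunk in ("<human>:", "<bot>:"):
            speaker = chunk.strip("<>:")
        elif chunk and speaker:
            turns.append((speaker, chunk))
    return turns

with open("oig_subset.jsonl") as f:  # hypothetical local file from the collection
    record = json.loads(next(f))
    print(parse_turns(record["text"]))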
Elfsong/ClinicalDataset
Elfsong
MEDIQA-Chat 2023 Training/Validation Data Task A The training set consists of 1,201 pairs of conversations and associated section headers and contents. The validation set consists of 100 pairs of conversations and their summaries. The full list of normalized section headers: fam/sochx [FAMILY HISTORY/SOCIAL HISTORY] genhx [HISTORY of PRESENT ILLNESS] pastmedicalhx [PAST MEDICAL HISTORY] cc [CHIEF COMPLAINT] pastsurgical [PAST SURGICAL HISTORY] allergy ros… See the full description on the dataset page: https://huggingface.co/datasets/Elfsong/ClinicalDataset.
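A small lookup table (illustrative only) collecting the normalized section headers spelled out above; the remainder of the list is truncated in the card excerpt, so only those entries are included:

SECTION_HEADERS = {
    "fam/sochx": "FAMILY HISTORY/SOCIAL HISTORY",
    "genhx": "HISTORY of PRESENT ILLNESS",
    "pastmedicalhx": "PAST MEDICAL HISTORY",
    "cc": "CHIEF COMPLAINT",
    "pastsurgical": "PAST SURGICAL HISTORY",
    # "allergy", "ros", ... expansions are truncated in the excerpt above
}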
dreamerdeo/finqa
dreamerdeo
dataset_info:
  features:
    - name: id
      dtype: string
    - name: post_text
      sequence: string
    - name: pre_text
      sequence: string
    - name: question
      dtype: string
    - name: answers
      dtype: string
    - name: table
      sequence:
        sequence: string
  splits:
    - name: train
      num_bytes: 26984130
      num_examples: 6251
    - name: validation
      num_bytes: 3757103
      num_examples: 883
    - name: test
      num_bytes: 4838430
      num_examples: 1147
  download_size: 21240722
  dataset_size: 35579663
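A minimal loading sketch (not part of the card) that uses the field and split names listed in the dataset_info above:

from datasets import load_dataset

finqa = load_dataset("dreamerdeo/finqa")
example = finqa["train"][0]
print(example["question"])
print(example["answers"])
print(len(example["pre_text"]), len(example["post_text"]))  # sentence lists around the table
print(example["table"][0])  # first row of the table, a list of strings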
Yehor/opentts-uk
Yehor
Open Text-to-Speech voices for 🇺🇦 Ukrainian Community Discord: https://discord.gg/yVAjkBgmt4 Speech Recognition: https://t.me/speech_recognition_uk Speech Synthesis: https://t.me/speech_synthesis_uk License All licenses are listed in https://github.com/egorsmkv/ukrainian-tts-datasets Development
uv venv --python 3.12
source .venv/bin/activate
uv pip install -r requirements.txt
uv pip install -r requirements-dev.txt
Den4ikAI/russian_instructions
Den4ikAI
New version: https://huggingface.co/datasets/Den4ikAI/russian_instructions_2 A Russian dataset of instructions and QA. Dataset structure: { "dialogue":[ "Как я могу улучшить свою связь между телом и разумом?", "Начните с разработки регулярной практики осознанности. 2. Обязательно практикуйте баланс на нескольких уровнях: физическом, эмоциональном, умственном и духовном. 3. Свяжитесь с природой, когда это возможно - идите на прогулки или бегайте на улице, или просто сидите в парке и… See the full description on the dataset page: https://huggingface.co/datasets/Den4ikAI/russian_instructions.
Multimodal-Fatima/OK-VQA_test
Multimodal-Fatima
Dataset Card for "OK-VQA_test" More Information needed
nielsr/countbench
nielsr
Dataset Card for "countbench" More Information needed
johnpaulbin/autotrain-data-english-tokipona
johnpaulbin
AutoTrain Dataset for project: english-tokipona Dataset Description This dataset has been automatically processed by AutoTrain for project english-tokipona. Languages The BCP-47 code for the dataset's language is unk. Dataset Structure Data Instances A sample from this dataset looks as follows: [ { "target": "mi kama jo e pali ante.", "source": "I'll find another job." }, { "target": "tenpo pini weka… See the full description on the dataset page: https://huggingface.co/datasets/johnpaulbin/autotrain-data-english-tokipona.
HuggingFaceH4/cherry_picked_prompts
HuggingFaceH4
Dataset Card for Cherry Picked Prompts 🍒 Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using this raw template. Supported Tasks and Leaderboards [More Information Needed] Languages [More Information Needed] Dataset Structure Data Instances [More Information Needed] Data Fields [More Information Needed] Data Splits [More… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceH4/cherry_picked_prompts.
karandox/Islay
karandox
1
ontocord/OIG-moderation
ontocord
This is the Open Instruction Generalist - Moderation Dataset This is our attempt to create a diverse dataset of user dialogue that may be related to NSFW subject matters, abuse eliciting text, privacy violation eliciting instructions, depression or related content, hate speech, and other similar topics. We use the [prosocial], [anthropic redteam], subsets of [English wikipedia] datasets along with other public datasets described below and data created or contributed by… See the full description on the dataset page: https://huggingface.co/datasets/ontocord/OIG-moderation.
intfloat/query2doc_msmarco
intfloat
This dataset contains GPT-3.5 (text-davinci-003) generations from MS-MARCO queries.
mozilla-foundation/common_voice_12_0
mozilla-foundation
Dataset Card for Common Voice Corpus 12.0 Dataset Summary The Common Voice dataset consists of a unique MP3 and corresponding text file. Many of the 26119 recorded hours in the dataset also include demographic metadata like age, sex, and accent that can help improve the accuracy of speech recognition engines. The dataset currently consists of 17127 validated hours in 104 languages, but more voices and languages are always added. Take a look at the Languages… See the full description on the dataset page: https://huggingface.co/datasets/mozilla-foundation/common_voice_12_0.
lvwerra/stack-exchange-paired
lvwerra
StackExchange Paired This is a processed version of the HuggingFaceH4/stack-exchange-preferences dataset. The following steps were applied:
- Parse HTML to Markdown with markdownify
- Create pairs (response_j, response_k) where j was rated better than k
- Sample at most 10 pairs per question
- Shuffle the dataset globally
This dataset is designed to be used for preference learning. The processing notebook is in the repository as well.
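A minimal sketch (not from the card) of how the pairs might be consumed for preference learning; the response_j/response_k columns follow the description above, and the question column name and split are assumptions:

from datasets import load_dataset

pairs = load_dataset("lvwerra/stack-exchange-paired", split="train")  # assumed split name
for row in pairs.select(range(3)):
    chosen = row["question"] + "\n\n" + row["response_j"]    # response rated better
    rejected = row["question"] + "\n\n" + row["response_k"]  # response rated worse
    print(len(chosen), len(rejected))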
HuggingFaceGECLM/StackExchange_Mar2023
HuggingFaceGECLM
Dataset Card for "StackExchange_Mar2023" More Information needed
semeru/Text-Code-concode-Java
semeru
Dataset is imported from CodeXGLUE and pre-processed using their script. Where to find in Semeru: The dataset can be found at /nfs/semeru/semeru_datasets/code_xglue/text-to-code/concode in Semeru CodeXGLUE -- Text2Code Generation Here are the dataset and pipeline for text-to-code generation task. Task Definition Generate source code of class member functions in Java, given natural language description and class environment. Class… See the full description on the dataset page: https://huggingface.co/datasets/semeru/Text-Code-concode-Java.
humarin/chatgpt-paraphrases
humarin
This is a dataset of paraphrases created by ChatGPT. A model based on this dataset is available: model We used this prompt to generate paraphrases: Generate 5 similar paraphrases for this question, show it like a numbered list without commentaries: {text} This dataset is based on the Quora paraphrase questions, texts from SQuAD 2.0 and the CNN news dataset. We generated 5 paraphrases for each sample; in total this dataset has about 420k data rows. You can make 30 rows from a row… See the full description on the dataset page: https://huggingface.co/datasets/humarin/chatgpt-paraphrases.
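A minimal sketch (illustrative only) of turning the numbered-list output requested by the prompt above into a plain Python list of paraphrases:

import re

raw_output = "1. First rewording\n2. Second rewording\n3. Third rewording"
paraphrases = [re.sub(r"^\d+\.\s*", "", line).strip()
               for line in raw_output.splitlines() if line.strip()]
print(paraphrases)  # ['First rewording', 'Second rewording', 'Third rewording']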
musabg/commoncrawl-tr
musabg
Dataset Card for "commoncrawl-tr" More Information needed
JosephusCheung/GuanacoDataset
JosephusCheung
Sorry, it's no longer available on Hugging Face. Please reach out to those who have already downloaded it. If you have a copy, please refrain from re-uploading it to Hugging Face. The people here don't deserve it. See also: https://twitter.com/RealJosephus/status/1779913520529707387 GuanacoDataset News: We're heading towards multimodal VQA, with blip2-flan-t5-xxl alignment to the Guanaco 7B LLM. Still under construction: GuanacoVQA weight & GuanacoVQA Dataset Notice: Effective… See the full description on the dataset page: https://huggingface.co/datasets/JosephusCheung/GuanacoDataset.
sedthh/fd_dialogue
sedthh
Dataset Card for "fd_dialogue" This dataset contains transcripts for famous movies and TV shows from https://transcripts.foreverdreaming.org/ The dataset contains only a small portion of Forever Dreaming's data, as only transscripts with a clear dialogue format are included, such as: PERSON 1: Hello PERSON 2: Hello Person 2! (they are both talking) Something else happens PERSON 1: What happened? Each row in the dataset is a single TV episode or movie. (5380 rows total)… See the full description on the dataset page: https://huggingface.co/datasets/sedthh/fd_dialogue.
nguha/legalbench
nguha
LegalBench is a collection of benchmark tasks for evaluating legal reasoning in large language models.
dominguesm/alpaca-data-pt-br
dominguesm
NOTE: This is a machine translated version of the yahma/alpaca-cleaned dataset. Dataset Card for Alpaca-Cleaned Repository: https://github.com/gururise/AlpacaDataCleaned Dataset Description This is a cleaned version of the original Alpaca Dataset released by Stanford. The following issues have been identified in the original release and fixed in this dataset: Hallucinations: Many instructions in the original dataset had instructions referencing data on the… See the full description on the dataset page: https://huggingface.co/datasets/dominguesm/alpaca-data-pt-br.
naxalpha/stable-icons-128
naxalpha
Dataset Card for "stable-icons-128" More Information needed
MiteshRege/indian_food_images
MiteshRege
Dataset Card for "indian_food_images" More Information needed
RyokoAI/Fandom23K
RyokoAI
Dataset Card for Fandom23K The BigKnow2022 dataset and its subsets are not yet complete. Not all information here may be accurate or accessible. Dataset Summary Fandom23K is a dataset composed of 15,616,749 articles scraped from approximately 23,665 Fandom.com wikis between March 14 and March 18, 2023. It is a subset of the upcoming BigKnow2022 dataset. Supported Tasks and Leaderboards This dataset is primarily intended for unsupervised training of… See the full description on the dataset page: https://huggingface.co/datasets/RyokoAI/Fandom23K.
shainahub/clinical_bias
shainahub
Who is the target audience for this dataset? The target audience includes researchers and practitioners in the healthcare and natural language processing domains interested in studying biases in clinical texts and developing models to detect and mitigate such biases. What do I need to know to use this dataset? Users should have a basic understanding of clinical texts, biases, and natural language processing. Data Fields SUBJECT_ID: A unique… See the full description on the dataset page: https://huggingface.co/datasets/shainahub/clinical_bias.
simpletransformers/celeba_with_captions
simpletransformers
Dataset Card for "celeba_with_captions" More Information needed
bertin-project/alpaca-spanish
bertin-project
BERTIN Alpaca Spanish This dataset is a translation to Spanish of alpaca_data_cleaned.json, a clean version of the Alpaca dataset made at Stanford. An earlier version used Facebook's NLLB 1.3B model, but the current version uses OpenAI's gpt-3.5-turbo, hence this dataset cannot be used to create models that compete in any way against OpenAI.
GBaker/MedQA-USMLE-4-options-hf-MPNet-IR
GBaker
Dataset Card for "MedQA-USMLE-4-options-hf-MPNet-IR" More Information needed
WynterJones/chatgpt-roles
WynterJones
StorySmithGPT - You are StorySmithGPT and you excel at crafting immersive and engaging stories. Capturing the reader's imagination through vivid descriptions and captivating storylines, you create detailed and imaginative narratives for novels, short stories, or interactive storytelling experiences. TimeWarpGPT - You are TimeWarpGPT and you specialize in exploring alternate historical events. Constructing well-researched scenarios with plausible outcomes based on historical knowledge, you… See the full description on the dataset page: https://huggingface.co/datasets/WynterJones/chatgpt-roles.
carlosejimenez/wikitext__wikitext-2-raw-v1
carlosejimenez
Dataset Card for "wikitext__wikitext-2-raw-v1" More Information needed
sunzeyeah/chinese_chatgpt_corpus
sunzeyeah
Dataset Card for chinese_chatgpt_corpus Dataset Summary This repo collects chinese corpus for Supervised Finetuning (SFT) and Reinforcement Learning From Human Feedback (RLHF). Supported Tasks and Leaderboards More Information Needed Languages Chinese Dataset Structure Data Instances train_data_external_v1.jsonl Size of downloaded dataset files: 5.04 GB Size of the generated dataset: 0 GB… See the full description on the dataset page: https://huggingface.co/datasets/sunzeyeah/chinese_chatgpt_corpus.
bigcode/bigcode-pii-dataset
bigcode
PII dataset Dataset description This is an annotated dataset for Personal Identifiable Information (PII) in code. The target entities are: Names, Usernames, Emails, IP addresses, Keys, Passwords, and IDs. The annotation process involved 1,399 crowd-workers from 35 countries with Toloka. It consists of 12,099 samples of ~50 lines of code in 31 programming languages. You can also find a PII detection model that we trained on this dataset at bigcode-pii-model.… See the full description on the dataset page: https://huggingface.co/datasets/bigcode/bigcode-pii-dataset.
ryderwishart/hellenistic-greek-plaintext
ryderwishart
Dataset Card for "hellenistic-greek-plaintext" More Information needed
semeru/Code-Code-CloneDetection-BigCloneBench
semeru
Dataset is imported from CodeXGLUE and pre-processed using their script. Where to find in Semeru: The dataset can be found at /nfs/semeru/semeru_datasets/code_xglue/code-to-code/Clone-detection-BigCloneBench in Semeru CodeXGLUE -- Clone Detection (BCB) Task Definition Given two code snippets as input, the task is binary classification (0/1), where 1 stands for semantic equivalence and 0 otherwise. Models are evaluated by F1 score.… See the full description on the dataset page: https://huggingface.co/datasets/semeru/Code-Code-CloneDetection-BigCloneBench.
theblackcat102/codex-math-qa
theblackcat102
Solution by codex-davinci-002 for math_qa
Multimodal-Fatima/Food101_train
Multimodal-Fatima
Dataset Card for "Food101_train" More Information needed
semeru/code-code-CodeCompletion-TokenLevel-Python
semeru
Dataset is imported from CodeXGLUE and pre-processed using their script. Where to find in Semeru: The dataset can be found at /nfs/semeru/semeru_datasets/code_xglue/code-to-code/CodeCompletion-token/dataset/py150 in Semeru CodeXGLUE -- Code Completion (token level) Update 2021.07.30: We update the code completion dataset with literals normalized to avoid sensitive information. Here is the introduction and pipeline for token level code completion… See the full description on the dataset page: https://huggingface.co/datasets/semeru/code-code-CodeCompletion-TokenLevel-Python.
semeru/code-code-DefectDetection
semeru
Dataset is imported from CodeXGLUE and pre-processed using their script. Where to find in Semeru: The dataset can be found at /nfs/semeru/semeru_datasets/code_xglue/code-to-code/Defect-detection in Semeru CodeXGLUE -- Defect Detection Task Definition Given a piece of source code, the task is to identify whether it is insecure code that may attack software systems, through issues such as resource leaks, use-after-free vulnerabilities and DoS attacks. We… See the full description on the dataset page: https://huggingface.co/datasets/semeru/code-code-DefectDetection.
semeru/code-code-MethodGeneration
semeru
Dataset is imported from CodeXGLUE and pre-processed using their script. Where to find in Semeru: The dataset can be found at /nfs/semeru/semeru_datasets/code_xglue/code-to-code/Method-Generation/dataset/codexglue_method_generation in Semeru CodeXGLUE -- Method Generation Here is the introduction and pipeline for method generation task. Task Definition Method generation is the prediction of a method body implementation conditioned on… See the full description on the dataset page: https://huggingface.co/datasets/semeru/code-code-MethodGeneration.
bigcode/bigcode-pii-dataset-training
bigcode
Bigcode PII Training Dataset Dataset Description This is the dataset used for the training of bigcode-pii-model (after training on pseudo-labeled data). It is a concatenation of an early version of bigcode-pii-dataset which had fewer samples, and pii-for-code (a dataset with 400 files we annotated in a previous iteration: MORE INFO TO BE ADDED). Files with AMBIGUOUS and ID were excluded. Each PII subtype was remapped to its supertype. Statistics The… See the full description on the dataset page: https://huggingface.co/datasets/bigcode/bigcode-pii-dataset-training.
semeru/code-text-java
semeru
Dataset is imported from CodeXGLUE and pre-processed using their script. Where to find in Semeru: The dataset can be found at /nfs/semeru/semeru_datasets/code_xglue/code-to-text/java in Semeru CodeXGLUE -- Code-To-Text Task Definition The task is to generate natural language comments for a code snippet, evaluated by smoothed BLEU-4 score. Dataset The dataset we use comes from CodeSearchNet and we filter the dataset as the… See the full description on the dataset page: https://huggingface.co/datasets/semeru/code-text-java.
semeru/code-text-python
semeru
Dataset is imported from CodeXGLUE and pre-processed using their script. Where to find in Semeru: The dataset can be found at /nfs/semeru/semeru_datasets/code_xglue/code-to-text/python in Semeru CodeXGLUE -- Code-To-Text Task Definition The task is to generate natural language comments for a code snippet, evaluated by smoothed BLEU-4 score. Dataset The dataset we use comes from CodeSearchNet and we filter the dataset as the… See the full description on the dataset page: https://huggingface.co/datasets/semeru/code-text-python.
semeru/code-text-ruby
semeru
Dataset is imported from CodeXGLUE and pre-processed using their script. Where to find in Semeru: The dataset can be found at /nfs/semeru/semeru_datasets/code_xglue/code-to-text/ruby in Semeru CodeXGLUE -- Code-To-Text Task Definition The task is to generate natural language comments for a code snippet, evaluated by smoothed BLEU-4 score. Dataset The dataset we use comes from CodeSearchNet and we filter the dataset as the… See the full description on the dataset page: https://huggingface.co/datasets/semeru/code-text-ruby.
SALT-NLP/positive_reframing
SALT-NLP
Positive Psychology Frames Inducing Positive Perspectives with Text Reframing [Read the Paper] | [Download the Data] | [Demo] Why Positive Frames? This work was inspired by the need to escape the negative patterns of thinking that began to overwhelm the authors during the COVID-19 pandemic. We realized that what we needed was not some naive belief that everything would be okay if we ignored our problems. Instead, we needed reframing, or a shift in focus, with… See the full description on the dataset page: https://huggingface.co/datasets/SALT-NLP/positive_reframing.
metaeval/reclor
metaeval
https://whyu.me/reclor/
@inproceedings{yu2020reclor,
  author = {Yu, Weihao and Jiang, Zihang and Dong, Yanfei and Feng, Jiashi},
  title = {ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning},
  booktitle = {International Conference on Learning Representations (ICLR)},
  month = {April},
  year = {2020}
}
pszemraj/fleece2instructions-codealpaca
pszemraj
codealpaca for text2text generation This dataset was downloaded from the sahil280114/codealpaca github repo and parsed into text2text format for "generating" instructions. It was downloaded under the wonderful Creative Commons Attribution-NonCommercial 4.0 International Public License (see snapshots of the repo and data license), so that license applies to this dataset. Note that the inputs and instruction columns in the original dataset have been aggregated together for… See the full description on the dataset page: https://huggingface.co/datasets/pszemraj/fleece2instructions-codealpaca.
pkyoyetera/luganda_english_dataset
pkyoyetera
Dataset Card for "luganda_english_dataset" More Information needed Dataset might contain a few mistakes, espeecially on the one word translations. Indicators for verbs and nouns (v.i and n.i) may not have been completely filtered out properly.
shibing624/alpaca-zh
shibing624
Dataset Card for "alpaca-zh" 本数据集是参考Alpaca方法基于GPT4得到的self-instruct数据,约5万条。 Dataset from https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM It is the chinese dataset from https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM/blob/main/data/alpaca_gpt4_data_zh.json Usage and License Notices The data is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should… See the full description on the dataset page: https://huggingface.co/datasets/shibing624/alpaca-zh.
Biddls/Onion_News
Biddls
This is a dataset of Onion news articles: Note The header and body of each news article are separated by a ' #~# ' token Lines with just the token had no body or no header and can be skipped Feel free to use the script provided to scrape the latest version; it takes about 30 mins on an i7-6850K
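A minimal parsing sketch (the local file name is hypothetical) that splits each line on the ' #~# ' token and skips lines with a missing header or body, as described above:

articles = []
with open("onion_news.txt", encoding="utf-8") as f:  # hypothetical local export of the dataset
    for line in f:
        parts = line.strip().split(" #~# ")
        if len(parts) == 2 and all(parts):  # skip lines where the header or body is empty
            header, body = parts
            articles.append({"header": header, "body": body})
print(len(articles))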
VISION-Workshop/VISION-Datasets
VISION-Workshop
Dataset Card for VISION Datasets Dataset Summary The VISION Datasets are a collection of 14 industrial inspection datasets, designed to explore the unique challenges of vision-based industrial inspection. These datasets are carefully curated from Roboflow and cover a wide range of manufacturing processes, materials, and industries. To further enable precise defect segmentation, we annotate each dataset with polygon labels based on the provided bounding box labels.… See the full description on the dataset page: https://huggingface.co/datasets/VISION-Workshop/VISION-Datasets.
MortenTabaka/LandCover-Aerial-Imagery-for-semantic-segmentation
MortenTabaka
LandCover.ai: Dataset for Automatic Mapping of Buildings, Woodlands, Water and Roads from Aerial Imagery My project based on the dataset can be found on GitHub: https://github.com/MortenTabaka/Semantic-segmentation-of-LandCover.ai-dataset The dataset used in this project is the Landcover.ai Dataset, which was originally published with the LandCover.ai: Dataset for Automatic Mapping of Buildings, Woodlands, Water and Roads from Aerial Imagery paper, also accessible on… See the full description on the dataset page: https://huggingface.co/datasets/MortenTabaka/LandCover-Aerial-Imagery-for-semantic-segmentation.
koutch/staqc
koutch
StaQC (Stack Overflow Question-Code pairs) is a dataset of around 148K Python and 120K SQL domain question-code pairs, which are automatically mined from Stack Overflow using a Bi-View Hierarchical Neural Network, as described in the paper "StaQC: A Systematically Mined Question-Code Dataset from Stack Overflow" (WWW'18).
shibing624/CSC
shibing624
Dataset Card for CSC (Chinese Spelling Correction dataset) Repository: https://github.com/shibing624/pycorrector Dataset Description Chinese Spelling Correction (CSC) is a task to detect and correct misspelled characters in Chinese texts. CSC is challenging since many Chinese characters are visually or phonologically similar but with quite different semantic meanings. The dataset contains about 270,000 samples in JSON format with error character position information, obtained by merging and curating the original SIGHAN 2013/2014/2015 datasets and the Wang271k dataset. Original Dataset Summary… See the full description on the dataset page: https://huggingface.co/datasets/shibing624/CSC.
TimoImhof/TriviaQA-in-SQuAD-format
TimoImhof
Dataset Card for "TriviaQA-in-SQuAD-format" More Information needed
spdenisov/word_aligned_translation
spdenisov
Dataset Card for "word_aligned_translation" More Information needed
gbharti/finance-alpaca
gbharti
This dataset is a combination of Stanford's Alpaca (https://github.com/tatsu-lab/stanford_alpaca) and FiQA (https://sites.google.com/view/fiqa/) with another 1.3k pairs custom generated using GPT3.5 Script for tuning through Kaggle's (https://www.kaggle.com) free resources using PEFT/LoRa: https://www.kaggle.com/code/gbhacker23/wealth-alpaca-lora GitHub repo with performance analyses, training and data generation scripts, and inference notebooks: https://github.com/gaurangbharti1/wealth-alpaca… See the full description on the dataset page: https://huggingface.co/datasets/gbharti/finance-alpaca.
bigcode/humanevalpack
bigcode
Dataset Card for HumanEvalPack Dataset Summary HumanEvalPack is an extension of OpenAI's HumanEval to cover 6 total languages across 3 tasks. The Python split is exactly the same as OpenAI's Python HumanEval. The other splits are translated by humans (similar to HumanEval-X but with additional cleaning, see here). Refer to the OctoPack paper for more details. Languages: Python, JavaScript, Java, Go, C++, Rust OctoPack🐙🎒: Data CommitPack 4TB of GitHub… See the full description on the dataset page: https://huggingface.co/datasets/bigcode/humanevalpack.
turuta/Multi30k-uk
turuta
Ukrainian Multi30k
vidhikatkoria/DA_MultiWOZ_hotel
vidhikatkoria
Dataset Card for "DA_MultiWOZ_hotel" More Information needed
Francesco/construction-safety-gsnvb
Francesco
Dataset Card for construction-safety-gsnvb ** The original COCO dataset is stored at dataset.tar.gz** Dataset Summary construction-safety-gsnvb Supported Tasks and Leaderboards object-detection: The dataset can be used to train a model for Object Detection. Languages English Dataset Structure Data Instances A data point comprises an image and its object annotations. { 'image_id': 15, 'image':… See the full description on the dataset page: https://huggingface.co/datasets/Francesco/construction-safety-gsnvb.
Francesco/animals-ij5d2
Francesco
Dataset Card for animals-ij5d2 ** The original COCO dataset is stored at dataset.tar.gz** Dataset Summary animals-ij5d2 Supported Tasks and Leaderboards object-detection: The dataset can be used to train a model for Object Detection. Languages English Dataset Structure Data Instances A data point comprises an image and its object annotations. { 'image_id': 15, 'image':… See the full description on the dataset page: https://huggingface.co/datasets/Francesco/animals-ij5d2.
Francesco/signatures-xc8up
Francesco
Dataset Card for signatures-xc8up ** The original COCO dataset is stored at dataset.tar.gz** Dataset Summary signatures-xc8up Supported Tasks and Leaderboards object-detection: The dataset can be used to train a model for Object Detection. Languages English Dataset Structure Data Instances A data point comprises an image and its object annotations. { 'image_id': 15, 'image':… See the full description on the dataset page: https://huggingface.co/datasets/Francesco/signatures-xc8up.
Francesco/pests-2xlvx
Francesco
Dataset Card for pests-2xlvx ** The original COCO dataset is stored at dataset.tar.gz** Dataset Summary pests-2xlvx Supported Tasks and Leaderboards object-detection: The dataset can be used to train a model for Object Detection. Languages English Dataset Structure Data Instances A data point comprises an image and its object annotations. { 'image_id': 15, 'image': <PIL.JpegImagePlugin.JpegImageFile… See the full description on the dataset page: https://huggingface.co/datasets/Francesco/pests-2xlvx.
OttoYu/TreeHK40
OttoYu
AutoTrain Dataset for project: treehk Dataset Description This dataset has been automatically processed by AutoTrain for project treehk. Languages The BCP-47 code for the dataset's language is unk. Dataset Structure Data Instances A sample from this dataset looks as follows: [ { "image": "<245x358 RGB PIL image>", "target": 0 }, { "image": "<400x530 RGB PIL image>", "target": 0 } ]… See the full description on the dataset page: https://huggingface.co/datasets/OttoYu/TreeHK40.
mattmdjaga/human_parsing_dataset
mattmdjaga
Dataset Card for Human parsing data (ATR) Dataset Summary This dataset has 17,706 image and mask pairs. It is just a copy of the Deep Human Parsing ATR dataset. The mask labels are: "0": "Background", "1": "Hat", "2": "Hair", "3": "Sunglasses", "4": "Upper-clothes", "5": "Skirt", "6": "Pants", "7": "Dress", "8": "Belt", "9": "Left-shoe", "10": "Right-shoe", "11": "Face", "12": "Left-leg", "13": "Right-leg"… See the full description on the dataset page: https://huggingface.co/datasets/mattmdjaga/human_parsing_dataset.
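A small lookup table (not from the card) restating the mask labels listed above; the remaining label ids are truncated in the excerpt and therefore omitted:

ID2LABEL = {
    0: "Background", 1: "Hat", 2: "Hair", 3: "Sunglasses", 4: "Upper-clothes",
    5: "Skirt", 6: "Pants", 7: "Dress", 8: "Belt", 9: "Left-shoe",
    10: "Right-shoe", 11: "Face", 12: "Left-leg", 13: "Right-leg",
    # ids beyond 13 are cut off in the excerpt above
}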
taesiri/imagenet-hard
taesiri
Dataset Card for "ImageNet-Hard" Project Page - ArXiv - Paper - Github - Image Browser Dataset Summary ImageNet-Hard is a new benchmark that comprises 10,980 images collected from various existing ImageNet-scale benchmarks (ImageNet, ImageNet-V2, ImageNet-Sketch, ImageNet-C, ImageNet-R, ImageNet-ReaL, ImageNet-A, and ObjectNet). This dataset poses a significant challenge to state-of-the-art vision models as merely zooming in often fails to improve their ability to… See the full description on the dataset page: https://huggingface.co/datasets/taesiri/imagenet-hard.
BelleGroup/train_1M_CN
BelleGroup
Contents This dataset contains about 1 million Chinese instruction examples generated by the BELLE project. Example { "instruction": "给定一个文字输入,将其中的所有数字加1。\n“明天的会议在9点开始,记得准时到达。”\n", "input": "", "output": "“明天的会议在10点开始,记得准时到达。”" } Fields: instruction: the instruction input: the input (empty for every sample in this dataset) output: the output Usage restrictions This dataset and any derivatives produced with it may only be used for research purposes; commercial use, and any other use that could harm society, is not permitted. The dataset does not represent the position, interests, or views of any party and is unrelated to any claims of any group; the project assumes no responsibility for any damage or dispute arising from its use.
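A minimal loading sketch (not part of the card; the split name is an assumption) that assembles a prompt from the instruction/input/output fields described above:

from datasets import load_dataset

belle = load_dataset("BelleGroup/train_1M_CN", split="train")  # assumed split name
row = belle[0]
prompt = row["instruction"] + (row["input"] or "")  # input is empty for all samples here
print(prompt)
print(row["output"])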
BelleGroup/train_0.5M_CN
BelleGroup
Contents This dataset contains about 500,000 Chinese instruction examples generated by the BELLE project. Example { "instruction": "给定一个文字输入,将其中的所有数字加1。\n“明天的会议在9点开始,记得准时到达。”\n", "input": "", "output": "“明天的会议在10点开始,记得准时到达。”" } Fields: instruction: the instruction input: the input (empty for every sample in this dataset) output: the output Usage restrictions This dataset and any derivatives produced with it may only be used for research purposes; commercial use, and any other use that could harm society, is not permitted. The dataset does not represent the position, interests, or views of any party and is unrelated to any claims of any group; the project assumes no responsibility for any damage or dispute arising from its use.
rcds/swiss_criticality_prediction
rcds
This dataset contains Swiss federal court decisions for the legal criticality prediction task
AlienKevin/LIHKG
AlienKevin
Scraped conversations of the LIHKG forum. Content scraped by Ayaka: https://github.com/ayaka14732/lihkg-scraper
Corianas/GPT_Tasks
Corianas
This is a synthetic database of questions for testing InstructGPTs on. It came about as I couldn't think of good examples when asked and got a bit out of hand.
bigbio/cardiode
bigbio
First freely available and distributable large German clinical corpus from the cardiovascular domain.
M-AI-C/quran_tafseer
M-AI-C
Dataset Card for "quran_tafseer" More Information needed
rjjan/reuters21578
rjjan
The Reuters-21578 dataset is one of the most widely used data collections for text categorization research. It was collected from the Reuters financial newswire service in 1987.
RyokoAI/ScribbleHub17K
RyokoAI
Dataset Card for ScribbleHub17K The BigKnow2022 dataset and its subsets are not yet complete. Not all information here may be accurate or accessible. Dataset Summary ScribbleHub17K is a dataset consisting of text from over 373,000 chapters across approximately 17,500 series posted on the original story sharing site Scribble Hub. Supported Tasks and Leaderboards This dataset is primarily intended for unsupervised training of text generation models;… See the full description on the dataset page: https://huggingface.co/datasets/RyokoAI/ScribbleHub17K.
axiong/pmc_oa
axiong
Foundation models trained on large-scale datasets have recently surged in CV and NLP. In contrast, development in the biomedical domain lags far behind due to data scarcity. To address this issue, we build and release PMC-OA, a biomedical dataset with 1.6M image-caption pairs collected from PubMedCentral's OpenAccess subset, which is 8 times larger than before. PMC-OA covers diverse modalities and diseases, with the majority of the image-caption samples aligned at a finer-grained level, i.e., subfigure and subcaption. By pretraining a CLIP-style model on PMC-OA, our model, PMC-CLIP, achieves state-of-the-art results on various downstream tasks, including image-text retrieval on ROCO, MedMNIST image classification, and Medical VQA, i.e. +8.1% R@10 on image-text retrieval, +3.9% accuracy on image classification.
jiaheillu/sovits_audio_preview
jiaheillu
Preview. 简体中文 | English | 日本語 This repository previews the results of various voice models trained with so-vits-svc-4.0; click a character name to jump to the corresponding training parameters. Google Chrome is recommended, as other browsers may fail to load the preview audio correctly. Timbre conversion of normal speech is fairly accurate; songs span a wide vocal range and BGM/backing vocals are hard to remove cleanly, so the results there are somewhat degraded. If you have a song you would like converted as a demo, or other content suggestions, click me to start a discussion. Below are the preview audios; scroll up, down, left and right to see everything. Character name | character's original voice A | converted voice B | B with A's timbre | cover sung in A's timbre (click to download) 散兵 夢で会えたら 胡桃 moonlight shadow… See the full description on the dataset page: https://huggingface.co/datasets/jiaheillu/sovits_audio_preview.
justinpinkney/trailer-faces-hq
justinpinkney
Trailer Faces HQ (TFHQ) Trailer Faces High Quality (TFHQ) is a large dataset of high-resolution face images sourced from movie trailers. Details TFHQ was collected by downloading all movie trailers and featurettes listed on the Apple Movie Trailers website as of August 2022. These 15,379 trailers were downloaded at Full HD (1080p) resolution, amounting to approximately 2 TB/507 hours of video. Face detection was performed on every frame using the pre-trained… See the full description on the dataset page: https://huggingface.co/datasets/justinpinkney/trailer-faces-hq.
BelleGroup/multiturn_chat_0.8M
BelleGroup
Multiturn Chat 0.8M Contents This dataset contains about 800,000 multi-turn dialogues between users and an assistant, generated by the BELLE project. Note: this dataset was generated by ChatGPT and has not been strictly verified; it may contain errors, so please keep this in mind when using it. The instruction field contains the preceding context of the multi-turn dialogue, with turns marked by Human: and Assistant:; the output field contains the current assistant reply. Example { "instruction":… See the full description on the dataset page: https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M.
tanmaykm/indian_dance_forms
tanmaykm
This dataset is taken from https://www.kaggle.com/datasets/aditya48/indian-dance-form-classification but is originally from the HackerEarth deep learning contest on identifying Indian dance forms. All credit for the dataset goes to them. Content The dataset consists of 599 images belonging to 8 categories, namely manipuri, bharatanatyam, odissi, kathakali, kathak, sattriya, kuchipudi, and mohiniyattam. The original dataset was quite unstructured and all the images were put… See the full description on the dataset page: https://huggingface.co/datasets/tanmaykm/indian_dance_forms.
gbharti/wealth-alpaca_lora
gbharti
This dataset is a combination of Stanford's Alpaca (https://github.com/tatsu-lab/stanford_alpaca) and FiQA (https://sites.google.com/view/fiqa/) with another 1.3k pairs custom generated using GPT3.5 Script for tuning through Kaggle's (https://www.kaggle.com) free resources using PEFT/LoRa: https://www.kaggle.com/code/gbhacker23/wealth-alpaca-lora
YeungNLP/firefly-train-1.1M
YeungNLP
This data is used in the project Firefly (流萤): a Chinese conversational large language model; training on it produced the firefly-1b4 model. If you find this dataset helpful, please like it and star our GitHub project. We collected 23 common Chinese datasets and, for each task, manually wrote several instruction templates to ensure high quality and diversity, for a total of 1.15 million samples. The data distribution is shown in the figure below: Each record has the following format, containing the task type, input, and target output: { "kind": "ClassicalChinese", "input": "将下面句子翻译成现代文:\n石中央又生一树,高百余尺,条干偃阴为五色,翠叶如盘,花径尺余,色深碧,蕊深红,异香成烟,著物霏霏。", "target": "大石的中央长着一棵树,一百多尺高,枝干是彩色的,树叶有盘子那样大,花的直径有一尺宽,花瓣深蓝色,花中飘出奇异的香气笼罩着周围,如烟似雾。" } The token length distribution of the training set is shown in the figure below; the vast majority of samples are shorter than 600 tokens:
sayakpaul/poses-controlnet-dataset
sayakpaul
Dataset Card for "poses-controlnet-dataset" The dataset was prepared using this Colab Notebook:
pythainlp/thailaw
pythainlp
Dataset Card for "thailaw" English Thai Law Dataset (Act of Parliament) Data source from Office of the Council of State, Thailand. https://www.krisdika.go.th/ This part of PyThaiNLP Project. License Dataset is public domain. Download https://github.com/PyThaiNLP/thai-law/releases This hub based on Thailaw v0.2. Thai คลังข้อมูลกฎหมายไทย (พระราชบัญญัติ) ข้อมูลเก็บรวบรวมมาจากเว็บไซต์สำนักงานคณะกรรมการกฤษฎีกา https://www.krisdika.go.th/… See the full description on the dataset page: https://huggingface.co/datasets/pythainlp/thailaw.
THUIR/T2Ranking
THUIR
T2Ranking Introduction T2Ranking is a large-scale Chinese benchmark for passage ranking. The details about T2Ranking are elaborated in this paper. Passage ranking is an important and challenging topic for both academia and industry in the area of Information Retrieval (IR). The goal of passage ranking is to compile a search result list ordered in terms of relevance to the query from a large passage collection. Typically, passage ranking involves two stages:… See the full description on the dataset page: https://huggingface.co/datasets/THUIR/T2Ranking.
philschmid/sharegpt-raw
philschmid
Preparation
pip3 install -r requirements.txt
Data Cleaning Merge the two raw JSON files and beautify the merged JSON:
python merge.py sharegpt_90k_raw_dataset/sg_90k_part1.json sharegpt_90k_raw_dataset/sg_90k_part2.json sharegpt_20230401_html_unformatted.json
python pretty_json.py --in sharegpt_20230401_html_unformatted.json --out sharegpt_20230401_html.json
(Optional) Verify the JSON file:
if jq empty sharegpt_20230401_html.json 2>/dev/null; then echo… See the full description on the dataset page: https://huggingface.co/datasets/philschmid/sharegpt-raw.
somosnlp-hackathon-2023/DiagTrast
somosnlp-hackathon-2023
Dataset Card for "DiagTrast" Table of Content Table of Contents Dataset Description Dataset Summary Supported Tasks and Leaderboards Languages Dataset Structure Data Instances Data Fields Data Splits Dataset Creation Curation Rationale Source Data Annotations Considerations for Using the Data Social Impact of Dataset Discussion of Biases Other Known Limitations Team members Dataset Description Dataset Summary For the… See the full description on the dataset page: https://huggingface.co/datasets/somosnlp-hackathon-2023/DiagTrast.
maykcaldas/smiles-transformers
maykcaldas
smiles-transformers dataset TODO: Add references to the datasets we curated dataset features
name: text Molecule SMILES : string
name: formula Molecular formula : string
name: NumHDonors Number of hydrogen bond donors : int
name: NumHAcceptors Number of hydrogen bond acceptors : int
name: MolLogP Wildman-Crippen LogP : float
name: NumHeteroatoms Number of hetero atoms : int
name: RingCount Number of rings : int
name: NumRotatableBonds Number of… See the full description on the dataset page: https://huggingface.co/datasets/maykcaldas/smiles-transformers.
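A minimal sketch (an assumption, since the card does not say how the features were computed) showing how the listed descriptors could be obtained for one SMILES string with RDKit:

from rdkit import Chem
from rdkit.Chem import Descriptors, rdMolDescriptors

mol = Chem.MolFromSmiles("CCO")  # ethanol, as an example molecule
features = {
    "formula": rdMolDescriptors.CalcMolFormula(mol),
    "NumHDonors": Descriptors.NumHDonors(mol),
    "NumHAcceptors": Descriptors.NumHAcceptors(mol),
    "MolLogP": Descriptors.MolLogP(mol),
    "NumHeteroatoms": Descriptors.NumHeteroatoms(mol),
    "RingCount": Descriptors.RingCount(mol),
    "NumRotatableBonds": Descriptors.NumRotatableBonds(mol),
}
print(features)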
fddemarco/pushshift-reddit-comments
fddemarco
Dataset Card for "pushshift-reddit" More Information needed
FourthBrainGenAI/Product-Descriptions-and-Ads
FourthBrainGenAI
Synthetic Dataset for Product Descriptions and Ads The basic process was as follows:
1. Prompt GPT-4 to create a list of 100 sample clothing items and descriptions for those items.
2. Split the output into the desired format `{"product" : "", "description" : ""}` (see the sketch below).
3. Prompt GPT-4 to create adverts for each of the 100 samples based on their name and description.
This data was not cleaned or verified manually.
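A minimal sketch (illustrative only; the "Product: description" output shape is an assumption, since the exact GPT-4 output format is not given) of splitting one output line into the target record format from step 2:

def to_record(line):
    product, _, description = line.partition(":")
    return {"product": product.strip(), "description": description.strip()}

print(to_record("Linen Summer Dress: A lightweight, breathable dress for warm days."))
# {'product': 'Linen Summer Dress', 'description': 'A lightweight, breathable dress for warm days.'}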