id: string (6–121 chars) · author: string (2–42 chars) · description: string (0–6.67k chars)
rd124/samanantar_100K_hindi
rd124
Dataset Card for "samanantar_100K_hindi" More Information needed
twrightsman/phytozome_genomes
twrightsman
Phytozome Genomes This dataset consists of 100 current (as of 2023-06-20) unrestricted Phytozome genome assemblies, listed in genomes.tsv. Currently, the training, validation, and test splits are a random 90/5/5% split, but this will change before version 1.0. Individual observations are entire chromosomes/contigs, so it is up to the user to chunk them further to an appropriate size (see the sketch below). Quickstart:
$ conda env create --file environment.yml
$ conda activate…
See the full description on the dataset page: https://huggingface.co/datasets/twrightsman/phytozome_genomes.
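As an illustration, a minimal chunking sketch in Python, assuming the dataset loads with 🤗 Datasets and that the sequence column is named "sequence" (an assumption, not confirmed by the card):

from datasets import load_dataset

# Minimal sketch: split each chromosome/contig into fixed-size windows.
# The column name "sequence" and the window size are assumptions.
ds = load_dataset("twrightsman/phytozome_genomes", split="train")

def chunk(example, window=1000):
    seq = example["sequence"]
    return {"chunks": [seq[i:i + window] for i in range(0, len(seq), window)]}

chunked = ds.map(chunk, remove_columns=ds.column_names)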
anujsahani01/English-Marathi
anujsahani01
This dataset was prepared by collecting English-Marathi translations from different sources. Happy fine-tuning 😀
cryptom/ceval-exam
cryptom
C-Eval is a comprehensive Chinese evaluation suite for foundation models. It consists of 13,948 multiple-choice questions spanning 52 diverse disciplines and four difficulty levels.
Xenova/quickdraw
Xenova
Dataset Card for Quick, Draw! This is a processed version of Google's Quick, Draw! dataset, made compatible with the latest versions of 🤗 Datasets that support .parquet files. NOTE: this dataset only contains the "preprocessed_bitmaps" subset of the original dataset.
Xenova/quickdraw-small
Xenova
Dataset Card for "quickdraw-small" More Information needed
pankajmathur/alpaca_orca
pankajmathur
Explain-tuned Alpaca dataset (~52K samples) created using approaches from the Orca research paper. We leverage all 15 system instructions provided in the Orca research paper to generate custom datasets, in contrast to the vanilla instruction-tuning approaches used by the original datasets. This helps student models like orca_mini_13b learn the thought process of the teacher model, ChatGPT (gpt-3.5-turbo-0301). Please see how the system prompt is added before each instruction.
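As an illustration, a minimal Python sketch of prepending a system prompt before each instruction; the field names ("system", "instruction", "response") are assumptions, not confirmed by the card:

# Hypothetical field names; only the "system prompt before each instruction"
# layout is taken from the card.
def format_example(ex: dict) -> str:
    return (
        f"{ex['system']}\n\n"
        f"### Instruction:\n{ex['instruction']}\n\n"
        f"### Response:\n{ex['response']}"
    )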
musabg/wizard_vicuna_70k_unfiltered_tr
musabg
Dataset Card for "wizard_vicuna_70k_unfiltered_tr" More Information needed
umarbutler/open-australian-legal-corpus
umarbutler
Open Australian Legal Corpus ⚖️ The Open Australian Legal Corpus is the first and only multijurisdictional open corpus of Australian legislative and judicial documents. Comprising 229,122 texts totalling over 80 million lines and 1.4 billion tokens, the Corpus includes every in-force statute and regulation in the Commonwealth, New South Wales, Queensland, Western Australia, South Australia, Tasmania and Norfolk Island, in addition to thousands of bills and hundreds of… See the full description on the dataset page: https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus.
hezarai/lscp-pos-500k
hezarai
This is a 500-thousand-sample version of the original LSCP dataset that contains only the text and part-of-speech tags, and is used for sequence labeling. Citation:
@InProceedings{abdikhojasteh:2020:LREC,
  author = {Abdi Khojasteh, Hadi and Ansari, Ebrahim and Bohlouli, Mahdi},
  title = {LSCP: Enhanced Large Scale Colloquial Persian Language Understanding},
  booktitle = {Proceedings of the Twelfth International Conference on Language Resources and Evaluation (LREC 2020)}…
See the full description on the dataset page: https://huggingface.co/datasets/hezarai/lscp-pos-500k.
Iess/chinese_modern_poetry
Iess
Introduction The dataset includes works by modern and contemporary Chinese poets as well as foreign poets (in Chinese translation). All works remain the copyright of their original authors; for removal requests, contact [email protected]. chinese_poems.jsonl is the raw data; training_imagery2-5_maxlen256.json are the datasets for generating poems from 2–5 key images, respectively. The data was collected from the web, including but not limited to https://github.com/sheepzh/poetry, https://bedtimepoem.com/, https://poemwiki.org/, Baidu, Google, Zhihu, etc. Some works: poems generated by ChatGLM and LLaMA-7B models trained on this dataset; see the poems directory for more.
haonan-li/cmmlu
haonan-li
CMMLU is a comprehensive Chinese assessment suite specifically designed to evaluate the advanced knowledge and reasoning abilities of LLMs within the Chinese language and cultural context.
ControlNet/LAV-DF
ControlNet
Localized Audio Visual DeepFake Dataset (LAV-DF) This repo is the dataset for the DICTA paper Do You Really Mean That? Content Driven Audio-Visual Deepfake Dataset and Multimodal Method for Temporal Forgery Localization (Best Award), and the journal paper "Glitch in the Matrix!": A Large Scale Benchmark for Content Driven Audio-Visual Forgery Detection and Localization submitted to CVIU. LAV-DF Dataset Download To use this LAV-DF dataset, you… See the full description on the dataset page: https://huggingface.co/datasets/ControlNet/LAV-DF.
nisaar/Constitution_of_India
nisaar
Hello World
Anthropic/llm_global_opinions
Anthropic
Dataset Card for GlobalOpinionQA Dataset Summary The data contains a subset of survey questions about global issues and opinions adapted from the World Values Survey and Pew Global Attitudes Survey. The data is further described in the paper: Towards Measuring the Representation of Subjective Global Opinions in Language Models. Purpose In our paper, we use this dataset to analyze the opinions that large language models (LLMs) reflect on complex… See the full description on the dataset page: https://huggingface.co/datasets/Anthropic/llm_global_opinions.
FreedomIntelligence/alpaca-gpt4-korean
FreedomIntelligence
This dataset is used in research related to MultilingualSIFT.
FreedomIntelligence/alpaca-gpt4-portuguese
FreedomIntelligence
This dataset is used in research related to MultilingualSIFT.
wendlerc/RenderedText
wendlerc
This dataset has been created by Stability AI and LAION. This dataset contains 12 million 1024x1024 images of handwritten text on a digital 3D sheet of paper, generated using Blender geometry nodes and rendered using Blender Cycles. The text varies in font size, color, and rotation, and the paper was rendered under random lighting conditions. Note that the first 10 million examples are in the root folder of this dataset repository and the remaining 2 million are in ./remaining (due… See the full description on the dataset page: https://huggingface.co/datasets/wendlerc/RenderedText.
globis-university/aozorabunko-clean
globis-university
Overview This dataset provides a convenient and user-friendly format of data from Aozora Bunko (青空文庫), a website that compiles public-domain books in Japan, ideal for machine learning applications. [For Japanese] A Japanese-language summary is available on Qiita: https://qiita.com/akeyhero/items/b53eae1c0bc4d54e321f Methodology The code to reproduce this dataset is available on GitHub: globis-org/aozorabunko-exctractor. 1. Data collection We first downloaded the CSV… See the full description on the dataset page: https://huggingface.co/datasets/globis-university/aozorabunko-clean.
rafaelpadilla/interior-cgi
rafaelpadilla
This new dataset contains CG interior images representing the interiors of houses in 5 classes, with 1,000 images per class.
MLCommons/speech-wikimedia
MLCommons
Dataset Card for Speech Wikimedia Dataset Summary The Speech Wikimedia Dataset is a compilation of audio files with transcriptions extracted from Wikimedia Commons, licensed for academic and commercial usage under CC and public domain. It includes 2,000+ hours of transcribed speech in different languages with a diverse set of speakers. Each audio file has one or more transcriptions in different languages. Transcription languages… See the full description on the dataset page: https://huggingface.co/datasets/MLCommons/speech-wikimedia.
nampdn-ai/tiny-webtext
nampdn-ai
Tiny WebText The Tiny WebText dataset is designed to help models learn about perception on web text while neutralizing the bias of the source text using critical thinking methods. By providing a rich and diverse set of texts, I aim to improve the ability of models to understand and analyze information in a more objective and unbiased manner. This dataset can be used to train and evaluate natural language processing and machine learning models, with the goal of improving their… See the full description on the dataset page: https://huggingface.co/datasets/nampdn-ai/tiny-webtext.
wisenut-nlp-team/namu
wisenut-nlp-team
from datasets import load_dataset

raw_dataset = load_dataset(
    "wisenut-nlp-team/namu",
    "raw",
    use_auth_token="<your personal/api token>"
)

processed_dataset = load_dataset(
    "wisenut-nlp-team/namu",
    "processed",
    use_auth_token="<your personal/api token>"
)
bigcode/commitpackft
bigcode
CommitPackFT is a 2GB filtered version of CommitPack containing only high-quality commit messages that resemble natural-language instructions.
ecnu-icalk/educhat-sft-002-data-osm
ecnu-icalk
Each data entry consists of a list holding the dialogue and a system_prompt corresponding to that entry. The list stores the dialogue turns in Q, A order. The data comes from open-source datasets and was deduplicated using the CleanTool data-cleaning tool.
fimu-docproc-research/dataset_easy_ocr_v0.3.0_multipage_cleaned
fimu-docproc-research
Dataset Card for "dataset_easy_ocr_v0.3.0_multipage_cleaned" More Information needed
CAiRE/YueMotion
CAiRE
YueMotion is a Cantonese speech emotion dataset.
causal-lm/instructions
causal-lm
Merged Instructions Dataset A merged dataset of instructions and their responses.
Einstellung/wiki_art
Einstellung
This dataset was created for the Medellín AI and Bancolombia workshop for educational purposes.
BAAI/COIG-PC-Lite
BAAI
COIG Prompt Collection License Default Licensing for Sub-Datasets Without Specific License Declaration: In instances where sub-datasets within the COIG-PC Dataset do not have a specific license declaration, the Apache License 2.0 (Apache-2.0) will be the applicable licensing terms by default. Precedence of Declared Licensing for Sub-Datasets: For any sub-dataset within the COIG-PC Dataset that has an explicitly declared license, the terms and conditions of the… See the full description on the dataset page: https://huggingface.co/datasets/BAAI/COIG-PC-Lite.
DataHammer/emotional_dialog
DataHammer
Scientific Emotional Dialogue Dataset Summary This is a dataset for emotional multi-turn dialogue with scientific research personnel. It consists of 1,069 dialogues with 2,709 turns. The dialogues were first written by NLP practitioners and then expanded by GPT-4. Supported Tasks and Leaderboards Emotional Dialogue: The dataset can be used for instruction tuning for emotional dialogue. Languages Chinese Dataset Structure… See the full description on the dataset page: https://huggingface.co/datasets/DataHammer/emotional_dialog.
numind/NuSentiment
numind
250k sentences (40k after balancing the classes) from the C4 dataset (a clean version of Common Crawl) with sentiment annotations (Positive, Negative, Neutral), automatically annotated with GPT-3.5. Can be used to train a generic (domain-agnostic) sentiment analysis model. Labels: 0: Positive, 1: Negative, 2: Neutral.
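A minimal loading sketch; the label ids follow the card, while the column names ("text", "label") are assumptions:

from datasets import load_dataset

# Label ids per the card: 0 = Positive, 1 = Negative, 2 = Neutral.
# The column names ("text", "label") are assumptions.
id2label = {0: "Positive", 1: "Negative", 2: "Neutral"}
ds = load_dataset("numind/NuSentiment", split="train")
print(ds[0]["text"], id2label[ds[0]["label"]])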
Waterhorse/chess_data
Waterhorse
The Chess Dataset Dataset Summary The dataset consists of three datasets described in the paper: ChessCLIP dataset: annotated PGNs for training CLIP. ChessGPT Base dataset: game, language, and mixed datasets for training ChessGPT-Base. ChessGPT Chat dataset: conversational dataset for training ChessGPT-Chat. For legal reasons, for the ChessGPT dataset we do not open-source the chess-book, chess-forum, chess-blog, and… See the full description on the dataset page: https://huggingface.co/datasets/Waterhorse/chess_data.
shi3z/anthropic_hh_rlhf_japanese
shi3z
A Japanese translation of https://huggingface.co/datasets/Anthropic/hh-rlhf.
commaai/comma2k19
commaai
comma2k19 is a dataset of over 33 hours of commuting on California's Highway 280: 2019 segments, each 1 minute long, over a 20 km section of highway driving between San Jose and San Francisco. comma2k19 is a fully reproducible and scalable dataset. The data was collected using comma EONs, which have sensors similar to those of any modern smartphone, including a road-facing camera, phone GPS, thermometers, and a 9-axis IMU. Additionally, the EON captures raw GNSS measurements and all CAN data sent by the car, via a comma grey panda.
jiaqianjing/PatentData
jiaqianjing
Data source China Patent Information Center. Field descriptions patent_id: patent number; patent_pub_date: patent publication date; title: patent title; applicant: applicant/organization; application_date: application date; inventors: inventors; summary: abstract; description: full specification text; claim: full text of the patent claims. Usage restrictions This dataset, and any derivatives generated from it, may be used for research purposes only; commercial use and any other use harmful to society are prohibited. This dataset does not represent the position, interests, or views of any party and is unrelated to any claims of any group. This project accepts no liability for any damage or dispute arising from the use of this dataset.
allenai/peS2o
allenai
Pretraining Effectively on S2ORC! The peS2o dataset is a collection of ~40M creative open-access academic papers, cleaned, filtered, and formatted for pre-training of language models. It is derived from the Semantic Scholar Open Research Corpus (Lo et al., 2020), or S2ORC. We release multiple versions of peS2o, each with a different processing and knowledge cutoff date. We recommend using the latest version available. If you use this dataset, please cite: @techreport{peS2o, author =… See the full description on the dataset page: https://huggingface.co/datasets/allenai/peS2o.
searde/dataset-financial-documents-3
searde
Financial documents
tungdop2/pokemon
tungdop2
Dataset Card Pokemon caption dataset This dataset contains the artwork, name, type, species, and caption of all Pokémon as of 07/07/2023. Captions: generated by BLIP. Artwork and other information: crawled from pokemondb. More Information needed
FreedomIntelligence/evol-instruct-hindi
FreedomIntelligence
This dataset is used in research related to MultilingualSIFT.
FreedomIntelligence/evol-instruct-indonesian
FreedomIntelligence
This dataset is used in research related to MultilingualSIFT.
FreedomIntelligence/evol-instruct-portuguese
FreedomIntelligence
This dataset is used in research related to MultilingualSIFT.
Falah/eye-disease-dataset
Falah
Eye Disease Dataset Description The Eye Disease Dataset is a collection of images related to various eye diseases. It provides a valuable resource for training and evaluating computer vision models for eye disease detection and classification. The dataset includes images representing five different eye disease classes: Bulging Eyes, Cataracts, Crossed Eyes, Glaucoma, and Uveitis. Dataset Details Dataset Name: Falah/eye-disease-dataset Number of… See the full description on the dataset page: https://huggingface.co/datasets/Falah/eye-disease-dataset.
cognitivecomputations/dolphin
cognitivecomputations
Dolphin 🐬 https://erichartford.com/dolphin Dataset details This dataset is an attempt to replicate the results of Microsoft's Orca. Our dataset consists of: ~1 million FLANv2 examples augmented with GPT-4 completions (flan1m-alpaca-uncensored.jsonl) ~3.5 million FLANv2 examples augmented with GPT-3.5 completions (flan5m-alpaca-uncensored.jsonl) We followed the submix and system prompt distribution outlined in the Orca paper, with a few exceptions. We included all 75k of CoT in the FLAN-1m… See the full description on the dataset page: https://huggingface.co/datasets/cognitivecomputations/dolphin.
rdpahalavan/network-packet-flow-header-payload
rdpahalavan
Each row contains the information of a network packet and its label. The format is given below:
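The format listing itself is truncated above; as a stand-in, a generic inspection sketch (the split name is an assumption, and the per-packet columns are printed rather than assumed):

from datasets import load_dataset

# Generic sketch; inspect the actual per-packet fields and label column.
ds = load_dataset("rdpahalavan/network-packet-flow-header-payload", split="train")
print(ds.column_names)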
alasdairforsythe/text-english-code-fiction-nonfiction
alasdairforsythe
TokenMonster Datasets: English, Code, Fiction, Non-fiction Included are datasets that were used to generate the TokenMonster pre-built vocabularies. All are raw text files. The training data mostly came from Red Pajamas 1B Token Sample. However, to reduce formal English and emphasize other languages, informal writing and code, c4_sample & cc_sample were cropped to 100MB, and Reddit conversations data were added (also cropped to 100MB.) Additionally, equally weighted code samples… See the full description on the dataset page: https://huggingface.co/datasets/alasdairforsythe/text-english-code-fiction-nonfiction.
UmaDiffusion/ULTIMA
UmaDiffusion
ULTIMA Dataset - Uma Musume Labeled Text-Image Multimodal Alignment Dataset
tasksource/context_toxicity
tasksource
https://github.com/ipavlopoulos/context_toxicity/
@inproceedings{xenos-etal-2021-context,
  title = "Context Sensitivity Estimation in Toxicity Detection",
  author = "Xenos, Alexandros and Pavlopoulos, John and Androutsopoulos, Ion",
  booktitle = "Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021)",
  month = aug,
  year = "2021",
  address = "Online",
  publisher = "Association for Computational Linguistics",
  url =… See the full description on the dataset page: https://huggingface.co/datasets/tasksource/context_toxicity.
Veucci/lyric-to-3genre
Veucci
Song Lyrics Dataset Description This dataset contains a collection of song lyrics from various artists and genres in English. It is intended to be used for research, analysis, and other non-commercial purposes. Dataset Details The dataset is organized in a tabular format with the following columns: Genre (int): genre of the lyrics. Lyrics (str): the lyrics of the song. Pop: 979 rows; Rock: 995 rows; Hip-Hop: 1040 rows. Usage Feel… See the full description on the dataset page: https://huggingface.co/datasets/Veucci/lyric-to-3genre.
Lurunchik/WikiHowNFQA
Lurunchik
Dataset Card for WikiHowQA WikiHowQA is a unique collection of 'how-to' content from WikiHow, transformed into a rich dataset featuring 11,746 human-authored answers and 74,527 supporting documents. Designed for researchers, it presents a unique opportunity to tackle the challenges of creating comprehensive answers from multiple documents, and grounding those answers in the real-world context provided by the supporting documents. Dataset Structure… See the full description on the dataset page: https://huggingface.co/datasets/Lurunchik/WikiHowNFQA.
santoshtyss/indian_courts_cases
santoshtyss
Dataset Card for "indian_courts_cases" More Information needed
nazimali/quran-question-answer-context
nazimali
Dataset Card for "quran-question-answer-context" Dataset Summary Translated the original dataset from Arabic to English and added the Surah ayahs to the context column. Usage from datasets import load_dataset dataset = load_dataset("nazimali/quran-question-answer-context") DatasetDict({ train: Dataset({ features: ['q_id', 'question', 'answer', 'q_word', 'q_topic', 'fine_class', 'class', 'ontology_concept', 'ontology_concept2', 'source'… See the full description on the dataset page: https://huggingface.co/datasets/nazimali/quran-question-answer-context.
breadlicker45/bread-midi-dataset
breadlicker45
This MIDI dataset has 851,313 MIDI files, making it the biggest MIDI dataset on the web. Anyone can use it for any use case.
HuggingFaceH4/mt_bench_prompts
HuggingFaceH4
MT Bench by LMSYS This set of evaluation prompts was created by the LMSYS org for better evaluation of chat models. For more information, see the paper. Dataset loading To load this dataset, use 🤗 datasets:
from datasets import load_dataset
data = load_dataset("HuggingFaceH4/mt_bench_prompts", split="train")
Dataset creation To create the dataset, we do the following for our internal tooling: rename turns to prompts, add empty reference to… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceH4/mt_bench_prompts.
Falah/Alzheimer_MRI
Falah
Alzheimer_MRI Disease Classification Dataset The Falah/Alzheimer_MRI Disease Classification dataset is a valuable resource for researchers and medical applications. This dataset focuses on the classification of Alzheimer's disease based on MRI scans. The dataset consists of brain MRI images labeled into four categories: '0': Mild_Demented '1': Moderate_Demented '2': Non_Demented '3': Very_Mild_Demented Dataset Information Train split: Name: train… See the full description on the dataset page: https://huggingface.co/datasets/Falah/Alzheimer_MRI.
lmsys/mt_bench_human_judgments
lmsys
Content This dataset contains 3.3K expert-level pairwise human preferences for model responses generated by 6 models in response to 80 MT-bench questions. The 6 models are GPT-4, GPT-3.5, Claude-v1, Vicuna-13B, Alpaca-13B, and LLaMA-13B. The annotators are mostly graduate students with expertise in the topic areas of each of the questions. The details of data collection can be found in our paper. Agreement Calculation This Colab notebook shows how to compute the… See the full description on the dataset page: https://huggingface.co/datasets/lmsys/mt_bench_human_judgments.
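As an illustration of what such an agreement computation involves, a self-contained Python sketch of raw agreement between two judges; the vote structure (dicts keyed by question/model-pair) is an assumption, not the notebook's actual code:

def agreement(votes_a: dict, votes_b: dict) -> float:
    # Fraction of shared (question, model pair) keys on which both judges
    # picked the same winner.
    shared = votes_a.keys() & votes_b.keys()
    if not shared:
        return float("nan")
    return sum(votes_a[k] == votes_b[k] for k in shared) / len(shared)

human = {(81, "gpt-4", "vicuna-13b"): "model_a", (82, "gpt-4", "alpaca-13b"): "model_a"}
gpt4 = {(81, "gpt-4", "vicuna-13b"): "model_a", (82, "gpt-4", "alpaca-13b"): "model_b"}
print(agreement(human, gpt4))  # 0.5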
FredZhang7/all-scam-spam
FredZhang7
This is a large corpus of 42,619 preprocessed text messages and emails sent by humans in 43 languages. is_spam=1 means spam and is_spam=0 means ham. 1,040 rows of balanced data, consisting of casual conversations and scam emails in ≈10 languages, were manually collected and annotated by me, with some help from ChatGPT. Some preprocessing algorithms: spam_assassin.js, followed by spam_assassin.py; enron_spam.py Data composition Description To… See the full description on the dataset page: https://huggingface.co/datasets/FredZhang7/all-scam-spam.
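A minimal filtering sketch; only the is_spam label column is taken from the card, the split name is an assumption:

from datasets import load_dataset

# Per the card, is_spam=1 marks spam and is_spam=0 marks ham.
ds = load_dataset("FredZhang7/all-scam-spam", split="train")
spam = ds.filter(lambda ex: ex["is_spam"] == 1)
print(len(spam), "spam rows out of", len(ds))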
NomaDamas/Ko-StrategyQA
NomaDamas
Ko-StrategyQA This dataset is the Korean version of StrategyQA. All questions and paragraphs from the original dataset were translated using DeepL. Dataset description This dataset is the Korean version of StrategyQA, a dataset of multi-hop questions for the open-domain question answering task. Open-domain question answering (ODQA) is the task of building AI models that correctly answer general-knowledge questions without a specific domain. Multi-hop questions are questions that require finding two or more facts across two or more paragraphs to answer. With this dataset, one can measure the ability to automatically retrieve multiple relevant paragraphs from a paragraph collection to solve multi-hop questions. In addition, large language models (LLMs) and other language… See the full description on the dataset page: https://huggingface.co/datasets/NomaDamas/Ko-StrategyQA.
beyond/rlhf-reward-single-round-trans_chinese
beyond
Dataset Card for "rlhf-reward-single-round-trans_chinese" More Information needed
xiyuez/red-dot-design-award-product-description
xiyuez
Red Dot Design Award Dataset This dataset contains information about the products that have won the Red Dot Design Award, a prestigious international design competition. The data was extracted from the official website of the award: https://www.red-dot.org/. Task The task for this dataset is text generation, specifically product description generation. Given a product name and category, the goal is to generate a concise and informative description that highlights… See the full description on the dataset page: https://huggingface.co/datasets/xiyuez/red-dot-design-award-product-description.
imageomics/rare-species
imageomics
Dataset Card for Rare Species Dataset Dataset Description Repository: Imageomics/bioclip Paper: BioCLIP: A Vision Foundation Model for the Tree of Life (arXiv) Dataset Summary This dataset was generated alongside TreeOfLife-10M; data (images and text) were pulled from Encyclopedia of Life (EOL) to generate a dataset consisting of rare species for zero-shot-classification and more refined image classification tasks. Here, we use "rare species" to… See the full description on the dataset page: https://huggingface.co/datasets/imageomics/rare-species.
izumi-lab/mc4-ja
izumi-lab
Dataset Card for "mc4-ja" More Information needed
lunarlist/edited_common_voice
lunarlist
Dataset Card for "edited_common_voice" More Information needed This dataset is a Thai TTS dataset that use the voice from Common Voice dataset and modify the voice to not to sound like the original. Medium: Text-To-Speech ภาษาไทยด้วย Tacotron2
CreativeLang/ColBERT_Humor_Detection
CreativeLang
ColBERT_Humor Dataset Summary ColBERT Humor contains 200,000 labeled short texts, equally distributed between humorous and non-humorous content. The dataset was created to overcome the limitations of prior humor detection datasets, which were characterized by inconsistencies in text length, word count, and formality, making them easy to predict with simple models without truly understanding the nuances of humor. The two sources for this dataset are the News… See the full description on the dataset page: https://huggingface.co/datasets/CreativeLang/ColBERT_Humor_Detection.
CreativeLang/scope_simile_generation
CreativeLang
SCOPE Simile Dataset Summary This dataset has been created for the purpose of generating similes from literal descriptive sentences. The process involves a two-step approach: firstly, self-labeled similes are converted into literal sentences using structured common sense knowledge, and secondly, a seq2seq model is fine-tuned on these [literal sentence, simile] pairs to generate similes. The dataset was collected from Reddit, specifically from the subreddits… See the full description on the dataset page: https://huggingface.co/datasets/CreativeLang/scope_simile_generation.
InfImagine/FakeImageDataset
InfImagine
Fake Image Dataset Fake Image Dataset is now open-sourced at Hugging Face (InfImagine organization) and OpenXLab. It consists of two folders, ImageData and MetaData. ImageData contains the compressed packages of the Fake Image Dataset, while MetaData contains the labeling information of the corresponding data, indicating whether they are real or fake. Sentry-Image is now open-sourced at Sentry-Image (GitHub repository), which provides the SOTA fake image detection models in… See the full description on the dataset page: https://huggingface.co/datasets/InfImagine/FakeImageDataset.
jtatman/databricks-dolly-4k-brainstorm-summary-creative
jtatman
This is a pared-down version of the esoteric categories in the Dolly 15k dataset; the size is intentional for processing here on the Hub. (A modification of the Databricks 15k dataset for on-Hub processing.)
MightyStudent/Egyptian-ASR-MGB-3
MightyStudent
Egyptian Arabic dialect automatic speech recognition Dataset Summary This dataset was collected, cleaned, and adjusted for the Hugging Face Hub, ready to be used for Whisper fine-tuning/training. From the MGB-3 website: MGB-3 uses 16 hours of multi-genre data collected from different YouTube channels. The 16 hours have been manually transcribed. The chosen Arabic dialect for this year is Egyptian. Given that dialectal Arabic has no orthographic rules, each program… See the full description on the dataset page: https://huggingface.co/datasets/MightyStudent/Egyptian-ASR-MGB-3.
dyvapandhu/molecul-datasets
dyvapandhu
Dataset Card for "molecul-datasets" More Information needed
hongrui/mimic_chest_xray_v_1
hongrui
Dataset Card for "mimic_chest_xray_v_1" More Information needed
haitengzhao/molecule_property_instruction
haitengzhao
Dataset Card for "molecule_property_instruction" More Information needed
GHonem/fashion_image_caption-3500
GHonem
Dataset Card for "fashion_image_caption-3500" More Information needed
zxbsmk/webnovel_cn
zxbsmk
Contents Contains ~21.7M Chinese instruction entries usable for training novel generation, extracted from 12,560 web novels (novel_json_tokens512.zip). Download link: https://pan.baidu.com/s/1TorBMbrqxrn6odRF0PJBVw extraction code: jlh3. Also includes a 50k-entry subset extracted from it (novel_cn_token512_50k.json), in which inputs and outputs are each no more than 512 tokens. Examples Data is generated from the original novel text according to the following five instruction types, where the text consists of consecutive sentences randomly sampled from a novel: 1. Given a title, generate a synopsis. 2. Given a title and a synopsis, generate an opening. 3. Given a synopsis and a passage of text, generate the continuation. 4. Given a title and a passage of text, generate the continuation. 5. Given a passage of text, generate the continuation. { "instruction":… See the full description on the dataset page: https://huggingface.co/datasets/zxbsmk/webnovel_cn.
huuuyeah/MeetingBank_Audio
huuuyeah
Overview MeetingBank is a benchmark dataset created from the city councils of 6 major U.S. cities to supplement existing datasets. It contains 1,366 meetings with over 3,579 hours of video, as well as transcripts, PDF documents of meeting minutes, agendas, and other metadata. On average, a council meeting is 2.6 hours long and its transcript contains over 28k tokens, making it a valuable testbed for meeting summarizers and for extracting structure from meeting videos. The datasets… See the full description on the dataset page: https://huggingface.co/datasets/huuuyeah/MeetingBank_Audio.
jorgeortizfuentes/universal_spanish_chilean_corpus
jorgeortizfuentes
Universal Chilean Spanish Corpus This dataset comprises 37,213,992 texts in Chilean Spanish and multidialectal Spanish. The multidialectal Spanish texts come from spanish books. The Chilean Spanish texts come from the .cl domains of the mc4 dataset and from tweets, news, and complaints in the chilean-spanish-corpus.
Name    | Count    | Source
books   | 87967    | spanish books
mc4     | 8706681  | from mc4 (.cl domains) in chilean-spanish-corpus
twitter | 27306583…
See the full description on the dataset page: https://huggingface.co/datasets/jorgeortizfuentes/universal_spanish_chilean_corpus.
Falah/medium_articles_posts
Falah
Medium Articles Posts Dataset Description The Medium Articles Posts dataset contains a collection of articles published on the Medium platform. Each article entry includes information such as the article's title, main content or text, associated URL or link, authors' names, timestamps, and tags or categories. Dataset Info The dataset consists of the following features: title: (string) The title of the Medium article. text: (string) The main content… See the full description on the dataset page: https://huggingface.co/datasets/Falah/medium_articles_posts.
Gregor/mblip-train
Gregor
mBLIP Instruct Mix Dataset Card Important! This dataset currently does not work directly with datasets.load_dataset("Gregor/mblip-train")! Please download the data files you need and load them with datasets.load_dataset("json", data_files="filename"). Dataset details Dataset type: This is the instruction mix used to train mBLIP. See https://github.com/gregor-ge/mBLIP/data/README.md for more information on how to reproduce the data. Dataset date: The… See the full description on the dataset page: https://huggingface.co/datasets/Gregor/mblip-train.
FredZhang7/malicious-website-features-2.4M
FredZhang7
Important Notice: A subset of the URL dataset is from Kaggle, and the Kaggle datasets contained 10%-15% mislabelled data. See this discussion I opened for some false positives. I have contacted Kaggle regarding their erroneous "Usability" score calculation for these unreliable datasets. The feature extraction methods shown here are not robust at all in 2023, and there are even silly mistakes in 3 functions: not_indexed_by_google, domain_registration_length, and age_of_domain. The features… See the full description on the dataset page: https://huggingface.co/datasets/FredZhang7/malicious-website-features-2.4M.
NumbersStation/NSText2SQL
NumbersStation
Dataset Summary The NSText2SQL dataset is used to train NSQL models. The data is curated from more than 20 different public sources across the web with permissible licenses (listed below). All of these datasets come with existing text-to-SQL pairs. We apply various data cleaning and pre-processing techniques including table schema augmentation, SQL cleaning, and instruction generation using existing LLMs. The resulting dataset contains around 290,000 samples of text-to-SQL pairs. For… See the full description on the dataset page: https://huggingface.co/datasets/NumbersStation/NSText2SQL.
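A minimal loading sketch; the per-row layout of the text-to-SQL pairs is not assumed, so the first row is printed for inspection (the split name is an assumption):

from datasets import load_dataset

# Loading sketch; print one row to see the actual text-to-SQL pair layout.
ds = load_dataset("NumbersStation/NSText2SQL", split="train")
print(ds[0])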
BAAI/SVIT
BAAI
Scale up visual instruction tuning to millions by GPT-4.
oscar-corpus/colossal-oscar-1.0
oscar-corpus
Dataset Card for Colossal OSCAR 1 IMPORTANT NOTE: THIS DATASET CARD IS STILL BEING WRITTEN, PLEASE BE PATIENT WHILE WE COMPLETE ALL THE INFORMATION ABOUT THE CORPUS Dataset Summary The OSCAR project (Open Super-large Crawled Aggregated coRpus) is an Open Source project aiming to provide web-based multilingual resources and datasets for Machine Learning (ML) and Artificial Intelligence (AI) applications. The project focuses specifically on providing… See the full description on the dataset page: https://huggingface.co/datasets/oscar-corpus/colossal-oscar-1.0.
AlexZigma/msr-vtt
AlexZigma
Dataset Card for "msr-vtt" More Information needed
izumi-lab/oscar2301-ja-filter-ja-normal
izumi-lab
Dataset Card for "oscar2301-ja-filter-ja-normal" More Information needed
HAERAE-HUB/csatqa
HAERAE-HUB
CSAT-QA
Norquinal/claude_multiround_chat_30k
Norquinal
This dataset is the result of 50k instruction/response pairs generated by Claude, plus two additional follow-up instructions for each base instruction (for a total of 150k instructions), with instances of blatant alignment removed. 32,170 (96,510) instructions remain. The instructions were generated synthetically using a method that can be tentatively described as "multi-instruct." These instructions consist of numerous discrete tasks that the AI has to work its way through, thereby hopefully… See the full description on the dataset page: https://huggingface.co/datasets/Norquinal/claude_multiround_chat_30k.
mrtoy/mobile-ui-design
mrtoy
Dataset: Mobile UI Design Detection Introduction This dataset is designed for object detection tasks with a focus on detecting elements in mobile UI designs. The targeted objects include text, images, and groups. The dataset contains images and object detection boxes, including class labels and location information. Dataset Content Load the dataset and take a look at an example:
>>> from datasets import load_dataset
>>> ds =… See the full description on the dataset page: https://huggingface.co/datasets/mrtoy/mobile-ui-design.
vargr/private_instagram
vargr
Dataset Card for "private_instagram" More Information needed
HuggingFaceM4/M3IT
HuggingFaceM4
Dataset Card for "M3IT" More Information needed
vargr/main_instagram
vargr
Dataset Card for "main_instagram" More Information needed
OpenGVLab/InternVid
OpenGVLab
InternVid InternVid-10M-FLT We present InternVid-10M-FLT, a subset of this dataset, consisting of 10 million video clips with generated high-quality captions for publicly available web videos. Download The 10M samples are provided in a jsonlines file. Columns include the videoID, timestamps, generated caption, and their UMT similarity scores. How to Use:
from datasets import load_dataset
dataset = load_dataset("OpenGVLab/InternVid")
… See the full description on the dataset page: https://huggingface.co/datasets/OpenGVLab/InternVid.
PedroCJardim/QASports
PedroCJardim
Dataset Summary QASports is the first large sports-themed question answering dataset, containing over 1.5 million questions and answers about 54k preprocessed wiki pages, using as documents the wikis of 3 of the most popular sports in the world: soccer, American football, and basketball. Each sport can be downloaded individually as a subset, with train, test, and validation splits, or all 3 can be downloaded together. 🎲 Complete dataset: https://osf.io/n7r23/ 🔧 Processing… See the full description on the dataset page: https://huggingface.co/datasets/PedroCJardim/QASports.
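A minimal loading sketch for one sport subset; the subset name "basketball" is an assumed illustration, not confirmed by the card:

from datasets import load_dataset

# Per the card, each sport is a subset with train/test/validation splits.
ds = load_dataset("PedroCJardim/QASports", "basketball")
print(ds)  # expected: a DatasetDict with the three splits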
lytang/MeetingBank-transcript
lytang
This dataset consists of transcripts from the MeetingBank dataset. Overview MeetingBank is a benchmark dataset created from the city councils of 6 major U.S. cities to supplement existing datasets. It contains 1,366 meetings with over 3,579 hours of video, as well as transcripts, PDF documents of meeting minutes, agendas, and other metadata. On average, a council meeting is 2.6 hours long and its transcript contains over 28k tokens, making it a valuable testbed for meeting summarizers and for… See the full description on the dataset page: https://huggingface.co/datasets/lytang/MeetingBank-transcript.
buddhist-nlp/daizhige
buddhist-nlp
Dataset Card for "daizhige" More Information needed
tet550/jawiki_sentences
tet550
Jawiki Sentences Dataset This dataset was created from Japanese Wikipedia articles. Non-sentence content such as unnecessary tags and tables has been removed from the original text as far as possible. Each entry includes the title of the article and the section title in which the sentence appears. Data structure Each entry consists of the following three fields: article_title: a string with the article title. topic_title: a string with the article's section title. text: a string with the section text. Data generation This dataset is generated from the Japanese Wikipedia dump files with the script below: https://github.com/tet550/jawiki_sentences License Wikipedia content is licensed under Creative Commons Attribution-ShareAlike 4.0… See the full description on the dataset page: https://huggingface.co/datasets/tet550/jawiki_sentences.
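A minimal loading sketch using the three fields named in the card (the split name is an assumption):

from datasets import load_dataset

# The three field names come from the card; the split name is an assumption.
ds = load_dataset("tet550/jawiki_sentences", split="train")
row = ds[0]
print(row["article_title"], row["topic_title"], row["text"][:80])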
jerryjalapeno/nart-100k-synthetic
jerryjalapeno
Keep in mind that this dataset is entirely synthetic. It is not fully representative of real therapy situations. If you are training an LLM therapist, keep in mind the limitations of LLMs and highlight those limitations to users in a responsible manner.
nampdn-ai/tiny-codes
nampdn-ai
Reasoning with Language and Code This synthetic dataset is a collection of 1.6 million short and clear code snippets that can help LLM models learn how to reason with both natural and programming languages. The dataset covers a wide range of programming languages, such as Python, TypeScript, JavaScript, Ruby, Julia, Rust, C++, Bash, Java, C#, and Go. It also includes two database languages: Cypher (for graph databases) and SQL (for relational databases) in order to study the… See the full description on the dataset page: https://huggingface.co/datasets/nampdn-ai/tiny-codes.
izumi-lab/mc4-ja-filter-ja-normal
izumi-lab
Dataset Card for "mc4-ja-filter-ja-normal" More Information needed
Salesforce/dialogstudio
Salesforce
DialogStudio: Unified Dialog Datasets and Instruction-Aware Models for Conversational AI Authors: Jianguo Zhang, Kun Qian Paper | Github | [GDrive] 🎉 March 18, 2024: Update for AI Agent. Check xLAM for the latest data and models relevant to AI agents! 🎉 March 10, 2024: Update for dataset viewer issues: please refer to https://github.com/salesforce/DialogStudio to view each dataset, where we provide 5 converted examples along with 5 original examples under each data folder.… See the full description on the dataset page: https://huggingface.co/datasets/Salesforce/dialogstudio.
sidhq/email-thread-summary
sidhq
Dataset Card for "email-thread-summary" More Information needed
samhog/psychology-RLHF
samhog
Psychology RLHF This dataset was used to train a LLaMA-7B reward model.