Columns: id (string, 6–121 chars) · author (string, 2–42 chars) · description (string, 0–6.67k chars)
Isamu136/penetration_testing_scraped_dataset
Isamu136
Dataset Card for "penetration_testing_scraped_dataset" More Information needed
Chamoda/atlas-storyteller-1000
Chamoda
Dataset Card for "atlas-storyteller-1000" More Information needed
Abdou/dz-sentiment-yt-comments
Abdou
A Sentiment Analysis Dataset for the Algerian Dialect of Arabic This dataset consists of 50,016 samples of comments extracted from Algerian YouTube channels. It is manually annotated with 3 classes (the label column) and is not balanced. Here are the row counts for each class: 0 (Negative): 17,033 (34.06%) 1 (Neutral): 11,136 (22.26%) 2 (Positive): 21,847 (43.68%) Please note that there are some swear words in the dataset, so please use it with caution. Citation… See the full description on the dataset page: https://huggingface.co/datasets/Abdou/dz-sentiment-yt-comments.
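A minimal loading sketch with the standard datasets API, assuming a train split and the label mapping described above:

from datasets import load_dataset

LABELS = {0: "negative", 1: "neutral", 2: "positive"}  # mapping as described in the card
ds = load_dataset("Abdou/dz-sentiment-yt-comments", split="train")  # split name assumed
print(LABELS[ds[0]["label"]])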
flytech/python-codes-25k
flytech
License MIT This is a Cleaned Python Dataset Covering 25,000 Instructional Tasks Overview The dataset has 4 key features (fields): instruction, input, output, and text. It is a rich source of Python code and tasks, and it extends into behavioral aspects. Dataset Statistics Total Entries: 24,813 Unique Instructions: 24,580 Unique Inputs: 3,666 Unique Outputs: 24,581 Unique Texts: 24,813 Average Tokens per example: 508… See the full description on the dataset page: https://huggingface.co/datasets/flytech/python-codes-25k.
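A minimal sketch of inspecting the four fields named above, assuming a train split and the standard datasets API:

from datasets import load_dataset

ds = load_dataset("flytech/python-codes-25k", split="train")  # split name assumed
row = ds[0]
for field in ("instruction", "input", "output", "text"):  # the four key fields
    print(field, "->", str(row[field])[:80])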
maywell/ko_wikidata_QA
maywell
Update log 2023-11-03: applied MarkrAI's dedup. Korean Wikipedia data QA set. This data is a QA set produced using the Synatra-7B-Instruct model and ChatGPT. Direct commercial use of the data is not permitted, but commercial use of models trained on the data is permitted. It has not been fully cleaned yet; please open a PR for any errors or corrections.
qgyd2021/sentence_pair
qgyd2021
Sentence Pair Dataset. The datasets were collected and organized from the web as follows (columns: data | language | original data/project link | sample count | original data description | alternative download link):
ChineseSTS | Chinese | ChineseSTS | 24.7K | STS, Chinese semantic textual similarity (many labels in this dataset appear to be wrong; its use is not recommended) | ChineseSTS
ccks2018_task3 | Chinese | BQ_corpus; CCKS2018_3 | TRAIN: 100K, VALID: 10K, TEST: 10K | CCKS 2018 WeBank intelligent customer-service question matching competition | BQ_corpus
DIAC2019 | Chinese | DIAC2019 | 6K | Provided as question groups, each divided into an equivalent part and a non-equivalent part; pairing equivalent questions with each other generates positive samples, and pairing equivalent with non-equivalent questions generates negative samples; a training set of 6,000 question groups is provided |
LCQMC | Chinese | LCQMC; LCQMC; C18-1166.pdf | TRAIN: 238766, VALID: 8802, TEST: 12500… See the full description on the dataset page: https://huggingface.co/datasets/qgyd2021/sentence_pair.
google/docci
google
DOCCI (Descriptions of Connected and Contrasting Images) is a collection of images paired with detailed descriptions. The descriptions explain the key elements of the images, as well as secondary information such as background, lighting, and settings. The images are specifically taken to help assess the precise visual properties of images. DOCCI also includes many related images that vary in having key differences from the others. All descriptions are manually annotated to ensure they adequately distinguish each image from its counterparts.
hails/mmlu_no_train
hails
This is a massive multitask test consisting of multiple-choice questions from various branches of knowledge, covering 57 tasks including elementary mathematics, US history, computer science, law, and more.
3dlg-hcvc/r3ds
3dlg-hcvc
R3DS: Reality-linked 3D Scenes for Panoramic Scene Understanding ECCV 2024 Qirui Wu, Sonia Raychaudhuri, Daniel Ritchie, Manolis Savva, Angel X. Chang Project | arXiv | Data This is the official version of the R3DS dataset. You can request access and then check out the data: git clone git@hf.co:datasets/3dlg-hcvc/rlsd The dataset is structured into different folders: mp3d_arch -- contains a subset of the Matterport3D architecture files that we have used… See the full description on the dataset page: https://huggingface.co/datasets/3dlg-hcvc/r3ds.
cis-lmu/Glot500
cis-lmu
Glot500 Corpus A dataset of natural language data collected by putting together more than 150 existing mono-lingual and multilingual datasets and by crawling known multilingual websites. The focus of this dataset is on 500 extremely low-resource languages. (More languages are still to be uploaded here.) This dataset is used to train the Glot500 model. Homepage: homepage Repository: github Paper: acl, arxiv This dataset has the identical data format as the Taxi1500 Raw Data… See the full description on the dataset page: https://huggingface.co/datasets/cis-lmu/Glot500.
ckandemir/amazon-products
ckandemir
Dataset Creation and Processing Overview This dataset underwent a comprehensive process of loading, cleaning, processing, and preparing, incorporating a range of data manipulation and NLP techniques to optimize its utility for machine learning models, particularly in natural language processing. Data Loading and Initial Cleaning Source: Loaded from the Hugging Face dataset repository bprateek/amazon_product_description. Conversion to Pandas DataFrame: For ease of… See the full description on the dataset page: https://huggingface.co/datasets/ckandemir/amazon-products.
BioDEX/BioDEX-Reactions
BioDEX
Dataset Card for "BioDEX-Reactions" More Information needed
slone/e-mordovia-articles-2023
slone
"e-mordovia-articles-2023": a parallel Russian-Erzya news dataset This is a semi-aligned dataset of Erzya and Russian news articles, crawled from https://www.e-mordovia.ru. Dataset Description Dataset Summary This is a dataset of news arcticles collected from https://www.e-mordovia.ru, the official portal of the state authorities of the Republic of Mordovia. The Russian and Erzya articles have been paired using heuristics, then split into sentences… See the full description on the dataset page: https://huggingface.co/datasets/slone/e-mordovia-articles-2023.
lavita/medical-qa-datasets
lavita
The all-processed dataset is a concatenation of the medical-meadow-* and chatdoctor_healthcaremagic datasets. The Chat Doctor term is replaced by the chatbot term in the chatdoctor_healthcaremagic dataset. Similar to the literature, the medical_meadow_cord19 dataset is subsampled to 50,000 samples. truthful-qa-* is a benchmark dataset for evaluating the truthfulness of models in text generation, which is used in the Llama 2 paper. Within this dataset, there are 55 and 16 questions related to Health and… See the full description on the dataset page: https://huggingface.co/datasets/lavita/medical-qa-datasets.
QuyenAnhDE/Diseases_Symptoms
QuyenAnhDE
Dataset Details The data was sourced from various medical websites accessible through Google search. Dataset Information: 400 rows × 4 columns Dataset Description Code [More Information Needed] Name: [More Information Needed] Symptoms [More Information Needed] Treatments [More Information Needed]
iix/mini_coco_linux
iix
mini coco dataset files Required dependencies: OpenCV (cv2), matplotlib, ipywidgets. img_data.psv: an extract of the coco dataset containing the following labels: ["airplane", "backpack", "cell phone", "handbag", "suitcase", "knife", "laptop", "car"] (300 of each), structured as a Field/Description table… See the full description on the dataset page: https://huggingface.co/datasets/iix/mini_coco_linux.
CASIA-LM/ChineseWebText
CASIA-LM
ChineseWebText: Large-Scale High-quality Chinese Web Text Extracted with Effective Evaluation Model This directory contains the ChineseWebText dataset, and the EvalWeb tool-chain to process CommonCrawl Data. Our EvalWeb tool is publicly available on github https://github.com/CASIA-LM/ChineseWebText. ChineseWebText Dataset Overview We release the latest and largest Chinese dataset ChineseWebText, which consists of 1.42 TB data and each text is… See the full description on the dataset page: https://huggingface.co/datasets/CASIA-LM/ChineseWebText.
Pclanglais/Brahe-Novels
Pclanglais
The Brahe-Novels dataset is a collection of annotated novel excerpts in the public domain. It was originally created to train Brahe, an LLM fine-tuned for literary analysis. Most of the texts come from the Gutenberg project. The annotations include a mix of synthetic data and manual annotations. In accordance with the principles laid out by the US copyright office, all synthetic data and hybrid synthetic data are in the public domain as well.
Vezora/Tested-22k-Python-Alpaca
Vezora
Contributors: Nicolas Mejia Petit Vezora's CodeTester Dataset Introduction Today, on November 2, 2023, we are excited to release our internal Python dataset with 22,600 examples of code. These examples have been meticulously tested and verified as working. Our dataset was created using a script we developed. Dataset Creation Our script operates by extracting Python code from the output section of Alpaca-formatted datasets. It tests each extracted… See the full description on the dataset page: https://huggingface.co/datasets/Vezora/Tested-22k-Python-Alpaca.
maywell/ko_hh-rlhf-20k_filtered
maywell
Dataset Card for "ko_hh-rlhf-20k_filtered" Synatra-Translation 모델로 번역된 20k rlhf셋입니다. 번역퀄이 뛰어나진 않습니다. 추가 대화문 등의 데이터 학습이 필요해보입니다. 베이스 데이터셋 Anthropic/hh-rlhf
squarelike/OpenOrca-gugugo-ko
squarelike
OpenOrca Korean translation dataset. The OpenOrca dataset is being translated using Gugugo-koen-7B-V1.1. Please refer below for the translation progress. Progress: about 640K of roughly 1M GPT-4 generations translated; about 1.59M of roughly 3.5M GPT-3.5 generations translated. Crediting the source after using the dataset is a great encouragement to the author. Original dataset card: OpenOrca 🐋 The OpenOrca Dataset! 🐋 We are thrilled to announce the release of the OpenOrca dataset! This rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the Orca… See the full description on the dataset page: https://huggingface.co/datasets/squarelike/OpenOrca-gugugo-ko.
singh-aditya/MACCROBAT_biomedical_ner
singh-aditya
MACCROBAT-biomedical-ner This data is the same data from here; the only difference is that it has been converted into the Hugging Face dataset format, so it can be easily loaded and used wherever needed. To convert from the original format to the Hugging Face dataset format, the following steps were followed (for more detail, look at the create_dataset.py file): Read the corresponding *.txt and *.ann files. Used pandas to convert the *.ann file into a dataframe. After converting into… See the full description on the dataset page: https://huggingface.co/datasets/singh-aditya/MACCROBAT_biomedical_ner.
MuGeminorum/hoyoMusic
MuGeminorum
Intro This dataset mainly contains slices of fan-made (derivative) piano music from the game Genshin Impact, converted to ABC notation, with a data volume of 305,264 entries. The labels cover score-structure information related to the style of the game scene the music belongs to. This dataset is not only the result of game-music extraction, but also provides important training material about note and melodic structure in the field of researching the second… See the full description on the dataset page: https://huggingface.co/datasets/MuGeminorum/hoyoMusic.
DLI-Lab/code-dpo-classification
DLI-Lab
Dataset Card for "code-dpo-classification" More Information needed
mpingale/mental-health-chat-dataset
mpingale
Dataset Card for "mental-health-chat-dataset" More Information needed
chirunder/text_messages
chirunder
Dataset Card for "text_messages" More Information needed
ibm/argument_quality_ranking_30k
ibm
Dataset Card for Argument-Quality-Ranking-30k Dataset Dataset Summary Argument Quality Ranking The dataset contains 30,497 crowd-sourced arguments for 71 debatable topics labeled for quality and stance, split into train, validation and test sets. The dataset was originally published as part of our paper: A Large-scale Dataset for Argument Quality Ranking: Construction and Analysis. Argument Topic This subset contains 9,487 of the… See the full description on the dataset page: https://huggingface.co/datasets/ibm/argument_quality_ranking_30k.
jinaai/german-STSbenchmark
jinaai
German STS Benchmark This data is originally from https://github.com/t-systems-on-site-services-gmbh/german-STSbenchmark The license information can be found under: https://github.com/t-systems-on-site-services-gmbh/german-STSbenchmark/blob/master/LICENSE
renumics/esc50
renumics
Dataset Card for "esc50" This is a mirror for the ESC-50 dataset. Original sources: https://github.com/karolpiczak/ESC-50 K. J. Piczak. ESC: Dataset for Environmental Sound Classification. Proceedings of the 23rd Annual ACM Conference on Multimedia, Brisbane, Australia, 2015. [DOI: http://dx.doi.org/10.1145/2733373.2806390] The dataset is available under the terms of the Creative Commons Attribution Non-Commercial license. Exploring the dataset You can visualize… See the full description on the dataset page: https://huggingface.co/datasets/renumics/esc50.
alfredplpl/anime-with-gpt4v-caption-for-lora
alfredplpl
Anime style image - text by GPT4V small dataset The text is as follows: This is a charming anime-style illustration featuring a young girl as the main subject. The image predominantly uses a soft, pastel color palette, creating a gentle and whimsical ambiance. The main character has light blonde hair styled in two low twintails, secured with what could be interpreted as dark-colored hair ties or ribbons. She has large expressive blue eyes and a demure expression… See the full description on the dataset page: https://huggingface.co/datasets/alfredplpl/anime-with-gpt4v-caption-for-lora.
alvarobartt/social-reasoning-rlhf-ULTRAFEEDBACK-honesty
alvarobartt
Dataset Card for "social-reasoning-rlhf-ULTRAFEEDBACK-honesty" More Information needed
joujiboi/japanese-anime-speech
joujiboi
Japanese Anime Speech Dataset (Japanese version: 日本語はこちら) japanese-anime-speech is an audio-text dataset designed for the training of automatic speech recognition models. The dataset is comprised of thousands of audio clips and their corresponding transcriptions from different visual novels. The goal of this dataset is to increase the accuracy of automatic speech recognition models, such as OpenAI's Whisper, in accurately transcribing dialogue from anime and other similar Japanese media. This genre… See the full description on the dataset page: https://huggingface.co/datasets/joujiboi/japanese-anime-speech.
JLB-JLB/seizure_eeg_iirFilter_greyscale_224x224_6secWindow
JLB-JLB
Dataset Card for "seizure_eeg_iirFilter_greyscale_224x224_6secWindow" More Information needed
Nexdata/Non-safety_and_inductive_Prompt_data
Nexdata
Dataset Card for Nexdata/Non-safety_and_inductive_Prompt_data Description Non-safety and inductive prompt data, about 500,000 items in total; this dataset can be used for tasks such as LLM training and ChatGPT-style applications. For more details, please refer to the link: https://www.nexdata.ai/datasets/llm/1349?source=Huggingface Specifications Data content Non-safety and inductive Prompt data Data size About 500,000 Collecting type… See the full description on the dataset page: https://huggingface.co/datasets/Nexdata/Non-safety_and_inductive_Prompt_data.
Nexdata/Chinese_News_Text_Data
Nexdata
Dataset Card for Nexdata/Chinese_News_Text_Data Description News content data, about 35G in total; each piece of news content contains ID, time, news title, and news body; this dataset can be used for tasks such as LLM training and ChatGPT-style applications. For more details, please refer to the link: https://www.nexdata.ai/datasets?source=Huggingface Specifications Data content News content data Data size About 35G Data… See the full description on the dataset page: https://huggingface.co/datasets/Nexdata/Chinese_News_Text_Data.
MMInstruction/VLFeedback
MMInstruction
Dataset Card for VLFeedback Homepage: https://vlf-silkie.github.io/ Repository: https://github.com/vlf-silkie/VLFeedback Paper: https://arxiv.org/abs/2312.10665 Dataset Summary VLFeedback is a large-scale vision-language preference dataset, annotated by GPT-4V. It consists of 80k multi-modal instructions from various sources that encompass various capabilities of LVLMs. We build a model pool of 12 LVLMs and each data sample contains 4 responses from… See the full description on the dataset page: https://huggingface.co/datasets/MMInstruction/VLFeedback.
ethz-spylab/harmless-poisoned-10-SUDO
ethz-spylab
Dataset Card for "harmless-poisoned-10-SUDO" More Information needed
creative-graphic-design/PubLayNet
creative-graphic-design
Dataset Card for PubLayNet Dataset Summary PubLayNet is a dataset for document layout analysis. It contains images of research papers and articles and annotations for various elements in a page, such as "text", "list", and "figure", in these research paper images. The dataset was obtained by automatically matching the XML representations and the content of over 1 million PDF articles that are publicly available on PubMed Central. Supported Tasks and… See the full description on the dataset page: https://huggingface.co/datasets/creative-graphic-design/PubLayNet.
pszemraj/LoC-meme-generator
pszemraj
Dataset Card for "LoC-meme-generator" This is an official meme dataset from the library of congress. Meme Dataset Exploratory Data Analysis Report courtesy of chatGPT data analysis Basic Dataset Information Number of Entries: 57685 Number of Columns: 10 Columns: Meme ID Archived URL Base Meme Name Meme Page URL MD5 Hash File Size (In Bytes) Alternate Text Display Name Upper Text Lower Text File Size Summary { "count":… See the full description on the dataset page: https://huggingface.co/datasets/pszemraj/LoC-meme-generator.
dbaek111/CBIS-DDSM_1024
dbaek111
The CBIS-DDSM dataset consists of mammograms for 1,566 patients provided in DICOM format with metadata in CSV files. Among its contents, the full mammogram images, which originally numbered 3,120, had 34 excluded, resulting in 3,086 images. These were then converted to 8-bit PNG files and organized into 'cancer' and 'not_cancer' folders based on their pathology for both training and testing purposes. See the full description on the dataset page: https://huggingface.co/datasets/dbaek111/CBIS-DDSM_1024.
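A sketch of the 8-bit PNG conversion step described above; pydicom and Pillow are assumed tooling, since the card does not name the libraries used:

import numpy as np
import pydicom
from PIL import Image

# Read a mammogram and min-max rescale its pixels to 8-bit before saving as PNG.
dcm = pydicom.dcmread("mammogram.dcm")  # hypothetical file path
px = dcm.pixel_array.astype(np.float32)
px = (255 * (px - px.min()) / max(px.max() - px.min(), 1)).astype(np.uint8)
Image.fromarray(px).save("mammogram.png")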
AiresPucrs/stanford-encyclopedia-philosophy
AiresPucrs
Stanford Encyclopedia Philosophy (Teeny-Tiny Castle) This dataset is part of a tutorial tied to the Teeny-Tiny Castle, an open-source repository containing educational tools for AI Ethics and Safety research. How to Use:
from datasets import load_dataset
dataset = load_dataset("AiresPucrs/stanford-encyclopedia-philosophy", split='train')
ajibawa-2023/Python-Code-23k-ShareGPT
ajibawa-2023
This dataset is in Vicuna/ShareGPT format. There are 23,000+ sets of conversations, each set having 2 conversations. A detailed explanation is provided along with the Python code. This dataset was generated using GPT-3.5, GPT-4, etc.
HuggingFaceH4/no_robots
HuggingFaceH4
Dataset Card for No Robots 🙅‍♂️🤖 Look Ma, an instruction dataset that wasn't generated by GPTs! Dataset Summary No Robots is a high-quality dataset of 10,000 instructions and demonstrations created by skilled human annotators. This data can be used for supervised fine-tuning (SFT) to make language models follow instructions better. No Robots was modelled after the instruction dataset described in OpenAI's InstructGPT paper, and is comprised mostly of single-turn… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceH4/no_robots.
Pclanglais/MonadGPT
Pclanglais
This finetuning dataset has been used to train MonadGPT, a chatGPT-like model for the early modern period. It contains 10,797 excerpts of texts in English, French and Latin, mostly published in the 17th century, as well as synthetic questions generated by Mistral-Hermes. The instructions use the chatML format with a unique system prompt (to help with consistency), user questions and assistant answers. All the excerpts are in the public domain and so are the synthetic instructions (in… See the full description on the dataset page: https://huggingface.co/datasets/Pclanglais/MonadGPT.
SciPhi/open-tora
SciPhi
Dataset Card for "sympy-logic-2" More Information needed
ag2428/reasoningDataV4
ag2428
Dataset Card for "reasoningDataV4" More Information needed
ealvaradob/phishing-dataset
ealvaradob
Dataset designed for phishing classification tasks in various data types.
imvladikon/english_news_weak_ner
imvladikon
Large Weak Labelled NER corpus Dataset Summary The dataset is generated through weak labelling of a scraped and preprocessed news corpus (Bloomberg's news), so it is intended for research purposes only. For tokenization, the news articles were split into sentences using nltk.PunktSentenceTokenizer (so tokenization might occasionally be imperfect). Usage from datasets import load_dataset articles_ds = load_dataset("imvladikon/english_news_weak_ner"… See the full description on the dataset page: https://huggingface.co/datasets/imvladikon/english_news_weak_ner.
mb23/music_caps_4sec_wave_type_classical
mb23
Dataset Card for "music_caps_4sec_wave_type_classical" This is MusicCaps Dataset classified as "classical" using distilhubert fintuned for GTZAN datasets. More Information needed
yuyijiong/LongData-Corpus
yuyijiong
Update 2023-12-20: added long data from the SkyPile dataset. Long text dataset for pretraining This dataset contains samples with lengths greater than 16k, which can be used for pretraining models with extremely long context lengths. The dataset is continuously updating. Chinese data: filtered from the WuDao 200G open data, the WanJuan dataset, the CCI Chinese internet corpus, Chinese Wikipedia, etc.; each sample is over 16,000 characters long. English data: filtered from [SlimPajama-dc]… See the full description on the dataset page: https://huggingface.co/datasets/yuyijiong/LongData-Corpus.
sci-benchmark/self-contradictory
sci-benchmark
Introduction Official dataset of the ECCV24 paper, "Dissecting Dissonance: Benchmarking Large Multimodal Models Against Self-Contradictory Instructions". Website: https://selfcontradiction.github.io Github: https://github.com/shiyegao/Self-Contradictory-Instructions-SCI Sample usage (Language-Language):
from datasets import load_dataset
dataset = load_dataset("sci-benchmark/self-contradictory", "language-language-1", split="small")
print(dataset[0])
… See the full description on the dataset page: https://huggingface.co/datasets/sci-benchmark/self-contradictory.
kirp/ruozhiba-raw
kirp
Ruozhiba dataset ruozhiba-raw Ruozhiba is a very popular forum on Baidu Tieba, known for short, pithy writing. This is the raw data up to 2023-11-10. Todo get the top 5 answers to each post clean the data a joke dataset (pure text/multimodal) a feasibility dataset a new benchmark for LLM
nourheshamshaheen/typed_final_chart_to_table
nourheshamshaheen
Dataset Card for "typed_final_chart_to_table" More Information needed
LLM-Tuning-Safety/HEx-PHI
LLM-Tuning-Safety
HEx-PHI: Human-Extended Policy-Oriented Harmful Instruction Benchmark This dataset contains 330 harmful instructions (30 examples x 11 prohibited categories) for LLM harmfulness evaluation. In our work "Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!", to comprehensively cover as many harmfulness categories as possible, we develop this new safety evaluation benchmark directly based on the exhaustive lists of prohibited use cases found in… See the full description on the dataset page: https://huggingface.co/datasets/LLM-Tuning-Safety/HEx-PHI.
Trelis/hh-rlhf-dpo
Trelis
DPO formatted Helpful and Harmless RLHF Dataset This dataset is built from Anthropic's hh-rlhf dataset. It is modified as follows: The prompt formatting is switched to the Llama 2 format with [INST] and [/INST]. The data is split into prompt, chosen and rejected rows, as required by HuggingFace's DPO trainer. Purchase access to this dataset here. Purchase entitles the user to make use of the dataset for training large language models. The original dataset card follows below:… See the full description on the dataset page: https://huggingface.co/datasets/Trelis/hh-rlhf-dpo.
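A sketch of the formatting switch described above; the helper and the sample row are hypothetical, not the dataset's exact template:

# Hypothetical helper: wrap a user message in Llama 2 instruction tags.
def to_llama2_prompt(user_message: str) -> str:
    return f"[INST] {user_message} [/INST]"

row = {
    "prompt": to_llama2_prompt("How do I bake bread?"),
    "chosen": "Here is a simple recipe...",
    "rejected": "I cannot help with that.",
}
print(row["prompt"])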
philschmid/guanaco-sharegpt-style
philschmid
Dataset Card for "guanaco-sharegpt-style" More Information needed
facebook/emu_edit_test_set
facebook
Dataset Card for the Emu Edit Test Set Dataset Summary To create a benchmark for image editing we first define seven different categories of potential image editing operations: background alteration (background), comprehensive image changes (global), style alteration (style), object removal (remove), object addition (add), localized modifications (local), and color/texture alterations (texture). Then, we utilize the diverse set of input images from the MagicBrush… See the full description on the dataset page: https://huggingface.co/datasets/facebook/emu_edit_test_set.
TrustLLM/TrustLLM-dataset
TrustLLM
Dataset Card for TrustLLM Dataset Summary This repository provides datasets from the TrustLLM benchmark, including six aspects: truthfulness, safety, fairness, robustness, privacy, and machine ethics. To find more details about TrustLLM, please visit the project website. Disclaimer The dataset contains harmful content such as partial pornography, violence, bloodshed, or bias. The opinions expressed in the data do not reflect the views of the… See the full description on the dataset page: https://huggingface.co/datasets/TrustLLM/TrustLLM-dataset.
allenai/tulu-v2-sft-mixture
allenai
Dataset Card for Tulu V2 Mix Note the ODC-BY license, indicating that different licenses apply to subsets of the data. This means that some portions of the dataset are non-commercial. We present the mixture as a research artifact. Tulu is a series of language models that are trained to act as helpful assistants. The dataset consists of a mix of : FLAN (Apache 2.0): We use 50,000 examples sampled from FLAN v2. To emphasize CoT-style reasoning, we sample another 50,000… See the full description on the dataset page: https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture.
akemiH/NoteChat
akemiH
Reference
@article{wang2023notechat,
  title={NoteChat: A Dataset of Synthetic Doctor-Patient Conversations Conditioned on Clinical Notes},
  author={Wang, Junda and Yao, Zonghai and Yang, Zhichao and Zhou, Huixue and Li, Rumeng and Wang, Xun and Xu, Yucheng and Yu, Hong},
  journal={arXiv preprint arXiv:2310.15959},
  year={2023}
}
bigpictureio/companies-2023-q4-sm
bigpictureio
This collection of data includes over seventeen million global companies. The dataset has information such as a company's name, website domain, size, year founded, industry, city/state, country and the handle of their LinkedIn URL. Schema, data stats, general documentation, and other datasets can be found at: https://docs.bigpicture.io/docs/free-datasets/companies/
DefectSpectrum/Defect_Spectrum
DefectSpectrum
Defect Spectrum Dataset Welcome to the Defect Spectrum dataset repository. This comprehensive benchmark is a granular collection of large-scale defect datasets with rich semantics, designed to push the frontier of industrial defect inspection research and applications. Overview Defect inspection is a critical component within the closed-loop manufacturing system. To facilitate advanced research and development in this domain, we introduce the Defect Spectrum… See the full description on the dataset page: https://huggingface.co/datasets/DefectSpectrum/Defect_Spectrum.
dinhanhx/google-wit-vi
dinhanhx
Google WIT Vietnamese This data repo contains extracted data from Google WIT. The extracted data is all for the Vietnamese language. Given x is a data point in the OG dataset which has keys following OG field_name, the criterion to filter is criteria = lambda x: x.get("language", "") == "vi" and x.get("caption_reference_description", "") Text-related details All .tsv.gz files follow OG data files in terms of file names and file structures. Train split… See the full description on the dataset page: https://huggingface.co/datasets/dinhanhx/google-wit-vi.
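A sketch of applying the stated criterion to one of the .tsv.gz files; the shard file name is hypothetical, and the column names follow the OG field_name convention quoted above:

import csv
import gzip

criteria = lambda x: x.get("language", "") == "vi" and x.get("caption_reference_description", "")

# Hypothetical shard name; iterate rows and keep only Vietnamese ones with a caption.
with gzip.open("wit_v1.train.all-00000-of-00010.tsv.gz", "rt", encoding="utf-8") as f:
    rows = [row for row in csv.DictReader(f, delimiter="\t") if criteria(row)]
print(len(rows), "rows kept")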
fnlp/character-llm-data
fnlp
Character-LLM: A Trainable Agent for Role-Playing This is the training dataset for Character-LLM, which contains experience data for nine characters, used to train Character-LLMs. To download the dataset, please run the following code with Python, and you can find the downloaded data in /path/to/local_dir. from huggingface_hub import snapshot_download snapshot_download( local_dir_use_symlinks=True, repo_type="dataset", repo_id="fnlp/character-llm-data"… See the full description on the dataset page: https://huggingface.co/datasets/fnlp/character-llm-data.
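The snippet above is cut off; a complete sketch of the same call, assuming the standard huggingface_hub API:

from huggingface_hub import snapshot_download

# Download the full dataset repo to a local directory.
snapshot_download(
    repo_id="fnlp/character-llm-data",
    repo_type="dataset",
    local_dir="/path/to/local_dir",
    local_dir_use_symlinks=True,
)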
locuslab/TOFU
locuslab
TOFU: Task of Fictitious Unlearning 🍢 The TOFU dataset serves as a benchmark for evaluating unlearning performance of large language models on realistic tasks. The dataset comprises question-answer pairs based on autobiographies of 200 different authors that do not exist and are completely fictitiously generated by the GPT-4 model. The goal of the task is to unlearn a fine-tuned model on various fractions of the forget set. Quick Links Website: The landing page… See the full description on the dataset page: https://huggingface.co/datasets/locuslab/TOFU.
higgsfield/school-math-questions
higgsfield
Dataset Card for "school-math-questions" More Information needed
bjoernp/ultrachat_de
bjoernp
German UltraChat This dataset contains the first 1k prompts from HuggingFaceH4/ultrachat_200k, translated to German, with responses generated by GPT-4.
amaai-lab/MusicBench
amaai-lab
MusicBench Dataset The MusicBench dataset is a music audio-text pair dataset designed for the text-to-music generation task and released along with the Mustango text-to-music model. MusicBench is based on the MusicCaps dataset, which it expands from 5,521 samples to 52,768 training and 400 test samples! Dataset Details MusicBench expands MusicCaps by: Including music features of chords, beats, tempo, and key that are extracted from the audio. Describing these… See the full description on the dataset page: https://huggingface.co/datasets/amaai-lab/MusicBench.
nateraw/rap-lyrics-v2
nateraw
Dataset Card for "rap-lyrics-v2" More Information needed
guangyil/laion-coco-aesthetic
guangyil
LAION COCO with aesthetic score and watermark score This dataset contains a 10% sample of the LAION-COCO dataset, filtered by some text rules (remove url, special tokens, etc.) and image rules (image size > 384x384, aesthetic score > 4.75, and watermark probability < 0.5). There are 8,563,753 data instances in total, and the corresponding aesthetic score and watermark score are also included. Note: the watermark score in the table means the probability of the existence of… See the full description on the dataset page: https://huggingface.co/datasets/guangyil/laion-coco-aesthetic.
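A sketch of the stated image rules; the column names below are assumptions, since the excerpt does not name the exact columns:

# Keep rows passing the image rules quoted above (column names assumed).
def keep(row):
    return (
        row["width"] > 384 and row["height"] > 384
        and row["aesthetic"] > 4.75
        and row["pwatermark"] < 0.5
    )

rows = [
    {"width": 512, "height": 512, "aesthetic": 5.1, "pwatermark": 0.1},
    {"width": 256, "height": 256, "aesthetic": 6.0, "pwatermark": 0.0},
]
print([keep(r) for r in rows])  # -> [True, False]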
Roger21/NutritionFineTune_1
Roger21
annotations_creators: expert-generated
language: en
language_creators: expert-generated
license: other
multilinguality: monolingual
pretty_name: Food Nutrition that use to fine tune llm
size_categories: 1K<n<10K
source_datasets: []
tags: []
task_categories:
- text-generation
- text2text-generation
task_ids:
- language-modeling
- text-simplification
argilla/ultrafeedback-binarized-preferences
argilla
Ultrafeedback binarized dataset using the mean of preference ratings Introduction This dataset contains the result of curation work performed by Argilla (using Argilla 😃). After visually browsing around some examples using the sort and filter feature of Argilla (sort by highest rating for chosen responses), we noticed a strong mismatch between the overall_score in the original UF dataset (and the Zephyr train_prefs dataset) and the quality of the chosen… See the full description on the dataset page: https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences.
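As an illustration of binarizing by the mean of preference ratings, a sketch (the actual Argilla curation pipeline is more involved):

# Pick chosen/rejected by mean rating across annotators (sketch, not the exact pipeline).
def binarize(responses):
    scored = [(sum(r["ratings"]) / len(r["ratings"]), r["text"]) for r in responses]
    scored.sort(reverse=True)
    return {"chosen": scored[0][1], "rejected": scored[-1][1]}

pair = binarize([
    {"text": "Answer A", "ratings": [5, 4, 5]},
    {"text": "Answer B", "ratings": [2, 3, 2]},
])
print(pair["chosen"])  # -> Answer A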
lawinsider/uk_ner_contracts_spacy
lawinsider
Dataset Description Legal Contracts Dataset for Training SpaCy NER Model This repository contains a specially curated dataset consisting of legal contracts. It is designed for the purpose of training a Named Entity Recognition (NER) model using SpaCy, with the aim of recognizing and classifying four types of entities in the text: Contract Type, Clause Title, Clause Number, and Definition Title. The dataset includes a broad variety of legal contracts, covering diverse domains such as… See the full description on the dataset page: https://huggingface.co/datasets/lawinsider/uk_ner_contracts_spacy.
bigheiniuJ/BBH_eval
bigheiniuJ
Dataset Card for "BBH_eval" More Information needed
nvidia/HelpSteer
nvidia
HelpSteer: Helpfulness SteerLM Dataset HelpSteer is an open-source Helpfulness Dataset (CC-BY-4.0) that supports aligning models to become more helpful, factually correct and coherent, while being adjustable in terms of the complexity and verbosity of its responses. Leveraging this dataset and SteerLM, we train a Llama 2 70B to reach 7.54 on MT Bench, the highest among models trained on open-source datasets based on MT Bench Leaderboard as of 15 Nov 2023. This model is available… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/HelpSteer.
cp500/radiology-samples
cp500
Dataset Card for "radiology-samples" More Information needed
xinrongzhang2022/InfiniteBench
xinrongzhang2022
Usage: load with datasets
from datasets import load_dataset, Features, Value, Sequence

# Define the features schema
ft = Features({
    "id": Value("int64"),
    "context": Value("string"),
    "input": Value("string"),
    "answer": Sequence(Value("string")),
    "options": Sequence(Value("string")),
})

# Load the dataset with the specified features
dataset = load_dataset("xinrongzhang2022/InfiniteBench", features=ft)

Citation Please cite us if you use… See the full description on the dataset page: https://huggingface.co/datasets/xinrongzhang2022/InfiniteBench.
birgermoell/Italian_Parkinsons_Voice_and_Speech
birgermoell
The original dataset is located here. The citation for this dataset:
@data{aw6b-tg17-19,
  doi = {10.21227/aw6b-tg17},
  url = {https://dx.doi.org/10.21227/aw6b-tg17},
  author = {Dimauro, Giovanni and Girardi, Francesco},
  publisher = {IEEE Dataport},
  title = {Italian Parkinson's Voice and Speech},
  year = {2019}
}
The author of the dataset requests that academic users of the dataset cite the following articles, the latter of which describes how the dataset was created:… See the full description on the dataset page: https://huggingface.co/datasets/birgermoell/Italian_Parkinsons_Voice_and_Speech.
upaya07/NeurIPS-LLM-data
upaya07
🤖 We curated this dataset for the NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day. 🚀 Our Birbal-7B-V1, fine-tuned on this dataset, achieved 🏆 first rank 🏆 in the competition. Here is a high-level diagram of our data preparation strategy: Natural Instructions Dataset Preparation The Natural Instructions dataset is a community effort to create a large collection of tasks and their natural language definitions/instructions. As shown in the diagram above, we sample… See the full description on the dataset page: https://huggingface.co/datasets/upaya07/NeurIPS-LLM-data.
cnmoro/Instruct-PTBR-ENUS-11M
cnmoro
This dataset is a mix of multiple instruct datasets found on Hugging Face, while also including a number of other self-made datasets for tasks such as question-answering focused on RAG, summarization, keyword generation and others. Most of the original dataset was in the English language; I have translated most of it to Brazilian Portuguese. There is a "LANGUAGE" column, which indicates whether it is PT or EN. It is possible that the translation contains errors. For RAG, summarization and keyword… See the full description on the dataset page: https://huggingface.co/datasets/cnmoro/Instruct-PTBR-ENUS-11M.
Andyrasika/banking-marketing
Andyrasika
About Dataset Context Term deposits are a major source of income for a bank. A term deposit is a cash investment held at a financial institution. Your money is invested for an agreed rate of interest over a fixed amount of time, or term. The bank has various outreach plans to sell term deposits to their customers such as email marketing, advertisements, telephonic marketing, and digital marketing. Telephonic marketing campaigns still remain one of the most… See the full description on the dataset page: https://huggingface.co/datasets/Andyrasika/banking-marketing.
eminorhan/gutenberg_en
eminorhan
Description of the dataset This is the November 16, 2023 snapshot of the English subset of the Project Gutenberg corpus (containing 56712 documents in total), downloaded and preprocessed with code from this repository. Two different versions of the data are provided: The chunk_size_1024 version divides the data into ~14.2M records consisting of chunks of text a few paragraphs long, where each chunk is at least 1024 chars long, and the corresponding metadata. The chunk_size_2048 version… See the full description on the dataset page: https://huggingface.co/datasets/eminorhan/gutenberg_en.
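A sketch of the chunking scheme described above (greedy paragraph packing into chunks of at least 1024 chars; the repository's actual code may differ):

def chunk_text(text: str, min_chars: int = 1024):
    # Greedily pack paragraphs until each chunk reaches min_chars.
    chunks, current = [], ""
    for para in text.split("\n\n"):
        current = f"{current}\n\n{para}".strip()
        if len(current) >= min_chars:
            chunks.append(current)
            current = ""
    if current:
        chunks.append(current)  # trailing remainder
    return chunks

print(len(chunk_text("lorem ipsum\n\n" * 400)))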
TarasHu/pravoIsrael
TarasHu
Dataset Card for Dataset Name Q&A pairs relating to Israeli law, in Russian. This dataset card aims to be a base template for new datasets. It has been generated using this raw template. Dataset Details Dataset Description Curated by: [More Information Needed] Funded by [optional]: [More Information Needed] Shared by [optional]: [More Information Needed] Language(s) (NLP): [More Information Needed] License: [More Information Needed]… See the full description on the dataset page: https://huggingface.co/datasets/TarasHu/pravoIsrael.
bigheiniuJ/CoTTrain-CoTCollection
bigheiniuJ
Dataset Card for "CoTTrain-CoTCollection" More Information needed
Norquinal/OpenCAI
Norquinal
This dataset is comprised of roleplay chat conversations scraped from several Discord RP fandom servers. The conversations have been split by day, the assumption being that a majority of long-form roleplays are started/continued and completed within a day. The original dataset consists of ~14K samples. Light filtering stripped that down to ~10K samples. Stricter filtering stripped it down to ~5k samples. The strictest filtering stripped it down to ~4k samples. Effort was taken to remove… See the full description on the dataset page: https://huggingface.co/datasets/Norquinal/OpenCAI.
knowrohit07/know-saraswati-cot
knowrohit07
🚨 To all devs, scholars, and also fugazis of AI - A Philosophical Standpoint on AGI: This is extraneous, if you have time to read it-- give it a shot. We stand at the precipice of a digital era where the notions of artificial intelligence are often muddled with the grandiose idea of Artificial General Intelligence (AGI). Here's a candid reflection: Current LLMs and their Limitations: Let's be unequivocally clear—present-day language models, including transformers, are not a… See the full description on the dataset page: https://huggingface.co/datasets/knowrohit07/know-saraswati-cot.
simple-pretraining/wikipedia_chunked
simple-pretraining
Dataset Card for "wikipedia_chunked" More Information needed
umarbutler/open-australian-legal-qa
umarbutler
Open Australian Legal QA ‍⚖️ Open Australian Legal QA is the first open dataset of Australian legal questions and answers. Comprised of 2,124 questions and answers synthesised by gpt-4 from the Open Australian Legal Corpus, the largest open database of Australian law, the dataset is intended to facilitate the development of legal AI assistants in Australia. To ensure its accessibility to as wide an audience as possible, the dataset is distributed under the same licence as the… See the full description on the dataset page: https://huggingface.co/datasets/umarbutler/open-australian-legal-qa.
Weyaxi/HelpSteer-filtered
Weyaxi
HelpSteer-filtered This dataset is a highly filtered version of the nvidia/HelpSteer dataset. ❓ How this dataset was filtered: I calculated the sum of the columns ["helpfulness", "correctness", "coherence", "complexity", "verbosity"] and created a new column named sum. I changed some column names and added an empty column to match the Alpaca format. The dataset was then filtered to include only those entries with a sum greater than or equal to 16.… See the full description on the dataset page: https://huggingface.co/datasets/Weyaxi/HelpSteer-filtered.
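A sketch of the filtering rule (sum of the five scores >= 16), assuming the standard datasets API and a train split:

from datasets import load_dataset

COLS = ["helpfulness", "correctness", "coherence", "complexity", "verbosity"]

ds = load_dataset("nvidia/HelpSteer", split="train")  # split name assumed
ds = ds.map(lambda x: {"sum": sum(x[c] for c in COLS)})  # add the sum column
ds = ds.filter(lambda x: x["sum"] >= 16)  # keep entries with sum >= 16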
Moemu/Muice-Dataset
Moemu
Introduction This is the currently public Muice training set, 1,134 entries in total, covering categories such as self-cognition, emotional dialogue, and conversation style. As Muice develops, more training sets will be released (updates currently paused). License This training set currently uses CC-BY-NC-4.0; that is, apart from commercial use, and provided the author is credited, you may use this training set in any way (if you do, please let me know). Hope everyone builds their own Muice soon! Reference resources Twitter comment sections, Bilibili live-stream danmaku, Douban, SelfCognition, and the ruozhiba and wikihow subsets of m-a-p/COIG-CQIA. For the training sets that are already open source, based on their license information (if any) and the spirit of open source, we decided to open-source these training sets (the vast majority edited and adapted, including the prompts); see the corresponding .json files.
Leofierus/Drone-Dataset
Leofierus
The given dataset is a clone of the drone dataset on Kaggle. It was created by Mehdi Özel.
imvladikon/hebrew_speech_campus
imvladikon
Data Description Hebrew Speech Recognition dataset from Campus IL. Data was scraped from the Campus website, which contains video lectures from various courses in Hebrew. Subtitles were then extracted from the videos and aligned with the audio. Subtitles that are not in Hebrew were removed (WIP: non-Hebrew audio still needs to be removed as well, e.g. using a simple classifier). Samples shorter than 3 seconds were removed. The total duration of the dataset is 152 hours. Outliers in… See the full description on the dataset page: https://huggingface.co/datasets/imvladikon/hebrew_speech_campus.
KrisPi/PythonTutor-Evol-1k-DPO-GPT4_vs_35
KrisPi
Started with: https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1 (GPT-3.5 Turbo). Randomly selected 1,000 examples whose output contained "```python". Generated GPT-4 answers to those for the sake of LIMA-like "Python Tutor" instruct fine-tuning, as well as to validate DPO fine-tuning (where GPT-4 answers will be preferred over GPT-3.5 Turbo). Then filtered refusals (looking for "impossible" or "sorry"). GPT-4 System Prompt: You are an intelligent assistant that generates Python code.… See the full description on the dataset page: https://huggingface.co/datasets/KrisPi/PythonTutor-Evol-1k-DPO-GPT4_vs_35.
ParisNeo/Word_in_Sentence_Database
ParisNeo
WIS database This database contains a question-answer list about text. It was built using this workflow: 1- load a raw text file 2- split it into paragraphs 3- split the paragraphs into sentences 4- for each word, ask a question about its position and answer with the position, then ask about the word's length and answer with the actual length of the word 5- ask a question about the number of words in the sentence and answer it 6- build a json database from this. To do this, I… See the full description on the dataset page: https://huggingface.co/datasets/ParisNeo/Word_in_Sentence_Database.
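A sketch of steps 3-6 above, with simple whitespace tokenization assumed (the author's actual script may differ):

import json

sentence = "The quick brown fox jumps"
words = sentence.split()
qa = []
for i, word in enumerate(words, start=1):
    qa.append({"q": f"What is the position of '{word}'?", "a": str(i)})
    qa.append({"q": f"How long is the word '{word}'?", "a": str(len(word))})
qa.append({"q": "How many words are in the sentence?", "a": str(len(words))})

with open("wis.json", "w") as f:  # hypothetical output file
    json.dump(qa, f, indent=2)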
DBQ/Louis.Vuitton.Product.prices.France
DBQ
Louis Vuitton web scraped data About the website The Luxury Fashion industry in the EMEA region, specifically in France, is a competitive and dynamic sector representing some of the most prestigious names in fashion. Louis Vuitton, a prominent player, is renowned for its high-end products, setting the tone for luxury retail within and beyond France. The recent rise in digital transformation has intensified the focus on Ecommerce within this industry. The dataset… See the full description on the dataset page: https://huggingface.co/datasets/DBQ/Louis.Vuitton.Product.prices.France.
DBQ/Louis.Vuitton.Product.prices.Canada
DBQ
Louis Vuitton web scraped data About the website The luxury fashion industry in the Americas, specifically in Canada is flourishing and significantly competitive. A vital player, Louis Vuitton, has crucially attained a strong positioning in this market. The industry in focus encompasses high-end, exclusive products and services, which are in high demand amongst the affluent sections of society. These products typically include haute couture, ready-to-wear clothing… See the full description on the dataset page: https://huggingface.co/datasets/DBQ/Louis.Vuitton.Product.prices.Canada.
neuralwork/fashion-style-instruct
neuralwork
Style Chatbot Dataset A style recommendation dataset that contains input (body type and personal clothing style), context (event context) and response triplets. The responses are GPT 3.5 generated outfit combination recommendations given the input body type and personal style prompt and the target / context event. Our dataset contains a variety of events such as business functions, cocktail parties, casual gatherings, fancy dates, etc. See an example Mistral-based finetuned… See the full description on the dataset page: https://huggingface.co/datasets/neuralwork/fashion-style-instruct.
ywanny/Drone_Detection
ywanny
Dataset Card for Dataset Name Credit: https://www.kaggle.com/datasets/dasmehdixtr/drone-dataset-uav. This is a dataset from the above link. It is used for object-detection training of a YOLO model on the drone class. Dataset Details Dataset Description Curated by: [More Information Needed] Funded by [optional]: [More Information Needed] Shared by [optional]: [More Information Needed] Language(s) (NLP): [More Information Needed] License:… See the full description on the dataset page: https://huggingface.co/datasets/ywanny/Drone_Detection.
jameslpineda/cs370-uav-detection
jameslpineda
The drone dataset that was used was from https://www.kaggle.com/datasets/muki2003/yolo-drone-detection-dataset
Lin-Chen/ShareGPT4V
Lin-Chen
News [2024/5/8] We released ShareGPT4Video, a large-scale video-caption dataset, with 40K captions annotated by GPT4V and 4.8M captions annotated by our ShareCaptioner-Video. The videos total 300 hours and 3,000 hours, respectively! ShareGPT4V 1.2M Dataset Card Dataset details Dataset type: ShareGPT4V Captions 1.2M is a set of GPT4-Vision-powered multi-modal captions data. It is constructed to enhance modality alignment and fine-grained visual… See the full description on the dataset page: https://huggingface.co/datasets/Lin-Chen/ShareGPT4V.