Columns: id (string, 6 to 121 chars); author (string, 2 to 42 chars); description (string, 0 to 6.67k chars).
sandersaarond/Grafana-Community-Dashboards
sandersaarond
This is a raw dump of the dashboard JSON hosted at https://grafana.com/grafana/dashboards/, taken on 06-06-23. Dashboards themselves are JSON; related metadata is retained for filtering purposes (e.g., by number of downloads) to help identify useful data. Dashboards may contain many different query languages, may range across many versions of Grafana, and may be completely broken (since anyone can upload one). JSON structure varies considerably between different dashboards, and finding… See the full description on the dataset page: https://huggingface.co/datasets/sandersaarond/Grafana-Community-Dashboards.
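Since the card suggests filtering by metadata such as download counts, here is a minimal sketch of that workflow with the datasets library; the split name and the "downloads" column name are assumptions, not confirmed by the card, so inspect ds.column_names first:

    from datasets import load_dataset

    ds = load_dataset("sandersaarond/Grafana-Community-Dashboards", split="train")
    # Keep only dashboards with a meaningful install base (column name assumed).
    popular = ds.filter(lambda row: row.get("downloads", 0) > 1000)
    print(len(popular))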
lmsys/chatbot_arena_conversations
lmsys
Chatbot Arena Conversations Dataset This dataset contains 33K cleaned conversations with pairwise human preferences. It was collected from 13K unique IP addresses on the Chatbot Arena from April to June 2023. Each sample includes a question ID, two model names, their full conversation text in OpenAI API JSON format, the user vote, the anonymized user ID, the detected language tag, the OpenAI moderation API tag, the additional toxic tag, and the timestamp. To ensure the safe… See the full description on the dataset page: https://huggingface.co/datasets/lmsys/chatbot_arena_conversations.
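A sketch of one common use, tallying pairwise votes into per-model win counts; the field names (model_a, model_b, winner) are assumptions inferred from the card's description and should be verified against the actual schema:

    from collections import Counter
    from datasets import load_dataset

    ds = load_dataset("lmsys/chatbot_arena_conversations", split="train")
    wins = Counter()
    for row in ds:
        if row["winner"] == "model_a":      # field names assumed
            wins[row["model_a"]] += 1
        elif row["winner"] == "model_b":
            wins[row["model_b"]] += 1
    print(wins.most_common(5))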
dkoterwa/kor-sts
dkoterwa
Korean Semantic Textual Similarity (KorSTS) Dataset For a better dataset description, please visit the GitHub repository prepared by the authors of the article: LINK. This dataset was prepared by converting TSV files from that repository; the idea was to share the dataset with a broader audience. I am not its original author. Because of the specificity of the read_csv method from the Pandas library, a couple of observations had to be deleted because of their formatting… See the full description on the dataset page: https://huggingface.co/datasets/dkoterwa/kor-sts.
elsaEU/ELSA1M_track1
elsaEU
ELSA - Multimedia use case. ELSA Multimedia is a large collection of deep fake images generated using diffusion models. Dataset Summary: This dataset was developed as part of the EU project ELSA, specifically for the Multimedia use case. Official webpage: https://benchmarks.elsa-ai.eu/ This dataset aims to develop effective solutions for detecting and mitigating the spread of deep fake images in multimedia content. Deep fake images, which are highly realistic and… See the full description on the dataset page: https://huggingface.co/datasets/elsaEU/ELSA1M_track1.
ivrit-ai/audio-transcripts
ivrit-ai
ivrit.ai is a database of Hebrew audio and text content. audio-base contains the raw, unprocessed sources. audio-vad contains audio snippets generated by applying Silero VAD (https://github.com/snakers4/silero-vad) to the base dataset. audio-transcripts contains transcriptions for each snippet in the audio-vad dataset. The audio-base dataset contains data from the following sources: Geekonomy (Podcast, https://geekonomy.net) HaCongress (Podcast, https://hacongress.podbean.com/) Idan Eretz's… See the full description on the dataset page: https://huggingface.co/datasets/ivrit-ai/audio-transcripts.
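For reference, a minimal sketch of the Silero VAD step that produced audio-vad from audio-base, using the published torch.hub entry point (the file path is a placeholder; the exact parameters used by ivrit.ai are not documented here):

    import torch

    model, utils = torch.hub.load("snakers4/silero-vad", "silero_vad")
    get_speech_timestamps, _, read_audio, _, _ = utils

    wav = read_audio("episode.wav", sampling_rate=16000)  # placeholder path
    # Returns [{'start': ..., 'end': ...}, ...] sample offsets for speech segments.
    print(get_speech_timestamps(wav, model, sampling_rate=16000))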
MichaelR207/MultiSim
MichaelR207
MultiSim is a growing collection of Text Simplification datasets in multiple languages. Each dataset is a set of complex and simple sentence pairs.
raptorkwok/cantonese-traditional-chinese-parallel-corpus
raptorkwok
This is a Cantonese-Written Chinese parallel corpus, containing 130k+ pairs of Cantonese and Traditional Chinese parallel sentences.
FunDialogues/healthcare-minor-consultation
FunDialogues
This dataset comprises fictitious examples of dialogues between a doctor and a patient during a minor medical consultation. Check out the example below: "id": 1, "description": "Discussion about a common cold", "dialogue": "Patient: Doctor, I've been feeling congested and have a runny nose. What can I do to relieve these symptoms?\n\nDoctor: It sounds like you have a common cold. You can try over-the-counter decongestants to relieve congestion and saline nasal sprays to… See the full description on the dataset page: https://huggingface.co/datasets/FunDialogues/healthcare-minor-consultation.
nahyeon00/mixsnips_clean
nahyeon00
Dataset Card for "mixsnips_clean" More Information needed
jxu9001/tagged_addresses
jxu9001
Dataset Card for "tagged_addresses" More Information needed
rafaelpadilla/coco2017
rafaelpadilla
This dataset contains all COCO 2017 images and annotations, split into training (118,287 images) and validation (5,000 images).
lavita/medical-qa-shared-task-v1-toy
lavita
Dataset Card for "medical-qa-shared-task-v1-toy" More Information needed
FreedomIntelligence/CMB
FreedomIntelligence
CMB: A Comprehensive Medical Benchmark in Chinese 🌐 Github • 🌐 Website • 🤗 HuggingFace 🌈 Update [2024.02.21] The answers to the CMB-Exam test have been updated, and some errors caused by omissions in version management have been fixed. [2024.01.08] To facilitate testing, we disclose the answers to the CMB-Exam test. [2023.09.22] CMB is included in OpenCompass. [2023.08.21] Paper released. [2023.08.01] 🎉🎉🎉 CMB is published!🎉🎉🎉… See the full description on the dataset page: https://huggingface.co/datasets/FreedomIntelligence/CMB.
opsci/Astree
opsci
Aſtrée is a repository of 2,000 early modern instructions in French drawn from 162 French novels published between 1600 and 1700. Aſtrée can be used to fine-tune any LLM on early modern French. All the instructions have been created from one-page excerpts extracted from public domain works with historical writing and typography. They may include OCR errors that should not significantly affect the quality of text generation. Beyond their cultural relevance, Aſtrée provides a very good sample… See the full description on the dataset page: https://huggingface.co/datasets/opsci/Astree.
jslin09/wikipedia_tw
jslin09
To build your own large language model, the most basic prerequisite is a large pile of text. Crawling Common Crawl and slowly cleaning it is one route; cleaning Wikipedia's periodic dump files is another. This dataset was parsed from the Traditional Chinese bz2 dump of Wikipedia published on 20240420; after extracting the needed content, wikitextparser was used to remove wiki markup. Two fields are retained: the article title (title) and the article body (page article). The original dump mixes Simplified and Traditional Chinese, so OpenCC was used to convert Simplified to Traditional. Original total: 4,451,426 articles (all 4,451,426 article titles). Articles whose markup could not be removed automatically: 3,035,750. Articles with content: 1,415,676. Because this dataset is large, loading it on an ordinary personal computer may run out of resources; downloading the parquet format is recommended. Articles whose content is a #REDIRECT have been removed where possible; any that slipped through will be cleaned in a future revision.
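A minimal sketch of the cleaning pipeline the description names (wikitextparser to strip markup, OpenCC for Simplified-to-Traditional conversion); the exact options the author used are not documented here:

    import wikitextparser as wtp
    from opencc import OpenCC

    cc = OpenCC("s2t")  # Simplified -> Traditional, as the card describes

    def clean_article(wikitext: str) -> str:
        plain = wtp.parse(wikitext).plain_text()  # strip wiki markup
        return cc.convert(plain)

    print(clean_article("'''台北'''是[[中华民国]]的首都。"))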
jed351/Chinese-Common-Crawl-Filtered
jed351
Traditional Chinese C4 Dataset Summary Data obtained from 2023-14 Common Crawl. Downloaded and processed using code based on another project attempting to recreate the C4 dataset. The resultant dataset contains both simplified and traditional Chinese. It was then filtered using a modified list of simplified Chinese characters to obtain another traditional Chinese dataset. I would like to acknowledge computational resources and support provided by the Imperial… See the full description on the dataset page: https://huggingface.co/datasets/jed351/Chinese-Common-Crawl-Filtered.
jed351/Traditional-Chinese-Common-Crawl-Filtered
jed351
Traditional Chinese C4 Dataset Summary Data obtained from 2023-14 Common Crawl. Downloaded and processed using code based on another project attempting to recreate the C4 dataset. The resultant dataset contains both simplified and traditional Chinese, which could be found here. It was then filtered using a modified list of simplified Chinese characters to obtain this traditional Chinese dataset. I would like to acknowledge computational resources and support… See the full description on the dataset page: https://huggingface.co/datasets/jed351/Traditional-Chinese-Common-Crawl-Filtered.
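Both of the entries above describe filtering with a list of Simplified-only characters; a toy sketch of that idea follows (the character list here is illustrative and far shorter than a real one):

    # Illustrative, heavily truncated set of characters that occur only in Simplified Chinese.
    SIMPLIFIED_ONLY = set("们这来对国说时")

    def looks_traditional(doc: str) -> bool:
        # A document is kept only if it contains no Simplified-only characters.
        return not any(ch in SIMPLIFIED_ONLY for ch in doc)

    docs = ["這是繁體中文。", "这是简体中文。"]
    print([d for d in docs if looks_traditional(d)])  # keeps only the first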
Open-Orca/FLAN
Open-Orca
🍮 The WHOLE FLAN Collection! 🍮 Overview This repository includes the full dataset from the FLAN Collection, totalling ~300GB as parquets. Generated using the official seqio templating from the Google FLAN Collection GitHub repo. The data is subject to all the same licensing of the component datasets. To keep up with our continued work on OpenOrca and other exciting research, find our Discord here: https://AlignmentLab.ai Motivation This work was done as part… See the full description on the dataset page: https://huggingface.co/datasets/Open-Orca/FLAN.
deetsadi/musiccaps_spectrograms
deetsadi
Dataset Card for "musiccaps_spectrograms" More Information needed
SiberiaSoft/SiberianPersonaChat
SiberiaSoft
SiberiaSoft/SiberianPersonaChat. A dataset of instructions, dialogues, and QA. This dataset was created for dialogue agents with persona imitation. Most of it was generated with chatGPT using various prompts; it also includes a modified TolokaPersonaChatRus. Persona description format: "You are a guy, an airplane pilot. You are into diving. You collect stamps. You love ancient architecture." "You are a girl, an artist. You are into neural-network… See the full description on the dataset page: https://huggingface.co/datasets/SiberiaSoft/SiberianPersonaChat.
qwerty8409/digesion_Ayurveda
qwerty8409
Dataset Card for Dataset Name Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using this raw template. Supported Tasks and Leaderboards [More Information Needed] Languages [More Information Needed] Dataset Structure Data Instances [More Information Needed] Data Fields [More Information Needed] Data Splits [More Information… See the full description on the dataset page: https://huggingface.co/datasets/qwerty8409/digesion_Ayurveda.
pleisto/wikipedia-cn-20230720-filtered
pleisto
This dataset is based on the Chinese Wikipedia dump archive from July 20, 2023. As a data-centric effort, it retains only 254,547 higher-quality entries. Specifically: entries of special types such as Template, Category, Wikipedia, File, Topic, Portal, MediaWiki, Draft, and Help were filtered out; heuristics and an in-house NLU model were used to filter out a portion of low-quality entries; some sensitive or controversial entries were filtered out; and Simplified-Traditional conversion and usage normalization were applied to match mainland China conventions.… See the full description on the dataset page: https://huggingface.co/datasets/pleisto/wikipedia-cn-20230720-filtered.
HachiML/humaneval-ja-v0.6
HachiML
Dataset Card for "humaneval-ja" More Information needed
iamtarun/code_instructions_120k_alpaca
iamtarun
Dataset Card for code_instructions_120k_alpaca This dataset is taken from sahil2801/code_instructions_120k, with an added prompt column in Alpaca style. Refer to the original source here.
Lucastil2212/ufo-reports
Lucastil2212
Dataset Card for Dataset Name Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using this raw template. Supported Tasks and Leaderboards [More Information Needed] Languages [More Information Needed] Dataset Structure Data Instances [More Information Needed] Data Fields [More Information Needed] Data Splits [More Information… See the full description on the dataset page: https://huggingface.co/datasets/Lucastil2212/ufo-reports.
xwjzds/paraphrase_collections
xwjzds
Dataset Card for Sentence Paraphrase Collections Dataset Description Repository: Paper: DeTiME: Diffusion-Enhanced Topic Modeling using Encoder-decoder based LLM https://arxiv.org/abs/2310.15296 Leaderboard: Point of Contact: Weijie Xu Dataset Summary Sentence_Paraphrase is a combination of sentence paraphrase tasks from various sources, such as paraphrases generated using ChatGPT, Paraphrase Adversaries from Word Scrambling (PAWS), and the STS benchmark. We filtered out pairs that are detected as non-English… See the full description on the dataset page: https://huggingface.co/datasets/xwjzds/paraphrase_collections.
lamini/lamini_docs
lamini
Dataset Card for "lamini_docs" More Information needed
NebulaByte/E-Commerce_Customer_Support_Conversations
NebulaByte
Dataset Card for "E-Commerce_Customer_Support_Conversations" The dataset is synthetically generated with OpenAI ChatGPT model (gpt-3.5-turbo). More Information needed
youssef101/artelingo
youssef101
ArtELingo is a benchmark and dataset comprising 80,000 artworks from WikiArt with 1.2 million annotations in English, Arabic, and Chinese.
dagim/urban-dictionary-embeddings
dagim
Dataset Card for "urban-dictionary-embeddings" More Information needed
BI55/MedText
BI55
This is the shuffled version of medtext_1, so the datapoints are in random order rather than sorted by category; this prevents catastrophic forgetting by category. This is a medical diagnosis dataset containing over 1000 top-notch, textbook-quality patient presentations and diagnoses/treatments. The 100 most common diseases and the 30 most common injuries people go to the hospital with are, among others, fully captured in the dataset, with multiple datapoints for each, ranging from mild to… See the full description on the dataset page: https://huggingface.co/datasets/BI55/MedText.
RazinAleks/SO-Python_QA-Database_and_SQL_class
RazinAleks
Dataset Card for Dataset Name Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using this raw template. Supported Tasks and Leaderboards [More Information Needed] Languages [More Information Needed] Dataset Structure Data Instances [More Information Needed] Data Fields [More Information Needed] Data Splits [More Information… See the full description on the dataset page: https://huggingface.co/datasets/RazinAleks/SO-Python_QA-Database_and_SQL_class.
ssonpull519/safebooru-prompts-2023-upscore8
ssonpull519
Safebooru Prompts with 0-category Safebooru prompts crawled in July 2023, filtered by up_score >= 8, keeping tags from Danbooru tag category 0. Source code for crawling and preprocessing is here.
seungheondoh/LP-MusicCaps-MC
seungheondoh
====================================== !important: Be careful when using caption_attribute_prediction (we don't recommend using it)! ====================================== Dataset Card for LP-MusicCaps-MC Dataset Summary LP-MusicCaps is a Large Language Model based Pseudo Music Caption dataset for text-to-music and music-to-text tasks. We construct the music-to-caption pairs with tag-to-caption generation (using three existing multi-label tag datasets and four task… See the full description on the dataset page: https://huggingface.co/datasets/seungheondoh/LP-MusicCaps-MC.
togethercomputer/Long-Data-Collections
togethercomputer
Dataset Summary This collection is a compilation of long context datasets, specifically designed for tasks requiring extensive comprehension and inference from large text inputs. Currently, it encompasses data intended for training a robust base model, which can be found in the pretrain/ directory. Additionally, it includes datasets tailored for specific needs, located in the fine-tune/ directory. These specialized datasets include multi-passage question answering, derived from… See the full description on the dataset page: https://huggingface.co/datasets/togethercomputer/Long-Data-Collections.
dimanchkek/Deepfacelive-DFM-Models
dimanchkek
Description Here you can find files for DeepFaceLab and DeepFaceLive. All sources and active community members are listed below. Disclaimer The author of this repository makes no claim to the data uploaded here other than that created by himself. Feel free to open a discussion for me to mention your contacts if I haven't done so. Risks and Limitations Use these files at your own risk. The authors of the models and the repository creator cannot… See the full description on the dataset page: https://huggingface.co/datasets/dimanchkek/Deepfacelive-DFM-Models.
seungheondoh/LP-MusicCaps-MSD
seungheondoh
====================================== !important: Be careful when using caption_attribute_prediction (we don't recommend using it)! ====================================== Dataset Card for LP-MusicCaps-MSD Dataset Summary LP-MusicCaps is a Large Language Model based Pseudo Music Caption dataset for text-to-music and music-to-text tasks. We construct the music-to-caption pairs with tag-to-caption generation (using three existing multi-label tag datasets and four task… See the full description on the dataset page: https://huggingface.co/datasets/seungheondoh/LP-MusicCaps-MSD.
PeterBrendan/Ads_Creative_Ad_Copy_Programmatic
PeterBrendan
Dataset Summary The Programmatic Ad Creatives dataset contains 7097 samples of online programmatic ad creatives along with their ad sizes. The dataset includes 8 unique ad sizes, such as (300, 250), (728, 90), (970, 250), (300, 600), (160, 600), (970, 90), (336, 280), and (320, 50). The dataset is in a tabular format and represents a random sample from Project300x250.com's complete creative data set. It is primarily used for training and evaluating natural language processing… See the full description on the dataset page: https://huggingface.co/datasets/PeterBrendan/Ads_Creative_Ad_Copy_Programmatic.
AlgorithmicResearchGroup/arxiv_deep_learning_python_research_code
AlgorithmicResearchGroup
Dataset Card for "AlgorithmicResearchGroup/arxiv_python_research_code" Dataset Description https://huggingface.co/datasets/AlgorithmicResearchGroup/arxiv_deep_learning_python_research_code Dataset Summary AlgorithmicResearchGroup/arxiv_deep_learning_python_research_code contains over 1.49B of source code files referenced strictly in ArXiv papers. The dataset serves as a curated dataset for Code LLMs. How to use it from datasets… See the full description on the dataset page: https://huggingface.co/datasets/AlgorithmicResearchGroup/arxiv_deep_learning_python_research_code.
shibing624/sharegpt_gpt4
shibing624
Dataset Card Dataset Summary GPT-4 multi-turn Q&A data selected from ShareGPT; multilingual Q&A. Languages The dataset is multilingual, covering Chinese, English, Japanese, and other common languages. Dataset Structure Data Fields The data fields are the same among all splits. conversations: a List of string. head -n 1 sharegpt_gpt4.jsonl {"conversations":[ {'from': 'human', 'value': '採用優雅現代中文,用中文繁體字型,回答以下問題。為所有標題或專用字詞提供對應的英語翻譯:Using scholarly style, summarize in detail James Barr\'s book "Semantics of Biblical… See the full description on the dataset page: https://huggingface.co/datasets/shibing624/sharegpt_gpt4.
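Given the conversations structure shown above, a small sketch converting one record into OpenAI-style chat messages; the 'gpt' role name is an assumption based on common ShareGPT conventions:

    import json

    ROLE_MAP = {"human": "user", "gpt": "assistant"}  # 'gpt' role assumed

    def to_openai_messages(record):
        # Each turn is a {'from': ..., 'value': ...} dict per the sample above.
        return [{"role": ROLE_MAP.get(t["from"], t["from"]), "content": t["value"]}
                for t in record["conversations"]]

    with open("sharegpt_gpt4.jsonl") as f:
        print(to_openai_messages(json.loads(f.readline()))[:2])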
Besteasy/CG-Eval
Besteasy
Benchmark overview: CG-Eval is a benchmark for the generative capabilities of Chinese large language models, jointly developed by 甲骨易AI研究院 and LanguageX AI Lab. Models under test must give accurate and relevant answers to 11,000 questions of various types across 55 sub-subjects under six major categories: science and engineering, humanities and social sciences, mathematical computation, the physician qualification exam, the judicial exam, and the certified public accountant exam. We designed a composite scoring system: for non-computational questions, every term-explanation and short-answer question has a reference answer and is scored against multiple criteria combined with weights; for computational questions, the final result and the solution process are extracted and scored jointly. Fields include: major category, sub-subject name, question type, question ID, question text, character length of the answer, and question prompt. Paper and downloads: CG-Eval paper https://arxiv.org/abs/2308.04823; CG-Eval test set https://huggingface.co/datasets/Besteasy/CG-Eval; CG-Eval automated evaluation… See the full description on the dataset page: https://huggingface.co/datasets/Besteasy/CG-Eval.
nisaar/Constitution_Of_India_Instruction_Set
nisaar
gauravshrm211/VC-startup-evaluation-for-investment
gauravshrm211
This data set includes completion pairs for evaluating startups before investing in them. It includes completion examples for chain-of-thought reasoning to perform financial calculations; examples for evaluating risk profile, growth prospects, cost, ratios, market size, assets, liabilities, debt, equity, and other ratios; and comparisons of different startups.
HoarfrostRaven/sprites
HoarfrostRaven
Dataset Card for "sprites" More Information needed
abacusai/LongChat-Lines
abacusai
Dataset Card for "LongChat-Lines" This dataset is was used to evaluate the performance of model finetuned to operate on longer contexts. It is based on a task template proposed by LMSys to evaluate attention to arbitrary points in the context. See the full details at https;//github.com/abacusai/Long-Context.
iamtarun/code_contest_python3_alpaca
iamtarun
Dataset Card for Code Contest Processed Dataset Summary This dataset contains coding contest questions and their solutions written in Python 3. It was created by processing the code_contest dataset from DeepMind, a competitive programming dataset for machine learning. Read more about the dataset at the original source. Columns Description: id: unique string associated with a problem; description: problem description; code: one correct code for the… See the full description on the dataset page: https://huggingface.co/datasets/iamtarun/code_contest_python3_alpaca.
HydraLM/CodeAlpaca-20k_alpaca
HydraLM
Dataset Card for "CodeAlpaca-20k_alpaca" More Information needed
ibm-nasa-geospatial/multi-temporal-crop-classification
ibm-nasa-geospatial
Dataset Card for Multi-Temporal Crop Classification Dataset Summary This dataset contains temporal Harmonized Landsat-Sentinel imagery of diverse land cover and crop type classes across the Contiguous United States for the year 2022. The target labels are derived from USDA's Crop Data Layer (CDL). Its primary purpose is training segmentation geospatial machine learning models. Dataset Structure TIFF Files Each tiff file covers a… See the full description on the dataset page: https://huggingface.co/datasets/ibm-nasa-geospatial/multi-temporal-crop-classification.
Shashkovich/Telecommunication_SMS_time_series
Shashkovich
SMS time series data for traffic and fraud forecasting. This dataset contains various time series from vendors. Vendor A (01.03.23-14.08.23): TS_*_all (count of all SMS). Vendor A (January): TS_*_fraud (count of fraud), TS_*_all (count of all SMS), TS_*_hlrDelay (mean values of HLR delay). Vendor B (January 1-8): 1-8_TS_*_fraud (count of fraud), 1-8_TS_*_all (count of all SMS), 1-8_TS_*_hlrDelay (mean values of HLR delay).
sunlab/patch_db
sunlab
PatchDB: A Large-Scale Security Patch Dataset Description To foster large-scale research on vulnerability mitigation and to enable a comparison of different detection approaches, we make our dataset PatchDB from our DSN'21 paper publicly available. PatchDB is a large-scale security patch dataset that contains 12,073 security patches and 23,742 non-security patches from the real world. You can find more details on the dataset in the paper "PatchDB: A… See the full description on the dataset page: https://huggingface.co/datasets/sunlab/patch_db.
argilla/llama-2-banking-fine-tune
argilla
Dataset Card for llama-2-banking-fine-tune This dataset has been created with Argilla. As shown in the sections below, this dataset can be loaded into Argilla as explained in Load with Argilla, or used directly with the datasets library in Load with datasets. Dataset Summary This dataset contains: A dataset configuration file conforming to the Argilla dataset format named argilla.yaml. This configuration file will be used to configure the dataset when using the… See the full description on the dataset page: https://huggingface.co/datasets/argilla/llama-2-banking-fine-tune.
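As the card indicates, the records can be pulled either with the datasets library or into Argilla; a sketch of both routes (the FeedbackDataset loader follows Argilla v1 conventions and may differ in newer releases):

    import argilla as rg
    from datasets import load_dataset

    # Plain Hugging Face route:
    hf_ds = load_dataset("argilla/llama-2-banking-fine-tune", split="train")

    # Argilla route (v1-style API, assumed to match this card's vintage):
    fb_ds = rg.FeedbackDataset.from_huggingface("argilla/llama-2-banking-fine-tune")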
frodobots/FrodoBots-2K
frodobots
FrodoBots 2K Dataset The FrodoBots 2K Dataset is a diverse collection of camera footage, GPS, IMU, audio recordings & human control data collected from ~2,000 hours of tele-operated sidewalk robots driving in 10+ cities. This dataset is collected from Earth Rovers, a global scavenger hunt "Drive to Earn" game developed by FrodoBots Lab. Please join our Discord for discussions with fellow researchers/makers! If you're interested in contributing driving data, you can buy your own… See the full description on the dataset page: https://huggingface.co/datasets/frodobots/FrodoBots-2K.
C-MTEB/LCQMC
C-MTEB
Dataset Card for "LCQMC" More Information needed
Dewa/Dog_Emotion_Dataset_v2
Dewa
Dataset Card for "Dog_Emotion_Dataset_v2" The Dataset is based on a kaggle dataset Label and its Meaning 0 : sad" 1 : angry" 2 : relaxed" 3 : happy"
HuggingFaceM4/LLaVAR-Instruct-16K
HuggingFaceM4
Dataset Card for "LLaVAR-Instruct-16K" More Information needed
zjunlp/InstructIE
zjunlp
InstructIE: A Bilingual Instruction-based Information Extraction Dataset Github Tutorial News [2024/02] We released a large-scale (0.32B tokens) high-quality bilingual (Chinese and English) Information Extraction (IE) instruction tuning dataset named IEPile, along with two models trained on IEPile, baichuan2-13b-iepile-lora and llama2-13b-iepile-lora. [2023/10] We released a new bilingual… See the full description on the dataset page: https://huggingface.co/datasets/zjunlp/InstructIE.
OleehyO/latex-formulas
OleehyO
BIG NEWS!! 📮 [2024-02] We trained a formula recognition model, TexTeller, using the latex-formulas dataset. It can convert images of formulas into LaTeX and boasts high accuracy and strong generalization capabilities, covering most formula recognition scenarios. For more details, please refer to the TexTeller GitHub repository. Dataset Description (Chinese version available) There are two datasets: raw_formulas and cleaned_formulas (this dataset has 550K… See the full description on the dataset page: https://huggingface.co/datasets/OleehyO/latex-formulas.
THUDM/LongBench
THUDM
LongBench is a comprehensive benchmark for multilingual and multi-task evaluation, with the goal of fully measuring and evaluating the ability of pre-trained language models to understand long text. This dataset consists of twenty different tasks, covering key long-text application scenarios such as multi-document QA, single-document QA, summarization, few-shot learning, synthetic tasks, and code completion.
amansingh203/stuttering_asr
amansingh203
Dataset Card for "stuttering_asr" More Information needed
ahmed-masry/unichart-pretrain-data
ahmed-masry
Dataset Card for "unichart-pretrain-data" If you wanna load the dataset, you can run the following code: from datasets import load_dataset data = load_dataset('ahmed-masry/unichart-pretrain-data') The dataset has the following structure: DatasetDict({ train: Dataset({ features: ['imgname', 'query', 'label'], num_rows: 6898333 }) }) It has 6898333 rows; each row consist of the imgename, the input query, and the output label. Chart Images… See the full description on the dataset page: https://huggingface.co/datasets/ahmed-masry/unichart-pretrain-data.
LinkSoul/Chinese-LLaVA-Vision-Instructions
LinkSoul
This dataset is a translation of LLaVA; please download the corresponding images from the LLaVA dataset. Baidu Netdisk link: https://pan.baidu.com/s/1-jgINIkW0MxusmJuSif85w?pwd=q62v
ArmelR/the-pile-splitted
ArmelR
Dataset description The Pile is an 800GB dataset of English text designed by EleutherAI to train large-scale language models. The original version of the dataset can be found here. The dataset is divided into 22 smaller high-quality datasets. For more information on each of them, please refer to the datasheet for the Pile. However, the current version of the dataset, available on the Hub, is not split accordingly. We had to solve this problem in order to improve the user… See the full description on the dataset page: https://huggingface.co/datasets/ArmelR/the-pile-splitted.
Arjun-G-Ravi/Python-codes
Arjun-G-Ravi
Dataset Card for Dataset Name Please note that this dataset may not be perfect and may contain a very small quantity of non-Python code. Dataset Summary The dataset contains a collection of Python questions and their code, meant for training models to be efficient in Python-specific coding. The dataset has two features, 'question' and 'code'. An example is: {'question': 'Create a function that takes in… See the full description on the dataset page: https://huggingface.co/datasets/Arjun-G-Ravi/Python-codes.
jainabh/smart_contracts_malicious
jainabh
Dataset Card for "smart_contracts_malicious" More Information needed
jainabh/smart-contract-small
jainabh
Dataset Card for "smart-contract-small" More Information needed
jxie/modelnet40
jxie
Dataset Card for "modelnet40" More Information needed
PetraAI/PetraAI
PetraAI
PETRA Overview PETRA is a multilingual dataset for training and evaluating AI systems on a diverse range of tasks across multiple modalities. It contains data in Arabic and English for tasks including translation, summarization, question answering, and more. Dataset Structure: data is separated by language into /ar and /en directories; within each language directory, data is separated by task into subdirectories. Tasks include: Translation… See the full description on the dataset page: https://huggingface.co/datasets/PetraAI/PetraAI.
xzuyn/manythings-translations-alpaca
xzuyn
Original Dataset 3,164,972 translations from English to 84 other languages. I've duplicated it to include both directions (to and from English), so it now contains 6,329,944 translations.
izumi-lab/open-text-books
izumi-lab
Dataset Card for "open-text-books" More Information needed
Agus787/ATMYThotTot
Agus787
ATMYThotTot LoRAs for Stable Diffusion. Link to the author's page: https://www.pixiv.net/en/artworks/104957904
bloyal/antiberta-pretrain
bloyal
AntiBERTa Pretraining Data Description Pretraining data for the AntiBERTa protein language model from Alchemab Therapeutics. Citations:

    @article{Leem_Mitchell_Farmery_Barton_Galson_2022,
      title={Deciphering the language of antibodies using self-supervised learning},
      volume={3},
      ISSN={2666-3899},
      url={https://www.cell.com/patterns/abstract/S2666-3899(22)00105-2},
      DOI={10.1016/j.patter.2022.100513},
      number={7},
      journal={Patterns},
      publisher={Elsevier}…

See the full description on the dataset page: https://huggingface.co/datasets/bloyal/antiberta-pretrain.
tilyupo/trivia_qa
tilyupo
Dataset Card for "trivia_qa_passages" More Information needed
AISHELL/AISHELL-1
AISHELL
Aishell is an open-source Mandarin Chinese speech corpus published by Beijing Shell Shell Technology Co., Ltd. 400 speakers from different accent areas in China were invited to participate in the recording, which was conducted in a quiet indoor environment using high-fidelity microphones and downsampled to 16 kHz. Manual transcription accuracy is above 95%, achieved through professional speech annotation and strict quality inspection. The data is free for academic use. We hope to provide a moderate amount… See the full description on the dataset page: https://huggingface.co/datasets/AISHELL/AISHELL-1.
jjovalle99/amazon_metadata_datathon_2023
jjovalle99
Dataset Card for "amazon_metadata_datathon_2023" More Information needed
hsultanbey/javascript
hsultanbey
Dataset Card for "javascript" More Information needed
garage-bAInd/Open-Platypus
garage-bAInd
Open-Platypus This dataset is focused on improving LLM logical reasoning skills and was used to train the Platypus2 models. It is comprised of the following datasets, which were filtered using keyword search and then Sentence Transformers to remove questions with a similarity above 80%: PRM800K (MIT), MATH (MIT), ScienceQA (Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International), SciBench (MIT), ReClor (non-commercial), TheoremQA… See the full description on the dataset page: https://huggingface.co/datasets/garage-bAInd/Open-Platypus.
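A toy sketch of the similarity-based filtering step described above, keeping a question only if its cosine similarity to every already-kept question stays below the stated 80% threshold; the embedding model here is an arbitrary stand-in, not necessarily the one the authors used:

    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in model
    questions = ["What is 2+2?", "Compute 2 + 2.", "Name a prime number."]
    emb = model.encode(questions, convert_to_tensor=True)
    sim = util.cos_sim(emb, emb)

    kept = []
    for i in range(len(questions)):
        # Keep question i only if it is under the 80% threshold vs. all kept ones.
        if all(sim[i][j] < 0.8 for j in kept):
            kept.append(i)
    print([questions[i] for i in kept])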
totally-not-an-llm/EverythingLM-data
totally-not-an-llm
EverythingLM Dataset EverythingLM is a diverse instruct dataset consisting of ~1k sets of system prompts, instructions, and corresponding responses. These sets were generated using principles from both evol-instruct and Orca. The dataset encompasses a wide array of topics and interactions. Categories: reasoning, creative writing, general knowledge, brainstorming, search query, coding, and basic instruct. We also leverage various system prompts for evol-instruct and for… See the full description on the dataset page: https://huggingface.co/datasets/totally-not-an-llm/EverythingLM-data.
globis-university/aozorabunko-chats
globis-university
Overview This dataset is of conversations extracted from Aozora Bunko (青空文庫), which collects public-domain books in Japan, using a simple heuristic approach. [For Japanese] A Japanese-language overview is available on Qiita: https://qiita.com/akeyhero/items/b53eae1c0bc4d54e321f Method First, lines surrounded by quotation mark pairs (「」) are extracted as utterances from the text field of globis-university/aozorabunko-clean. Then, consecutive utterances are collected and grouped together.… See the full description on the dataset page: https://huggingface.co/datasets/globis-university/aozorabunko-chats.
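A sketch of the heuristic as described: pull out 「」-quoted spans as utterances and group runs of adjacent quotes into one conversation (the exact rules of the original extraction are not spelled out on this card):

    import re

    text = "彼は言った。「おはよう」「いい天気だね」それから歩き出した。「どこへ行く？」"

    conversations, current, prev_end = [], [], None
    for m in re.finditer(r"「(.*?)」", text):
        if current and m.start() != prev_end:  # a gap of narration ends the group
            conversations.append(current)
            current = []
        current.append(m.group(1))
        prev_end = m.end()
    if current:
        conversations.append(current)
    print(conversations)  # [['おはよう', 'いい天気だね'], ['どこへ行く？']]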
rombodawg/code_instruct_alpaca_vicuna_wizardlm_56k_backup
rombodawg
Backup of code_instruct_alpaca_vicuna_wizardlm used in rombodawg/MegaCodeTraining112k. Link to the combined dataset below: https://huggingface.co/datasets/rombodawg/MegaCodeTraining112k
nampdn-ai/tiny-orca-textbooks
nampdn-ai
Textbook-like Dataset: A Comprehensive Resource for Text-Based Skills Development in Small Language Models This dataset is a collection of 147k synthetic textbooks designed to enhance the text-based skills of small language models. The curriculum is meticulously structured to progress from simple to complex tasks, ensuring a gradual and effective learning experience during pretraining or finetuning SLMs. The inspiration for this dataset comes from the technical report paper… See the full description on the dataset page: https://huggingface.co/datasets/nampdn-ai/tiny-orca-textbooks.
TitanMLData/arxiv_qa
TitanMLData
Arxiv Paper Generative Question Answering Dataset Summary This dataset is made using ChatGPT (text-davinci-003) to generate Question/Answer pairs from Arxiv papers from this dataset Data Fields TextID: references the datarow (paper) in the arxiv summarizer dataset Question: question based on the text Response: answer Text: Full text with the paper as 'context:' and the question appended as 'question:'. Used for generative question answering… See the full description on the dataset page: https://huggingface.co/datasets/TitanMLData/arxiv_qa.
haosulab/ManiSkill2
haosulab
ManiSkill2 Data Update: ManiSkill 3 has been released (https://github.com/haosulab/ManiSkill/); it uses different datasets than ManiSkill2, so the data here is not expected to transfer over. ManiSkill2 is a unified benchmark for learning generalizable robotic manipulation skills powered by SAPIEN. It features 20 out-of-box task families with 2000+ diverse object models and 4M+ demonstration frames. Moreover, it empowers fast visual-input learning algorithms so that a… See the full description on the dataset page: https://huggingface.co/datasets/haosulab/ManiSkill2.
Azure99/blossom-math-v1
Azure99
BLOSSOM MATH V1 Introduction Blossom Math V3 has been released! 🤗 Blossom Math V1 is a Chinese math dialogue dataset derived from Math23K, suitable for finetuning on math problems. It takes all questions from Math23K, generates results with gpt-3.5-turbo-0613, and validates the generated results against the answers in the original dataset, filtering out wrong answers, which largely guarantees the accuracy of the questions and answers. This release contains 50% of the full data, 10K records. Language: Chinese. Dataset structure: each record is a complete problem and answer with four fields. id: string, the question id in Math23K. input: string, the question. output: string, the answer generated by gpt-3.5-turbo-0613. answer: string, the correct answer. Dataset limitations… See the full description on the dataset page: https://huggingface.co/datasets/Azure99/blossom-math-v1.
Universal-NER/Pile-NER-type
Universal-NER
Intro Pile-NER-type is a set of GPT-generated data for named entity recognition using the type-based data construction prompt. It was collected by prompting gpt-3.5-turbo-0301 and augmented by negative sampling. Check our project page for more information. License Attribution-NonCommercial 4.0 International
Universal-NER/Pile-NER-definition
Universal-NER
Intro Pile-NER-definition is a set of GPT-generated data for named entity recognition using the definition-based data construction prompt. It was collected by prompting gpt-3.5-turbo-0301 and augmented by negative sampling. Check our project page for more information. License Attribution-NonCommercial 4.0 International
glaiveai/glaive-function-calling
glaiveai
This dataset consists of 52k samples generated through Glaive for the task of function calling, in the following format:

    SYSTEM: You are an helpful assistant who has access to the following functions to help the user, you can use the functions if needed-
    { JSON function definiton }
    USER: user message
    ASSISTANT: assistant message

Function call invocations are formatted as:

    ASSISTANT: <functioncall> {json function call}

The response to the function call is formatted as:

    FUNCTION RESPONSE: {json… See the full description on the dataset page: https://huggingface.co/datasets/glaiveai/glaive-function-calling.
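A small sketch of extracting the function-call payload from an assistant turn in this format; the sample turn below is invented for illustration:

    import json
    import re

    # Illustrative assistant turn in the format described above.
    turn = 'ASSISTANT: <functioncall> {"name": "get_weather", "arguments": {"city": "Paris"}}'

    match = re.search(r"<functioncall>\s*(\{.*\})", turn)
    if match:
        call = json.loads(match.group(1))
        print(call["name"], call["arguments"])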
VatsaDev/pixel-art
VatsaDev
Dataset Card for "pixel-art" More Information needed
lionelchg/dolly_creative_writing
lionelchg
Dataset Card for "dolly_creative_writing" More Information needed
nojiyoon/pagoda-text-and-image-dataset
nojiyoon
Dataset Card for "pagoda-text-and-image-dataset" More Information needed
jojo0217/korean_rlhf_dataset
jojo0217
A dataset built for SFT training of a Korean LLM as part of a Sungkyunkwan University industry-academia cooperation project. 2023-09-25: data mentioning Open Assistant was deleted from the Open Assistant subset, because answers would otherwise identify themselves as Open Assistant; rows in the Stanford translated data where translation errors added text like "no input provided" were deleted; and rows where GPT translation errors produced tokens such as <unk> were deleted. For naturalness, the stanford alpaca data and oig_chip2 were newly re-preprocessed using ChatGPT 3.5 turbo 16k. A detailed explanation is available at https://github.com/JoJo0217/rlhf_korean_dataset/tree/main, and the data composition is as follows. Data composition (type, count, URL): koalpaca v1.1 21155… See the full description on the dataset page: https://huggingface.co/datasets/jojo0217/korean_rlhf_dataset.
Multilingual-Perspectivist-NLU/EPIC
Multilingual-Perspectivist-NLU
Dataset Card for EPICorpus Dataset Summary EPIC (English Perspectivist Irony Corpus) is a disaggregated English corpus for irony detection, containing 3,000 pairs of short conversations (posts-replies) from Twitter and Reddit, along with the demographic information of each annotator (age, nationality, gender, and so on). Supported Tasks and Leaderboards Irony classification task using soft labels (i.e., distribution of annotations) or hard labels… See the full description on the dataset page: https://huggingface.co/datasets/Multilingual-Perspectivist-NLU/EPIC.
imoxto/prompt_injection_cleaned_dataset-v2
imoxto
Dataset Card for "prompt_injection_cleaned_dataset-v2" More Information needed
HPC-GPT/HPC
HPC-GPT
This dataset includes two tasks for the high-performance computing (HPC) domain. Task 1 is managing AI models and datasets, which includes programming language processing (PLP) and MLPerf. Task 2 is data race detection, covering C/C++ and Fortran.
ml20max/nature-outdoor
ml20max
Dataset Card for "nature-outdoor" More Information needed
MaartenGr/arxiv_nlp
MaartenGr
arXiv Abstracts Abstracts for the cs.CL category of ArXiv between 1991 and 2024. This dataset was created as an instructional tool for the Clustering and Topic Modeling chapter in the upcoming "Hands-On Large Language Models" book. The original dataset was retrieved here. This subset will be updated towards the release of the book to make sure it captures relatively recent articles in the domain.
yentinglin/TaiwanChat
yentinglin
Citation If you find Taiwan LLM useful in your work, please cite it with:

    @misc{lin2023taiwan,
      title={Taiwan LLM: Bridging the Linguistic Divide with a Culturally Aligned Language Model},
      author={Yen-Ting Lin and Yun-Nung Chen},
      year={2023},
      eprint={2311.17487},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
    }
owkin/medical_knowledge_from_extracts
owkin
This dataset is used to train LLMs for medical knowledge extraction tasks.
nampdn-ai/tiny-textbooks
nampdn-ai
Textbook-like Dataset: A High-Quality Resource for Small Language Models The idea is inspired by the "Textbooks Are All You Need II: phi-1.5 technical report" paper. The source texts in this dataset were gathered and carefully selected from the best of the falcon-refinedweb and minipile datasets to ensure diversity and quality while remaining tiny in size. The dataset was synthesized using 4x3090 Ti cards over a period of 500 hours, thanks to the Nous-Hermes-Llama2-13b finetuned model.… See the full description on the dataset page: https://huggingface.co/datasets/nampdn-ai/tiny-textbooks.
DynamicSuperb/NoiseDetection_VCTK_MUSAN-Music
DynamicSuperb
Dataset Card for "NoiseDetectionmusic_VCTKMusan" More Information needed
ds4sd/MolGrapher-Synthetic-300K
ds4sd
MolGrapher-Synthetic-300K MolGrapher-Synthetic-300K is the synthetic dataset introduced in MolGrapher: Graph-based Visual Recognition of Chemical Structures. Our dataset is created using molecule SMILES retrieved from the database PubChem. Training images are then generated from SMILES using the molecule drawing library RDKit. The synthetic training set is augmented at multiple levels: Molecule level: Molecules are randomly transformed by: (1) displaying explicit hydrogens, (2)… See the full description on the dataset page: https://huggingface.co/datasets/ds4sd/MolGrapher-Synthetic-300K.
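The image-generation step the card describes (SMILES rendered with RDKit) looks roughly like this sketch, with caffeine as an arbitrary example and all augmentations omitted:

    from rdkit import Chem
    from rdkit.Chem import Draw

    # Parse a SMILES string into a molecule and render it as a PNG.
    mol = Chem.MolFromSmiles("CN1C=NC2=C1C(=O)N(C)C(=O)N2C")  # caffeine, arbitrary example
    Draw.MolToImage(mol, size=(300, 300)).save("molecule.png")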