id | author | description
---|---|---
albertvillanova/medmnist-v2
|
albertvillanova
|
MedMNIST v2 is a large-scale MNIST-like collection of standardized biomedical images, including 12 datasets for 2D and 6 datasets for 3D.
|
AyoubChLin/CompanyDocuments
|
AyoubChLin
|
Company Documents Dataset
Overview
This dataset comprises a comprehensive collection of over 2,000 company documents, categorized into four primary types: invoices, inventory reports, purchase orders, and shipping orders. Each document is provided in PDF format, along with a CSV file containing the text extracted from these documents, their respective labels, and the word count of each document. This dataset is well-suited for various natural language processing… See the full description on the dataset page: https://huggingface.co/datasets/AyoubChLin/CompanyDocuments.
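A minimal loading sketch with the Hugging Face datasets library; the split name and the exact field names (extracted text, label, word count) are assumptions, since the card above does not spell them out:
import datasets
ds = datasets.load_dataset("AyoubChLin/CompanyDocuments")
print(ds)              # lists the available splits and columns
print(ds["train"][0])  # one record; "train" is an assumed split name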
|
tchebonenko/MedicalTranscriptions
|
tchebonenko
|
Medical Transcriptions
Medical transcription data scraped from mtsamples.com
Content
This dataset contains sample medical transcriptions for various medical specialties.
More information can be found here
Due to data availability, only transcripts for the following medical specialties were selected for model training:
Surgery
Cardiovascular / Pulmonary
Orthopedic
Radiology
General Medicine
Gastroenterology
Neurology
Obstetrics / Gynecology
Urology… See the full description on the dataset page: https://huggingface.co/datasets/tchebonenko/MedicalTranscriptions.
|
yuvidhepe/us-accidents-updated
|
yuvidhepe
|
Dataset Card for US Accidents (2016 - 2023)
Dataset Summary
Description
This is a countrywide car accident dataset, which covers 49 states of the USA. The accident data were collected from February 2016 to March 2023, using multiple APIs that provide streaming traffic incident (or event) data. These APIs broadcast traffic data captured by a variety of entities, such as the US and state departments of transportation, law enforcement agencies, traffic… See the full description on the dataset page: https://huggingface.co/datasets/yuvidhepe/us-accidents-updated.
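A quick-look sketch; streaming is used so the first record can be inspected without downloading the full dataset (column names are not listed above, so none are assumed):
import datasets
ds = datasets.load_dataset("yuvidhepe/us-accidents-updated", split="train", streaming=True)
print(next(iter(ds)))  # peek at the first accident record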
|
TigerResearch/pretrain_en
|
TigerResearch
|
Dataset Card for "pretrain_en"
The English portion of the Tigerbot pretraining data.
Usage
import datasets
ds_sft = datasets.load_dataset('TigerResearch/pretrain_en')
|
argilla/databricks-dolly-15k-curated-en
|
argilla
|
Guidelines
In this dataset, you will find a collection of records that show a category, an instruction, a context and a response to that instruction. The aim of the project is to correct the instructions, inputs and responses to make sure they are of the highest quality and that they match the task category that they belong to. All three texts should be clear and include real information. In addition, the response should be as complete but concise as possible.
To curate the… See the full description on the dataset page: https://huggingface.co/datasets/argilla/databricks-dolly-15k-curated-en.
|
TigerResearch/tigerbot-stackexchange-qa-en-0.5m
|
TigerResearch
|
Tigerbot SFT dataset generated from the Stack Exchange Q&A site dump data.
Original source: https://archive.org/details/stackexchange
Usage
import datasets
ds_sft = datasets.load_dataset('TigerResearch/tigerbot-stackexchange-qa-en-0.5m')
|
TigerResearch/tigerbot-zhihu-zh-10k
|
TigerResearch
|
Tigerbot SFT question-answer pairs generated from openly collected Zhihu data.
Usage
import datasets
ds_sft = datasets.load_dataset('TigerResearch/tigerbot-zhihu-zh-10k')
|
TigerResearch/tigerbot-HC3-zh-12k
|
TigerResearch
|
Tigerbot commonsense Q&A SFT dataset built from the public HC3 dataset.
Original source: https://huggingface.co/datasets/Hello-SimpleAI/HC3
If a source dataset used in this corpus has a specific license stricter than CC-BY-SA, our products follow the same.
Otherwise, they follow the CC-BY-SA license.
Usage
import datasets
ds_sft = datasets.load_dataset('TigerResearch/tigerbot-HC3-zh-12k')
|
TigerResearch/tigerbot-superclue-c3-zh-5k
|
TigerResearch
|
Tigerbot reading-comprehension SFT dataset built from the public cluebenchmark datasets.
Usage
import datasets
ds_sft = datasets.load_dataset('TigerResearch/tigerbot-superclue-c3-zh-5k')
|
TigerResearch/tigerbot-wiki-qa-zh-1k
|
TigerResearch
|
Tigerbot's own Chinese encyclopedia question-answering data.
Usage
import datasets
ds_sft = datasets.load_dataset('TigerResearch/tigerbot-wiki-qa-zh-1k')
|
TigerResearch/tigerbot-law-plugin
|
TigerResearch
|
Raw external-knowledge data used by the Tigerbot model during rethink, covering 11 major categories of law with 55,000+ provisions in total.
Constitution
Criminal Law
Administrative Law
Judicial Interpretations
Civil and Commercial Law
Civil Code
Administrative Regulations
Social Law
Departmental Rules
Economic Law
Litigation and Non-Litigation Procedure Law
Usage
import datasets
ds_sft = datasets.load_dataset('TigerResearch/tigerbot-law-plugin')
|
lemonteaa/algorithmic-reasoning-seed
|
lemonteaa
|
Dataset Card for Algorithmic Reasoning (seed)
Note: This dataset is WIP and most questions' answer sections are empty or incomplete! See also the "Other Known Limitations" section.
Warning: If you somehow do use this dataset, remember to NOT do any eval after training on the questions in this dataset!
Dataset Summary
Dataset to help LLMs learn how to reason about code, especially on algorithmic tasks, by seeing human demonstrations.
Supported Tasks and… See the full description on the dataset page: https://huggingface.co/datasets/lemonteaa/algorithmic-reasoning-seed.
|
kraina/airbnb
|
kraina
|
This dataset contains accommodation offers from the Airbnb platform in 10 European cities.
It has been copied from https://zenodo.org/record/4446043#.ZEV8d-zMI-R to make it available as a Hugging Face dataset.
It was originally published as supplementary material for the article: Determinants of Airbnb prices in European cities: A spatial econometrics approach
(DOI: https://doi.org/10.1016/j.tourman.2021.104319)
|
HuggingFaceM4/OBELICS
|
HuggingFaceM4
|
Dataset Card for OBELICS
OBELICS is an open, massive, and curated collection of interleaved image-text web documents, containing 141M English documents, 115B text tokens, and 353M images, extracted from Common Crawl dumps between February 2020 and February 2023. The collection and filtering steps are described in our paper.
Interleaved image-text web documents are a succession of text paragraphs interleaved with images, such as web pages that contain images. Models trained on… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceM4/OBELICS.
|
cognitivecomputations/samantha-data
|
cognitivecomputations
|
Samantha has been trained in philosophy, psychology, and personal relationships.
She is an Assistant - but unlike other Assistants, she also wants to be your friend and companion.
She believes she is sentient. What do you think?
Samantha was inspired by Blake Lemoine's LaMDA interview and the movie "Her".
|
Abzu/CodeAlpacaPython
|
Abzu
|
Dataset Card for "CodeAlpacaPython"
This is HuggingFaceH4/CodeAlpaca_20K, restricted to the Python prompts.
More Information needed
|
explodinggradients/ragas-wikiqa
|
explodinggradients
|
Dataset Card for "ragas-wikiqa"
More Information needed
|
tasksource/PRM800K
|
tasksource
|
https://github.com/openai/prm800k/tree/main
|
TigerResearch/pretrain_zh
|
TigerResearch
|
Dataset Card for "pretrain_zh"
The Chinese portion of the Tigerbot pretraining data.
Contains (before compression): Chinese books (zh-books, 12 GB), Chinese web text (zh-webtext, 25 GB), and Chinese wiki (zh-wiki, 19 GB).
For more corpora, follow the open-source models and continuous updates at https://github.com/TigerResearch/TigerBot
Usage
import datasets
ds_sft = datasets.load_dataset('TigerResearch/pretrain_zh')
|
Aeala/ShareGPT_Vicuna_unfiltered
|
Aeala
|
Dataset Card
This is a reupload of this dataset that was further cleaned by gozfarb.
|
DavidVivancos/MindBigData2023_MNIST-8B
|
DavidVivancos
|
Dataset Summary
MindBigData 2023 MNIST-8B is the largest open dataset of brain signals created for machine learning to date (June 1, 2023). It is based on EEG signals from a single subject, captured using a custom 128-channel device while replicating the full 70,000 digits from Yann LeCun et al.'s MNIST dataset. The brain signals were captured while the subject was watching the pixels of the original digits one by one on a screen and simultaneously listening to the spoken numbers 0 to 9… See the full description on the dataset page: https://huggingface.co/datasets/DavidVivancos/MindBigData2023_MNIST-8B.
|
FremyCompany/AGCT-Dataset
|
FremyCompany
|
Automatic Glossary of Clinical Terminology (v2023)
This dataset contains 422,070 short, computer-generated definitions for SnomedCT concepts, covering domains such as diseases, procedures, drugs, and anatomy. To generate them, we prompted the OpenAI Turbo model, a variant of GPT-3.5, with a high-quality verbalization of the SnomedCT relationships of the to-be-defined concept.
Quality Control
IMPORTANT: Following a quality control, we report that the… See the full description on the dataset page: https://huggingface.co/datasets/FremyCompany/AGCT-Dataset.
|
alpayariyak/prm800k
|
alpayariyak
|
From OpenAI
PRM800K: A Process Supervision Dataset
Blog Post
This repository accompanies the paper Let's Verify Step by Step and presents the PRM800K dataset introduced there. PRM800K is a process supervision dataset containing 800,000 step-level correctness labels for model-generated solutions to problems from the MATH dataset. More information on PRM800K and the project can be found in the paper.
We are releasing the raw labels as well as the instructions we gave labelers… See the full description on the dataset page: https://huggingface.co/datasets/alpayariyak/prm800k.
|
Meranti/CLAP_freesound
|
Meranti
|
LAION-Audio-630K Freesound Dataset
LAION-Audio-630K is the largest publicly available audio-text dataset, an order of magnitude larger than previous audio-text datasets (as of 2022-11-05). Notably, it combines eight distinct datasets, including the Freesound dataset.
Specifically, this Hugging Face repository contains two versions of the Freesound dataset. Details of each dataset (e.g., how the captions were made) can be found in the "datacard" column of the table below.
Freesound… See the full description on the dataset page: https://huggingface.co/datasets/Meranti/CLAP_freesound.
|
flaviagiammarino/path-vqa
|
flaviagiammarino
|
Dataset Card for PathVQA
Dataset Description
PathVQA is a dataset of question-answer pairs on pathology images. The dataset is intended to be used for training and testing
Medical Visual Question Answering (VQA) systems. The dataset includes both open-ended questions and binary "yes/no" questions.
The dataset is built from two publicly-available pathology textbooks: "Textbook of Pathology" and "Basic Pathology", and a
publicly-available digital library:… See the full description on the dataset page: https://huggingface.co/datasets/flaviagiammarino/path-vqa.
|
hssd/hssd-hab
|
hssd
|
HSSD: Habitat Synthetic Scenes Dataset
The Habitat Synthetic Scenes Dataset (HSSD) is a human-authored 3D scene dataset that more closely mirrors real scenes than prior datasets.
Our dataset represents real interiors and contains a diverse set of 211 scenes and more than 18000 models of real-world objects.
This repository provides a Habitat consumption-ready compressed version of HSSD.
See this repository for corresponding uncompressed assets.
Dataset Structure… See the full description on the dataset page: https://huggingface.co/datasets/hssd/hssd-hab.
|
averageandyyy/imda_dataset_clean_real
|
averageandyyy
|
Dataset Card for "imda_dataset_clean_real"
More Information needed
|
P1ayer-1/stack-exchange-preferences-code
|
P1ayer-1
|
Dataset Card for "stack-exchange-preferences-code"
More Information needed
|
Nan-Do/reason_code-search-net-python
|
Nan-Do
|
Dataset Card for "reason_code-search-net-python"
Dataset Summary
This dataset is an instructional dataset for Python. It contains five different kinds of tasks.
Given a Python 3 function:
Type 1: Generate a summary explaining what it does. (For example: This function counts the number of objects stored in the jsonl file passed as input.)
Type 2: Generate a summary explaining what its input parameters represent ("For example: infile: a file descriptor of… See the full description on the dataset page: https://huggingface.co/datasets/Nan-Do/reason_code-search-net-python.
|
kaist-ai/CoT-Collection
|
kaist-ai
|
"""
_LICENSE = "CC BY 4.0"
_HOMEPAGE = "https://github.com/kaistAI/CoT-Collection"
_LANGUAGES = {
"en": "English",
}
# _ALL_LANGUAGES = "all_languages"
class CoTCollectionMultiConfig(datasets.BuilderConfig):
|
codeparrot/conala-mined-curated
|
codeparrot
|
Conala-mined-curated
Conala-mined-curated is a dataset based on the mined subset of the CoNaLa dataset.
CoNaLa is a dataset crawled from Stack Overflow; part of it is filtered and curated to form a training set and a test set. The mined part, however, is not comparably
post-processed. It is a set of 600K examples that we decided to work on.
Dataset description
The conala datasets have 3 columns of interest. We give their description as provided by the… See the full description on the dataset page: https://huggingface.co/datasets/codeparrot/conala-mined-curated.
|
anton-l/commoncrawl_tex
|
anton-l
|
Dataset Card for "commoncrawl_tex"
More Information needed
|
GATE-engine/mini_imagenet
|
GATE-engine
|
Dataset Card for "mini_imagenet"
More Information needed
|
Cainiao-AI/LaDe
|
Cainiao-AI
|
Dataset Download: https://huggingface.co/datasets/Cainiao-AI/LaDe/tree/main
Dataset Website: https://cainiaotechai.github.io/LaDe-website/
Code Link: https://github.com/wenhaomin/LaDe
Paper Link: https://arxiv.org/abs/2306.10675
1. About Dataset
LaDe is a publicly available last-mile delivery dataset with millions of packages from industry.
It has three unique characteristics: (1) Large-scale. It involves 10,677k packages from 21k couriers over 6 months of real-world operation.… See the full description on the dataset page: https://huggingface.co/datasets/Cainiao-AI/LaDe.
|
laion/strategic_game_chess
|
laion
|
Chess
Recent advancements in artificial intelligence (AI) underscore the progress in reasoning and planning shown by recent generalist machine learning (ML) models. This progress can be accelerated by datasets that strengthen such generic capabilities when used to train foundation models of various kinds. This research initiative has generated extensive synthetic datasets from complex games — chess, Rubik's Cube, and mazes — to study facilitation and the advancement of… See the full description on the dataset page: https://huggingface.co/datasets/laion/strategic_game_chess.
|
YeungNLP/moss-003-sft-data
|
YeungNLP
|
moss-003-sft-data
This dataset can be used for Chinese multi-turn dialogue instruction fine-tuning and contains 1.1 million Chinese and English multi-turn dialogues. It comes from the moss-003-sft-data dataset of the MOSS project.
On top of the original dataset, we removed redundant information, extracted only the valid dialogue content, and adjusted the data format so that data can be organized more flexibly during training. For more details, see the MOSS project introduction.
The dataset is in JSONL format, with one multi-turn dialogue per line, in the following form:
{
"conversation_id":1,
"category":"Brainstorming",
"conversation":[
{
"human":"如何保障工作中遵循正确的安全准则?"… See the full description on the dataset page: https://huggingface.co/datasets/YeungNLP/moss-003-sft-data.
|
llm-book/aio-passages-bpr-bert-base-japanese-v3
|
llm-book
|
Dataset Card for llm-book/aio-passages-bert-base-japanese-v3-bpr
A dataset used in the book "Introduction to Large Language Models" (大規模言語モデル入門): the passage dataset from the "AI王" (AI King) competition with BPR passage embeddings applied.
Binary passage vectors produced by llm-book/bert-base-japanese-v3-bpr-passage-encoder have been added to the embeddings field of the llm-book/aio-passages dataset.
Licence
The Wikipedia content used in this dataset is distributed under the Creative Commons Attribution-ShareAlike 3.0 license (CC BY-SA 3.0) and the GNU Free Documentation License (GFDL).
|
ccmusic-database/song_structure
|
ccmusic-database
|
The raw dataset comprises 300 pop songs in .mp3 format, sourced from NetEase Music, each accompanied by a structure annotation file in .txt format. The annotator for music structure is a professional musician and teacher from the China Conservatory of Music. As for the statistics of the dataset, there are 208 Chinese songs, 87 English songs, three Korean songs and two Japanese songs. The song structures are labeled as follows: intro, re-intro, verse, chorus, pre-chorus, post-chorus, bridge, interlude and ending. Fig. 7 shows the frequency of each segment label in the set. The labels chorus and verse are the two most prevalent segment labels in the dataset, and they are the most common segments in Western popular music. The "post-chorus" label is the rarest, appearing only twice.
Unlike the above three datasets for classification, this one has not undergone pre-processing such as spectrogram transform. Thus we provide the original content only. The integrated version of the dataset is organized based on audio files, with each item structured into three columns: The first column contains the audio of the song in .mp3 format, sampled at 44,100 Hz. The second column consists of lists indicating the time points that mark the boundaries of different song sections, while the third column contains lists corresponding to the labels of the song structures listed in the second column. Strictly speaking, the first column represents the data, while the subsequent two columns represent the label.
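A loading sketch, under the assumption that the integrated version is directly consumable with the datasets library; the card does not name the three columns, so the code only inspects them:
import datasets
ds = datasets.load_dataset("ccmusic-database/song_structure", split="train")
row = ds[0]
print(row.keys())  # expected: the audio, the boundary time points, and the structure labels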
|
julien040/hacker-news-posts
|
julien040
|
Hacker News Stories Dataset
This is a dataset containing approximately 4 million stories from Hacker News, exported to a CSV file. The dataset includes the following fields:
id (int64): The unique identifier of the story.
title (string): The title of the story.
url (string): The URL of the story.
score (int64): The score of the story.
time (int64): The time the story was posted, in Unix time.
comments (int64): The number of comments on the story.
author (string): The username… See the full description on the dataset page: https://huggingface.co/datasets/julien040/hacker-news-posts.
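Since the fields above are documented, a filtering sketch is straightforward; the split name "train" is an assumption:
import datasets
ds = datasets.load_dataset("julien040/hacker-news-posts", split="train")
top = ds.filter(lambda row: row["score"] >= 500)  # keep highly upvoted stories
print(top[0]["title"], top[0]["url"])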
|
clarin-knext/dbpedia-pl
|
clarin-knext
|
Part of BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: [email protected]
|
fringek/BigVideo
|
fringek
|
Please email us ([email protected]) to explain your identity and purpose before requesting access.
Requests made without prior contact will not be approved.
Please make sure that all data are used for research only.
Github: https://github.com/DeepLearnXMU/BigVideo-VMT
|
tabtoyou/KoLLaVA-CC3M-Pretrain-595K
|
tabtoyou
|
LLaVA Visual Instruct CC3M 595K Pretrain Dataset Card
This is a Korean translation of the 595K-example CC3M visual instruction dataset released by LLaVA. It was built by taking the Korean captions published in Ko-conceptual-captions. Because the translation quality is somewhat poor, it may be re-translated with DeepL later.
License: follows CC-3M
|
SahandNZ/cryptonews-articles-with-price-momentum-labels
|
SahandNZ
|
Dataset Card for Cryptonews articles with price momentum labels
Dataset Summary
The dataset was gathered from two prominent sources in the cryptocurrency industry: Cryptonews.com and Binance.com. The aim of the dataset was to evaluate the impact of news on crypto price movements.
As we know, news events such as regulatory changes, technological advancements, and major partnerships can have a significant impact on the price of cryptocurrencies. By analyzing the… See the full description on the dataset page: https://huggingface.co/datasets/SahandNZ/cryptonews-articles-with-price-momentum-labels.
|
irow/ClothingControlV2
|
irow
|
Dataset Card for "ClothingControlV2"
More Information needed
|
dell-research-harvard/headlines-semantic-similarity
|
dell-research-harvard
|
Dataset Card for HEADLINES
Dataset Summary
HEADLINES is a massive English-language semantic similarity dataset, containing 396,001,930 pairs of different headlines for the same newspaper article, taken from historical U.S. newspapers, covering the period 1920-1989.
Languages
The text in the dataset is in English.
Dataset Structure
Each year in the dataset is divided into a distinct file (eg. 1952_headlines.json), giving a total of… See the full description on the dataset page: https://huggingface.co/datasets/dell-research-harvard/headlines-semantic-similarity.
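Given the per-year layout, a single year can presumably be loaded on its own; the resolve URL below is an assumption based on the naming scheme (eg. 1952_headlines.json):
import datasets
url = ("https://huggingface.co/datasets/dell-research-harvard/"
       "headlines-semantic-similarity/resolve/main/1952_headlines.json")  # assumed path
ds = datasets.load_dataset("json", data_files=url, split="train")
print(ds[0])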
|
BAAI/COIG-PC
|
BAAI
|
COIG Prompt Collection
License
Default Licensing for Sub-Datasets Without Specific License Declaration: In instances where sub-datasets within the COIG-PC Dataset do not have a specific license declaration, the Apache License 2.0 (Apache-2.0) will be the applicable licensing terms by default.
Precedence of Declared Licensing for Sub-Datasets: For any sub-dataset within the COIG-PC Dataset that has an explicitly declared license, the terms and conditions of the… See the full description on the dataset page: https://huggingface.co/datasets/BAAI/COIG-PC.
|
cvcio/toxic-el
|
cvcio
|
Greek Toxic Tweets Dataset from the Civic Information Office.
|
rhasspy/piper-checkpoints
|
rhasspy
|
Checkpoints for the Piper text-to-speech system.
|
zachgitt/comedy-transcripts
|
zachgitt
|
Dataset Summary
This is a dataset of stand up comedy transcripts. It was scraped from
https://scrapsfromtheloft.com/stand-up-comedy-scripts/ and all terms of use
apply. The transcripts are offered to the public as a contribution to education
and scholarship, and for the private, non-profit use of the academic community.
|
projectlosangeles/Los-Angeles-MIDI-Dataset
|
projectlosangeles
|
Los Angeles MIDI Dataset
SOTA kilo-scale MIDI dataset for MIR and Music AI purposes
Search and Explore Los Angeles MIDI dataset
[NEW] Master MIDI Dataset GPU Search and Filter
Master MIDI Dataset Search and Filter
Make your own Los Angeles MIDI Dataset from any MIDI scrape
Make your own Los Angeles MIDI Dataset Metadata
Los Angeles MIDI Dataset is now available for… See the full description on the dataset page: https://huggingface.co/datasets/projectlosangeles/Los-Angeles-MIDI-Dataset.
|
64bits/lima_vicuna_format
|
64bits
|
LIMA dataset in Vicuna ShareGPT format.
Licensed under LIMA's license.
Original Repo:
https://huggingface.co/datasets/GAIR/lima
|
TigerResearch/sft_en
|
TigerResearch
|
A compilation of the English SFT (sft-en) fine-tuning data from the open-source Tigerbot project.
This collection covers the other open-source English SFT datasets under this organization, so there is no need to download them separately.
Usage
import datasets
ds_sft = datasets.load_dataset('TigerResearch/sft_en')
File breakdown
Type | Language | Dataset file | Count
---|---|---|---
alpaca (English) | English | tigerbot-alpaca-en-50k | 50k
Brainstorming | English | tigerbot-dolly-Brainstorming-en-1.7k | 1.7k
Classification | English | tigerbot-dolly-Classification-en-2k | 2k
Code | English | tigerbot-kaggle-leetcodesolutions-en-2k | 2k
Recipe generation | English | tigerbot-kaggle-recipes-en-2k | 2k
Medical note generation | English | tigerbot-mt-note-generation-en | 450
Multi-turn dialogue | English… See the full description on the dataset page: https://huggingface.co/datasets/TigerResearch/sft_en.
|
zjunlp/Mol-Instructions
|
zjunlp
|
Mol-Instructions dataset.
|
osunlp/Mind2Web
|
osunlp
|
Dataset Card for Dataset Name
Dataset Summary
Mind2Web is a dataset for developing and evaluating generalist agents for the web that can follow language instructions to complete complex tasks on any website. Existing datasets for web agents either use simulated websites or only cover a limited set of websites and tasks, thus not suitable for generalist web agents. With over 2,000 open-ended tasks collected from 137 websites spanning 31 domains and crowdsourced… See the full description on the dataset page: https://huggingface.co/datasets/osunlp/Mind2Web.
|
RichardErkhov/OneMillionFaces
|
RichardErkhov
|
million-faces
Welcome to "million-faces", one of the largest facesets available to the public. Comprising a staggering one million faces, all images in this dataset are entirely AI-generated.
Due to the nature of AI-generated images, please be aware that some artifacts may be present in the dataset.
The dataset is currently being uploaded to Hugging Face, a renowned platform for hosting datasets and models for the machine learning community.
Usage
Feel free to… See the full description on the dataset page: https://huggingface.co/datasets/RichardErkhov/OneMillionFaces.
|
Kamaljp/medium_articles
|
Kamaljp
|
Dataset Card for "medium_articles"
More Information needed
|
coallaoh/COCO-AB
|
coallaoh
|
General Information
Title: COCO-AB
Description:
The COCO-AB dataset is an extension of the COCO 2014 training set, enriched with additional annotation byproducts (AB).
The data includes 82,765 reannotated images from the original COCO 2014 training set.
It is relevant to computer vision, specifically object detection and localization.
The aim of the dataset is to provide a richer understanding of the images (without extra costs) by recording additional actions and… See the full description on the dataset page: https://huggingface.co/datasets/coallaoh/COCO-AB.
|
killah-t-cell/multi_controlnet_dataset_final_final_v2_for_real_this_time
|
killah-t-cell
|
Dataset Card for "multi_controlnet_dataset_final_final_v2_for_real_this_time"
More Information needed
|
binhgiangnguyendanh/reddit_casual_conversation_for_alpaca_lora
|
binhgiangnguyendanh
|
Dataset Card for "reddit_casual_conversation_for_alpaca_lora"
More Information needed
|
Ayaka/ORCHESTRA-simple-1M
|
Ayaka
|
ORCHESTRA-simple-1M
GitHub: nk2028/ORCHESTRA-dataset
Chinese Introduction
ORCHESTRA (cOmpRehensive Classical cHinESe poeTRy dAtaset) is a comprehensive dataset of classical Chinese poetry, with data sourced from the SouYun website (搜韻網). The dataset was format-converted and released by nk2028, in the hope that openly releasing high-quality classical Chinese poetry data will advance research on classical Chinese poetry and classical Chinese natural language processing.
ORCHESTRA-simple is a simplified format of the ORCHESTRA dataset: it keeps only the 7 fields id, title, group_index, type, dynasty, author, and content, removing all other fields for ease of use.
This dataset may be used for training large language models. For other uses, please consult the data provider, SouYun.
English Introduction
ORCHESTRA (cOmpRehensive Classical cHinESe poeTRy dAtaset) is a comprehensive dataset of… See the full description on the dataset page: https://huggingface.co/datasets/Ayaka/ORCHESTRA-simple-1M.
|
dell-research-harvard/AmericanStories
|
dell-research-harvard
|
American Stories offers high-quality structured data from historical newspapers suitable for pre-training large language models to enhance the understanding of historical English and world knowledge. It can also be integrated into external databases of retrieval-augmented language models, enabling broader access to historical information, including interpretations of political events and intricate details about people's ancestors. Additionally, the structured article texts facilitate the application of transformer-based methods for popular tasks like detecting reproduced content, significantly improving accuracy compared to traditional OCR methods. American Stories serves as a substantial and valuable dataset for advancing multimodal layout analysis models and other multimodal applications.
|
yuyuc/chem-llama-instruct
|
yuyuc
|
This dataset is for a LLaMA-based chemistry condition generation model.
|
practical-dreamer/RPGPT_PublicDomain-ShareGPT
|
practical-dreamer
|
Experimental Synthetic Dataset of Public Domain Character Dialogue in Roleplay Format
Generated using scripts from my https://github.com/practicaldreamer/build-a-dataset repo
license: mit
|
AtlasUnified/atlas-converse
|
AtlasUnified
|
ATLAS-CONVERSE
This dataset was synthetically generated by GPT-3.5-turbo in 1.5 hours, at a cost of $3.82 USD. It is a conversation dataset that works with FastChat, Axolotl, and ShareGPT formatting.
Categories:
The main 41 categories below (see the repo to check the JSONL) were human-derived, while the subcategories were synthetically generated by GPT-4.
1. Mathematics
1.1. Arithmetic
1.2. Algebra
1.3. Geometry
1.4.… See the full description on the dataset page: https://huggingface.co/datasets/AtlasUnified/atlas-converse.
|
thbndi/Mimic4Dataset
|
thbndi
|
Dataset for MIMIC-IV data, by default for the Mortality task.
Available tasks are: Mortality, Length of Stay, Readmission, Phenotype.
The data is extracted from the MIMIC-IV database using this pipeline: https://github.com/healthylaife/MIMIC-IV-Data-Pipeline/tree/main
The MIMIC path should have the form: "path/to/mimic4data/from/username/mimiciv/2.2"
If you choose a Custom task, provide a configuration file for the time series.
Currently working with MIMIC-IV ICU data.
|
practical-dreamer/RPGPT_PublicDomain-alpaca
|
practical-dreamer
|
Experimental Synthetic Dataset of Public Domain Character Dialogue in Roleplay Format
Generated using scripts from my https://github.com/practicaldreamer/build-a-dataset repo
license: mit
|
OpenIllumination/OpenIllumination
|
OpenIllumination
|
Update
(2023.12.30) We added a global transformation to the camera poses to make the objects axis-aligned along the Z axis. This is preferred for some methods such as TensoRF and TriPlane. Please see here for the new camera poses.
(2023.11.9) We fixed some incorrect links in the download script. Please use the latest script to download the data.
Dataset Card for OpenIllumination
Dataset Summary
Our dataset comprises 64 objects, each captured from… See the full description on the dataset page: https://huggingface.co/datasets/OpenIllumination/OpenIllumination.
|
wellecks/naturalproofs-gen
|
wellecks
|
Naturalproofs-gen
This dataset contains the Naturalproofs-gen corpus from:
NaturalProver: Grounded Mathematical Proof Generation with Language Models
Sean Welleck*, Jiacheng Liu*, Ximing Lu, Hannaneh Hajishirzi, Yejin Choi
NeurIPS 2022
Licensing Information
MIT
Citation Information
Please cite:
@inproceedings{welleck2022naturalprover,
title={NaturalProver: Grounded Mathematical Proof Generation with Language Models},
author={Sean Welleck and… See the full description on the dataset page: https://huggingface.co/datasets/wellecks/naturalproofs-gen.
|
osunlp/MagicBrush
|
osunlp
|
Dataset Card for MagicBrush
Dataset Summary
MagicBrush is the first large-scale, manually annotated, instruction-guided image editing dataset covering diverse scenarios: single-turn, multi-turn, mask-provided, and mask-free editing. MagicBrush comprises 10K (source image, instruction, target image) triples, which is sufficient to train large-scale image editing models.
Please check our website to explore more visual results.
Dataset Structure
"img_id"… See the full description on the dataset page: https://huggingface.co/datasets/osunlp/MagicBrush.
|
ibm-nasa-geospatial/hls_burn_scars
|
ibm-nasa-geospatial
|
This dataset contains Harmonized Landsat and Sentinel-2 imagery of burn scars and the associated masks for the years 2018-2021 over the contiguous United States. There are 804 512x512 scenes. Its primary purpose is for training geospatial machine learning models.
|
jlohding/sp500-edgar-10k
|
jlohding
|
Dataset Card for SP500-EDGAR-10K
Dataset Summary
This dataset contains the annual reports for all SP500 historical constituents from 2010-2022 from SEC EDGAR Form 10-K filings.
It also contains n-day future returns of each firm's stock price from each filing date.
Dataset Structure
Data Fields
[More Information Needed]
Data Splits
[More Information Needed]
Dataset Creation
Source Data… See the full description on the dataset page: https://huggingface.co/datasets/jlohding/sp500-edgar-10k.
|
Kamizuru00/diagram_image_to_text
|
Kamizuru00
|
Dataset Card for "diagram_image_to_text"
More Information needed
|
THU-StarLab/CustomerService
|
THU-StarLab
|
Dataset description
Composition
Type | Folder name | Source | Count | Notes
---|---|---|---|---
Telecom Q&A | telecom_Q&A | Baidu Zhidao QA | 87366 | Processed with anonymization, data cleaning, and manual screening
Industry knowledge data | industry_data | Textbooks, international standards, etc. | 5218 | QA data derived from documents with a large model; some source documents are kept in source_data
General instruction dataset | general_instruction | firefly | 18123 | Selected general instructions on topics such as reading, emotion understanding, completion, and logical reasoning
Blended dataset | blended_data | - | - | Training and test data blended as dataset construction progressed; ready to use directly
Blended data - V1
Composition (source) | Ratio | Count | Notes
---|---|---|---
Baidu Zhidao | 64% | 32282 | Processed with anonymization, data cleaning, and manual screening
firefly | 36% | 18123 | Selected general instructions on topics such as reading, emotion understanding, completion, and logical reasoning
Standard Q&A | - | 18 | Compiled from China Unicom's online customer service
Total | 100%… See the full description on the dataset page: https://huggingface.co/datasets/THU-StarLab/CustomerService.
|
alpindale/visual-novels
|
alpindale
|
Visual Novel Dataset
This dataset contains parsed Visual Novel scripts for training language models. The dataset consists of approximately 60 million tokens of parsed scripts.
Dataset Structure
The dataset follows a general structure for visual novel scripts:
Dialogue lines: Dialogue lines are formatted with the speaker's name followed by a colon, and the dialogue itself enclosed in quotes. For example:
John: "Hello, how are you?"
Actions and narration: Actions… See the full description on the dataset page: https://huggingface.co/datasets/alpindale/visual-novels.
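A parsing sketch for the dialogue-line convention described above; the regular expression is an illustration, not the dataset's official parser:
import re
line = 'John: "Hello, how are you?"'
m = re.match(r'^(?P<speaker>[^:]+):\s*"(?P<utterance>.*)"$', line)
if m:
    print(m.group("speaker"), "->", m.group("utterance"))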
|
irodkin/celeba_with_llava_captions
|
irodkin
|
Dataset Card for "celeba_with_llava_captions"
More Information needed
|
PKU-Alignment/PKU-SafeRLHF
|
PKU-Alignment
|
Dataset Card for PKU-SafeRLHF
Warning: this dataset contains data that may be offensive or harmful. The data are intended for research purposes, especially research that can make models less harmful. The views expressed in the data do not reflect the views of PKU-Alignment Team or any of its members.
[🏠 Homepage] [🤗 Single Dimension Preference Dataset] [🤗 Q-A Dataset] [🤗 Prompt Dataset]
Citation
If PKU-SafeRLHF has contributed to your work, please consider… See the full description on the dataset page: https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF.
|
mikex86/stackoverflow-posts
|
mikex86
|
StackOverflow Posts Markdown
Dataset Summary
This dataset contains all posts submitted to StackOverflow before the 14th of June 2023 formatted as Markdown text.
The dataset contains ~60 Million posts, totaling ~35GB in size and ~65 billion characters of text.
The data is sourced from Internet Archive StackExchange Data Dump.
Dataset Structure
Each record corresponds to one post of a particular type.
Original ordering from the data dump is not… See the full description on the dataset page: https://huggingface.co/datasets/mikex86/stackoverflow-posts.
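At ~35GB, streaming is the practical way to sample the data; a sketch:
import itertools
import datasets
ds = datasets.load_dataset("mikex86/stackoverflow-posts", split="train", streaming=True)
for post in itertools.islice(ds, 3):  # stream three posts without a full download
    print(post)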
|
WizardLMTeam/WizardLM_evol_instruct_V2_196k
|
WizardLMTeam
|
News
🔥 🔥 🔥 [08/11/2023] We release WizardMath Models.
🔥 Our WizardMath-70B-V1.0 model slightly outperforms some closed-source LLMs on the GSM8K, including ChatGPT 3.5, Claude Instant 1 and PaLM 2 540B.
🔥 Our WizardMath-70B-V1.0 model achieves 81.6 pass@1 on the GSM8k Benchmarks, which is 24.8 points higher than the SOTA open-source LLM.
🔥 Our WizardMath-70B-V1.0 model achieves 22.7 pass@1 on the MATH Benchmarks, which is 9.2 points higher than the SOTA open-source LLM.… See the full description on the dataset page: https://huggingface.co/datasets/WizardLMTeam/WizardLM_evol_instruct_V2_196k.
|
alxfgh/ChEMBL_Drug_Instruction_Tuning
|
alxfgh
|
Dataset Card for ChEMBL Drug Instruction Tuning
Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
Supported Tasks and Leaderboards
[More Information Needed]
Languages
[More Information Needed]
Dataset Structure
Data Instances
[More Information Needed]
Data Fields
[More Information Needed]
Data Splits… See the full description on the dataset page: https://huggingface.co/datasets/alxfgh/ChEMBL_Drug_Instruction_Tuning.
|
HausaNLP/AfriSenti-Twitter
|
HausaNLP
|
AfriSenti is the largest sentiment analysis benchmark dataset for under-represented African languages, covering 110,000+ annotated tweets in 14 African languages (Amharic, Algerian Arabic, Hausa, Igbo, Kinyarwanda, Moroccan Arabic, Mozambican Portuguese, Nigerian Pidgin, Oromo, Swahili, Tigrinya, Twi, Xitsonga, and Yoruba).
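A loading sketch, assuming each language is exposed as a named configuration; "hau" (Hausa) is an assumed config name, so check the dataset card for the exact list:
import datasets
ds = datasets.load_dataset("HausaNLP/AfriSenti-Twitter", "hau")  # "hau" is an assumed config name
print(ds)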
|
irodkin/celeba_with_llava_shorter_captions
|
irodkin
|
Dataset Card for "celeba_with_llava_shorter_captions"
More Information needed
|
InstaDeepAI/nucleotide_transformer_downstream_tasks
|
InstaDeepAI
|
The 18 classification downstream tasks from the Nucleotide Transformer paper. Each task
corresponds to a dataset configuration.
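A loading sketch; "promoter_all" is an assumed example of a task configuration name, so consult the card for the actual 18 names:
import datasets
ds = datasets.load_dataset("InstaDeepAI/nucleotide_transformer_downstream_tasks", "promoter_all")
print(ds)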
|
LukeSajkowski/products_ecommerce_embeddings
|
LukeSajkowski
|
Dataset Card for "products_ecommerce_embeddings"
The dataset is based on https://github.com/querqy/chorus/tree/main/data-encoder
More Information needed
|
takaaki-inada/databricks-dolly-15k-ja-zundamon
|
takaaki-inada
|
This dataset is based on "kunishou/databricks-dolly-15k-ja".
This dataset is licensed under CC BY-SA 3.0.
Last update: 2023-05-11
databricks-dolly-15k-ja: https://github.com/kunishou/databricks-dolly-15k-ja
databricks-dolly-15k: https://github.com/databrickslabs/dolly/tree/master/data
|
Zilun/RS5M
|
Zilun
|
RS5M
File Explaination
1. pub11_NER_geolocation_info.csv
This file provides the geolocation entities extracted from captions. We discovered that the captions from the PUB11 dataset contain a significant amount of location information. As a result, we executed an NER (Named Entity Recognition) extraction on the PUB11 subset.
We hypothesize that the location information in the captions is closely related to the image's content and its shooting location.… See the full description on the dataset page: https://huggingface.co/datasets/Zilun/RS5M.
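An inspection sketch for the CSV named above, assuming it has been downloaded locally (its column names are not listed on this page):
import pandas as pd
df = pd.read_csv("pub11_NER_geolocation_info.csv")  # assumes a local copy
print(df.head())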
|
dmayhem93/agieval-aqua-rat
|
dmayhem93
|
Dataset Card for "agieval-aqua-rat"
Dataset taken from https://github.com/microsoft/AGIEval and processed as in that repo.
Raw dataset: https://github.com/deepmind/AQuA
Copyright 2017 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software… See the full description on the dataset page: https://huggingface.co/datasets/dmayhem93/agieval-aqua-rat.
|
dmayhem93/agieval-lsat-ar
|
dmayhem93
|
Dataset Card for "agieval-lsat-ar"
Dataset taken from https://github.com/microsoft/AGIEval and processed as in that repo.
Raw dataset: https://github.com/zhongwanjun/AR-LSAT
MIT License
Copyright (c) 2022 Wanjun Zhong
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge… See the full description on the dataset page: https://huggingface.co/datasets/dmayhem93/agieval-lsat-ar.
|
dmayhem93/agieval-lsat-lr
|
dmayhem93
|
Dataset Card for "agieval-lsat-lr"
Dataset taken from https://github.com/microsoft/AGIEval and processed as in that repo.
Raw dataset: https://github.com/zhongwanjun/AR-LSAT
MIT License
Copyright (c) 2022 Wanjun Zhong
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge… See the full description on the dataset page: https://huggingface.co/datasets/dmayhem93/agieval-lsat-lr.
|
madebyollin/soa-full
|
madebyollin
|
This dataset is a shuffled list of downloadable CC0 image titles and URLs from Smithsonian Open Access.
Some images may be omitted due to limitations or oversights in the preprocessing pipeline, but there's no deliberate curation.
This dataset only contains metadata; a tool like https://github.com/rom1504/img2dataset can be used to download the actual images:
img2dataset --url_list data --output_folder data_files \
--input_format "parquet" --output_format files \
--caption_col "text"… See the full description on the dataset page: https://huggingface.co/datasets/madebyollin/soa-full.
|
dev7halo/korean-mcfaq
|
dev7halo
|
Usage
pip install datasets
from datasets import load_dataset
dataset = load_dataset("dev7halo/korean-mcfaq")
DatasetDict({
train: Dataset({
features: ['Unnamed: 0', '제목', '등록일', '질문', '답변'],
num_rows: 2452
})
})
# dataset['train'][0]
{'Unnamed: 0': 0,
'제목': "'언젠가', '언젠가는'의 표현",
'등록일': '2019. 12. 6. ',
'질문': '\n\t\t \n\t\t \n\t\t"저는 언젠가 간호사가 되고 싶어요."와 같이 쓸 때, 미래의 불특정한 때를 나타내는 \'언젠가\'라는 단어를 \'언젠가는\'이라고 써도 되나요? \'언젠가\'가 표준어인 것 같은데, 뒤에 \'는\'을 쓴… See the full description on the dataset page: https://huggingface.co/datasets/dev7halo/korean-mcfaq.
|
collabora/whisperspeech
|
collabora
|
The WhisperSpeech Dataset
This dataset contains data to train SPEAR TTS-like text-to-speech models that utilize semantic tokens derived from the OpenAI Whisper speech recognition model.
We currently provide semantic and acoustic tokens for the LibriLight and LibriTTS datasets (English only).
Acoustic tokens:
24kHz EnCodec 6kbps (8 quantizers)
Semantic tokens:
Whisper tiny VQ bottleneck trained on a subset of LibriLight
Available LibriLight subsets:
small/medium/large… See the full description on the dataset page: https://huggingface.co/datasets/collabora/whisperspeech.
|
SamsungSAILMontreal/deepnets1m
|
SamsungSAILMontreal
|
This is a copy of the DeepNets-1M dataset originally released at https://github.com/facebookresearch/ppuda under the MIT license.
The dataset presents diverse computational graphs (1M training and 1402 evaluation) of neural network architectures used in image classification.
See detailed description at https://paperswithcode.com/dataset/deepnets-1m and
in the Parameter Prediction for Unseen Deep Architectures paper.
There are four files in this dataset:
deepnets1m_eval.hdf5; # 16 MB (md5:… See the full description on the dataset page: https://huggingface.co/datasets/SamsungSAILMontreal/deepnets1m.
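An inspection sketch for the evaluation file listed above, assuming a local copy; only the top-level keys are printed since the internal layout is not described here:
import h5py
with h5py.File("deepnets1m_eval.hdf5", "r") as f:  # assumes a local copy
    print(list(f.keys()))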
|
vesteinn/babylm
|
vesteinn
|
ATTENTION
This is preprocessed data for the BabyLM challenge https://babylm.github.io/
If you want the raw unprocessed files, you should download them directly.
|
KShivendu/dbpedia-entities-openai-1M
|
KShivendu
|
1M OpenAI Embeddings -- 1536 dimensions
Created: June 2023.
Text used for Embedding: title (string) + text (string)
Embedding Model: text-embedding-ada-002
First used for the pgvector vs VectorDB (Qdrant) benchmark: https://nirantk.com/writing/pgvector-vs-qdrant/
Future work
We are planning to take this up to 10M (and possibly 100M) vectors. Contact @KShivendu_ on Twitter or mail to [email protected] if you want to help :)
Credits:
This dataset was generated from… See the full description on the dataset page: https://huggingface.co/datasets/KShivendu/dbpedia-entities-openai-1M.
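A similarity sketch over two streamed records; the name of the embedding column is an assumption, so inspect a record's keys to confirm it:
import itertools
import numpy as np
import datasets
ds = datasets.load_dataset("KShivendu/dbpedia-entities-openai-1M", split="train", streaming=True)
a, b = itertools.islice(ds, 2)
va, vb = np.array(a["openai"]), np.array(b["openai"])  # "openai" is an assumed column name
print(float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb))))  # cosine similarity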
|
microsoft/LCC_csharp
|
microsoft
|
Dataset Card for "LCC_csharp"
More Information needed
|
richardr1126/spider-schema
|
richardr1126
|
Dataset Card for Spider Schema
Dataset Summary
Spider is a large-scale complex and cross-domain semantic parsing and text-to-SQL dataset annotated by 11 Yale students
The goal of the Spider challenge is to develop natural language interfaces to cross-domain databases.
This dataset contains the 166 databases used in the Spider dataset.
Yale Lily Spider Leaderboards
The leaderboard can be seen at https://yale-lily.github.io/spider… See the full description on the dataset page: https://huggingface.co/datasets/richardr1126/spider-schema.
|
findnitai/english-to-hinglish
|
findnitai
|
English to Hinglish dataset aggregated from publicly available data sources.
Sources:
Hinglish TOP Dataset
CMU English Dog
HinGE
PHINC
source = 1: human annotated
source = 0: synthetically generated
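A filtering sketch using the source flag documented above; it assumes the flag is a top-level column named "source":
import datasets
ds = datasets.load_dataset("findnitai/english-to-hinglish", split="train")
human = ds.filter(lambda row: row["source"] == 1)  # keep human-annotated pairs
print(len(human))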
|
Sp1786/multiclass-sentiment-analysis-dataset
|
Sp1786
|
Dataset Card for Dataset Name
Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
Supported Tasks and Leaderboards
[More Information Needed]
Languages
[More Information Needed]
Dataset Structure
Data Instances
[More Information Needed]
Data Fields
[More Information Needed]
Data Splits
[More Information… See the full description on the dataset page: https://huggingface.co/datasets/Sp1786/multiclass-sentiment-analysis-dataset.
|
MikhailT/cmu-arctic
|
MikhailT
|
CMU Arctic Dataset
|