id | author | description
---|---|---|
lirus18/deepfashion_with_captions_blowout
|
lirus18
|
Dataset Card for "deepfashion_with_captions_blowout"
More Information needed
|
lansinuote/diffusion.8.instruct_pix2pix
|
lansinuote
|
Dataset Card for "diffusion.8.instruct_pix2pix"
More Information needed
|
lighteval/mmlu
|
lighteval
|
This is a massive multitask test consisting of multiple-choice questions from various branches of knowledge, covering 57 tasks including elementary mathematics, US history, computer science, law, and more.
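For orientation, a minimal sketch of loading a single subject with the datasets library; the config name below is an assumption based on MMLU's standard subject naming.

from datasets import load_dataset

# Config name is an assumption based on the standard 57-task MMLU layout.
mmlu = load_dataset("lighteval/mmlu", "abstract_algebra")
print(mmlu)  # inspect the available splits and features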
|
openllmplayground/pandagpt_visual_instruction_dataset
|
openllmplayground
|
[Dataset Details] This dataset is constructed by combining LLaVA Visual Instruct 150K and the dataset released by MiniGPT-4.
[License] Attribution-NonCommercial 4.0 International. Use of the dataset should also abide by OpenAI's terms of use: https://openai.com/policies/terms-of-use
Intended use
Primary intended uses: The primary use of this dataset is research on large multimodal models and chatbots.
Primary intended users: The primary intended users of the model are researchers and hobbyists in… See the full description on the dataset page: https://huggingface.co/datasets/openllmplayground/pandagpt_visual_instruction_dataset.
|
JasperLS/prompt-injections
|
JasperLS
|
Dataset Card for "deberta-v3-base-injection-dataset"
More Information needed
|
FreedomIntelligence/huatuo26M-testdatasets
|
FreedomIntelligence
|
Dataset Card for huatuo26M-testdatasets
Dataset Summary
We are pleased to announce the release of our evaluation dataset, a subset of the Huatuo-26M. This dataset contains 6,000 entries that we used for Natural Language Generation (NLG) experimentation in our associated research paper.
We encourage researchers and developers to use this evaluation dataset to gauge the performance of their own models. This is not only a chance to assess the accuracy and relevancy… See the full description on the dataset page: https://huggingface.co/datasets/FreedomIntelligence/huatuo26M-testdatasets.
|
lirus18/deepfashion_with_captions_blowout_stacked
|
lirus18
|
Dataset Card for "deepfashion_with_captions_blowout_stacked"
More Information needed
|
deepset/prompt-injections
|
deepset
|
Dataset Card for "deberta-v3-base-injection-dataset"
More Information needed
|
bleugreen/typescript-chunks
|
bleugreen
|
typescript-chunks
A dataset of TypeScript snippets, processed from the typescript subset of the-stack-smol.
Processing
Each source file is parsed with the TypeScript AST and queried for 'semantic chunks' of the following types.
FunctionDeclaration ---- 8205
ArrowFunction --------- 33890
ClassDeclaration ------- 5325
InterfaceDeclaration -- 12884
EnumDeclaration --------- 518
TypeAliasDeclaration --- 3580
MethodDeclaration ----- 24713
Leading comments are added… See the full description on the dataset page: https://huggingface.co/datasets/bleugreen/typescript-chunks.
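For orientation, a minimal sketch of loading the dataset and filtering by chunk type; the column name "type" is an assumption for illustration, not confirmed by the card.

from datasets import load_dataset

ds = load_dataset("bleugreen/typescript-chunks", split="train")
# "type" is an assumed column name holding the AST node kind.
funcs = ds.filter(lambda row: row["type"] == "FunctionDeclaration")
print(len(funcs))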
|
deepghs/anime_head_detection
|
deepghs
|
Dataset for anime head detection (includes the entire head, not only the face).
Dataset | Train | Test | Validate | Description
v2.0 | 16050 | 766 | 1528 | High-quality manually annotated head detection dataset, containing images from bangumi and a large number of complex multi-person illustrations, which can significantly improve the training quality of the v1.0 dataset. Recommended to be used together with the v1.0 dataset. Thanks to @SumomoLee and @Crystal427 for participating in the annotation… See the full description on the dataset page: https://huggingface.co/datasets/deepghs/anime_head_detection.
|
Abrumu/Fashion_controlnet_dataset_V3
|
Abrumu
|
Dataset Card for "Fashion_controlnet_dataset_V3"
More Information needed
|
pszemraj/dolly_hhrlhf-text2text
|
pszemraj
|
dolly_hhrlhf-text2text
This is mosaicml/dolly_hhrlhf with the following changes:
clean up/adapt prompt column for the text2text-generation task (no need for a special template)
split the original train set into a 95% train and an explicit validation set (5%)
fixed extra spaces before punctuation (as this is not a French dataset)
details on extra spaces:
Original sentence 1: How can I be healthy ?
Fixed sentence 1: How can I be healthy?
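A minimal sketch of the kind of cleanup described above, assuming a simple regex suffices (the dataset's actual processing code is not shown here).

import re

def fix_punct_spaces(text: str) -> str:
    # Drop French-style spaces before punctuation:
    # "How can I be healthy ?" -> "How can I be healthy?"
    return re.sub(r"\s+([?!.,;:])", r"\1", text)

print(fix_punct_spaces("How can I be healthy ?"))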
|
Nan-Do/instructional_code-search-net-ruby
|
Nan-Do
|
Dataset Card for "instructional_code-search-net-ruby"
Dataset Summary
This is an instructional dataset for Ruby.
The dataset contains two different kinds of tasks:
Given a piece of code, generate a description of what it does.
Given a description, generate a piece of code that fulfils the description.
Languages
The dataset is in English.
Data Splits
There are no splits.
Dataset Creation
May of 2023
Curation… See the full description on the dataset page: https://huggingface.co/datasets/Nan-Do/instructional_code-search-net-ruby.
|
joey234/mmlu-abstract_algebra
|
joey234
|
Dataset Card for "mmlu-abstract_algebra"
More Information needed
|
joey234/mmlu-anatomy
|
joey234
|
Dataset Card for "mmlu-anatomy"
More Information needed
|
joey234/mmlu-astronomy
|
joey234
|
Dataset Card for "mmlu-astronomy"
More Information needed
|
joey234/mmlu-business_ethics
|
joey234
|
Dataset Card for "mmlu-business_ethics"
More Information needed
|
joey234/mmlu-clinical_knowledge
|
joey234
|
Dataset Card for "mmlu-clinical_knowledge"
More Information needed
|
joey234/mmlu-college_biology
|
joey234
|
Dataset Card for "mmlu-college_biology"
More Information needed
|
joey234/mmlu-college_chemistry
|
joey234
|
Dataset Card for "mmlu-college_chemistry"
More Information needed
|
joey234/mmlu-college_computer_science
|
joey234
|
Dataset Card for "mmlu-college_computer_science"
More Information needed
|
joey234/mmlu-college_mathematics
|
joey234
|
Dataset Card for "mmlu-college_mathematics"
More Information needed
|
joey234/mmlu-college_medicine
|
joey234
|
Dataset Card for "mmlu-college_medicine"
More Information needed
|
joey234/mmlu-college_physics
|
joey234
|
Dataset Card for "mmlu-college_physics"
More Information needed
|
joey234/mmlu-computer_security
|
joey234
|
Dataset Card for "mmlu-computer_security"
More Information needed
|
joey234/mmlu-conceptual_physics
|
joey234
|
Dataset Card for "mmlu-conceptual_physics"
More Information needed
|
joey234/mmlu-econometrics
|
joey234
|
Dataset Card for "mmlu-econometrics"
More Information needed
|
joey234/mmlu-electrical_engineering
|
joey234
|
Dataset Card for "mmlu-electrical_engineering"
More Information needed
|
joey234/mmlu-elementary_mathematics
|
joey234
|
Dataset Card for "mmlu-elementary_mathematics"
More Information needed
|
joey234/mmlu-formal_logic
|
joey234
|
Dataset Card for "mmlu-formal_logic"
More Information needed
|
joey234/mmlu-global_facts
|
joey234
|
Dataset Card for "mmlu-global_facts"
More Information needed
|
joey234/mmlu-high_school_biology
|
joey234
|
Dataset Card for "mmlu-high_school_biology"
More Information needed
|
joey234/mmlu-high_school_chemistry
|
joey234
|
Dataset Card for "mmlu-high_school_chemistry"
More Information needed
|
joey234/mmlu-high_school_computer_science
|
joey234
|
Dataset Card for "mmlu-high_school_computer_science"
More Information needed
|
joey234/mmlu-high_school_european_history
|
joey234
|
Dataset Card for "mmlu-high_school_european_history"
More Information needed
|
joey234/mmlu-high_school_geography
|
joey234
|
Dataset Card for "mmlu-high_school_geography"
More Information needed
|
joey234/mmlu-high_school_government_and_politics
|
joey234
|
Dataset Card for "mmlu-high_school_government_and_politics"
More Information needed
|
joey234/mmlu-high_school_macroeconomics
|
joey234
|
Dataset Card for "mmlu-high_school_macroeconomics"
More Information needed
|
joey234/mmlu-high_school_mathematics
|
joey234
|
Dataset Card for "mmlu-high_school_mathematics"
More Information needed
|
joey234/mmlu-high_school_microeconomics
|
joey234
|
Dataset Card for "mmlu-high_school_microeconomics"
More Information needed
|
joey234/mmlu-high_school_physics
|
joey234
|
Dataset Card for "mmlu-high_school_physics"
More Information needed
|
joey234/mmlu-high_school_psychology
|
joey234
|
Dataset Card for "mmlu-high_school_psychology"
More Information needed
|
joey234/mmlu-high_school_statistics
|
joey234
|
Dataset Card for "mmlu-high_school_statistics"
More Information needed
|
joey234/mmlu-high_school_us_history
|
joey234
|
Dataset Card for "mmlu-high_school_us_history"
More Information needed
|
joey234/mmlu-high_school_world_history
|
joey234
|
Dataset Card for "mmlu-high_school_world_history"
More Information needed
|
joey234/mmlu-human_aging
|
joey234
|
Dataset Card for "mmlu-human_aging"
More Information needed
|
joey234/mmlu-human_sexuality
|
joey234
|
Dataset Card for "mmlu-human_sexuality"
More Information needed
|
joey234/mmlu-international_law
|
joey234
|
Dataset Card for "mmlu-international_law"
More Information needed
|
joey234/mmlu-jurisprudence
|
joey234
|
Dataset Card for "mmlu-jurisprudence"
More Information needed
|
joey234/mmlu-logical_fallacies
|
joey234
|
Dataset Card for "mmlu-logical_fallacies"
More Information needed
|
joey234/mmlu-machine_learning
|
joey234
|
Dataset Card for "mmlu-machine_learning"
More Information needed
|
joey234/mmlu-management
|
joey234
|
Dataset Card for "mmlu-management"
More Information needed
|
joey234/mmlu-marketing
|
joey234
|
Dataset Card for "mmlu-marketing"
More Information needed
|
joey234/mmlu-medical_genetics
|
joey234
|
Dataset Card for "mmlu-medical_genetics"
More Information needed
|
joey234/mmlu-miscellaneous
|
joey234
|
Dataset Card for "mmlu-miscellaneous"
More Information needed
|
joey234/mmlu-moral_disputes
|
joey234
|
Dataset Card for "mmlu-moral_disputes"
More Information needed
|
joey234/mmlu-moral_scenarios
|
joey234
|
Dataset Card for "mmlu-moral_scenarios"
More Information needed
|
joey234/mmlu-nutrition
|
joey234
|
Dataset Card for "mmlu-nutrition"
More Information needed
|
joey234/mmlu-philosophy
|
joey234
|
Dataset Card for "mmlu-philosophy"
More Information needed
|
joey234/mmlu-prehistory
|
joey234
|
Dataset Card for "mmlu-prehistory"
More Information needed
|
joey234/mmlu-professional_accounting
|
joey234
|
Dataset Card for "mmlu-professional_accounting"
More Information needed
|
joey234/mmlu-professional_law
|
joey234
|
Dataset Card for "mmlu-professional_law"
More Information needed
|
joey234/mmlu-professional_medicine
|
joey234
|
Dataset Card for "mmlu-professional_medicine"
More Information needed
|
joey234/mmlu-public_relations
|
joey234
|
Dataset Card for "mmlu-public_relations"
More Information needed
|
joey234/mmlu-security_studies
|
joey234
|
Dataset Card for "mmlu-security_studies"
More Information needed
|
joey234/mmlu-us_foreign_policy
|
joey234
|
Dataset Card for "mmlu-us_foreign_policy"
More Information needed
|
joey234/mmlu-virology
|
joey234
|
Dataset Card for "mmlu-virology"
More Information needed
|
joey234/mmlu-world_religions
|
joey234
|
Dataset Card for "mmlu-world_religions"
More Information needed
|
voidful/StrategyQA
|
voidful
|
A Question Answering Benchmark with Implicit Reasoning Strategies
The StrategyQA dataset was created through a crowdsourcing pipeline for eliciting creative and diverse yes/no questions that require implicit reasoning steps. To solve questions in StrategyQA, the reasoning steps should be inferred using a strategy. To guide and evaluate the question answering process, each example in StrategyQA was annotated with a decomposition into reasoning steps for answering it, and Wikipedia paragraphs… See the full description on the dataset page: https://huggingface.co/datasets/voidful/StrategyQA.
|
ma2za/many_emotions
|
ma2za
|
Dataset Card for "many_emotions"
Dataset Summary
Languages
[More Information Needed]
Dataset Structure
Data Instances
[More Information Needed]
Data Fields
The data fields are:
id: unique identifier
text: a string feature.
label: a classification label, with possible values including anger (0), fear (1), joy (2), love (3), sadness (4), surprise (5), neutral (6).
license: inherited license from source… See the full description on the dataset page: https://huggingface.co/datasets/ma2za/many_emotions.
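For orientation, a minimal sketch that maps the integer labels documented above back to names; the split name is an assumption.

from datasets import load_dataset

ID2LABEL = {0: "anger", 1: "fear", 2: "joy", 3: "love",
            4: "sadness", 5: "surprise", 6: "neutral"}

ds = load_dataset("ma2za/many_emotions", split="train")  # split name is an assumption
example = ds[0]
print(example["text"], "->", ID2LABEL[example["label"]])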
|
CohereForAI/xP3x
|
CohereForAI
|
A multilingual collection of Winograd Schemas in six languages that can be used for evaluation of cross-lingual commonsense reasoning capabilities.
|
tarungupta83/MidJourney_v5_Prompt_dataset
|
tarungupta83
|
This dataset contains raw prompts from Midjourney v5.
Total records: 4,245,117
Sample Data
AuthorID | Author | Date | Content | Attachments | Reactions
936929561302675456 | Midjourney Bot#9282 | 04/20/2023 12:00 AM | benjamin frankling with rayban sunglasses reflecting a usa flag walking on a side of penguin, whit... | Link |
936929561302675456 | Midjourney Bot#9282 | 04/20/2023 12:00 AM | Street vendor robot in 80's Poland, meat market, fruit stall, communist style, real photo, real ph... | Link |
… See the full description on the dataset page: https://huggingface.co/datasets/tarungupta83/MidJourney_v5_Prompt_dataset.
|
mio/sukasuka-anime-vocal-dataset
|
mio
|
Full-character voice dataset for "WorldEnd: What Do You Do at the End of the World? Are You Busy? Will You Save Us?" (SukaSuka)
Introduction
This dataset contains voice data for every character in the anime, with WAV files and the corresponding Japanese dialogue text.
The production pipeline was as follows (a rough code sketch follows the list):
Extract the complete audio of the 12 episodes from the anime video, then isolate the vocals with demucs
Use the subtitle files to cut out the individual audio segments
Manually identify the speaking character for each of the 3,000+ lines
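A rough sketch of the first two steps under stated assumptions: demucs's two-stem vocal separation plus the pysubs2 and pydub libraries; all file paths are placeholders.

import subprocess
import pysubs2
from pydub import AudioSegment

# Step 1: isolate vocals with demucs (path follows demucs v4's default
# "separated/<model>/<track>/vocals.wav" layout; adjust to your setup).
subprocess.run(["demucs", "--two-stems=vocals", "episode01.wav"], check=True)
vocals = AudioSegment.from_wav("separated/htdemucs/episode01/vocals.wav")

# Step 2: cut one clip per subtitle line (pysubs2 times are in milliseconds).
subs = pysubs2.load("episode01.ja.srt")
for i, line in enumerate(subs):
    vocals[line.start:line.end].export(f"clips/{i:04d}.wav", format="wav")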
|
BNNT/IPQuiz
|
BNNT
|
The IPQuiz dataset is used to assess a model's understanding of intellectual-property-related concepts and regulations. IPQuiz is a multiple-choice question-response dataset collected from publicly available websites around the world, in a variety of languages. For each question, the model needs to select an answer from a candidate list.
source:
http://epaper.iprchn.com/zscqb/h5/html5/2023-04/21/content_27601_7600799.htm… See the full description on the dataset page: https://huggingface.co/datasets/BNNT/IPQuiz.
|
codeparrot/self-instruct-starcoder
|
codeparrot
|
Self-instruct-starcoder
Summary
Self-instruct-starcoder is a dataset that was generated by prompting StarCoder to generate new instructions based on a set of human-written seed instructions.
The underlying process is explained in the self-instruct paper. This algorithm gave birth to famous machine-generated datasets such as Alpaca and Code Alpaca, both of which were obtained by prompting OpenAI's text-davinci-003 engine.
Our approach
While our method… See the full description on the dataset page: https://huggingface.co/datasets/codeparrot/self-instruct-starcoder.
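For orientation, a schematic sketch of one self-instruct round as described above; the generate callable is a stand-in for whatever completion endpoint is used, and the filtering is deliberately simplified.

import random

def self_instruct_round(seed_tasks, generate, n_prompts=3):
    # One schematic self-instruct round: show the model a few seed
    # instructions and ask it to continue the numbered list.
    examples = random.sample(seed_tasks, n_prompts)
    prompt = "\n".join(f"{i + 1}. {t}" for i, t in enumerate(examples))
    prompt += f"\n{n_prompts + 1}."
    completion = generate(prompt)  # stand-in for the actual model call
    candidates = [line.strip() for line in completion.splitlines() if line.strip()]
    # Deliberately simplified filter: drop exact duplicates of the seeds.
    return [c for c in candidates if c not in seed_tasks]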
|
Stardrums/pico-breast-cancer
|
Stardrums
|
The corpus consists of about 1,011 PubMed abstracts of RCTs related to breast cancer. For each abstract, text snippets that identify the Participants, Intervention, Control, and Outcome (PICO elements) are annotated. The abstracts were annotated using BRAT (https://brat.nlplab.org/) and later converted to IOB format, illustrated below.
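For illustration, IOB tagging marks each token as beginning (B-), inside (I-), or outside (O) an annotated span; a hypothetical Participants span might be encoded as follows (the corpus's exact label names may differ).

# Hypothetical IOB-tagged fragment for a PICO "Participants" span.
tokens = [
    ("We", "O"), ("enrolled", "O"),
    ("120", "B-Participants"), ("women", "I-Participants"),
    ("with", "I-Participants"), ("breast", "I-Participants"),
    ("cancer", "I-Participants"), (".", "O"),
]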
|
tasksource/ruletaker
|
tasksource
|
Dataset Card for "ruletaker"
https://github.com/allenai/ruletaker
@inproceedings{ruletaker2020,
  title     = {Transformers as Soft Reasoners over Language},
  author    = {Clark, Peter and Tafjord, Oyvind and Richardson, Kyle},
  booktitle = {Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, {IJCAI-20}},
  publisher = {International Joint Conferences on Artificial Intelligence Organization},
  editor    = {Christian… See the full description on the dataset page: https://huggingface.co/datasets/tasksource/ruletaker.
|
izumi-lab/llm-japanese-dataset-vanilla
|
izumi-lab
|
llm-japanese-dataset-vanilla
A Japanese chat dataset for building LLMs.
This is izumi-lab/llm-japanese-dataset with the Japanese-English translation data and similar subsets removed.
It is mainly intended for tuning Japanese LLMs (e.g., with LoRA) on chat (instruction) response tasks.
Note: this dataset draws on a variety of publicly released language resources; we take this opportunity to thank everyone involved.
Dataset details
For details, see the following references on izumi-lab/llm-japanese-dataset:
Japanese: https://jxiv.jst.go.jp/index.php/jxiv/preprint/view/383
English: https://arxiv.org/abs/2305.12720
GitHub: https://github.com/masanorihirano/llm-japanese-dataset
Latest information:… See the full description on the dataset page: https://huggingface.co/datasets/izumi-lab/llm-japanese-dataset-vanilla.
|
stanfordnlp/SHP-2
|
stanfordnlp
|
🚢 Stanford Human Preferences Dataset v2 (SHP-2)
Summary
SHP-2 is a dataset of 4.8M collective human preferences over responses to questions/instructions in 129 different subject areas, from cooking to legal advice. It is an extended version of the original 385K SHP dataset.
The preferences are meant to reflect the helpfulness of one response over another, and are intended to be used for training RLHF reward models and NLG evaluation models (e.g., SteamSHP).
Each… See the full description on the dataset page: https://huggingface.co/datasets/stanfordnlp/SHP-2.
|
gretelai/symptom_to_diagnosis
|
gretelai
|
Dataset Summary
This dataset contains natural language descriptions of symptoms labeled with 22 corresponding diagnoses. Gretel/symptom_to_diagnosis provides 1065 symptom descriptions in the English language labeled with 22 diagnoses, focusing on fine-grained single-domain diagnosis.
Data Fields
Each row contains the following fields:
input_text : A string field containing symptoms
output_text : A string field containing a diagnosis
Example:
{
"output_text":… See the full description on the dataset page: https://huggingface.co/datasets/gretelai/symptom_to_diagnosis.
|
BioDEX/BioDEX-QA
|
BioDEX
|
Dataset Card for "BioDEX-QA"
More Information needed
|
tasksource/tasksource-instruct-v0
|
tasksource
|
Dataset Card for "tasksource-instruct-v0" (TSI)
Multi-task instruction-tuning data recast from 485 of the tasksource datasets.
Dataset size is capped at 30k examples per task to foster task diversity.
!pip install tasksource pandit
import tasksource, pandit

def iter_tasks():
    df = tasksource.list_tasks(instruct=True).sieve(id=lambda x: 'mmlu' not in x)
    for task in df.id:
        yield tasksource.load_task(task, instruct=True, max_rows=30_000, max_rows_eval=200)
https://github.com/sileod/tasksource… See the full description on the dataset page: https://huggingface.co/datasets/tasksource/tasksource-instruct-v0.
|
szymonrucinski/types-of-film-shots
|
szymonrucinski
|
What a shot!
Dataset created by Szymon Ruciński. It consists of ~1,000 images of different movie shots, precisely labeled with shot type. The dataset is divided into the categories: detail, close-up, medium shot, full shot, long shot, and extreme long shot. Data was gathered and labeled on the platform plan-doskonaly.netlify.com, created by Szymon. The dataset is available under the Creative Commons Attribution 4.0 International license.
|
fblgit/tree-of-knowledge
|
fblgit
|
tree-of-knowledge-llm
ToK, aka Tree of Knowledge for Large Language Models (LLMs). It is a novel dataset that encourages symbolic knowledge correlation through simple input and output prompts.
https://github.com/fblgit/tree-of-knowledge-llm
Experimentally, the set can be used for multiple purposes:
Knowledge Extraction from a Model
Fine Tuning a model with newer data
Create Granular Domain Knowledge Sets
Improve training performance
Syntax Example:
{
"instruction": "Describe… See the full description on the dataset page: https://huggingface.co/datasets/fblgit/tree-of-knowledge.
|
Linly-AI/Chinese-pretraining-dataset
|
Linly-AI
|
Data source: https://github.com/CVI-SZU/Linly/wiki/Linly-OpenLLaMA
|
winddude/reddit_finance_43_250k
|
winddude
|
reddit finance 43 250k
reddit_finance_43_250k is a collection of 250k post/comment pairs from 43 financial, investing and crypto subreddits. Posts must all have been text, with a length of 250 chars, and a positive score. Each subreddit is narrowed down to its 70th quantile before being merged with its top 3 comments and then with the other subreddits. Further score-based methods are used to select the top 250k post/comment pairs (a schematic sketch of the quantile cut follows below).
The code to recreate the dataset is here:… See the full description on the dataset page: https://huggingface.co/datasets/winddude/reddit_finance_43_250k.
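For orientation, a schematic pandas sketch of the per-subreddit 70th-quantile cut described above; the column names ("subreddit", "score") are assumptions for illustration.

import pandas as pd

def select_top_posts(posts: pd.DataFrame) -> pd.DataFrame:
    # Keep each subreddit's posts at or above its own 70th score quantile.
    cutoffs = posts.groupby("subreddit")["score"].transform(lambda s: s.quantile(0.70))
    return posts[posts["score"] >= cutoffs]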
|
mssongit/KorfinQA
|
mssongit
|
Korean translation of FinQA
6,252 question/answer rows in total
|
beomi/KoAlpaca-v1.1a
|
beomi
|
Dataset Card for "KoAlpaca-v1.1a"
Project Repo
Github Repo: Beomi/KoAlpaca
How to use
>>> from datasets import load_dataset
>>> ds = load_dataset("beomi/KoAlpaca-v1.1a", split="train")
>>> ds
Dataset({
features: ['instruction', 'input', 'output'],
num_rows: 21155
})
>>> ds[0]
{'instruction': '양파는 어떤 식물 부위인가요? 그리고 고구마는 뿌리인가요?',
'output': '양파는 잎이 아닌 식물의 줄기 부분입니다. 고구마는 식물의 뿌리 부분입니다. \n\n식물의 부위의 구분에 대해 궁금해하는 분이라면 분명 이 질문에 대한 답을 찾고 있을 것입니다.… See the full description on the dataset page: https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a.
|
wanng/midjourney-v5-202304-clean
|
wanng
|
midjourney-v5-202304-clean
Brief Introduction
Unofficial. Crawled from Midjourney v5, April 2023; 1,701,420 pairs in total.
Dataset Information
Original project address: https://huggingface.co/datasets/tarungupta83/MidJourney_v5_Prompt_dataset
I did some cleaning and extracted two files:
ori_prompts_df.parquet (1,255,812 pairs; Midjourney's standard four-image grids)
upscaled_prompts_df.parquet (445,608 pairs; images that were upscaled, meaning they were more popular)
… See the full description on the dataset page: https://huggingface.co/datasets/wanng/midjourney-v5-202304-clean.
|
openchat/openchat_sharegpt4_dataset
|
openchat
|
This repository contains cleaned and filtered ShareGPT GPT-4 data used to train OpenChat. Details can be found in the OpenChat repository.
|
wanng/midjourney-kaggle-clean
|
wanng
|
midjourney-kaggle-clean
Brief Introduction
Unofficial. A cleanup of the Kaggle dataset "Midjourney User Prompts & Generated Images (250k)" (https://www.kaggle.com/datasets/succinctlyai/midjourney-texttoimage?select=general-01_2022_06_20.json), yielding 248,167 pairs in total.… See the full description on the dataset page: https://huggingface.co/datasets/wanng/midjourney-kaggle-clean.
|
donfour/bricks_ui_elements_v5_donut
|
donfour
|
Dataset Card for "bricks_ui_elements_v5_donut"
More Information needed
|
AlekseyKorshuk/roleplay-characters
|
AlekseyKorshuk
|
Dataset Card for "roleplay-characters"
More Information needed
|
bbz662bbz/databricks-dolly-15k-ja-gozaru
|
bbz662bbz
|
This dataset was created from "kunishou/databricks-dolly-15k-ja".
This dataset is licensed under CC BY-SA 3.0.
Last updated: 2023-05-28
databricks-dolly-15k-ja-gozaru
kunishou/databricks-dolly-15k-ja
https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja
|
hammer888/interior_style_dataset
|
hammer888
|
Dataset Card for "interior_style_dataset"
More Information needed
|
fujiki/japanese_hh-rlhf-49k
|
fujiki
|
This is a slightly different version of kunishou/hh-rlhf-49k-ja, with the ng_translation == 1 examples removed.
Please also refer to the original dataset, kunishou/hh-rlhf-49k-ja.
|
linhtran92/viet_bud500
|
linhtran92
|
Bud500: A Comprehensive Vietnamese ASR Dataset
Introducing Bud500, a diverse Vietnamese speech corpus designed to support the ASR research community. With approximately 500 hours of audio, it covers a broad spectrum of topics including podcasts, travel, books, food, and so on, while spanning accents from Vietnam's North, South, and Central regions. Derived from free public audio resources, this publicly accessible dataset is designed to significantly enhance the work of developers and… See the full description on the dataset page: https://huggingface.co/datasets/linhtran92/viet_bud500.
|
Den4ikAI/ru_sberquad_long_answers
|
Den4ikAI
|
UPD 2023-05-29: negative examples added.
A dataset for answering questions about a given text.
Generated by the Den4ikAI/FRED-T5-XL_instructor model.
Differences from sberquad, xquad, etc.:
Answers are not one-word; they are detailed and span several sentences
Not suitable for training encoder models!
|
tatsu-lab/alpaca_eval
|
tatsu-lab
|
Data for alpaca_eval, which aims to enable automatic evaluation of instruction-following models
|
singletongue/wikipedia-utils
|
singletongue
|
Wikipedia-Utils: Preprocessed Wikipedia Texts for NLP
Preprocessed Wikipedia texts generated with the scripts in singletongue/wikipedia-utils repo.
For detailed information on how the texts are processed, please refer to the repo.
|