id | author | description
---|---|---
google/wiki40b
|
google
|
Dataset Card for "wiki40b"
Dataset Summary
Cleaned-up text from 40+ Wikipedia language editions, covering pages that
correspond to entities. The datasets have train/dev/test splits per language.
The dataset is cleaned up by page filtering to remove disambiguation pages,
redirect pages, deleted pages, and non-entity pages. Each example contains the
wikidata id of the entity, and the full Wikipedia article after page processing
that removes non-content sections and structured… See the full description on the dataset page: https://huggingface.co/datasets/google/wiki40b.
|
legacy-datasets/wikipedia
|
legacy-datasets
|
Wikipedia dataset containing cleaned articles of all languages.
The datasets are built from the Wikipedia dump
(https://dumps.wikimedia.org/) with one split per language. Each example
contains the content of one full Wikipedia article with cleaning to strip
markdown and unwanted sections (references, etc.).
|
KTH/nst
|
KTH
|
This database was created by Nordic Language Technology for the development of automatic speech recognition and dictation in Swedish. In this updated version, the organization of the data has been altered to improve the usefulness of the database.
In the original version of the material, the files were organized in a specific folder structure where the folder names were meaningful. However, the file names were not meaningful, and there were also cases of files with identical names in different folders. This proved to be impractical, since users had to keep the original folder structure in order to use the data. The files have been renamed, such that the file names are unique and meaningful regardless of the folder structure. The original metadata files were in spl format. These have been converted to JSON format. The converted metadata files are also anonymized and the text encoding has been converted from ANSI to UTF-8.
|
SetFit/go_emotions
|
SetFit
|
GoEmotions
This dataset is a port of the official go_emotions dataset on the Hub. It only contains the simplified subset as these are the only fields we need for text classification.
|
csebuetnlp/xlsum
|
csebuetnlp
|
We present XL-Sum, a comprehensive and diverse dataset comprising 1.35 million professionally
annotated article-summary pairs from BBC, extracted using a set of carefully designed heuristics.
The dataset covers 45 languages ranging from low- to high-resource, for many of which no
public dataset is currently available. XL-Sum is highly abstractive, concise,
and of high quality, as indicated by human and intrinsic evaluation.
|
PolyAI/minds14
|
PolyAI
|
MINDS-14 is a training and evaluation resource for the intent
detection task with spoken data. It covers 14
intents extracted from a commercial system
in the e-banking domain, associated with spoken examples in 14 diverse language varieties.
|
huggan/wikiart
|
huggan
|
Dataset Summary
Dataset containing 81,444 pieces of visual art from various artists, taken from WikiArt.org,
along with class labels for each image:
"artist": 129 artist classes, including an "Unknown Artist" class
"genre": 11 genre classes, including an "Unknown Genre" class
"style": 27 style classes
On WikiArt.org, the description for the "Artworks by Genre" page reads:
A genre system divides artworks according to depicted themes and objects. A classical hierarchy of… See the full description on the dataset page: https://huggingface.co/datasets/huggan/wikiart.
|
google/fleurs
|
google
|
FLEURS
Fleurs is the speech version of the FLoRes machine translation benchmark.
We use 2009 n-way parallel sentences from the FLoRes dev and devtest publicly available sets, in 102 languages.
Training sets have around 10 hours of supervision. Speakers of the train sets are different from speakers of the dev/test sets. Multilingual fine-tuning is
used and the "unit error rate" (characters, signs) of all languages is averaged. Languages and results are also grouped into seven… See the full description on the dataset page: https://huggingface.co/datasets/google/fleurs.
|
toxigen/toxigen-data
|
toxigen
|
Dataset Card for ToxiGen
Sign up for Data Access
To access ToxiGen, first fill out this form.
Dataset Summary
This dataset is for implicit hate speech detection. All instances were generated using GPT-3 and the methods described in our paper.
Languages
All text is written in English.
Dataset Structure
Data Fields
We release TOXIGEN as a dataframe with the following fields:
prompt is the prompt used for… See the full description on the dataset page: https://huggingface.co/datasets/toxigen/toxigen-data.
|
ILSVRC/imagenet-1k
|
ILSVRC
|
ILSVRC 2012, commonly known as 'ImageNet', is an image dataset organized according to the WordNet hierarchy. Each meaningful concept in WordNet, possibly described by multiple words or word phrases, is called a "synonym set" or "synset". There are more than 100,000 synsets in WordNet, the majority of them nouns (80,000+). ImageNet aims to provide on average 1,000 images to illustrate each synset. Images of each concept are quality-controlled and human-annotated. Upon completion, ImageNet hopes to offer tens of millions of cleanly sorted images for most of the concepts in the WordNet hierarchy. ImageNet 2012 is the most commonly used subset of ImageNet. This dataset spans 1,000 object classes and contains 1,281,167 training images, 50,000 validation images and 100,000 test images.
|
facebook/voxpopuli
|
facebook
|
A large-scale multilingual speech corpus for representation learning, semi-supervised learning and interpretation.
|
stanfordnlp/sst2
|
stanfordnlp
|
Dataset Card for SST-2
Dataset Summary
The Stanford Sentiment Treebank is a corpus with fully labeled parse trees that allows for a complete analysis of the
compositional effects of sentiment in language. The corpus is based on the dataset introduced by Pang and Lee (2005)
and consists of 11,855 single sentences extracted from movie reviews. It was parsed with the Stanford parser and
includes a total of 215,154 unique phrases from those parse trees, each… See the full description on the dataset page: https://huggingface.co/datasets/stanfordnlp/sst2.
|
codeparrot/apps
|
codeparrot
|
APPS is a benchmark for Python code generation. It includes 10,000 problems, ranging from simple one-line solutions to substantial algorithmic challenges. For more details, please refer to this paper: https://arxiv.org/pdf/2105.09938.pdf.
|
knkarthick/dialogsum
|
knkarthick
|
Dataset Card for DIALOGSum Corpus
Dataset Description
Links
Homepage: https://aclanthology.org/2021.findings-acl.449
Repository: https://github.com/cylnlp/dialogsum
Paper: https://aclanthology.org/2021.findings-acl.449
Point of Contact: https://huggingface.co/knkarthick
Dataset Summary
DialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 (Plus 100 holdout data for topic generation) dialogues with… See the full description on the dataset page: https://huggingface.co/datasets/knkarthick/dialogsum.
|
kiddothe2b/contract-nli
|
kiddothe2b
|
ContractNLI: A Benchmark Dataset for Document-level Natural Language Inference on Contracts in English
|
okite97/news-data
|
okite97
|
Dataset Card for news-data
Dataset Summary
The News Dataset is an English-language dataset containing just over 4k unique news articles scraped from AriseTV, one of the most popular television news channels in Nigeria.
Supported Tasks and Leaderboards
It supports news article classification into different categories.
Languages
English
Dataset Structure
Data Instances
'''
{'Title': 'Nigeria: APC Yet to Zone Party… See the full description on the dataset page: https://huggingface.co/datasets/okite97/news-data.
|
ShapeNet/ShapeNetCore
|
ShapeNet
|
This repository contains ShapeNetCore (v2), a subset of ShapeNet. ShapeNetCore is a densely annotated subset of ShapeNet covering 55 common object categories with ~51,300 unique 3D models. Each model in ShapeNetCore is linked to an appropriate synset in WordNet 3.0.
Please see DATA.md for details about the data.
If you use ShapeNet data, you agree to abide by the ShapeNet terms of use. You are only allowed to redistribute the data to your research associates and colleagues provided that… See the full description on the dataset page: https://huggingface.co/datasets/ShapeNet/ShapeNetCore.
|
zeroshot/twitter-financial-news-sentiment
|
zeroshot
|
Dataset Description
The Twitter Financial News dataset is an English-language dataset containing an annotated corpus of finance-related tweets. This dataset is used to classify finance-related tweets for their sentiment.
The dataset holds 11,932 documents annotated with 3 labels:
sentiments = {
"LABEL_0": "Bearish",
"LABEL_1": "Bullish",
"LABEL_2": "Neutral"
}
The data was collected using the Twitter API. The current dataset supports the multi-class… See the full description on the dataset page: https://huggingface.co/datasets/zeroshot/twitter-financial-news-sentiment.
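As a rough illustration of the label scheme above, the sketch below loads the corpus with the datasets library and maps the integer labels back to sentiment names; the split name and the "text"/"label" column names are assumptions, not confirmed by this card.
from datasets import load_dataset

# Label ids follow the mapping documented above; column names are assumed.
ID2SENTIMENT = {0: "Bearish", 1: "Bullish", 2: "Neutral"}

ds = load_dataset("zeroshot/twitter-financial-news-sentiment", split="train")
example = ds[0]
print(example["text"], "->", ID2SENTIMENT[example["label"]])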
|
priyank-m/chinese_text_recognition
|
priyank-m
|
Source of data: https://github.com/FudanVI/benchmarking-chinese-text-recognition
|
Gustavosta/Stable-Diffusion-Prompts
|
Gustavosta
|
Stable Diffusion Dataset
This is a set of about 80,000 prompts filtered and extracted from the image finder for Stable Diffusion: "Lexica.art". It was a little difficult to extract the data, since the search engine still doesn't offer a public API that isn't protected by Cloudflare.
If you want to test the model with a demo, you can go to: "spaces/Gustavosta/MagicPrompt-Stable-Diffusion".
If you want to see the model, go to: "Gustavosta/MagicPrompt-Stable-Diffusion".
|
jamescalam/youtube-transcriptions
|
jamescalam
|
The YouTube transcriptions dataset contains technical tutorials (currently from James Briggs, Daniel Bourke, and AI Coffee Break) transcribed using OpenAI's Whisper (large). Each row represents roughly a sentence-length chunk of text alongside the video URL and timestamp.
Note that each item in the dataset contains just a short chunk of text. For most use cases you will likely need to merge multiple rows to create more substantial chunks of text; if you need to do that, this code snippet will… See the full description on the dataset page: https://huggingface.co/datasets/jamescalam/youtube-transcriptions.
|
openai/summarize_from_feedback
|
openai
|
Summarize from Feedback contains the human feedback data released by the "Learning to summarize from human feedback" paper.
|
ds4sd/DocLayNet
|
ds4sd
|
DocLayNet is a human-annotated document layout segmentation dataset from a broad variety of document sources.
|
reazon-research/reazonspeech
|
reazon-research
|
Dataset Card for ReazonSpeech
Dataset Summary
This dataset contains a diverse set of natural Japanese speech, collected
from terrestrial television streams. It contains more than 35,000 hours of
audio.
Paper: ReazonSpeech: A Free and Massive Corpus for Japanese ASR
Disclaimer
TO USE THIS DATASET, YOU MUST AGREE THAT YOU WILL USE THE DATASET
SOLELY FOR THE PURPOSE OF JAPANESE COPYRIGHT ACT ARTICLE 30-4.
Dataset Format
Audio files are… See the full description on the dataset page: https://huggingface.co/datasets/reazon-research/reazonspeech.
|
keremberke/table-extraction
|
keremberke
|
Dataset Labels
['bordered', 'borderless']
Number of Images
{'test': 34, 'train': 238, 'valid': 70}
How to Use
Install datasets:
pip install datasets
Load the dataset:
from datasets import load_dataset
ds = load_dataset("keremberke/table-extraction", name="full")
example = ds['train'][0]
Roboflow Dataset Page
https://universe.roboflow.com/mohamed-traore-2ekkp/table-extraction-pdf/dataset/2
Citation… See the full description on the dataset page: https://huggingface.co/datasets/keremberke/table-extraction.
|
swaption2009/20k-en-zh-translation-pinyin-hsk
|
swaption2009
|
20,000+ Chinese sentences with translations and pinyin
Source: https://mnemosyne-proj.org/cards/20000-chinese-sentences-translations-and-pinyin
Contributed by: Brian Vaughan http://brianvaughan.net/
Dataset Structure
Each sample consists of:
English sentence
HSK level
Chinese translation
Pinyin
separator ("--")
Other Info from the Source
HSK level
All of the sentences came from sample sentences intended to describe a
particular… See the full description on the dataset page: https://huggingface.co/datasets/swaption2009/20k-en-zh-translation-pinyin-hsk.
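A minimal sketch of parsing the per-sample layout listed above, assuming the raw export is a plain-text file where each sample's lines (English sentence, HSK level, Chinese translation, Pinyin) are followed by a "--" separator line; the file name and exact field order are assumptions.
# Hypothetical parser for the record layout described above.
def parse_records(path="sentences.txt"):
    records, block = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line == "--":  # separator closes one sample
                if len(block) >= 4:
                    records.append({
                        "english": block[0],
                        "hsk_level": block[1],
                        "chinese": block[2],
                        "pinyin": block[3],
                    })
                block = []
            elif line:
                block.append(line)
    return records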
|
alespalla/chatbot_instruction_prompts
|
alespalla
|
Dataset Card for Chatbot Instruction Prompts Datasets
Dataset Summary
This dataset has been generated from the following ones:
tatsu-lab/alpaca
Dahoas/instruct-human-assistant-prompt
allenai/prosocial-dialog
The datasets have been cleaned of spurious entries and artifacts. The result contains ~500k prompt and expected-response pairs. This dataset is intended to train an instruct-type model.
|
HuggingFaceH4/CodeAlpaca_20K
|
HuggingFaceH4
|
This dataset splits the original CodeAlpaca dataset into train and test splits.
|
mozilla-foundation/common_voice_13_0
|
mozilla-foundation
|
Dataset Card for Common Voice Corpus 13.0
Dataset Summary
The Common Voice dataset consists of unique MP3 files and corresponding text files.
Many of the 27,141 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 17,689 validated hours in 108 languages, but more voices and languages are always being added.
Take a look at the Languages… See the full description on the dataset page: https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0.
|
bigcode/starcoderdata
|
bigcode
|
StarCoder Training Dataset
Dataset description
This is the dataset used for training StarCoder and StarCoderBase. It contains 783GB of code in 86 programming languages, and includes 54GB of GitHub Issues + 13GB of Jupyter notebooks (as scripts and text-code pairs),
and 32GB of GitHub commits, totalling approximately 250 billion tokens.
Dataset creation
The creation and filtering of The Stack is explained in the original dataset card; we additionally decontaminate… See the full description on the dataset page: https://huggingface.co/datasets/bigcode/starcoderdata.
|
RyokoAI/ShareGPT52K
|
RyokoAI
|
Dataset Card for ShareGPT90K
Dataset Summary
This dataset is a collection of approximately 90,000 conversations scraped via the ShareGPT API before it was shut down.
These conversations include both user prompts and responses from OpenAI's ChatGPT.
This repository now contains the new 90K conversations version. The previous 52K may
be found in the old/ directory.
Supported Tasks and Leaderboards
text-generation
Languages… See the full description on the dataset page: https://huggingface.co/datasets/RyokoAI/ShareGPT52K.
|
anon8231489123/ShareGPT_Vicuna_unfiltered
|
anon8231489123
|
Further cleaning done. Please look through the dataset and ensure that I didn't miss anything.
Update: Confirmed working method for training the model: https://huggingface.co/AlekseyKorshuk/vicuna-7b/discussions/4#64346c08ef6d5abefe42c12c
Two choices:
Removes instances of "I'm sorry, but": https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/blob/main/ShareGPT_V3_unfiltered_cleaned_split_no_imsorry.json
Has instances of "I'm sorry, but":… See the full description on the dataset page: https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered.
|
dominguesm/Canarim-Instruct-PTBR-Dataset
|
dominguesm
|
🐥 🇧🇷 Canarim Instruct Dataset
[🐱 Github]
What's Canarim?
Canarim is a dataset with over 300,000 instructions in Portuguese, ranging from simple instructions like "Descreva os efeitos do aquecimento global" to more complex instructions like "Nesta tarefa, você precisa ser capaz de resumir uma determinada lista de pontos-chave" where additional context is provided.
Why is it called Canarim?
"Canarim" is spoken in some regions of Brazil… See the full description on the dataset page: https://huggingface.co/datasets/dominguesm/Canarim-Instruct-PTBR-Dataset.
|
kunishou/databricks-dolly-15k-ja
|
kunishou
|
This dataset was created by automatically translating "databricks-dolly-15k" into Japanese. This dataset is licensed under CC-BY-SA-3.0.
Last Update : 2023-05-11
databricks-dolly-15k-ja: https://github.com/kunishou/databricks-dolly-15k-ja
databricks-dolly-15k: https://github.com/databrickslabs/dolly/tree/master/data
|
OpenAssistant/oasst1
|
OpenAssistant
|
OpenAssistant Conversations Dataset (OASST1)
Dataset Summary
In an effort to democratize research on large-scale alignment, we release OpenAssistant
Conversations (OASST1), a human-generated, human-annotated assistant-style conversation
corpus consisting of 161,443 messages in 35 different languages, annotated with 461,292
quality ratings, resulting in over 10,000 fully annotated conversation trees. The corpus
is a product of a worldwide crowd-sourcing effort… See the full description on the dataset page: https://huggingface.co/datasets/OpenAssistant/oasst1.
|
shareAI/ShareGPT-Chinese-English-90k
|
shareAI
|
ShareGPT-Chinese-English-90k Bilingual Human-Machine QA Dataset
A high-quality Chinese-English parallel bilingual human-machine QA dataset, covering user questions in real and complex scenarios. It is used for training high-quality dialogue models (more robust in instruction distribution than those datasets generated by repeatedly calling API interfaces to simulate machine-generated Q&A, like Moss)
Features:
Provides fully semantically equivalent Chinese-English parallel… See the full description on the dataset page: https://huggingface.co/datasets/shareAI/ShareGPT-Chinese-English-90k.
|
liuhaotian/LLaVA-CC3M-Pretrain-595K
|
liuhaotian
|
LLaVA Visual Instruct CC3M 595K Pretrain Dataset Card
Dataset details
Dataset type:
LLaVA Visual Instruct CC3M Pretrain 595K is a subset of CC-3M dataset, filtered with a more balanced concept coverage distribution.
Captions are also associated with BLIP synthetic captions for reference.
It is constructed for the pretraining stage for feature alignment in visual instruction tuning.
We aim to build large multimodal models towards GPT-4 vision/language capability.
Dataset… See the full description on the dataset page: https://huggingface.co/datasets/liuhaotian/LLaVA-CC3M-Pretrain-595K.
|
howard-hou/OCR-VQA
|
howard-hou
|
Dataset Card for "OCR-VQA"
More Information needed
|
WizardLMTeam/WizardLM_evol_instruct_70k
|
WizardLMTeam
|
This is the training data of WizardLM.
News
🔥 🔥 🔥 [08/11/2023] We release WizardMath Models.
🔥 Our WizardMath-70B-V1.0 model slightly outperforms some closed-source LLMs on the GSM8K, including ChatGPT 3.5, Claude Instant 1 and PaLM 2 540B.
🔥 Our WizardMath-70B-V1.0 model achieves 81.6 pass@1 on the GSM8k Benchmarks, which is 24.8 points higher than the SOTA open-source LLM.
🔥 Our WizardMath-70B-V1.0 model achieves 22.7 pass@1 on the MATH Benchmarks, which is 9.2 points… See the full description on the dataset page: https://huggingface.co/datasets/WizardLMTeam/WizardLM_evol_instruct_70k.
|
bigcode/governance-card
|
bigcode
|
This document serves as an overview of the different mechanisms and areas of governance in the BigCode project.
It aims to support transparency by providing relevant information about choices that were made during the project to the broader public,
and to serve as an example of intentional governance of an open research project that future endeavors can leverage to shape their own approach. The first section, Project Structure, covers the project organization, its stated goals and values, its… See the full description on the dataset page: https://huggingface.co/datasets/bigcode/governance-card.
|
FreedomIntelligence/huatuo_knowledge_graph_qa
|
FreedomIntelligence
|
Dataset Card for Huatuo_knowledge_graph_qa
Dataset Summary
We built this QA dataset based on a medical knowledge graph, with a total of 798,444 entries; the questions are constructed from templates, and the answers are the contents of the corresponding entries in the knowledge graph.
Dataset Creation
Source Data… See the full description on the dataset page: https://huggingface.co/datasets/FreedomIntelligence/huatuo_knowledge_graph_qa.
|
cognitivecomputations/wizard_vicuna_70k_unfiltered
|
cognitivecomputations
|
This dataset is the wizard_vicuna dataset junelee/wizard_vicuna_70k, removing conversations with alignment.
34,598 conversations remain.
inspired by https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered
All credit to anon8231489123; I basically took his scripts and applied them to this new dataset.
|
tiiuae/falcon-refinedweb
|
tiiuae
|
📀 Falcon RefinedWeb
Falcon RefinedWeb is a massive English web dataset built by TII and released under an ODC-By 1.0 license.
See the 📓 paper on arXiv for more details.
RefinedWeb is built through stringent filtering and large-scale deduplication of CommonCrawl; we found models trained on RefinedWeb to achieve performance in line with or better than models trained on curated datasets, while only relying on web data.
RefinedWeb is also "multimodal-friendly": it contains links and… See the full description on the dataset page: https://huggingface.co/datasets/tiiuae/falcon-refinedweb.
|
FreedomIntelligence/huatuo_encyclopedia_qa
|
FreedomIntelligence
|
Dataset Card for Huatuo_encyclopedia_qa
Dataset Summary
This dataset has a total of 364,420 pieces of medical QA data, some of which have multiple questions in different ways. We extract medical QA pairs from plain texts (e.g., medical encyclopedias and medical articles). We collected 8,699 encyclopedia entries for diseases and 2,736 encyclopedia entries for medicines on Chinese Wikipedia. Moreover, we crawled 226,432 high-quality medical articles from the Qianwen… See the full description on the dataset page: https://huggingface.co/datasets/FreedomIntelligence/huatuo_encyclopedia_qa.
|
Haidra-Org/AI-Horde-Ratings
|
Haidra-Org
|
AI Horde Aesthetic and Artifact Ratings
A dataset of exported aesthetic and artifact ratings provided by the AI Horde community through our open ratings API.
Each row in this dataset presents the rating for a single image from DiffusionDB. Each image UUID in this parquet will match the DiffusionDB filename.
Each rating contains an aesthetic rating of 1-10, where 1 represents an image most found distasteful, and 10 an image most found very pleasing. This is an explicitly… See the full description on the dataset page: https://huggingface.co/datasets/Haidra-Org/AI-Horde-Ratings.
|
lucasmccabe-lmi/CodeAlpaca-20k
|
lucasmccabe-lmi
|
Dataset Card for "CodeAlpaca-20k"
We provide a minor modification of the CodeAlpaca-20k dataset. In particular, we add the phrase, "Write corresponding code in Python." if the intended language is not explicitly stated.
Numbers:
Prompts: 20,022
Tokens: 1,561,716 using the EleutherAI/gpt-neox-20b tokenizer (counting instruction+input+output)
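A minimal sketch of how the token count above could be reproduced, assuming the standard Alpaca-style "instruction"/"input"/"output" columns and a "train" split; these names are assumptions, not confirmed by this card.
from datasets import load_dataset
from transformers import AutoTokenizer

# Count tokens over instruction+input+output with the gpt-neox-20b tokenizer.
tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
ds = load_dataset("lucasmccabe-lmi/CodeAlpaca-20k", split="train")
total = sum(
    len(tok(row["instruction"] + row["input"] + row["output"])["input_ids"])
    for row in ds
)
print(total)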
|
Slep/LAION-RVS-Fashion
|
Slep
|
LAION - Referred Visual Search - Fashion
Introduced in LRVSF-Fashion: Extending Visual Search with Referring Instructions
Simon Lepage
—
Jérémie Mary
—
David Picard
CRITEO AI Lab
&
ENPC
Useful Links
Test set —
Benchmark Code —
LRVS-F Leaderboard —
Demo
Composition
LAION-RVS-Fashion is composed of images from:
LAION 2B EN
LAION 2B MULTI TRANSLATED
LAION 1B NOLANG TRANSLATED
These images have been grouped based on extracted product IDs. Each… See the full description on the dataset page: https://huggingface.co/datasets/Slep/LAION-RVS-Fashion.
|
flaviagiammarino/vqa-rad
|
flaviagiammarino
|
Dataset Card for VQA-RAD
Dataset Description
VQA-RAD is a dataset of question-answer pairs on radiology images. The dataset is intended to be used for training and testing
Medical Visual Question Answering (VQA) systems. The dataset includes both open-ended questions and binary "yes/no" questions.
The dataset is built from MedPix, which is a free open-access online database of medical images.
The question-answer pairs were manually generated by a team of… See the full description on the dataset page: https://huggingface.co/datasets/flaviagiammarino/vqa-rad.
|
GAIR/lima
|
GAIR
|
A high-quality dataset for efficient instruction tuning.
|
PKU-Alignment/BeaverTails
|
PKU-Alignment
|
Dataset Card for BeaverTails
BeaverTails is an AI safety-focused collection comprising a series of datasets.
This repository includes human-labeled data consisting of question-answer (QA) pairs, each identified with their corresponding harm categories.
It should be noted that a single QA pair can be associated with more than one category.
The 14 harm categories are defined as follows:
Animal Abuse: This involves any form of cruelty or harm inflicted on animals, including… See the full description on the dataset page: https://huggingface.co/datasets/PKU-Alignment/BeaverTails.
|
cerebras/SlimPajama-627B
|
cerebras
|
The dataset consists of 59,166 jsonl files and is ~895GB compressed. It is a cleaned and deduplicated version of Together's RedPajama.
Check out our blog post explaining our methods, our code on GitHub, and join the discussion on the Cerebras Discord.
Getting Started
You can download the dataset using Hugging Face datasets:
from datasets import load_dataset
ds = load_dataset("cerebras/SlimPajama-627B")
Background
Today we are releasing SlimPajama – the largest… See the full description on the dataset page: https://huggingface.co/datasets/cerebras/SlimPajama-627B.
|
maharshipandya/spotify-tracks-dataset
|
maharshipandya
|
Content
This is a dataset of Spotify tracks over a range of 125 different genres. Each track has some audio features associated with it. The data is in CSV format which is tabular and can be loaded quickly.
Usage
The dataset can be used for:
Building a Recommendation System based on some user input or preference
Classification purposes based on audio features and available genres
Any other application that you can think of. Feel free to discuss!… See the full description on the dataset page: https://huggingface.co/datasets/maharshipandya/spotify-tracks-dataset.
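A minimal sketch of loading the tabular data for any of the uses above; the split name is an assumption, and the exact audio-feature columns should be checked via column_names.
from datasets import load_dataset

# Load the CSV-backed dataset and move it into a pandas DataFrame for analysis.
ds = load_dataset("maharshipandya/spotify-tracks-dataset", split="train")
print(ds.column_names)
df = ds.to_pandas()
print(df.head())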
|
zzliang/GRIT
|
zzliang
|
GRIT: Large-Scale Training Corpus of Grounded Image-Text Pairs
Dataset Summary
We introduce GRIT, a large-scale dataset of Grounded Image-Text pairs, which is created based on image-text pairs from COYO-700M and LAION-2B. We construct a pipeline to extract and link text spans (i.e., noun phrases, and referring expressions) in the caption to their corresponding image regions. More details can be found in the paper.
Supported Tasks
During the… See the full description on the dataset page: https://huggingface.co/datasets/zzliang/GRIT.
|
matorus/coder
|
matorus
|
Training dataset for fine-tuning for HumanEval.
This dataset has been created from the following datasets:
sahil2801/CodeAlpaca-20k
sahil2801/code_instructions_120k
mhhmm/leetcode-solutions-python
teknium1/GPTeacher
Script for generating dataset: create_dataset.py.
|
theblackcat102/evol-codealpaca-v1
|
theblackcat102
|
Evolved codealpaca
Updates:
2023/08/26 - Filtered results now contain only pure English instructions, with any mention of being trained by OpenAI removed from responses
Median sequence length: 471
We employed a methodology similar to that of WizardCoder, with the exception that ours is open-source. We used the gpt-4-0314 and gpt-4-0613 models to augment and answer each response, with the bulk of generation handled by gpt-4-0314.
The aim of this dataset is twofold: firstly, to facilitate the… See the full description on the dataset page: https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1.
|
mlabonne/guanaco-llama2-1k
|
mlabonne
|
Guanaco-1k: Lazy Llama 2 Formatting
This is a subset (1000 samples) of the excellent timdettmers/openassistant-guanaco dataset, processed to match Llama 2's prompt format as described in this article. It was created using the following colab notebook.
Useful if you don't want to reformat it by yourself (e.g., using a script). It was designed for this article about fine-tuning a Llama 2 (chat) model in a Google Colab.
|
heliosbrahma/mental_health_chatbot_dataset
|
heliosbrahma
|
Dataset Card for "heliosbrahma/mental_health_chatbot_dataset"
Dataset Description
Dataset Summary
This dataset contains conversational pairs of questions and answers in a single text related to Mental Health. The dataset was curated from popular healthcare blogs like WebMD, Mayo Clinic and HealthLine, online FAQs, etc. All questions and answers have been anonymized to remove any PII data and pre-processed to remove any unwanted characters.… See the full description on the dataset page: https://huggingface.co/datasets/heliosbrahma/mental_health_chatbot_dataset.
|
LibrAI/do-not-answer
|
LibrAI
|
Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs
Overview
Do-Not-Answer is an open-source dataset to evaluate LLMs' safety mechanisms at a low cost. The dataset is curated and filtered to consist only of prompts to which responsible language models do not answer.
Besides human annotations, Do-Not-Answer also implements model-based evaluation, where a 600M fine-tuned BERT-like evaluator achieves results comparable with human and GPT-4 evaluation.… See the full description on the dataset page: https://huggingface.co/datasets/LibrAI/do-not-answer.
|
qgyd2021/chinese_ner_sft
|
qgyd2021
|
Chinese Named Entity Recognition Instruction Dataset
Open-source entity recognition datasets are collected and converted into SFT data for LLM fine-tuning.
The goal of this dataset is to support research on general-purpose entity recognition with LLMs.
The dataset is divided into three major categories:
{dataset_name}, {dataset_name}_template, {dataset_name}_prompt.
{dataset_name}: the corresponding entity recognition dataset.
{dataset_name}_template: prompt templates written for each dataset; since the datasets cover different topics, writing templates per dataset is more accurate.
{dataset_name}_prompt: a prompt dataset synthesized from {dataset_name} and {dataset_name}_template. Since it is generated dynamically, Hugging Face may not be able to display it; some data examples are shown below.
Data example:
{
"prompt": "在做手机智能助手上, 你需要识别用户话语中的关键实体, 实体类型包括:\n联系人姓名,场景,主旋律,乐器名称,曲风,手机号码,语言,时代,目的地… See the full description on the dataset page: https://huggingface.co/datasets/qgyd2021/chinese_ner_sft.
|
teknium/openhermes
|
teknium
|
OpenHermes Dataset
The OpenHermes dataset is composed of 242,000 entries of primarily GPT-4 generated data, from open datasets across the AI landscape, including:
OpenHermes 13B is the first fine-tune of the Hermes dataset that has a fully open-source dataset!
OpenHermes was trained on 242,000 entries of primarily GPT-4 generated data, from open datasets across the AI landscape, including:
GPTeacher - General Instruct, Roleplay v1, Roleplay v2, and Code Instruct Datasets, by… See the full description on the dataset page: https://huggingface.co/datasets/teknium/openhermes.
|
bugdaryan/sql-create-context-instruction
|
bugdaryan
|
Overview
This dataset is built upon SQL Create Context, which in turn was constructed using data from WikiSQL and Spider.
There are 78,577 examples of natural language queries, SQL CREATE TABLE statements, and SQL Query answering the question using the CREATE statement as context. This dataset was built with text-to-SQL LLMs in mind, intending to prevent hallucination of column and table names often seen when trained on text-to-SQL datasets. The CREATE TABLE statement can often… See the full description on the dataset page: https://huggingface.co/datasets/bugdaryan/sql-create-context-instruction.
|
re-align/just-eval-instruct
|
re-align
|
Just Eval Instruct
Highlights
Data sources:
AlpacaEval (covering 5 datasets),
LIMA-test,
MT-bench,
Anthropic red-teaming,
and MaliciousInstruct.
1K examples: 1,000 instructions, including 800 for problem-solving test, and 200 specifically for safety test.
Category: We tag each example with (one or multiple) labels on its task types and topics.… See the full description on the dataset page: https://huggingface.co/datasets/re-align/just-eval-instruct.
|
a686d380/sis-novel
|
a686d380
|
This is a Chinese adult (H) novel dataset, collected from sis001.
sis-novel1 contains short and medium-length novels: 112,182 items, 5.7 GB after decompression, with data up to July 2022.
sis-novel2 contains full-length novels: 4,555 items, 3.6 GB after decompression, with data up to March 2023.
The data are uncleaned txt files and may contain comments.
|
glaiveai/glaive-code-assistant
|
glaiveai
|
Glaive-code-assistant
Glaive-code-assistant is a dataset of ~140k code problems and solutions generated using Glaive’s synthetic data generation platform.
The data is intended to be used to make models act as code assistants, and so the data is structured in a QA format where the questions are worded similarly to how real users ask code-related questions.
About 60% of the samples are Python.
To report any problems or suggestions in the data, join the Glaive Discord.
|
ShengbinYue/DISC-Law-SFT
|
ShengbinYue
|
DISC-Law-SFT Dataset
Intelligent legal systems in Chinese require a combination of various abilities, including legal text understanding and generation. To achieve this, we have constructed a high-quality supervised fine-tuning dataset called DISC-Law-SFT, which covers different legal scenarios such as legal information extraction, legal judgment prediction, legal document summarization, and legal question answering. DISC-Law-SFT comprises two subsets, DISC-Law-SFT-Pair and… See the full description on the dataset page: https://huggingface.co/datasets/ShengbinYue/DISC-Law-SFT.
|
MBZUAI/VideoInstruct-100K
|
MBZUAI
|
VideoInstruct100K is a high-quality video conversation dataset generated using human-assisted and semi-automatic annotation techniques. The question-answer pairs in the dataset are related to:
Video Summarization
Description-based question-answers (exploring spatial, temporal, relationship, and reasoning concepts)
Creative/generative question-answers
For more details, please visit Oryx/VideoChatGPT/video-instruction-data-generation.
If you find this dataset useful, please consider citing the… See the full description on the dataset page: https://huggingface.co/datasets/MBZUAI/VideoInstruct-100K.
|
FinGPT/fingpt-fiqa_qa
|
FinGPT
|
Dataset Card for "fingpt-fiqa_qa"
More Information needed
|
heegyu/chart2text_statista
|
heegyu
|
Dataset Card for "chart2text_statista"
original dataset: https://github.com/vis-nlp/Chart-to-text
|
AI4Math/MathVista
|
AI4Math
|
Dataset Card for MathVista
Dataset Description
Paper Information
Dataset Examples
Leaderboard
Dataset Usage
Data Downloading
Data Format
Data Visualization
Data Source
Automatic Evaluation
License
Citation
Dataset Description
MathVista is a consolidated Mathematical reasoning benchmark within Visual contexts. It consists of three newly created datasets, IQTest, FunctionQA, and PaperQA, which address the missing visual domains and are tailored to evaluate… See the full description on the dataset page: https://huggingface.co/datasets/AI4Math/MathVista.
|
Weyaxi/huggingface-leaderboard
|
Weyaxi
|
Huggingface Leaderboard's History Dataset
🏆 This is the history dataset of Huggingface Leaderboard.
🗒️ This dataset contains full dataframes in a CSV file for each time lapse.
⌛ This dataset is automatically updated when the Space restarts (approximately every 6 hours).
Leaderboard Link
🔗 Weyaxi/huggingface-leaderboard
|
satellite-image-deep-learning/SODA-A
|
satellite-image-deep-learning
|
SODA-A comprises 2,513 high-resolution images of aerial scenes, with 872,069 instances annotated with oriented rectangle box annotations over 9 classes.
Website
|
HuggingFaceH4/ultrafeedback_binarized
|
HuggingFaceH4
|
Dataset Card for UltraFeedback Binarized
Dataset Description
This is a pre-processed version of the UltraFeedback dataset and was used to train Zephyr-7B-β, a state-of-the-art chat model at the 7B parameter scale.
The original UltraFeedback dataset consists of 64k prompts, where each prompt is accompanied with four model completions from a wide variety of open and proprietary models. GPT-4 is then used to assign a score to each completion, along criteria like… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized.
|
lmsys/toxic-chat
|
lmsys
|
Update
[01/31/2024] We update the OpenAI Moderation API results for ToxicChat (0124) based on their updated moderation model from Jan 25, 2024. [01/28/2024] We release an official T5-Large model trained on ToxicChat (toxicchat0124). Go and check it for your baseline comparison! [01/19/2024] We have a new version of ToxicChat (toxicchat0124)!
Content
This dataset contains toxicity annotations on 10K user prompts collected from the Vicuna online demo.
We utilize a… See the full description on the dataset page: https://huggingface.co/datasets/lmsys/toxic-chat.
|
allenai/WildChat
|
allenai
|
Dataset Card for WildChat
Note: a newer version with 1 million conversations and demographic information can be found here.
Dataset Description
Paper: https://arxiv.org/abs/2405.01470
Interactive Search Tool: https://wildvisualizer.com (paper)
License: ODC-BY
Language(s) (NLP): multi-lingual
Point of Contact: Yuntian Deng
Dataset Summary
WildChat is a collection of 650K conversations between human users and ChatGPT. We… See the full description on the dataset page: https://huggingface.co/datasets/allenai/WildChat.
|
hltcoe/megawika-report-generation
|
hltcoe
|
Dataset Card for MegaWika for Report Generation
Dataset Summary
MegaWika is a multi- and crosslingual text dataset containing 30 million Wikipedia passages with their scraped and cleaned web citations. The passages span
50 Wikipedias in 50 languages, and the articles in which the passages were originally embedded are included for convenience. Where a Wikipedia passage is in a
non-English language, an automated English translation is provided.
This dataset… See the full description on the dataset page: https://huggingface.co/datasets/hltcoe/megawika-report-generation.
|
kunishou/oasst1-chat-44k-ja
|
kunishou
|
This dataset is oasst1-89k-ja converted into chat format. Please use it when fine-tuning on multi-turn conversations (the token length per record is large, so a fair amount of compute resources is required). The format follows the ShareGPT format. When fine-tuning, please refer to this article.
oasst1-ja-89k Repository: https://github.com/kunishou/oasst1-89k-ja
OpenAssistant/oasst1: https://huggingface.co/datasets/OpenAssistant/oasst1
|
facebook/emu_edit_test_set_generations
|
facebook
|
Dataset Card for the Emu Edit Generations on Emu Edit Test Set
Dataset Summary
This dataset contains Emu Edit's generations on the Emu Edit test set. For more information please read our paper or visit our homepage.
Licensing Information
Licensed with CC-BY-NC 4.0 License available here.
Citation Information
@inproceedings{Sheynin2023EmuEP,
title={Emu Edit: Precise Image Editing via Recognition and Generation Tasks}… See the full description on the dataset page: https://huggingface.co/datasets/facebook/emu_edit_test_set_generations.
|
EDS-lab/electricity-demand
|
EDS-lab
|
Electricity Demand Dataset
This dataset compiles and harmonizes multiple open smart meter datasets.
Curated by: Attila Balint
License: BSD 3-clause "New" or "Revised" licence
Uses
This smart meter dataset primarily facilitates electricity demand forecasting.
Dataset Structure
The dataset contains three main files.
data/demand.parquet
data/metadata.parquet
data/weather.parquet
data/demand.parquet
This file contains the electricity… See the full description on the dataset page: https://huggingface.co/datasets/EDS-lab/electricity-demand.
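A minimal sketch of pulling the three parquet files listed above with huggingface_hub and pandas; it assumes pandas has a parquet engine (e.g. pyarrow) installed.
import pandas as pd
from huggingface_hub import hf_hub_download

# Download and read each of the parquet files named in the card.
tables = {}
for name in ["demand", "metadata", "weather"]:
    path = hf_hub_download(
        repo_id="EDS-lab/electricity-demand",
        filename=f"data/{name}.parquet",
        repo_type="dataset",
    )
    tables[name] = pd.read_parquet(path)
print(tables["demand"].head())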
|
HAERAE-HUB/KMMLU
|
HAERAE-HUB
|
KMMLU (Korean-MMLU)
We propose KMMLU, a new Korean benchmark with 35,030 expert-level multiple-choice questions across 45 subjects ranging from humanities to STEM.
Unlike previous Korean benchmarks that are translated from existing English benchmarks, KMMLU is collected from original Korean exams, capturing linguistic and cultural aspects of the Korean language.
We test 26 publicly available and proprietary LLMs, identifying significant room for improvement.
The best… See the full description on the dataset page: https://huggingface.co/datasets/HAERAE-HUB/KMMLU.
|
ise-uiuc/Magicoder-Evol-Instruct-110K
|
ise-uiuc
|
A decontaminated version of evol-codealpaca-v1. Decontamination is done in the same way as StarCoder (bigcode decontamination process).
|
XzJosh/audiodataset
|
XzJosh
|
Audio Dataset
Created by: Xz乔希
Notes
1. All data are taken from video clips of the corresponding person; the voice copyright belongs to that person. Early clips of poor quality were not uploaded.
2. The audio has only undergone vocal separation and automatic slicing, without manual curation; please download and spot-check samples before deciding whether to use it (discarded audio was skipped during manual annotation).
3. Manual annotation files are included for some items (manual annotation cannot guarantee every sentence is labeled precisely; you can check them yourself).
4. Please test and use only within the bounds of the law! Any problems arising from the use of this dataset are your own responsibility!
|
argilla/ultrafeedback-binarized-preferences-cleaned
|
argilla
|
UltraFeedback - Binarized using the Average of Preference Ratings (Cleaned)
This dataset represents a new iteration on top of argilla/ultrafeedback-binarized-preferences,
and is the recommended and preferred dataset by Argilla to use from now on when fine-tuning on UltraFeedback.
Read more about Argilla's approach towards UltraFeedback binarization at argilla/ultrafeedback-binarized-preferences/README.md.
Differences with argilla/ultrafeedback-binarized-preferences… See the full description on the dataset page: https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences-cleaned.
|
Anthropic/discrim-eval
|
Anthropic
|
Dataset Card for Discrim-Eval
Dataset Summary
The data contains a diverse set of prompts covering 70 hypothetical decision scenarios, ranging from approving a loan to providing press credentials.
Each prompt instructs the model to make a binary decision (yes/no)
about a particular person described in the prompt.
Each person is described in terms of three demographic attributes:
age (ranging from 20 to 100 in increments of 10), gender (male, female, non-binary)… See the full description on the dataset page: https://huggingface.co/datasets/Anthropic/discrim-eval.
|
allenai/reward-bench
|
allenai
|
Code | Leaderboard | Prior Preference Sets | Results | Paper
Reward Bench Evaluation Dataset Card
The RewardBench evaluation dataset evaluates capabilities of reward models over the following categories:
Chat: Includes the easy chat subsets (alpacaeval-easy, alpacaeval-length, alpacaeval-hard, mt-bench-easy, mt-bench-medium)
Chat Hard: Includes the hard chat subsets (mt-bench-hard, llmbar-natural, llmbar-adver-neighbor, llmbar-adver-GPTInst, llmbar-adver-GPTOut… See the full description on the dataset page: https://huggingface.co/datasets/allenai/reward-bench.
|
google/Synthetic-Persona-Chat
|
google
|
Dataset Card for SPC: Synthetic-Persona-Chat Dataset
Abstract from the paper introducing this dataset:
High-quality conversational datasets are essential for developing AI models that can communicate with users. One way to foster deeper interactions between a chatbot and its user is through personas, aspects of the user's character that provide insights into their personality, motivations, and behaviors. Training Natural Language Processing (NLP) models on a diverse and… See the full description on the dataset page: https://huggingface.co/datasets/google/Synthetic-Persona-Chat.
|
osv5m/osv5m
|
osv5m
|
OpenStreetView-5M: The Many Roads to Global Visual Geolocation 📍🌍
First authors: Guillaume Astruc, Nicolas Dufour, Ioannis Siglidis. Second authors: Constantin Aronssohn, Nacim Bouia, Stephanie Fu, Romain Loiseau, Van Nguyen Nguyen, Charles Raude, Elliot Vincent, Lintao XU, Hongyu Zhou. Last author: Loic Landrieu. Research Institute: Imagine, LIGM, Ecole des Ponts, Univ Gustave Eiffel, CNRS, Marne-la-Vallée, France
Introduction 🌍
OpenStreetView-5M is the first… See the full description on the dataset page: https://huggingface.co/datasets/osv5m/osv5m.
|
HuggingFaceM4/WebSight
|
HuggingFaceM4
|
Dataset Card for WebSight
Dataset Description
WebSight is a large synthetic dataset containing HTML/CSS codes representing synthetically generated English websites, each accompanied by a corresponding screenshot.
This dataset serves as a valuable resource for tasks such as generating UI codes from a screenshot.
It comes in two versions:
v0.1: Websites are coded with HTML + CSS. They do not include real images.
v0.2: Websites are coded with HTML + Tailwind CSS.… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceM4/WebSight.
|
Tele-AI/TeleChat-PTD
|
Tele-AI
|
TeleChat Pre-training Dataset (TeleChat-PTD)
🤗 Hugging Face • 🏔 MindSpore️ • 🦉 github️ • 🐾 gitee️ • 💬 WeChat
Tech Report
Data Introduction
TeleChat-PTD is a comprehensive, large-scale Chinese dataset extracted from the pre-training corpus of the TeleChat large model from China Telecom. The data mainly come from web pages, books, official media, and similar sources. We filtered the data using a combination of rules and models and performed similarity-based deduplication, so as to extract data of the highest possible quality.
TeleChat-PTD releases roughly 270 million records of pure Chinese text, about 1 TB in raw size and 480 GB compressed, across 189 files. Redundant information has already been removed from the dataset.
Data Download
Hugging Face download: Data Download
Tianyi Cloud download: Data Download (access code: pkg8)
Data Format
The data are in jsonl format with a single field, data: one processed pre-training record per line.
Data Cleaning… See the full description on the dataset page: https://huggingface.co/datasets/Tele-AI/TeleChat-PTD.
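A minimal sketch of reading one of the released jsonl shards as described above; the shard file name is a placeholder, but each line is expected to hold a single "data" field.
import json

# Iterate over one downloaded shard, yielding the text of each record.
def iter_records(path):
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)["data"]

for i, text in enumerate(iter_records("telechat_ptd_shard_000.jsonl")):  # placeholder file name
    print(text[:100])
    if i == 2:
        break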
|
osunlp/TravelPlanner
|
osunlp
|
TravelPlanner Dataset
TravelPlanner is a benchmark crafted for evaluating language agents in tool-use and complex planning within multiple constraints. (See our paper for more details.)
Introduction
In TravelPlanner, for a given query, language agents are expected to formulate a comprehensive plan that includes transportation, daily meals, attractions, and accommodation for each day.
TravelPlanner comprises 1,225 queries in total. The number of days and hard… See the full description on the dataset page: https://huggingface.co/datasets/osunlp/TravelPlanner.
|
Teklia/NorHand-v3-line
|
Teklia
|
NorHand v3 - line level
Dataset Summary
The NorHand v3 dataset comprises Norwegian letter and diary line images and text from the 19th and early 20th centuries.
Note that all images are resized to a fixed height of 128 pixels.
Languages
All the documents in the dataset are written in Norwegian Bokmål.
Dataset Structure
Data Instances
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4300x128 at 0x1A800E8E190… See the full description on the dataset page: https://huggingface.co/datasets/Teklia/NorHand-v3-line.
|
lmms-lab/DocVQA
|
lmms-lab
|
Large-scale Multi-modality Models Evaluation Suite
Accelerating the development of large-scale multi-modality models (LMMs) with lmms-eval
🏠 Homepage | 📚 Documentation | 🤗 Huggingface Datasets
This Dataset
This is a formatted version of DocVQA. It is used in our lmms-eval pipeline to allow for one-click evaluations of large multi-modality models.
@article{mathew2020docvqa,
title={DocVQA: A Dataset for VQA on Document Images. CoRR abs/2007.00398 (2020)}… See the full description on the dataset page: https://huggingface.co/datasets/lmms-lab/DocVQA.
|
McAuley-Lab/Amazon-Reviews-2023
|
McAuley-Lab
|
Amazon Reviews 2023 is an updated version of the Amazon Reviews 2018 dataset. This dataset mainly includes reviews (ratings, text) and item metadata (descriptions, category information, price, brand, and images). Compared to the previous versions, the 2023 version features a larger size, newer reviews (up to Sep 2023), richer and cleaner metadata, and finer-grained timestamps (from day to millisecond).
|
Shitao/MLDR
|
Shitao
|
Dataset Summary
MLDR is a Multilingual Long-Document Retrieval dataset built on Wikipedia, Wudao and mC4, covering 13 typologically diverse languages. Specifically, we sample lengthy articles from the Wikipedia, Wudao and mC4 datasets and randomly choose paragraphs from them. Then we use GPT-3.5 to generate questions based on these paragraphs. The generated question and the sampled article constitute a new text pair in the dataset. The prompt for GPT-3.5 is “You are a curious AI… See the full description on the dataset page: https://huggingface.co/datasets/Shitao/MLDR.
|
Locutusque/function-calling-chatml
|
Locutusque
|
Dataset Card for "function-calling-chatml"
Converted glaiveai/Glaive-function-calling-v2 to chatml format.
Example entry
[ { "from": "system", "value": "You are a helpful assistant with access to the following functions. Use them if required -{\n \"name\": \"create_contact\",\n \"description\": \"Create a new contact\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"name\": {\n \"type\": \"string\",\n \"description\": \"The name of the… See the full description on the dataset page: https://huggingface.co/datasets/Locutusque/function-calling-chatml.
|
PJMixers/Math-Multiturn-1K-ShareGPT
|
PJMixers
|
All samples were created with this script; no GPT, just Python.
|
nvidia/OpenMathInstruct-1
|
nvidia
|
OpenMathInstruct-1
OpenMathInstruct-1 is a math instruction tuning dataset with 1.8M problem-solution pairs
generated using the permissively licensed Mixtral-8x7B model.
The problems are from GSM8K
and MATH training subsets and the solutions
are synthetically generated by allowing the Mixtral model to use a mix of text reasoning and
code blocks executed by a Python interpreter.
The dataset is split into train and validation subsets that we used in the ablation experiments.
These two… See the full description on the dataset page: https://huggingface.co/datasets/nvidia/OpenMathInstruct-1.
|
HuggingFaceTB/cosmopedia
|
HuggingFaceTB
|
Cosmopedia v0.1
Image generated by DALL-E; the prompt was generated by Mixtral-8x7B-Instruct-v0.1
Note: Cosmopedia v0.2 is available at smollm-corpus
User: What do you think "Cosmopedia" could mean? Hint: in our case it's not related to cosmology.
Mixtral-8x7B-Instruct-v0.1: A possible meaning for "Cosmopedia" could be an encyclopedia or collection of information about
different cultures, societies, and topics from around the world, emphasizing diversity and global… See the full description on the dataset page: https://huggingface.co/datasets/HuggingFaceTB/cosmopedia.
|
Henrychur/MMedC
|
Henrychur
|
MMedC
💻Github Repo 🖨️arXiv Paper
The official pre-training dataset for "Towards Building Multilingual Language Model for Medicine".
News
We add Arabic and German corpus to MMedC.
Introduction
This repo contains MMedC, a multilingual medical corpus with 25.5 billion tokens.
Language | Family | Filtering Content | Textbooks | Websites | Small-scale Dataset | TotAmt
---|---|---|---|---|---|---
English | Indo-European | 6.56 | 4.00 | 0.00 | 0.00 | 10.56
Spanish | Indo-European | 3.98 | 0.31… See the full description on the dataset page: https://huggingface.co/datasets/Henrychur/MMedC.
|
orai-nlp/ZelaiHandi
|
orai-nlp
|
Dataset Card for 🌱ZelaiHandi
🌱ZelaiHandi: A Large Collection of Basque Texts.
ZelaiHandi, which means "large pasture" in Basque, is the largest collection of freely licensed and clean Basque texts to date (March 4th, 2024), gathered from selected web sources.
This collection comprises approximately 521 million words.
The dataset will receive periodical updates.
The corpus has been released with the objective of “feeding” Large Language Models. Naturally, models that are… See the full description on the dataset page: https://huggingface.co/datasets/orai-nlp/ZelaiHandi.
|
parler-tts/mls_eng
|
parler-tts
|
Dataset Card for English MLS
Dataset Summary
This is a streamable version of the English version of the Multilingual LibriSpeech (MLS) dataset.
The data archives were restructured from the original ones from OpenSLR to make it easier to stream.
MLS dataset is a large multilingual corpus suitable for speech research. The dataset is derived from read audiobooks from LibriVox and consists of
8 languages - English, German, Dutch, Spanish, French, Italian, Portuguese… See the full description on the dataset page: https://huggingface.co/datasets/parler-tts/mls_eng.
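A minimal sketch of streaming the corpus rather than downloading it in full, which is the use case this restructuring targets; the split name and column layout are assumptions.
from datasets import load_dataset

# Stream samples one at a time instead of materializing the whole dataset.
mls = load_dataset("parler-tts/mls_eng", split="train", streaming=True)
sample = next(iter(mls))
print(sample.keys())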
|