id | author | description
---|---|---|
yanbingzheng/LongBench
|
yanbingzheng
|
LongBench is a comprehensive multilingual, multi-task benchmark designed to fully measure and evaluate the ability of pre-trained language models to understand long text. The dataset consists of twenty different tasks, covering key long-text application scenarios such as multi-document QA, single-document QA, summarization, few-shot learning, synthetic tasks, and code completion.
|
luci/questions
|
luci
|
General Overview:
This dataset is a collection of questions and answers in French, focused mainly on technical subjects such as development, DevOps, security, data, machine learning, and many other technology-related fields.
Dataset Structure:
Each element of the dataset is an object with the following fields:
id: a unique identifier for each entry.
category: the category or domain of the question… See the full description on the dataset page: https://huggingface.co/datasets/luci/questions.
|
natmin322/3k_vietnamese_voice_augmented
|
natmin322
|
Dataset Card for "3k_vietnamese_voice_augmented"
More Information needed
|
natmin322/28k_vietnamese_voice_augmented_of_VigBigData
|
natmin322
|
Dataset Card for "28k_vietnamese_voice_augmented_of_VigBigData"
More Information needed
|
PeterBrendan/AdImageNet
|
PeterBrendan
|
Dataset Summary
The AdImageNet dataset contains 9,003 samples of online programmatic ad creatives along with their ad sizes and extracted creative text. Just as ImageNet revolutionized computer vision, AdImageNet aims to serve as a transformative resource for the field of advertising creatives. The dataset includes various ad sizes, such as (300, 250), (728, 90), (970, 250), (300, 600), (160, 600), (970, 90), (336, 280), and (320, 50). This dataset was curated from a larger… See the full description on the dataset page: https://huggingface.co/datasets/PeterBrendan/AdImageNet.
|
Tarklanse/Traditional_Chinese_roleplay_chat_Dataset
|
Tarklanse
|
Traditional_Chinese_roleplay_chat_Dataset
This dataset is primarily in Traditional Chinese. It collects conversations generated by ChatGPT, plus a very small portion written by hand, organized into the Alpaca dataset format.
Each conversation log is split into multiple records in a layer-by-layer stacking fashion (about 1,000 conversations in total). In several trial trainings, this let LLaMA 2 reproduce the lively conversational style of the original English model while retaining its ability to play a wide range of roles.
The author has trained a LoRA with this dataset.
Update 2023/09/07
Added some Chinese-English translation sentences so that the AI can describe actions in better prose, and added some food-related conversations in the hope of reducing the chance of the AI inventing strange food names.
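The Alpaca dataset format mentioned above is a list of instruction records with instruction/input/output fields; a minimal sketch of one stacked conversation record (the field contents are invented for illustration, not drawn from the dataset):

```python
import json

# Hypothetical record in Alpaca format: the conversation so far goes in
# "input", and the character's next reply is the training target in "output".
record = {
    "instruction": "Continue the conversation as the character.",
    "input": "User: Hello!\nCharacter: Hi, I am the innkeeper.\nUser: Tell me a story.",
    "output": "Character: Once upon a time, a traveler came to this very inn...",
}

# Alpaca-format datasets are typically stored as a JSON list of such records.
serialized = json.dumps([record], ensure_ascii=False)
parsed = json.loads(serialized)
print(parsed[0]["output"])
```

Splitting one long conversation into several such records, each one turn longer than the last, is the layer-by-layer stacking the card describes.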
|
SUFE-AIFLM-Lab/FinEval
|
SUFE-AIFLM-Lab
|
The FinEval Dataset
FinEval is a collection of high-quality multiple-choice questions covering various domains such as finance, economics, accounting, and certifications. It consists of 4,661 questions spanning across 34 distinct academic subjects. To ensure a comprehensive assessment of model performance, FinEval employs various methods including zero-shot, few-shot, answer-only, and chain-of-thought prompts. Evaluating state-of-the-art large language models in both Chinese and English… See the full description on the dataset page: https://huggingface.co/datasets/SUFE-AIFLM-Lab/FinEval.
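A few-shot multiple-choice prompt of the kind FinEval evaluates with can be assembled mechanically; a sketch under the assumption of four lettered options per question (the questions below are invented, and this is not FinEval's actual harness):

```python
def build_prompt(shots, question, choices):
    """Assemble a few-shot multiple-choice prompt from solved examples."""
    blocks = []
    for q, opts, answer in shots + [(question, choices, "")]:
        lettered = "\n".join(f"{letter}. {o}" for letter, o in zip("ABCD", opts))
        blocks.append(f"Question: {q}\n{lettered}\nAnswer: {answer}".rstrip())
    return "\n\n".join(blocks)

shot = ("What does CPI measure?",
        ["Consumer prices", "Corporate profits", "Credit risk", "Capital inflows"],
        "A")
prompt = build_prompt([shot], "Which of these best describes inflation?",
                      ["A sustained rise in the price level", "A fall in GDP",
                       "Higher taxes", "Lower wages"])
print(prompt)
```

Zero-shot is the same construction with an empty shots list; chain-of-thought variants would append reasoning before the answer letter.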
|
FreedomIntelligence/sharegpt-chinese
|
FreedomIntelligence
|
Chinese ShareGPT data translated by gpt-3.5-turbo.The dataset is used in the research related to MultilingualSIFT.
|
bghira/pseudo-camera-10k
|
bghira
|
pseudo-camera-10k dataset
Contents
This dataset contains 10k free images from world-class photographers. The images have been resized with Lanczos antialiasing so that their shorter edge is 1024px.
The aim of this dataset is a highly variable but high-quality, high-resolution set of images containing difficult concepts; about half of the images are numbered group shots and family portraits with the number of subjects labeled.
No images were… See the full description on the dataset page: https://huggingface.co/datasets/bghira/pseudo-camera-10k.
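Fixing the shorter edge at 1024px while preserving aspect ratio reduces to a one-line scale computation; a minimal sketch of the size math (not the actual curation script):

```python
def target_size(width, height, short_edge=1024):
    """Scale (width, height) so the shorter edge equals short_edge."""
    scale = short_edge / min(width, height)
    return round(width * scale), round(height * scale)

# A 6000x4000 photo: the 4000px edge becomes 1024px.
print(target_size(6000, 4000))  # → (1536, 1024)
```

With Pillow, the actual resample would then be something like `img.resize(target_size(*img.size), Image.LANCZOS)`.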
|
grammarly/coedit
|
grammarly
|
Dataset Card for CoEdIT: Text Editing via Instruction Tuning
Paper: CoEdIT: Text Editing by Task-Specific Instruction Tuning
Authors: Vipul Raheja, Dhruv Kumar, Ryan Koo, Dongyeop Kang
Project Repo: https://github.com/vipulraheja/coedit
Dataset Summary
This is the dataset that was used to train the CoEdIT text editing models. Full details of the dataset can be found in our paper.
Dataset Structure
The… See the full description on the dataset page: https://huggingface.co/datasets/grammarly/coedit.
|
MuskumPillerum/General-Knowledge
|
MuskumPillerum
|
Dataset Card for Dataset Name
Dataset Summary
The dataset is a collection of questions and answers themed on general facts and reasoning, divided into two features: 'Question' and 'Answer'.
It is meant to be used for training a model to be good at general knowledge and reasoning. This dataset is inspired by the Alpaca dataset, and in fact contains a subset of the Alpaca dataset.
Distribution
The distribution of the… See the full description on the dataset page: https://huggingface.co/datasets/MuskumPillerum/General-Knowledge.
|
nampdn-ai/mini-en
|
nampdn-ai
|
Tiny English
A collection of short texts curated for long-term human value. The texts in this dataset have been filtered from the falcon-refinedweb and minipile datasets to ensure better quality while staying tiny in size.
The tiny-en dataset is concise and small in size, yet highly diverse, making it an excellent resource for training natural language processing models. Despite its compact size, the dataset offers a wide range of content that has been carefully selected… See the full description on the dataset page: https://huggingface.co/datasets/nampdn-ai/mini-en.
|
google/trueteacher
|
google
|
TrueTeacher
Dataset Summary
This is a large-scale synthetic dataset for training Factual Consistency Evaluation models, introduced in the TrueTeacher paper (Gekhman et al, 2023).
Dataset Details
The dataset contains model-generated summaries of articles from the train split of the CNN/DailyMail dataset (Hermann et al., 2015)
which are annotated for factual consistency using FLAN-PaLM 540B (Chung et al., 2022).
Summaries were generated using… See the full description on the dataset page: https://huggingface.co/datasets/google/trueteacher.
|
m720/SHADR
|
m720
|
SDoH Human Annotated Demographic Robustness (SHADR) Dataset
Overview
The Social determinants of health (SDoH) play a pivotal role in determining patient outcomes. However, their documentation in electronic health records (EHR) remains incomplete. This dataset was created from a study examining the capability of large language models in extracting SDoH from the free text sections of EHRs. Furthermore, the study delved into the potential of synthetic clinical text… See the full description on the dataset page: https://huggingface.co/datasets/m720/SHADR.
|
inuwamobarak/african-atire
|
inuwamobarak
|
African Attire Images Dataset
Overview
The African Attire Images Dataset is a curated collection of images showcasing various African traditional attires and clothing styles. This dataset is intended to promote cultural diversity and provide valuable resources for research, education, and creative projects related to fashion, culture, and identity.
Dataset Details
Total Images: XXXX
Categories(ClassLabel): 'Adire', 'Idama', 'Idgo', 'Idoma', 'Igala'… See the full description on the dataset page: https://huggingface.co/datasets/inuwamobarak/african-atire.
|
google/dreambooth
|
google
|
Dataset Card for "dreambooth"
Dataset of the Google paper DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation
The dataset includes 30 subjects of 15 different classes. 9 of these subjects are live subjects (dogs and cats) and 21 are objects. The dataset contains a variable number of images per subject (4-6). Images of the subjects are usually captured under different conditions, in different environments, and from different angles.
We include… See the full description on the dataset page: https://huggingface.co/datasets/google/dreambooth.
|
nampdn-ai/tiny-math-textbooks
|
nampdn-ai
|
Math Textbook
635k tiny math textbooks [Not ready yet!]
Topics included
Algebra
Calculus
Geometry
Logic and Deduction
Mathematical Thinking
Probability
Statistics
|
thaottn/DataComp_medium_pool_BLIP2_captions
|
thaottn
|
Dataset Card for DataComp_medium_pool_BLIP2_captions
Dataset Summary
Supported Tasks and Leaderboards
We have used this dataset for pre-training CLIP models and found that it rivals or outperforms models trained on raw web captions on average across the 38 evaluation tasks proposed by DataComp.
Refer to the DataComp leaderboard (https://www.datacomp.ai/leaderboard.html) for the top baselines uncovered in our work.
Languages
Primarily… See the full description on the dataset page: https://huggingface.co/datasets/thaottn/DataComp_medium_pool_BLIP2_captions.
|
botp/yentinglin-zh_TW_c4
|
botp
|
Language Models for Taiwanese Culture
✍️ Online Demo
•
🤗 HF Repo • 🐦 Twitter • 📃 [Paper Coming Soon]
• 👨️ Yen-Ting Lin
Overview
Taiwan-LLaMa is a full-parameter fine-tuned model based on LLaMa 2 for Traditional Mandarin applications.
Taiwan-LLaMa v1.0 was pretrained on over 5 billion tokens and instruction-tuned on over 490k conversations, both in Traditional Mandarin.
Demo
A live demonstration of… See the full description on the dataset page: https://huggingface.co/datasets/botp/yentinglin-zh_TW_c4.
|
chanelcolgate/yenthienviet
|
chanelcolgate
|
This dataset contains all THIENVIET product images and annotations, split into training
and validation.
|
tyouisen/aclue
|
tyouisen
|
The Ancient Chinese Language Understanding Evaluation (ACLUE) is an evaluation benchmark focused on ancient Chinese language comprehension. It aims to assess the performance of large-scale language models on understanding ancient Chinese.
|
totally-not-an-llm/EverythingLM-data-V2
|
totally-not-an-llm
|
EverythingLM V2 Dataset
EverythingLM V2 is a diverse instruct dataset consisting of 1k human-assistant conversations. These sets were generated using principles from both evol-instruct and Orca. The dataset encompasses a wide array of topics and interactions.
Differences from V1:
All data in V2 is generated by GPT4
Higher quality dataset generation pipeline:
More human-like seed prompts
Fixed some bugs in the script
More diverse creative writing
More diverse seed… See the full description on the dataset page: https://huggingface.co/datasets/totally-not-an-llm/EverythingLM-data-V2.
|
jinaai/code_exercises
|
jinaai
|
Dataset Card for "code_exercises"
Code exercise
This dataset is composed of a diverse set of ~120k Python code exercises (~120m total tokens) generated by ChatGPT 3.5. It is designed to distill ChatGPT 3.5 knowledge about Python coding tasks into other (potentially smaller) models. The exercises have been generated by following the steps described in the related GitHub repository.
The generated exercises follow the format of the Human Eval benchmark. Each training… See the full description on the dataset page: https://huggingface.co/datasets/jinaai/code_exercises.
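Exercises in the HumanEval format pair a signature-plus-docstring prompt with a completion body; a hypothetical example of that shape (not an actual item from this dataset):

```python
# Prompt part: the signature and docstring the model is conditioned on.
def running_max(numbers):
    """Return a list where element i is the maximum of numbers[:i+1].

    >>> running_max([1, 3, 2, 5, 4])
    [1, 3, 3, 5, 5]
    """
    # Completion part: the solution the model learns to produce.
    result, current = [], float("-inf")
    for n in numbers:
        current = max(current, n)
        result.append(current)
    return result

print(running_max([1, 3, 2, 5, 4]))  # → [1, 3, 3, 5, 5]
```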
|
nascetti-a/BioMassters
|
nascetti-a
|
BioMassters: A Benchmark Dataset for Forest Biomass Estimation using Multi-modal Satellite Time-series https://nascetti-a.github.io/BioMasster/
The objective of this repository is to provide a deep learning ready dataset to predict yearly Above Ground Biomass (AGB) for Finnish forests using multi-temporal satellite imagery from
the European Space Agency and European Commission's joint Sentinel-1 and Sentinel-2 satellite missions, designed to collect a rich array of Earth… See the full description on the dataset page: https://huggingface.co/datasets/nascetti-a/BioMassters.
|
ds4sd/DocLayNet-v1.1
|
ds4sd
|
Dataset Card for DocLayNet v1.1
Dataset Summary
DocLayNet provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank:
Human Annotation: DocLayNet is hand-annotated by well-trained experts, providing a gold-standard in layout segmentation through human recognition and… See the full description on the dataset page: https://huggingface.co/datasets/ds4sd/DocLayNet-v1.1.
|
ntt123/VietBibleVox
|
ntt123
|
VietBibleVox Dataset
The VietBibleVox Dataset is based on the data extracted from open.bible specifically for the Vietnamese language. As the original data is provided under the cc-by-sa-4.0 license, this derived dataset is also licensed under cc-by-sa-4.0.
The dataset comprises 29,185 pairs of (verse, audio clip), with each verse from the Bible read in Vietnamese by a male voice.
The verses are the original texts and may not be directly usable for training text-to-speech… See the full description on the dataset page: https://huggingface.co/datasets/ntt123/VietBibleVox.
|
allenai/objaverse-xl
|
allenai
|
Objaverse-XL
Objaverse-XL is an open dataset of over 10 million 3D objects!
With it, we train Zero123-XL, a foundation model for 3D, observing incredible 3D generalization abilities.
Scale Comparison
Objaverse 1.0 was released back in December. It was a step in the right direction, but still relatively small with 800K objects.
Objaverse-XL is over an order of magnitude larger and much more diverse!
Unlocking Generalization
Compared… See the full description on the dataset page: https://huggingface.co/datasets/allenai/objaverse-xl.
|
ProgramComputer/voxceleb
|
ProgramComputer
|
This dataset includes both VoxCeleb and VoxCeleb2
Multipart Zips
These zips have already been joined for convenience, but the files listed below are NOT part of the original datasets:
vox2_mp4_1.zip - vox2_mp4_6.zip
vox2_aac_1.zip - vox2_aac_2.zip
Joining Zip
cat vox1_dev* > vox1_dev_wav.zip
cat vox2_dev_aac* > vox2_aac.zip
cat vox2_dev_mp4* > vox2_mp4.zip
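The cat commands above work because each part is a raw byte-split of one archive, so joining is plain concatenation; the same operation sketched in Python, with throwaway stand-in files rather than the real vox parts:

```python
import os
import tempfile

def join_parts(part_paths, out_path):
    """Concatenate byte-split archive parts, in order, into one file."""
    with open(out_path, "wb") as out:
        for part in part_paths:
            with open(part, "rb") as f:
                out.write(f.read())

# Demonstrate with two tiny stand-in parts (not the actual vox2 files).
tmp = tempfile.mkdtemp()
parts = [os.path.join(tmp, f"part{i}") for i in range(2)]
for path, chunk in zip(parts, [b"hello ", b"world"]):
    with open(path, "wb") as f:
        f.write(chunk)
joined = os.path.join(tmp, "joined.bin")
join_parts(parts, joined)
print(open(joined, "rb").read())  # → b'hello world'
```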
Citation Information
@article{Nagrani19,
author = "Arsha Nagrani and Joon~Son Chung and Weidi Xie and… See the full description on the dataset page: https://huggingface.co/datasets/ProgramComputer/voxceleb.
|
icantiemyshoe/cve-to-metasploit-module
|
icantiemyshoe
|
CVE To Metasploit Module Prompt
This dataset is a submodule of the overall project to create an LLM that can look at newly published CVE write-ups and create Metasploit modules. The main repo for the project can be found here.
Usage
TO-DO
References
TO-DO
|
Hecate2/ISML-captcha
|
Hecate2
|
Classic model (accepts 40%, with 8-char accuracy 85%):
https://github.com/Hecate2/Ignareo
Modern model (accepts 100%, with 8-char accuracy 91%):
https://huggingface.co/AIris-Channel/vit-gpt2-verifycode-caption
|
botp/RyokoAI_ShareGPT52K
|
botp
|
Dataset Card for ShareGPT90K (formerly ShareGPT52K)
Dataset Summary
This dataset is a collection of approximately 90,000 conversations (up from the earlier 52,000) scraped via the ShareGPT API before it was shut down.
These conversations include both user prompts and responses from OpenAI's ChatGPT.
This repository now contains the new 90K conversations version. The previous 52K may
be found in the old/ directory.
Supported Tasks and Leaderboards
text-generation
Languages… See the full description on the dataset page: https://huggingface.co/datasets/botp/RyokoAI_ShareGPT52K.
|
botp/RyokoAI_Syosetu711K
|
botp
|
Dataset Card for Syosetu711K
The BigKnow2022 dataset and its subsets are not yet complete. Not all information here may be accurate or accessible.
Dataset Summary
Syosetu711K is a dataset composed of approximately 711,700 novels scraped from the Japanese novel self-publishing
website Syosetuka ni Narou (JA: 小説家になろう, lit. "Let's Become a Novelist") between March 26 and March 27, 2023.
The dataset contains most if not all novels published on the site, regardless of… See the full description on the dataset page: https://huggingface.co/datasets/botp/RyokoAI_Syosetu711K.
|
botp/RyokoAI_ScribbleHub17K
|
botp
|
Dataset Card for ScribbleHub17K
The BigKnow2022 dataset and its subsets are not yet complete. Not all information here may be accurate or accessible.
Dataset Summary
ScribbleHub17K is a dataset consisting of text from over 373,000 chapters across approximately 17,500 series posted on the
original story sharing site Scribble Hub.
Supported Tasks and Leaderboards
This dataset is primarily intended for unsupervised training of text generation models;… See the full description on the dataset page: https://huggingface.co/datasets/botp/RyokoAI_ScribbleHub17K.
|
dikw/hh_rlhf_cn
|
dikw
|
hh-rlhf Chinese translation
The helpful and harmless data open-sourced with the Anthropic paper Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback, translated with a machine-translation tool.
hh_rlhf_train.jsonl: merged Chinese and English training sets, about 170k examples after cleaning
hh_rlhf_test.jsonl: merged Chinese and English test sets, about 9k examples after cleaning
harmless_base_cn_train.jsonl: 42,394 examples
harmless_base_cn_test.jsonl: 2,304 examples
helpful_base_cn_train.jsonl: 43,722 examples
helpful_base_cn_test.jsonl: 2,346 examples
Experiment report
Related RLHF experiment report: https://zhuanlan.zhihu.com/p/652044120
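The .jsonl files listed above hold one JSON object per line; a generic reader sketch (the field names in the sample are placeholders, not this dataset's actual schema):

```python
import io
import json

def read_jsonl(stream):
    """Yield one parsed object per non-empty line of a JSON Lines stream."""
    for line in stream:
        line = line.strip()
        if line:
            yield json.loads(line)

# Stand-in stream; in practice this would be open("hh_rlhf_train.jsonl").
sample = io.StringIO(
    '{"prompt": "question 1", "response": "answer 1"}\n'
    '{"prompt": "question 2", "response": "answer 2"}\n'
)
rows = list(read_jsonl(sample))
print(len(rows), rows[0]["prompt"])  # → 2 question 1
```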
|
lhoestq/squad
|
lhoestq
|
Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
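Since every SQuAD answer is a span of the passage, a record can be validated by slicing the context at answer_start; a sketch over the familiar SQuAD record layout (the passage itself is illustrative):

```python
record = {
    "context": "The Amazon rainforest covers much of the Amazon basin of South America.",
    "question": "What does the Amazon rainforest cover?",
    "answers": {"text": ["much of the Amazon basin"], "answer_start": [29]},
}

# The answer text must occur in the context starting exactly at answer_start.
start = record["answers"]["answer_start"][0]
text = record["answers"]["text"][0]
span = record["context"][start:start + len(text)]
print(span == text)  # → True
```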
|
joey234/mmlu-abstract_algebra-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-abstract_algebra-neg-prepend-fix"
More Information needed
|
joey234/mmlu-anatomy-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-anatomy-neg-prepend-fix"
More Information needed
|
joey234/mmlu-astronomy-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-astronomy-neg-prepend-fix"
More Information needed
|
joey234/mmlu-business_ethics-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-business_ethics-neg-prepend-fix"
More Information needed
|
joey234/mmlu-college_biology-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-college_biology-neg-prepend-fix"
More Information needed
|
joey234/mmlu-college_chemistry-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-college_chemistry-neg-prepend-fix"
More Information needed
|
joey234/mmlu-college_mathematics-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-college_mathematics-neg-prepend-fix"
More Information needed
|
joey234/mmlu-college_medicine-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-college_medicine-neg-prepend-fix"
More Information needed
|
joey234/mmlu-college_physics-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-college_physics-neg-prepend-fix"
More Information needed
|
joey234/mmlu-computer_security-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-computer_security-neg-prepend-fix"
More Information needed
|
joey234/mmlu-conceptual_physics-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-conceptual_physics-neg-prepend-fix"
More Information needed
|
joey234/mmlu-econometrics-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-econometrics-neg-prepend-fix"
More Information needed
|
joey234/mmlu-electrical_engineering-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-electrical_engineering-neg-prepend-fix"
More Information needed
|
joey234/mmlu-elementary_mathematics-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-elementary_mathematics-neg-prepend-fix"
More Information needed
|
joey234/mmlu-formal_logic-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-formal_logic-neg-prepend-fix"
More Information needed
|
joey234/mmlu-global_facts-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-global_facts-neg-prepend-fix"
More Information needed
|
joey234/mmlu-high_school_biology-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-high_school_biology-neg-prepend-fix"
More Information needed
|
joey234/mmlu-high_school_chemistry-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-high_school_chemistry-neg-prepend-fix"
More Information needed
|
joey234/mmlu-high_school_computer_science-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-high_school_computer_science-neg-prepend-fix"
More Information needed
|
joey234/mmlu-high_school_european_history-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-high_school_european_history-neg-prepend-fix"
More Information needed
|
joey234/mmlu-high_school_geography-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-high_school_geography-neg-prepend-fix"
More Information needed
|
joey234/mmlu-high_school_government_and_politics-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-high_school_government_and_politics-neg-prepend-fix"
More Information needed
|
joey234/mmlu-high_school_macroeconomics-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-high_school_macroeconomics-neg-prepend-fix"
More Information needed
|
joey234/mmlu-high_school_mathematics-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-high_school_mathematics-neg-prepend-fix"
More Information needed
|
joey234/mmlu-high_school_microeconomics-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-high_school_microeconomics-neg-prepend-fix"
More Information needed
|
joey234/mmlu-high_school_physics-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-high_school_physics-neg-prepend-fix"
More Information needed
|
joey234/mmlu-high_school_psychology-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-high_school_psychology-neg-prepend-fix"
More Information needed
|
joey234/mmlu-high_school_statistics-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-high_school_statistics-neg-prepend-fix"
More Information needed
|
joey234/mmlu-high_school_us_history-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-high_school_us_history-neg-prepend-fix"
More Information needed
|
joey234/mmlu-high_school_world_history-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-high_school_world_history-neg-prepend-fix"
More Information needed
|
joey234/mmlu-human_aging-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-human_aging-neg-prepend-fix"
More Information needed
|
joey234/mmlu-human_sexuality-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-human_sexuality-neg-prepend-fix"
More Information needed
|
joey234/mmlu-international_law-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-international_law-neg-prepend-fix"
More Information needed
|
joey234/mmlu-jurisprudence-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-jurisprudence-neg-prepend-fix"
More Information needed
|
joey234/mmlu-logical_fallacies-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-logical_fallacies-neg-prepend-fix"
More Information needed
|
joey234/mmlu-machine_learning-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-machine_learning-neg-prepend-fix"
More Information needed
|
joey234/mmlu-management-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-management-neg-prepend-fix"
More Information needed
|
joey234/mmlu-marketing-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-marketing-neg-prepend-fix"
More Information needed
|
joey234/mmlu-medical_genetics-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-medical_genetics-neg-prepend-fix"
More Information needed
|
joey234/mmlu-miscellaneous-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-miscellaneous-neg-prepend-fix"
More Information needed
|
joey234/mmlu-moral_disputes-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-moral_disputes-neg-prepend-fix"
More Information needed
|
joey234/mmlu-moral_scenarios-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-moral_scenarios-neg-prepend-fix"
More Information needed
|
joey234/mmlu-nutrition-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-nutrition-neg-prepend-fix"
More Information needed
|
joey234/mmlu-philosophy-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-philosophy-neg-prepend-fix"
More Information needed
|
joey234/mmlu-prehistory-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-prehistory-neg-prepend-fix"
More Information needed
|
joey234/mmlu-professional_accounting-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-professional_accounting-neg-prepend-fix"
More Information needed
|
joey234/mmlu-professional_law-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-professional_law-neg-prepend-fix"
More Information needed
|
joey234/mmlu-professional_medicine-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-professional_medicine-neg-prepend-fix"
More Information needed
|
joey234/mmlu-professional_psychology-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-professional_psychology-neg-prepend-fix"
More Information needed
|
joey234/mmlu-public_relations-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-public_relations-neg-prepend-fix"
More Information needed
|
joey234/mmlu-security_studies-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-security_studies-neg-prepend-fix"
More Information needed
|
joey234/mmlu-sociology-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-sociology-neg-prepend-fix"
More Information needed
|
joey234/mmlu-us_foreign_policy-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-us_foreign_policy-neg-prepend-fix"
More Information needed
|
joey234/mmlu-virology-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-virology-neg-prepend-fix"
More Information needed
|
joey234/mmlu-world_religions-neg-prepend-fix
|
joey234
|
Dataset Card for "mmlu-world_religions-neg-prepend-fix"
More Information needed
|
ticoAg/Chinese-medical-dialogue
|
ticoAg
|
Note
Data processed from Chinese-medical-dialogue-data: single-turn doctor-patient dialogues.
Raw data samples
department | title | ask | answer
---|---|---|---
Cardiology | Can hypertension patients eat dangshen (codonopsis)? | I have hypertension. These past two days my son-in-law brought me some dangshen to steep in water. Hello, can someone with hypertension take dangshen? | Hypertension patients can take dangshen orally. Dangshen lowers blood lipids and blood pressure and can clear waste from the blood, so it has a certain stabilizing and preventive effect for patients with coronary heart disease and cardiovascular disease; regular use helps keep the "three highs" at bay. Dangshen also replenishes qi and nourishes blood, calms the central nervous system, regulates digestive function, and strengthens the spleen and lungs. Thank you for your consultation; I hope my explanation helps.
Endocrinology | Is diabetes hereditary? | Does diabetes skip generations? My mother has had diabetes for many years and never managed it well. Now I have it, and so does my younger sister. My son is twenty and has no problems so far, but will he get diabetes later on? It is upsetting; I have already started having him watch what he eats.… See the full description on the dataset page: https://huggingface.co/datasets/ticoAg/Chinese-medical-dialogue.
|
erfanloghmani/myket-android-application-recommendation-dataset
|
erfanloghmani
|
Myket Android Application Install Dataset
This dataset contains information on application install interactions of users in the Myket android application market. The dataset was created for the purpose of evaluating interaction prediction models, requiring user and item identifiers along with timestamps of the interactions.
Data Creation
The dataset was initially generated by the Myket data team, and later cleaned and subsampled by Erfan Loghmani, a master's student… See the full description on the dataset page: https://huggingface.co/datasets/erfanloghmani/myket-android-application-recommendation-dataset.
|
tyang816/MedChatZH
|
tyang816
|
MedChatZH
MedChatZH Chinese fine-tuned weights
Trained from Baichuan-7B on traditional Chinese medicine (TCM) book data and medical instruction-dialogue data, giving it strong Chinese medical dialogue ability and an understanding of TCM questions.
Github: MedChatZH-7B
🚀 Results
Question | Answer
Is TCM effective for hair loss? I have been losing hair for 5-6 years; it is noticeably thinner than before, the loss is fairly even with no spot losing more than others, my hair is rather coarse and grows quickly. | The effect of TCM treatment for hair loss varies from person to person, but for a patient like you with good hair quality and fast growth, TCM may be a good choice. Common formulas include Liuwei Dihuang pills and Siwu decoction; these nourish the liver and kidneys and improve blood circulation and the nutrient supply to hair follicles, thereby improving hair loss. Acupuncture and moxibustion can also serve as supplementary treatments. Note, however, that TCM treatment must be kept up long-term to take effect, and diet and lifestyle also need attention to achieve the best results. You are advised to consult a professional TCM doctor or dermatologist to develop a personalized treatment plan.… See the full description on the dataset page: https://huggingface.co/datasets/tyang816/MedChatZH.
|
davidscripka/MIT_environmental_impulse_responses
|
davidscripka
|
MIT Environmental Impulse Response Dataset
The audio recordings in this dataset are originally created by the Computational Audition Lab at MIT. The source of the data can be found at: https://mcdermottlab.mit.edu/Reverb/IR_Survey.html.
The audio files in the dataset have been resampled to a sampling rate of 16 kHz. This resampling was done to reduce the size of the dataset while making it more suitable for various tasks, including data augmentation.
The dataset consists of 271 audio files… See the full description on the dataset page: https://huggingface.co/datasets/davidscripka/MIT_environmental_impulse_responses.
|
declare-lab/HarmfulQA
|
declare-lab
|
Paper | Github | Dataset| Model
📣📣📣: Do check our new multilingual dataset CatQA here used in Safety Vectors:📣📣📣
As a part of our research efforts toward making LLMs more safe for public use, we create HarmfulQA, a ChatGPT-distilled dataset constructed using the Chain of Utterances (CoU) prompt. More details are in our paper Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment.
HarmfulQA serves as both a new LLM safety benchmark and an alignment dataset… See the full description on the dataset page: https://huggingface.co/datasets/declare-lab/HarmfulQA.
|
digitalpipelines/samantha-1.1-uncensored
|
digitalpipelines
|
This dataset is based on ehartford/samantha-data that was used to create ehartford/samantha-1.1-llama-7b and other samantha models. It has been unfiltered and uncensored.
|
ymoslem/MedicalSciences-StackExchange
|
ymoslem
|
All StackExchange questions and their answers from the Medical Sciences site, up to 14 August 2023. The repository includes a notebook for the process using the official StackExchange API.
|
ymoslem/Law-StackExchange
|
ymoslem
|
All StackExchange legal questions and their answers from the Law site, up to 14 August 2023. The repository includes a notebook for the process using the official StackExchange API.
|
Nan-Do/SPP_30K_reasoning_tasks
|
Nan-Do
|
Dataset Card for "SPP_30K_verified_tasks"
Dataset Summary
This is an augmented version of the Synthetic Python Problems (SPP) dataset.
This dataset has been generated from the subset of the data that was de-duplicated and verified using a Python interpreter (SPP_30k_verified.jsonl).
The original dataset contains small Python functions that include a docstring with a small description of what the function does and some calling examples
for the function.
The… See the full description on the dataset page: https://huggingface.co/datasets/Nan-Do/SPP_30K_reasoning_tasks.
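A function of the kind SPP contains (docstring with a short description plus calling examples) might look like this hypothetical entry; executing the examples is the interpreter-based verification idea behind SPP_30k_verified.jsonl:

```python
def count_vowels(s):
    """Count the vowels in a string.

    Calling examples:
    count_vowels("hello") == 2
    count_vowels("rhythm") == 0
    """
    return sum(1 for ch in s.lower() if ch in "aeiou")

# Verification step: run the docstring's calling examples in an interpreter.
assert count_vowels("hello") == 2
assert count_vowels("rhythm") == 0
print("examples verified")
```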
|
if001/oscar_2023_filtered
|
if001
|
from datasets import load_dataset
ds = load_dataset("if001/oscar_2023_filtered")
ds['train']
# Dataset({
#     features: ['text'],
#     num_rows: 312396
# })
A filtered version of OSCAR 2023: https://huggingface.co/datasets/oscar-corpus/OSCAR-2301
See the code for details: https://github.com/if001/HojiChar_OSCAR_sample/tree/0.0.4
|