id | author | description
---|---|---
ToKnow-ai/Summary-of-Registered-Entities-and-Companies-in-Kenya
|
ToKnow-ai
|
Kenya Business Registrations Dataset (2015-2024)
Overview
This dataset contains information on business entity registrations in Kenya from financial year 2015/2016 to 2024/2025. It includes monthly registration figures for various types of business entities, providing insights into Kenya's economic trends and entrepreneurial landscape.
Dataset Contents
The dataset includes the following types of business registrations:
Business Names
Private… See the full description on the dataset page: https://huggingface.co/datasets/ToKnow-ai/Summary-of-Registered-Entities-and-Companies-in-Kenya.
|
youjunhyeok/Magpie-Llama-3.1-Pro-DPO-100K-v0.1-ko
|
youjunhyeok
|
The translation quality was poor, so re-translation is in progress. (Responses are not yet translated.)
|
BattleTag/Email
|
BattleTag
|
Dataset Card for Scam Email Dataset
This dataset includes three categories of emails in both Chinese and English.
Dataset Details
Our dataset contains three categories: AI-scam, scam, and normal. There are 30,441 samples in total, with AI-scam, scam, and normal accounting for 10,147 each. It contains emails in two languages, in the following proportions: Chinese 49.9% and English 50.1%.
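As a quick sanity check (a sketch for orientation, not part of the dataset card), the stated per-category count is consistent with the reported total:

```python
# The card states three equal categories (AI-scam, scam, normal)
# of 10,147 emails each, for 30,441 emails in total.
per_category = 10_147
total = 3 * per_category
print(total)  # 30441

# Approximate language split stated in the card: 49.9% Chinese, 50.1% English.
chinese = round(total * 0.499)
english = total - chinese
print(chinese, english)
```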
|
self-generate/topp09_temp07_reflection_scored_ds_chat_original_cn_mining_oj_iter0-binarized
|
self-generate
|
Dataset Card for "topp09_temp07_reflection_scored_ds_chat_original_cn_mining_oj_iter0-binarized"
More Information needed
|
self-generate/topp09_temp07_reflection_scored_ds_chat_original_cn_mining_oj_iter0-full_response_traceback
|
self-generate
|
Dataset Card for "topp09_temp07_reflection_scored_ds_chat_original_cn_mining_oj_iter0-full_response_traceback"
More Information needed
|
self-generate/topp09_temp07_reflection_scored_ds_chat_original_cn_mining_oj_iter0-binarized_all_pairs
|
self-generate
|
Dataset Card for "topp09_temp07_reflection_scored_ds_chat_original_cn_mining_oj_iter0-binarized_all_pairs"
More Information needed
|
infinite-dataset-hub/StockPredictTraining
|
infinite-dataset-hub
|
StockPredictTraining
tags: stock market, supervised learning, time series
Note: This is an AI-generated dataset so its content may be inaccurate or false
Dataset Description:
The 'StockPredictTraining' dataset is a curated collection of stock market time series data used for training supervised machine learning models. Each row represents daily stock information with various features that might influence the stock's future price. The dataset includes open, high, low, close… See the full description on the dataset page: https://huggingface.co/datasets/infinite-dataset-hub/StockPredictTraining.
|
ZixuanKe/cfa_exercise_synthesis_1_sup
|
ZixuanKe
|
Dataset Card for "cfa_exercise_synthesis_1_sup"
More Information needed
|
kimsan0622/code-knowledge-eval
|
kimsan0622
|
Code Knowledge Value Evaluation Dataset
This dataset was created by evaluating the knowledge value of code sourced from the bigcode/the-stack repository. It is designed to assess the educational and knowledge potential of different code samples.
Dataset Overview
The dataset is split into training, validation, and test sets with the following number of samples:
Training set: 22,786 samples
Validation set: 4,555 samples
Test set: 18,232 samples
Usage… See the full description on the dataset page: https://huggingface.co/datasets/kimsan0622/code-knowledge-eval.
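The split sizes stated above imply the following total and proportions (a quick sketch for orientation, not part of the card):

```python
# Split sizes as stated in the dataset card.
splits = {"train": 22_786, "validation": 4_555, "test": 18_232}
total = sum(splits.values())
print(total)  # 45573

# Share of each split, rounded to one decimal place.
for name, n in splits.items():
    print(f"{name}: {100 * n / total:.1f}%")
```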
|
ConnorJiang/test
|
ConnorJiang
|
This dataset was created using LeRobot.
|
d1d9/import_wiki_ottawa
|
d1d9
|
This dataset contains information about the award-winning animated films at the Ottawa International Animation Festival.
The dataset was compiled from Wikipedia.
|
xhwl/cruxeval-x
|
xhwl
|
CRUXEVAL-X: A Benchmark for Multilingual Code Reasoning, Understanding and Execution
📃 Paper •
🤗 Space •
🏆 Leaderboard •
🔎 Github Repo
Dataset Description
CRUXEVAL-X stands as a multi-lingual code reasoning benchmark, encompassing 19 programming languages and built upon the foundation of CRUXEVAL. This comprehensive resource features a minimum of 600 subjects per language, collectively contributing to a robust total of 19,000 content-consistent tests.… See the full description on the dataset page: https://huggingface.co/datasets/xhwl/cruxeval-x.
|
nlp-team-issai/part_data_kk_wiki_english_ver
|
nlp-team-issai
|
590 MB of data from kk_wiki_english_ver,
generated with the following system prompt on Qwen-72B:
"anchor_positive_negative": {
"system_prompt": "You are an AI assistant specialized in generating training data for embedding models. Generate three anchors (a question, a statement, and a keyword) and a negative text based on the given positive text. The negative should be at least as long as the positive text, with a minimum of 350 tokens. Ensure all outputs are distinct from each other and the… See the full description on the dataset page: https://huggingface.co/datasets/nlp-team-issai/part_data_kk_wiki_english_ver.
|
ibru/pickup_knight
|
ibru
|
This dataset was created using LeRobot.
|
jmercat/eval_koch_feed_cat_new
|
jmercat
|
This dataset was created using LeRobot.
|
mauriciogtec/TRESNET-KDD
|
mauriciogtec
|
This dataset contains the necessary data to reproduce the paper
Tec et al. (2024).
Causal Estimation of Exposure Shifts with Neural Networks. In: KDD.
|
dogtooth/llama31-8b-generated-llama31-8b-scored-uf_1729055762
|
dogtooth
|
allenai/open_instruct: Rejection Sampling Dataset
See https://github.com/allenai/open-instruct/blob/main/docs/algorithms/rejection_sampling.md for more detail
Configs
args:
{'add_timestamp': True,
'hf_entity': 'dogtooth',
'hf_repo_id': 'llama31-8b-generated-llama31-8b-scored-uf',
'hf_repo_id_scores': 'rejection_sampling_scores',
'include_reference_completion_for_rejection_sampling': True,
'input_filename':… See the full description on the dataset page: https://huggingface.co/datasets/dogtooth/llama31-8b-generated-llama31-8b-scored-uf_1729055762.
|
dogtooth/rejection_sampling_scores_1729055762
|
dogtooth
|
allenai/open_instruct: Rejection Sampling Dataset
See https://github.com/allenai/open-instruct/blob/main/docs/algorithms/rejection_sampling.md for more detail
Configs
args:
{'add_timestamp': True,
'hf_entity': 'dogtooth',
'hf_repo_id': 'llama31-8b-generated-llama31-8b-scored-uf',
'hf_repo_id_scores': 'rejection_sampling_scores',
'include_reference_completion_for_rejection_sampling': True,
'input_filename':… See the full description on the dataset page: https://huggingface.co/datasets/dogtooth/rejection_sampling_scores_1729055762.
|
open-llm-leaderboard/nbeerbower__Llama3.1-Gutenberg-Doppel-70B-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of nbeerbower/Llama3.1-Gutenberg-Doppel-70B
Dataset automatically created during the evaluation run of model nbeerbower/Llama3.1-Gutenberg-Doppel-70B
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/nbeerbower__Llama3.1-Gutenberg-Doppel-70B-details.
|
jmercat/eval_koch_feed_cat_new_ckpt5
|
jmercat
|
This dataset was created using LeRobot.
|
open-llm-leaderboard/flammenai__Mahou-1.5-llama3.1-70B-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of flammenai/Mahou-1.5-llama3.1-70B
Dataset automatically created during the evaluation run of model flammenai/Mahou-1.5-llama3.1-70B
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/flammenai__Mahou-1.5-llama3.1-70B-details.
|
open-llm-leaderboard/flammenai__Llama3.1-Flammades-70B-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of flammenai/Llama3.1-Flammades-70B
Dataset automatically created during the evaluation run of model flammenai/Llama3.1-Flammades-70B
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/flammenai__Llama3.1-Flammades-70B-details.
|
open-llm-leaderboard/mlabonne__Hermes-3-Llama-3.1-70B-lorablated-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of mlabonne/Hermes-3-Llama-3.1-70B-lorablated
Dataset automatically created during the evaluation run of model mlabonne/Hermes-3-Llama-3.1-70B-lorablated
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/mlabonne__Hermes-3-Llama-3.1-70B-lorablated-details.
|
Sudhakar123/output.json
|
Sudhakar123
|
Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
Dataset Details
Dataset Description
Curated by: [More Information Needed]
Funded by [optional]: [More Information Needed]
Shared by [optional]: [More Information Needed]
Language(s) (NLP): [More Information Needed]
License: [More Information Needed]
Dataset Sources [optional]… See the full description on the dataset page: https://huggingface.co/datasets/Sudhakar123/output.json.
|
yuekai/aishell
|
yuekai
|
aishell
|
ZixuanKe/cfa_exercise_verify_1_sup
|
ZixuanKe
|
Dataset Card for "cfa_exercise_verify_1_sup"
More Information needed
|
ClaudeChen/test_viewer
|
ClaudeChen
|
[doc] audio dataset 10
This dataset contains four audio files, two in the /train directory (one in the cat/ subdirectory and one in the dog/ subdirectory), and two in the test/ directory (same distribution in subdirectories). The label column is not created because the configuration contains drop_labels: true.
|
open-llm-leaderboard/ymcki__gemma-2-2b-jpn-it-abliterated-18-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of ymcki/gemma-2-2b-jpn-it-abliterated-18
Dataset automatically created during the evaluation run of model ymcki/gemma-2-2b-jpn-it-abliterated-18
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/ymcki__gemma-2-2b-jpn-it-abliterated-18-details.
|
nyuuzyou/uchitelya
|
nyuuzyou
|
Dataset Card for Uchitelya.com Educational Materials
Dataset Summary
This dataset contains metadata and original files for 199,230 educational materials from the uchitelya.com platform, a resource for teachers, educators, students, and parents providing diverse educational content on various topics. The dataset includes information such as material titles, URLs, download URLs, and extracted text content where available.
Languages
The dataset is… See the full description on the dataset page: https://huggingface.co/datasets/nyuuzyou/uchitelya.
|
PLyas/PLDatasets
|
PLyas
|
Physical-Layer AI Dataset
Introduction
This repository stores datasets related to physical-layer AI; more to be added……
Datasets are saved in the Dataset folder, with the related information recorded in Dataset.md.
test
|
open-llm-leaderboard/OpenBuddy__openbuddy-llama3.2-3b-v23.2-131k-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of OpenBuddy/openbuddy-llama3.2-3b-v23.2-131k
Dataset automatically created during the evaluation run of model OpenBuddy/openbuddy-llama3.2-3b-v23.2-131k
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/OpenBuddy__openbuddy-llama3.2-3b-v23.2-131k-details.
|
open-llm-leaderboard/Youlln__ECE.EIFFEIL.ia-0.5B-SLERP-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of Youlln/ECE.EIFFEIL.ia-0.5B-SLERP
Dataset automatically created during the evaluation run of model Youlln/ECE.EIFFEIL.ia-0.5B-SLERP
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/Youlln__ECE.EIFFEIL.ia-0.5B-SLERP-details.
|
open-llm-leaderboard/kaist-ai__janus-7b-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of kaist-ai/janus-7b
Dataset automatically created during the evaluation run of model kaist-ai/janus-7b
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/kaist-ai__janus-7b-details.
|
open-llm-leaderboard/kaist-ai__janus-rm-7b-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of kaist-ai/janus-rm-7b
Dataset automatically created during the evaluation run of model kaist-ai/janus-rm-7b
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/kaist-ai__janus-rm-7b-details.
|
open-llm-leaderboard/kaist-ai__janus-dpo-7b-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of kaist-ai/janus-dpo-7b
Dataset automatically created during the evaluation run of model kaist-ai/janus-dpo-7b
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/kaist-ai__janus-dpo-7b-details.
|
hazemessam/ddg
|
hazemessam
|
These datasets are collected from this repo: https://github.com/mitiau/PROSTATA
|
han5i5j1986/koch_lego_2024-10-16-01
|
han5i5j1986
|
This dataset was created using LeRobot.
|
open-llm-leaderboard/bunnycore__Best-Mix-Llama-3.1-8B-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of bunnycore/Best-Mix-Llama-3.1-8B
Dataset automatically created during the evaluation run of model bunnycore/Best-Mix-Llama-3.1-8B
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/bunnycore__Best-Mix-Llama-3.1-8B-details.
|
open-llm-leaderboard/Gryphe__Pantheon-RP-Pure-1.6.2-22b-Small-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
Dataset automatically created during the evaluation run of model Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/Gryphe__Pantheon-RP-Pure-1.6.2-22b-Small-details.
|
open-llm-leaderboard/PJMixers-Dev__LLaMa-3.2-Instruct-JankMix-v0.2-SFT-3B-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-3B
Dataset automatically created during the evaluation run of model PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-3B
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train"… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/PJMixers-Dev__LLaMa-3.2-Instruct-JankMix-v0.2-SFT-3B-details.
|
intronhealth/afrispeech-dialog
|
intronhealth
|
AfriSpeech-Dialog v1: A Conversational Speech Dataset for African Accents
This work is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Overview and Purpose
AfriSpeech-Dialog is a pan-African conversational speech dataset with 6 hours of recorded dialogue, designed to support speech recognition (ASR) and speaker diarization applications. Collected from diverse accents across Nigeria, Kenya, and South Africa, the… See the full description on the dataset page: https://huggingface.co/datasets/intronhealth/afrispeech-dialog.
|
lzy0928/Physics_Fluids_Plasmas
|
lzy0928
|
Text data related to fluid and plasma physics.
The compressed file "physics.tar.gz" is a collection of JSON files filtered from the FineWeb dataset.
The file "stat-physics.jsonl" provides statistics about "physics.tar.gz".
The file "wiki_physics_yes.jsonl" is a JSON file filtered from the Wikipedia dataset.
The file "sft-physics.jsonl" is a JSON file for supervised fine-tuning (SFT).
|
dogtooth/llama31-8b-generated-gold-scored-uf_1729070334
|
dogtooth
|
allenai/open_instruct: Rejection Sampling Dataset
See https://github.com/allenai/open-instruct/blob/main/docs/algorithms/rejection_sampling.md for more detail
Configs
args:
{'add_timestamp': True,
'hf_entity': 'dogtooth',
'hf_repo_id': 'llama31-8b-generated-gold-scored-uf',
'hf_repo_id_scores': 'rejection_sampling_scores',
'include_reference_completion_for_rejection_sampling': True,
'input_filename':… See the full description on the dataset page: https://huggingface.co/datasets/dogtooth/llama31-8b-generated-gold-scored-uf_1729070334.
|
open-llm-leaderboard/EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds
Dataset automatically created during the evaluation run of model EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/EpistemeAI__Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.003-128K-code-ds-details.
|
tppllm/stack-overflow-description
|
tppllm
|
Stack Overflow Description Dataset
This dataset contains badge awards earned by users on Stack Overflow between January 1, 2022, and December 31, 2023. It includes 3,336 sequences with 187,836 events and 25 badge types, derived from the Stack Exchange Data Dump under the CC BY-SA 4.0 license. The detailed data preprocessing steps used to create this dataset can be found in the TPP-LLM paper and TPP-LLM-Embedding paper.
If you find this dataset useful, we kindly invite you to… See the full description on the dataset page: https://huggingface.co/datasets/tppllm/stack-overflow-description.
|
open-llm-leaderboard/sonthenguyen__ft-unsloth-zephyr-sft-bnb-4bit-20241014-164205-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of sonthenguyen/ft-unsloth-zephyr-sft-bnb-4bit-20241014-164205
Dataset automatically created during the evaluation run of model sonthenguyen/ft-unsloth-zephyr-sft-bnb-4bit-20241014-164205
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/sonthenguyen__ft-unsloth-zephyr-sft-bnb-4bit-20241014-164205-details.
|
HuggingFaceTB/MATH
|
HuggingFaceTB
|
MATH is a dataset of 12,500 challenging competition mathematics problems. Each
problem in MATH has a full step-by-step solution which can be used to teach
models to generate answer derivations and explanations.
|
open-llm-leaderboard/Tsunami-th__Tsunami-0.5x-7B-Instruct-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of Tsunami-th/Tsunami-0.5x-7B-Instruct
Dataset automatically created during the evaluation run of model Tsunami-th/Tsunami-0.5x-7B-Instruct
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/Tsunami-th__Tsunami-0.5x-7B-Instruct-details.
|
arXivBenchLLM/arXivBench
|
arXivBenchLLM
|
Dataset construction:
Our benchmark consists of two main components. The first part includes 4,000 prompts across eight major subject categories on arXiv: Math, Computer Science (CS), Quantitative Biology (QB), Physics, Quantitative Finance (QF), Statistics, Electrical Engineering and Systems Science (EESS), and Economics.
The second part of arXivBench includes 2,500 prompts from five subfields within computer science, the most popular of these categories:… See the full description on the dataset page: https://huggingface.co/datasets/arXivBenchLLM/arXivBench.
|
open-llm-leaderboard/sonthenguyen__ft-unsloth-zephyr-sft-bnb-4bit-20241014-170522-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of sonthenguyen/ft-unsloth-zephyr-sft-bnb-4bit-20241014-170522
Dataset automatically created during the evaluation run of model sonthenguyen/ft-unsloth-zephyr-sft-bnb-4bit-20241014-170522
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/sonthenguyen__ft-unsloth-zephyr-sft-bnb-4bit-20241014-170522-details.
|
anonymouscodee/webcode2m
|
anonymouscodee
|
WebCode2M: A Real-World Dataset for Code Generation from Webpage Designs
Features:
image: the screenshot of the webpage.
bbox: the layout information, i.e., the bounding boxes (Bbox) of all the elements in the webpage, which contain the size, position, and hierarchy information.
text: the webpage code text including HTML/CSS code.
scale: the scale of the screenshot, in the format [width, height].
lang: the main language of the text content displayed on the rendered page (excluding HTML/CSS… See the full description on the dataset page: https://huggingface.co/datasets/anonymouscodee/webcode2m.
|
open-llm-leaderboard/sonthenguyen__ft-unsloth-zephyr-sft-bnb-4bit-20241014-161415-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of sonthenguyen/ft-unsloth-zephyr-sft-bnb-4bit-20241014-161415
Dataset automatically created during the evaluation run of model sonthenguyen/ft-unsloth-zephyr-sft-bnb-4bit-20241014-161415
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/sonthenguyen__ft-unsloth-zephyr-sft-bnb-4bit-20241014-161415-details.
|
arranonymsub/HiCUPID
|
arranonymsub
|
Dataset Card for HiCUPID
HiCUPID (Conversational User Profile Inclusive Dataset) is a synthetic dialogue dataset designed to train and evaluate large language models (LLMs) in terms of personalization performance. HiCUPID consists of 600 unique user profiles (dialogue histories, each consisting of ~15,000 tokens), and 24,000 question-answer pairs specific to each user.
Dataset Details
This repository provides three different subsets:
Dialogue: Contains user… See the full description on the dataset page: https://huggingface.co/datasets/arranonymsub/HiCUPID.
|
argilla-warehouse/proofread-assistant
|
argilla-warehouse
|
Dataset Card for proofread-assistant
This dataset has been created with distilabel.
The pipeline script was uploaded to easily reproduce the dataset:
pipeline.py.
It can be run directly using the CLI:
distilabel pipeline run --script "https://huggingface.co/datasets/argilla-warehouse/proofread-assistant/raw/main/pipeline.py"
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in… See the full description on the dataset page: https://huggingface.co/datasets/argilla-warehouse/proofread-assistant.
|
open-llm-leaderboard/TheTsar1209__nemo-carpmuscle-v0.1-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of TheTsar1209/nemo-carpmuscle-v0.1
Dataset automatically created during the evaluation run of model TheTsar1209/nemo-carpmuscle-v0.1
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/TheTsar1209__nemo-carpmuscle-v0.1-details.
|
sofia-uni/toxic-data-bg
|
sofia-uni
|
Warning: This dataset contains content that includes toxic, offensive, or otherwise inappropriate language.
The toxic-data-bg dataset consists of 4,384 manually annotated sentences across four categories: toxic language, medical terminology, non-toxic language, and terms related to minority communities.
|
opensourceorg/osaid
|
opensourceorg
|
Building up Open Source AI with the Hugging Face Community
As the field of AI advances, it's becoming increasingly important to understand what "Open Source" means in the context of artificial intelligence.
While Open Source software has long had established standards, AI models and datasets bring new challenges around transparency, accessibility and collaboration.
To address this, the Open Source Initiative (OSI) has been developing an Open Source AI Definition (OSAID) which… See the full description on the dataset page: https://huggingface.co/datasets/opensourceorg/osaid.
|
huzaifa525/Medical_Intelligence_Dataset_40k_Rows_of_Disease_Info_Treatments_and_Medical_QA
|
huzaifa525
|
Medical Intelligence Dataset: 40k+ Rows of Disease Info, Treatments, and Medical Q&A. Created by: Huzefa Nalkheda Wala
Unlock a valuable dataset containing 40,443 rows of detailed medical information. This dataset is ideal for patients, medical students, researchers, and AI developers. It offers a rich combination of disease information, symptoms, treatments, and curated medical Q&A for students, as well as dialogues between patients and doctors, making it highly versatile.
What's… See the full description on the dataset page: https://huggingface.co/datasets/huzaifa525/Medical_Intelligence_Dataset_40k_Rows_of_Disease_Info_Treatments_and_Medical_QA.
|
open-llm-leaderboard/zelk12__MT2-gemma-2-9B-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of zelk12/MT2-gemma-2-9B
Dataset automatically created during the evaluation run of model zelk12/MT2-gemma-2-9B
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/zelk12__MT2-gemma-2-9B-details.
|
aaaakash001/cifar100_Vit_large
|
aaaakash001
|
This dataset is the original cifar10 dataset, extended with features extracted by vit_large.
Import packages
from transformers import AutoImageProcessor, AutoModelForImageClassification
import torch
from torch.utils.data import DataLoader
from datasets import load_dataset
model
processor = AutoImageProcessor.from_pretrained("google/vit-large-patch32-384",use_fast=True)
model = AutoModelForImageClassification.from_pretrained("google/vit-large-patch32-384")… See the full description on the dataset page: https://huggingface.co/datasets/aaaakash001/cifar100_Vit_large.
|
CJWeiss/ZeroLexSumm
|
CJWeiss
|
ZeroLexSumm Benchmark
The ZeroLexSumm Benchmark contains 8 datasets from various jurisdictions such as the US, UK, EU, and India. The datasets are curated from the LexSum datasets, each containing 50 samples. All datasets adhere to the same format with columns: input, output, id, cluster, old_id and length.
cluster: which input-length bracket the sample falls into (0-8k, 8k-16k, 16k+ and longest)
old_id: ID in the LexSumm dataset
length: length of the input
Below you can find the links to the separate… See the full description on the dataset page: https://huggingface.co/datasets/CJWeiss/ZeroLexSumm.
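The length brackets behind the cluster column can be sketched as a simple bucketing function (a hypothetical illustration; the benchmark's actual assignment logic, including the "longest" bucket, is not shown in the card):

```python
def length_cluster(n_tokens: int) -> str:
    """Assign an input to one of the length brackets named in the card.

    Hypothetical sketch: the card lists brackets 0-8k, 8k-16k, and 16k+;
    the separate "longest" bucket's rule is not specified, so it is omitted.
    """
    if n_tokens < 8_000:
        return "0-8k"
    if n_tokens < 16_000:
        return "8k-16k"
    return "16k+"

print(length_cluster(5_000))   # 0-8k
print(length_cluster(12_000))  # 8k-16k
print(length_cluster(20_000))  # 16k+
```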
|
han5i5j1986/koch_lego_2024-10-16-02
|
han5i5j1986
|
This dataset was created using LeRobot.
|
Forceless/zenodo-pptx
|
Forceless
|
Forceless/zenodo-pptx
Crawled from zenodo
dirname = f"zenodo-pptx/pptx/{task['license']}/{task['created'][:4]}/"
basename = f"{task['checksum'][4:]}-{task['filename']}"
filepath = dirname + basename
try:
    # Probe whether the full filename can be created on this filesystem
    open('/tmp/' + basename, 'wb').close()
except OSError:
    # Fall back to a truncated name when the original filename is too long
    filepath = dirname + basename[:240] + ".pptx"
Citation
|
han5i5j1986/koch_lego_2024-10-16-03
|
han5i5j1986
|
This dataset was created using LeRobot.
|
aliberts/stanford_kuka_multimodal_dataset
|
aliberts
|
This dataset was created using LeRobot.
meta/info.json
{
"codebase_version": "v2.0",
"data_path": "data/chunk-{episode_chunk:03d}/train-{episode_index:05d}-of-{total_episodes:05d}.parquet",
"robot_type": "unknown",
"total_episodes": 3000,
"total_frames": 149985,
"total_tasks": 1,
"total_videos": 3000,
"total_chunks": 4,
"chunks_size": 1000,
"fps": 20,
"splits": {
"train": "0:3000"
},
"keys": [
"observation.state"… See the full description on the dataset page: https://huggingface.co/datasets/aliberts/stanford_kuka_multimodal_dataset.
|
han5i5j1986/koch_lego_2024-10-16-04
|
han5i5j1986
|
This dataset was created using LeRobot.
|
toyxyz/Coco2017_dethanything_512
|
toyxyz
|
I cropped the images in the COCO 2017 dataset to 512×512 and created depth images with Depth Anything V2.
https://cocodataset.org/#download
|
DeathDaDev/textures3
|
DeathDaDev
|
Description
This is the third iteration and official release of a dataset curated to power the Materializer model for Blender. The dataset contains a range of labeled texture images that were sourced from ambientCG under their Creative Commons CC0 1.0 Universal License. These textures are designed to help in the classification of various material maps, which are essential for creating realistic 3D materials in Blender.
Future Plans
The dataset is still evolving… See the full description on the dataset page: https://huggingface.co/datasets/DeathDaDev/textures3.
|
oakwood/curtain
|
oakwood
|
This dataset was created using 🤗 LeRobot.
|
celvaigh/periad
|
celvaigh
|
This dataset was built in the context of the PERIAD project, which enables author prediction for The Saturday Review.
Within the scope of this prediction project, the main objectives were as follows:
Preparing data related to the magazine.
Using algorithms for author prediction.
Generating documents for presenting the predictions.
|
arrmlet/reddit_dataset_245
|
arrmlet
|
Bittensor Subnet 13 Reddit Dataset
Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed Reddit data. The data is continuously updated by network miners, providing a real-time stream of Reddit content for various analytical and machine learning tasks.
For more information about the dataset, please visit the official repository.
Supported Tasks
The versatility of this dataset… See the full description on the dataset page: https://huggingface.co/datasets/arrmlet/reddit_dataset_245.
|
arrmlet/x_dataset_245
|
arrmlet
|
Bittensor Subnet 13 X (Twitter) Dataset
Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the official repository.
Supported Tasks
The versatility… See the full description on the dataset page: https://huggingface.co/datasets/arrmlet/x_dataset_245.
|
AnnaStuckert/FaceMap_Rescale
|
AnnaStuckert
|
This dataset contains images and labels for the 2 example videos from the FACEMAP dataset. These images have been rescaled to fit the 224x224-pixel input of the ViT; no further adjustments were made.
|
JanSchTech/starcoderdata-python-edu-lang-score
|
JanSchTech
|
Dataset Card for Starcoder Data with Python Education and Language Scores
Dataset Summary
The starcoderdata-python-edu-lang-score dataset contains the Python subset of the starcoderdata dataset. It augments the existing Python subset with features that assess the educational quality of code and classify the language of code comments. This dataset was created for high-quality Python education and language-based training, with a primary focus on facilitating models… See the full description on the dataset page: https://huggingface.co/datasets/JanSchTech/starcoderdata-python-edu-lang-score.
|
AnnaStuckert/Facemap_RawData_with_labels
|
AnnaStuckert
|
This dataset contains images and labels for the 2 example videos from the FACEMAP dataset. Please note that this is the raw dataset, prior to any preprocessing: no train/test split and no augmentation applied.
|
han5i5j1986/koch_lego_2024-10-16-05
|
han5i5j1986
|
This dataset was created using LeRobot.
|
arrmlet/reddit_dataset_196
|
arrmlet
|
Bittensor Subnet 13 Reddit Dataset
Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed Reddit data. The data is continuously updated by network miners, providing a real-time stream of Reddit content for various analytical and machine learning tasks.
For more information about the dataset, please visit the official repository.
Supported Tasks
The versatility of this dataset… See the full description on the dataset page: https://huggingface.co/datasets/arrmlet/reddit_dataset_196.
|
arrmlet/x_dataset_196
|
arrmlet
|
Bittensor Subnet 13 X (Twitter) Dataset
Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed data from X (formerly Twitter). The data is continuously updated by network miners, providing a real-time stream of tweets for various analytical and machine learning tasks.
For more information about the dataset, please visit the official repository.
Supported Tasks
The versatility… See the full description on the dataset page: https://huggingface.co/datasets/arrmlet/x_dataset_196.
|
explodinggradients/physics_metrics_alignment
|
explodinggradients
|
Dataset Card for "physics_metrics_alingment"
More Information needed
|
SriramRokkam/ner_dataset.csv
|
SriramRokkam
|
Dataset: Named Entity Recognition
The dataset used is a custom NER dataset provided in CSV format with columns:
sentence_id: Unique identifier for sentences.
words: The words in each sentence.
labels: The named entity labels corresponding to each word.
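A minimal stdlib sketch (with made-up rows) of how a CSV in the described three-column layout could be regrouped into per-sentence word/label sequences:

```python
import csv
import io
from collections import defaultdict

# Hypothetical rows in the described three-column layout.
raw = """sentence_id,words,labels
1,John,B-PER
1,lives,O
1,in,O
1,Paris,B-LOC
2,Acme,B-ORG
"""

# Group (word, label) pairs by sentence_id.
sentences = defaultdict(list)
for row in csv.DictReader(io.StringIO(raw)):
    sentences[row["sentence_id"]].append((row["words"], row["labels"]))

print(sentences["1"])  # [('John', 'B-PER'), ('lives', 'O'), ('in', 'O'), ('Paris', 'B-LOC')]
```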
|
humanhow/QA-Dataset-mini
|
humanhow
|
Dataset Card for "QA-Dataset-mini"
More Information needed
|
TUL-Poland/InLUT3D
|
TUL-Poland
|
This resource contains the Indoor Lodz University of Technology Point Cloud Dataset (InLUT3D), a point cloud dataset tailored for real-object classification and both semantic and instance segmentation tasks. It comprises 321 scans, with some areas covered by multiple scans. All scans were captured using the Leica BLK360 scanner.
Available categories
The points are divided into 18 distinct categories outlined in the label.yaml file along with their respective codes… See the full description on the dataset page: https://huggingface.co/datasets/TUL-Poland/InLUT3D.
|
burtenshaw/fosllms-week-1-demo
|
burtenshaw
|
Dataset Card for fosllms-week-1-demo
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/burtenshaw/fosllms-week-1-demo/raw/main/pipeline.yaml"
or explore the configuration:
distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/burtenshaw/fosllms-week-1-demo.
|
laicsiifes/nocaps-pt-br
|
laicsiifes
|
🎉 nocaps Dataset Translation for Portuguese Image Captioning
💾 Dataset Summary
nocaps Portuguese Translation is a multimodal benchmark dataset for Portuguese image captioning, in which each image is accompanied by ten descriptive captions written by human annotators. The original English captions were translated into Portuguese using the Google Translator API.
🧑💻 How to Get Started with the Dataset… See the full description on the dataset page: https://huggingface.co/datasets/laicsiifes/nocaps-pt-br.
|
Papersnake/gushiwen
|
Papersnake
|
Gushiwen (古诗文网) Dataset
Data sourced from the Gushiwen website: 433,841 entries of classical Chinese poetry and prose, with data fields exported directly from the site.
|
dvilasuero/meta-llama_Llama-3.1-8B-Instruct_thinking_ifeval_20241016_154013
|
dvilasuero
|
Dataset Card for meta-llama_Llama-3.1-8B-Instruct_thinking_ifeval_20241016_154013
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/dvilasuero/meta-llama_Llama-3.1-8B-Instruct_thinking_ifeval_20241016_154013/raw/main/pipeline.yaml"
or… See the full description on the dataset page: https://huggingface.co/datasets/dvilasuero/meta-llama_Llama-3.1-8B-Instruct_thinking_ifeval_20241016_154013.
|
Omarrran/english_tts_hf_dataset
|
Omarrran
|
Omarrran/english_tts_hf_dataset
Dataset Description
This dataset contains English text-to-speech (TTS) data, including paired text and audio files.
Dataset Statistics
Total number of samples: 797
Number of samples in train split: 637
Number of samples in test split: 160
Audio Statistics
Total audio duration: 7961.12 seconds (2.21 hours)
Average audio duration: 9.99 seconds
Minimum audio duration: 1.12 seconds
Maximum audio… See the full description on the dataset page: https://huggingface.co/datasets/Omarrran/english_tts_hf_dataset.
|
dvilasuero/meta-llama_Llama-3.1-8B-Instruct_thinking_ifeval_20241016_155750
|
dvilasuero
|
Dataset Card for meta-llama_Llama-3.1-8B-Instruct_thinking_ifeval_20241016_155750
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/dvilasuero/meta-llama_Llama-3.1-8B-Instruct_thinking_ifeval_20241016_155750/raw/main/pipeline.yaml"
or… See the full description on the dataset page: https://huggingface.co/datasets/dvilasuero/meta-llama_Llama-3.1-8B-Instruct_thinking_ifeval_20241016_155750.
|
dvilasuero/meta-llama_Llama-3.1-8B-Instruct_thinking_ifeval_20241016_160056
|
dvilasuero
|
Dataset Card for meta-llama_Llama-3.1-8B-Instruct_thinking_ifeval_20241016_160056
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/dvilasuero/meta-llama_Llama-3.1-8B-Instruct_thinking_ifeval_20241016_160056/raw/main/pipeline.yaml"
or… See the full description on the dataset page: https://huggingface.co/datasets/dvilasuero/meta-llama_Llama-3.1-8B-Instruct_thinking_ifeval_20241016_160056.
|
dvilasuero/meta-llama_Llama-3.1-8B-Instruct_thinking_ifeval_20241016_161532
|
dvilasuero
|
Dataset Card for meta-llama_Llama-3.1-8B-Instruct_thinking_ifeval_20241016_161532
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/dvilasuero/meta-llama_Llama-3.1-8B-Instruct_thinking_ifeval_20241016_161532/raw/main/pipeline.yaml"
or… See the full description on the dataset page: https://huggingface.co/datasets/dvilasuero/meta-llama_Llama-3.1-8B-Instruct_thinking_ifeval_20241016_161532.
|
dvilasuero/meta-llama_Llama-3.1-8B-Instruct_cot_ifeval_20241016_163608
|
dvilasuero
|
Dataset Card for meta-llama_Llama-3.1-8B-Instruct_cot_ifeval_20241016_163608
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/dvilasuero/meta-llama_Llama-3.1-8B-Instruct_cot_ifeval_20241016_163608/raw/main/pipeline.yaml"
or explore… See the full description on the dataset page: https://huggingface.co/datasets/dvilasuero/meta-llama_Llama-3.1-8B-Instruct_cot_ifeval_20241016_163608.
|
jaeyong2/persona-inst
|
jaeyong2
|
How to use
>>> from datasets import load_dataset
>>> ds = load_dataset("jaeyong2/persona-inst", split="train")
>>> ds
Dataset({
features: ['Level', 'English', 'Korean', 'Thai', 'Vietnamese', 'context'],
num_rows: 3006572
})
Development Process
Generated persona pairs from proj-persona/PersonaHub.
We used the Qwen/Qwen2-72B-Instruct model to generate questions.
License
Qwen/Qwen2.5-72B-Instruct :… See the full description on the dataset page: https://huggingface.co/datasets/jaeyong2/persona-inst.
|
mxforml/conv_xai
|
mxforml
| |
dvilasuero/meta-llama_Llama-3.1-8B-Instruct_thinking_ifeval_20241016_173921
|
dvilasuero
|
Dataset Card for meta-llama_Llama-3.1-8B-Instruct_thinking_ifeval_20241016_173921
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI:
distilabel pipeline run --config "https://huggingface.co/datasets/dvilasuero/meta-llama_Llama-3.1-8B-Instruct_thinking_ifeval_20241016_173921/raw/main/pipeline.yaml"
or… See the full description on the dataset page: https://huggingface.co/datasets/dvilasuero/meta-llama_Llama-3.1-8B-Instruct_thinking_ifeval_20241016_173921.
|
AnelMusic/python18k_instruct_sharegpt
|
AnelMusic
|
Note:
This dataset builds upon the iamtarun/python_code_instructions_18k_alpaca dataset and adheres to the ShareGPT format with a unique “conversations” column containing messages in JSONL. Unlike simpler formats like Alpaca, ShareGPT is ideal for storing multi-turn conversations, which is closer to how users interact with LLMs.
Example:
from datasets import load_dataset
dataset = load_dataset("AnelMusic/python18k_instruct_sharegpt", split = "train")
def… See the full description on the dataset page: https://huggingface.co/datasets/AnelMusic/python18k_instruct_sharegpt.
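For illustration, one plausible way to map an Alpaca-style row into the ShareGPT "conversations" layout described above. This is a sketch: the source-side field names (`instruction`, `input`, `output`) are taken from the Alpaca convention, and the exact conversion used by the dataset author is not shown here.

```python
# Hedged sketch: convert one Alpaca-style record into the ShareGPT layout.
def to_sharegpt(row: dict) -> dict:
    prompt = row["instruction"]
    if row.get("input"):
        prompt += "\n" + row["input"]
    return {"conversations": [
        {"from": "human", "value": prompt},
        {"from": "gpt", "value": row["output"]},
    ]}

example = to_sharegpt({
    "instruction": "Reverse a Python list.",
    "input": "",
    "output": "Use lst[::-1] or lst.reverse().",
})
print(example["conversations"][0]["from"])  # human
```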
|
Techno-Trade/trec-ja
|
Techno-Trade
|
Japanese TREC-like Question Classification Dataset
Overview
trec-ja.json
This dataset is a question classification dataset modeled on the TREC dataset, containing Japanese questions and their classification labels. It includes questions related to Japanese culture and geography and is well suited to natural language processing and machine learning tasks.
Dataset Features
Number of questions: 535
Language: Japanese
Coarse-grained labels: 6
Fine-grained labels: 50
Data Structure
Each data point has the following structure:
{
"text": "question text",
"coarse_label": coarse-grained label (integer),
"fine_label": fine-grained label (integer)
}
Label Descriptions
Coarse-grained labels
0: Abbreviation (ABBR)
1: Entity (ENTY)
2: Description (DESC)
3: Human (HUM)
4: Location (LOC)
5: Numeric (NUM)… See the full description on the dataset page: https://huggingface.co/datasets/Techno-Trade/trec-ja.
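A tiny sketch of decoding the coarse labels documented on the card; the example question text and fine label value are hypothetical, not taken from the dataset.

```python
# Coarse label codes as documented on the card.
COARSE = {0: "ABBR", 1: "ENTY", 2: "DESC", 3: "HUM", 4: "LOC", 5: "NUM"}

# Hypothetical data point in the documented structure.
example = {"text": "Where is the capital of Japan?", "coarse_label": 4, "fine_label": 31}

print(COARSE[example["coarse_label"]])  # LOC
```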
|
takara-ai/kurai_toori_dark_streets
|
takara-ai
|
Kurai Toori Dark Streets Dataset
Overview
The Kurai Toori Dark Streets Dataset is a collection of photorealistic, cinematic images depicting dark cityscapes with a run-down and dilapidated atmosphere. This dataset is designed to provide high-quality, moody urban scenes for various applications in computer vision, machine learning, and creative projects.
Dataset Details
Content: Photorealistic images of dark, urban environments
Style: Cinematic… See the full description on the dataset page: https://huggingface.co/datasets/takara-ai/kurai_toori_dark_streets.
|
jl3676/SafetyAnalystData
|
jl3676
|
Dataset Card for SafetyAnalystData
Disclaimer:
The data includes examples that might be disturbing, harmful, or upsetting. It covers a range of harmful topics such as discriminatory language and discussions
of abuse, violence, self-harm, sexual content, and misinformation, among other high-risk categories. The main goal of this data is to advance research in building safe LLMs.
It is recommended not to train an LLM exclusively on the harmful examples.… See the full description on the dataset page: https://huggingface.co/datasets/jl3676/SafetyAnalystData.
|
manushetty/api_training
|
manushetty
|
[
{
"input": "What is the total sales from last quarter?",
"output": "https://{host}:port/sap/opu/odata/sap/API_SALES_ORDER_SRV/SalesOrder?$filter=SalesDate ge '2024-07-01' and SalesDate le '2024-09-30'"
},
{
"input": "How many purchase orders are pending?",
"output": "https://{host}:port/sap/opu/odata/sap/API_PURCHASE_ORDER_SRV/PurchaseOrder?$filter=Status eq 'Pending'"
},
{
"input": "Show me the sales orders from customer ABC.",
"output":… See the full description on the dataset page: https://huggingface.co/datasets/manushetty/api_training.
|
open-llm-leaderboard/piotr25691__thea-rp-3b-25r-details
|
open-llm-leaderboard
|
Dataset Card for Evaluation run of piotr25691/thea-rp-3b-25r
Dataset automatically created during the evaluation run of model piotr25691/thea-rp-3b-25r
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/piotr25691__thea-rp-3b-25r-details.
|