Columns: id (string, 6 to 121 chars) · author (string, 2 to 42 chars) · description (string, 0 to 6.67k chars)
nbalepur/musique_with_scores2
nbalepur
Dataset Card for "musique_with_scores2" More Information needed
AndyChen123/Berkeley-Function-Calling-Leaderboard-Fix
AndyChen123
Berkeley Function Calling Leaderboard The Berkeley Function Calling Leaderboard is a live leaderboard that evaluates the ability of different LLMs to call functions (also referred to as tools). We built this dataset from our learnings to be representative of most users' function-calling use cases, for example in agents or as part of enterprise workflows. To this end, our evaluation dataset spans diverse categories and multiple languages. Check out the Leaderboard at… See the full description on the dataset page: https://huggingface.co/datasets/AndyChen123/Berkeley-Function-Calling-Leaderboard-Fix.
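As an illustration only (this record is hypothetical, not drawn from the leaderboard's actual schema), a function-calling evaluation item can be thought of as a user request paired with a declared function, where the model's emitted call is checked against the declaration:

```python
# Hypothetical function-calling test item; the field names and the
# get_weather function are made up for illustration, not taken from
# the real Berkeley leaderboard schema.
item = {
    "question": "What's the weather in Paris, in celsius?",
    "function": {
        "name": "get_weather",
        "parameters": {"city": "string", "unit": "string"},
    },
    "expected_call": {
        "name": "get_weather",
        "arguments": {"city": "Paris", "unit": "celsius"},
    },
}

def is_valid_call(item, call):
    """A call passes if it names the declared function and uses only
    declared parameters."""
    fn = item["function"]
    return call["name"] == fn["name"] and set(call["arguments"]) <= set(fn["parameters"])
```

A real harness would also compare argument values against the expected call; this sketch only checks structural validity.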
aycankatitas/tinyllamamodel
aycankatitas
Dataset Card for tinyllamamodel This dataset has been created with distilabel. Dataset Summary This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI: distilabel pipeline run --config "https://huggingface.co/datasets/aycankatitas/tinyllamamodel/raw/main/pipeline.yaml" or explore the configuration: distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/aycankatitas/tinyllamamodel.
dogtooth/llama31-8b-generated-hs_1729476209
dogtooth
allenai/open_instruct: Generation Dataset See https://github.com/allenai/open-instruct/blob/main/docs/algorithms/rejection_sampling.md for more detail Configs args: {'add_timestamp': True, 'alpaca_eval': False, 'dataset_end_idx': 8636, 'dataset_mixer_list': ['dogtooth/helpsteer2_binarized', '1.0'], 'dataset_splits': ['train'], 'dataset_start_idx': 0, 'hf_entity': 'dogtooth', 'hf_repo_id': 'llama31-8b-generated-hs', 'llm_judge': False, 'mode': 'generation'… See the full description on the dataset page: https://huggingface.co/datasets/dogtooth/llama31-8b-generated-hs_1729476209.
akemiH/SemiHVision
akemiH
Paper Link: https://arxiv.org/abs/2410.14948
Dataset | Caption Available | Link | License
Deeplesion | Yes | Link | CC BY 4.0
PadChest | Yes | Link | PADCHEST Dataset Research Use Agreement
Eurorad | Yes | Link | Creative Commons Attribution 4.0 International License
MIMIC-CXR-JPG | No | Link | PhysioNet Credentialed Health Data License 1.5.0
LLD | Yes | Link | LLD-MMRI Agreement
MAMA-MIA | Yes | Link | CC BY-NC-SA 4.0
PMC-VQA | Yes | Link | CC BY-SA
PMC-Instruct | Yes | - | OpenRAIL
Quilt | Yes | Link | -
Radiopaedia | No… See the full description on the dataset page: https://huggingface.co/datasets/akemiH/SemiHVision.
OALL/details_Slim205__Barka-9b-it-v02
OALL
Dataset Card for Evaluation run of Slim205/Barka-9b-it-v02 Dataset automatically created during the evaluation run of model Slim205/Barka-9b-it-v02. The dataset is composed of 136 configurations, each corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An… See the full description on the dataset page: https://huggingface.co/datasets/OALL/details_Slim205__Barka-9b-it-v02.
zrjin/xvector_nnet_1a_libritts_clean_460
zrjin
This dataset contains x-vectors extracted with the Kaldi toolkit from libritts-{clean-460,dev-clean,test-clean}, using the pre-trained model from http://kaldi-asr.org/models/8/0008_sitw_v2_1a.tar.gz
ms180/ljspeech_with_dsu
ms180
LJSpeech with train/dev/test split. This dataset includes discrete units generated by the 6th layer of hubert-base.
ringos/bio-brief-Mistral-7B-v0.1-gemma2-rm
ringos
Dataset Card for "bio-brief-Mistral-7B-v0.1-gemma2-rm" More Information needed
ringos/bio-detailed-Mistral-7B-v0.1-gemma2-rm
ringos
Dataset Card for "bio-detailed-Mistral-7B-v0.1-gemma2-rm" More Information needed
ZelonPrograms/MovieDataset
ZelonPrograms
Movie Dataset This dataset contains information about various movies, including details such as the year of release, title, genre, director, actors, plot, language, country, awards, ratings, and IMDb ID. It is designed for use in film analysis, recommendation systems, or as a resource for studying popular culture. Dataset Overview
Format: CSV
Number of Records: 34
Number of Features: 13
Features
Year: The year the movie was released.
Title: The… See the full description on the dataset page: https://huggingface.co/datasets/ZelonPrograms/MovieDataset.
dogtooth/llama31-8b-generated-classifier-scored-hs_1729485696
dogtooth
allenai/open_instruct: Rejection Sampling Dataset See https://github.com/allenai/open-instruct/blob/main/docs/algorithms/rejection_sampling.md for more detail Configs args: {'add_timestamp': True, 'hf_entity': 'dogtooth', 'hf_repo_id': 'llama31-8b-generated-classifier-scored-hs', 'hf_repo_id_scores': 'rejection_sampling_scores', 'include_reference_completion_for_rejection_sampling': True, 'input_filename':… See the full description on the dataset page: https://huggingface.co/datasets/dogtooth/llama31-8b-generated-classifier-scored-hs_1729485696.
dogtooth/rejection_sampling_scores_1729485696
dogtooth
allenai/open_instruct: Rejection Sampling Dataset See https://github.com/allenai/open-instruct/blob/main/docs/algorithms/rejection_sampling.md for more detail Configs args: {'add_timestamp': True, 'hf_entity': 'dogtooth', 'hf_repo_id': 'llama31-8b-generated-classifier-scored-hs', 'hf_repo_id_scores': 'rejection_sampling_scores', 'include_reference_completion_for_rejection_sampling': True, 'input_filename':… See the full description on the dataset page: https://huggingface.co/datasets/dogtooth/rejection_sampling_scores_1729485696.
lianghsun/tw-judgment-gist
lianghsun
Data Card for the Republic of China (Taiwan) Judgment Gist Collection (tw-judgment-gist) Dataset Summary This dataset collects the gists of selected judgments published by the Judicial Yuan. Supported Tasks and Leaderboards This dataset can be used for Continued Pre-Training, to let a model learn the gists of court judgments. Languages Traditional Chinese. Dataset Structure Data Instances Data Fields Data Splits
dogtooth/llama31-8b-generated-classifier-scored-hs_1729489866
dogtooth
allenai/open_instruct: Rejection Sampling Dataset See https://github.com/allenai/open-instruct/blob/main/docs/algorithms/rejection_sampling.md for more detail Configs args: {'add_timestamp': True, 'hf_entity': 'dogtooth', 'hf_repo_id': 'llama31-8b-generated-classifier-scored-hs', 'hf_repo_id_scores': 'rejection_sampling_scores', 'include_reference_completion_for_rejection_sampling': True, 'input_filename':… See the full description on the dataset page: https://huggingface.co/datasets/dogtooth/llama31-8b-generated-classifier-scored-hs_1729489866.
dogtooth/rejection_sampling_scores_1729489866
dogtooth
allenai/open_instruct: Rejection Sampling Dataset See https://github.com/allenai/open-instruct/blob/main/docs/algorithms/rejection_sampling.md for more detail Configs args: {'add_timestamp': True, 'hf_entity': 'dogtooth', 'hf_repo_id': 'llama31-8b-generated-classifier-scored-hs', 'hf_repo_id_scores': 'rejection_sampling_scores', 'include_reference_completion_for_rejection_sampling': True, 'input_filename':… See the full description on the dataset page: https://huggingface.co/datasets/dogtooth/rejection_sampling_scores_1729489866.
OALL/details_nvidia__Llama-3.1-Nemotron-70B-Instruct-HF
OALL
Dataset Card for Evaluation run of nvidia/Llama-3.1-Nemotron-70B-Instruct-HF Dataset automatically created during the evaluation run of model nvidia/Llama-3.1-Nemotron-70B-Instruct-HF. The dataset is composed of 136 configurations, each corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing… See the full description on the dataset page: https://huggingface.co/datasets/OALL/details_nvidia__Llama-3.1-Nemotron-70B-Instruct-HF.
i0xs0/Arabic-SQuAD
i0xs0
Arabic-SQuAD consists of 48,344 questions in 10,364 paragraphs. Note that Arabic-SQuAD is translated from the English SQuAD. This Arabic QA dataset follows the SQuAD format:
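For readers unfamiliar with that layout, a SQuAD-style file nests question/answer pairs under paragraphs, which nest under articles. The sketch below uses the public SQuAD v1.1 field names; whether this dataset matches them exactly is an assumption, and the content is placeholder.

```python
# Minimal SQuAD-style record (field names from public SQuAD v1.1;
# the content here is placeholder, not real dataset rows).
record = {
    "data": [
        {
            "title": "Example article",
            "paragraphs": [
                {
                    "context": "Some paragraph of translated text.",
                    "qas": [
                        {
                            "id": "q-0001",
                            "question": "An example question?",
                            "answers": [{"text": "Some", "answer_start": 0}],
                        }
                    ],
                }
            ],
        }
    ]
}

def count_questions(squad):
    """Count question/answer pairs across all articles and paragraphs."""
    return sum(
        len(paragraph["qas"])
        for article in squad["data"]
        for paragraph in article["paragraphs"]
    )
```

The `answer_start` field is a character offset into `context`, which is what makes the format suitable for extractive QA.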
jjz5463/Multilingual_ParallelStyle_v1
jjz5463
Dataset Card Add more information here This dataset was produced with DataDreamer 🤖💤. The synthetic dataset card can be found here.
NickyNicky/Rasonamiento_noticias_test
NickyNicky
Note: ** alu is observed in the infe
IPEC-COMMUNITY/LiveScene
IPEC-COMMUNITY
Dataset Card for LiveScene Dataset Description The dataset consists of two parts: the InterReal dataset, which was captured using the Polycam app on an iPhone 15 Pro, and the OmniSim dataset created with the OmniGibson simulator. In total, the dataset provides 28 interactive subsets, containing 2 million samples across various modalities, including RGB, depth, segmentation, camera trajectories, interaction variables, and object captions. This comprehensive… See the full description on the dataset page: https://huggingface.co/datasets/IPEC-COMMUNITY/LiveScene.
nickprock/AIRC_FAQ
nickprock
FAQ AIRC Collection of FAQs from AIRC's website. AIRC is the Italian association for cancer research. Founded more than 50 years ago, it has continuously supported cancer research through fundraising, and it disseminates accurate information about research results, prevention, and therapeutic prospects.
alibabasglab/VoxCeleb2-mix
alibabasglab
A modified version of the VoxCeleb2 dataset. The original data can be downloaded here. This dataset is used for audio-visual speaker extraction conditioned on face recordings in the reentry paper, whose code can be found here. Usage
cat orig* > orig.tar
tar -xvf orig.tar
cat audio_clean* > audio_clean.tar
tar -xvf audio_clean.tar
longvitu/LongViTU
longvitu
LongViTU LongViTU is a large-scale instruction-tuning dataset for long-form video understanding. Please refer to our anonymous project page.
twodgirl/b-to-bbox
twodgirl
Validation dataset for digital and traditional art Study of traditional paintings to improve spatial recognition and labeling. This sample is part of the MACB/MARble model. How? I take easily recognizable objects, verify them, and draw boxes around them. This dataset is an improvement over the previous version because the grids are uploaded first, then the *.bbox files: you see what you get. Tags that are not visible were incorrectly recognized.
nyuuzyou/znanio-others
nyuuzyou
Dataset Card for Znanio.ru Other Educational Materials Dataset Summary This dataset contains 3,092 educational files from the platform znanio.ru (https://znanio.ru) that were not categorized in the main groups. Znanio.ru is a resource for teachers, educators, students, and parents that provides a variety of educational content. Languages The dataset is primarily in Russian, with potential multilingual content: Russian (ru): The majority of the… See the full description on the dataset page: https://huggingface.co/datasets/nyuuzyou/znanio-others.
denny3388/VHD11K
denny3388
T2Vs Meet VLMs: A Scalable Multimodal Dataset for Visual Harmfulness Recognition Chen Yeh*, You-Ming Chang*, Wei-Chen Chiu, Ning Yu Accepted to NeurIPS'24 Datasets and Benchmarks Track! Overview We propose a comprehensive and extensive harmful dataset, Visual Harmful Dataset 11K (VHD11K), consisting of 10,000 images and 1,000 videos, crawled from the Internet and generated by 4 generative models, across a total of 10 harmful categories covering a full… See the full description on the dataset page: https://huggingface.co/datasets/denny3388/VHD11K.
lianghsun/tw-law-article-evolution
lianghsun
Dataset Card for the Republic of China (Taiwan) Law Article Evolution Dataset (tw-law-article-evolution) Dataset Summary This dataset is organized around the historical evolution of each law's articles. Supported Tasks and Leaderboards This dataset can be used for (Continuous) Pre-training, to let a model learn the content of ROC (Taiwan) law. Languages Traditional Chinese. Dataset Structure Data Instances (WIP) Data Fields (WIP) Data Splits (WIP) Dataset Creation Curation (WIP) Source Data 全國法規資料庫… See the full description on the dataset page: https://huggingface.co/datasets/lianghsun/tw-law-article-evolution.
nixiesearch/querygen-data-v4
nixiesearch
Nixiesearch querygen-v4 model training dataset A dataset used to train the not-yet-published querygen-v4 model from Nixiesearch. The dataset is a combination of multiple open query-document datasets in a format for causal LLM training. Used datasets We use train splits from the following datasets:
MSMARCO: 532751 rows
HotpotQA: 170000 rows
NQ: 58554 rows
MIRACL en: 1193 rows
SQUAD: 85710 rows
TriviaQA: 60283 rows
The train split is 900000 rows, and the test split is… See the full description on the dataset page: https://huggingface.co/datasets/nixiesearch/querygen-data-v4.
shay681/Legal_Clauses
shay681
Dataset Card for Legal_Clauses Dataset Dataset Summary This dataset is a subset of the LevMuchnik/SupremeCourtOfIsrael dataset. It has been processed using the shay681/HeBERT_finetuned_Legal_Clauses model, which extracted the legal clauses from the text column. It can be loaded with the datasets package:
import datasets
data = datasets.load_dataset('shay681/Legal_Clauses')
Dataset Structure The dataset is a JSON Lines file with each line… See the full description on the dataset page: https://huggingface.co/datasets/shay681/Legal_Clauses.
shay681/Precedents
shay681
Dataset Card for Precedents Dataset Dataset Summary This dataset is a subset of the LevMuchnik/SupremeCourtOfIsrael dataset. It has been processed using the shay681/HeBERT_finetuned_Precedents model, which extracted the precedents from the text column. It can be loaded with the datasets package:
import datasets
data = datasets.load_dataset('shay681/Precedents')
Dataset Structure The dataset is a JSON Lines file with each line corresponding to a… See the full description on the dataset page: https://huggingface.co/datasets/shay681/Precedents.
yfyeung/libriheavy
yfyeung
Cut statistics: Cuts count: 12380505 … See the full description on the dataset page: https://huggingface.co/datasets/yfyeung/libriheavy.
giuliadc/pubmed-filtered-1733rows
giuliadc
Original data from: https://github.com/armancohan/long-summarization The first 3000 rows of the test split of the original dataset were processed and filtered as follows. This resulted in 1733 filtered rows. In the original dataset, some sentences appear several times in the same article, even if they're only contained once in the original research paper. For this reason, all dataset rows where the same sentence appeared more than once were removed. In the original dataset, every sentence is… See the full description on the dataset page: https://huggingface.co/datasets/giuliadc/pubmed-filtered-1733rows.
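The duplicate-sentence filter described above can be sketched as follows; the row layout and field names here are hypothetical, since the card does not spell out the schema.

```python
from collections import Counter

def keep_row(sentences):
    """Keep a row only if no sentence occurs more than once in it."""
    return all(count == 1 for count in Counter(sentences).values())

# Hypothetical rows: the second one repeats a sentence, so it is dropped.
rows = [
    {"article_id": "a1", "sentences": ["s1", "s2", "s3"]},
    {"article_id": "a2", "sentences": ["s1", "s1", "s2"]},
]
filtered = [row for row in rows if keep_row(row["sentences"])]
```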
open-llm-leaderboard/Qwen__Qwen2-VL-7B-Instruct-details
open-llm-leaderboard
Dataset Card for Evaluation run of Qwen/Qwen2-VL-7B-Instruct Dataset automatically created during the evaluation run of model Qwen/Qwen2-VL-7B-Instruct The dataset is composed of 38 configurations, each corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/Qwen__Qwen2-VL-7B-Instruct-details.
giuliadc/arxiv-filtered-978rows
giuliadc
Original data from: https://github.com/armancohan/long-summarization The first n (with n ~ 3000) rows of the test split of the original dataset were processed and filtered as follows. This resulted in 978 filtered rows. In the original dataset, some sentences appear several times in the same article, even if they're only contained once in the original research paper. For this reason, all dataset rows where the same sentence appeared more than once were removed. In the original dataset, every… See the full description on the dataset page: https://huggingface.co/datasets/giuliadc/arxiv-filtered-978rows.
Rapidata/image-preference-demo
Rapidata
Image dataset for preference acquisition demo This dataset provides the files used to run the example that we use in this blog post to illustrate how easily you can set up and run the annotation process to collect a huge preference dataset using Rapidata's API. The goal is to collect human preferences based on pairwise image matchups. The dataset contains: Generated images: A selection of example images generated using Flux.1 and Stable Diffusion. The images are provided in a… See the full description on the dataset page: https://huggingface.co/datasets/Rapidata/image-preference-demo.
cyberorigin/cyber_take_the_item
cyberorigin
CyberOrigin Dataset Our data includes information from home services, the logistics industry, and laboratory scenarios. For more details, please refer to our Official Data Website. Contents of dataset:
cyber_take_the_item # dataset root path
└── data/
    ├── metadata_ID1_240808.json
    ├── segment_ids_ID1_240808.bin # for each frame, segment_ids uniquely points to the segment index that frame i came from. You may want to use this to separate non-contiguous frames from… See the full description on the dataset page: https://huggingface.co/datasets/cyberorigin/cyber_take_the_item.
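The card describes segment_ids as mapping each frame i to the segment it came from. Separating non-contiguous frames could then look like the sketch below, assuming the .bin file has already been decoded to one integer per frame (the on-disk encoding is not documented here, so that decoding step is omitted):

```python
def split_contiguous(segment_ids):
    """Group frame indices into runs sharing a segment id, separating
    non-contiguous frames that came from different segments."""
    if not segment_ids:
        return []
    runs, current = [], [0]
    for i in range(1, len(segment_ids)):
        if segment_ids[i] == segment_ids[i - 1]:
            current.append(i)
        else:
            runs.append(current)
            current = [i]
    runs.append(current)
    return runs

# Hypothetical ids: frames 0-2 from segment 7, frames 3-4 from segment 9.
runs = split_contiguous([7, 7, 7, 9, 9])
```

Each returned run is a list of frame indices belonging to one contiguous segment, which can then be processed as an independent clip.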
cyberorigin/cyber_pipette
cyberorigin
CyberOrigin Dataset Our data includes information from home services, the logistics industry, and laboratory scenarios. For more details, please refer to our Official Data Website. Contents of dataset:
cyber_pipette # dataset root path
└── data/
    ├── metadata_ID1_240808.json
    ├── segment_ids_ID1_240808.bin # for each frame, segment_ids uniquely points to the segment index that frame i came from. You may want to use this to separate non-contiguous frames from different… See the full description on the dataset page: https://huggingface.co/datasets/cyberorigin/cyber_pipette.
cy948/ksdoc-airscript
cy948
Human Annotation Example We invited domain experts with coding experience in AirScript to add line-level annotations to the code snippets. For example: Data annotation example
/* This example checks whether the second (AboveAverage) conditional format in range B1:B10 on the active worksheet has type xlAboveAverageCondition, and deletes it if so. */
function test() {
+// Select the second conditional format from range B1:B10 on the worksheet
let aboveAverage = ActiveSheet.Range("B1:B10").FormatConditions.Item(2)
+// If the conditional format's type is `xlAboveAverageCondition`
if (aboveAverage.Type ==… See the full description on the dataset page: https://huggingface.co/datasets/cy948/ksdoc-airscript.
Teklia/Newspapers-finlam
Teklia
Newspaper segmentation dataset: Finlam Dataset Summary The Finlam dataset includes 149 French newspapers from the 19th to 20th centuries. Each newspaper contains multiple pages. Page images are resized to a fixed height of 2000 pixels. Each page contains multiple zones, with different information such as polygon, text, class, and order. Split:
set | images | newspapers
train | 623 | 129
val | 50 | 10
test | 48 | 10
Languages Most… See the full description on the dataset page: https://huggingface.co/datasets/Teklia/Newspapers-finlam.
QCRI/ThatiAR
QCRI
ThatiAR: Subjectivity Detection in Arabic News Sentences Along with the paper, we release the dataset and other experimental resources. Please find the directory structure below. Files Description
data/
  subjectivity_2024_dev.tsv: Development set for subjectivity detection in Arabic news sentences.
  subjectivity_2024_test.tsv: Test set for subjectivity detection in Arabic news sentences.
  subjectivity_2024_train.tsv: Training set for subjectivity detection… See the full description on the dataset page: https://huggingface.co/datasets/QCRI/ThatiAR.
szanella/MICO-DDP
szanella
MICO Differential Privacy Distinguisher challenge dataset Mico Argentatus (Silvery Marmoset) - William Warby/Flickr For the accompanying code, visit the GitHub repository of the competition: https://github.com/microsoft/MICO/. Getting Started The starting kit notebook for this task is available at: https://github.com/microsoft/MICO/tree/main/starting-kit. In the starting kit notebook you will find a walk-through of how to load the data and make your first… See the full description on the dataset page: https://huggingface.co/datasets/szanella/MICO-DDP.
SALT-NLP/Sketch2Code-hf
SALT-NLP
The Sketch2Code dataset consists of 731 human-drawn sketches paired with 484 real-world webpages from the Design2Code dataset, serving to benchmark Vision-Language Models (VLMs) on converting rudimentary sketches into web design prototypes. See the dataset in raw files here. Note that all images in these webpages are replaced by a blue placeholder image (rick.jpg). Please refer to our Project Page for more detailed information.
mussacharles60/mcv-sw-female-dataset
mussacharles60
Dataset Card for "mcv-sw-female" More Information needed
dino-zavr1/koch_test
dino-zavr1
This dataset was created using LeRobot.
mgrbyte/llyw-cymru-cy-en
mgrbyte
Llyw Cymru dataset This is a dataset comprising aligned sentences scraped from Welsh Government websites. The data in this dataset is made available under the Open Government Licence. Mae hon yn set ddata sy'n cynnwys brawddegau wedi'u halinio wedi'u crafu o wefannau Llywodraeth Cymru. Mae'r data yn y set ddata hon ar gael o dan y Trwydded Llywodraeth Agored. Example Usage Use the datasets module from huggingface to load the dataset:
import datasets
ds =… See the full description on the dataset page: https://huggingface.co/datasets/mgrbyte/llyw-cymru-cy-en.
wave-on-discord/HelpSteer2-incoherent
wave-on-discord
Dataset Card for "HelpSteer2-incoherent" More Information needed
navimii/UraniumTech_WorldMorph
navimii
Status: trained on SD1.5
Collection date: 26/06/2023 to 27/06/2023
Original intent: the illustration of a fictional world in a state of adaptation to the aftermath of uranium abuse.
ychen/Generated-Empathetic-Dialogues-v0.1-Smol
ychen
Generated Empathetic Conversations v0.1 - Smol This dataset contains 10K rows of multi-round empathetic conversations covering a diverse set of topics. Highlights Multi-round conversation It's not single-turn: the user and the assistant work together to gradually unfold the conversation. The average number of turns is 5, with a standard deviation of approximately 1.59 turns. A turn consists of two messages, one by the user and another by… See the full description on the dataset page: https://huggingface.co/datasets/ychen/Generated-Empathetic-Dialogues-v0.1-Smol.
dudosya/floor_plan
dudosya
There are 1600 low-resolution images of floor plans in the zip file. The CSV file contains the labels, which were acquired by web scraping the website dom4m.kz
mastilva/CVCreole
mastilva
license: afl-3.0
task_categories: automatic-speech-recognition
language: af
tags: medical
pretty_name: CVCreole
size_categories: 10K<n<100K
ringos/raw_ultrafeedback_binarized-Llama-3.1-8B-gemma2-rm
ringos
Dataset Card for "ultrafeedback_binarized-Llama-3.1-8B-gemma2-rm" More Information needed
hongguoguo/Low-bit-Cutlass-GEMM
hongguoguo
This is a dataset for low-bit Cutlass template evaluation, which contains the computation time of 19,350 GEMM executions for nine different CUTLASS GEMM templates for each low-bit data type on RTX 3080, RTX3090, and A100 NVIDIA Ampere GPUs. All GEMM time data are in milliseconds.
cs-mshah/SynMirror
cs-mshah
Dataset Card for SynMirror This repository hosts the data for Reflecting Reality: Enabling Diffusion Models to Produce Faithful Mirror Reflections. SynMirror is a first-of-its-kind large-scale synthetic dataset on mirror reflections, with diverse mirror types, objects, camera poses, HDRI backgrounds and floor textures. Dataset Details Dataset Description SynMirror consists of samples rendered from 3D assets of two widely used 3D object datasets -… See the full description on the dataset page: https://huggingface.co/datasets/cs-mshah/SynMirror.
WestlakeNLP/Research-14K
WestlakeNLP
CycleResearcher: Automated Research via Reinforcement Learning with Iterative Feedback HomePage: https://wengsyx.github.io/Researcher/ Researcher-14K Dataset The research-14k dataset is designed to capture both structured outlines and detailed main text from academic papers. The construction process involves three main steps: 1. Data Collection and Preprocessing We first compile accepted papers from major ML conferences (ICLR, NeurIPS, ICML, ACL… See the full description on the dataset page: https://huggingface.co/datasets/WestlakeNLP/Research-14K.
abhinav1kumar/ProcTHOR_Normal
abhinav1kumar
ProcTHOR_Normal Provides normals of the ProcTHOR dataset in a friendly manner. Dataset Structure
├── data
│   ├── train
│   │   └── normal
│   ├── val
│   │   └── normal
│   └── val_low
│       └── normal
Curation Rationale We use Kornia to generate these normals. Source Data From the ProcTHOR Simulator.
benoitfavre/Abuseval
benoitfavre
Abuseval Note: this is not the official valid/test split. Abusive language detection with three labels: NOTABU (not abusive), EXP (explicit), IMP (implicit). English tweets. User names, URLs, etc. have been replaced. See: https://github.com/tommasoc80/AbuseEval, https://github.com/joeykay9/offenseval
Deddy/cerita_17_tahun_indonesia
Deddy
Cerita Sebelum Tidur 17 Tahun: Indonesian Stories & Narratives Dataset ID: Deddy/cerita_17_tahun_indonesia 📖 Description Cerita Sebelum Tidur ("Bedtime Stories") is a diverse collection of Indonesian-language narrative stories, ranging from personal experiences and fictional tales to themes such as torture, partner swapping, and others. The dataset covers 1,500+ entries grouped by category, title, and content. The main goal of this dataset is to… See the full description on the dataset page: https://huggingface.co/datasets/Deddy/cerita_17_tahun_indonesia.
shenxq/OneVision
shenxq
Image training data of LongVU downloaded from https://huggingface.co/datasets/lmms-lab/LLaVA-OneVision-Data
Dehaim/Libpy
Dehaim
Join the project community on our server! Pwnagotchi is an A2C-based "AI" leveraging bettercap that learns from its surrounding WiFi environment to maximize the crackable WPA key material it captures (either passively, or by performing authentication and association attacks). This material is collected as PCAP files containing any form of handshake supported by hashcat, including PMKIDs, full and half WPA handshakes. Instead of merely playing Super… See the full description on the dataset page: https://huggingface.co/datasets/Dehaim/Libpy.
shenxq/VideoChat2
shenxq
Video training data of LongVU downloaded from https://huggingface.co/datasets/OpenGVLab/VideoChat2-IT Video Please download the original videos from the provided links:
BDD100K: bdd.zip
ShareGPTVideo: https://huggingface.co/datasets/ShareGPTVideo/train_video_and_instruction/tree/main/train_300k
CLEVRER: clevrer_qa.zip
DiDeMo: didemo.zip
EgoQA: https://huggingface.co/datasets/ynhe/videochat2_data/resolve/main/egoqa_split_videos.zip
Kinetics-710: k400.zip
MovieChat: moviechat.zip… See the full description on the dataset page: https://huggingface.co/datasets/shenxq/VideoChat2.
openmodelinitiative/Rapidata-Finding_the_Subjective_Truth
openmodelinitiative
This data was provided by Rapidata for open sourcing by the Open Model Initiative You can learn more about Rapidata's global preferencing & human labeling solutions at https://rapidata.ai/ This folder contains the data behind the paper "Finding the subjective Truth - Collecting 2 million votes for comprehensive gen-ai model evaluation" The paper can be found here: https://arxiv.org/html/2409.11904 Rapidata-Benchmark_v1.0.tsv: Contains the 282 prompts that were used to generate the images with… See the full description on the dataset page: https://huggingface.co/datasets/openmodelinitiative/Rapidata-Finding_the_Subjective_Truth.
open-llm-leaderboard/Langboat__Mengzi3-8B-Chat-details
open-llm-leaderboard
Dataset Card for Evaluation run of Langboat/Mengzi3-8B-Chat Dataset automatically created during the evaluation run of model Langboat/Mengzi3-8B-Chat The dataset is composed of 38 configurations, each corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/Langboat__Mengzi3-8B-Chat-details.
Nech-C/mineralimage5K-98
Nech-C
This dataset is a subset of the MineralImage5k dataset introduced by Nesteruk et al. in MineralImage5k: A benchmark for zero-shot raw mineral visual recognition and description. Their GitHub repo and link to the paper. All splits of this dataset are stratified.
open-llm-leaderboard/Youlln__ECE-PRYMMAL-0.5B-FT-V4-MUSR-details
open-llm-leaderboard
Dataset Card for Evaluation run of Youlln/ECE-PRYMMAL-0.5B-FT-V4-MUSR Dataset automatically created during the evaluation run of model Youlln/ECE-PRYMMAL-0.5B-FT-V4-MUSR The dataset is composed of 38 configurations, each corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/Youlln__ECE-PRYMMAL-0.5B-FT-V4-MUSR-details.
harpreetsahota/videos-to-test-trackers
harpreetsahota
Dataset Card for 2024.10.21.14.20.58 This is a FiftyOne dataset with 8 samples. Installation If you haven't already, install FiftyOne:
pip install -U fiftyone
Usage
import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub

# Load the dataset
# Note: other available arguments include 'max_samples', etc
dataset = load_from_hub("harpreetsahota/videos-to-test-trackers")

# Launch the App
session = fo.launch_app(dataset)… See the full description on the dataset page: https://huggingface.co/datasets/harpreetsahota/videos-to-test-trackers.
open-llm-leaderboard/Youlln__ECE-PRYMMAL-0.5B-FT-V3-MUSR-details
open-llm-leaderboard
Dataset Card for Evaluation run of Youlln/ECE-PRYMMAL-0.5B-FT-V3-MUSR Dataset automatically created during the evaluation run of model Youlln/ECE-PRYMMAL-0.5B-FT-V3-MUSR The dataset is composed of 38 configurations, each corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/Youlln__ECE-PRYMMAL-0.5B-FT-V3-MUSR-details.
CopyleftCultivars/syntheticdata-distiset-farming-chemistry
CopyleftCultivars
CREATED BY CALEB DELEEUW @Solshine Dataset Card for syntheticdata-distiset-farming-chemistry This dataset has been created with distilabel. Dataset Summary This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI: distilabel pipeline run --config "https://huggingface.co/datasets/Solshine/syntheticdata-distiset-farming-chemistry/raw/main/pipeline.yaml" or explore the… See the full description on the dataset page: https://huggingface.co/datasets/CopyleftCultivars/syntheticdata-distiset-farming-chemistry.
open-llm-leaderboard/zelk12__MT4-gemma-2-9B-details
open-llm-leaderboard
Dataset Card for Evaluation run of zelk12/MT4-gemma-2-9B Dataset automatically created during the evaluation run of model zelk12/MT4-gemma-2-9B The dataset is composed of 38 configurations, each corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/zelk12__MT4-gemma-2-9B-details.
Rutyrru/Titanico
Rutyrru
Extracted from https://github.com/anthony-wang/BestPractices/tree/master/data. Fields: Formula (string) T (float64): temperature (K) CP (float64): heat capacity (J/mol K)
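The three documented fields (Formula, T, CP) can be sketched as a small pandas frame; the sample row below is purely illustrative and is not taken from the actual dataset.

```python
import pandas as pd

# Illustrative sketch of the documented schema; the sample row is
# hypothetical, not a record from the dataset itself.
df = pd.DataFrame(
    {
        "Formula": ["NaCl"],  # chemical formula (string)
        "T": [298.15],        # temperature in K (float64)
        "CP": [50.5],         # heat capacity in J/(mol K) (float64)
    }
)

# Enforce the dtypes stated in the card.
df = df.astype({"Formula": "string", "T": "float64", "CP": "float64"})
```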
Devvrathans/edubot
Devvrathans
hi
JavIndra/attack-cves-data
JavIndra
Dataset Card for Dataset Name The dataset combines information from MITRE ATT&CK (Adversarial Tactics, Techniques, and Common Knowledge) and CVE (Common Vulnerabilities and Exposures) to provide a comprehensive resource for training language models on cybersecurity concepts and vulnerabilities. Dataset Details Dataset Description Each entry in the dataset typically includes: MITRE ATT&CK tactics and techniques CVE identifiers and descriptions… See the full description on the dataset page: https://huggingface.co/datasets/JavIndra/attack-cves-data.
open-llm-leaderboard/allknowingroger__Weirdslerp2-25B-details
open-llm-leaderboard
Dataset Card for Evaluation run of allknowingroger/Weirdslerp2-25B Dataset automatically created during the evaluation run of model allknowingroger/Weirdslerp2-25B The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/allknowingroger__Weirdslerp2-25B-details.
open-llm-leaderboard/allknowingroger__Qwenslerp2-14B-details
open-llm-leaderboard
Dataset Card for Evaluation run of allknowingroger/Qwenslerp2-14B Dataset automatically created during the evaluation run of model allknowingroger/Qwenslerp2-14B The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/allknowingroger__Qwenslerp2-14B-details.
open-llm-leaderboard/allknowingroger__Qwenslerp3-14B-details
open-llm-leaderboard
Dataset Card for Evaluation run of allknowingroger/Qwenslerp3-14B Dataset automatically created during the evaluation run of model allknowingroger/Qwenslerp3-14B The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/allknowingroger__Qwenslerp3-14B-details.
open-llm-leaderboard/allknowingroger__Qwen2.5-slerp-14B-details
open-llm-leaderboard
Dataset Card for Evaluation run of allknowingroger/Qwen2.5-slerp-14B Dataset automatically created during the evaluation run of model allknowingroger/Qwen2.5-slerp-14B The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/allknowingroger__Qwen2.5-slerp-14B-details.
freQuensy23/russian_words
freQuensy23
A dataset of common, frequently used words in the Russian language.
NingLab/MMECInstruct
NingLab
Introduction MMECInstruct comprises 7 tasks: answerability prediction, category classification, product relation prediction, product substitute identification, multiclass product classification, sentiment analysis, and sequential recommendation. MMECInstruct is split into training sets, validation sets, IND test sets, and OOD test sets. Dataset Sources Repository: GitHub Homepage: CASLIE Data Split The statistics for the MMECInstruct… See the full description on the dataset page: https://huggingface.co/datasets/NingLab/MMECInstruct.
zlicastro/zanya-custom-raw-dataset
zlicastro
Zanya's Custom RAW Dataset Repository: https://huggingface.co/datasets/zlicastro/zanya-custom-raw-dataset Dataset Summary This dataset contains images I've taken in RAW format alongside the accompanying .jpg files. It is organized into subfolders based on the camera type.
jotone/ss_members_dataset
jotone
SS members dataset This dataset contains information about some of the members of the SS during the Third Reich. Source This dataset was made by parsing this page and all its subpages: https://www.dws-xip.com/reich/biografie/numery/numerA.html Disclaimer This dataset is provided solely for archival and educational purposes. I do not support or endorse Nazi ideology, the actions of the SS, or any related beliefs. The information contained within… See the full description on the dataset page: https://huggingface.co/datasets/jotone/ss_members_dataset.
open-llm-leaderboard/zelk12__MT5-gemma-2-9B-details
open-llm-leaderboard
Dataset Card for Evaluation run of zelk12/MT5-gemma-2-9B Dataset automatically created during the evaluation run of model zelk12/MT5-gemma-2-9B The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/zelk12__MT5-gemma-2-9B-details.
lil-lab/respect
lil-lab
https://arxiv.org/abs/2410.13852
haotiansun014/PRM_DS
haotiansun014
Dataset Card for Dataset Name This dataset card aims to be a base template for new datasets. It has been generated using this raw template. Dataset Details Dataset Description Curated by: [More Information Needed] Funded by [optional]: [More Information Needed] Shared by [optional]: [More Information Needed] Language(s) (NLP): [More Information Needed] License: [More Information Needed] Dataset Sources [optional]… See the full description on the dataset page: https://huggingface.co/datasets/haotiansun014/PRM_DS.
aractingi/reach_cube_sim
aractingi
This dataset was created using LeRobot.
open-llm-leaderboard/allknowingroger__Qwen2.5-42B-AGI-details
open-llm-leaderboard
Dataset Card for Evaluation run of allknowingroger/Qwen2.5-42B-AGI Dataset automatically created during the evaluation run of model allknowingroger/Qwen2.5-42B-AGI The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/allknowingroger__Qwen2.5-42B-AGI-details.
BrunoHays/mixed_multilingual_commonvoice_all_languages_100k
BrunoHays
Built from Mozilla Common Voice 13 using the script committed in this repo. Used to teach a model to ignore languages that are not French.
SiliangZ/ultrachat_200k_mistral_sft_temp07
SiliangZ
Dataset Card for "ultrachat_200k_mistral_sft_temp07" More Information needed
Dashon/test
Dashon
Dataset Card for Dataset Name This dataset card aims to be a base template for new datasets. It has been generated using this raw template. Dataset Details Dataset Description Curated by: [More Information Needed] Funded by [optional]: [More Information Needed] Shared by [optional]: [More Information Needed] Language(s) (NLP): [More Information Needed] License: [More Information Needed] Dataset Sources [optional]… See the full description on the dataset page: https://huggingface.co/datasets/Dashon/test.
GoofyGoof/Web_UI
GoofyGoof
Web UI Dataset This dataset contains web pages, their screenshots across different devices, and images extracted from the web pages. Scrolling videos are stored separately in the 'video' folder. It is intended for use in machine learning tasks related to web design, computer vision, and data analysis. Dataset Summary Total web pages: 34 Total images: 492 Total screenshots: 102 Total videos: 34 Contents For each web page, the dataset includes: URL… See the full description on the dataset page: https://huggingface.co/datasets/GoofyGoof/Web_UI.
lianghsun/tw-law-article-num-convention
lianghsun
Dataset Card for the Republic of China (Taiwan) Statute Article-Numbering Convention Dataset (tw-law-article-num-convention) Dataset Summary This dataset collects the article-numbering conventions commonly used in the statutes of the Republic of China (Taiwan); it currently covers the Constitution and statute-level laws, and does not yet cover administrative regulations. Supported Tasks and Leaderboards This dataset can be used for (continuous) pre-training, to teach models the article-numbering conventions of ROC (Taiwan) statutes. Languages Traditional Chinese. Dataset Structure Data Instances ...(WIP)... Data Fields ...(WIP)... Data Splits ...(WIP)... Dataset Creation… See the full description on the dataset page: https://huggingface.co/datasets/lianghsun/tw-law-article-num-convention.
ashwinvk94/so100_test
ashwinvk94
This dataset was created using LeRobot.
RolandMinrui/ScaleBio-Baseline
RolandMinrui
These are the baseline datasets for ScaleBio.
MrPotter64/uplimit-hw1
MrPotter64
Dataset Card for uplimit-hw1 This dataset has been created with distilabel. Dataset Summary This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel using the distilabel CLI: distilabel pipeline run --config "https://huggingface.co/datasets/MrPotter64/uplimit-hw1/raw/main/pipeline.yaml" or explore the configuration: distilabel pipeline info --config… See the full description on the dataset page: https://huggingface.co/datasets/MrPotter64/uplimit-hw1.
andrewliao11/Q-Spatial-Bench
andrewliao11
Dataset Card for Q-Spatial Bench Q-Spatial Bench is a benchmark designed to measure quantitative spatial reasoning 📏 in large vision-language models. 🔥 The paper associated with Q-Spatial Bench has been accepted to the EMNLP 2024 main track! Our paper: Reasoning Paths with Reference Objects Elicit Quantitative Spatial Reasoning in Large Vision-Language Models [arXiv link] Project website: [link] Dataset Details Q-Spatial Bench is a benchmark designed to measure… See the full description on the dataset page: https://huggingface.co/datasets/andrewliao11/Q-Spatial-Bench.
WhiskyNick/koch_test
WhiskyNick
This dataset was created using LeRobot.
open-llm-leaderboard/ymcki__gemma-2-2b-jpn-it-abliterated-18-ORPO-details
open-llm-leaderboard
Dataset Card for Evaluation run of ymcki/gemma-2-2b-jpn-it-abliterated-18-ORPO Dataset automatically created during the evaluation run of model ymcki/gemma-2-2b-jpn-it-abliterated-18-ORPO The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/ymcki__gemma-2-2b-jpn-it-abliterated-18-ORPO-details.
open-llm-leaderboard/Qwen__Qwen2-VL-72B-Instruct-details
open-llm-leaderboard
Dataset Card for Evaluation run of Qwen/Qwen2-VL-72B-Instruct Dataset automatically created during the evaluation run of model Qwen/Qwen2-VL-72B-Instruct The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An… See the full description on the dataset page: https://huggingface.co/datasets/open-llm-leaderboard/Qwen__Qwen2-VL-72B-Instruct-details.
WhiskyNick/car_in_case
WhiskyNick
This dataset was created using LeRobot.
WhiskyNick/match_in_case
WhiskyNick
This dataset was created using LeRobot.
Jack51003/custom-dataset
Jack51003
Dataset Card for Dataset Name This dataset card aims to be a base template for new datasets. It has been generated using this raw template. Dataset Details Dataset Description Curated by: [More Information Needed] Funded by [optional]: [More Information Needed] Shared by [optional]: [More Information Needed] Language(s) (NLP): [More Information Needed] License: [More Information Needed] Dataset Sources [optional]… See the full description on the dataset page: https://huggingface.co/datasets/Jack51003/custom-dataset.