---
license: mit
tags:
  - symbolic-music
  - music-information-retrieval
  - classification
  - retrieval
  - benchmark
---

# SyMuRBench Datasets and Precomputed Features

This repository contains datasets and precomputed features for SyMuRBench, a benchmark for symbolic music understanding models. It includes metadata and MIDI files for multiple classification and retrieval tasks, along with pre-extracted music21 and jSymbolic features.

You can install and use the full pipeline via: 👉 https://github.com/Mintas/SyMuRBench


## Overview

SyMuRBench supports evaluation across diverse symbolic music tasks, including composer, genre, emotion, and instrument classification, as well as score-performance retrieval. This Hugging Face dataset provides:

- Dataset metadata (CSV files)
- MIDI files organized by task
- Precomputed music21 and jSymbolic features
- Configuration-ready structure for immediate use in benchmarking

## Task Descriptions

| Task Name | Source Dataset | Task Type | # of Classes | # of Files | Default Metrics |
|---|---|---|---|---|---|
| ComposerClassificationASAP | ASAP | Multiclass Classification | 7 | 197 | weighted F1 score, balanced accuracy |
| GenreClassificationMMD | MetaMIDI | Multiclass Classification | 7 | 2,795 | weighted F1 score, balanced accuracy |
| GenreClassificationWMTX | WikiMT-X | Multiclass Classification | 8 | 985 | weighted F1 score, balanced accuracy |
| EmotionClassificationEMOPIA | EMOPIA | Multiclass Classification | 4 | 191 | weighted F1 score, balanced accuracy |
| EmotionClassificationMIREX | MIREX | Multiclass Classification | 5 | 163 | weighted F1 score, balanced accuracy |
| InstrumentDetectionMMD | MetaMIDI | Multilabel Classification | 128 | 4,675 | weighted F1 score |
| ScorePerformanceRetrievalASAP | ASAP | Retrieval | - | 438 (219 pairs) | R@1, R@5, R@10, Median Rank |
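The default classification metrics correspond to the standard scikit-learn implementations. The snippet below is only a minimal sketch of how they can be computed; it is not the benchmark's internal code, and the toy labels are made up for illustration:

```python
from sklearn.metrics import balanced_accuracy_score, f1_score

# Toy labels for illustration only; in practice these come from a probing
# classifier trained on the extracted symbolic-music features.
y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 1, 2, 1, 1, 0]

weighted_f1 = f1_score(y_true, y_pred, average="weighted")  # used by all classification tasks
bal_acc = balanced_accuracy_score(y_true, y_pred)           # used by the multiclass tasks

print(f"weighted F1: {weighted_f1:.3f}, balanced accuracy: {bal_acc:.3f}")
```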

## Precomputed Features

Precomputed features are available in the `data/features/` folder:

- `music21_full_dataset.parquet`
- `jsymbolic_full_dataset.parquet`

Each file contains a unified table with:

- `midi_file`: filename of the MIDI file
- `task`: corresponding task name
- `E_0` to `E_N`: components of the feature vector

### Example

| midi_file | task | E_0 | E_1 | ... | E_672 | E_673 |
|---|---|---|---|---|---|---|
| Q1_0vLPYiPN7qY_1.mid | EmotionClassificationEMOPIA | 0.0 | 0.0 | ... | 0.0 | 0.0 |
| Q1_4dXC1cC7crw_0.mid | EmotionClassificationEMOPIA | 0.0 | 0.0 | ... | 0.0 | 0.0 |
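Once downloaded, the feature tables can be inspected with pandas. This is a minimal sketch, assuming `pandas` with a parquet engine (e.g. `pyarrow`) is installed and the file has been fetched to the local path shown:

```python
import pandas as pd

# Load one of the precomputed feature tables (local path after download).
features = pd.read_parquet("data/features/music21_full_dataset.parquet")

# Select rows for a single task and split metadata from the feature matrix.
emopia = features[features["task"] == "EmotionClassificationEMOPIA"]
feature_cols = [c for c in emopia.columns if c.startswith("E_")]
X = emopia[feature_cols].to_numpy()

print(emopia["midi_file"].head())
print("feature matrix shape:", X.shape)
```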

## File Structure

The dataset is distributed as a ZIP archive: `data/datasets.zip`.

After extraction, the structure is:

```
datasets/
├── composer_and_retrieval_datasets/
│   ├── metadata_composer_dataset.csv
│   ├── metadata_retrieval_dataset.csv
│   └── ... (MIDI files organized in subfolders)
├── genre_dataset/
│   ├── metadata_genre_dataset.csv
│   └── midis/
├── wikimtx_dataset/
│   ├── metadata_wikimtx_dataset.csv
│   └── midis/
├── emopia_dataset/
│   ├── metadata_emopia_dataset.csv
│   └── midis/
├── mirex_dataset/
│   ├── metadata_mirex_dataset.csv
│   └── midis/
└── instrument_dataset/
    ├── metadata_instrument_dataset.csv
    └── midis/
```

- CSV files: contain the filename and label for each entry (or pair information for retrieval).
- MIDI files: used as input for the feature extractors.
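As a quick check after extraction, the metadata tables can be read directly with pandas. This is only a sketch; the exact column names are not documented here, so inspect the header first, and adjust the path to wherever the archive was extracted:

```python
import pandas as pd

# Read one of the per-task metadata tables (path assumed from the structure above).
meta = pd.read_csv("data/datasets/emopia_dataset/metadata_emopia_dataset.csv")

# Column names are not fixed in this README; print them to see how
# filenames and labels are stored for this task.
print(meta.columns.tolist())
print(meta.head())
```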

## How to Use

You can download and extract everything using the built-in utility:

```python
from symurbench.utils import load_datasets

load_datasets(output_folder="./data", load_features=True)
```

This will:

- Download `datasets.zip` and extract it
- Optionally download the precomputed features
- Update config paths automatically
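If you prefer not to use the helper, the archive can also be fetched and extracted manually. Below is a minimal sketch using `huggingface_hub`; the repository id `ai-forever/symurbench_datasets` and the output folder are assumptions, so adjust them to your setup:

```python
import zipfile
from huggingface_hub import hf_hub_download

# Download the archive from the dataset repository (repo id assumed).
zip_path = hf_hub_download(
    repo_id="ai-forever/symurbench_datasets",
    filename="data/datasets.zip",
    repo_type="dataset",
)

# Extract the MIDI files and metadata CSVs.
with zipfile.ZipFile(zip_path) as zf:
    zf.extractall("./data")
```

Note that this manual route does not update the benchmark's config paths; only `load_datasets` does that.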

## License

This dataset is released under the MIT License.


## Citation

If you use SyMuRBench in your work, please cite:

```bibtex
@inproceedings{symurbench2025,
  author    = {Petr Strepetov and Dmitrii Kovalev},
  title     = {SyMuRBench: Benchmark for Symbolic Music Representations},
  booktitle = {Proceedings of the 3rd International Workshop on Multimedia Content Generation and Evaluation: New Methods and Practice (McGE '25)},
  year      = {2025},
  pages     = {9},
  publisher = {ACM},
  address   = {Dublin, Ireland},
  doi       = {10.1145/3746278.3759392}
}
```