---
language:
- en
license: mit
size_categories:
- n<1K
task_categories:
- audio-classification
- audio-to-audio
tags:
- emotional-speech
- audio-quality
- perceptual-evaluation
- codec-evaluation
- mushra
- visqol
- polqa
- benchmark
---
# EARS-EMO-OpenACE: A Full-band Coded Emotional Speech Quality Dataset

## Dataset Description

This dataset contains full-band coded emotional speech samples at 16 kbps, together with human perceptual quality ratings and objective quality metrics. It is designed for research in audio quality assessment, emotion recognition, and codec evaluation.
## Key Features

- Coded audio samples using the Opus, LC3, LC3Plus, and EVS codecs
- Full-band emotional speech samples (48 kHz, 24-bit), 252 files in total:
  - 6 speakers x 6 emotions x 4 codecs = 144 scored coded files
  - 6 speakers x 6 emotions x 2 MUSHRA anchors = 72 scored anchor files
  - 6 speakers x 6 emotions x 1 reference = 36 reference files
- Human MUSHRA ratings (perceptual quality) obtained from 36 listeners
- Reference and coded audio pairs for each emotion and codec
- Computed VISQOL objective quality scores
- Computed POLQA objective quality scores
## Applications
- Audio codec evaluation
- Perceptual quality modeling
- Emotion-aware audio processing
- Quality metric validation
## Dataset Structure

```
EARS-EMO-OpenACE/
├── metadata.csv                 # Main dataset metadata
├── dataset_summary.json         # Dataset statistics
├── README.md                    # This file
└── [speaker]/                   # Speaker directories
    └── emo_[emotion]_freeform/  # Emotion directories
        ├── reference.wav        # Reference audio
        ├── [codec].wav          # Coded audio files
        └── ...
```
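Given this layout, a reference/coded pair for one condition can be located by path construction alone. A minimal sketch, assuming the directory pattern above with lowercase emotion folder names (the specific speaker, emotion, and codec values here are illustrative):

```python
from pathlib import Path

root = Path("EARS-EMO-OpenACE")  # path to your local copy
speaker, emotion, codec = "p102", "anger", "Opus"  # illustrative values

# Each emo_[emotion]_freeform/ folder holds reference.wav plus one wav per codec
ref = root / speaker / f"emo_{emotion}_freeform" / "reference.wav"
coded = ref.with_name(f"{codec}.wav")
```

The same pattern with `root.glob("*/emo_*_freeform/reference.wav")` enumerates every condition in the dataset.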
## Metadata Schema

| Column | Description | Scale/Range |
|---|---|---|
| `dataset` | Dataset identifier | EARS-EMO-OpenACE |
| `speaker` | Speaker ID | p102, p103, p104, p105, p106, p107 |
| `emotion` | Emotional expression | Anger, Ecstasy, Fear, Neutral, Pain, Sadness |
| `codec` | Audio codec used | EVS, LC3, LC3Plus, Opus, LowAnchor, MidAnchor |
| `reference_file` | Path to reference audio | Relative path |
| `distorted_file` | Path to coded audio | Relative path |
| `reference_mushra_rating` | Human quality rating for the reference | 0-100 (MUSHRA scale) |
| `distorted_mushra_rating` | Human quality rating for the coded audio | 0-100 (MUSHRA scale) |
| `mushra_rating_difference` | Quality degradation | Reference - Distorted |
| `visqol_score` | VISQOL objective quality score | 1-5 (higher = better) |
| `polqa_score` | POLQA objective quality score | 1-5 (higher = better) |
## Codec Information
- Reference: EARS source file
- EVS: Enhanced Voice Services codec
- LC3: Low Complexity Communication Codec
- LC3Plus: Enhanced version of LC3
- Opus: Open-source audio codec
- LowAnchor: Low-quality anchor (lp3500) - Human ratings only
- MidAnchor: Mid-quality anchor (lp7000) - Human ratings only
## Quality Metrics

### MUSHRA Ratings
- Scale: 0-100 (higher = better quality)
- Method: Multiple Stimuli with Hidden Reference and Anchor
- Raters: Trained human listeners
- Reference: Original uncompressed audio (typically ~95-100)
### VISQOL Scores
- Scale: 1-5 (higher = better quality)
- Method: Virtual Speech Quality Objective Listener
- Type: Objective perceptual quality metric
- Coverage: Available for most codecs (excluding anchors)
### POLQA Scores
- Scale: 1-5 (higher = better quality)
- Method: Perceptual Objective Listening Quality Assessment
- Standard: ITU-T P.863
- Coverage: Available for most codecs (excluding anchors)
## Usage Examples

### Load Dataset

```python
import pandas as pd

# Load metadata
metadata = pd.read_csv('metadata.csv')

# Filter by emotion
anger_samples = metadata[metadata['emotion'] == 'Anger']

# Filter by codec
opus_samples = metadata[metadata['codec'] == 'Opus']

# Get high-quality samples (MUSHRA > 80)
high_quality = metadata[metadata['distorted_mushra_rating'] > 80]
```
### Audio Loading

```python
import librosa

# Load audio at its native sample rate (48 kHz)
audio_path = metadata.iloc[0]['distorted_file']
audio, sr = librosa.load(audio_path, sr=None)
```
## Correlation Analysis
The following table shows Pearson correlations between objective metrics and human MUSHRA ratings:
| Metric | Description | Correlation (r) | p-value | Sample Size | Significance |
|---|---|---|---|---|---|
| VISQOL | Virtual Speech Quality Objective Listener | 0.7034 | 8.3513e-23 | 144 | *** |
| POLQA | Perceptual Objective Listening Quality Assessment | 0.7939 | 2.9297e-32 | 143 | *** |
Significance levels: *** p<0.001, ** p<0.01, * p<0.05, n.s. = not significant
Note: Correlations computed excluding anchor codecs (lp3500/lp7000) which only have human ratings.
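Correlations of this kind can be reproduced from `metadata.csv` by dropping the anchor rows (which carry human ratings only) and correlating the remaining scores. A sketch using NumPy on illustrative values standing in for the real metadata:

```python
import numpy as np
import pandas as pd

# Toy stand-in for metadata.csv (scores are illustrative, not real data)
df = pd.DataFrame({
    "codec": ["Opus", "EVS", "LC3", "LC3Plus", "LowAnchor"],
    "distorted_mushra_rating": [78.0, 85.0, 70.0, 74.0, 25.0],
    "visqol_score": [3.9, 4.3, 3.5, 3.7, np.nan],
})

# Anchors have no objective scores; exclude them before correlating
scored = df[~df["codec"].isin(["LowAnchor", "MidAnchor"])]
r = np.corrcoef(scored["distorted_mushra_rating"], scored["visqol_score"])[0, 1]
```

For the p-values reported above, `scipy.stats.pearsonr` returns both the coefficient and its significance in one call.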
## Citation
If you use this dataset in your research, please cite the following paper:
```bibtex
@inproceedings{OpenACE-Coldenhoff2025,
  author={Coldenhoff, Jozef and Granqvist, Niclas and Cernak, Milos},
  booktitle={ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  title={{OpenACE: An Open Benchmark for Evaluating Audio Coding Performance}},
  year={2025},
  pages={1-5},
  keywords={Codecs;Speech coding;Audio coding;Working environment noise;Benchmark testing;Data augmentation;Data models;Vectors;Reverberation;Speech processing;audio coding;benchmarks;deep learning;speech processing},
  doi={10.1109/ICASSP49660.2025.10889159}
}
```
## License

This dataset is released under the MIT License.
## Contact
Milos Cernak, milos.cernak at ieee dot org
August 1, 2025