---
language:
- en
license: mit
size_categories:
- <1K
task_categories:
- audio-classification
- audio-to-audio
tags:
- emotional-speech
- audio-quality
- perceptual-evaluation
- codec-evaluation
- mushra
- visqol
- polqa
- benchmark
---

# EARS-EMO-OpenACE: A Full-band Coded Emotional Speech Quality Dataset

[Paper](https://huggingface.co/papers/2409.08374) | [Code](https://github.com/JozefColdenhoff/OpenACE)

## Dataset Description

This dataset contains full-band coded emotional speech samples at 16 kbps with human perceptual quality ratings and objective quality metrics. It is designed for research in audio quality assessment, emotion recognition, and codec evaluation.

## Key Features
- Coded audio samples produced with the Opus, LC3, LC3Plus, and EVS codecs
- Full-band emotional speech samples (48kHz, 24-bit), 252 files in total:
  - 6 speakers x 6 emotions x 4 codecs = 144 scored coded files
  - 6 speakers x 6 emotions x 2 MUSHRA anchors = 72 scored anchor files
  - 6 speakers x 6 emotions x 1 reference = 36 reference files
- Human MUSHRA ratings (perceptual quality) obtained from 36 listeners
- Reference and coded audio pairs available for each emotion and codec
- Computed VISQOL objective quality scores
- Computed POLQA objective quality scores
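
The file counts above can be sanity-checked with a little arithmetic:

```python
# Expected file counts implied by the dataset composition
n_speakers, n_emotions = 6, 6
n_codecs, n_anchors, n_refs = 4, 2, 1

coded = n_speakers * n_emotions * n_codecs     # 144 scored coded files
anchors = n_speakers * n_emotions * n_anchors  # 72 scored anchor files
refs = n_speakers * n_emotions * n_refs        # 36 reference files

total = coded + anchors + refs
print(coded, anchors, refs, total)  # 144 72 36 252
```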

## Applications
- Audio codec evaluation
- Perceptual quality modeling
- Emotion-aware audio processing
- Quality metric validation

## Dataset Structure

```
EARS-EMO-OpenACE/
├── metadata.csv              # Main dataset metadata
├── dataset_summary.json      # Dataset statistics
├── README.md                 # This file
└── [speaker]/                # Speaker directories
    └── emo_[emotion]_freeform/   # Emotion directories
        ├── reference.wav         # Reference audio
        ├── [codec].wav           # Coded audio files
        └── ...
```
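
A small helper can rebuild these paths programmatically. This is a sketch following the layout above; the exact casing of the emotion directories and `[codec].wav` filenames is an assumption, and the paths recorded in `metadata.csv` remain authoritative:

```python
from pathlib import Path

def coded_path(root: str, speaker: str, emotion: str, codec: str) -> Path:
    """Build the path to a coded file following the directory layout above.

    Assumes emotion directories use the lowercase form, e.g. emo_anger_freeform
    (an assumption; check metadata.csv for the authoritative paths).
    """
    return Path(root) / speaker / f"emo_{emotion.lower()}_freeform" / f"{codec}.wav"

print(coded_path("EARS-EMO-OpenACE", "p102", "Anger", "Opus"))
```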

## Metadata Schema

| Column | Description | Scale/Range |
|--------|-------------|-------------|
| `dataset` | Dataset identifier | EARS-EMO-OpenACE |
| `speaker` | Speaker ID | p102, p103, p104, p105, p106, p107 |
| `emotion` | Emotional expression | Anger, Ecstasy, Fear, Neutral, Pain, Sadness |
| `codec` | Audio codec used | EVS, LC3, LC3Plus, Opus, LowAnchor, MidAnchor |
| `reference_file` | Path to reference audio | Relative path |
| `distorted_file` | Path to coded audio | Relative path |
| `reference_mushra_rating` | Human quality rating for reference | 0-100 (MUSHRA scale) |
| `distorted_mushra_rating` | Human quality rating for coded audio | 0-100 (MUSHRA scale) |
| `mushra_rating_difference` | Quality degradation | Reference - Distorted |
| `visqol_score` | VISQOL objective quality score | 1-5 (higher = better) |
| `polqa_score` | POLQA objective quality score | 1-5 (higher = better) |
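
As a quick illustration of the schema, `mushra_rating_difference` can be recomputed from the two rating columns. The sketch below uses a tiny synthetic frame with made-up values in place of `metadata.csv`:

```python
import pandas as pd

# Tiny synthetic frame standing in for metadata.csv (values are illustrative)
df = pd.DataFrame({
    "codec": ["Opus", "LowAnchor"],
    "reference_mushra_rating": [96.0, 94.0],
    "distorted_mushra_rating": [78.0, 25.0],
})

# mushra_rating_difference is defined as Reference - Distorted
df["mushra_rating_difference"] = (
    df["reference_mushra_rating"] - df["distorted_mushra_rating"]
)
print(df["mushra_rating_difference"].tolist())  # [18.0, 69.0]
```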

## Codec Information

- **Reference**: EARS source file
- **EVS**: Enhanced Voice Services codec
- **LC3**: Low Complexity Communication Codec
- **LC3Plus**: Enhanced version of LC3
- **Opus**: Open-source audio codec
- **LowAnchor**: Low-quality anchor (lp3500) - Human ratings only
- **MidAnchor**: Mid-quality anchor (lp7000) - Human ratings only

## Quality Metrics

### MUSHRA Ratings
- **Scale**: 0-100 (higher = better quality)
- **Method**: Multiple Stimuli with Hidden Reference and Anchor
- **Raters**: Trained human listeners
- **Reference**: Original uncompressed audio (typically rated ~95-100)

### VISQOL Scores
- **Scale**: 1-5 (higher = better quality)
- **Method**: Virtual Speech Quality Objective Listener
- **Type**: Objective perceptual quality metric
- **Coverage**: Available for most codecs (excluding anchors)

### POLQA Scores
- **Scale**: 1-5 (higher = better quality)
- **Method**: Perceptual Objective Listening Quality Assessment
- **Standard**: ITU-T P.863
- **Coverage**: Available for most codecs (excluding anchors)

## Usage Examples

### Load Dataset
```python
import pandas as pd

# Load metadata
metadata = pd.read_csv('metadata.csv')

# Filter by emotion
anger_samples = metadata[metadata['emotion'] == 'Anger']

# Filter by codec
opus_samples = metadata[metadata['codec'] == 'Opus']

# Get high-quality samples (MUSHRA > 80)
high_quality = metadata[metadata['distorted_mushra_rating'] > 80]
```

### Audio Loading
```python
import librosa

# Paths in metadata.csv are relative to the dataset root
audio_path = metadata.iloc[0]['distorted_file']

# sr=None preserves the native 48 kHz sampling rate
audio, sr = librosa.load(audio_path, sr=None)
```


## Correlation Analysis

The following table shows Pearson correlations between objective metrics and human MUSHRA ratings:

| Metric | Description | Correlation (r) | p-value | Sample Size | Significance |
|--------|-------------|-----------------|---------|-------------|--------------|
| VISQOL | Virtual Speech Quality Objective Listener | 0.7034 | 8.3513e-23 | 144 | *** |
| POLQA | Perceptual Objective Listening Quality Assessment | 0.7939 | 2.9297e-32 | 143 | *** |

**Significance levels:** *** p<0.001, ** p<0.01, * p<0.05, n.s. = not significant

**Note:** Correlations computed excluding anchor codecs (lp3500/lp7000) which only have human ratings.
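
A correlation of this kind can be reproduced with `scipy.stats.pearsonr`. The sketch below uses synthetic, deliberately correlated stand-in columns; with the real dataset you would load `metadata.csv` and drop the anchor rows first:

```python
import numpy as np
from scipy.stats import pearsonr

# Synthetic stand-ins for distorted_mushra_rating and visqol_score
rng = np.random.default_rng(0)
mushra = rng.uniform(20, 95, size=144)                          # MUSHRA-like, 0-100
visqol = 1 + 4 * (mushra - 20) / 75 + rng.normal(0, 0.3, 144)   # correlated, 1-5 scale

r, p = pearsonr(visqol, mushra)
print(f"r = {r:.3f}, p = {p:.2e}")
```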


## Citation

If you use this dataset in your research, please cite the following [paper](https://arxiv.org/abs/2409.08374):

```bibtex
@inproceedings{OpenACE-Coldenhoff2025,
  author={Coldenhoff, Jozef and Granqvist, Niclas and Cernak, Milos},
  booktitle={ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, 
  title={{OpenACE: An Open Benchmark for Evaluating Audio Coding Performance}}, 
  year={2025},
  pages={1-5},
  keywords={Codecs;Speech coding;Audio coding;Working environment noise;Benchmark testing;Data augmentation;Data models;Vectors;Reverberation;Speech processing;audio coding;benchmarks;deep learning;speech processing},
  doi={10.1109/ICASSP49660.2025.10889159}
}
```

## License

[MIT License](https://github.com/JozefColdenhoff/OpenACE/blob/main/LICENSE)

## Contact

Milos Cernak, milos.cernak at ieee dot org

August 1, 2025