# PyTorch Implementation of Audio Flamingo 2

**Sreyan Ghosh, Zhifeng Kong, Sonal Kumar, S Sakshi, Jaehyeon Kim, Wei Ping, Rafael Valle, Dinesh Manocha, Bryan Catanzaro**

[[paper]](https://arxiv.org/abs/2503.03983) [[Demo website]](https://research.nvidia.com/labs/adlr/AF2/) [[GitHub]](https://github.com/NVIDIA/audio-flamingo)

This repo contains the PyTorch implementation of [Audio Flamingo 2: An Audio-Language Model with Long-Audio Understanding and Expert Reasoning Abilities](https://arxiv.org/abs/2503.03983). Audio Flamingo 2 achieves state-of-the-art performance across over 20 benchmarks with only a 3B-parameter small language model. It improves upon our previous [Audio Flamingo](https://arxiv.org/abs/2402.01831).

- We introduce two datasets, AudioSkills for expert audio reasoning and LongAudio for long audio understanding, to advance the field of audio understanding.

- Audio Flamingo 2 has advanced audio understanding and reasoning capabilities. In particular, it has expert audio reasoning abilities and can understand long audio of up to 5 minutes.

- Audio Flamingo 2 outperforms larger and proprietary LALMs across 20+ benchmarks, despite being smaller (3B) and trained exclusively on public datasets.

## Main Results

Audio Flamingo 2 outperforms prior SOTA models including GAMA, Audio Flamingo, Qwen-Audio, Qwen2-Audio, LTU, LTU-AS, SALMONN, AudioGPT, Gemini Flash v2, Gemini Pro v1.5, and GPT-4o-audio on a number of understanding and reasoning benchmarks.

<div align="center">
<img class="img-full" src="assets/af2_radar.png" width="300">
</div>

<div align="center">
<img class="img-full" src="assets/af2_table2.png" width="400">
</div>

## Audio Flamingo 2 Architecture

Audio Flamingo 2 uses a cross-attention architecture similar to [Audio Flamingo](https://arxiv.org/abs/2402.01831) and [Flamingo](https://arxiv.org/abs/2204.14198), and can take audio inputs of up to 5 minutes.

<div align="center">
<img class="img-full" src="assets/af2_arch.png" width="800">
</div>
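
The sketch below illustrates the core idea behind this design: a Flamingo-style gated cross-attention block in which text hidden states attend to audio features, with tanh gates initialized at zero so the pretrained language model starts out unchanged. This is a minimal PyTorch illustration of the general mechanism, not the actual Audio Flamingo 2 implementation; the hidden size, head count, and module layout here are assumptions for demonstration.

```
import torch
import torch.nn as nn

class GatedCrossAttentionBlock(nn.Module):
    """Flamingo-style gated cross-attention (illustrative sketch).

    Text tokens attend to audio features; tanh gates initialized at
    zero make the block an identity mapping at the start of training.
    """

    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.attn_norm = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.attn_gate = nn.Parameter(torch.zeros(1))
        self.ff_norm = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.ff_gate = nn.Parameter(torch.zeros(1))

    def forward(self, text: torch.Tensor, audio: torch.Tensor) -> torch.Tensor:
        # text:  (batch, text_len, d_model) language model hidden states
        # audio: (batch, audio_len, d_model) projected audio features
        attn_out, _ = self.attn(self.attn_norm(text), audio, audio)
        text = text + self.attn_gate.tanh() * attn_out
        text = text + self.ff_gate.tanh() * self.ff(self.ff_norm(text))
        return text

# Hypothetical shapes for illustration only.
block = GatedCrossAttentionBlock(d_model=2048)
text = torch.randn(1, 32, 2048)    # 32 text tokens
audio = torch.randn(1, 600, 2048)  # audio features from a long clip
out = block(text, audio)           # (1, 32, 2048)
```

Because the gates start at zero, audio conditioning is blended in gradually during training rather than disrupting the pretrained language model from the first step.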

## License

- The checkpoints are for non-commercial use only (see the NVIDIA OneWay Noncommercial License). They are also subject to the [Qwen Research license](https://huggingface.co/Qwen/Qwen2.5-3B/blob/main/LICENSE), the [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and the original licenses accompanying each training dataset.
- Notice: Audio Flamingo 2 is built with Qwen2.5. Qwen is licensed under the Qwen RESEARCH LICENSE AGREEMENT, Copyright (c) Alibaba Cloud. All Rights Reserved.

## Citation

- Audio Flamingo
```
@inproceedings{kong2024audio,
  title={Audio Flamingo: A Novel Audio Language Model with Few-Shot Learning and Dialogue Abilities},
  author={Kong, Zhifeng and Goel, Arushi and Badlani, Rohan and Ping, Wei and Valle, Rafael and Catanzaro, Bryan},
  booktitle={International Conference on Machine Learning},
  pages={25125--25148},
  year={2024},
  organization={PMLR}
}
```

- Audio Flamingo 2
```
@article{ghosh2025audio,
  title={Audio Flamingo 2: An Audio-Language Model with Long-Audio Understanding and Expert Reasoning Abilities},
  author={Ghosh, Sreyan and Kong, Zhifeng and Kumar, Sonal and Sakshi, S and Kim, Jaehyeon and Ping, Wei and Valle, Rafael and Manocha, Dinesh and Catanzaro, Bryan},
  journal={arXiv preprint arXiv:2503.03983},
  year={2025}
}
```