Uni-MoE Collection
The first version of Unifying Multimodal LLMs: Uni-MoE
10 items
Dataset preview: an audio column with per-sample audio duration (s).
If you find our dataset useful for your research, please cite our work:
@article{li2025uni,
  title={Uni-MoE: Scaling Unified Multimodal LLMs with Mixture of Experts},
  author={Li, Yunxin and Jiang, Shenyuan and Hu, Baotian and Wang, Longyue and Zhong, Wanqi and Luo, Wenhan and Ma, Lin and Zhang, Min},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2025},
  publisher={IEEE}
}