MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models Paper • 2306.13394 • Published Jun 23, 2023
CAPro: Webly Supervised Learning with Cross-Modality Aligned Prototypes Paper • 2310.09761 • Published Oct 15, 2023
FoPro: Few-Shot Guided Robust Webly-Supervised Prototypical Learning Paper • 2212.00465 • Published Dec 1, 2022
T2Vid: Translating Long Text into Multi-Image is the Catalyst for Video-LLMs Paper • 2411.19951 • Published Nov 29, 2024
Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Context Sparsification Paper • 2412.00876 • Published Dec 1, 2024
FlashSloth: Lightning Multimodal Large Language Models via Embedded Visual Compression Paper • 2412.04317 • Published Dec 5, 2024
Freeze-Omni: A Smart and Low Latency Speech-to-speech Dialogue Model with Frozen LLM Paper • 2411.00774 • Published Nov 1, 2024
Long-VITA: Scaling Large Multi-modal Models to 1 Million Tokens with Leading Short-Context Accuracy Paper • 2502.05177 • Published Feb 7, 2025
VITA-Audio: Fast Interleaved Cross-Modal Token Generation for Efficient Large Speech-Language Model Paper • 2505.03739 • Published May 6, 2025
What You Perceive Is What You Conceive: A Cognition-Inspired Framework for Open Vocabulary Image Segmentation Paper • 2505.19569 • Published May 26, 2025
Solving the Catastrophic Forgetting Problem in Generalized Category Discovery Paper • 2501.05272 • Published Jan 9, 2025
Aligning and Prompting Everything All at Once for Universal Visual Perception Paper • 2312.02153 • Published Dec 4, 2023
VITA-VLA: Efficiently Teaching Vision-Language Models to Act via Action Expert Distillation Paper • 2510.09607 • Published Oct 10, 2025
Human-MME: A Holistic Evaluation Benchmark for Human-Centric Multimodal Large Language Models Paper • 2509.26165 • Published Sep 30, 2025
Few-Shot Image Quality Assessment via Adaptation of Vision-Language Models Paper • 2409.05381 • Published Sep 9, 2024
VITA-E: Natural Embodied Interaction with Concurrent Seeing, Hearing, Speaking, and Acting Paper • 2510.21817 • Published Oct 21, 2025
SeqTR: A Simple yet Universal Network for Visual Grounding Paper • 2203.16265 • Published Mar 30, 2022
SMART: Shot-Aware Multimodal Video Moment Retrieval with Audio-Enhanced MLLM Paper • 2511.14143 • Published Nov 18, 2025
Youtu-VL: Unleashing Visual Potential via Unified Vision-Language Supervision Paper • 2601.19798 • Published Jan 27
SE-Search: Self-Evolving Search Agent via Memory and Dense Reward Paper • 2603.03293 • Published Feb 6