## 🌟 Overview
> *"True intelligence lies not in what we see, but in what we understand; not in remembering moments, but in grasping eternity."*
Between seeing and understanding lies a profound cognitive gap. Do multimodal large models truly **think** with videos, or are they merely performing visual theatrics?

When humans watch videos, we don't just "see" – we **think**. We understand the passage of time, capture the coherence of actions, and perceive the essence of things. However, existing video benchmarks often resemble image benchmarks, with questions like *"What action does the person perform in the video?"* or *"What color is the woman's dress in the video?"* Such questions can typically be answered by scanning a few key frames, without any deep temporal, spatial, or interactive reasoning.

We present **GLIMPSE**, a benchmark specifically designed to evaluate whether LVLMs can truly **"think with videos"**. Unlike previous benchmarks, GLIMPSE emphasizes comprehensive video understanding that goes beyond static image cues. It contains **3,269 videos** and **4,342 highly vision-centric questions** spanning **11 categories**, including trajectory analysis, temporal reasoning, and forensic detection.

🎯 **Key Features:**
- **Human-Crafted Questions**: Every question is meticulously designed by human annotators and requires watching the complete video and reasoning over its full context – what we call **"thinking with videos"**
- **Beyond Frame Scanning**: These questions cannot be answered by scanning selected frames or relying solely on text
- **Rigorous Validation**: Humans achieve 94.82% accuracy on GLIMPSE, whereas current LVLMs still face significant challenges
- **Challenging for SOTA**: Even the best-performing model, GPT-o3, reaches only 66.43% accuracy (a minimal scoring sketch follows this list)
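
The accuracy numbers above are simply the fraction of questions a model answers correctly. Below is a minimal scoring sketch for multiple-choice predictions; the file names (`glimpse_qa.jsonl`, `model_preds.jsonl`) and field names (`question_id`, `answer`, `prediction`) are illustrative assumptions, not the benchmark's official data format or evaluation tooling.

```python
import json


def load_jsonl(path):
    """Read one JSON object per line (hypothetical JSONL layout)."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]


def accuracy(gold_path, pred_path):
    """Fraction of questions whose predicted option matches the annotated answer."""
    gold = {r["question_id"]: r["answer"] for r in load_jsonl(gold_path)}
    preds = {r["question_id"]: r["prediction"] for r in load_jsonl(pred_path)}
    correct = sum(
        1
        for qid, ans in gold.items()
        if preds.get(qid, "").strip().upper() == ans.strip().upper()
    )
    return correct / len(gold)


if __name__ == "__main__":
    # Assumed file names; replace with your own annotation and prediction files.
    print(f"accuracy: {accuracy('glimpse_qa.jsonl', 'model_preds.jsonl'):.2%}")
```

Under this convention, a score such as 66.43% simply means roughly two out of three questions were answered with the correct option.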
---
## 🔧 Dataset Details