## Paper title and link

The model was presented in the paper [Reinforcing Video Reasoning with Focused Thinking](https://arxiv.org/abs/2505.24718).

## Paper abstract

The abstract of the paper is the following:

Recent advancements in reinforcement learning, particularly through Group Relative Policy Optimization (GRPO), have significantly improved multimodal large language models for complex reasoning tasks. However, two critical limitations persist: 1) they often produce unfocused, verbose reasoning chains that obscure salient spatiotemporal cues and 2) binary rewarding fails to account for partially correct answers, resulting in high reward variance and inefficient learning. In this paper, we propose TW-GRPO, a novel framework that enhances visual reasoning with focused thinking and dense reward granularity. Specifically, we employ a token weighting mechanism that prioritizes tokens with high informational density (estimated by intra-group information entropy), suppressing redundant tokens like generic reasoning prefixes. Furthermore, we reformulate RL training by shifting from single-choice to multi-choice QA tasks, where soft rewards enable finer-grained gradient estimation by distinguishing partial correctness. Additionally, we propose question-answer inversion, a data augmentation strategy to generate diverse multi-choice samples from existing benchmarks. Experiments demonstrate state-of-the-art performance on several video reasoning and general understanding benchmarks. Notably, TW-GRPO achieves 50.4% accuracy on CLEVRER (18.8% improvement over Video-R1) and 65.8% on MMVU. Our codes are available at https://github.com/longmalongma/TW-GRPO.
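The token-weighting idea described in the abstract can be illustrated with a small sketch. The paper estimates token importance via intra-group information entropy, but the exact formulation is not reproduced in this card; the helper below is a hypothetical stand-in that downweights tokens occurring in (nearly) every sampled completion of a GRPO group, such as generic reasoning prefixes, by scoring each token with the binary entropy of its presence across the group.

```python
import math
from collections import Counter


def token_weights(group_completions):
    """Hypothetical entropy-based token weighting (not the paper's exact
    formula). Each completion is a list of tokens; a token's weight is the
    binary entropy of how often it appears across the group. Tokens present
    in every completion (generic prefixes) get weight 0."""
    n = len(group_completions)
    # Count in how many completions each token appears at least once.
    doc_freq = Counter()
    for tokens in group_completions:
        doc_freq.update(set(tokens))
    weights = {}
    for token, df in doc_freq.items():
        p = df / n
        entropy = 0.0
        for q in (p, 1.0 - p):
            if q > 0.0:
                entropy -= q * math.log2(q)
        weights[token] = entropy
    return weights
```

With this scheme, a token shared by all completions (e.g. a boilerplate "Let me think" prefix) receives zero weight, while tokens that discriminate between completions in the group receive higher weight.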

This repository contains the model as presented in "Reinforcing Video Reasoning with Focused Thinking".

For training and evaluation code, please refer to the repository: https://github.com/longmalongma/TW-GRPO
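The multi-choice soft reward mentioned in the abstract can likewise be sketched. The exact reward function is not specified in this card, so the snippet below uses Jaccard overlap between the predicted and ground-truth option sets as a plausible partial-credit signal that distinguishes partially correct answers from fully wrong ones.

```python
def soft_reward(predicted, gold):
    """Hypothetical partial-credit reward for multi-choice QA: Jaccard
    overlap between predicted and ground-truth option sets. Returns 1.0
    for an exact match, 0.0 for no overlap, and a fraction in between
    for partially correct answers."""
    predicted, gold = set(predicted), set(gold)
    if not predicted and not gold:
        return 1.0
    return len(predicted & gold) / len(predicted | gold)
```

For example, predicting options {A, B} when the ground truth is {A, C} yields a reward of 1/3 instead of the 0 a binary reward would assign, giving the policy gradient a denser learning signal.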

If you find this project useful in your research, please consider citing:

```bibtex
@article{dang2025reinforcing,
  title={Reinforcing Video Reasoning with Focused Thinking},
  author={Dang, Jisheng and Wu, Jingze and Wang, Teng and Lin, Xuanhui and Zhu, Nannan and Chen, Hongbo and Zheng, Wei-Shi and Wang, Meng and Chua, Tat-Seng},
  journal={arXiv preprint arXiv:2505.24718},
  year={2025}
}
```
## Model details

- Format: Safetensors
- Model size: 8.29B params
- Tensor type: BF16