---
language:
- en
tags:
- multimodal
- reinforcement-learning
- reflection
- reasoning
- dataset
license: mit
task_categories:
- question-answering
pretty_name: SRPO Dataset
size_categories:
- 10K<n<100K
---
# SRPO Dataset: Reflection-Aware RL Training Data
This repository provides the multimodal reasoning dataset used in the paper:

**SRPO: Enhancing Multimodal LLM Reasoning via Reflection-Aware Reinforcement Learning**
We release two versions of the dataset:

- **39K version** (`modified_39Krelease.jsonl` + `images.zip`)
- **Enhanced 47K+ version** (`47K_release_plus.jsonl` + `47K_release_plus.zip`)
Both follow the same unified format, containing multimodal (image–text) reasoning data with self-reflection supervision. The 47K+ version further incorporates high-quality external datasets, such as PhyX and We-Math 2.0, to strengthen physical and mathematical reasoning.
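Either version can be fetched programmatically with `huggingface_hub`. Below is a minimal download sketch for the 39K files; `REPO_ID` is a placeholder, not this dataset's actual repository ID, so substitute the real one.

```python
from huggingface_hub import hf_hub_download

# Placeholder -- replace with this dataset's actual repository ID.
REPO_ID = "your-org/SRPO-dataset"

# Fetch the 39K annotation file and its image archive.
jsonl_path = hf_hub_download(
    repo_id=REPO_ID,
    filename="modified_39Krelease.jsonl",
    repo_type="dataset",
)
images_zip = hf_hub_download(
    repo_id=REPO_ID,
    filename="images.zip",
    repo_type="dataset",
)
print(jsonl_path, images_zip)
```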
## 📂 Data Format
The data is stored in JSON Lines (`.jsonl`) format. Each sample includes an ID, a multimodal input (image + text), and the ground-truth answer.
Example:

```json
{
  "id": "12",
  "message": "[{\"role\": \"user\", \"content\": [{\"type\": \"image\", \"image\": \"/path/to/images/Processed-65d5feaa-714b-4a86-97e4-dc72802c4593-0.jpg\"}, {\"type\": \"text\", \"text\": \"<image>\\nAre there more berries with two leaves or with one leaf?\"}]}]",
  "answer": "\\boxed{Two leaves}"
}
```
- `id`: Unique sample identifier
- `message`: Conversation-style user input, combining an image reference and a textual query
- `answer`: Ground-truth answer in LaTeX-style `\boxed{}` format
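Note that `message` is stored as a JSON-encoded string, so it must be decoded a second time after each line is parsed. A minimal loading sketch, assuming `modified_39Krelease.jsonl` sits in the working directory:

```python
import json

# Load the JSONL file; each line is one sample.
samples = []
with open("modified_39Krelease.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # `message` is itself a JSON-encoded string: decode it again
        # to recover the list of conversation turns.
        record["message"] = json.loads(record["message"])
        samples.append(record)

sample = samples[0]
print("id:", sample["id"], "| answer:", sample["answer"])
for turn in sample["message"]:
    for part in turn["content"]:
        if part["type"] == "image":
            print("image path:", part["image"])
        elif part["type"] == "text":
            print("question:", part["text"])
```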
## 📂 Citation
```bibtex
@article{wan2025srpo,
  title={SRPO: Enhancing Multimodal LLM Reasoning via Reflection-Aware Reinforcement Learning},
  author={Wan, Zhongwei and Dou, Zhihao and Liu, Che and Zhang, Yu and Cui, Dongfei and Zhao, Qinjian and Shen, Hui and Xiong, Jing and Xin, Yi and Jiang, Yifan and others},
  journal={arXiv preprint arXiv:2506.01713},
  year={2025}
}

@article{shen2025phyx,
  title={PhyX: Does Your Model Have the "Wits" for Physical Reasoning?},
  author={Shen, Hui and Wu, Taiqiang and Han, Qi and Hsieh, Yunta and Wang, Jizhou and Zhang, Yuyue and Cheng, Yuxin and Hao, Zijian and Ni, Yuansheng and Wang, Xin and others},
  journal={arXiv preprint arXiv:2505.15929},
  year={2025}
}
```