<p align="center" width="100%">
<img src="https://raw.githubusercontent.com/NVlabs/Long-RL/main/assets/long-rl-logo.png" alt="Long-RL" style="width: 100%; min-width: 300px; display: block; margin: auto;">
</p>

# Long-RL: Scaling RL to Long Sequences (Evaluation Dataset - for research only)

[![Paper](https://img.shields.io/badge/Paper-ArXiv%20Link-green)](https://arxiv.org/abs/2507.07966)
[![Code License](https://img.shields.io/badge/Code%20License-Apache_2.0-yellow.svg)](https://github.com/NVlabs/Long-RL/blob/main/LICENSE)

<div align="center">

[![Watch the video](https://raw.githubusercontent.com/NVlabs/Long-RL/main/assets/demo_video_first_frame.png)](https://www.youtube.com/watch?v=ykbblK2jiEg)

</div>

## Data Distribution

<p align="center" width="100%">
<img src="https://raw.githubusercontent.com/NVlabs/Long-RL/main/assets/data_distribution.png" alt="LongVideo-Reason data distribution" style="width: 100%; min-width: 300px; display: block; margin: auto;">
</p>
We strategically construct a high-quality dataset with CoT annotations for long-video reasoning, named LongVideo-Reason. Leveraging a powerful VLM (NVILA-8B) and a leading open-source reasoning LLM, we build 52K high-quality Question-Reasoning-Answer pairs for long videos. We use 18K high-quality samples for Long-CoT-SFT to initialize the model's reasoning and instruction-following abilities, and 33K samples, together with an additional 110K video samples, for reinforcement learning. This two-stage training combines high-quality reasoning annotations with reinforcement learning, enabling LongVILA-R1 to achieve superior and generalized video reasoning. We also manually curate a balanced set of 1K long-video samples to build a new benchmark, LongVideo-Reason-eval, which evaluates performance from four perspectives for a comprehensive assessment: Temporal, Goal and Purpose, Spatial, and Plot and Narrative.

**LongVideo-Reason (Train, 52k) [[Data Link](https://huggingface.co/datasets/LongVideo-Reason/longvideo-reason)]**

**LongVideo-Reason-eval (Test, 1k) [[Data Link](https://huggingface.co/datasets/LongVideo-Reason/longvideo_eval_videos)]**
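
Both splits are hosted on the Hugging Face Hub. Below is a minimal sketch for pulling the 52K training annotations locally with `huggingface-cli`; the local directory name is an arbitrary choice, not part of the release. The 1K evaluation videos are downloaded and extracted in the Testing section further down.
```bash
# Sketch: download the LongVideo-Reason training annotations with
# huggingface-cli (shipped with the huggingface_hub package).
# The local directory name below is an arbitrary choice.
pip install -U "huggingface_hub[cli]"

huggingface-cli download LongVideo-Reason/longvideo-reason \
    --repo-type dataset \
    --local-dir ./longvideo-reason
```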

## Installation

```bash
git clone https://github.com/NVlabs/Long-RL.git
cd Long-RL
pip install -e .
```
If you want to train Qwen-Omni models, please run
```bash
bash vllm_replace.sh
```

## Training
### Single node
For single-node jobs (up to 8 GPUs), you can refer to the training scripts in the `examples` directory. For example,
```bash
bash examples/new_supports/qwen2_5_vl_3b_video_grpo.sh $VIDEO_PATH
```

### Multi-node
For jobs that require multiple nodes, you can follow the instructions in the EasyR1 repo, [here](https://github.com/hiyouga/EasyR1/tree/main?tab=readme-ov-file#how-to-run-70b-model-in-multi-node-environment).

We provide additional example `sbatch` scripts like the following, where `TRAIN_SCRIPT` is the single-node training script and `NNODES` is the number of nodes required.
```bash
bash scripts/srun_multi_nodes.sh $TRAIN_SCRIPT $NNODES
```

For example,
```bash
bash scripts/srun_multi_nodes.sh examples/new_supports/qwen2_5_vl_3b_video_grpo.sh 2
```
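
If your cluster submits jobs through `sbatch`, a thin wrapper around the same call can look like the hypothetical sketch below. Job name, partition, GPU count, time limit, and the checkout path are placeholders you must adapt; this file is not shipped with the repo.
```bash
#!/bin/bash
# Hypothetical sbatch wrapper around scripts/srun_multi_nodes.sh.
# All #SBATCH values below are placeholders for your cluster.
#SBATCH --job-name=long-rl-grpo
#SBATCH --nodes=2                 # must match NNODES passed below
#SBATCH --gres=gpu:8              # GPUs per node (adjust to your cluster)
#SBATCH --time=48:00:00
#SBATCH --partition=YOUR_PARTITION

cd /path/to/Long-RL               # adjust to your checkout

TRAIN_SCRIPT=examples/new_supports/qwen2_5_vl_3b_video_grpo.sh
NNODES=2

bash scripts/srun_multi_nodes.sh $TRAIN_SCRIPT $NNODES
```
Submit the file with `sbatch <job_file>.sh` as usual.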

### Merge Checkpoint in Hugging Face Format
This follows the approach used in the EasyR1 repo.
```bash
python3 scripts/model_merger.py --local_dir checkpoints/easy_r1/exp_name/global_step_1/actor
```

## Evaluation
We provide instructions for evaluating models on our `LongVideo-Reason` benchmark in the `eval` [directory](https://github.com/NVlabs/Long-RL/tree/main/eval).

## Testing on LongVideo-Reason-eval
In this section, we release the scripts for testing on our LongVideo-Reason-eval set. More details about the training set can be found [here](https://github.com/NVlabs/Long-RL/issues/1).

You can find the videos for testing [here](https://huggingface.co/datasets/LongVideo-Reason/longvideo_eval_videos/tree/main). Please download them and extract them with `tar -zxvf` into a directory named `longvila_videos`.
```
├── $VIDEO_DIR
│   ├── longvila_videos
│   │   ├── mp4/webm/mkv videos
```
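
As a minimal sketch of that download-and-extract step (the archive extension and internal layout are assumptions; check the dataset repo for the actual file names):
```bash
# Sketch: fetch the eval video archives and unpack them into
# $VIDEO_DIR/longvila_videos. Archive names/extensions are assumptions.
VIDEO_DIR=./data
mkdir -p "$VIDEO_DIR/longvila_videos"

# Download the archives from the Hugging Face dataset repo.
huggingface-cli download LongVideo-Reason/longvideo_eval_videos \
    --repo-type dataset \
    --local-dir "$VIDEO_DIR/eval_archives"

# Extract every archive. If an archive already contains a top-level
# longvila_videos/ folder, extract into "$VIDEO_DIR" instead.
for f in "$VIDEO_DIR"/eval_archives/*.tar.gz; do
    tar -zxvf "$f" -C "$VIDEO_DIR/longvila_videos"
done
```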

`$VIDEO_DIR` is the parent directory of `longvila_videos`. For different models, you need to customize the `model_generate` function accordingly. The model generations and output metrics will be saved in `runs_${MODEL_PATH}`.
```bash
python eval.py \
    --model-path $MODEL_PATH \
    --data-path LongVideo-Reason/longvideo-reason@test \
    --video-dir $VIDEO_DIR \
    --output-dir runs_${MODEL_PATH}
```

## Core Contributors
[Yukang Chen](https://yukangchen.com/), [Wei Huang](https://aaron-weihuang.com/), [Shuai Yang](https://andysonys.github.io), [Qinghao Hu](https://tonyhao.xyz/), [Baifeng Shi](https://bfshi.github.io/), [Hanrong Ye](https://sites.google.com/site/yhrspace/home), [Ligeng Zhu](https://lzhu.me/).

We welcome all possible contributions and will acknowledge all contributors clearly.

## Citation
Please consider citing our paper and this framework if they are helpful in your research.

```bibtex
@misc{long-rl,
  title = {Long-RL: Scaling RL to Long Sequences},
  author = {Yukang Chen and Wei Huang and Shuai Yang and Qinghao Hu and Baifeng Shi and Hanrong Ye and Ligeng Zhu and Zhijian Liu and Pavlo Molchanov and Jan Kautz and Xiaojuan Qi and Sifei Liu and Hongxu Yin and Yao Lu and Song Han},
  year = {2025},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/NVlabs/Long-RL}},
}
```
```bibtex
@article{chen2025longvila-r1,
  title={Scaling RL to Long Videos},
  author={Yukang Chen and Wei Huang and Baifeng Shi and Qinghao Hu and Hanrong Ye and Ligeng Zhu and Zhijian Liu and Pavlo Molchanov and Jan Kautz and Xiaojuan Qi and Sifei Liu and Hongxu Yin and Yao Lu and Song Han},
  year={2025},
  eprint={2507.07966},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```
```bibtex
@inproceedings{chen2024longvila,
  title={LongVILA: Scaling Long-Context Visual Language Models for Long Videos},
  author={Yukang Chen and Fuzhao Xue and Dacheng Li and Qinghao Hu and Ligeng Zhu and Xiuyu Li and Yunhao Fang and Haotian Tang and Shang Yang and Zhijian Liu and Ethan He and Hongxu Yin and Pavlo Molchanov and Jan Kautz and Linxi Fan and Yuke Zhu and Yao Lu and Song Han},
  booktitle={The International Conference on Learning Representations (ICLR)},
  year={2025},
}
```

## Acknowledgement
- [EasyR1](https://github.com/hiyouga/EasyR1): the codebase we built upon. Thanks for their wonderful work.
- [verl](https://github.com/volcengine/verl): the RL training framework we built upon.
- [vllm](https://github.com/vllm-project/vllm): we build upon vLLM for the rollout engine.
- [Flow-GRPO](https://github.com/yifan123/flow_grpo): we refer to Flow-GRPO for the image/video generation RL part.
- [Shot2story](https://arxiv.org/abs/2312.10300): we curate 18K long videos from Shot2Story.