wren93 committed
Commit a1e1233 · verified · 1 parent: d8a516c

Update README.md

Files changed (1): README.md +115 -0
README.md CHANGED
@@ -31,3 +31,118 @@ configs:
  - split: test
    path: data/test-*
---

# VideoEval-Pro

VideoEval-Pro is a robust and realistic long video understanding benchmark containing open-ended, short-answer QA problems. The dataset is constructed by reformatting questions from four existing long video understanding MCQ benchmarks (Video-MME, MLVU, LVBench, and LongVideoBench) into free-form questions.

The evaluation code and scripts are available at [TIGER-AI-Lab/VideoEval-Pro](https://github.com/TIGER-AI-Lab/VideoEval-Pro).

## Task Types
VideoEval-Pro contains various types of video understanding tasks. The distribution of task types is shown below:

![Task Type Distribution](assets/task_types.png)

47
+ ## Dataset Structure
48
+ Each example in the dataset contains:
49
+ - `video`: Name (path) of the video file
50
+ - `question`: The question about the video content
51
+ - `options`: Original options from the source benchmark
52
+ - `answer`: The correct MCQ answer
53
+ - `answer_text`: The correct free-form answer
54
+ - `meta`: Additional metadata from the source benchmark
55
+ - `source`: Source benchmark
56
+ - `qa_subtype`: Question task subtype
57
+ - `qa_type`: Question task type
58
+
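For a quick look at the records, a minimal sketch using the `datasets` library (assuming the dataset ID matches this repository and `datasets` is installed):

```bash
# Minimal sketch: load the test split and print one example's fields.
# The dataset ID below is assumed to match this Hugging Face repository.
python - <<'EOF'
from datasets import load_dataset

ds = load_dataset("TIGER-AI-Lab/VideoEval-Pro", split="test")
example = ds[0]
for field in ["video", "question", "options", "answer", "answer_text",
              "meta", "source", "qa_subtype", "qa_type"]:
    print(field, ":", example[field])
EOF
```
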
## Evaluation Steps

1. **Download and Prepare Videos**
```bash
# Navigate to the videos directory
cd videos

# Merge all split tar.gz files into a single archive
cat videos_part_*.tar.gz > videos_merged.tar.gz

# Extract the merged archive
tar -xzf videos_merged.tar.gz

# [Optional] Clean up the split files and merged archive
rm videos_part_*.tar.gz videos_merged.tar.gz

# After extraction, you will get a directory containing all videos.
# The path to this directory will be used as --video_root in evaluation,
# for example: 'VideoEval-Pro/videos'
```

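This step assumes the split archives `videos_part_*.tar.gz` are already present locally. If not, one way to fetch the dataset repository (a sketch, assuming the `huggingface_hub` CLI and this repo's dataset ID) is:

```bash
# Assumed download step: fetch this dataset repo, including the video
# archives, into a local VideoEval-Pro directory.
pip install -U huggingface_hub
huggingface-cli download TIGER-AI-Lab/VideoEval-Pro --repo-type dataset --local-dir VideoEval-Pro
```
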
2. **[Optional] Pre-extract Frames**
To improve efficiency, you can pre-extract frames from videos. The extracted frames should be organized as follows:
```
frames_root/
├── video_name_1/          # Directory name is the video name
│   ├── 000001.jpg         # Frame images
│   ├── 000002.jpg
│   └── ...
├── video_name_2/
│   ├── 000001.jpg
│   ├── 000002.jpg
│   └── ...
└── ...
```

After frame extraction, the path to the frames will be used as `--frames_root`. Set `--using_frames True` when running the evaluation script.

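The benchmark does not prescribe an extraction tool. One possible approach, assuming `ffmpeg` is installed, the videos are `.mp4` files, and 1 frame per second is an acceptable sampling rate:

```bash
# Hypothetical helper: extract frames at 1 fps into frames/<video_name>/,
# using the 000001.jpg naming scheme shown above. Assumes ffmpeg is installed.
mkdir -p frames
for f in videos/*.mp4; do
    name=$(basename "$f" .mp4)
    mkdir -p "frames/$name"
    ffmpeg -i "$f" -vf fps=1 "frames/$name/%06d.jpg"
done
```
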
3. **Setup Evaluation Environment**
```bash
# Clone the evaluation repository from GitHub
git clone https://github.com/TIGER-AI-Lab/VideoEval-Pro
cd VideoEval-Pro

# Create a conda environment from requirements.txt
# (there are different requirements files for different models)
conda create -n videoevalpro --file requirements.txt
conda activate videoevalpro
```

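Note that `conda create --file` expects conda-style package specs; if the requirements file for your model turns out to be pip-style, an alternative setup (an assumption, not the repo's documented procedure) would be:

```bash
# Assumed fallback for a pip-style requirements file
conda create -n videoevalpro python=3.10 -y
conda activate videoevalpro
pip install -r requirements.txt
```
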
4. **Run Evaluation**
```bash
cd VideoEval-Pro

# Set PYTHONPATH so the tools can be imported from the repo root
export PYTHONPATH=.

# Run the evaluation script with the following parameters:
# --video_root: Path to the video files folder
# --frames_root: Path to the video frames folder (used with --using_frames True)
# --output_path: Path to save output results
# --using_frames: Whether to use pre-extracted frames
# --model_path: Path to the model
# --device: Device to run inference on
# --num_frames: Number of frames to sample from each video
# --max_retries: Maximum number of retries for failed inference
# --num_threads: Number of threads for parallel processing

# Replace *_chat.py with the chat script for your model (e.g. qwen_chat.py)
python tools/*_chat.py \
    --video_root <path_to_videos> \
    --frames_root <path_to_frames> \
    --output_path <path_to_save_results> \
    --using_frames <True/False> \
    --model_path <model_name_or_path> \
    --device <device> \
    --num_frames <number_of_frames> \
    --max_retries <max_retries> \
    --num_threads <num_threads>

# Example:
python tools/qwen_chat.py \
    --video_root ./videos \
    --frames_root ./frames \
    --output_path ./results/qwen_results.jsonl \
    --using_frames False \
    --model_path Qwen/Qwen2-VL-7B-Instruct \
    --device cuda \
    --num_frames 32 \
    --max_retries 10 \
    --num_threads 1
```
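
The output path ends in `.jsonl`, so results are saved as JSON Lines, presumably one record per question; the exact fields depend on the chat script. A quick way to peek at the output schema (a sketch, assuming the example output path above):

```bash
# Read the first record of the saved results to check the output schema.
python - <<'EOF'
import json

with open("results/qwen_results.jsonl") as f:
    first = json.loads(f.readline())
print(sorted(first.keys()))
EOF
```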