---
dataset_info:
  features:
  - name: video
    dtype: string
  - name: question
    dtype: string
  - name: options
    list: string
  - name: answer
    dtype: string
  - name: answer_text
    dtype: string
  - name: meta
    dtype: string
  - name: source
    dtype: string
  - name: qa_subtype
    dtype: string
  - name: qa_type
    dtype: string
  splits:
  - name: test
    num_bytes: 515277
    num_examples: 1289
  download_size: 174366
  dataset_size: 515277
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
task_categories:
- video-text-to-text
---
# VideoEval-Pro
VideoEval-Pro is a robust and realistic long video understanding benchmark containing open-ended, short-answer QA problems. The dataset is constructed by reformatting multiple-choice questions from four existing long video understanding benchmarks (Video-MME, MLVU, LVBench, and LongVideoBench) into free-form questions. The paper can be found [here](https://huggingface.co/papers/2505.14640).
The evaluation code and scripts are available at: [TIGER-AI-Lab/VideoEval-Pro](https://github.com/TIGER-AI-Lab/VideoEval-Pro)
## Dataset Structure
Each example in the dataset contains the following fields (a short loading sketch follows the list):
- `video`: Name (path) of the video file
- `question`: The question about the video content
- `options`: Original options from the source benchmark
- `answer`: The correct MCQ answer
- `answer_text`: The correct free-form answer
- `meta`: Additional metadata from the source benchmark
- `source`: Source benchmark
- `qa_subtype`: Question task subtype
- `qa_type`: Question task type
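Below is a minimal loading sketch using the `datasets` library. The Hub repository id is an assumption based on this card and the project's GitHub organization; adjust it if this dataset is hosted under a different id.
```python
# Minimal loading sketch (the repository id below is an assumption; replace it
# with this dataset's actual id on the Hugging Face Hub if it differs).
from datasets import load_dataset

ds = load_dataset("TIGER-Lab/VideoEval-Pro", split="test")  # assumed repo id
print(len(ds))                 # 1289 examples in the test split

example = ds[0]
print(example["video"])        # video file name, resolved relative to --video_root
print(example["question"])     # free-form question
print(example["answer_text"])  # reference short answer
```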
## Evaluation Steps
1. **Download and Prepare Videos**
```bash
# Navigate to videos directory
cd videos
# Merge all split tar.gz files into a single archive
cat videos_part_*.tar.gz > videos_merged.tar.gz
# Extract the merged archive
tar -xzf videos_merged.tar.gz
# [Optional] Clean up the split files and merged archive
rm videos_part_*.tar.gz videos_merged.tar.gz
# After extraction, you will get a directory containing all videos
# The path to this directory will be used as --video_root in evaluation
# For example: 'VideoEval-Pro/videos'
```
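As an optional sanity check (not part of the official scripts), you can confirm that every video referenced by the test split exists under the extracted directory. The repository id and the `videos` path below are assumptions carried over from the steps above.
```python
# Optional sanity check (not part of the official evaluation scripts):
# confirm that every video referenced by the test split exists under --video_root.
import os
from datasets import load_dataset

video_root = "VideoEval-Pro/videos"  # path produced by the extraction step above
ds = load_dataset("TIGER-Lab/VideoEval-Pro", split="test")  # assumed repo id

missing = [ex["video"] for ex in ds
           if not os.path.exists(os.path.join(video_root, ex["video"]))]
print(f"{len(missing)} of {len(ds)} videos are missing")
```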
2. **[Optional] Pre-extract Frames**
To improve efficiency, you can pre-extract frames from videos. The extracted frames should be organized as follows:
```
frames_root/
├── video_name_1/ # Directory name is the video name
│ ├── 000001.jpg # Frame images
│ ├── 000002.jpg
│ └── ...
├── video_name_2/
│ ├── 000001.jpg
│ ├── 000002.jpg
│ └── ...
└── ...
```
After frame extraction, the path to the frames will be used as `--frames_root`. Set `--using_frames True` when running the evaluation script.
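A minimal extraction sketch using OpenCV with uniform sampling is shown below. The official `tools/*_chat.py` scripts may sample differently or name directories differently (e.g. with or without the file extension), so treat this only as a starting point.
```python
# Frame-extraction sketch using OpenCV with uniform sampling (an assumption;
# the official tooling may sample differently or keep the file extension in
# the directory name).
import os
import cv2

def extract_frames(video_path: str, frames_root: str, num_frames: int = 32) -> None:
    video_name = os.path.splitext(os.path.basename(video_path))[0]
    out_dir = os.path.join(frames_root, video_name)
    os.makedirs(out_dir, exist_ok=True)

    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    count = min(num_frames, max(total, 1))
    # Uniformly spaced frame indices across the whole video.
    indices = [int(i * (total - 1) / max(count - 1, 1)) for i in range(count)]
    for out_idx, frame_idx in enumerate(indices, start=1):
        cap.set(cv2.CAP_PROP_POS_FRAMES, frame_idx)
        ok, frame = cap.read()
        if ok:
            cv2.imwrite(os.path.join(out_dir, f"{out_idx:06d}.jpg"), frame)
    cap.release()

extract_frames("videos/video_name_1.mp4", "frames_root", num_frames=32)
```
Whether a model wrapper expects all frames or a fixed uniform sample depends on the corresponding `tools/*_chat.py` script, so check it before committing to a sampling scheme.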
3. **Setup Evaluation Environment**
```bash
# Clone the evaluation code from GitHub
git clone https://github.com/TIGER-AI-Lab/VideoEval-Pro
cd VideoEval-Pro
# Create conda environment from requirements.txt (there are different requirements files for different models)
conda create -n videoevalpro --file requirements.txt
conda activate videoevalpro
```
4. **Run Evaluation**
```bash
cd VideoEval-Pro
# Set PYTHONPATH
export PYTHONPATH=.
# Run the chat script for your model (*_chat.py under tools/) with the following parameters:
# --video_root: Path to video files folder
# --frames_root: Path to video frames folder (required when --using_frames is True)
# --output_path: Path to save output results
# --using_frames: Whether to use pre-extracted frames
# --model_path: Path to model
# --device: Device to run inference on
# --num_frames: Number of frames to sample from video
# --max_retries: Maximum number of retries for failed inference
# --num_threads: Number of threads for parallel processing
python tools/*_chat.py \
--video_root <path_to_videos> \
--frames_root <path_to_frames> \
--output_path <path_to_save_results> \
--using_frames <True/False> \
--model_path <model_name_or_path> \
--device <device> \
--num_frames <number_of_frames> \
--max_retries <max_retries> \
--num_threads <num_threads>
# Example:
python tools/qwen_chat.py \
--video_root ./videos \
--frames_root ./frames \
--output_path ./results/qwen_results.jsonl \
--using_frames False \
--model_path Qwen/Qwen2-VL-7B-Instruct \
--device cuda \
--num_frames 32 \
--max_retries 10 \
--num_threads 1
```
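The chat script writes one JSON record per line. The exact record schema is defined by the script itself, so the sketch below only reports the record count and the keys of the first record; the results path is taken from the example above.
```python
# Quick inspection of the generated results file; the record schema is
# whatever the chat script writes, so only counts and keys are shown.
import json

with open("./results/qwen_results.jsonl") as f:
    records = [json.loads(line) for line in f if line.strip()]

print(f"{len(records)} records")
print(sorted(records[0].keys()))
```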
5. **Judge the results**
```bash
cd VideoEval-Pro
# Set PYTHONPATH
export PYTHONPATH=.
# Run the judge script gpt4o_judge.py with the following parameters:
# --input_path: Path to the results file saved in the previous step
# --output_path: Path to judged results
# --model_name: Version of the judge model
# --num_threads: Number of threads for parallel processing
python tools/gpt4o_judge.py \
--input_path <path_to_saved_results> \
--output_path <path_to_judged_results> \
--model_name <model_version> \
--num_threads <num_threads>
# Example:
python tools/gpt4o_judge.py \
--input_path ./results/qwen_results.jsonl \
--output_path ./results/qwen_results_judged.jsonl \
--model_name gpt-4o-2024-08-06 \
--num_threads 1
```
**Note: the released results are judged by *gpt-4o-2024-08-06*.**
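To aggregate a final score from the judged file, something along the following lines should work. The judged JSONL's field names are defined by `tools/gpt4o_judge.py` and are not documented on this card, so `judge_response` below is purely a placeholder to replace with the real key.
```python
# Hypothetical aggregation sketch: "judge_response" is a placeholder field
# name; replace it with the key actually written by tools/gpt4o_judge.py.
import json

with open("./results/qwen_results_judged.jsonl") as f:
    records = [json.loads(line) for line in f if line.strip()]

correct = sum(1 for r in records
              if str(r.get("judge_response", "")).lower().startswith("yes"))
print(f"Accuracy: {correct / len(records):.2%} over {len(records)} questions")
```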