Update README.md
README.md
CHANGED
@@ -41,9 +41,10 @@ Additionally, our repository provides more tools to benefit the research communi
 
 ## 🔥 Updates
 - [x] **\[2025.03.26\]** Release inference and training code.
-- [
-- [ ]
-- [ ]
+- [x] **\[2025.04.08\]** Release MC-Bench and evaluation code.
+- [ ] Upload gradio demo usage video.
+- [ ] Upload annotation tool for image-trajectory pair construction.
+
 
 ## 🏃🏼 Inference
 <details open>
@@ -129,7 +130,7 @@ Additionally, `./data/dot_single_video` contains code for processing raw videos
 
 Simply run the following command to train MotionPro:
 ```
-train_server_1.sh
+bash train_server_1.sh
 ```
 In addition to loading video data from folders, we also support [WebDataset](https://rom1504.github.io/webdataset/), allowing videos to be read directly from tar files for training. This can be enabled by modifying the config file:
 ```
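The tar-based loading mentioned above is the idea behind WebDataset: each sample is a group of files sharing a key inside a tar shard, streamed sequentially. As a rough, stdlib-only sketch of that idea (not this repository's actual loader — the real library also handles decoding, sharding, and shuffling):

```python
import io
import itertools
import tarfile

def write_shard(path):
    """Write a tiny tar shard: each sample is a (video, caption) file pair sharing a key."""
    with tarfile.open(path, "w") as tar:
        for key in ("000000", "000001"):
            for ext, payload in (("mp4", b"fake-video-bytes"), ("txt", b"a caption")):
                info = tarfile.TarInfo(f"{key}.{ext}")
                info.size = len(payload)
                tar.addfile(info, io.BytesIO(payload))

def iter_samples(path):
    """Stream samples back out, grouping tar members by their shared key."""
    with tarfile.open(path, "r") as tar:
        members = sorted(tar.getmembers(), key=lambda m: m.name)
        for key, group in itertools.groupby(members, key=lambda m: m.name.split(".", 1)[0]):
            yield key, {m.name.split(".", 1)[1]: tar.extractfile(m).read() for m in group}

write_shard("demo_shard.tar")
for key, sample in iter_samples("demo_shard.tar"):
    print(key, sorted(sample))  # 000000 ['mp4', 'txt'] / 000001 ['mp4', 'txt']
```

Reading the shard front-to-back like this is what makes tar files attractive for training: samples arrive in large sequential reads rather than many small file opens.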
@@ -140,14 +141,30 @@ Furthermore, to train the **MotionPro-Dense** model, simply modify the `train_de
 
 </details>
 
+
+## 📝Evaluation
+
+
+<summary><strong>MC-Bench</strong></summary>
+
+Simply download 🤗[MC-Bench](https://huggingface.co/HiDream-ai/MotionPro/blob/main/data/MC-Bench.tar), extract the files, and place them in the `./data` directory.
+
+<summary><strong>Run eval script</strong></summary>
+
+Simply execute the following command to evaluate MotionPro on MC-Bench and Webvid:
+```
+bash eval_model.sh
+```
+
+
 ## 🌟 Star and Citation
-If you find our work helpful for your research, please consider giving a star⭐ on this repository and citing our work
+If you find our work helpful for your research, please consider giving a star⭐ on this repository and citing our work.
 ```
 @inproceedings{2025motionpro,
-
-
-
-
+  title={{MotionPro: A Precise Motion Controller for Image-to-Video Generation}},
+  author={Zhongwei Zhang and Fuchen Long and Zhaofan Qiu and Yingwei Pan and Wu Liu and Ting Yao and Tao Mei},
+  booktitle={CVPR},
+  year={2025}
 }
 ```
 
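The MC-Bench setup above boils down to un-tarring the benchmark archive into `./data`. A minimal sketch of that step — the archive and its member path here are synthetic stand-ins so the snippet runs offline; the real `MC-Bench.tar` comes from the 🤗 link:

```python
import io
import pathlib
import tarfile

def unpack_benchmark(archive, dest="data"):
    """Extract a benchmark tar into dest/ and return the sorted member names."""
    pathlib.Path(dest).mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive, "r") as tar:
        names = sorted(m.name for m in tar.getmembers())
        tar.extractall(dest)
    return names

# Synthetic stand-in for the downloaded MC-Bench.tar so this sketch is self-contained.
with tarfile.open("MC-Bench.tar", "w") as tar:
    payload = b"placeholder"
    info = tarfile.TarInfo("MC-Bench/sample_0000/image.png")
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))

print(unpack_benchmark("MC-Bench.tar"))  # ['MC-Bench/sample_0000/image.png']
```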
@@ -155,4 +172,5 @@ If you find our work helpful for your research, please consider giving a star⭐
 ## 💖 Acknowledgement
 <span id="acknowledgement"></span>
 
-Our code is inspired by several works, including [SVD](https://github.com/Stability-AI/generative-models), [DragNUWA](https://github.com/ProjectNUWA/DragNUWA), [DOT](https://github.com/16lemoing/dot), [Cotracker](https://github.com/facebookresearch/co-tracker). Thanks to all the contributors!
+Our code is inspired by several works, including [SVD](https://github.com/Stability-AI/generative-models), [DragNUWA](https://github.com/ProjectNUWA/DragNUWA), [DOT](https://github.com/16lemoing/dot), [Cotracker](https://github.com/facebookresearch/co-tracker). Thanks to all the contributors!
+