Update README.md
README.md
CHANGED

---
license: apache-2.0
---

## Cinemo: Consistent and Controllable Image Animation with Motion Diffusion Models

This repo contains pre-trained weights for Cinemo, our work on image animation with motion diffusion models. You can find more visualizations on our [project page](https://maxin-cn.github.io/cinemo_project/).

In this project, we propose a novel method called Cinemo, which performs motion-controllable image animation with strong consistency and smoothness. To improve motion smoothness, Cinemo learns the distribution of motion residuals rather than directly generating subsequent frames. To control motion intensity, we propose a method based on the structural similarity index (SSIM). In addition, we introduce a noise refinement technique based on the discrete cosine transform to ensure temporal consistency. Together, these three techniques allow Cinemo to produce highly consistent, smooth, and motion-controllable animation results. Compared to previous methods, Cinemo offers simpler and more precise user control and better generative performance.
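
The snippet below is a minimal, conceptual sketch of two of these ideas in isolation: predicting motion residuals instead of raw frames, and summarizing motion intensity with a structural similarity score. It is not Cinemo's actual training or inference code; the array layout, the residual definition relative to the input frame, and the use of `scikit-image`'s SSIM are illustrative assumptions.

```python
# Conceptual sketch only -- not Cinemo's implementation.
# Assumes a clip of frames as a (T, H, W, C) float array with values in [0, 1].
import numpy as np
from skimage.metrics import structural_similarity as ssim


def motion_residuals(frames: np.ndarray) -> np.ndarray:
    """Residuals of each frame relative to the input (first) frame -- the kind
    of target a residual-predicting model learns instead of full frames.
    (Illustrative definition; the paper's exact formulation may differ.)"""
    return frames - frames[:1]


def motion_intensity(frames: np.ndarray) -> float:
    """SSIM-based motion score: lower average similarity between consecutive
    frames means larger appearance change, i.e. stronger motion."""
    scores = [
        ssim(frames[t], frames[t + 1], channel_axis=-1, data_range=1.0)
        for t in range(len(frames) - 1)
    ]
    return 1.0 - float(np.mean(scores))  # ~0 for a static clip, larger for strong motion


if __name__ == "__main__":
    clip = np.random.rand(16, 64, 64, 3).astype(np.float32)
    print(motion_residuals(clip).shape, motion_intensity(clip))
```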


## News
- (🔥 New) Jun. 2, 2024. 💥 The inference code is released. The checkpoint can be found [here](https://huggingface.co/maxin-cn/Cinemo/tree/main).


## Setup

First, download and set up the repo:

```bash
git clone https://github.com/maxin-cn/Cinemo
cd Cinemo
```

We provide an [`environment.yml`](environment.yml) file that can be used to create a Conda environment. If you only want to run pre-trained models locally on CPU, you can remove the `cudatoolkit` and `pytorch-cuda` requirements from the file.

```bash
conda env create -f environment.yml
conda activate cinemo
```


## Animation

You can sample from our **pre-trained Cinemo models** with [`animation.py`](pipelines/animation.py). Weights for the pre-trained Cinemo model can be found [here](https://huggingface.co/maxin-cn/Cinemo/tree/main). The script exposes various arguments to adjust the number of sampling steps, change the classifier-free guidance scale, and so on:

```bash
bash pipelines/animation.sh
```
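
As a reminder of what the guidance-scale argument controls, the following is the standard classifier-free guidance update used by diffusion samplers in general. This is a didactic sketch with dummy tensor shapes, not the code inside `animation.py`.

```python
# Standard classifier-free guidance rule (didactic sketch, dummy tensors).
# A larger guidance scale pushes samples harder toward the conditioning signal,
# typically at some cost to diversity.
import torch


def guided_noise(eps_uncond: torch.Tensor, eps_cond: torch.Tensor, scale: float) -> torch.Tensor:
    """Blend the unconditional and conditional noise predictions."""
    return eps_uncond + scale * (eps_cond - eps_uncond)


if __name__ == "__main__":
    eps_uncond = torch.randn(1, 4, 16, 32, 32)  # hypothetical latent-video noise prediction
    eps_cond = torch.randn(1, 4, 16, 32, 32)
    print(guided_noise(eps_uncond, eps_cond, scale=7.5).shape)
```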

## Other Applications

You can also utilize Cinemo for other applications, such as motion transfer and video editing:

```bash
bash pipelines/video_editing.sh
```

## Acknowledgments
Cinemo has been greatly inspired by the following amazing works and teams: [LaVie](https://github.com/Vchitect/LaVie) and [SEINE](https://github.com/Vchitect/SEINE). We thank all the contributors for open-sourcing their work.