bubbliiiing committed on
Commit
4ffa5b4
·
1 Parent(s): b26307a

Update Readme

Browse files
Files changed (2)
  1. README.md +160 -251
  2. README_en.md +207 -0
README.md CHANGED
@@ -9,290 +9,199 @@ tags:
9
  - video
10
  - video-generation
11
  ---
12
- # Wan2.1
13
 
14
- <p align="center">
15
- <img src="assets/logo.png" width="400"/>
16
- </p>
17
-
18
- <p align="center">
19
- 💜 <a href=""><b>Wan</b></a> &nbsp&nbsp | &nbsp&nbsp 🖥️ <a href="https://github.com/Wan-Video/Wan2.1">GitHub</a> &nbsp&nbsp | &nbsp&nbsp🤗 <a href="https://huggingface.co/Wan-AI/">Hugging Face</a>&nbsp&nbsp | &nbsp&nbsp🤖 <a href="https://modelscope.cn/organization/Wan-AI">ModelScope</a>&nbsp&nbsp | &nbsp&nbsp 📑 <a href="">Paper (Coming soon)</a> &nbsp&nbsp | &nbsp&nbsp 📑 <a href="https://wanxai.com">Blog</a> &nbsp&nbsp | &nbsp&nbsp💬 <a href="https://gw.alicdn.com/imgextra/i2/O1CN01tqjWFi1ByuyehkTSB_!!6000000000015-0-tps-611-1279.jpg">WeChat Group</a>&nbsp&nbsp | &nbsp&nbsp 📖 <a href="https://discord.gg/p5XbdQV7">Discord</a>&nbsp&nbsp
20
- <br>
21
-
22
- -----
23
-
24
- [**Wan: Open and Advanced Large-Scale Video Generative Models**]("#") <br>
25
-
26
- In this repository, we present **Wan2.1**, a comprehensive and open suite of video foundation models that pushes the boundaries of video generation. **Wan2.1** offers these key features:
27
- - 👍 **SOTA Performance**: **Wan2.1** consistently outperforms existing open-source models and state-of-the-art commercial solutions across multiple benchmarks.
28
- - 👍 **Supports Consumer-grade GPUs**: The T2V-1.3B model requires only 8.19 GB VRAM, making it compatible with almost all consumer-grade GPUs. It can generate a 5-second 480P video on an RTX 4090 in about 4 minutes (without optimization techniques like quantization). Its performance is even comparable to some closed-source models.
29
- - 👍 **Multiple Tasks**: **Wan2.1** excels in Text-to-Video, Image-to-Video, Video Editing, Text-to-Image, and Video-to-Audio, advancing the field of video generation.
30
- - 👍 **Visual Text Generation**: **Wan2.1** is the first video model capable of generating both Chinese and English text, featuring robust text generation that enhances its practical applications.
31
- - 👍 **Powerful Video VAE**: **Wan-VAE** delivers exceptional efficiency and performance, encoding and decoding 1080P videos of any length while preserving temporal information, making it an ideal foundation for video and image generation.
32
-
33
-
34
- This repository hosts our T2V-1.3B model, a versatile solution for video generation that is compatible with nearly all consumer-grade GPUs. In this way, we hope that **Wan2.1** can serve as an easy-to-use tool for more creative teams in video creation, providing a high-quality foundational model for academic teams with limited computing resources. This will facilitate both the rapid development of the video creation community and the swift advancement of video technology.
35
-
36
-
37
- ## Video Demos
38
-
39
- <div align="center">
40
- <video width="80%" controls>
41
- <source src="https://cloud.video.taobao.com/vod/Jth64Y7wNoPcJki_Bo1ZJTDBvNjsgjlVKsNs05Fqfps.mp4" type="video/mp4">
42
- Your browser does not support the video tag.
43
- </video>
44
- </div>
45
-
46
-
47
- ## 🔥 Latest News!!
48
-
49
- * Feb 25, 2025: 👋 We've released the inference code and weights of Wan2.1.
50
-
51
-
52
- ## 📑 Todo List
53
- - Wan2.1 Text-to-Video
54
- - [x] Multi-GPU Inference code of the 14B and 1.3B models
55
- - [x] Checkpoints of the 14B and 1.3B models
56
- - [x] Gradio demo
57
- - [ ] Diffusers integration
58
- - [ ] ComfyUI integration
59
- - Wan2.1 Image-to-Video
60
- - [x] Multi-GPU Inference code of the 14B model
61
- - [x] Checkpoints of the 14B model
62
- - [x] Gradio demo
63
- - [ ] Diffusers integration
64
- - [ ] ComfyUI integration
65
-
66
-
67
- ## Quickstart
68
-
69
- #### Installation
70
- Clone the repo:
71
- ```
72
- git clone https://github.com/Wan-Video/Wan2.1.git
73
- cd Wan2.1
74
- ```
75
-
76
- Install dependencies:
77
- ```
78
- # Ensure torch >= 2.4.0
79
- pip install -r requirements.txt
80
- ```
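As a quick way to confirm the torch requirement noted in the comment above, you can check the installed version from Python (an illustrative snippet, not part of the original instructions):
```
# Verify the installed PyTorch meets the stated requirement (>= 2.4.0).
import torch

major, minor = (int(x) for x in torch.__version__.split("+")[0].split(".")[:2])
assert (major, minor) >= (2, 4), f"torch >= 2.4.0 required, found {torch.__version__}"
print("torch", torch.__version__, "OK")
```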
81
-
82
-
83
- #### Model Download
84
-
85
- | Models | Download Link | Notes |
86
- | --------------|-------------------------------------------------------------------------------|-------------------------------|
87
- | T2V-14B | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.1-T2V-14B) 🤖 [ModelScope](https://www.modelscope.cn/models/Wan-AI/Wan2.1-T2V-14B) | Supports both 480P and 720P
88
- | I2V-14B-720P | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.1-I2V-14B-720P) 🤖 [ModelScope](https://www.modelscope.cn/models/Wan-AI/Wan2.1-I2V-14B-720P) | Supports 720P
89
- | I2V-14B-480P | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.1-I2V-14B-480P) 🤖 [ModelScope](https://www.modelscope.cn/models/Wan-AI/Wan2.1-I2V-14B-480P) | Supports 480P
90
- | T2V-1.3B | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B) 🤖 [ModelScope](https://www.modelscope.cn/models/Wan-AI/Wan2.1-T2V-1.3B) | Supports 480P
91
-
92
-
93
- > 💡Note: The 1.3B model is capable of generating videos at 720P resolution. However, due to limited training at this resolution, the results are generally less stable compared to 480P. For optimal performance, we recommend using 480P resolution.
94
-
95
-
96
- Download models using 🤗 huggingface-cli:
97
- ```
98
- pip install "huggingface_hub[cli]"
99
- huggingface-cli download Wan-AI/Wan2.1-T2V-1.3B --local-dir ./Wan2.1-T2V-1.3B
100
- ```
101
-
102
- Download models using 🤖 modelscope-cli:
103
- ```
104
- pip install modelscope
105
- modelscope download Wan-AI/Wan2.1-T2V-1.3B --local_dir ./Wan2.1-T2V-1.3B
106
- ```
107
-
108
- #### Run Text-to-Video Generation
109
-
110
- This repository supports two Text-to-Video models (1.3B and 14B) and two resolutions (480P and 720P). The parameters and configurations for these models are as follows:
111
-
112
- <table>
113
- <thead>
114
- <tr>
115
- <th rowspan="2">Task</th>
116
- <th colspan="2">Resolution</th>
117
- <th rowspan="2">Model</th>
118
- </tr>
119
- <tr>
120
- <th>480P</th>
121
- <th>720P</th>
122
- </tr>
123
- </thead>
124
- <tbody>
125
- <tr>
126
- <td>t2v-14B</td>
127
- <td style="color: green;">✔️</td>
128
- <td style="color: green;">✔️</td>
129
- <td>Wan2.1-T2V-14B</td>
130
- </tr>
131
- <tr>
132
- <td>t2v-1.3B</td>
133
- <td style="color: green;">✔️</td>
134
- <td style="color: red;">❌</td>
135
- <td>Wan2.1-T2V-1.3B</td>
136
- </tr>
137
- </tbody>
138
  </table>
139
 
140
 
141
- ##### (1) Without Prompt Extension
142
 
143
- To facilitate implementation, we will start with a basic version of the inference process that skips the [prompt extension](#2-using-prompt-extension) step.
144
 
145
- - Single-GPU inference
146
 
147
- ```
148
- python generate.py --task t2v-1.3B --size 832*480 --ckpt_dir ./Wan2.1-T2V-1.3B --sample_shift 8 --sample_guide_scale 6 --prompt "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage."
149
- ```
150
 
151
- If you encounter OOM (Out-of-Memory) issues, you can use the `--offload_model True` and `--t5_cpu` options to reduce GPU memory usage. For example, on an RTX 4090 GPU:
 
152
 
153
  ```
154
- python generate.py --task t2v-1.3B --size 832*480 --ckpt_dir ./Wan2.1-T2V-1.3B --offload_model True --t5_cpu --sample_shift 8 --sample_guide_scale 6 --prompt "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage."
155
- ```
156
-
157
- > 💡Note: If you are using the `T2V-1.3B` model, we recommend setting the parameter `--sample_guide_scale 6`. The `--sample_shift` parameter can be adjusted within the range of 8 to 12 based on the performance.
158
 
159
- - Multi-GPU inference using FSDP + xDiT USP
 
160
 
161
- ```
162
- pip install "xfuser>=0.4.1"
163
- torchrun --nproc_per_node=8 generate.py --task t2v-1.3B --size 832*480 --ckpt_dir ./Wan2.1-T2V-1.3B --dit_fsdp --t5_fsdp --ulysses_size 8 --sample_shift 8 --sample_guide_scale 6 --prompt "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage."
164
- ```
165
 
 
 
166
 
167
- ##### (2) Using Prompt Extension
 
 
168
 
169
- Extending the prompts can effectively enrich the details in the generated videos, further enhancing the video quality. Therefore, we recommend enabling prompt extension. We provide the following two methods for prompt extension:
170
 
171
- - Use the Dashscope API for extension.
172
- - Apply for a `dashscope.api_key` in advance ([EN](https://www.alibabacloud.com/help/en/model-studio/getting-started/first-api-call-to-qwen) | [CN](https://help.aliyun.com/zh/model-studio/getting-started/first-api-call-to-qwen)).
173
- - Configure the environment variable `DASH_API_KEY` to specify the Dashscope API key. For users of Alibaba Cloud's international site, you also need to set the environment variable `DASH_API_URL` to 'https://dashscope-intl.aliyuncs.com/api/v1'. For more detailed instructions, please refer to the [dashscope document](https://www.alibabacloud.com/help/en/model-studio/developer-reference/use-qwen-by-calling-api?spm=a2c63.p38356.0.i1).
174
- - Use the `qwen-plus` model for text-to-video tasks and `qwen-vl-max` for image-to-video tasks.
175
- - You can modify the model used for extension with the parameter `--prompt_extend_model`. For example:
176
- ```
177
- DASH_API_KEY=your_key python generate.py --task t2v-1.3B --size 832*480 --ckpt_dir ./Wan2.1-T2V-1.3B --prompt "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage" --use_prompt_extend --prompt_extend_method 'dashscope' --prompt_extend_target_lang 'ch'
178
  ```
179
 
180
- - Using a local model for extension.
 
 
181
 
182
- - By default, the Qwen model on HuggingFace is used for this extension. Users can choose based on the available GPU memory size.
183
- - For text-to-video tasks, you can use models like `Qwen/Qwen2.5-14B-Instruct`, `Qwen/Qwen2.5-7B-Instruct`, and `Qwen/Qwen2.5-3B-Instruct`.
184
- - For image-to-video tasks, you can use models like `Qwen/Qwen2.5-VL-7B-Instruct` and `Qwen/Qwen2.5-VL-3B-Instruct`.
185
- - Larger models generally provide better extension results but require more GPU memory.
186
- - You can modify the model used for extension with the parameter `--prompt_extend_model`, allowing you to specify either a local model path or a Hugging Face model. For example:
 
 
187
 
188
- ```
189
- python generate.py --task t2v-1.3B --size 832*480 --ckpt_dir ./Wan2.1-T2V-1.3B --prompt "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage" --use_prompt_extend --prompt_extend_method 'local_qwen' --prompt_extend_target_lang 'ch'
190
- ```
191
 
192
- ##### (3) Running local Gradio
193
 
194
- ```
195
- cd gradio
196
- # if one uses dashscope’s API for prompt extension
197
- DASH_API_KEY=your_key python t2v_1.3B_singleGPU.py --prompt_extend_method 'dashscope' --ckpt_dir ./Wan2.1-T2V-1.3B
198
 
199
- # if one uses a local model for prompt extension
200
- python t2v_1.3B_singleGPU.py --prompt_extend_method 'local_qwen' --ckpt_dir ./Wan2.1-T2V-1.3B
201
  ```
202
 
 
203
 
 
204
 
205
- ## Evaluation
206
-
207
- We employ our **Wan-Bench** framework to evaluate the performance of the T2V-1.3B model, with the results displayed in the table below. The results indicate that our smaller 1.3B model surpasses the overall metrics of larger open-source models, demonstrating the effectiveness of **Wan2.1**'s architecture and the data construction pipeline.
208
-
209
- <div align="center">
210
- <img src="assets/vben_1.3b_vs_sota.png" alt="" style="width: 80%;" />
211
- </div>
212
-
213
-
214
-
215
- ## Computational Efficiency on Different GPUs
216
-
217
- We test the computational efficiency of different **Wan2.1** models on different GPUs in the following table. The results are presented in the format: **Total time (s) / peak GPU memory (GB)**.
218
-
219
-
220
- <div align="center">
221
- <img src="assets/comp_effic.png" alt="" style="width: 80%;" />
222
- </div>
223
-
224
- > The parameter settings for the tests presented in this table are as follows:
225
- > (1) For the 1.3B model on 8 GPUs, set `--ring_size 8` and `--ulysses_size 1`;
226
- > (2) For the 14B model on 1 GPU, use `--offload_model True`;
227
- > (3) For the 1.3B model on a single 4090 GPU, set `--offload_model True --t5_cpu`;
228
- > (4) For all tests, no prompt extension was applied, meaning `--use_prompt_extend` was not enabled.
229
-
230
- -------
231
-
232
- ## Introduction of Wan2.1
233
-
234
- **Wan2.1** is designed on the mainstream diffusion transformer paradigm, achieving significant advancements in generative capabilities through a series of innovations. These include our novel spatio-temporal variational autoencoder (VAE), scalable training strategies, large-scale data construction, and automated evaluation metrics. Collectively, these contributions enhance the model’s performance and versatility.
235
-
236
-
237
- ##### (1) 3D Variational Autoencoders
238
- We propose a novel 3D causal VAE architecture, termed **Wan-VAE**, specifically designed for video generation. By combining multiple strategies, we improve spatio-temporal compression, reduce memory usage, and ensure temporal causality. **Wan-VAE** demonstrates significant advantages in performance efficiency compared to other open-source VAEs. Furthermore, our **Wan-VAE** can encode and decode unlimited-length 1080P videos without losing historical temporal information, making it particularly well-suited for video generation tasks.
239
-
240
-
241
- <div align="center">
242
- <img src="assets/video_vae_res.jpg" alt="" style="width: 80%;" />
243
- </div>
244
-
245
-
246
- ##### (2) Video Diffusion DiT
247
-
248
- **Wan2.1** is designed using the Flow Matching framework within the paradigm of mainstream Diffusion Transformers. Our model's architecture uses the T5 Encoder to encode multilingual text input, with cross-attention in each transformer block embedding the text into the model structure. Additionally, we employ an MLP with a Linear layer and a SiLU layer to process the input time embeddings and predict six modulation parameters individually. This MLP is shared across all transformer blocks, with each block learning a distinct set of biases. Our experimental findings reveal a significant performance improvement with this approach at the same parameter scale.
249
-
250
- <div align="center">
251
- <img src="assets/video_dit_arch.jpg" alt="" style="width: 80%;" />
252
- </div>
253
-
254
-
255
- | Model | Dimension | Input Dimension | Output Dimension | Feedforward Dimension | Frequency Dimension | Number of Heads | Number of Layers |
256
- |--------|-----------|-----------------|------------------|-----------------------|---------------------|-----------------|------------------|
257
- | 1.3B | 1536 | 16 | 16 | 8960 | 256 | 12 | 30 |
258
- | 14B | 5120 | 16 | 16 | 13824 | 256 | 40 | 40 |
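To make the shared time-embedding MLP described above more concrete, here is a minimal PyTorch sketch, not the repository's actual implementation: a single SiLU + Linear projection shared across blocks predicts the six modulation parameters, and each block only learns its own set of biases. Dimensions follow the 1.3B row of the table; all names are illustrative.
```
import torch
import torch.nn as nn

dim, num_blocks = 1536, 30  # 1.3B settings from the table above

# Shared MLP: SiLU followed by a Linear layer mapping the time embedding
# to six modulation parameters of width `dim`.
time_mlp = nn.Sequential(nn.SiLU(), nn.Linear(dim, 6 * dim))

# Each transformer block only learns a distinct set of biases on top of the shared projection.
block_biases = nn.ParameterList(
    [nn.Parameter(torch.zeros(6, dim)) for _ in range(num_blocks)]
)

def modulation_params(t_emb: torch.Tensor, block_idx: int) -> torch.Tensor:
    """Six per-block modulation parameters, shape (batch, 6, dim)."""
    shared = time_mlp(t_emb).view(-1, 6, dim)
    return shared + block_biases[block_idx]
```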
259
-
260
-
261
-
262
- ##### Data
263
-
264
- We curated and deduplicated a candidate dataset comprising a vast amount of image and video data. During the data curation process, we designed a four-step data cleaning process, focusing on fundamental dimensions, visual quality and motion quality. Through the robust data processing pipeline, we can easily obtain high-quality, diverse, and large-scale training sets of images and videos.
265
-
266
- ![figure1](assets/data_for_diff_stage.jpg "figure1")
267
-
268
-
269
- ##### Comparisons to SOTA
270
- We compared **Wan2.1** with leading open-source and closed-source models to evaluate the performance. Using our carefully designed set of 1,035 internal prompts, we tested across 14 major dimensions and 26 sub-dimensions. Then we calculated the total score through a weighted average based on the importance of each dimension. The detailed results are shown in the table below. These results demonstrate our model's superior performance compared to both open-source and closed-source models.
271
-
272
- ![figure1](assets/vben_vs_sota.png "figure1")
273
-
274
 
275
- ## Citation
276
- If you find our work helpful, please cite us.
 
277
 
278
- ```
279
- @article{wan2.1,
280
- title = {Wan: Open and Advanced Large-Scale Video Generative Models},
281
- author = {Wan Team},
282
- journal = {},
283
- year = {2025}
284
- }
285
- ```
286
 
287
- ## License Agreement
288
- The models in this repository are licensed under the Apache 2.0 License. We claim no rights over your generated content, granting you the freedom to use it while ensuring that your usage complies with the provisions of this license. You are fully accountable for your use of the models, which must not involve sharing any content that violates applicable laws, causes harm to individuals or groups, disseminates personal information intended for harm, spreads misinformation, or targets vulnerable populations. For a complete list of restrictions and details regarding your rights, please refer to the full text of the [license](LICENSE.txt).
289
 
290
 
291
- ## Acknowledgements
292
 
293
- We would like to thank the contributors to the [SD3](https://huggingface.co/stabilityai/stable-diffusion-3-medium), [Qwen](https://huggingface.co/Qwen), [umt5-xxl](https://huggingface.co/google/umt5-xxl), [diffusers](https://github.com/huggingface/diffusers) and [HuggingFace](https://huggingface.co) repositories, for their open research.
294
 
295
 
296
 
297
- ## Contact Us
298
- If you would like to leave a message for our research or product teams, feel free to join our [Discord](https://discord.gg/p5XbdQV7) or [WeChat groups](https://gw.alicdn.com/imgextra/i2/O1CN01tqjWFi1ByuyehkTSB_!!6000000000015-0-tps-611-1279.jpg)!
 
9
  - video
10
  - video-generation
11
  ---
 
12
 
13
+ # Wan-Fun
14
+
15
+ 😊 Welcome!
16
+
17
+ [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-yellow)](https://huggingface.co/spaces/alibaba-pai/Wan-Fun-1.3b)
18
+
19
+ [English](./README_en.md) | [简体中文](./README.md)
20
+
21
+ # Table of Contents
22
+ - [Table of Contents](#table-of-contents)
23
+ - [Model zoo](#model-zoo)
24
+ - [Video Result](#video-result)
25
+ - [Quick Start](#quick-start)
26
+ - [How to use](#how-to-use)
27
+ - [Reference](#reference)
28
+ - [License](#license)
29
+
30
+ # Model zoo
31
+ V1.0:
32
+ | Name | Storage Space | Hugging Face | Model Scope | Description |
33
+ |--|--|--|--|--|
34
+ | Wan2.1-Fun-1.3B-InP | 13.0 GB | [🤗Link](https://huggingface.co/alibaba-pai/Wan2.1-Fun-1.3B-InP) | [😄Link](https://modelscope.cn/models/PAI/Wan2.1-Fun-1.3B-InP) | Wan2.1-Fun-1.3B text-to-video weights, trained at multiple resolutions, supporting start and end frame prediction. |
35
+ | Wan2.1-Fun-14B-InP | 20.0 GB | [🤗Link](https://huggingface.co/alibaba-pai/Wan2.1-Fun-14B-InP) | [😄Link](https://modelscope.cn/models/PAI/Wan2.1-Fun-14B-InP) | Wan2.1-Fun-14B text-to-video weights, trained at multiple resolutions, supporting start and end frame prediction. |
36
+
37
+ # Video Result
38
+
39
+ ### Wan2.1-Fun-14B-InP
40
+
41
+ <table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
42
+ <tr>
43
+ <td>
44
+ <video src="https://github.com/user-attachments/assets/4e10d491-f1cf-4b08-a7c5-1e01e5418140" width="100%" controls autoplay loop></video>
45
+ </td>
46
+ <td>
47
+ <video src="https://github.com/user-attachments/assets/bd72a276-e60e-4b5d-86c1-d0f67e7425b9" width="100%" controls autoplay loop></video>
48
+ </td>
49
+ <td>
50
+ <video src="https://github.com/user-attachments/assets/cb7aef09-52c2-4973-80b4-b2fb63425044" width="100%" controls autoplay loop></video>
51
+ </td>
52
+ <td>
53
+ <video src="https://github.com/user-attachments/assets/f7e363a9-be09-4b72-bccf-cce9c9ebeb9b" width="100%" controls autoplay loop></video>
54
+ </td>
55
+ </tr>
 
56
  </table>
57
 
58
+ ### Wan2.1-Fun-1.3B-InP
59
+
60
+ <table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
61
+ <tr>
62
+ <td>
63
+ <video src="https://github.com/user-attachments/assets/28f3e720-8acc-4f22-a5d0-ec1c571e9466" width="100%" controls autoplay loop></video>
64
+ </td>
65
+ <td>
66
+ <video src="https://github.com/user-attachments/assets/fb6e4cb9-270d-47cd-8501-caf8f3e91b5c" width="100%" controls autoplay loop></video>
67
+ </td>
68
+ <td>
69
+ <video src="https://github.com/user-attachments/assets/989a4644-e33b-4f0c-b68e-2ff6ba37ac7e" width="100%" controls autoplay loop></video>
70
+ </td>
71
+ <td>
72
+ <video src="https://github.com/user-attachments/assets/9c604fa7-8657-49d1-8066-b5bb198b28b6" width="100%" controls autoplay loop></video>
73
+ </td>
74
+ </tr>
75
+ </table>
76
 
77
+ # Quick Start
78
+ ### 1. Cloud usage: AliyunDSW/Docker
79
+ #### a. From AliyunDSW
80
+ DSW offers free GPU time, which a user can apply for once; it is valid for 3 months after approval.
81
 
82
+ Aliyun provides free GPU time in [Freetier](https://free.aliyun.com/?product=9602825&crowd=enterprise&spm=5176.28055625.J_5831864660.1.e939154aRgha4e&scm=20140722.M_9974135.P_110.MO_1806-ID_9974135-MID_9974135-CID_30683-ST_8512-V_1); claim it and use it in Aliyun PAI-DSW to start CogVideoX-Fun within 5 minutes.
83
 
84
+ [![DSW Notebook](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/dsw.png)](https://gallery.pai-ml.com/#/preview/deepLearning/cv/cogvideox_fun)
85
 
86
+ #### b. From ComfyUI
87
+ Our ComfyUI interface is shown below; see the [ComfyUI README](comfyui/README.md) for details.
88
+ ![workflow graph](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/cogvideox_fun/asset/v1/cogvideoxfunv1_workflow_i2v.jpg)
89
 
90
+ #### c. From docker
91
+ When using docker, make sure that the GPU driver and CUDA environment are correctly installed on your machine, then run the following commands:
92
 
93
  ```
94
+ # pull image
95
+ docker pull mybigpai-public-registry.cn-beijing.cr.aliyuncs.com/easycv/torch_cuda:cogvideox_fun
 
 
96
 
97
+ # enter image
98
+ docker run -it -p 7860:7860 --network host --gpus all --security-opt seccomp:unconfined --shm-size 200g mybigpai-public-registry.cn-beijing.cr.aliyuncs.com/easycv/torch_cuda:cogvideox_fun
99
 
100
+ # clone code
101
+ git clone https://github.com/aigc-apps/CogVideoX-Fun.git
 
 
102
 
103
+ # enter CogVideoX-Fun's dir
104
+ cd CogVideoX-Fun
105
 
106
+ # download weights
107
+ mkdir models/Diffusion_Transformer
108
+ mkdir models/Personalized_Model
109
 
110
+ # Please use the huggingface link or modelscope link to download the model.
111
+ # CogVideoX-Fun
112
+ # https://huggingface.co/alibaba-pai/CogVideoX-Fun-V1.1-5b-InP
113
+ # https://modelscope.cn/models/PAI/CogVideoX-Fun-V1.1-5b-InP
114
 
115
+ # Wan
116
+ # https://huggingface.co/alibaba-pai/Wan2.1-Fun-14B-InP
117
+ # https://modelscope.cn/models/PAI/Wan2.1-Fun-14B-InP
118
  ```
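The block above only lists the download links. As one option, the weights can be fetched programmatically with `huggingface_hub.snapshot_download` (an illustrative sketch, not a command from this repo; requires `pip install huggingface_hub`):
```
# Illustrative sketch: download the Wan2.1-Fun-14B-InP weights listed above into
# the models/Diffusion_Transformer directory created in the docker steps.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="alibaba-pai/Wan2.1-Fun-14B-InP",
    local_dir="models/Diffusion_Transformer/Wan2.1-Fun-14B-InP",
)
```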
119
 
120
+ ### 2. Local install: environment check/downloading/installation
121
+ #### a. Environment check
122
+ We have verified that this repo runs in the following environments:
123

124
+ Details for Windows:
125
+ - OS: Windows 10
126
+ - python: python3.10 & python3.11
127
+ - pytorch: torch2.2.0
128
+ - CUDA: 11.8 & 12.1
129
+ - CUDNN: 8+
130
+ - GPU: Nvidia-3060 12G & Nvidia-3090 24G
131
 
132
+ Details for Linux:
133
+ - OS: Ubuntu 20.04, CentOS
134
+ - python: python3.10 & python3.11
135
+ - pytorch: torch2.2.0
136
+ - CUDA: 11.8 & 12.1
137
+ - CUDNN: 8+
138
+ - GPU: Nvidia-V100 16G & Nvidia-A10 24G & Nvidia-A100 40G & Nvidia-A100 80G
139
 
140
+ We need about 60GB of free disk space, please check!
141
 
142
+ #### b. Weight placement
143
+ It is best to place the [weights](#model-zoo) along the specified paths:
 
 
144
 
145
+ ```
146
+ 📦 models/
147
+ ├── 📂 Diffusion_Transformer/
148
+ │ ├── 📂 CogVideoX-Fun-V1.1-2b-InP/
149
+ │ ├── 📂 CogVideoX-Fun-V1.1-5b-InP/
150
+ │ ├── 📂 Wan2.1-Fun-14B-InP
151
+ │ └── 📂 Wan2.1-Fun-1.3B-InP/
152
+ ├── 📂 Personalized_Model/
153
+ │ └── your trained transformer model / your trained lora model (for UI load)
154
  ```
155
 
156
+ # How to Use
157

158
+ <h3 id="video-gen">1. Generation</h3>
159

160
+ #### a. GPU memory saving options
161
+ Since Wan2.1 has a very large number of parameters, memory saving options are needed to fit consumer-grade GPUs. Each prediction file provides a GPU_memory_mode that can be set to model_cpu_offload, model_cpu_offload_and_qfloat8, or sequential_cpu_offload. The same options also apply to CogVideoX-Fun generation.
162
 
163
+ - model_cpu_offload: the entire model is moved to the CPU after use, saving some GPU memory.
164
+ - model_cpu_offload_and_qfloat8: the entire model is moved to the CPU after use, and the transformer is quantized to float8, saving more GPU memory.
165
+ - sequential_cpu_offload: each layer of the model is moved to the CPU after use; it is slower but saves a large amount of GPU memory.
166

167
+ qfloat8 slightly reduces model performance but saves more GPU memory. If GPU memory is sufficient, model_cpu_offload is recommended.
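For readers who think in terms of diffusers, the three modes roughly correspond to the library's generic offload hooks. The sketch below is illustrative only: the checkpoint path and loading call are assumptions, the float8 step is left as a comment, and the repo's own predict scripts expose the choice through the GPU_memory_mode variable.
```
import torch
from diffusers import DiffusionPipeline

# Illustrative sketch of what the three GPU_memory_mode options map to in terms of
# diffusers' generic offload hooks. The checkpoint path is an assumption.
pipe = DiffusionPipeline.from_pretrained(
    "models/Diffusion_Transformer/Wan2.1-Fun-1.3B-InP", torch_dtype=torch.bfloat16
)

GPU_memory_mode = "model_cpu_offload"

if GPU_memory_mode == "model_cpu_offload":
    pipe.enable_model_cpu_offload()        # move the whole model to CPU after each use
elif GPU_memory_mode == "model_cpu_offload_and_qfloat8":
    # The repo additionally quantizes the transformer weights to float8 before offloading;
    # that step is omitted here to avoid inventing an API.
    pipe.enable_model_cpu_offload()
elif GPU_memory_mode == "sequential_cpu_offload":
    pipe.enable_sequential_cpu_offload()   # offload layer by layer: slowest, saves the most memory
```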
168
 
169
+ #### b. Using ComfyUI
170
+ See the [ComfyUI README](comfyui/README.md) for details.
171

172
+ #### c. Running Python files
173
+ - Step 1: Download the corresponding [weights](#model-zoo) and place them in the models folder.
174
+ - Step 2: Use different files for prediction depending on the weights and the prediction target. This repo currently supports CogVideoX-Fun, Wan2.1, and Wan2.1-Fun, distinguished by folder names under the examples folder; different models support different features, so use them accordingly. CogVideoX-Fun is used as the example below.
175
+ - Text-to-video:
176
+ - Modify prompt, neg_prompt, guidance_scale, and seed in examples/cogvideox_fun/predict_t2v.py.
177
+ - Then run examples/cogvideox_fun/predict_t2v.py and wait for the results, which are saved in the samples/cogvideox-fun-videos folder.
178
+ - Image-to-video:
179
+ - Modify validation_image_start, validation_image_end, prompt, neg_prompt, guidance_scale, and seed in examples/cogvideox_fun/predict_i2v.py.
180
+ - validation_image_start is the starting image of the video, and validation_image_end is the ending image of the video.
181
+ - Then run examples/cogvideox_fun/predict_i2v.py and wait for the results, which are saved in the samples/cogvideox-fun-videos_i2v folder.
182
+ - Video-to-video:
183
+ - Modify validation_video, validation_image_end, prompt, neg_prompt, guidance_scale, and seed in examples/cogvideox_fun/predict_v2v.py.
184
+ - validation_video is the reference video for video-to-video generation. You can run the demo with the following video: [Demo Video](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/cogvideox_fun/asset/v1/play_guitar.mp4).
185
+ - Then run examples/cogvideox_fun/predict_v2v.py and wait for the results, which are saved in the samples/cogvideox-fun-videos_v2v folder.
186
+ - Controlled video generation (Canny, Pose, Depth, etc.):
187
+ - Modify control_video, validation_image_end, prompt, neg_prompt, guidance_scale, and seed in examples/cogvideox_fun/predict_v2v_control.py.
188
+ - control_video is the control video for controlled generation, extracted with operators such as Canny, Pose, or Depth. You can run the demo with the following video: [Demo Video](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/cogvideox_fun/asset/v1.1/pose.mp4).
189
+ - Then run examples/cogvideox_fun/predict_v2v_control.py and wait for the results, which are saved in the samples/cogvideox-fun-videos_v2v_control folder.
190
+ - Step 3: If you want to combine other backbones and LoRAs you have trained yourself, modify lora_path and the relevant paths in examples/{model_name}/predict_t2v.py or examples/{model_name}/predict_i2v.py as needed.
191
 
192
+ #### d. Using the web UI
193

194
+ The web UI supports text-to-video, image-to-video, video-to-video, and controlled video generation (Canny, Pose, Depth, etc.). This repo currently supports CogVideoX-Fun, Wan2.1, and Wan2.1-Fun, distinguished by folder names under the examples folder; different models support different features, so use them accordingly. CogVideoX-Fun is used as the example below.
195

196
+ - Step 1: Download the corresponding [weights](#model-zoo) and place them in the models folder.
197
+ - Step 2: Run examples/cogvideox_fun/app.py to open the Gradio page.
198
+ - Step 3: Select the generation model on the page, fill in prompt, neg_prompt, guidance_scale, seed, etc., click Generate, and wait for the results, which are saved in the sample folder.
199
 
200
+ # Reference
201
+ - CogVideo: https://github.com/THUDM/CogVideo/
202
+ - EasyAnimate: https://github.com/aigc-apps/EasyAnimate
203
+ - Wan2.1: https://github.com/Wan-Video/Wan2.1/
204
+
205
+ # License
206
+ This project is licensed under the [Apache License (Version 2.0)](https://github.com/modelscope/modelscope/blob/master/LICENSE).
207
 
 
 
README_en.md ADDED
@@ -0,0 +1,207 @@
1
+ ---
2
+ license: apache-2.0
3
+ language:
4
+ - en
5
+ - zh
6
+ pipeline_tag: text-to-video
7
+ library_name: diffusers
8
+ tags:
9
+ - video
10
+ - video-generation
11
+ ---
12
+
13
+ # Wan-Fun
14
+
15
+ 😊 Welcome!
16
+
17
+ [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-yellow)](https://huggingface.co/spaces/alibaba-pai/Wan-Fun-1.3b)
18
+
19
+ [English](./README_en.md) | [简体中文](./README.md)
20
+
21
+ # Table of Contents
22
+ - [Table of Contents](#table-of-contents)
23
+ - [Model zoo](#model-zoo)
24
+ - [Video Result](#video-result)
25
+ - [Quick Start](#quick-start)
26
+ - [How to use](#how-to-use)
27
+ - [Reference](#reference)
28
+ - [License](#license)
29
+
30
+ # Model zoo
31
+ V1.0:
32
+ | Name | Storage Space | Hugging Face | Model Scope | Description |
33
+ |--|--|--|--|--|
34
+ | Wan2.1-Fun-1.3B-InP | 13.0 GB | [🤗Link](https://huggingface.co/alibaba-pai/Wan2.1-Fun-1.3B-InP) | [😄Link](https://modelscope.cn/models/PAI/Wan2.1-Fun-1.3B-InP) | Wan2.1-Fun-1.3B text-to-video weights, trained at multiple resolutions, supporting start and end frame prediction. |
35
+ | Wan2.1-Fun-14B-InP | 20.0 GB | [🤗Link](https://huggingface.co/alibaba-pai/Wan2.1-Fun-14B-InP) | [😄Link](https://modelscope.cn/models/PAI/Wan2.1-Fun-14B-InP) | Wan2.1-Fun-14B text-to-video weights, trained at multiple resolutions, supporting start and end frame prediction. |
36
+
37
+ # Video Result
38
+
39
+ ### Wan2.1-Fun-14B-InP
40
+
41
+ <table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
42
+ <tr>
43
+ <td>
44
+ <video src="https://github.com/user-attachments/assets/4e10d491-f1cf-4b08-a7c5-1e01e5418140" width="100%" controls autoplay loop></video>
45
+ </td>
46
+ <td>
47
+ <video src="https://github.com/user-attachments/assets/bd72a276-e60e-4b5d-86c1-d0f67e7425b9" width="100%" controls autoplay loop></video>
48
+ </td>
49
+ <td>
50
+ <video src="https://github.com/user-attachments/assets/cb7aef09-52c2-4973-80b4-b2fb63425044" width="100%" controls autoplay loop></video>
51
+ </td>
52
+ <td>
53
+ <video src="https://github.com/user-attachments/assets/f7e363a9-be09-4b72-bccf-cce9c9ebeb9b" width="100%" controls autoplay loop></video>
54
+ </td>
55
+ </tr>
56
+ </table>
57
+
58
+ ### Wan2.1-Fun-1.3B-InP
59
+
60
+ <table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
61
+ <tr>
62
+ <td>
63
+ <video src="https://github.com/user-attachments/assets/28f3e720-8acc-4f22-a5d0-ec1c571e9466" width="100%" controls autoplay loop></video>
64
+ </td>
65
+ <td>
66
+ <video src="https://github.com/user-attachments/assets/fb6e4cb9-270d-47cd-8501-caf8f3e91b5c" width="100%" controls autoplay loop></video>
67
+ </td>
68
+ <td>
69
+ <video src="https://github.com/user-attachments/assets/989a4644-e33b-4f0c-b68e-2ff6ba37ac7e" width="100%" controls autoplay loop></video>
70
+ </td>
71
+ <td>
72
+ <video src="https://github.com/user-attachments/assets/9c604fa7-8657-49d1-8066-b5bb198b28b6" width="100%" controls autoplay loop></video>
73
+ </td>
74
+ </tr>
75
+ </table>
76
+
77
+ # Quick Start
78
+ ### 1. Cloud usage: AliyunDSW/Docker
79
+ #### a. From AliyunDSW
80
+ DSW offers free GPU time, which a user can apply for once; it is valid for 3 months after approval.
81
+
82
+ Aliyun provides free GPU time in [Freetier](https://free.aliyun.com/?product=9602825&crowd=enterprise&spm=5176.28055625.J_5831864660.1.e939154aRgha4e&scm=20140722.M_9974135.P_110.MO_1806-ID_9974135-MID_9974135-CID_30683-ST_8512-V_1); claim it and use it in Aliyun PAI-DSW to start CogVideoX-Fun within 5 minutes!
83
+
84
+ [![DSW Notebook](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/dsw.png)](https://gallery.pai-ml.com/#/preview/deepLearning/cv/cogvideox_fun)
85
+
86
+ #### b. From ComfyUI
87
+ Our ComfyUI interface is shown below; please refer to the [ComfyUI README](comfyui/README.md) for details.
88
+ ![workflow graph](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/cogvideox_fun/asset/v1/cogvideoxfunv1_workflow_i2v.jpg)
89
+
90
+ #### c. From docker
91
+ If you are using docker, please make sure that the GPU driver and CUDA environment are installed correctly on your machine.
92
+
93
+ Then execute the following commands:
94
+
95
+ ```
96
+ # pull image
97
+ docker pull mybigpai-public-registry.cn-beijing.cr.aliyuncs.com/easycv/torch_cuda:cogvideox_fun
98
+
99
+ # enter image
100
+ docker run -it -p 7860:7860 --network host --gpus all --security-opt seccomp:unconfined --shm-size 200g mybigpai-public-registry.cn-beijing.cr.aliyuncs.com/easycv/torch_cuda:cogvideox_fun
101
+
102
+ # clone code
103
+ git clone https://github.com/aigc-apps/CogVideoX-Fun.git
104
+
105
+ # enter CogVideoX-Fun's dir
106
+ cd CogVideoX-Fun
107
+
108
+ # download weights
109
+ mkdir models/Diffusion_Transformer
110
+ mkdir models/Personalized_Model
111
+
112
+ # Please use the huggingface link or modelscope link to download the model.
113
+ # CogVideoX-Fun
114
+ # https://huggingface.co/alibaba-pai/CogVideoX-Fun-V1.1-5b-InP
115
+ # https://modelscope.cn/models/PAI/CogVideoX-Fun-V1.1-5b-InP
116
+
117
+ # Wan
118
+ # https://huggingface.co/alibaba-pai/Wan2.1-Fun-14B-InP
119
+ # https://modelscope.cn/models/PAI/Wan2.1-Fun-14B-InP
120
+ ```
121
+
122
+ ### 2. Local install: Environment Check/Downloading/Installation
123
+ #### a. Environment Check
124
+ We have verified that this repo runs in the following environments:
125
+
126
+ Details for Windows:
127
+ - OS: Windows 10
128
+ - python: python3.10 & python3.11
129
+ - pytorch: torch2.2.0
130
+ - CUDA: 11.8 & 12.1
131
+ - CUDNN: 8+
132
+ - GPU: Nvidia-3060 12G & Nvidia-3090 24G
133
+
134
+ Details for Linux:
135
+ - OS: Ubuntu 20.04, CentOS
136
+ - python: python3.10 & python3.11
137
+ - pytorch: torch2.2.0
138
+ - CUDA: 11.8 & 12.1
139
+ - CUDNN: 8+
140
+ - GPU: Nvidia-V100 16G & Nvidia-A10 24G & Nvidia-A100 40G & Nvidia-A100 80G
141
+
142
+ We need about 60GB of free disk space (for saving weights), please check!
143
+
144
+ #### b. Weights
145
+ It is best to place the [weights](#model-zoo) along the specified paths:
146
+
147
+ ```
148
+ 📦 models/
149
+ ├── 📂 Diffusion_Transformer/
150
+ │ ├── 📂 CogVideoX-Fun-V1.1-2b-InP/
151
+ │ ├── 📂 CogVideoX-Fun-V1.1-5b-InP/
152
+ │ ├── 📂 Wan2.1-Fun-14B-InP
153
+ │ └── 📂 Wan2.1-Fun-1.3B-InP/
154
+ ├── 📂 Personalized_Model/
155
+ │ └── your trained transformer model / your trained lora model (for UI load)
156
+ ```
157
+
158
+ # How to Use
159
+
160
+ <h3 id="video-gen">1. Generation</h3>
161
+
162
+ #### a. GPU Memory Optimization
163
+ Since Wan2.1 has a very large number of parameters, we need to consider memory optimization strategies to adapt to consumer-grade GPUs. We provide `GPU_memory_mode` for each prediction file, allowing you to choose between `model_cpu_offload`, `model_cpu_offload_and_qfloat8`, and `sequential_cpu_offload`. This solution is also applicable to CogVideoX-Fun generation.
164
+
165
+ - `model_cpu_offload`: The entire model is moved to the CPU after use, saving some GPU memory.
166
+ - `model_cpu_offload_and_qfloat8`: The entire model is moved to the CPU after use, and the transformer model is quantized to float8, saving more GPU memory.
167
+ - `sequential_cpu_offload`: Each layer of the model is moved to the CPU after use. It is slower but saves a significant amount of GPU memory.
168
+
169
+ `qfloat8` may slightly reduce model performance but saves more GPU memory. If you have sufficient GPU memory, it is recommended to use `model_cpu_offload`.
170
+
171
+ #### b. Using ComfyUI
172
+ For details, refer to [ComfyUI README](comfyui/README.md).
173
+
174
+ #### c. Running Python Files
175
+ - **Step 1**: Download the corresponding [weights](#model-zoo) and place them in the `models` folder.
176
+ - **Step 2**: Use different files for prediction based on the weights and prediction goals. This library currently supports CogVideoX-Fun, Wan2.1, and Wan2.1-Fun. Different models are distinguished by folder names under the `examples` folder, and their supported features vary. Use them accordingly. Below is an example using CogVideoX-Fun; a small configuration sketch follows this list:
177
+ - **Text-to-Video**:
178
+ - Modify `prompt`, `neg_prompt`, `guidance_scale`, and `seed` in the file `examples/cogvideox_fun/predict_t2v.py`.
179
+ - Run the file `examples/cogvideox_fun/predict_t2v.py` and wait for the results. The generated videos will be saved in the folder `samples/cogvideox-fun-videos`.
180
+ - **Image-to-Video**:
181
+ - Modify `validation_image_start`, `validation_image_end`, `prompt`, `neg_prompt`, `guidance_scale`, and `seed` in the file `examples/cogvideox_fun/predict_i2v.py`.
182
+ - `validation_image_start` is the starting image of the video, and `validation_image_end` is the ending image of the video.
183
+ - Run the file `examples/cogvideox_fun/predict_i2v.py` and wait for the results. The generated videos will be saved in the folder `samples/cogvideox-fun-videos_i2v`.
184
+ - **Video-to-Video**:
185
+ - Modify `validation_video`, `validation_image_end`, `prompt`, `neg_prompt`, `guidance_scale`, and `seed` in the file `examples/cogvideox_fun/predict_v2v.py`.
186
+ - `validation_video` is the reference video for video-to-video generation. You can use the following demo video: [Demo Video](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/cogvideox_fun/asset/v1/play_guitar.mp4).
187
+ - Run the file `examples/cogvideox_fun/predict_v2v.py` and wait for the results. The generated videos will be saved in the folder `samples/cogvideox-fun-videos_v2v`.
188
+ - **Controlled Video Generation (Canny, Pose, Depth, etc.)**:
189
+ - Modify `control_video`, `validation_image_end`, `prompt`, `neg_prompt`, `guidance_scale`, and `seed` in the file `examples/cogvideox_fun/predict_v2v_control.py`.
190
+ - `control_video` is the control video extracted using operators such as Canny, Pose, or Depth. You can use the following demo video: [Demo Video](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/cogvideox_fun/asset/v1.1/pose.mp4).
191
+ - Run the file `examples/cogvideox_fun/predict_v2v_control.py` and wait for the results. The generated videos will be saved in the folder `samples/cogvideox-fun-videos_v2v_control`.
192
+ - **Step 3**: If you want to integrate other backbones or Loras trained by yourself, modify `lora_path` and relevant paths in `examples/{model_name}/predict_t2v.py` or `examples/{model_name}/predict_i2v.py` as needed.
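As a concrete picture of the edits described in Step 2, the snippet below shows the kind of module-level variables you would change near the top of `examples/cogvideox_fun/predict_t2v.py`; the values are placeholders and only the variable names come from the text above.
```
# Hypothetical excerpt of the variables Step 2 asks you to edit in
# examples/cogvideox_fun/predict_t2v.py (values are placeholders).
GPU_memory_mode = "model_cpu_offload"
prompt          = "A panda strumming a guitar in a bamboo forest, cinematic lighting."
neg_prompt      = "blurry, low quality, watermark"
guidance_scale  = 6.0
seed            = 43
```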
193
+
194
+ #### d. Using the Web UI
195
+ The web UI supports text-to-video, image-to-video, video-to-video, and controlled video generation (Canny, Pose, Depth, etc.). This library currently supports CogVideoX-Fun, Wan2.1, and Wan2.1-Fun. Different models are distinguished by folder names under the `examples` folder, and their supported features vary. Use them accordingly. Below is an example using CogVideoX-Fun:
196
+
197
+ - **Step 1**: Download the corresponding [weights](#model-zoo) and place them in the `models` folder.
198
+ - **Step 2**: Run the file `examples/cogvideox_fun/app.py` to access the Gradio interface.
199
+ - **Step 3**: Select the generation model on the page, fill in `prompt`, `neg_prompt`, `guidance_scale`, and `seed`, click "Generate," and wait for the results. The generated videos will be saved in the `sample` folder.
200
+
201
+ # Reference
202
+ - CogVideo: https://github.com/THUDM/CogVideo/
203
+ - EasyAnimate: https://github.com/aigc-apps/EasyAnimate
204
+ - Wan2.1: https://github.com/Wan-Video/Wan2.1/
205
+
206
+ # License
207
+ This project is licensed under the [Apache License (Version 2.0)](https://github.com/modelscope/modelscope/blob/master/LICENSE).