Update README.md
README.md (CHANGED)
````diff
@@ -19,7 +19,7 @@ inference:
 
 ## Overview
 
-Wan2.1-T2V-14B-StepDistill-CfgDistill is an advanced text-to-video generation model built upon the Wan2.1-T2V-14B foundation. This approach allows the model to generate videos with significantly fewer inference steps (4
+Wan2.1-T2V-14B-StepDistill-CfgDistill is an advanced text-to-video generation model built upon the Wan2.1-T2V-14B foundation. This approach allows the model to generate videos with significantly fewer inference steps (4 steps) and without classifier-free guidance, substantially reducing video generation time while maintaining high quality outputs.
 
 ## Video Demos
 
@@ -34,7 +34,7 @@ Our training code is modified based on the [Self-Forcing](https://github.com/gua
 Our inference framework utilizes [lightx2v](https://github.com/ModelTC/lightx2v), a highly efficient inference engine that supports multiple models. This framework significantly accelerates the video generation process while maintaining high quality output.
 
 ```bash
-bash scripts/
+bash scripts/wan/run_wan_t2v_distill_4step_cfg.sh
 ```
 
 ## License Agreement
````
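As an illustration of what the 4-step, CFG-free setting described in the updated Overview means in practice, here is a minimal sketch using the Hugging Face diffusers Wan pipeline. The model id, the availability of a diffusers-format checkpoint, and the scheduler defaults are assumptions made for this sketch; the supported inference path remains the lightx2v script shown in the diff above.

```python
# Hypothetical sketch of 4-step, CFG-free sampling with diffusers' Wan pipeline.
# Assumes a diffusers-format checkpoint exists at this model id; the officially
# supported path for this checkpoint is the lightx2v script in the README.
import torch
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.utils import export_to_video

model_id = "lightx2v/Wan2.1-T2V-14B-StepDistill-CfgDistill"  # assumption

# Keep the Wan VAE in float32 for numerical stability; run the transformer in bfloat16.
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16).to("cuda")

video = pipe(
    prompt="A cat walks on the grass, realistic style.",
    height=480,
    width=832,
    num_frames=81,
    num_inference_steps=4,  # step-distilled: 4 denoising steps instead of the usual ~50
    guidance_scale=1.0,     # CFG-distilled: classifier-free guidance disabled
).frames[0]

export_to_video(video, "output.mp4", fps=16)
```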