LuckyLiGY committed · Commit f10e48c · verified · 1 Parent(s): 4bf8925

Update README.md

Files changed (1): README.md (+32, -23)
README.md CHANGED
@@ -1,9 +1,14 @@
  ---
  license: cc-by-nc-sa-4.0
  ---
- # MagicTryOn: Harnessing Diffusion Transformer for Garment-Preserving Video Virtual Try-on

  <a href="https://arxiv.org/abs/2505.21325v2"><img src='https://img.shields.io/badge/arXiv-2501.11325-red?style=flat&logo=arXiv&logoColor=red' alt='arxiv'></a>&nbsp;
  <a href="https://vivocameraresearch.github.io/magictryon/"><img src='https://img.shields.io/badge/Project-Page-Green' alt='GitHub'></a>&nbsp;
  <a href="http://www.apache.org/licenses/LICENSE-2.0"><img src='https://img.shields.io/badge/License-CC BY--NC--SA--4.0-lightgreen?style=flat&logo=Lisence' alt='License'></a>&nbsp;

@@ -13,11 +18,11 @@ license: cc-by-nc-sa-4.0
  <img src="asset/model.png" width="100%" height="100%"/>
  </div>

- ## Updates
- - **`2025/06/06`**: 🎉 We are excited to announce that the ***code and weights*** of [**MagicTryOn**](https://github.com/vivoCameraResearch/Magic-TryOn/) have been released! Check it out! You can download the weights from 🤗[**HuggingFace**](https://huggingface.co/LuckyLiGY/MagicTryOn).
  - **`2025/05/27`**: Our [**Paper on ArXiv**](https://arxiv.org/abs/2505.21325v2) is available 🥳!

- ## To-Do List for MagicTryOn Release
  - ✅ Release the source code
  - ✅ Release the inference demo and pretrained weights
  - ✅ Release the customized try-on utilities
@@ -26,7 +31,7 @@ license: cc-by-nc-sa-4.0
  - [ ] Release the second version of the pretrained model weights
  - [ ] Update Gradio App.

- ## Installation

  Create a conda environment & Install requirments
  ```shell
@@ -37,12 +42,17 @@ pip install -r requirements.txt
  # or
  conda env create -f environment.yaml
  ```
- If you encounter an error while installing Flash Attention, please [**manually download**](https://github.com/Dao-AILab/flash-attention/releases) the installation package based on your Python version, CUDA version, and Torch version, and install it using ***pip install***.

- ## Demo Inference
  ### 1. Image TryOn
- You can directly run the following command to perform image try-on. If you want to modify some inference parameters, please make the changes inside the ***predict_image_tryon_up.py*** file.
  ```PowerShell
  CUDA_VISIBLE_DEVICES=0 python predict_image_tryon_up.py

@@ -50,7 +60,7 @@ CUDA_VISIBLE_DEVICES=1 python predict_image_tryon_low.py
  ```

  ### 2. Video TryOn
- You can directly run the following command to perform image try-on. If you want to modify some inference parameters, please make the changes inside the ***predict_video_tryon_up.py*** file.
  ```PowerShell
  CUDA_VISIBLE_DEVICES=0 python predict_video_tryon_up.py

@@ -67,17 +77,16 @@ Before performing customized try-on, you need to complete the following five ste
  ```

  2. **Cloth Line Map**
- Extract the structural lines or sketch of the garment using [**AniLines-Anime-Lineart-Extractor**](https://github.com/zhenglinpan/AniLines-Anime-Lineart-Extractor).
-
  ```PowerShell
  cd inference/customize/AniLines
  python infer.py --dir_in datasets/garment/vivo/vivo_garment --dir_out datasets/garment/vivo/vivo_garment_anilines --mode detail --binarize -1 --fp16 True --device cuda:1
  ```

  3. **Mask**
- Generate the agnostic mask of the garment, which is essential for region control during try-on. Please [**download**]() the required checkpoint for obtaining the agnostic mask. The checkpoint needs to be placed in the ***inference/customize/gen_mask/ckpt*** folder.

- (1) You need to rename your video to ***video.mp4***, and then construct the folders according to the following directory structure.
  ```
  ├── datasets
  │ ├── person
@@ -93,7 +102,7 @@ Before performing customized try-on, you need to complete the following five ste
  | | | | ├── 00002 ...
  ```

- (2) Using ***video2image.py*** to convert the video into image frames and save them to ***00001/images***.

  (3) Run the following command to obtain the agnostic mask.

@@ -107,9 +116,9 @@ Before performing customized try-on, you need to complete the following five ste
  # mask, _ = get_mask_location('dc', "dresses", model_parse, keypoints)
  ```

- After completing the above steps, you will obtain the agnostic masks for all video frames in the ***00001/masks*** folder.
  4. **Agnostic Representation**
- Construct an agnostic representation of the person by removing garment-specific features. You can directly run ***get_masked_person.py*** to obtain the Agnostic Representation. Make sure to modify the ***image_folder*** and ***mask_folder*** parameters. The resulting video frames will be stored in ***00001/agnostic***.

  5. **DensePose**
  Use DensePose to obtain UV-mapped dense human body coordinates for better spatial alignment.
@@ -121,20 +130,20 @@ Before performing customized try-on, you need to complete the following five ste
  cd inference/customize/detectron2/projects/DensePose
  bash run.sh
  ```
- (3) The generated results will be stored in the ***00001/image-densepose*** folder.

- After completing the above steps, run the ***image2video.py*** file to generate the required customized videos: ***mask.mp4***, ***agnostic.mp4***, and ***densepose.mp4***. Then, run the following command:
  ```PowerShell
  CUDA_VISIBLE_DEVICES=0 python predict_video_tryon_customize.py
  ```

- ## Acknowledgement
- Our code is modified based on [VideoX-Fun](https://github.com/aigc-apps/VideoX-Fun/tree/main). We adopt [Wan2.1-I2V-14B](https://github.com/Wan-Video/Wan2.1) as the base model. We use [SCHP](https://github.com/GoGoDuck912/Self-Correction-Human-Parsing/tree/master) and [openpose](https://github.com/CMU-Perceptual-Computing-Lab/openpose) to generate masks. We use [detectron2](https://github.com/facebookresearch/detectron2) to generate densepose. Thanks to all the contributors!

- ## License
  All the materials, including code, checkpoints, and demo, are made available under the [Creative Commons BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license. You are free to copy, redistribute, remix, transform, and build upon the project for non-commercial purposes, as long as you give appropriate credit and distribute your contributions under the same license.

- ## Citation

  ```bibtex
  @misc{li2025magictryon,
@@ -146,4 +155,4 @@ All the materials, including code, checkpoints, and demo, are made available und
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2505.21325},
  }
- ```

README.md (after change):

  ---
  license: cc-by-nc-sa-4.0
  ---
+ <h2 align="center">
+ <a href="https://arxiv.org/abs/2505.21325v2">
+ MagicTryOn: Harnessing Diffusion Transformer for Garment-Preserving Video Virtual Try-on
+ </a>
+ </h2>

  <a href="https://arxiv.org/abs/2505.21325v2"><img src='https://img.shields.io/badge/arXiv-2505.21325-red?style=flat&logo=arXiv&logoColor=red' alt='arxiv'></a>&nbsp;
+ <a href="https://huggingface.co/LuckyLiGY/MagicTryOn"><img src='https://img.shields.io/badge/Hugging Face-ckpts-orange?style=flat&logo=HuggingFace&logoColor=orange' alt='huggingface'></a>&nbsp;
  <a href="https://vivocameraresearch.github.io/magictryon/"><img src='https://img.shields.io/badge/Project-Page-Green' alt='GitHub'></a>&nbsp;
  <a href="https://creativecommons.org/licenses/by-nc-sa/4.0/"><img src='https://img.shields.io/badge/License-CC BY--NC--SA--4.0-lightgreen?style=flat&logo=License' alt='License'></a>&nbsp;

 
  <img src="asset/model.png" width="100%" height="100%"/>
  </div>

+ ## 📣 News
+ - **`2025/06/09`**: 🎉 We are excited to announce that the ***code*** of [**MagicTryOn**](https://github.com/vivoCameraResearch/Magic-TryOn/) has been released! Check it out! ***The weights are on the way and are expected to be released on June 14***. You can download the weights from 🤗[**HuggingFace**](https://huggingface.co/LuckyLiGY/MagicTryOn) once they are open-sourced.
  - **`2025/05/27`**: Our [**Paper on ArXiv**](https://arxiv.org/abs/2505.21325v2) is available 🥳!

+ ## To-Do List for MagicTryOn Release
  - ✅ Release the source code
  - ✅ Release the inference demo and pretrained weights
  - ✅ Release the customized try-on utilities
 
  - [ ] Release the second version of the pretrained model weights
  - [ ] Update Gradio App.

+ ## 😍 Installation

  Create a conda environment & install requirements
  ```shell
 
  # or
  conda env create -f environment.yaml
  ```
+ If you encounter an error while installing Flash Attention, please [**manually download**](https://github.com/Dao-AILab/flash-attention/releases) the installation package that matches your Python, CUDA, and Torch versions, and install it with pip, for example `pip install flash_attn-2.7.3+cu12torch2.2cxx11abiFALSE-cp312-cp312-linux_x86_64.whl`.
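If you are unsure which wheel to pick, the following quick check prints the Python, Torch, and CUDA versions that the wheel filename has to match (a small helper sketch, not part of the official instructions):

```shell
# Print the versions that determine the correct flash-attn wheel
python -c "import sys, torch; print('python', sys.version.split()[0]); print('torch', torch.__version__); print('cuda', torch.version.cuda)"
```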

+ Use the following command to download the weights (the `HF_ENDPOINT=https://hf-mirror.com` prefix points to a mirror and can be dropped if you can reach huggingface.co directly):
+ ```PowerShell
+ cd Magic-TryOn
+ HF_ENDPOINT=https://hf-mirror.com huggingface-cli download LuckyLiGY/MagicTryOn --local-dir ./weights/MagicTryOn_14B_V1
+ ```

+ ## 😉 Demo Inference
  ### 1. Image TryOn
+ You can directly run the following command to perform the image try-on demo. If you want to modify any inference parameters, please make the changes inside the `predict_image_tryon_up.py` file.
  ```PowerShell
  CUDA_VISIBLE_DEVICES=0 python predict_image_tryon_up.py

 
  ```

  ### 2. Video TryOn
+ You can directly run the following command to perform the video try-on demo. If you want to modify any inference parameters, please make the changes inside the `predict_video_tryon_up.py` file.
  ```PowerShell
  CUDA_VISIBLE_DEVICES=0 python predict_video_tryon_up.py

 
 
  ```

  2. **Cloth Line Map**
+ Extract the structural lines or sketch of the garment using [**AniLines-Anime-Lineart-Extractor**](https://github.com/zhenglinpan/AniLines-Anime-Lineart-Extractor). Download the pre-trained models from this [**link**](https://drive.google.com/file/d/1oazs4_X1Hppj-k9uqPD0HXWHEQLb9tNR/view?usp=sharing) and put them in the `inference/customize/AniLines/weights` folder.
  ```PowerShell
  cd inference/customize/AniLines
  python infer.py --dir_in datasets/garment/vivo/vivo_garment --dir_out datasets/garment/vivo/vivo_garment_anilines --mode detail --binarize -1 --fp16 True --device cuda:1
  ```
 
  3. **Mask**
+ Generate the agnostic mask of the garment, which is essential for region control during try-on. Please [**download**](https://drive.google.com/file/d/1E2JC_650g69AYrN2ZCwc8oz8qYRo5t5s/view?usp=sharing) the required checkpoint for obtaining the agnostic mask. The checkpoint needs to be placed in the `inference/customize/gen_mask/ckpt` folder.

+ (1) Rename your video to `video.mp4`, and then construct the folders according to the following directory structure.
  ```
  ├── datasets
  │ ├── person
 
  | | | | ├── 00002 ...
  ```

+ (2) Use `video2image.py` to convert the video into image frames and save them to `datasets/person/customize/video/00001/images`.
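
If `video2image.py` is not convenient in your setup, an equivalent frame extraction can be done with ffmpeg (a sketch run from the repo root, assuming zero-padded `%05d.png` frame names are acceptable to the later steps):

```shell
# Create the target folder and split video.mp4 into zero-padded PNG frames
mkdir -p datasets/person/customize/video/00001/images
ffmpeg -i datasets/person/customize/video/00001/video.mp4 datasets/person/customize/video/00001/images/%05d.png
```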

  (3) Run the following command to obtain the agnostic mask.

 
  # mask, _ = get_mask_location('dc', "dresses", model_parse, keypoints)
  ```

+ After completing the above steps, you will obtain the agnostic masks for all video frames in the `datasets/person/customize/video/00001/masks` folder.
  4. **Agnostic Representation**
+ Construct an agnostic representation of the person by removing garment-specific features. You can directly run `get_masked_person.py` to obtain the agnostic representation; make sure to set the `--image_folder` and `--mask_folder` parameters. The resulting video frames will be stored in `datasets/person/customize/video/00001/agnostic`.
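
A typical invocation might look like the following (the flag names mirror the parameters mentioned above; the exact interface is defined in `get_masked_person.py`, and the parameters may instead be set near the top of the script, so adjust as needed):

```shell
# Mask out the garment region in every frame to build the agnostic representation
python get_masked_person.py \
    --image_folder datasets/person/customize/video/00001/images \
    --mask_folder datasets/person/customize/video/00001/masks
```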

  5. **DensePose**
  Use DensePose to obtain UV-mapped dense human body coordinates for better spatial alignment.
 
  cd inference/customize/detectron2/projects/DensePose
  bash run.sh
  ```
+ (3) The generated results will be stored in the `datasets/person/customize/video/00001/image-densepose` folder.

+ After completing the above steps, run the `image2video.py` file to generate the required customized videos: `mask.mp4`, `agnostic.mp4`, and `densepose.mp4` (an equivalent ffmpeg sketch is shown after the command below). Then, run the following command:
  ```PowerShell
  CUDA_VISIBLE_DEVICES=0 python predict_video_tryon_customize.py
  ```
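
If you would rather not use `image2video.py`, the three videos can also be assembled from the generated frame folders with ffmpeg (a sketch assuming zero-padded `%05d.png` frames at 30 fps; adjust the pattern and frame rate to match your data):

```shell
# Re-encode the mask frames into mask.mp4; the other two folders are handled the same way
cd datasets/person/customize/video/00001
ffmpeg -framerate 30 -i masks/%05d.png -c:v libx264 -pix_fmt yuv420p mask.mp4
ffmpeg -framerate 30 -i agnostic/%05d.png -c:v libx264 -pix_fmt yuv420p agnostic.mp4
ffmpeg -framerate 30 -i image-densepose/%05d.png -c:v libx264 -pix_fmt yuv420p densepose.mp4
```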
 
+ ## 😘 Acknowledgement
+ Our code is modified from [VideoX-Fun](https://github.com/aigc-apps/VideoX-Fun/tree/main). We adopt [Wan2.1-I2V-14B](https://github.com/Wan-Video/Wan2.1) as the base model. We use [SCHP](https://github.com/GoGoDuck912/Self-Correction-Human-Parsing/tree/master), [openpose](https://github.com/CMU-Perceptual-Computing-Lab/openpose), and [DensePose](https://github.com/facebookresearch/DensePose) to generate masks, and [detectron2](https://github.com/facebookresearch/detectron2) to generate densepose. Thanks to all the contributors!

+ ## 😊 License
  All the materials, including code, checkpoints, and demo, are made available under the [Creative Commons BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license. You are free to copy, redistribute, remix, transform, and build upon the project for non-commercial purposes, as long as you give appropriate credit and distribute your contributions under the same license.

+ ## 🤩 Citation
 
  ```bibtex
  @misc{li2025magictryon,
 
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2505.21325},
  }
+ ```