Update README.md
README.md
CHANGED
@@ -5,54 +5,59 @@ tasks:
- text-to-image-synthesis

#model-type:
## e.g., gpt, phi, llama, chatglm, baichuan, etc.
#- gpt

#domain:
## e.g., nlp, cv, audio, multi-modal
#- nlp

#language:
## Language code list: https://help.aliyun.com/document_detail/215387.html?spm=a2c4g.11186623.0.0.9f8d7467kni6Aa
#- cn

#metrics:
## e.g., CIDEr, BLEU, ROUGE, etc.
#- CIDEr

#tags:
## Various custom tags, including pretrained, fine-tuned, instruction-tuned, RL-tuned, etc.
#- pretrained

#tools:
## e.g., vllm, fastchat, llamacpp, AdaSeq, etc.
#- vllm

base_model_relation: finetune
base_model:
- Qwen/Qwen-Image
---

# Qwen-Image Full Distillation Accelerated Model

## Model Introduction

This model is a distilled, accelerated version of [Qwen-Image](https://www.modelscope.cn/models/Qwen/Qwen-Image). The original model requires 40 inference steps and uses classifier-free guidance (CFG), for a total of 80 forward passes per image. The distilled model requires only 15 inference steps and no CFG, i.e. 15 forward passes, **achieving about a 5× speed-up**. The number of inference steps can be reduced further if needed, though generation quality may degrade.
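The pass counts follow from how CFG works: each guided denoising step runs two forward passes of the diffusion model (one conditional, one unconditional), while a CFG-free step runs one. A minimal sketch of the arithmetic, using only the numbers above:

```python
def forward_passes(steps: int, uses_cfg: bool) -> int:
    # Each CFG step needs a conditional and an unconditional pass.
    return steps * (2 if uses_cfg else 1)

original = forward_passes(40, uses_cfg=True)    # 40 * 2 = 80
distilled = forward_passes(15, uses_cfg=False)  # 15 * 1 = 15
print(f"speed-up: ~{original / distilled:.1f}x")  # ~5.3x, i.e. "about 5x"
```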

The training framework is built with [DiffSynth-Studio](https://github.com/modelscope/DiffSynth-Studio). The training dataset consists of 16,000 images generated by the original model from prompts randomly sampled from [DiffusionDB](https://www.modelscope.cn/datasets/AI-ModelScope/diffusiondb). Training took about 1 day on 8 × MI308X GPUs.
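The data-generation stage can be pictured as follows. This is a minimal illustration, not the actual training script: `load_diffusiondb_prompts` and `teacher_pipe` are hypothetical names, and the teacher settings (40 steps, CFG scale 4) are taken from the comparison table below.

```python
# Hypothetical sketch of distillation data generation; the real pipeline
# is built with DiffSynth-Studio and its exact helpers are not shown here.
prompts = load_diffusiondb_prompts(n=16_000)  # hypothetical: 16k DiffusionDB prompts

for i, prompt in enumerate(prompts):
    # teacher_pipe (hypothetical name): the original Qwen-Image pipeline
    # at its default settings, producing the distillation targets.
    image = teacher_pipe(prompt, seed=i, num_inference_steps=40, cfg_scale=4)
    image.save(f"distill_data/{i:05d}.jpg")  # illustrative file layout
```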

## Performance Comparison

| | Original Model (default settings) | Original Model (reduced settings) | Accelerated Model |
|-|-|-|-|
| Inference Steps | 40 | 15 | 15 |
| CFG Scale | 4 | 1 | 1 |
| Forward Passes | 80 | 15 | 15 |
| Example 1 | *(image)* | *(image)* | *(image)* |
| Example 2 | *(image)* | *(image)* | *(image)* |
| Example 3 | *(image)* | *(image)* | *(image)* |

## Inference Code

```shell
git clone https://github.com/modelscope/DiffSynth-Studio.git
```

@@ -75,7 +80,7 @@ pipe = QwenImagePipeline.from_pretrained(

```python
    ],
    tokenizer_config=ModelConfig(model_id="Qwen/Qwen-Image", origin_file_pattern="tokenizer/"),
)
prompt = "Delicate portrait, underwater girl, flowing blue dress, hair floating, clear light and shadows, bubbles surrounding, serene face, exquisite details, dreamy and beautiful."
image = pipe(prompt, seed=0, num_inference_steps=15, cfg_scale=1)
image.save("image.jpg")
```
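
Since the step count can be pushed below 15 at some cost in quality, a quick sweep makes the trade-off easy to inspect. A short sketch reusing the `pipe` and `prompt` objects from the block above (output filenames are illustrative):

```python
# Trade quality for speed by lowering the step count; cfg_scale stays at 1
# because the distilled model does not use CFG.
for steps in (15, 10, 5):
    image = pipe(prompt, seed=0, num_inference_steps=steps, cfg_scale=1)
    image.save(f"image_{steps}steps.jpg")
```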
|