---
frameworks:
- Pytorch
tasks:
- text-to-image-synthesis
base_model_relation: finetune
base_model:
- Qwen/Qwen-Image
license: apache-2.0
---
# Qwen-Image Full Distillation Accelerated Model
![](./assets/title.jpg)
## Model Introduction
This model is a distilled and accelerated version of [Qwen-Image](https://www.modelscope.cn/models/Qwen/Qwen-Image).
The original model requires 40 inference steps with classifier-free guidance (CFG), for a total of 80 forward passes.
The distilled model needs only 15 inference steps and no CFG, i.e. 15 forward passes, **a roughly 5× speed-up** (80 / 15 ≈ 5.3×).
The number of inference steps can be reduced further if needed, at the cost of some generation quality.
Training used the [DiffSynth-Studio](https://github.com/modelscope/DiffSynth-Studio) framework.
The training dataset consists of 16,000 images generated by the original model from prompts randomly sampled from [DiffusionDB](https://www.modelscope.cn/datasets/AI-ModelScope/diffusiondb).
Training took about one day on 8 × MI308X GPUs.
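To make the forward-pass accounting concrete: with CFG, each denoising step runs two forward passes (one conditional, one unconditional); without CFG, only one. A minimal sketch of that arithmetic (the helper function is illustrative, not part of DiffSynth-Studio):
```python
def forward_passes(num_inference_steps: int, uses_cfg: bool) -> int:
    """Each denoising step costs two forward passes with CFG, one without."""
    return num_inference_steps * (2 if uses_cfg else 1)

original = forward_passes(40, uses_cfg=True)     # 80
distilled = forward_passes(15, uses_cfg=False)   # 15
print(f"speed-up: {original / distilled:.1f}x")  # speed-up: 5.3x
```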
## Performance Comparison
| | Original Model (40 steps) | Original Model (15 steps) | Accelerated Model (15 steps) |
|-|-|-|-|
| Inference Steps | 40 | 15 | 15 |
| CFG Scale | 4 | 1 | 1 |
| Forward Passes | 80 | 15 | 15 |
| Example 1 | ![](./assets/image_1_full.jpg) | ![](./assets/image_1_original.jpg) | ![](./assets/image_1_ours.jpg) |
| Example 2 | ![](./assets/image_2_full.jpg) | ![](./assets/image_2_original.jpg) | ![](./assets/image_2_ours.jpg) |
| Example 3 | ![](./assets/image_3_full.jpg) | ![](./assets/image_3_original.jpg) | ![](./assets/image_3_ours.jpg) |
## Inference Code
```shell
git clone https://github.com/modelscope/DiffSynth-Studio.git
cd DiffSynth-Studio
pip install -e .
```
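Optionally, you can verify the editable install before downloading any weights (a quick sanity check using the same import as the inference code below):
```shell
python -c "from diffsynth.pipelines.qwen_image import QwenImagePipeline; print('DiffSynth-Studio is ready')"
```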
```python
from diffsynth.pipelines.qwen_image import QwenImagePipeline, ModelConfig
import torch

# Load the distilled DiT weights, plus the text encoder, VAE, and tokenizer
# from the base Qwen/Qwen-Image repository.
pipe = QwenImagePipeline.from_pretrained(
    torch_dtype=torch.bfloat16,
    device="cuda",
    model_configs=[
        ModelConfig(model_id="DiffSynth-Studio/Qwen-Image-Distill-Full", origin_file_pattern="diffusion_pytorch_model*.safetensors"),
        ModelConfig(model_id="Qwen/Qwen-Image", origin_file_pattern="text_encoder/model*.safetensors"),
        ModelConfig(model_id="Qwen/Qwen-Image", origin_file_pattern="vae/diffusion_pytorch_model.safetensors"),
    ],
    tokenizer_config=ModelConfig(model_id="Qwen/Qwen-Image", origin_file_pattern="tokenizer/"),
)

prompt = "Delicate portrait, underwater girl, flowing blue dress, hair floating, clear light and shadows, bubbles surrounding, serene face, exquisite details, dreamy and beautiful."

# The distilled model needs no CFG, so cfg_scale=1 disables classifier-free guidance.
image = pipe(prompt, seed=0, num_inference_steps=15, cfg_scale=1)
image.save("image.jpg")
```
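As noted in the introduction, the step count can be pushed lower at some cost in quality. A minimal variation reusing the `pipe` and `prompt` above (the step count of 8 is an arbitrary example, not a tuned recommendation):
```python
# Fewer steps run faster but may reduce generation quality.
image_fast = pipe(prompt, seed=0, num_inference_steps=8, cfg_scale=1)
image_fast.save("image_fast.jpg")
```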