---
frameworks:
- Pytorch
tasks:
- text-to-image-synthesis
base_model:
- Qwen/Qwen-Image
base_model_relation: adapter
license: apache-2.0
---
# Qwen-Image Local Image Redraw (Inpainting) Model
![](./assets/cover.png)
## Model Introduction
This model is a local image redraw (inpainting) model trained on [Qwen-Image](https://www.modelscope.cn/models/Qwen/Qwen-Image). It uses a ControlNet structure and can redraw local areas of an image. The training framework is built on [DiffSynth-Studio](https://github.com/modelscope/DiffSynth-Studio), and the training dataset is [Qwen-Image-Self-Generated-Dataset](https://www.modelscope.cn/datasets/DiffSynth-Studio/Qwen-Image-Self-Generated-Dataset).
This model is compatible with both [Qwen-Image](https://www.modelscope.cn/models/Qwen/Qwen-Image) and [Qwen-Image-Edit](https://www.modelscope.cn/models/Qwen/Qwen-Image-Edit): it can perform local redrawing with Qwen-Image and edit specified areas with Qwen-Image-Edit.
## Effect Demonstration
|Input Prompt|Input Image|Redrawn Image|
|-|-|-|
|A robot with wings and a hat standing in a colorful garden with flowers and butterflies.|![](./assets/image_1_1.jpg)|![](./assets/image_1_2.jpg)|
|A girl in a school uniform stands gracefully in front of a vibrant stained glass window with colorful geometric patterns.|![](./assets/image_2_1.jpg)|![](./assets/image_2_2.jpg)|
|A small wooden boat battles against towering, crashing waves in a stormy sea.|![](./assets/image_3_1.png)|![](./assets/image_3_2.png)|
## Limitations
- Inpainting models built on the ControlNet structure may produce visible, disharmonious boundaries between the redrawn and untouched areas.
- The model is trained on rectangular-area redraw data, so its generalization to non-rectangular masks may be limited (see the mask sketch below).
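
Because the training data uses rectangular regions, a plain rectangular mask is the safest input. Below is a minimal sketch for building one with PIL; the image size matches the inference examples further down, and the box coordinates are hypothetical:
```python
from PIL import Image, ImageDraw

# White marks the area to redraw; black is kept as-is.
mask = Image.new("RGB", (1328, 1328), "black")
draw = ImageDraw.Draw(mask)
draw.rectangle((400, 400, 900, 900), fill="white")  # hypothetical redraw region
mask.save("rect_mask.jpg")
```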
## Inference Code
```shell
git clone https://github.com/modelscope/DiffSynth-Studio.git
cd DiffSynth-Studio
pip install -e .
```
Qwen-Image:
```python
import torch
from PIL import Image
from modelscope import dataset_snapshot_download
from diffsynth.pipelines.qwen_image import QwenImagePipeline, ModelConfig, ControlNetInput

# Load the Qwen-Image base model together with the Blockwise ControlNet inpainting weights.
pipe = QwenImagePipeline.from_pretrained(
    torch_dtype=torch.bfloat16,
    device="cuda",
    model_configs=[
        ModelConfig(model_id="Qwen/Qwen-Image", origin_file_pattern="transformer/diffusion_pytorch_model*.safetensors"),
        ModelConfig(model_id="Qwen/Qwen-Image", origin_file_pattern="text_encoder/model*.safetensors"),
        ModelConfig(model_id="Qwen/Qwen-Image", origin_file_pattern="vae/diffusion_pytorch_model.safetensors"),
        ModelConfig(model_id="DiffSynth-Studio/Qwen-Image-Blockwise-ControlNet-Inpaint", origin_file_pattern="model.safetensors"),
    ],
    tokenizer_config=ModelConfig(model_id="Qwen/Qwen-Image", origin_file_pattern="tokenizer/"),
)

# Download the example image and mask used in this demo.
dataset_snapshot_download(
    dataset_id="DiffSynth-Studio/example_image_dataset",
    local_dir="./data/example_image_dataset",
    allow_file_pattern="inpaint/*.jpg",
)

prompt = "a cat with sunglasses"
# The input image is redrawn only where the mask is white.
controlnet_image = Image.open("./data/example_image_dataset/inpaint/image_1.jpg").convert("RGB").resize((1328, 1328))
inpaint_mask = Image.open("./data/example_image_dataset/inpaint/mask.jpg").convert("RGB").resize((1328, 1328))

image = pipe(
    prompt, seed=0,
    input_image=controlnet_image, inpaint_mask=inpaint_mask,
    blockwise_controlnet_inputs=[ControlNetInput(image=controlnet_image, inpaint_mask=inpaint_mask)],
    num_inference_steps=40,
)
image.save("image.jpg")
```
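As noted in the Limitations section, ControlNet-style inpainting can leave visible seams around the mask boundary. A common mitigation, sketched here with plain PIL, is to paste the original pixels back over everything outside the mask; the grayscale conversion of the mask is an assumption about its layout:
```python
from PIL import Image

# Keep generated pixels where the mask is white; restore the original elsewhere.
original = Image.open("./data/example_image_dataset/inpaint/image_1.jpg").convert("RGB").resize((1328, 1328))
generated = Image.open("image.jpg").convert("RGB")
mask = Image.open("./data/example_image_dataset/inpaint/mask.jpg").convert("L").resize((1328, 1328))
Image.composite(generated, original, mask).save("image_blended.jpg")
```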
Qwen-Image-Edit:
```python
import torch
from PIL import Image
from modelscope import dataset_snapshot_download
from diffsynth.pipelines.qwen_image import QwenImagePipeline, ModelConfig, ControlNetInput

# Load the Qwen-Image-Edit transformer; the text encoder, VAE, and ControlNet
# weights are the same as in the Qwen-Image setup above.
pipe = QwenImagePipeline.from_pretrained(
    torch_dtype=torch.bfloat16,
    device="cuda",
    model_configs=[
        ModelConfig(model_id="Qwen/Qwen-Image-Edit", origin_file_pattern="transformer/diffusion_pytorch_model*.safetensors"),
        ModelConfig(model_id="Qwen/Qwen-Image", origin_file_pattern="text_encoder/model*.safetensors"),
        ModelConfig(model_id="Qwen/Qwen-Image", origin_file_pattern="vae/diffusion_pytorch_model.safetensors"),
        ModelConfig(model_id="DiffSynth-Studio/Qwen-Image-Blockwise-ControlNet-Inpaint", origin_file_pattern="model.safetensors"),
    ],
    tokenizer_config=None,
    processor_config=ModelConfig(model_id="Qwen/Qwen-Image-Edit", origin_file_pattern="processor/"),
)

# Download the example image and mask used in this demo.
dataset_snapshot_download(
    dataset_id="DiffSynth-Studio/example_image_dataset",
    local_dir="./data/example_image_dataset",
    allow_file_pattern="inpaint/*.jpg",
)

prompt = "Put sunglasses on this cat"
controlnet_image = Image.open("./data/example_image_dataset/inpaint/image_1.jpg").convert("RGB").resize((1328, 1328))
inpaint_mask = Image.open("./data/example_image_dataset/inpaint/mask.jpg").convert("RGB").resize((1328, 1328))

image = pipe(
    prompt, seed=0,
    input_image=controlnet_image, inpaint_mask=inpaint_mask,
    blockwise_controlnet_inputs=[ControlNetInput(image=controlnet_image, inpaint_mask=inpaint_mask)],
    num_inference_steps=40,
    edit_image=controlnet_image,  # Qwen-Image-Edit additionally takes the image to edit
)
image.save("image.jpg")
```
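Compared with the Qwen-Image setup, the Qwen-Image-Edit pipeline swaps in the edit model's transformer weights, replaces the tokenizer with the edit model's processor (`tokenizer_config=None` plus `processor_config`), and passes the extra `edit_image` argument; the text encoder, VAE, and ControlNet weights stay the same.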