ovedrive committed
Commit 0066873 · verified · 1 Parent(s): c8c841d

Update README.md

Files changed (1): README.md +36 -3
README.md CHANGED
@@ -19,9 +19,42 @@ You can use the original Qwen-Image-Edit parameters.
 
 This model is `not yet` available for inference at JustLab.ai
 
- Note: this model has not been tested by Justlab.
- IMPORTANT: You should only use the `transformer` and `text_encoder` other directory files should be replaced with original Qwen-Image-Edit.
- The repo will be updated to v1 to fix this.
+ Model tested: works perfectly even with 20 steps.
+
+ Sample script:
+
+ ```python
+ import os
+ from PIL import Image
+ import torch
+
+ from diffusers import QwenImageEditPipeline
+
+ model_path = "ovedrive/qwen-image-edit-4bit"
+ pipeline = QwenImageEditPipeline.from_pretrained(model_path, torch_dtype=torch.bfloat16)
+ print("pipeline loaded")  # do not move the pipeline to CUDA; cpu offload below handles device placement
+
+ pipeline.set_progress_bar_config(disable=None)
+ pipeline.enable_model_cpu_offload()  # if you have enough VRAM, remove this line for faster inference
+ image = Image.open("./example.png").convert("RGB")
+ prompt = "Remove the lady head with white hair"
+ inputs = {
+     "image": image,
+     "prompt": prompt,
+     "generator": torch.manual_seed(0),
+     "true_cfg_scale": 4.0,
+     "negative_prompt": " ",
+     "num_inference_steps": 20,
+ }
+
+ with torch.inference_mode():
+     output = pipeline(**inputs)
+
+ output_image = output.images[0]
+ output_image.save("output_image_edit.png")
+ print("image saved at", os.path.abspath("output_image_edit.png"))
+ ```
+
 
 The original Qwen-Image attributions are included verbatim below.
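
One detail worth noting in the sample script above: `"generator": torch.manual_seed(0)` pins the sampling noise, which is what makes the edit reproducible across runs. A minimal sketch of the same pattern (assuming only `torch` is installed; the tensor shape here is illustrative, not anything the pipeline uses):

```python
import torch

# torch.manual_seed reseeds and returns the default Generator;
# reusing the same seed reproduces the exact random stream.
gen_a = torch.manual_seed(0)
noise_a = torch.randn(4, generator=gen_a)

gen_b = torch.manual_seed(0)
noise_b = torch.randn(4, generator=gen_b)

# Identical seeds give identical noise, hence identical edits.
print(torch.equal(noise_a, noise_b))  # True
```

Change the seed (or drop the `generator` entry) to get a different edit on each run.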