Out with it! Where are you all hiding the workflow?
I get this error with the above workflow:
TextEncodeQwenImageEdit
shape '[84, -1, 128]' is invalid for input of size 5114368
Using the same image for input gives me this:
TextEncodeQwenImageEdit
einsum(): subscript j has size 481 for operand 1 which does not broadcast with previously seen size 1443
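For what it's worth, the first error is just a failed tensor reshape: 5114368 is not divisible by 84 × 128 = 10752, so PyTorch cannot infer the -1 dimension. A minimal standalone repro of that failure (this is plain PyTorch, not the node's actual code, and per the replies below the underlying cause is most likely a version mismatch):

```python
import torch

# 84 * 128 = 10752, and 5114368 % 10752 != 0, so the -1 dimension
# cannot be inferred and view() raises the exact error seen above.
t = torch.randn(5114368)
t.view(84, -1, 128)
# RuntimeError: shape '[84, -1, 128]' is invalid for input of size 5114368
```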
Sorry if I'm being slow here, but where are you all getting that node, TextEncodeQwenImageEdit? I have the latest 0.3.50 and it doesn't have it, and I had no luck with the custom nodes search either. What am I missing?
I had the same issue, but it was fixed after pulling the latest code.
I’m on the ComfyUI portable version and updated it using update_comfyui.bat.
Here are the before/after screenshots:
And here’s the generated result on an 8GB GPU:
Same question here: how do I install it?
Turns out the people who keep posting results are the ones with repo installs: the commit exists, but the desktop installer hasn't received the update yet. So if you must have it right now, you will have to install from the repo.
I am using the original workflow and I have downloaded the model, CLIP, VAE, and LoRA twice to rule out corrupt files, but I only get this type of picture.
I am using an RTX 5090. Does anyone know what conditions must be fulfilled for this type of card to work with Qwen-Image-Edit?
From the image you shared I can see that you have disabled the Lightx2v LoRA, in which case Qwen-Image needs at least 40 steps for a reasonable image. Try either increasing the steps, or enabling the Lightx2v LoRA and lowering the steps to 4 or 8 (depending on the LoRA).
How can I implement inpainting using masking in ComfyUI? Could someone share a correct workflow setup that uses a base image and a mask (white = editable area, black = preserved area) to achieve proper inpainting results?
Why do you still need a mask? I think 2.5vl is already smart enough.
I completely agree. I have a dataset of night vision IR (black-and-white) images, and my goal is to perform face swapping or enhance facial features specifically within the face region. It’s important that the pixels outside the face remain unaltered. However, when I use the 2.5V model to improve the images, it tends to remove or distort the IR-specific characteristics. What I would like is to process a batch of these images and generate multiple variants while preserving the unique IR properties.
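One workaround until a proper masked workflow is shared: let the model edit the whole frame, then composite the result back through the mask outside ComfyUI, so every pixel where the mask is black stays byte-identical to the original. A minimal sketch (file paths and the function name are illustrative, not from any ComfyUI node), using the same white = editable, black = preserved convention:

```python
import numpy as np
from PIL import Image

def composite_with_mask(original_path, edited_path, mask_path, out_path):
    """Paste the edited face region back onto the original IR frame.

    The mask is white (255) where edits are allowed and black (0)
    where the original pixels must be preserved, so the IR
    characteristics outside the face are guaranteed untouched.
    """
    # IR frames are single-channel, so work in grayscale ("L") mode.
    original = np.asarray(Image.open(original_path).convert("L"), dtype=np.float32)
    edited = np.asarray(Image.open(edited_path).convert("L"), dtype=np.float32)
    mask = np.asarray(Image.open(mask_path).convert("L"), dtype=np.float32) / 255.0

    # Blend: edited pixels inside the mask, original pixels outside it.
    blended = edited * mask + original * (1.0 - mask)
    Image.fromarray(blended.astype(np.uint8)).save(out_path)
```

This also batches trivially with a loop over the dataset, since only the three input paths change per image.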
You need to update your ComfyUI: run "git pull" and then "pip install -r requirements.txt" to get the new version. TextEncodeQwenImageEdit was added in version 0.3.51.