GGUF not working in MultiTalk model loader
I have the fp16 safetensors version working, but the GGUF will not show in the example workflow from WanVideoWrapper using the MultiTalk model loader node.
The nodes need to be up to date, on the nightly version; the ability to load GGUF MultiTalk models was just added.
Using the GGUF model with the same workflow:

WanVideoSampler
WanVideoSampler.process() got an unexpected keyword argument 'infinitetalk_embeds'
I have never had a variable named infinitetalk_embeds in my code; that is from the InfiniteTalk fork. You need to uninstall it and use only the main WanVideoWrapper.
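For context, a minimal sketch of why this kind of error shows up: a saved ComfyUI workflow records node inputs by name, and those inputs are passed to the node's process method as keyword arguments. If the workflow was saved against a fork whose sampler accepts an extra input, the mainline node's method raises a TypeError for the unknown name. The class and input names below are illustrative, not taken from either codebase.

```python
# Sketch of how a workflow saved against a fork breaks on mainline node code.
# Names are illustrative, not from WanVideoWrapper or the InfiniteTalk fork.

class MainlineSampler:
    def process(self, model, image_embeds):
        # The mainline signature knows nothing about the fork's extra input.
        return f"sampled with {model}"

# Inputs recorded in a workflow that was built for the fork's sampler:
workflow_inputs = {
    "model": "wan2.1-14b",
    "image_embeds": "...",
    "infinitetalk_embeds": "...",  # only the fork's process() accepts this
}

sampler = MainlineSampler()
try:
    sampler.process(**workflow_inputs)  # workflow inputs arrive as kwargs
except TypeError as e:
    print(e)  # got an unexpected keyword argument 'infinitetalk_embeds'
```

This is why the fix is either removing the fork or using a workflow built for the mainline nodes: the error is about the workflow/node mismatch, not the model file.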
I don't have any fork installed, only the WanVideoWrapper. I just downloaded the workflow, queued it, and it failed. Tried with infinitetalk.safetensors too: same error (wan2.1 14b 480-q40.gguf).
Like I said, I have never used a variable named infinitetalk_embeds in my code, so I know for a fact you are either not using my code or not using my workflow. There's an InfiniteTalk example included in the wrapper; you are probably trying to use a workflow meant for the fork.
Not the OP: interesting. I too see the InfiniteTalk input on mine, but I've never tried to use InfiniteTalk; I simply updated the nodes. I'll have to check this when I get home to see what's up.
You are right, it seems the workflow was from the MeiGen example.
Using this now: https://github.com/kijai/ComfyUI-WanVideoWrapper/tree/main/example_workflows
It works now.
I got it loading, but now it says the lightx2v LoRA won't work with the GGUFs. I have the fp8s working well but want faster generation times. Am I using the wrong lightx2v LoRA?
They do work with GGUF; you just can't use the merge_loras option with GGUF, and it needs to be disabled in the LoRA select node.
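A hedged sketch of why merging is the problem: merging a LoRA means baking the low-rank delta directly into the base weight matrix, which assumes the weights are ordinary float tensors that can be rewritten in place. GGUF weights are stored quantized, so they can't be rewritten, but the same LoRA contribution can be added as a separate low-rank term at each forward pass. The function names and shapes below are illustrative, not WanVideoWrapper's actual implementation.

```python
import numpy as np

def merged_forward(w, lora_a, lora_b, x, scale=1.0):
    # merge_loras style: bake the low-rank delta into the weight matrix.
    # Requires w to be a plain float tensor you can overwrite, which is
    # not possible when the stored weights are GGUF-quantized.
    w_merged = w + scale * (lora_b @ lora_a)
    return x @ w_merged.T

def runtime_forward(dequant, lora_a, lora_b, x, scale=1.0):
    # GGUF-friendly style: leave the stored weights untouched and add the
    # LoRA contribution as a separate low-rank term on each forward pass.
    return x @ dequant().T + scale * (x @ lora_a.T) @ lora_b.T

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 4))       # stand-in base weight for one layer
lora_a = rng.standard_normal((2, 4))  # rank-2 LoRA down-projection
lora_b = rng.standard_normal((8, 2))  # rank-2 LoRA up-projection
x = rng.standard_normal((3, 4))       # a batch of inputs

# Both paths compute the same output; only the runtime path works when the
# base weights can be dequantized for reading but never rewritten.
y_merged = merged_forward(w, lora_a, lora_b, x)
y_runtime = runtime_forward(lambda: w, lora_a, lora_b, x)
print(np.allclose(y_merged, y_runtime))  # True
```

This is presumably why disabling merge_loras in the LoRA select node is enough: the LoRA still takes effect, just applied at runtime rather than merged into the quantized weights.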
Never mind, I didn't switch the wrapper nodes to the nightly version. I'm a donut.