I'm willing to help with the FP16 version

#10
by TekeshiX - opened

Hello!
This page says the WAN2.2-14B-Rapid-AllInOne model uses fp8 precision. Would it be possible to create this model at fp16 precision (I mean this one: https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_i2v_high_noise_14B_fp16.safetensors)?

If you don't have enough compute to do that, I can get access to an 80-140 GB VRAM GPU and 200 GB of RAM, so I could create the model myself and give it to you to do whatever you want with. I'd just need instructions on what to do.
I just like learning and understanding new things, so there's no catch in-between. Too curious! 😊

I really want to merge a better model before spending time making more versions of this one. As decent as this one is, I'm hoping to do better...

Are you on Discord? Maybe I can help with testing the model and with the fp16 edition.

I do have a Discord server that I've mostly used for my game development; I haven't talked much about AI video there yet (I've mostly done that on other Discord servers):

https://discord.gg/hxKcRTTzeU

Can we get an e5m2 version of the model(s)? That's the only fp8 format that works with torch.compile on AMD GPUs and older NVIDIA ones. Normally I do this myself by downloading the highest-precision versions of the models and converting them in ComfyUI, but since this is a merge there is no source for me. TekeshiX's model is also i2v only; I mostly need t2v.
