Update README.md
README.md CHANGED
```diff
@@ -39,7 +39,7 @@ tip: for **5b** model, use **pig-wan2-vae** [[1.41GB](https://huggingface.co/cal
 tip: for **14b** model, use **pig-wan-vae** [[254MB](https://huggingface.co/calcuis/wan2-gguf/blob/main/pig_wan_vae_fp32-f16.gguf)]
 
 ### **update**
 
-- upgrade your node (
+- upgrade your node (see last item from reference) for new/full quant support
 - get more **umt5xxl** gguf encoder either [here](https://huggingface.co/calcuis/pig-encoder/tree/main) or [here](https://huggingface.co/chatpig/umt5xxl-encoder-gguf/tree/main)
 
 ### **reference**
```
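A minimal sketch (not part of the README) of pulling the 14b **pig-wan-vae** gguf with `huggingface_hub`; the repo id and filename come from the link in the diff above, and the **umt5xxl** encoder ggufs can be fetched the same way once you pick a file from either encoder repo:

```python
# minimal sketch: download the 14b pig-wan-vae gguf referenced above
# (repo id and filename are taken from the README link; adjust the
# arguments for the 5b pig-wan2-vae or an umt5xxl encoder gguf)
from huggingface_hub import hf_hub_download

vae_path = hf_hub_download(
    repo_id="calcuis/wan2-gguf",
    filename="pig_wan_vae_fp32-f16.gguf",
)
print(vae_path)  # local path of the cached gguf file
```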