---
base_model:
- MAGREF-Video/MAGREF
base_model_relation: quantized
library_name: gguf
tags:
- image-to-video
- quantized
language:
- en
license: apache-2.0
---

This is a GGUF conversion of [MAGREF-Video/MAGREF](https://huggingface.co/MAGREF-Video/MAGREF).

All quantized versions were created from the base FP16 model using the conversion scripts provided by city96, available at the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF/tree/main/tools) GitHub repository.
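
For reference, the sketch below shows how such a conversion is typically reproduced with those tools. The script arguments, file names, and quantization type are illustrative assumptions, not the exact commands used for this repository.

```python
# Rough reproduction sketch, assuming the convert-then-quantize workflow
# described in the ComfyUI-GGUF tools repository. All file names and the
# quant type below are hypothetical placeholders.
import subprocess

src = "MAGREF_Wan2.1_I2V_14B_fp16.safetensors"   # hypothetical FP16 base checkpoint
converted = "MAGREF_Wan2.1_I2V_14B-F16.gguf"     # hypothetical convert.py output
quantized = "MAGREF_Wan2.1_I2V_14B-Q4_K_M.gguf"  # hypothetical quantized output

# 1) Convert the safetensors checkpoint to GGUF with the ComfyUI-GGUF tools.
subprocess.run(["python", "convert.py", "--src", src], check=True)

# 2) Requantize with the patched llama.cpp quantize binary the tools point to.
subprocess.run(["./llama-quantize", converted, quantized, "Q4_K_M"], check=True)
```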

## Usage

The model files can be used in [ComfyUI](https://github.com/comfyanonymous/ComfyUI/) with the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) custom node. Place the required model(s) in the following folders:

| Type | Name | Location | Download |
| ------------ | -------------------------------- | ------------------------------ | ---------------- |
| Main Model   | lym00/MAGREF_Wan2.1_I2V_14B-GGUF | `ComfyUI/models/unet` | GGUF (this repo) |
| Text Encoder | umt5-xxl-encoder | `ComfyUI/models/text_encoders` | [Safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/split_files/text_encoders) / [GGUF](https://huggingface.co/city96/umt5-xxl-encoder-gguf/tree/main) |
| CLIP Vision  | clip_vision_h | `ComfyUI/models/clip_vision` | [Safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/blob/main/split_files/clip_vision/clip_vision_h.safetensors) |
| VAE          | Wan2_1_VAE_bf16 | `ComfyUI/models/vae` | [Safetensors](https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan2_1_VAE_bf16.safetensors) |
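
If you prefer scripting the downloads, the sketch below uses the `huggingface_hub` Python package. The GGUF file names are hypothetical placeholders (pick the actual quantization from the repo file listings); the CLIP Vision and VAE paths match the links in the table above.

```python
# Minimal download sketch. GGUF file names are hypothetical placeholders;
# check the repository file listings for the quantization you actually want.
from pathlib import Path
import shutil

from huggingface_hub import hf_hub_download

COMFY = Path("ComfyUI")  # path to your ComfyUI installation


def fetch(repo_id: str, filename: str, dest_dir: Path) -> None:
    """Download a file into the local Hub cache, then copy it into a ComfyUI model folder."""
    cached = hf_hub_download(repo_id=repo_id, filename=filename)
    dest_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy(cached, dest_dir / Path(filename).name)


# Main model (this repo) - replace the filename with the quant you want.
fetch("lym00/MAGREF_Wan2.1_I2V_14B-GGUF",
      "MAGREF_Wan2.1_I2V_14B-Q4_K_M.gguf",              # hypothetical name
      COMFY / "models" / "unet")

# Text encoder (GGUF build from city96) - filename is likewise a placeholder.
fetch("city96/umt5-xxl-encoder-gguf",
      "umt5-xxl-encoder-Q8_0.gguf",                     # hypothetical name
      COMFY / "models" / "text_encoders")

# CLIP Vision and VAE, matching the files linked in the table above.
fetch("Comfy-Org/Wan_2.1_ComfyUI_repackaged",
      "split_files/clip_vision/clip_vision_h.safetensors",
      COMFY / "models" / "clip_vision")
fetch("Kijai/WanVideo_comfy",
      "Wan2_1_VAE_bf16.safetensors",
      COMFY / "models" / "vae")
```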

[**ComfyUI example workflow**](https://huggingface.co/lym00/MAGREF_Wan2.1_I2V_14B-GGUF/blob/main/Magref_example_workflow.json)

### Notes

*All original licenses and restrictions from the base models still apply.*

## Reference

- For an overview of quantization types, please see the [GGUF quantization types](https://huggingface.co/docs/hub/gguf#quantization-types).