Update README.md
README.md
CHANGED
@@ -1,3 +1,15 @@
+---
+license: apache-2.0
+language:
+- en
+- zh
+library_name: diffusers
+pipeline_tag: image-to-image
+quantized_by: A Dujari
+base_model:
+- Qwen/Qwen-Image-Edit
+---
+
 This is an NF4-quantized model of Qwen-Image-Edit, so it can run on GPUs with 20GB of VRAM. You can also run it on lower VRAM, such as 16GB.
 There were other NF4 models, but they made the mistake of blindly quantizing all layers in the transformer.
 This one does not. We retain some layers at full precision to ensure quality output.
@@ -10,16 +22,7 @@ Note: this model has not been tested by Justlab.
 
 The original Qwen-Image attributions are included verbatim below.
 
-
-license: apache-2.0
-language:
-- en
-- zh
-library_name: diffusers
-pipeline_tag: image-to-image
-base_model:
-- Qwen/Qwen-Image-Edit
----
+
 <p align="center">
     <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/qwen_image_edit_logo.png" width="400"/>
 <p>
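For anyone who wants to try the quantized checkpoint described above, here is a minimal loading sketch. It is not taken from the README: it assumes the repo is published as a complete diffusers pipeline (so the NF4 weights and their quantization config load with a plain `from_pretrained`), that your diffusers install includes `QwenImageEditPipeline`, and that `bitsandbytes` and `accelerate` are available. The repo id `your-username/qwen-image-edit-nf4` is a placeholder.

```python
# Minimal loading sketch; assumptions and the placeholder repo id are noted above.
import torch
from PIL import Image
from diffusers import QwenImageEditPipeline

repo_id = "your-username/qwen-image-edit-nf4"  # placeholder: replace with the actual repo

# If the NF4 weights were saved together with their quantization config,
# from_pretrained restores them as-is; the remaining layers load in bfloat16.
pipe = QwenImageEditPipeline.from_pretrained(repo_id, torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # optional: lowers peak VRAM, useful on 16GB cards

image = Image.open("input.png").convert("RGB")
result = pipe(
    image=image,
    prompt="Replace the background with a sunset over the ocean",
    num_inference_steps=50,
)
result.images[0].save("edited.png")
```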