Update README.md

- .gitattributes +2 -0
- README.md +88 -0
- flux-dev_bf16.png +3 -0
- flux-dev_dit_bnb_4bit_t5_hqq_4bit.png +3 -0
.gitattributes CHANGED

```diff
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+flux-dev_bf16.png filter=lfs diff=lfs merge=lfs -text
+flux-dev_dit_bnb_4bit_t5_hqq_4bit.png filter=lfs diff=lfs merge=lfs -text
```
README.md ADDED
---
base_model: black-forest-labs/FLUX.1-dev
library_name: diffusers
base_model_relation: quantized
tags:
- quantization
---

# Visual comparison of Flux-dev model outputs using BF16 and BnB & HQQ 4-bit quantization

<table>
  <tr>
    <td style="text-align: center;">
      BF16<br>
      <medium-zoom background="rgba(0,0,0,.7)"><img src="./flux-dev_bf16.png" alt="Flux-dev output with BF16: Baroque, Futurist, Noir styles"></medium-zoom>
    </td>
    <td style="text-align: center;">
      BnB 4-bit (DiT) &amp; HQQ 4-bit (T5)<br>
      <medium-zoom background="rgba(0,0,0,.7)"><img src="./flux-dev_dit_bnb_4bit_t5_hqq_4bit.png" alt="BnB 4-bit (DiT) & HQQ 4-bit (T5) output"></medium-zoom>
    </td>
  </tr>
</table>
# Usage with Diffusers

To use this quantized FLUX.1 [dev] checkpoint, you need to install the 🧨 diffusers, transformers, bitsandbytes, and hqq libraries:

```
pip install git+https://github.com/huggingface/diffusers.git@599c887 # adds support for `PipelineQuantizationConfig`
pip install git+https://github.com/huggingface/transformers.git@3dbbf01 # adds support for hqq-quantized models in diffusers pipelines
pip install -U bitsandbytes
pip install -U hqq
```
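Optionally, you can confirm the pinned builds were picked up before running anything heavy. A minimal sketch using the standard library's `importlib.metadata` (the exact version strings will vary with the pinned commits):

```python
from importlib.metadata import version

# Print the installed version of each required package.
for pkg in ("diffusers", "transformers", "bitsandbytes", "hqq"):
    print(pkg, version(pkg))
```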
After installing the required libraries, you can run the following script:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "HighCwu/FLUX.1-dev-bnb-hqq-4bit",
    torch_dtype=torch.bfloat16
)

prompt = "Baroque style, a lavish palace interior with ornate gilded ceilings, intricate tapestries, and dramatic lighting over a grand staircase."

pipe_kwargs = {
    "prompt": prompt,
    "height": 1024,
    "width": 1024,
    "guidance_scale": 3.5,
    "num_inference_steps": 50,
    "max_sequence_length": 512,
}

image = pipe(
    **pipe_kwargs, generator=torch.manual_seed(0),
).images[0]

image.save("flux.png")
```
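Even with 4-bit weights, the full pipeline can be heavy on VRAM since compute still runs in bf16. If memory is tight, diffusers' built-in model-level CPU offload (which requires `accelerate`) is one option; a minimal sketch reusing `pipe` and `pipe_kwargs` from the script above, not something this checkpoint requires:

```python
# Keep only the sub-model currently running on the GPU;
# idle components (text encoders, VAE, DiT) wait in CPU RAM.
pipe.enable_model_cpu_offload()

# Then call the pipeline as usual.
image = pipe(**pipe_kwargs, generator=torch.manual_seed(0)).images[0]
```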
# How to generate this quantized checkpoint?

This checkpoint was created from the `black-forest-labs/FLUX.1-dev` checkpoint with the following script:

```python
import torch

assert torch.cuda.is_available()  # force initialization of CUDA

from diffusers import FluxPipeline
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig
from diffusers.quantizers import PipelineQuantizationConfig
from transformers import HqqConfig as TransformersHqqConfig

# Quantize the DiT (transformer) with bitsandbytes NF4 and the T5 text encoder
# (text_encoder_2) with HQQ 4-bit; the remaining components stay in bf16.
pipeline_quant_config = PipelineQuantizationConfig(
    quant_mapping={
        "transformer": DiffusersBitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16),
        "text_encoder_2": TransformersHqqConfig(nbits=4, group_size=64),
    }
)

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16
)

pipe.save_pretrained("FLUX.1-dev-bnb-hqq-4bit")
```
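To sanity-check the saved checkpoint, you can reload it and verify that each sub-model was quantized by the intended backend. This is a sketch, not part of the original workflow; the `Linear4bit` and `HQQLinear` import paths are the current bitsandbytes and hqq module locations and may move between releases:

```python
import torch
from diffusers import FluxPipeline
from bitsandbytes.nn import Linear4bit    # bitsandbytes 4-bit linear layer
from hqq.core.quantize import HQQLinear   # hqq quantized linear layer

pipe = FluxPipeline.from_pretrained(
    "FLUX.1-dev-bnb-hqq-4bit", torch_dtype=torch.bfloat16
)

# The DiT should contain bitsandbytes NF4 layers ...
print(any(isinstance(m, Linear4bit) for m in pipe.transformer.modules()))
# ... and the T5 encoder should contain HQQ layers.
print(any(isinstance(m, HQQLinear) for m in pipe.text_encoder_2.modules()))
```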
flux-dev_bf16.png ADDED (stored with Git LFS)

flux-dev_dit_bnb_4bit_t5_hqq_4bit.png ADDED (stored with Git LFS)