Upload README.md with huggingface_hub
README.md
CHANGED
@@ -13,13 +13,16 @@ tags:
 - Flux.1-dev
 - image-generation
 - Stable Diffusion
-
-
+- quantization
+- fp8
+inference:
+  parameters:
+    torch_dtype: torch.float8_e4m3fn
 ---
 
-# FLUX.1-dev-ControlNet-Union-Pro-2.0 (
+# FLUX.1-dev-ControlNet-Union-Pro-2.0 (FP8 Quantized)
 
-This repository contains an
+This repository contains an FP8 quantized version of the [Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0](https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0) model. **This is NOT a fine-tuned model** but a direct quantization of the original BFloat16 model to FP8 format for optimized inference performance. We provide an [online demo](https://huggingface.co/spaces/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0).
 
 # FP8 Quantization
 This model has been quantized from the original BFloat16 format to FP8 format. The benefits include:
@@ -27,7 +30,7 @@ This model has been quantized from the original BFloat16 format to FP8 format. The benefits include:
 - **Faster Inference**: Potential speed improvements, especially on hardware with FP8 support
 - **Minimal Quality Loss**: Carefully calibrated quantization process to preserve output quality
 
-Note
+**Important Note**: This is a direct quantization of [Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0](https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0) and preserves all the functionality of the original model. No fine-tuning or additional training has been performed.
 
 # Keynotes
 In comparison with [Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro](https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro),
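For reference, a minimal loading sketch that follows from the metadata added above (`torch_dtype: torch.float8_e4m3fn`). It assumes the standard diffusers Flux ControlNet API (`FluxControlNetModel` / `FluxControlNetPipeline`); the quantized repository id below is a placeholder, since the diff does not name it.

```python
# Minimal sketch: load the FP8-stored ControlNet and pair it with FLUX.1-dev.
# "<namespace>/FLUX.1-dev-ControlNet-Union-Pro-2.0-fp8" is a placeholder, not
# a confirmed repository id.
import torch
from diffusers import FluxControlNetModel, FluxControlNetPipeline

base_model = "black-forest-labs/FLUX.1-dev"
controlnet_repo = "<namespace>/FLUX.1-dev-ControlNet-Union-Pro-2.0-fp8"  # placeholder

# The checkpoint stores weights as torch.float8_e4m3fn; passing
# torch_dtype=torch.bfloat16 upcasts them for computation on GPUs without
# native FP8 kernels.
controlnet = FluxControlNetModel.from_pretrained(
    controlnet_repo, torch_dtype=torch.bfloat16
)
pipe = FluxControlNetPipeline.from_pretrained(
    base_model, controlnet=controlnet, torch_dtype=torch.bfloat16
)
pipe.to("cuda")
```

If keeping the weights in FP8 at rest is the goal, recent diffusers releases also offer layerwise casting (FP8 storage with higher-precision compute), which may be a better fit on hardware without native FP8 support.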
|