mdmachine committed · verified · Commit 4d59b08 · 1 Parent(s): b378b22

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -15,8 +15,8 @@ This repository contains a merge of two FLUX models, built upon the base model [
 
 1. **GGUF Quantized Models (Q8_0)**:
  - [flux.1-lite-8B-alpha-Hyper-16.Steps-Detail.Plus.Steps.Q8_0_quantized](https://huggingface.co/mdmachine/)
- - flux.1-lite-8B-alpha-Hyper-8.Steps-Detail.Plus.Steps.Q8_0_quantized(https://huggingface.co/mdmachine/)
- - [flux.1-lite-8B-alpha-Turbo-8.Steps-Detail.Plus.Steps.Q8_0_quantized(https://huggingface.co/mdmachine/)
+ - flux.1-lite-8B-alpha-Hyper-8.Steps-Detail.Plus.Steps.Q8_0_quantized](https://huggingface.co/mdmachine/)
+ - [flux.1-lite-8B-alpha-Turbo-8.Steps-Detail.Plus.Steps.Q8_0_quantized](https://huggingface.co/mdmachine/)
 
 2. **SAFETensors Format (fp8_34m3fn_fast)**:
  - [flux.1-lite-8B-alpha-Hyper-16.Steps-Detail.Plus-fp8_e4m3fn_fast](https://huggingface.co/mdmachine/flux.1-lite-8B-alpha-Hyper-16.Steps-Detail.Plus-fp8_e4m3fn_fast/tree/main)