amd-shark / sdxl-quant-fp8
sdxl-quant-fp8 (42.2 GB, 4 contributors, history: 21 commits)
Latest commit: GiusFra, "Create config.json", 25e566b (verified), about 1 year ago
Folders:
all_linear_sym_8_calib8 - "Fix names" (about 1 year ago)
all_sym_8_calib10 - "MI250 QKV fused and all layers sym, FP8 attention, guidance scale 8, calib steps 10" (about 1 year ago)
brevitas - "updated quant_params with QKV fusion" (about 1 year ago)
linear_conv_fp8_sdpa_fp16_eq_bl - "Create config.json" (about 1 year ago)
linear_conv_fp8_sdpa_fp16_no_eq_bl - "Added models that are fully quantized with FP8." (about 1 year ago)
linear_conv_fp8_sdpa_fp8_eq_bl - "Updated sdpa fp8 models" (about 1 year ago)
linear_conv_fp8_sdpa_fp8_no_eq_bl - "Updated sdpa fp8 models" (about 1 year ago)
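The commit messages above describe per-tensor symmetric FP8 quantization ("all layers sym", FP8 attention). As an illustrative sketch only, not the repo's Brevitas implementation, symmetric E4M3 quantization picks one scale that maps a tensor's absolute maximum onto E4M3's largest representable magnitude (448):

```python
import numpy as np

# Illustrative per-tensor symmetric FP8 (E4M3) quantization sketch.
# E4M3 represents magnitudes up to 448, so the scale maps the tensor's
# absolute maximum onto that limit; quantization is just divide-and-clip here
# (real FP8 casting would also round to the nearest representable value).
E4M3_MAX = 448.0

def quantize_sym_fp8(x: np.ndarray):
    """Return the per-tensor scale and the scaled (simulated FP8) values."""
    scale = float(np.abs(x).max()) / E4M3_MAX
    q = np.clip(x / scale, -E4M3_MAX, E4M3_MAX)
    return scale, q

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Map simulated FP8 values back to the original range."""
    return q * scale

x = np.array([-2.0, 0.5, 1.0, 3.5])
scale, q = quantize_sym_fp8(x)
x_hat = dequantize(q, scale)
```

Because the scale is chosen from the tensor's own maximum, round-tripping this example is lossless up to floating-point precision; in a real FP8 cast the rounding step introduces the quantization error that calibration (the "calib steps" in the folder names) tunes for.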
Files:
.gitattributes (2.08 kB) - "Added models that are fully quantized with FP8." (about 1 year ago)
attn.py (6.26 kB) - "Added SDPA math model & test" (about 1 year ago)
sdxl.json (2.19 MB) - "Upload sdxl.json with huggingface_hub" (over 1 year ago)
sdxl.safetensors (5.14 GB) - "Upload sdxl.safetensors with huggingface_hub" (over 1 year ago)
test_attn.py (1.29 kB) - "Added SDPA math model & test" (about 1 year ago)
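attn.py is described as an "SDPA math model". Without assuming its actual contents, a generic NumPy reference for what scaled dot-product attention computes, softmax(QK^T / sqrt(d)) V, looks like this (names and shapes are illustrative):

```python
import numpy as np

def sdpa(q: np.ndarray, k: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Reference scaled dot-product attention: softmax(q k^T / sqrt(d)) v."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                 # (Lq, Lk) attention logits
    scores -= scores.max(axis=-1, keepdims=True)  # stabilize the softmax
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)            # each row sums to 1
    return w @ v                                  # (Lq, dv) weighted values

rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))
k = rng.standard_normal((6, 8))
v = rng.standard_normal((6, 3))
out = sdpa(q, k, v)
```

A plain-math model like this is useful as a ground-truth baseline when validating quantized (e.g. FP8) attention kernels against the unquantized computation.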