[W4A8 FP8 Quantization] Release of DeepSeek-V3.1 with SGLang Support – Near-Lossless & 1.56x Speed Boost!

#28
by Carson - opened

Hi there! We are TMElyralab, the Acceleration Team from Tencent Music Entertainment (TME).

Model weights: https://huggingface.co/TMElyralab/DeepSeek-V3.1-AWQ-W4AFP8
Related PR: https://github.com/sgl-project/sglang/pull/8573
Related project: https://github.com/TMElyralab/sglang/tree/lyra_w4afp8

How to Use

We integrated high-performance W4AFP8 kernels into SGLang (the PR is still awaiting code review).
Try it by either:

  1. Cloning SGLang and checking out the PR above; or
  2. Cloning our forked project and checking out the `lyra_w4afp8` branch.
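The two setup paths above can be sketched as follows (the PR number and branch name come from the links above; the server-launch command at the end is an assumption based on SGLang's usual CLI, so check the PR description for the exact flags):

```shell
# Option A: upstream SGLang with the pending PR (#8573) checked out locally
git clone https://github.com/sgl-project/sglang.git
cd sglang
git fetch origin pull/8573/head:w4afp8-pr
git checkout w4afp8-pr

# Option B: the TMElyralab fork, directly on the lyra_w4afp8 branch
git clone -b lyra_w4afp8 https://github.com/TMElyralab/sglang.git

# Launching the server (flags are our assumption; see the PR for specifics)
# python3 -m sglang.launch_server \
#     --model-path TMElyralab/DeepSeek-V3.1-AWQ-W4AFP8 --tp 8
```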

Performance

We tested on 8×H20 GPUs (96 GB VRAM each) with input/output length = 1000/1000, qps=64, max_concurrency=64, num_prompt=128.
Compared to the original model:

  • At bs=64, output throughput increased by 56%.
  • At bs=128, output throughput increased by 125%.
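As a quick sanity check on how the headline "1.56x" maps to the "+56%" figure above, the relative gain is just new/old − 1. A minimal sketch (the absolute throughput numbers here are made up purely to reproduce the reported ratios):

```python
def throughput_gain(baseline_tps: float, quantized_tps: float) -> float:
    """Relative output-throughput gain: new/old - 1 (0.56 means +56%, i.e. 1.56x)."""
    return quantized_tps / baseline_tps - 1.0

# Hypothetical absolute throughputs, chosen only to illustrate the reported ratios.
print(f"bs=64:  +{throughput_gain(1000.0, 1560.0):.0%}")   # +56%  -> 1.56x overall
print(f"bs=128: +{throughput_gain(1000.0, 2250.0):.0%}")   # +125%
```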

Performance should look like this:
(benchmark chart omitted)

Discussion & FAQ

We welcome open discussion and any suggestions. If you encounter any problems while using it, feel free to contact us here or open an issue in our project.
