---
license: apache-2.0
base_model: lmzheng/grok-1
---
|
|
|
|
|
|
|
|
# Grok-1-W4A8KV8 |
|
|
## Introduction |
|
|
This model was created by applying [Quark](https://quark.docs.amd.com/latest/index.html) with calibration samples from the Pile dataset.
|
|
## Quantization Strategy
|
|
- ***Quantized Layers***: All linear layers, excluding "lm_head" and "*.gate"
|
|
- ***Weight***: FP8 symmetric per-tensor; additionally, INT4 symmetric per-channel for MoE linear layers (see the sketch after this list)
|
|
- ***Activation***: FP8 symmetric per-tensor |
|
|
- ***KV Cache***: FP8 symmetric per-tensor |
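For intuition, here is a minimal sketch of the two weight schemes above in PyTorch (assuming a build that provides the `float8_e4m3fn` dtype; the max-based scale recipe and function names are illustrative, not Quark's exact calibration logic):

```python
import torch

def quantize_fp8_per_tensor(x: torch.Tensor):
    """Symmetric per-tensor FP8 (e4m3): a single scale for the whole tensor."""
    fp8_max = torch.finfo(torch.float8_e4m3fn).max  # 448.0 for e4m3fn
    scale = x.abs().max().clamp(min=1e-12) / fp8_max
    x_q = (x / scale).clamp(-fp8_max, fp8_max).to(torch.float8_e4m3fn)
    return x_q, scale  # dequantize as x_q.to(x.dtype) * scale

def quantize_int4_per_channel(w: torch.Tensor):
    """Symmetric per-channel INT4: one scale per output channel (row of w)."""
    scale = w.abs().amax(dim=1, keepdim=True).clamp(min=1e-12) / 7  # int4 range [-8, 7]
    w_q = torch.round(w / scale).clamp(-8, 7).to(torch.int8)  # int4 stored in int8
    return w_q, scale  # dequantize as w_q.to(w.dtype) * scale
```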
|
|
### INT4 Packing |
|
|
Every eight `int4` values are packed into a single `int32` integer following the sequence defined by `order_map = [0, 2, 4, 6, 1, 3, 5, 7]`, as sketched below.
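The following minimal Python sketch illustrates one plausible reading of this layout, namely that nibble `i` of the `int32` holds the `int4` element at index `order_map[i]`; this interpretation and the function names are assumptions for illustration, not Quark's exact packing code:

```python
order_map = [0, 2, 4, 6, 1, 3, 5, 7]

def pack_int4x8(values):
    """Pack eight signed int4 values (each in [-8, 7]) into one signed int32."""
    assert len(values) == 8 and all(-8 <= v <= 7 for v in values)
    packed = 0
    for nibble, src in enumerate(order_map):
        packed |= (values[src] & 0xF) << (4 * nibble)  # two's-complement nibble
    return packed - (1 << 32) if packed >= (1 << 31) else packed  # reinterpret as int32

def unpack_int4x8(packed):
    """Invert pack_int4x8, recovering the eight signed int4 values in original order."""
    out = [0] * 8
    for nibble, src in enumerate(order_map):
        v = (packed >> (4 * nibble)) & 0xF
        out[src] = v - 16 if v >= 8 else v  # sign-extend the 4-bit value
    return out
```

Under this reading, even-indexed elements occupy the low 16 bits and odd-indexed elements the high 16 bits, an interleaving commonly chosen to simplify vectorized unpacking kernels.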
|
|
## Quick Start |
|
|
1. [Download and install Quark](https://quark.docs.amd.com/latest/install.html) |
|
|
2. Run the quantization script in the example folder using the following command line: |
|
|
```sh
# Set MODEL_DIR to a local model checkpoint folder or the Hugging Face Hub id.
export MODEL_DIR=lmzheng/grok-1
python3 quantize_quark.py \
    --model_dir $MODEL_DIR \
    --output_dir grok-1-W4A8KV8 \
    --quant_scheme TBD \
    --kv_cache_dtype fp8 \
    --num_calib_data 128 \
    --model_export hf_format \
    --multi_gpu \
    --custom_mode fp8
```
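Here `--num_calib_data 128` selects 128 calibration samples from the Pile dataset, `--kv_cache_dtype fp8` applies the FP8 KV-cache scheme described above, and `--model_export hf_format` writes the quantized checkpoint in Hugging Face format.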
|
|
## Deployment |
|
|
Quark has its own export format, which allows FP8-quantized models to be deployed efficiently using the SGLang backend, as sketched below.
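For example, a server might be launched along these lines (a hypothetical invocation; the `--quantization fp8` and `--tp` flags and the tensor-parallel degree are assumptions to verify against your SGLang version):

```sh
# Hypothetical launch command; verify flag names against your SGLang release.
python3 -m sglang.launch_server \
    --model-path grok-1-W4A8KV8 \
    --quantization fp8 \
    --tp 8
```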
|
|
## Evaluation |
|
|
|
|
|
#### Evaluation scores |
|
|
| Benchmark | grok-1 | grok-1-W4A8KV8 (this model) |
|-----------|--------|-----------------------------|
| gsm8k     | 0.821  | 0.817                       |
|
|
|
|
|
#### License |
|
|
Modifications copyright (c) 2024 Advanced Micro Devices, Inc. All rights reserved.
|
|
|