Update README.md
README.md CHANGED
@@ -13,6 +13,9 @@ base_model:
 - **Preferred Operating System(s):** Linux
 - **Inference Engine:** [vLLM](https://docs.vllm.ai/en/latest/)
 - **Model Optimizer:** [AMD-Quark](https://quark.docs.amd.com/latest/index.html)
+- **Weight quantization:** OCP MXFP4
+- **Activation quantization:** OCP MXFP4
+- **KV cache quantization:** OCP FP8
 - **Calibration Dataset:** [Pile](https://huggingface.co/datasets/mit-han-lab/pile-val-backup)
 
 The model is the quantized version of the Meta-Llama 3.1-405B-Instruct model, which is an auto-regressive language model that uses an optimized transformer architecture. For more information, please check [here](https://huggingface.co/meta-llama/Llama-3.1-405B-Instruct). The MXFP4 model is quantized with [AMD-Quark](https://quark.docs.amd.com/latest/index.html).
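For context, below is a minimal sketch of loading this checkpoint for inference with vLLM's offline `LLM` API, matching the engine and KV-cache settings listed above. The model path, the `tensor_parallel_size` value, and the explicit `kv_cache_dtype="fp8"` argument are illustrative assumptions (vLLM can often pick up the quantization config from the checkpoint itself); substitute values appropriate to your setup.

```python
from vllm import LLM, SamplingParams

# Placeholder path: substitute this model's actual Hugging Face repo id
# or a local directory containing the MXFP4 checkpoint.
llm = LLM(
    model="<repo-id-or-path-of-this-model>",
    kv_cache_dtype="fp8",     # matches the OCP FP8 KV cache quantization above
    tensor_parallel_size=8,   # assumption: a 405B model needs multi-GPU sharding
)

# Quick smoke test of the quantized model.
outputs = llm.generate(
    ["Explain MXFP4 quantization in one sentence."],
    SamplingParams(temperature=0.0, max_tokens=64),
)
print(outputs[0].outputs[0].text)
```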