Update README.md
README.md CHANGED

```diff
@@ -13,7 +13,8 @@ base_model_relation: quantized
 
 This is [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2024/documentation/openvino-ir-format.html) (Intermediate Representation) format.
 
-
+> [!INFO]
+> The model is optimized for inference on NPU using these [instructions.](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-genai/inference-with-genai-on-npu.html#export-an-llm-model-via-hugging-face-optimum-intel)
 
 ## Quantization Parameters
 
```
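For context, running such an OpenVINO IR model on an NPU typically looks like the following minimal sketch with OpenVINO GenAI. The local model directory name and the prompt are assumptions, not from the commit; it presumes the IR files have been downloaded and the `openvino-genai` package is installed.

```python
# Minimal sketch, not the repository's official example.
# Assumes the converted IR files live in a local directory
# (the name below is hypothetical) and openvino-genai is installed.
import openvino_genai as ov_genai

# Build an LLM pipeline from the IR directory and target the NPU device,
# as described in the linked OpenVINO GenAI NPU guide.
pipe = ov_genai.LLMPipeline("Mistral-7B-Instruct-v0.2-ov", "NPU")

# Generate a short completion; max_new_tokens bounds the output length.
print(pipe.generate("What is OpenVINO?", max_new_tokens=100))
```

Swapping `"NPU"` for `"CPU"` or `"GPU"` selects a different inference device without changing the rest of the pipeline.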