amokrov committed on
Commit 50250f1 · verified · 1 Parent(s): 18026d8

Update README.md

Files changed (1): README.md (+2, −1)
README.md CHANGED
@@ -13,7 +13,8 @@ base_model_relation: quantized
 
 This is [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2024/documentation/openvino-ir-format.html) (Intermediate Representation) format.
 
-The model is optimized for inference on NPU using these [instructions.](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-genai/inference-with-genai-on-npu.html#export-an-llm-model-via-hugging-face-optimum-intel)
+> [!NOTE]
+> The model is optimized for inference on NPU using these [instructions.](https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-genai/inference-with-genai-on-npu.html#export-an-llm-model-via-hugging-face-optimum-intel)
 
 ## Quantization Parameters
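For readers who want to try the NPU path referenced in the new note, below is a minimal sketch of loading an OpenVINO IR model on an NPU with OpenVINO GenAI. It assumes `openvino-genai` is installed and the converted model files have been downloaded locally; the folder name used here is a placeholder, not this repository's actual path, and the export/quantization settings should follow the linked instructions.

```python
# Minimal sketch: run the converted OpenVINO IR model on an NPU with OpenVINO GenAI.
# Assumptions: the "openvino-genai" package is installed and the model files from
# this repository were downloaded to the placeholder folder below.
import openvino_genai as ov_genai

# "Mistral-7B-Instruct-v0.2-ov" is a hypothetical local path to the downloaded IR files.
pipe = ov_genai.LLMPipeline("Mistral-7B-Instruct-v0.2-ov", "NPU")

# Generate a short completion; max_new_tokens caps the response length.
print(pipe.generate("What is OpenVINO?", max_new_tokens=100))
```

Switching the device string to "CPU" or "GPU" runs the same pipeline on other OpenVINO devices, which can be handy for comparing against the NPU configuration the note describes.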