dpastushenkov committed
Commit d85519b · verified · 1 Parent(s): 7751d14

Update README.md

Files changed (1)
  1. README.md +3 -32
README.md CHANGED
@@ -29,35 +29,6 @@ For more information on quantization, check the [OpenVINO model optimization gui
 The provided OpenVINO™ IR model is compatible with:
 
 * OpenVINO version 2025.1.0 and higher
-* Optimum Intel 1.23.0 and higher
-
-## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index)
-
-
-1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:
-
-```
-pip install optimum[openvino]
-```
-
-2. Run model inference:
-
-```
-from transformers import AutoTokenizer
-from optimum.intel.openvino import OVModelForCausalLM
-
-model_id = "OpenVINO/Mistral-7B-Instruct-v0.2-int4-ov-cw"
-tokenizer = AutoTokenizer.from_pretrained(model_id)
-model = OVModelForCausalLM.from_pretrained(model_id)
-
-inputs = tokenizer("What is OpenVINO?", return_tensors="pt")
-
-outputs = model.generate(**inputs, max_length=200)
-text = tokenizer.batch_decode(outputs)[0]
-print(text)
-```
-
-For more examples and possible optimizations, refer to the [OpenVINO Large Language Model Inference Guide](https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide.html).
 
 ## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai)
 
@@ -71,8 +42,8 @@ pip install openvino-genai huggingface_hub
 ```
 import huggingface_hub as hf_hub
 
-model_id = "OpenVINO/Mistral-7B-Instruct-v0.2-int4-ov-cw"
-model_path = "Mistral-7B-Instruct-v0.2-int4-ov-cw"
+model_id = "OpenVINO/Mistral-7B-Instruct-v0.2-int4-cw-ov"
+model_path = "Mistral-7B-Instruct-v0.2-int4-cw-ov"
 
 hf_hub.snapshot_download(model_id, local_dir=model_path)
 
@@ -83,7 +54,7 @@ hf_hub.snapshot_download(model_id, local_dir=model_path)
 ```
 import openvino_genai as ov_genai
 
-device = "CPU"
+device = "NPU"
 pipe = ov_genai.LLMPipeline(model_path, device)
 print(pipe.generate("What is OpenVINO?", max_length=200))
 ```
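
For reference, the two README snippets as they read after this commit compose into the end-to-end sketch below. This is a minimal sketch, assuming `openvino-genai` and `huggingface_hub` are installed (`pip install openvino-genai huggingface_hub`) and that an NPU device is available on the target machine; it is not part of the commit itself.

```python
# Minimal sketch combining the post-commit README snippets.
# Assumes openvino-genai and huggingface_hub are installed and an
# NPU device is present (another device string could be passed instead).
import huggingface_hub as hf_hub
import openvino_genai as ov_genai

model_id = "OpenVINO/Mistral-7B-Instruct-v0.2-int4-cw-ov"
model_path = "Mistral-7B-Instruct-v0.2-int4-cw-ov"

# Download the OpenVINO IR model files into a local directory.
hf_hub.snapshot_download(model_id, local_dir=model_path)

# Build the text-generation pipeline on the device this commit targets.
device = "NPU"
pipe = ov_genai.LLMPipeline(model_path, device)
print(pipe.generate("What is OpenVINO?", max_length=200))
```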