katuni4ka committed
Commit 57e0633 · verified · 1 Parent(s): 2129186

Update README.md

Files changed (1)
  1. README.md +32 -1
README.md CHANGED
@@ -28,7 +28,7 @@ The provided OpenVINO™ IR model is compatible with:
  * OpenVINO version 2024.1.0 and higher
  * Optimum Intel 1.16.0 and higher

- ## Running Model Inference
+ ## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index)

  1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:

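The collapsed README lines between this hunk and the next contain the unchanged Optimum Intel install command and inference snippet (the next hunk header ends at `print(text)`). A minimal sketch of that flow, assuming the standard Optimum Intel API and illustrative generation settings (the exact committed snippet may differ):

```python
# Typical install for the OpenVINO backend: pip install "optimum[openvino]"
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "OpenVINO/TinyLlama-1.1B-Chat-v1.0-int4-ov"

# Load the tokenizer and the precompiled OpenVINO IR model from the Hub.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)

# Generate a short completion and decode it back to text.
inputs = tokenizer("What is OpenVINO?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(text)
```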
@@ -55,6 +55,37 @@ print(text)

  For more examples and possible optimizations, refer to the [OpenVINO Large Language Model Inference Guide](https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide.html).

+ ## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai)
+
+ 1. Install packages required for using OpenVINO GenAI:
+ ```sh
+ pip install openvino-genai huggingface_hub
+ ```
+
+ 2. Download the model from the Hugging Face Hub:
+
+ ```python
+ import huggingface_hub as hf_hub
+
+ model_id = "OpenVINO/TinyLlama-1.1B-Chat-v1.0-int4-ov"
+ model_path = "TinyLlama-1.1B-Chat-v1.0-int4-ov"
+
+ hf_hub.snapshot_download(model_id, local_dir=model_path)
+ ```
+
+ 3. Run model inference:
+
+ ```python
+ import openvino_genai as ov_genai
+
+ device = "CPU"
+ pipe = ov_genai.LLMPipeline(model_path, device)
+ print(pipe.generate("What is OpenVINO?"))
+ ```
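As a brief follow-up to step 3 (a sketch, not part of this commit, assuming current `openvino_genai` bindings; parameter values are illustrative): generation length and sampling can be tuned either through keyword arguments or an explicit `GenerationConfig`.

```python
import openvino_genai as ov_genai

# Reuses the locally downloaded model directory from step 2.
pipe = ov_genai.LLMPipeline("TinyLlama-1.1B-Chat-v1.0-int4-ov", "CPU")

# Keyword arguments map onto GenerationConfig fields.
print(pipe.generate("What is OpenVINO?", max_new_tokens=200))

# Equivalent form with an explicit configuration object.
config = ov_genai.GenerationConfig()
config.max_new_tokens = 200
print(pipe.generate("What is OpenVINO?", config))
```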
+
+ More OpenVINO GenAI usage examples can be found in the library [docs](https://github.com/openvinotoolkit/openvino.genai/blob/master/src/README.md) and [samples](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#openvino-genai-samples).
+
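As an aside to step 2 above (not part of this commit): the same snapshot can be fetched from the shell via the huggingface_hub CLI, assuming a huggingface_hub release that provides the `download` subcommand.

```sh
# Downloads the repo into the same local directory used by the Python snippets above.
huggingface-cli download OpenVINO/TinyLlama-1.1B-Chat-v1.0-int4-ov --local-dir TinyLlama-1.1B-Chat-v1.0-int4-ov
```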
  ## Legal information

  The original model is distributed under the [apache-2.0](https://choosealicense.com/licenses/apache-2.0/) license. More details can be found in [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0).