parinitarahi committed
Commit 8598fe0 · verified · 1 Parent(s): 6f5c8e9

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -12,7 +12,7 @@ license: mit
 This repository hosts the optimized versions of [DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B/) and [DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B/) to accelerate inference with ONNX Runtime.
 Optimized models are published here in [ONNX](https://onnx.ai) format to run with [ONNX Runtime](https://onnxruntime.ai/) on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets.
 
-To easily get started with the model, you can use our newly introduced ONNX Runtime Generate() API.
+To easily get started with the model, you can use our ONNX Runtime Generate() API. See instructions [here](https://github.com/microsoft/onnxruntime/blob/gh-pages/docs/genai/tutorials/deepseek-python.md).
 
 ```bash
 # Download the model directly using the huggingface cli
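
For reference, the getting-started flow that the linked tutorial walks through can be sketched in a few lines of Python. This is a minimal sketch, not the tutorial's exact code: it assumes a recent onnxruntime-genai release (0.5+, where `Generator.append_tokens` is available), and the local model directory name below is illustrative, not an actual repository path.

```python
import onnxruntime_genai as og

# Load the downloaded ONNX model and its tokenizer.
# The local path is an assumption for illustration.
model = og.Model("./deepseek-r1-distill-qwen-1.5b")
tokenizer = og.Tokenizer(model)
stream = tokenizer.create_stream()

# Configure generation; max_length here is an arbitrary example value.
params = og.GeneratorParams(model)
params.set_search_options(max_length=512)

# Feed the prompt, then generate and stream tokens back as text.
generator = og.Generator(model, params)
generator.append_tokens(tokenizer.encode("What is 1+1?"))
while not generator.is_done():
    generator.generate_next_token()
    print(stream.decode(generator.get_next_tokens()[0]), end="", flush=True)
print()
```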