Update README.md
README.md

````diff
@@ -107,7 +107,7 @@ vLLM also supports OpenAI-compatible serving. See the [documentation](https://do
 <summary>Deploy on <strong>Red Hat AI Inference Server</strong></summary>
 
 ```bash
-
+podman run --rm -it --device nvidia.com/gpu=all -p 8000:8000 \
 --ipc=host \
 --env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
 --env "HF_HUB_OFFLINE=0" -v ~/.cache/vllm:/home/vllm/.cache \
```
````
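For context, the added lines assemble into a single `podman run` invocation; the trailing backslashes show that the README's command continues past the lines in this hunk (the container image and model are not visible here). A sketch of what a complete invocation could look like, where `<rhaiis-image>` and `<model-id>` are placeholders, not values taken from the diff:

```shell
# Sketch only: <rhaiis-image> and <model-id> are illustrative placeholders,
# not part of the diff above. Fill in the image and model you intend to serve.
podman run --rm -it --device nvidia.com/gpu=all -p 8000:8000 \
  --ipc=host \
  --env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
  --env "HF_HUB_OFFLINE=0" \
  -v ~/.cache/vllm:/home/vllm/.cache \
  <rhaiis-image> \
  --model <model-id>
```

`--device nvidia.com/gpu=all` exposes all GPUs via CDI, `--ipc=host` shares host IPC (needed for PyTorch shared-memory tensors), and the `-v` mount caches downloaded weights across runs.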