hzhwcmhf committed on
Commit 6cbffae · verified · 1 Parent(s): ddee1c5

Update README.md

Files changed (1): README.md (+3 −3)
README.md CHANGED
@@ -249,9 +249,9 @@ After updating the config, proceed with either **vLLM** or **SGLang** for serving
 To run Qwen with 1M context support:
 
 ```bash
-git clone https://github.com/vllm-project/vllm.git
-cd vllm
-pip install -e .
+pip install -U vllm \
+    --torch-backend=auto \
+    --extra-index-url https://wheels.vllm.ai/nightly
 ```
 
 Then launch the server with Dual Chunk Flash Attention enabled:
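The launch command itself falls outside this diff hunk. As a hedged sketch only (the model name, context length, and flag values below are assumptions based on typical Qwen2.5-1M serving recipes, not taken from this README), it might look like:

```shell
# Sketch only — values are illustrative; defer to the README section below this hunk.
# VLLM_ATTENTION_BACKEND=DUAL_CHUNK_FLASH_ATTN selects the Dual Chunk Flash Attention
# backend in vLLM, which is what enables the 1M-token context window.
VLLM_ATTENTION_BACKEND=DUAL_CHUNK_FLASH_ATTN \
vllm serve Qwen/Qwen2.5-14B-Instruct-1M \
    --max-model-len 1010000 \
    --enable-chunked-prefill \
    --max-num-batched-tokens 131072 \
    --tensor-parallel-size 4
```

Chunked prefill with a capped `--max-num-batched-tokens` keeps peak memory bounded while prefilling very long prompts; the tensor-parallel degree depends on how many GPUs are available.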