hzhwcmhf committed
Commit 144afc2 · verified · 1 Parent(s): 328ebc6

Update README.md

Files changed (1):
  1. README.md +3 -3
README.md CHANGED
@@ -245,9 +245,9 @@ After updating the config, proceed with either **vLLM** or **SGLang** for serving
 To run Qwen with 1M context support:
 
 ```bash
-git clone https://github.com/vllm-project/vllm.git
-cd vllm
-pip install -e .
+pip install -U vllm \
+    --torch-backend=auto \
+    --extra-index-url https://wheels.vllm.ai/nightly
 ```
 
 Then launch the server with Dual Chunk Flash Attention enabled:
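For context, the full flow the updated README describes might look like the sketch below. The install command is the one this commit adds; the launch invocation, model name, environment variable, and flags are illustrative assumptions drawn from typical Qwen 1M-context serving setups, not part of this diff:

```shell
# Install a nightly vLLM wheel instead of building from source
# (this is the command the commit adds to the README).
pip install -U vllm \
    --torch-backend=auto \
    --extra-index-url https://wheels.vllm.ai/nightly

# Launch with Dual Chunk Flash Attention enabled. The model name,
# context length, and flags below are assumptions for illustration,
# not taken from this commit.
VLLM_ATTENTION_BACKEND=DUAL_CHUNK_FLASH_ATTN \
vllm serve Qwen/Qwen2.5-7B-Instruct-1M \
    --max-model-len 1010000 \
    --enforce-eager
```

Installing a prebuilt nightly wheel avoids the compile step that `pip install -e .` from a source checkout requires, which is presumably why the README was changed.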