JunHowie committed
Commit a374f25 · verified · 1 Parent(s): 06b7360

Update README.md


[BugFix] Fix compatibility issues with vLLM 0.10.1

Files changed (1)
  1. README.md +4 -1
README.md CHANGED
@@ -18,7 +18,7 @@ Base model [Qwen/Qwen3-Coder-480B-A35B-Instruct](https://huggingface.co/Qwe
 
 
 ### 【VLLM Launch Command for 8 GPUs (Single Node)】
-<i>注: Note: When launching with 8 GPUs, --enable-expert-parallel must be specified; otherwise, the expert tensors cannot be evenly split across tensor parallel ranks. This option is not required for 4-GPU setups.</i>
+<i>Note: When launching with 8 GPUs, --enable-expert-parallel must be specified; otherwise, the expert tensors cannot be evenly split across tensor parallel ranks. This option is not required for 4-GPU setups.</i>
 ```
 CONTEXT_LENGTH=32768 # 262144
 
@@ -46,6 +46,9 @@ vllm>=0.9.2
 
 ### 【Model Update History】
 ```
+2025-08-19
+1.[BugFix] Fix compatibility issues with vLLM 0.10.1
+
 2025-08-11
 1.Upload tokenizer_config.json
 
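For context on the note changed in this diff: `--enable-expert-parallel` is a real vLLM serve flag. A minimal 8-GPU launch might look like the sketch below; the model id (the base model named in the diff header) and the `CONTEXT_LENGTH` value (from the diff context) are assumptions, not the full launch command from the README.

```shell
# Hedged sketch of an 8-GPU single-node launch, not the README's exact command.
CONTEXT_LENGTH=32768   # 262144 appears in the README as an alternative value

vllm serve Qwen/Qwen3-Coder-480B-A35B-Instruct \
  --tensor-parallel-size 8 \
  --enable-expert-parallel \
  --max-model-len "$CONTEXT_LENGTH"
```

Per the note, `--enable-expert-parallel` is what lets the MoE expert weights be distributed when 8-way tensor parallelism would not split them evenly; on a 4-GPU setup the flag can be omitted.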