RangiLyu committed on
Commit 2d45c3a · verified · 1 parent: 46a8856

Update README.md

Files changed (1): README.md (+6 -2)

README.md CHANGED
@@ -187,13 +187,13 @@ The minimum hardware requirements for deploying Intern-S1 series models are:
 
 You can utilize one of the following LLM inference frameworks to create an OpenAI compatible server:
 
-#### [lmdeploy(>=0.9.2)](https://github.com/InternLM/lmdeploy)
+#### [lmdeploy (>=0.9.2)](https://github.com/InternLM/lmdeploy)
 
 ```bash
 lmdeploy serve api_server internlm/Intern-S1-mini --reasoning-parser intern-s1 --tool-call-parser intern-s1
 ```
 
-#### [vllm](https://github.com/vllm-project/vllm)
+#### [vllm (>=0.10.1)](https://github.com/vllm-project/vllm)
 
 ```bash
 vllm serve internlm/Intern-S1-mini --trust-remote-code
@@ -443,6 +443,10 @@ extra_body={
 }
 ```
 
+## Fine-tuning
+
+See this [documentation](https://github.com/InternLM/Intern-S1/blob/main/docs/sft.md) for more details.
+
 ## Citation
 
 If you find this work useful, feel free to give us a cite.
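Both serve commands above expose an OpenAI-compatible `/v1/chat/completions` endpoint. A minimal, stdlib-only sketch of calling it is below; the base URL and port (23333 is lmdeploy's `api_server` default, while vllm defaults to 8000) and the `build_chat_request` helper are illustrative assumptions, not part of this commit.

```python
import json
from urllib import request

def build_chat_request(prompt, model="internlm/Intern-S1-mini"):
    """Build the JSON body for an OpenAI-style /v1/chat/completions call.

    Hypothetical helper for illustration; the payload shape follows the
    OpenAI chat completions schema that both servers accept.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(prompt, base_url="http://localhost:23333"):
    # Port 23333 assumes the lmdeploy default; use 8000 for a default
    # vllm deployment, or whatever your server was started with.
    body = json.dumps(build_chat_request(prompt)).encode()
    req = request.Request(
        base_url + "/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]

# chat("Hello")  # requires one of the servers above to be running
```

The same endpoint also works with the official `openai` Python client by pointing `base_url` at the server and passing any placeholder API key.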