HaoxingChen committed
Commit 2e8d28e · verified · Parent: 7bc9ca6

Update README.md

Files changed (1): README.md (+1 −2)
README.md CHANGED
@@ -131,8 +131,7 @@ curl http://localhost:30000/v1/chat/completions \
 To achieve optimal performance, we recommend the following settings:
 
 1. **Sampling Parameters**:
- - We suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- ⚠️ For benchmarking scenarios requiring sampling (e.g., AIME), these parameters must be explicitly configured.
+ - We suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. (⚠️ For benchmarking scenarios requiring sampling (e.g., AIME), these parameters must be explicitly configured.)
 
 2. **Adequate Output Length**: Set output length to 16,384 tokens for general use cases to accommodate complex reasoning tasks in instruct models.
 
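
For reference, a minimal sketch of what the recommended settings look like in a request, following the `curl http://localhost:30000/v1/chat/completions` call referenced in the hunk header. The model name is a placeholder, and `top_k`/`min_p` are included on the assumption that the serving backend accepts these extra sampling fields in its OpenAI-compatible API; exact field support may vary by server version.

```bash
# Sketch only: applies the README's recommended sampling parameters and output length.
# "your-model-name" is a placeholder; top_k/min_p are assumed to be accepted as
# extra sampling fields by the OpenAI-compatible server.
curl http://localhost:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "your-model-name",
    "messages": [{"role": "user", "content": "Solve: what is 17 * 24?"}],
    "temperature": 0.7,
    "top_p": 0.8,
    "top_k": 20,
    "min_p": 0,
    "max_tokens": 16384
  }'
```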