Update README.md

README.md (CHANGED)

@@ -54,6 +54,27 @@ Discover more about DeepSWE-Preview's development and capabilities in our [techn
</p>
</div>

## Usage
See our reproduction script for DeepSWE's [test-time scaling](https://github.com/agentica-project/R2E-Gym/blob/master/reproduction/DEEPSWE_TTS_REPRODUCTION.MD).
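For a local starting point, the guide can be fetched by cloning the R2E-Gym repository linked above. The sketch below only clones the repo and opens the reproduction document at the path taken from that link; it does not assume any commands from the guide itself.

```
# Clone R2E-Gym and open the test-time scaling reproduction guide
git clone https://github.com/agentica-project/R2E-Gym.git
cd R2E-Gym
less reproduction/DEEPSWE_TTS_REPRODUCTION.MD
```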

## Serving DeepSWE-Verifier

We suggest using vLLM to serve the verifier:

```
# Stop previous server and start verifier model
export MAX_CONTEXT_LEN=76800
vllm serve Qwen/Qwen3-14B \
    --max-model-len $MAX_CONTEXT_LEN \
    --hf-overrides '{"max_position_embeddings": '$MAX_CONTEXT_LEN'}' \
    --enable-lora \
    --lora-modules verifier=agentica-org/DeepSWE-Preview \
    --port 8000 \
    --dtype bfloat16 \
    --max-lora-rank 64 \
    --tensor-parallel-size 8
```
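Once the server is up, the verifier is exposed through vLLM's OpenAI-compatible API under the LoRA name `verifier`. The request below is a minimal sketch of how to hit that endpoint; the message content is a placeholder, not the verifier's actual input format.

```
# Minimal example request against the server started above (port 8000).
# "verifier" matches the --lora-modules alias; the prompt shown is a placeholder.
curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "verifier",
        "messages": [{"role": "user", "content": "<trajectory and candidate patch to judge>"}],
        "temperature": 0
    }'
```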
## Training