|
--- |
|
library_name: peft |
|
license: other |
|
base_model: Qwen/Qwen3-14B |
|
datasets: |
|
- r2e-edits/deepswe-swebv-eval-n16-verifier-v1 |
|
tags: |
|
- llama-factory |
|
- lora |
|
- generated_from_trainer |
|
model-index: |
|
- name: verifier |
|
results: [] |
|
--- |
|
|
|
|
|
|
<div align="center"> |
|
<span style="font-family: default; font-size: 1.5em;">DeepSWE-Verifier</span> |
|
<div> |
|
🚀 Democratizing Reinforcement Learning for LLM Agents (RLLM) 🌟 |
|
</div> |
|
</div> |
|
<br> |
|
<div align="center" style="line-height: 1;"> |
|
<a href="https://github.com/agentica-project/rllm" style="margin: 2px;"> |
|
<img alt="Code" src="https://img.shields.io/badge/rLLM-000000?style=for-the-badge&logo=github&logoColor=000&logoColor=white" style="display: inline-block; vertical-align: middle;"/> |
|
</a> |
|
<a href="www.google.com" target="_blank" style="margin: 2px;"> |
|
<img alt="Blog" src="https://img.shields.io/badge/Notion-%23000000.svg?style=for-the-badge&logo=notion&logoColor=white" style="display: inline-block; vertical-align: middle;"/> |
|
</a> |
|
<a href="https://x.com/Agentica_" style="margin: 2px;"> |
|
<img alt="X.ai" src="https://img.shields.io/badge/Agentica-white?style=for-the-badge&logo=X&logoColor=000&color=000&labelColor=white" style="display: inline-block; vertical-align: middle;"/> |
|
</a> |
|
<a href="https://huggingface.co/agentica-org" style="margin: 2px;"> |
|
<img alt="Hugging Face" src="https://img.shields.io/badge/Agentica-fcd022?style=for-the-badge&logo=huggingface&logoColor=000&labelColor" style="display: inline-block; vertical-align: middle;"/> |
|
</a> |
|
</div> |
|
|
|
|
## DeepSWE-Verifier Overview |
|
DeepSWE-Verifier is a "critic model" that aids DeepSWE-Preview, a coding agent, with test-time scaling. For each SWE-Bench problem, DeepSWE-Preview generates multiple solutions, each producing a candidate code patch, and DeepSWE-Verifier chooses the best patch. Pairing DeepSWE-Preview with DeepSWE-Verifier increases the SWE-Bench-Verified score by +10% (see Figure 1, Execution-Free Verifier).
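
Conceptually, this selection step is a best-of-N rerank over candidate patches. The sketch below is illustrative only; `score_patch` is a hypothetical stand-in for a call to the served verifier, and the real prompting and scoring logic lives in the test-time-scaling reproduction script linked under Usage.

```python
# Illustrative best-of-N selection: DeepSWE-Preview proposes N candidate patches,
# DeepSWE-Verifier scores each one, and the highest-scoring patch is submitted.
# `score_patch` is a hypothetical placeholder for a call to the served verifier.
from typing import Callable, List


def select_best_patch(
    problem: str,
    candidate_patches: List[str],
    score_patch: Callable[[str, str], float],
) -> str:
    """Return the candidate patch the verifier scores highest for this problem."""
    return max(candidate_patches, key=lambda patch: score_patch(problem, patch))
```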
|
|
|
DeepSWE-Verifier is a LoRA adapter obtained by supervised fine-tuning (SFT) of [Qwen/Qwen3-14B](https://huggingface.co/Qwen/Qwen3-14B).
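
Because the release is a PEFT/LoRA adapter rather than full model weights, it can be attached to the Qwen3-14B base model for offline use. A minimal sketch, assuming the adapter repo id is `agentica-org/DeepSWE-Verifier` (i.e., this repository):

```python
# Minimal sketch: load the Qwen3-14B base model and attach this LoRA adapter with PEFT.
# The adapter repo id below is an assumption; replace it with this repository's actual id.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-14B", torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-14B")
model = PeftModel.from_pretrained(base, "agentica-org/DeepSWE-Verifier")  # assumed repo id
model.eval()
```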
|
|
|
Discover more about DeepSWE-Preview's development and capabilities in our [technical blog post](www.google.com). |
|
|
|
<div style="margin: 0 auto;"> |
|
<img src="https://cdn-uploads.huggingface.co/production/uploads/654037be97949fd2304aab7f/a7urAV3isk73ZkIbu3d7s.png" style="width: 100%;" /> |
|
<p align="center" style="margin-top: 8px; font-style: italic; color: #666;"> |
|
Figure 1: SWE-Bench Verified performance across different TTS strategies. With hybrid TTS, DeepSWE-Preview achieves 59%, beating the current SOTA open-weights model (SkyWork + TTS, 47%) by 12%. Using only execution-based or execution-free verifiers is still effective and brings a 10+% performance improvement.
|
</p> |
|
</div> |
|
|
|
## Usage |
|
|
|
See our reproduction script for DeepSWE's [test-time scaling](https://github.com/agentica-project/R2E-Gym/blob/master/reproduction/DEEPSWE_TTS_REPRODUCTION.MD). |
|
|
|
## Serving DeepSWE-Verifier |
|
|
|
We suggest using vLLM to serve the verifier:
|
```bash
# Stop any previous server, then serve the Qwen3-14B base model with the
# verifier LoRA adapter attached (point --lora-modules at this adapter repo).
export MAX_CONTEXT_LEN=76800
vllm serve Qwen/Qwen3-14B \
    --max-model-len $MAX_CONTEXT_LEN \
    --hf-overrides '{"max_position_embeddings": '$MAX_CONTEXT_LEN'}' \
    --enable-lora \
    --lora-modules verifier=agentica-org/DeepSWE-Verifier \
    --port 8000 \
    --dtype bfloat16 \
    --max-lora-rank 64 \
    --tensor-parallel-size 8
```
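
Once the server is up, the adapter is exposed through vLLM's OpenAI-compatible API under the model name `verifier` (the name assigned via `--lora-modules`). The snippet below only demonstrates the transport; the prompt and response format the verifier actually expects come from the test-time-scaling reproduction script linked above, so the message content here is a placeholder.

```python
# Query the served LoRA adapter through vLLM's OpenAI-compatible endpoint.
# The model name "verifier" matches the --lora-modules name used above.
# The prompt below is a placeholder; see the TTS reproduction script for the real format.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "verifier",
        "messages": [
            {
                "role": "user",
                "content": "<problem statement, trajectory, and candidate patch go here>",
            }
        ],
        "temperature": 0.0,
    },
    timeout=600,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```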
|
|
|
|
|
## Training |
|
|
|
### Hyperparameters |
|
|
|
The following hyperparameters were used during training: |
|
- learning_rate: 1e-05 |
|
- train_batch_size: 1 |
|
- eval_batch_size: 8 |
|
- seed: 42 |
|
- distributed_type: multi-GPU |
|
- num_devices: 8 |
|
- total_train_batch_size: 8 |
|
- total_eval_batch_size: 64 |
|
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
|
- lr_scheduler_type: cosine |
|
- lr_scheduler_warmup_ratio: 0.05 |
|
- num_epochs: 2.0 |
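
For reference, a rough, illustrative mapping of the settings above onto `transformers.TrainingArguments` is shown below; this is not the exact LLaMA-Factory configuration used for training, and `output_dir` is a placeholder.

```python
# Illustrative mapping of the hyperparameters above onto transformers.TrainingArguments.
# Not the exact LLaMA-Factory config used for training; output_dir is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="verifier",          # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=1,  # 8 GPUs -> total train batch size 8
    per_device_eval_batch_size=8,   # 8 GPUs -> total eval batch size 64
    seed=42,
    optim="adamw_torch",            # betas=(0.9, 0.999), epsilon=1e-08 are the defaults
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    num_train_epochs=2.0,
)
```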
|
|
|
### Framework versions |
|
|
|
- PEFT 0.12.0 |
|
- Transformers 4.51.3 |
|
- Pytorch 2.7.1+cu126 |
|
- Datasets 3.1.0 |
|
- Tokenizers 0.21.2 |