---
license: mit
---
|
**SWE-Dev-7B is trained from [Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct/)** |
|
|
|
🚀 SWE-Dev is an open-source Software Engineering Agent (SWE agent)!
|
|
|
📚 We built a high-quality dataset and significantly improved the model’s performance on SWE tasks through rejection sampling. Through extensive experiments, we also explored the feasibility of various offline algorithms for SWE tasks.
|
|
|
🔧 Using only open-source frameworks and models, SWE-Dev-7B and SWE-Dev-32B achieve solve rates of 23.4% and 36.6% on SWE-bench Verified, respectively, approaching the performance of closed-source models such as GPT-4o.
|
|
|
🛠 No complex prompt engineering or expensive multi-round evaluation is needed: performance gains come from simple inference-time scaling. We found that increasing the number of interaction rounds significantly boosts performance. For instance, DeepSeek-V3’s solve rate improved from 37.4% at 30 rounds to 41.2% at 75 rounds. Context-length extension also proved highly effective for models trained on short contexts.
|
|
|
💡 We further explored the scaling relationships between data size, interaction rounds, and model performance, demonstrating that a smaller, high-quality dataset is sufficient to support top-tier performance.
|
|
|
Notion Link: https://ubecwang.notion.site/1bc32cf963e080b2a01df2895f66021f?v=1bc32cf963e0810ca07e000c86c4c1e1 |
|
GitHub Link: https://github.com/THUDM/SWE-Dev |
|
Hugging Face Link: https://huggingface.co/SWE-Dev |
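
Since the model is trained from Qwen2.5-Coder-7B-Instruct, a standard `transformers` chat-template workflow should apply. The sketch below is a minimal, unofficial example: the repository id `SWE-Dev/SWE-Dev-7B`, the system prompt, and the generation settings are all assumptions, not details confirmed by this card.

```python
# Hypothetical usage sketch for SWE-Dev-7B with Hugging Face transformers.
# The repo id and system prompt are assumptions for illustration only.

def build_messages(issue_text: str) -> list[dict]:
    """Build a chat-template message list for a single SWE task."""
    return [
        {"role": "system",
         "content": "You are a software engineering agent that resolves "
                    "GitHub issues."},
        {"role": "user", "content": issue_text},
    ]

if __name__ == "__main__":
    # Model loading stays under the main guard so the helper above can be
    # reused without importing transformers.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "SWE-Dev/SWE-Dev-7B"  # assumed repository id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )

    messages = build_messages("Fix the off-by-one error in pagination.")
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=512)
    # Decode only the newly generated tokens.
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:],
                           skip_special_tokens=True))
```

For the multi-round agent setting described above, this single-turn call would be wrapped in an interaction loop that feeds repository state and execution feedback back into the message list.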