yinjiewang committed
Commit 9bfd6f6 · verified · 1 parent: a4aad52

Update README.md

Files changed (1)
  1. README.md (+7 -1)
README.md CHANGED

@@ -3,10 +3,17 @@ license: mit
 library_name: transformers
 ---
 
+<p align="center">
+<img src="https://github.com/yinjjiew/Data/raw/main/cure/overviewplot.png" width="100%"/>
+</p>
+
+
 <p align="center">
 <img src="https://github.com/yinjjiew/Data/raw/main/cure/results.png" width="100%"/>
 </p>
 
+[Paper](https://arxiv.org/abs/2505.15809) | [Code](https://github.com/Gen-Verse/CURE)
+
 
 
 # Introduction to our ReasonFlux-Coders
@@ -17,7 +24,6 @@ We introduce **ReasonFlux-Coders**, trained with **CURE**, our algorithm for co-
 * **ReasonFlux-Coder-4B** is our Long-CoT model, outperforming Qwen3-4B while achieving 64.8% efficiency in unit test generation. We have demonstrated its ability to serve as a reward model for training base models via reinforcement learning (see our [paper](https://arxiv.org/abs/2505.15809)).
 
 
-[Paper](https://arxiv.org/abs/2505.15809) | [Code](https://github.com/Gen-Verse/CURE)
 
 
 # Citation
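The README's front matter keeps `library_name: transformers`, so the released ReasonFlux-Coder checkpoints are meant to load through the standard transformers API. Below is a minimal sketch of prompting the 4B model to generate unit tests, the use case the README highlights; the Hub repo id `Gen-Verse/ReasonFlux-Coder-4B` and the chat-template call are assumptions, not taken from this diff.

```python
# Minimal sketch: load a ReasonFlux-Coder checkpoint with transformers and ask it
# to write unit tests. The repo id below is an assumption (not stated in this diff).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Gen-Verse/ReasonFlux-Coder-4B"  # hypothetical Hub id for the 4B Long-CoT model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Unit-test generation is the capability the README emphasizes for the 4B model.
prompt = (
    "Write Python unit tests for the following function:\n\n"
    "def add(a, b):\n"
    "    return a + b\n"
)
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```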