EtashGuha committed
Commit af40378 · verified · 1 parent: f2ac531

Update README.md

Files changed (1)
  1. README.md +5 -4
README.md CHANGED
@@ -29,9 +29,9 @@ State-of-the-art open-data 7B reasoning model. 🚀
 
 This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the
 [OpenThoughts3-1.2M](https://huggingface.co/datasets/open-thoughts/OpenThoughts3-1.2M) dataset.
- It represents a notable improvement over our previous models, [OpenThinker-7B](https://huggingface.co/open-thoughts/OpenThinker-7B) and [OpenThinker2-7B](https://huggingface.co/open-thoughts/OpenThinker2-7B), and it outperforms several other strong reasoning 7B models such as [DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) and [Llama-3.1-Nemotron-Nano-8B-v1](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1), despite being trained only with SFT, without any RL.
+ It represents a notable improvement over our previous models, [OpenThinker-7B](https://huggingface.co/open-thoughts/OpenThinker-7B) and [OpenThinker2-7B](https://huggingface.co/open-thoughts/OpenThinker2-7B), and it outperforms several other strong reasoning 7B models such as [DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) and [Llama-3.1-Nemotron-Nano-8B-v1](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1), despite being trained only with SFT, without any RL.
 
- This time, we also release a paper! See our [paper](https://arxiv.org/abs/2506.04178) and [blog post](https://openthoughts.ai/blog/ot3) for more details. OpenThinker3-32B to follow! 👀
+ This time, we also released a paper! See our [paper](https://arxiv.org/abs/2506.04178) and [blog post](https://openthoughts.ai/blog/ot3) for more details. OpenThinker3-32B to follow! 👀
 
 # Evaluation Results
 The numbers reported in the table below are evaluated with our open-source tool [Evalchemy](https://github.com/mlfoundations/Evalchemy).
@@ -52,8 +52,8 @@ In the table below, we bold values in each column that are within 2 standard err
 
 This model was trained on the [OpenThoughts3-1.2M](https://huggingface.co/datasets/open-thoughts/OpenThoughts3-1.2M) dataset.
 
- The key behind the strong model performance is our comprehensive data pipeline and 1000+ ablation experiments.
- This led to the creation of [OpenThoughts3-1.2M](https://huggingface.co/datasets/open-thoughts/OpenThoughts3-1.2M), which consists of 850,000 math examples, 250,000 code examples, and 100,000 science examples.
+ The key to the strong model performance is our comprehensive data pipeline and over 1,000 ablation experiments.
+ This led to the creation of [OpenThoughts3-1.2M](https://huggingface.co/datasets/open-thoughts/OpenThoughts3-1.2M), which consists of 850,000 math questions, 250,000 code questions, and 100,000 science questions.
 Reasoning traces are generated with QwQ-32B.
 
 See the [OpenThoughts3-1.2M](https://huggingface.co/datasets/open-thoughts/OpenThoughts3-1.2M) dataset page or our [paper](https://arxiv.org/abs/2506.04178) for additional information.
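A quick way to check the dataset mix this hunk describes is to load it directly with the `datasets` library. A minimal sketch; the `train` split name is an assumption about the repo layout, not something stated in the diff:

```python
# Minimal sketch: pull the training dataset named in the README.
# The "train" split name is an assumption, not stated in the diff.
from datasets import load_dataset

ds = load_dataset("open-thoughts/OpenThoughts3-1.2M", split="train")
print(ds)     # row count and column names
print(ds[0])  # one example; per the README, traces were generated with QwQ-32B
```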
@@ -81,6 +81,7 @@ The following hyperparameters were used during training:
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_ratio: 0.1
 - num_epochs: 5.0
+ - weight_decay: 0.0
 
 
 ## Framework versions
 
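Finally, a minimal generation sketch for trying the model this README describes. The repo id `open-thoughts/OpenThinker3-7B` is an assumption inferred from the OpenThinker naming in the diff, not stated in this commit:

```python
# Hypothetical usage sketch; the repo id below is an assumption inferred
# from the README (OpenThinker-7B -> OpenThinker2-7B -> this model).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "open-thoughts/OpenThinker3-7B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "How many primes are less than 100?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=2048)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```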