Update README.md
README.md CHANGED
@@ -64,7 +64,7 @@ The pipeline we used to produce the data and models is fully open-sourced!
We provide [all instructions](https://nvidia.github.io/NeMo-Skills/openmathreasoning1/)
to fully reproduce our results, including data generation.

-
+## How to use the models?

Our models can be used in 3 inference modes: chain-of-thought (CoT), tool-integrated reasoning (TIR) and generative solution selection (GenSelect).

@@ -137,7 +137,7 @@ This model is intended to facilitate research in the area of mathematical reasoning

Huggingface 04/23/2025 <br>

-
+### Model Architecture: <br>

**Architecture Type:** Transformer decoder-only language model <br>

@@ -148,7 +148,7 @@ Huggingface 04/23/2025 <br>

** This model has 1.5B of model parameters. <br>

-
+### Input: <br>

**Input Type(s):** Text <br>

@@ -160,7 +160,7 @@ Huggingface 04/23/2025 <br>

-
+### Output: <br>

**Output Type(s):** Text <br>

@@ -176,7 +176,7 @@ Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems

-
+### Software Integration : <br>

**Runtime Engine(s):** <br>

@@ -198,7 +198,7 @@ Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems

-
+### Model Version(s):

[OpenMath-Nemotron-1.5B](https://huggingface.co/nvidia/OpenMath-Nemotron-1.5B)
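For the CoT mode mentioned in the first hunk above, a minimal inference sketch using the standard Hugging Face `transformers` text-generation pipeline is shown below. The prompt wording, `max_new_tokens` value, and dtype/device settings are illustrative assumptions rather than the model card's official recommendations.

```python
# A minimal chain-of-thought (CoT) inference sketch for OpenMath-Nemotron-1.5B.
# Assumptions (not taken from the model card): the exact prompt wording,
# max_new_tokens, and the bfloat16 / device_map settings.
import torch
import transformers

model_id = "nvidia/OpenMath-Nemotron-1.5B"

# Build a text-generation pipeline; device_map="auto" requires `accelerate`.
pipe = transformers.pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Chat-style input; the pipeline applies the model's chat template.
messages = [
    {
        "role": "user",
        "content": (
            "Solve the following math problem. "
            "Put the final answer inside \\boxed{}.\n\n"
            "What is the minimum value of $a^2 + 6a - 7$?"
        ),
    }
]

outputs = pipe(messages, max_new_tokens=4096)
# The last message in generated_text is the model's reply.
print(outputs[0]["generated_text"][-1]["content"])
```

The TIR and GenSelect modes layer code execution and candidate-solution selection on top of plain generation; the NeMo-Skills instructions linked in the diff are the reference for those workflows.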