Update README.md
README.md CHANGED
@@ -27,9 +27,9 @@ pipeline_tag: text-generation
 
 [12/19/2023] 🔥 We released **WizardMath-7B-V1.1**, the **SOTA 7B math LLM**, which achieves **83.2 pass@1** on GSM8k and **33.0 pass@1** on MATH.
 
-[12/19/2023] 🔥 **WizardMath-7B-V1.1** outperforms **ChatGPT 3.5**, **Gemini Pro**, and **Claude Instant** on GSM8K pass@1.
+[12/19/2023] 🔥 **WizardMath-7B-V1.1** outperforms **ChatGPT 3.5**, **Gemini Pro**, **Mixtral MOE**, and **Claude Instant** on GSM8K pass@1.
 
-[12/19/2023] 🔥 **WizardMath-7B-V1.1**
+[12/19/2023] 🔥 **WizardMath-7B-V1.1** is comparable with **ChatGPT 3.5** and **Gemini Pro**, and surpasses **Mixtral MOE** on MATH pass@1.
 
 | Model | Checkpoint | Paper | GSM8k | MATH |Online Demo| License|
 | ----- |------| ---- |------|-------| ----- | ----- |