---
inference: false
license: other
license_name: microsoft-research-license
license_link: https://huggingface.co/WizardLM/WizardMath-7B-V1.1/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
---
# WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct (RLEIF)
🤗 HF Repo • 🐱 GitHub Repo • 🐦 Twitter
📃 [WizardLM] • 📃 [WizardCoder] • 📃 [WizardMath]
👋 Join our Discord
## News
[12/19/2023] 🔥 We released WizardMath-7B-V1.1, the SOTA 7B math LLM, which achieves 83.2 pass@1 on GSM8k and 33.0 pass@1 on MATH.
[12/19/2023] 🔥 WizardMath-7B-V1.1 outperforms ChatGPT 3.5, Gemini Pro, Mixtral MoE, and Claude Instant on GSM8k pass@1.
[12/19/2023] 🔥 On MATH pass@1, WizardMath-7B-V1.1 is comparable with ChatGPT 3.5 and Gemini Pro, and surpasses Mixtral MoE.
Model | Checkpoint | Paper | GSM8k | MATH | Online Demo | License |
---|---|---|---|---|---|---|
WizardMath-7B-V1.1 | 🤗 HF Link | 📃 [WizardMath] | 83.2 | 33.0 | Demo | |
WizardMath-70B-V1.0 | 🤗 HF Link | 📃 [WizardMath] | 81.6 | 22.7 | Demo | Llama 2 |
WizardMath-13B-V1.0 | 🤗 HF Link | 📃 [WizardMath] | 63.9 | 14.0 | Demo | Llama 2 |
WizardMath-7B-V1.0 | 🤗 HF Link | 📃 [WizardMath] | 54.9 | 10.7 | Demo | Llama 2 |
[12/19/2023] Comparison of WizardMath-7B-V1.1 with other open-source 7B-scale math LLMs.
Model | GSM8k Pass@1 | MATH Pass@1 |
---|---|---|
MPT-7B | 6.8 | 3.0 |
Llama 1-7B | 11.0 | 2.9 |
Llama 2-7B | 12.96 | 2.78 |
Yi-6B | 32.6 | 5.78 |
Mistral-7B | 37.83 | 9.06 |
Qwen-7B | 47.84 | 9.34 |
RFT-7B | 50.3 | -- |
MAmmoTH-7B (COT) | 50.5 | 10.4 |
WizardMath-7B-V1.0 | 54.9 | 10.7 |
Abel-7B-001 | 59.7 | 13.0 |
MetaMath-7B | 66.5 | 19.8 |
Arithmo-Mistral-7B | 74.7 | 25.3 |
MetaMath-Mistral-7B | 77.7 | 28.2 |
Abel-7B-002 | 80.4 | 29.5 |
WizardMath-7B-V1.1 | 83.2 | 33.0 |
[12/19/2023] Comparison of WizardMath-7B-V1.1 with larger open-source (30B–70B) math LLMs.
Model | GSM8k Pass@1 | MATH Pass@1 |
---|---|---|
Llemma-34B | 51.5 | 25.0 |
Minerva-62B | 52.4 | 27.6 |
Llama 2-70B | 56.8 | 13.5 |
DeepSeek 67B | 63.4 | -- |
Grok 33B | 62.9 | 23.9 |
MAmmoTH-70B | 72.4 | 21.1 |
Yi-34B | 67.9 | 15.9 |
Mixtral 8x7B | 74.4 | 28.4 |
MetaMath-70B | 82.3 | 26.6 |
WizardMath-7B-V1.1 | 83.2 | 33.0 |
🔥 ❗Note on model system prompt usage:
Please strictly use the same system prompts as ours; we do not guarantee the accuracy of quantized versions.
Default version:
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"
CoT version: (❗For simple math questions, we do NOT recommend using the CoT prompt.)
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response: Let's think step by step."
## Inference WizardMath Demo Script
We provide the WizardMath inference demo code here.
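A rough sketch of what such a demo can look like, assuming the model is loaded with Hugging Face transformers; the generation settings are illustrative, not official recommendations:

```python
# Sketch of an inference demo using Hugging Face transformers.
# Generation settings below are illustrative assumptions, not official defaults.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WizardLM/WizardMath-7B-V1.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

instruction = "James buys 5 packs of beef that are 4 pounds each. The price of beef is $5.50 per pound. How much did he pay?"

# Default system prompt from this card (use the CoT variant only for harder problems).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```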
## Citation
Please cite this repo if you use the data, method, or code from it.
@article{luo2023wizardmath,
title={WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct},
author={Luo, Haipeng and Sun, Qingfeng and Xu, Can and Zhao, Pu and Lou, Jianguang and Tao, Chongyang and Geng, Xiubo and Lin, Qingwei and Chen, Shifeng and Zhang, Dongmei},
journal={arXiv preprint arXiv:2308.09583},
year={2023}
}