|
---
license: mit
---

# SciBench
|
**SciBench** is a novel benchmark of college-level scientific problems sourced from instructional textbooks. The benchmark is designed to evaluate the complex reasoning capabilities, strong domain knowledge, and advanced calculation skills of LLMs.
|
Please refer to our [paper](https://arxiv.org/abs/2307.10635) or [website](https://scibench-ucla.github.io) for the full description: *SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models*.
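
If the benchmark is published on the Hugging Face Hub, it can be loaded with the `datasets` library. The snippet below is only a minimal sketch: the repository ID is a placeholder that is not confirmed by this card and should be replaced with the actual Hub ID of this dataset.

```python
# Minimal sketch: loading SciBench with the Hugging Face `datasets` library.
# NOTE: "placeholder/scibench" is a hypothetical repository ID; substitute
# the actual Hub ID of this dataset before running.
from datasets import load_dataset

dataset = load_dataset("placeholder/scibench")
print(dataset)           # inspect the available splits and fields

first_split = next(iter(dataset.values()))
print(first_split[0])    # look at a single problem instance
```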
|
|
|
|
|
## Citation |
|
If you find our work useful, please cite our paper:
|
```
@inproceedings{wang2024scibench,
  author    = {Wang, Xiaoxuan and Hu, Ziniu and Lu, Pan and Zhu, Yanqiao and Zhang, Jieyu and Subramaniam, Satyen and Loomba, Arjun R. and Zhang, Shichang and Sun, Yizhou and Wang, Wei},
  title     = {{SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models}},
  booktitle = {Proceedings of the Forty-First International Conference on Machine Learning},
  year      = {2024},
}
```
|
|
|
|
|
|
|