---
dataset_info:
  features:
  - name: question
    dtype: string
  - name: answer
    dtype: float64
  - name: context
    dtype: string
  - name: task
    dtype: string
  splits:
  - name: train
    num_bytes: 638720
    num_examples: 223
  download_size: 198425
  dataset_size: 638720
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# Finance Fundamentals: Quantity Extraction
This dataset contains evaluation examples for extracting numeric quantities from financial text. The source data comes from:
Each question went through additional manual review to ensure both correctness and clarity. For more information, see the BizBench paper.
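Assuming the data files follow the layout declared in the header above, the examples can be loaded with the `datasets` library. The repository ID below is a placeholder; substitute the ID this card is hosted under.

```python
from datasets import load_dataset

# Placeholder repository ID; replace with the actual dataset repo.
ds = load_dataset("your-org/finance-fundamentals-quantity-extraction", split="train")

print(ds)  # features: question, answer, context, task; 223 examples

example = ds[0]
print(example["question"])  # natural-language question about the context
print(example["answer"])    # gold answer as a single float64
```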
## Example
Each question is paired with a document context:
The Company’s top ten clients accounted for 42.2%, 44.2% and 46.9% of its consolidated revenues during the years ended December 31, 2019, 2018 and 2017, respectively.
The following table represents a disaggregation of revenue from contracts with customers by delivery location (in thousands):
| | | Years Ended December 31, | |
| :--- | :--- | :--- | :--- |
| | 2019 | 2018 | 2017 |
| Americas: | | | |
| United States | $614,493 | $668,580 | $644,870 |
| The Philippines | 250,888 | 231,966 | 241,211 |
| Costa Rica | 127,078 | 127,963 | 132,542 |
| Canada | 99,037 | 102,353 | 112,367 |
| El Salvador | 81,195 | 81,156 | 75,800 |
| Other | 123,969 | 118,620 | 118,853 |
| Total Americas | 1,296,660 | 1,330,638 | 1,325,643 |
| EMEA: | | | |
| Germany | 94,166 | 91,703 | 81,634 |
| Other | 223,847 | 203,251 | 178,649 |
| Total EMEA | 318,013 | 294,954 | 260,283 |
| Total Other | 89 | 95 | 82 |
| | $1,614,762 | $1,625,687 | $1,586,008 |
An associated question that references the context:
What was the Total Americas amount in 2019? (thousand)
And an answer represented as a single float value:
1296660.0
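One minimal way to score a model against these gold floats is to parse the last number out of its free-form answer and compare within a small relative tolerance. This is a sketch, not the official BizBench evaluation protocol; the helper names and the tolerance are illustrative choices.

```python
import math
import re
from typing import Optional

def parse_number(text: str) -> Optional[float]:
    """Pull the last number out of a free-form answer, e.g. '$1,296,660 thousand'."""
    matches = re.findall(r"-?\d[\d,]*\.?\d*", text.replace("$", ""))
    if not matches:
        return None
    return float(matches[-1].replace(",", ""))

def is_correct(prediction: str, gold: float, rel_tol: float = 1e-3) -> bool:
    """Treat a prediction as correct if its numeric value is within rel_tol of the gold answer."""
    value = parse_number(prediction)
    return value is not None and math.isclose(value, gold, rel_tol=rel_tol)

print(is_correct("Total Americas revenue in 2019 was $1,296,660 thousand.", 1296660.0))  # True
```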
## Citation
If you find this data useful, please cite:
    @inproceedings{krumdick-etal-2024-bizbench,
        title = "{B}iz{B}ench: A Quantitative Reasoning Benchmark for Business and Finance",
        author = "Krumdick, Michael and
          Koncel-Kedziorski, Rik and
          Lai, Viet Dac and
          Reddy, Varshini and
          Lovering, Charles and
          Tanner, Chris",
        editor = "Ku, Lun-Wei and
          Martins, Andre and
          Srikumar, Vivek",
        booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
        month = aug,
        year = "2024",
        address = "Bangkok, Thailand",
        publisher = "Association for Computational Linguistics",
        url = "https://aclanthology.org/2024.acl-long.452/",
        doi = "10.18653/v1/2024.acl-long.452",
        pages = "8309--8332",
        abstract = "Answering questions within business and finance requires reasoning, precision, and a wide-breadth of technical knowledge. Together, these requirements make this domain difficult for large language models (LLMs). We introduce BizBench, a benchmark for evaluating models' ability to reason about realistic financial problems. BizBench comprises eight quantitative reasoning tasks, focusing on question-answering (QA) over financial data via program synthesis. We include three financially-themed code-generation tasks from newly collected and augmented QA data. Additionally, we isolate the reasoning capabilities required for financial QA: reading comprehension of financial text and tables for extracting intermediate values, and understanding financial concepts and formulas needed to calculate complex solutions. Collectively, these tasks evaluate a model{'}s financial background knowledge, ability to parse financial documents, and capacity to solve problems with code. We conduct an in-depth evaluation of open-source and commercial LLMs, comparing and contrasting the behavior of code-focused and language-focused models. We demonstrate that the current bottleneck in performance is due to LLMs' limited business and financial understanding, highlighting the value of a challenging benchmark for quantitative reasoning within this domain."
    }