|
--- |
|
license: cc-by-4.0 |
|
configs: |
|
- config_name: ArxivOCR |
|
data_files: |
|
- split: train |
|
path: ArxivOCR/train-* |
|
- config_name: ArxivTableCap |
|
data_files: |
|
- split: train |
|
path: ArxivTableCap/train-* |
|
- split: val |
|
path: ArxivTableCap/val-* |
|
- split: test |
|
path: ArxivTableCap/test-* |
|
- config_name: COCOtext |
|
data_files: |
|
- split: train |
|
path: COCOtext/train-* |
|
- split: val |
|
path: COCOtext/val-* |
|
- config_name: Open4Business |
|
data_files: |
|
- split: train |
|
path: Open4Business/train-* |
|
- split: val |
|
path: Open4Business/val-* |
|
- split: test |
|
path: Open4Business/test-* |
|
- config_name: TabFact |
|
data_files: |
|
- split: train |
|
path: TabFact/train-* |
|
- split: val |
|
path: TabFact/val-* |
|
- split: test |
|
path: TabFact/test-* |
|
- config_name: TextOCR |
|
data_files: |
|
- split: train |
|
path: TextOCR/train-* |
|
- split: val |
|
path: TextOCR/val-* |
|
- config_name: WikiTQ |
|
data_files: |
|
- split: train |
|
path: WikiTQ/train-* |
|
- split: val |
|
path: WikiTQ/val-* |
|
- split: test |
|
path: WikiTQ/test-* |
|
- config_name: cord-v2 |
|
data_files: |
|
- split: train |
|
path: cord-v2/train-* |
|
- split: val |
|
path: cord-v2/val-* |
|
- split: test |
|
path: cord-v2/test-* |
|
- config_name: pubtables-1m |
|
data_files: |
|
- split: train |
|
path: pubtables-1m/train-* |
|
- split: val |
|
path: pubtables-1m/val-* |
|
- split: test |
|
path: pubtables-1m/test-* |
|
dataset_info: |
|
- config_name: ArxivOCR |
|
features: |
|
- name: sample_id |
|
dtype: string |
|
- name: dataset_name |
|
dtype: string |
|
- name: task_name |
|
dtype: string |
|
- name: query |
|
sequence: string |
|
- name: annotations |
|
sequence: string |
|
- name: image |
|
dtype: image |
|
- name: query_info |
|
struct: |
|
- name: notes |
|
dtype: string |
|
- name: source_attribution |
|
dtype: string |
|
- name: source_license |
|
dtype: string |
|
- name: source_url |
|
dtype: string |
|
- name: annotations_info |
|
struct: |
|
- name: notes |
|
dtype: string |
|
- name: source_attribution |
|
dtype: string |
|
- name: source_license |
|
dtype: string |
|
- name: source_url |
|
dtype: string |
|
- name: image_info |
|
struct: |
|
- name: notes |
|
dtype: string |
|
- name: source_attribution |
|
dtype: string |
|
- name: source_license |
|
dtype: string |
|
- name: source_url |
|
dtype: string |
|
- name: image_sha256 |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 563398948953.968 |
|
num_examples: 446016 |
|
download_size: 561963351553 |
|
dataset_size: 563398948953.968 |
|
- config_name: ArxivTableCap |
|
features: |
|
- name: sample_id |
|
dtype: string |
|
- name: dataset_name |
|
dtype: string |
|
- name: task_name |
|
dtype: string |
|
- name: query |
|
sequence: string |
|
- name: annotations |
|
sequence: string |
|
- name: image |
|
dtype: image |
|
- name: query_info |
|
struct: |
|
- name: notes |
|
dtype: string |
|
- name: source_attribution |
|
dtype: string |
|
- name: source_license |
|
dtype: string |
|
- name: source_url |
|
dtype: string |
|
- name: annotations_info |
|
struct: |
|
- name: notes |
|
dtype: string |
|
- name: source_attribution |
|
dtype: string |
|
- name: source_license |
|
dtype: string |
|
- name: source_url |
|
dtype: string |
|
- name: image_info |
|
struct: |
|
- name: notes |
|
dtype: string |
|
- name: source_attribution |
|
dtype: string |
|
- name: source_license |
|
dtype: string |
|
- name: source_url |
|
dtype: string |
|
- name: image_sha256 |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 5994865187.0 |
|
num_examples: 71024 |
|
- name: val |
|
num_bytes: 78047974.0 |
|
num_examples: 1000 |
|
- name: test |
|
num_bytes: 24057511.0 |
|
num_examples: 500 |
|
download_size: 6058341574 |
|
dataset_size: 6096970672.0 |
|
- config_name: COCOtext |
|
features: |
|
- name: sample_id |
|
dtype: string |
|
- name: dataset_name |
|
dtype: string |
|
- name: task_name |
|
dtype: string |
|
- name: query |
|
sequence: string |
|
- name: annotations |
|
sequence: string |
|
- name: img_id |
|
dtype: string |
|
- name: query_info |
|
struct: |
|
- name: notes |
|
dtype: string |
|
- name: source_attribution |
|
sequence: string |
|
- name: source_license |
|
dtype: string |
|
- name: source_url |
|
dtype: string |
|
- name: annotations_info |
|
struct: |
|
- name: notes |
|
dtype: string |
|
- name: source_attribution |
|
sequence: string |
|
- name: source_license |
|
dtype: string |
|
- name: source_url |
|
dtype: string |
|
- name: image_info |
|
struct: |
|
- name: notes |
|
dtype: string |
|
- name: image_sha256 |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 66261559 |
|
num_examples: 46970 |
|
- name: val |
|
num_bytes: 12498555 |
|
num_examples: 8892 |
|
download_size: 15280439 |
|
dataset_size: 78760114 |
|
- config_name: Open4Business |
|
features: |
|
- name: sample_id |
|
dtype: string |
|
- name: dataset_name |
|
dtype: string |
|
- name: task_name |
|
dtype: string |
|
- name: query |
|
sequence: string |
|
- name: img_id |
|
dtype: string |
|
- name: query_info |
|
struct: |
|
- name: notes |
|
dtype: string |
|
- name: annotations_info |
|
struct: |
|
- name: notes |
|
dtype: string |
|
- name: image_info |
|
struct: |
|
- name: notes |
|
dtype: string |
|
- name: image_sha256 |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 15054769 |
|
num_examples: 27932 |
|
- name: val |
|
num_bytes: 1862001 |
|
num_examples: 3492 |
|
- name: test |
|
num_bytes: 1869451 |
|
num_examples: 3492 |
|
download_size: 2997014 |
|
dataset_size: 18786221 |
|
- config_name: TabFact |
|
features: |
|
- name: sample_id |
|
dtype: string |
|
- name: dataset_name |
|
dtype: string |
|
- name: task_name |
|
dtype: string |
|
- name: query |
|
sequence: string |
|
- name: img_id |
|
dtype: string |
|
- name: query_info |
|
struct: |
|
- name: notes |
|
dtype: string |
|
- name: annotations_info |
|
struct: |
|
- name: notes |
|
dtype: string |
|
- name: image_info |
|
struct: |
|
- name: notes |
|
dtype: string |
|
- name: image_sha256 |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 21877661 |
|
num_examples: 39543 |
|
- name: val |
|
num_bytes: 2815202 |
|
num_examples: 5088 |
|
- name: test |
|
num_bytes: 2813402 |
|
num_examples: 5085 |
|
download_size: 4778068 |
|
dataset_size: 27506265 |
|
- config_name: TextOCR |
|
features: |
|
- name: sample_id |
|
dtype: string |
|
- name: dataset_name |
|
dtype: string |
|
- name: task_name |
|
dtype: string |
|
- name: query |
|
sequence: string |
|
- name: annotations |
|
sequence: string |
|
- name: img_id |
|
dtype: string |
|
- name: query_info |
|
struct: |
|
- name: notes |
|
dtype: string |
|
- name: source_license |
|
dtype: string |
|
- name: source_url |
|
dtype: string |
|
- name: annotations_info |
|
struct: |
|
- name: notes |
|
dtype: string |
|
- name: source_license |
|
dtype: string |
|
- name: source_url |
|
dtype: string |
|
- name: image_info |
|
struct: |
|
- name: notes |
|
dtype: string |
|
- name: image_sha256 |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 123664898 |
|
num_examples: 43484 |
|
- name: val |
|
num_bytes: 17736515 |
|
num_examples: 6232 |
|
download_size: 35928078 |
|
dataset_size: 141401413 |
|
- config_name: WikiTQ |
|
features: |
|
- name: sample_id |
|
dtype: string |
|
- name: dataset_name |
|
dtype: string |
|
- name: task_name |
|
dtype: string |
|
- name: query |
|
sequence: string |
|
- name: annotations |
|
sequence: string |
|
- name: img_id |
|
dtype: string |
|
- name: query_info |
|
struct: |
|
- name: notes |
|
dtype: string |
|
- name: annotations_info |
|
struct: |
|
- name: notes |
|
dtype: string |
|
- name: image_info |
|
struct: |
|
- name: notes |
|
dtype: string |
|
- name: image_sha256 |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 19880124 |
|
num_examples: 27807 |
|
- name: val |
|
num_bytes: 4931100 |
|
num_examples: 6907 |
|
- name: test |
|
num_bytes: 6090798 |
|
num_examples: 8528 |
|
download_size: 2966839 |
|
dataset_size: 30902022 |
|
- config_name: cord-v2 |
|
features: |
|
- name: sample_id |
|
dtype: string |
|
- name: dataset_name |
|
dtype: string |
|
- name: task_name |
|
dtype: string |
|
- name: query |
|
sequence: string |
|
- name: annotations |
|
sequence: string |
|
- name: image |
|
dtype: image |
|
- name: query_info |
|
struct: |
|
- name: notes |
|
dtype: string |
|
- name: source_license |
|
dtype: string |
|
- name: source_url |
|
dtype: string |
|
- name: annotations_info |
|
struct: |
|
- name: notes |
|
dtype: string |
|
- name: source_license |
|
dtype: string |
|
- name: source_url |
|
dtype: string |
|
- name: image_info |
|
struct: |
|
- name: notes |
|
dtype: string |
|
- name: source_license |
|
dtype: string |
|
- name: source_url |
|
dtype: string |
|
- name: image_sha256 |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 5185587863.0 |
|
num_examples: 3200 |
|
- name: val |
|
num_bytes: 686091779.0 |
|
num_examples: 400 |
|
- name: test |
|
num_bytes: 654076163.0 |
|
num_examples: 400 |
|
download_size: 5879361638 |
|
dataset_size: 6525755805.0 |
|
- config_name: pubtables-1m |
|
features: |
|
- name: sample_id |
|
dtype: string |
|
- name: dataset_name |
|
dtype: string |
|
- name: task_name |
|
dtype: string |
|
- name: query |
|
sequence: string |
|
- name: annotations |
|
sequence: string |
|
- name: img_id |
|
dtype: string |
|
- name: query_info |
|
struct: |
|
- name: notes |
|
dtype: string |
|
- name: source_license |
|
dtype: string |
|
- name: source_url |
|
dtype: string |
|
- name: annotations_info |
|
struct: |
|
- name: notes |
|
dtype: string |
|
- name: source_license |
|
dtype: string |
|
- name: source_url |
|
dtype: string |
|
- name: image_info |
|
struct: |
|
- name: notes |
|
dtype: string |
|
- name: image_sha256 |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 4515375098 |
|
num_examples: 1371294 |
|
- name: val |
|
num_bytes: 568892238 |
|
num_examples: 172887 |
|
- name: test |
|
num_bytes: 561231801 |
|
num_examples: 170466 |
|
download_size: 1398777256 |
|
dataset_size: 5645499137 |
|
--- |
|
# BigDocs-7.5M |
|
#### Training data for the paper: [BigDocs: An Open and Permissively-Licensed Dataset for Training Multimodal Models on Document and Code Tasks](https://huggingface.co/datasets/ServiceNow/BigDocs-Bench-Collections/) |
|
|
|
🌐 [Homepage](https://bigdocs.github.io) | 📖 [arXiv](https://arxiv.org/pdf/2412.04626) |
|
|
|
|
|
## Guide on Data Loading |
|
Some configurations of BigDocs-7.5M are distributed without their "image" column and carry an "img_id" column instead. The file `get_bigdocs_75m.py`, included in this repository, provides tooling to substitute such images back in.
|
|
|
```python |
|
from get_bigdocs_75m import get_bigdocs_75m |
|
|
|
arxivocr = get_bigdocs_75m("ArxivOCR") |
|
arxivtablecap = get_bigdocs_75m("ArxivTableCap") |
|
cocotext = get_bigdocs_75m("COCOtext", user_local_path=".../train2014") |
|
pubtables1m = get_bigdocs_75m("pubtables-1m", user_local_path=".../PubTables-1M-Detection/images") |
|
textocr = get_bigdocs_75m("TextOCR", user_local_path=".../train") |
|
tabfact = get_bigdocs_75m("TabFact", user_local_path=".../Table-Fact-Checking") |
|
open4business = get_bigdocs_75m("Open4Business", user_local_path=".../Open4Business") |
|
wikitq = get_bigdocs_75m("WikiTQ", user_local_path=".../WikiTableQuestions") |
|
``` |
|
|
|
When specified, `user_local_path` must point to a local copy of the corresponding third-party dataset listed below.
|
|
|
- COCOtext: http://images.cocodataset.org/zips/train2014.zip |
|
- pubtables-1m: https://www.microsoft.com/en-us/research/publication/pubtables-1m |
|
- TextOCR: https://dl.fbaipublicfiles.com/textvqa/images/train_val_images.zip |
|
- TabFact: https://github.com/wenhuchen/Table-Fact-Checking |
|
- Open4Business: https://github.com/amanpreet692/Open4Business |
|
- WikiTQ: https://github.com/ppasupat/WikiTableQuestions |
|
|
|
You may specify `num_proc` as you would for `datasets.Dataset.map`. See the docstring in `get_bigdocs_75m.py` for more details.
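
For reference, the sketch below shows roughly what this image substitution amounts to: attaching a local file path for each `img_id` and letting `datasets` decode it as an image. The repository id and the assumption that `img_id` names a file directly under `user_local_path` are illustrative only; the actual mapping is implemented in `get_bigdocs_75m.py` and may differ per config.

```python
import os

from datasets import Image, load_dataset

REPO_ID = "ServiceNow/BigDocs-7.5M"  # assumed repository id; adjust if needed


def attach_images(config_name, user_local_path, num_proc=4):
    # Load the image-less config from the Hub.
    ds = load_dataset(REPO_ID, config_name)

    # Assumption: `img_id` names a file directly under `user_local_path`.
    def add_path(example):
        example["image"] = os.path.join(user_local_path, example["img_id"])
        return example

    ds = ds.map(add_path, num_proc=num_proc)
    # Casting the path column to `Image()` lets `datasets` decode it lazily.
    return ds.cast_column("image", Image())


# cocotext = attach_images("COCOtext", ".../train2014")
```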
|
|
|
|
|
## Licensing |
|
The part of this repository generated by us is Copyright ServiceNow 2024 and licensed under the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license. |
|
|
|
Multiple datasets, documents, and tools were involved in the generation of BigDocs-7.5M. We document these dependencies on a per-sample basis through the `query_info`, `annotations_info`, and `image_info` fields, which respectively describe the provenance of the `query`, `annotations`, and `image` fields of our datasets.
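
As a minimal sketch of inspecting these per-sample records (repository id assumed, and streaming used to avoid a full download):

```python
from collections import Counter

from datasets import load_dataset

# Tally the image licenses recorded for the first few cord-v2 samples.
ds = load_dataset("ServiceNow/BigDocs-7.5M", "cord-v2", split="train", streaming=True)
licenses = Counter(ex["image_info"]["source_license"] for ex in ds.take(100))
print(licenses)
```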
|
|