---
configs:
  - config_name: random
    default: true
    data_files:
      - split: train
        path: random/train.jsonl
      - split: test
        path: random/test.jsonl
      - split: dev
        path: random/dev.jsonl
  - config_name: zeroshot
    data_files:
      - split: train
        path: zeroshot/train.jsonl
      - split: test
        path: zeroshot/test.jsonl
      - split: dev
        path: zeroshot/dev.jsonl
license: apache-2.0
language:
  - en
tags:
  - visual reasoning
  - grounded chat
  - visual grounding
size_categories:
  - 1K<n<10K
---

# Grounded Visual Spatial Reasoning

Code for generating the annotations can be found here: github.com

## Dataset Summary

This dataset extends the Visual Spatial Reasoning (VSR) dataset with visual grounding annotations: each caption is annotated with COCO-category object mentions, their character positions in the caption, and the corresponding bounding boxes in the image.
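
Since the card defines two configurations (`random` and `zeroshot`), the dataset should load with the `datasets` library. A minimal sketch; note that `tomhodemon/grounded-vsr` is a placeholder repository ID, so substitute the actual dataset ID on the Hub:

```python
from datasets import load_dataset

# NOTE: "tomhodemon/grounded-vsr" is a placeholder -- use the actual Hub ID.
ds = load_dataset("tomhodemon/grounded-vsr", "random")  # or "zeroshot"

print(ds)                         # DatasetDict with train, test, and dev splits
print(ds["train"][0]["caption"])  # one grounded caption
```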

## Data Instance

Each data instance has the following structure:

| Field | Type | Description |
|---|---|---|
| `image_file` | string | COCO-style image filename |
| `image_link` | string | Direct COCO image URL |
| `width` | int | Image width |
| `height` | int | Image height |
| `caption` | string | Caption with two COCO-category object mentions |
| `label` | bool | Label from the original VSR dataset |
| `relation` | string | Spatial relation expressed in the caption |
| `ref_exp.labels` | list[string] | Object labels from the COCO categories |
| `ref_exp.label_positions` | list[list[int]] | (start, end) position of each label in the caption |
| `ref_exp.bboxes` | list[list[float]] | Bounding boxes in `[x, y, w, h]` format |
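
For illustration, a record in one of the `.jsonl` files might look like the following (all values below are hypothetical, not drawn from the dataset):

```json
{
  "image_file": "000000123456.jpg",
  "image_link": "http://images.cocodataset.org/train2017/000000123456.jpg",
  "width": 640,
  "height": 480,
  "caption": "The cat is on the dining table.",
  "label": true,
  "relation": "on",
  "ref_exp": {
    "labels": ["cat", "dining table"],
    "label_positions": [[4, 7], [18, 30]],
    "bboxes": [[102.5, 118.3, 140.2, 160.0], [60.0, 200.1, 420.7, 210.4]]
  }
}
```

Assuming the `(start, end)` offsets are end-exclusive, `caption[start:end]` recovers each mention (`caption[4:7] == "cat"` in the sketch above). The bounding boxes use COCO's `[x, y, w, h]` convention, so `x + w` and `y + h` give the bottom-right corner.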

## Download Images

To download the images, follow the instructions in the official VSR GitHub repository.
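
Alternatively, since each record carries a direct `image_link`, individual images can in principle be fetched straight from that URL. A minimal sketch using `requests`; the `download_image` helper and the `out_dir` default are illustrative, not part of the dataset:

```python
import os
import requests

def download_image(instance, out_dir="images"):
    """Fetch one image via the record's direct COCO URL (`image_link`)."""
    os.makedirs(out_dir, exist_ok=True)
    path = os.path.join(out_dir, instance["image_file"])
    resp = requests.get(instance["image_link"], timeout=30)
    resp.raise_for_status()
    with open(path, "wb") as f:
        f.write(resp.content)
    return path
```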

## Citation

If you use this dataset, please cite the original Visual Spatial Reasoning paper:

```bibtex
@article{Liu2022VisualSR,
  title={Visual Spatial Reasoning},
  author={Fangyu Liu and Guy Edward Toh Emerson and Nigel Collier},
  journal={Transactions of the Association for Computational Linguistics},
  year={2023},
}
```