# Data for CLIP Training on Chart Tasks
This repository contains the CLIP training data from our paper "On the Perception Bottleneck of VLMs for Chart Understanding".
## Data Details
- Data source: primarily chart-task datasets such as ChartQA, FigureQA, and DVQA.
- Data overview: each example consists of a chart image, a correct caption, and an incorrect caption.
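Since each example pairs one image with a correct and an incorrect caption, it maps naturally onto contrastive (CLIP-style) image-text pairs. The sketch below illustrates this with hypothetical field names (`image`, `correct_caption`, `wrong_caption`); the actual field names and file layout may differ, so check the data files before relying on them.

```python
# Hypothetical structure of one example; field names are illustrative
# and not taken from the dataset itself.
example = {
    "image": "charts/example_0001.png",  # path to the chart image
    "correct_caption": "The 2020 bar is the tallest.",
    "wrong_caption": "The 2018 bar is the tallest.",
}

def to_clip_pairs(ex):
    """Expand one example into (image, caption, label) pairs for
    contrastive training: label 1 = matching caption, 0 = mismatch."""
    return [
        (ex["image"], ex["correct_caption"], 1),
        (ex["image"], ex["wrong_caption"], 0),
    ]

pairs = to_clip_pairs(example)
# Each example yields one positive and one negative image-text pair.
```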
## Citation
If you find this data useful in your research, please consider citing our paper:
```bibtex
@misc{liu2025perceptionbottleneckvlmschart,
      title={On the Perception Bottleneck of VLMs for Chart Understanding},
      author={Junteng Liu and Weihao Zeng and Xiwen Zhang and Yijun Wang and Zifei Shan and Junxian He},
      year={2025},
      eprint={2503.18435},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2503.18435},
}
```