Language Prior Is Not the Only Shortcut: A Benchmark for Shortcut Learning in VQA

Qingyi Si, Fandong Meng, Mingyu Zheng, Zheng Lin, Yuanxin Liu, Peng Fu, Yanan Cao, Weiping Wang, Jie Zhou


Abstract
Visual Question Answering (VQA) models are prone to learning shortcut solutions formed by dataset biases rather than the intended solution. To evaluate VQA models’ reasoning ability beyond shortcut learning, the VQA-CP v2 dataset introduces a distribution shift between the training and test sets given a question type. In this way, the model cannot use the training-set shortcut (from question type to answer) to perform well on the test set. However, VQA-CP v2 only considers one type of shortcut and thus still cannot guarantee that the model relies on the intended solution rather than a solution specific to this shortcut. To overcome this limitation, we propose a new dataset that considers varying types of shortcuts by constructing different distribution shifts in multiple OOD test sets. In addition, we address three troubling practices in the use of VQA-CP v2 (e.g., selecting models using OOD test sets) and further standardize the OOD evaluation procedure. Our benchmark provides a more rigorous and comprehensive testbed for shortcut learning in VQA. We benchmark recent methods and find that methods specifically designed for particular shortcuts fail to simultaneously generalize to our varying OOD test sets. We also systematically study the varying shortcuts and provide several valuable findings, which may promote the exploration of shortcut learning in VQA.
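To make the "distribution shift given a question type" idea concrete, the toy sketch below (with hypothetical data and metric choice, not the paper's actual construction) measures how far the per-question-type answer distribution drifts between a train and a test split, using total variation distance:

```python
# Toy sketch: per-question-type answer distribution shift, in the spirit of
# VQA-CP v2. The data and the total-variation metric are illustrative
# assumptions, not the paper's actual dataset construction procedure.
from collections import Counter

def answer_dist(examples):
    """Map question type -> normalized answer distribution."""
    by_type = {}
    for qtype, answer in examples:
        by_type.setdefault(qtype, Counter())[answer] += 1
    return {t: {a: n / sum(c.values()) for a, n in c.items()}
            for t, c in by_type.items()}

def total_variation(p, q):
    """Total variation distance between two answer distributions."""
    answers = set(p) | set(q)
    return 0.5 * sum(abs(p.get(a, 0.0) - q.get(a, 0.0)) for a in answers)

# Hypothetical (question type, ground-truth answer) pairs: the majority
# answer for "what color" flips between the splits, breaking the shortcut
# from question type to answer.
train = [("what color", "white")] * 8 + [("what color", "red")] * 2
test  = [("what color", "white")] * 2 + [("what color", "red")] * 8

train_d, test_d = answer_dist(train), answer_dist(test)
shift = {t: total_variation(train_d[t], test_d[t]) for t in train_d}
print(shift)  # large shift (0.6) for "what color"
```

A model that answers "white" whenever the question type is "what color" scores 80% on this toy train split but only 20% on the test split, which is exactly the failure mode the OOD test sets are designed to expose.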
Anthology ID:
2022.findings-emnlp.271
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3698–3712
URL:
https://aclanthology.org/2022.findings-emnlp.271
DOI:
10.18653/v1/2022.findings-emnlp.271
Cite (ACL):
Qingyi Si, Fandong Meng, Mingyu Zheng, Zheng Lin, Yuanxin Liu, Peng Fu, Yanan Cao, Weiping Wang, and Jie Zhou. 2022. Language Prior Is Not the Only Shortcut: A Benchmark for Shortcut Learning in VQA. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 3698–3712, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Language Prior Is Not the Only Shortcut: A Benchmark for Shortcut Learning in VQA (Si et al., Findings 2022)
PDF:
https://preview.aclanthology.org/improve-issue-templates/2022.findings-emnlp.271.pdf