A negative case analysis of visual grounding methods for VQA

Robik Shrestha, Kushal Kafle, Christopher Kanan


Abstract
Existing Visual Question Answering (VQA) methods tend to exploit dataset biases and spurious statistical correlations instead of producing the right answers for the right reasons. To address this issue, recent bias mitigation methods for VQA propose incorporating visual cues (e.g., human attention maps) to better ground the models, and they report impressive gains. However, we show that these performance improvements are not the result of improved visual grounding, but of a regularization effect that prevents overfitting to linguistic priors. For instance, we find that it is not actually necessary to provide proper, human-based cues; random, insensible cues also yield similar improvements. Based on this observation, we propose a simpler regularization scheme that does not require any external annotations and yet achieves near state-of-the-art performance on VQA-CPv2.
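The abstract does not spell out the proposed regularizer, so the following PyTorch sketch only illustrates the general idea of an annotation-free regularization scheme: zeroing out the ground-truth answer targets for a randomly chosen fraction of training instances, so the model cannot rely on linguistic priors alone. The function name, the zero_out_prob value, and the exact formulation are hypothetical assumptions for illustration, not the paper's verbatim recipe.

import torch
import torch.nn.functional as F

def regularized_vqa_loss(logits, targets, zero_out_prob=0.33):
    """Multi-label BCE loss with a simple annotation-free regularizer.

    For a random subset of instances in the batch, the ground-truth
    answer vector is replaced with all zeros, pushing the model toward
    uncertainty on those instances instead of letting it memorize
    question-type priors. zero_out_prob is an illustrative choice,
    not a value taken from the paper.
    """
    batch_size = logits.size(0)
    # Bernoulli mask per instance: 1 = keep the real targets, 0 = zero them out.
    keep = (torch.rand(batch_size, device=logits.device) > zero_out_prob).float()
    reg_targets = targets * keep.unsqueeze(1)
    # Standard multi-label BCE, as used by UpDn-style VQA models.
    return F.binary_cross_entropy_with_logits(logits, reg_targets)

# Usage (shapes assumed): logits and targets of shape [batch, num_answers].
# loss = regularized_vqa_loss(model(image_feats, question_tokens), answer_targets)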
Anthology ID:
2020.acl-main.727
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
8172–8181
URL:
https://aclanthology.org/2020.acl-main.727
DOI:
10.18653/v1/2020.acl-main.727
Cite (ACL):
Robik Shrestha, Kushal Kafle, and Christopher Kanan. 2020. A negative case analysis of visual grounding methods for VQA. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8172–8181, Online. Association for Computational Linguistics.
Cite (Informal):
A negative case analysis of visual grounding methods for VQA (Shrestha et al., ACL 2020)
PDF:
https://aclanthology.org/2020.acl-main.727.pdf
Video:
http://slideslive.com/38929240
Code:
erobic/negative_analysis_of_grounding
Data:
Visual Question Answering