Exploring Weaknesses of VQA Models through Attribution Driven Insights

Shaunak Halbe


Abstract
Deep Neural Networks have been successfully used for the task of Visual Question Answering for the past few years owing to the availability of relevant large-scale datasets. However, these datasets are created in artificial settings and rarely reflect real-world scenarios. Recent research effectively applies these VQA models to answering visual questions for the blind. Despite achieving high accuracy, these models appear to be susceptible to variation in input questions. We analyze popular VQA models through the lens of attribution (the input's influence on predictions) to gain valuable insights. Further, we use these insights to craft adversarial attacks which inflict significant damage on these systems with a negligible change in the meaning of the input questions. We believe this will aid the development of systems that are more robust to possible variations in inputs when deployed to assist the visually impaired.
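The attribution analysis described in the abstract can be illustrated with a minimal sketch. The snippet below applies Integrated Gradients to the question-token embeddings of a toy VQA model; the model architecture, vocabulary size, and feature dimensions are hypothetical stand-ins and are not the models or attribution pipeline used in the paper.

```python
# A minimal sketch of token-level attribution for a VQA model, assuming a
# toy bag-of-embeddings question encoder fused with a fixed image feature.
# The model and sizes here are hypothetical; real VQA models are far larger.
import torch
import torch.nn as nn


class ToyVQA(nn.Module):
    """Hypothetical VQA head: embeds question tokens, fuses with an image feature."""
    def __init__(self, vocab_size=1000, emb_dim=64, img_dim=128, num_answers=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.fuse = nn.Linear(emb_dim + img_dim, num_answers)

    def forward_from_embeddings(self, q_emb, img_feat):
        # q_emb: (batch, seq_len, emb_dim), img_feat: (batch, img_dim)
        pooled = q_emb.mean(dim=1)
        return self.fuse(torch.cat([pooled, img_feat], dim=-1))


def integrated_gradients(model, q_emb, img_feat, target, steps=50):
    """Approximate Integrated Gradients of the target logit w.r.t. question embeddings."""
    baseline = torch.zeros_like(q_emb)        # all-zero embedding baseline
    total_grads = torch.zeros_like(q_emb)
    for alpha in torch.linspace(0.0, 1.0, steps):
        point = baseline + alpha * (q_emb - baseline)
        point.requires_grad_(True)
        logits = model.forward_from_embeddings(point, img_feat)
        logits[:, target].sum().backward()
        total_grads += point.grad
    avg_grads = total_grads / steps
    return (q_emb - baseline) * avg_grads     # per-token, per-dimension attributions


model = ToyVQA()
tokens = torch.randint(0, 1000, (1, 6))       # stand-in question token ids
img_feat = torch.randn(1, 128)                # stand-in image feature
q_emb = model.embed(tokens).detach()
target = model.forward_from_embeddings(q_emb, img_feat).argmax(dim=-1).item()
attr = integrated_gradients(model, q_emb, img_feat, target)
token_importance = attr.sum(dim=-1)           # collapse embedding dims per token
print(token_importance)
```

Tokens with near-zero attribution are natural candidates for meaning-preserving substitutions or deletions, which is one way such insights can be turned into the kind of adversarial question variations the abstract mentions.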
Anthology ID:
2020.challengehml-1.9
Volume:
Second Grand-Challenge and Workshop on Multimodal Language (Challenge-HML)
Month:
July
Year:
2020
Address:
Seattle, USA
Venue:
Challenge-HML
Publisher:
Association for Computational Linguistics
Pages:
64–68
URL:
https://aclanthology.org/2020.challengehml-1.9
DOI:
10.18653/v1/2020.challengehml-1.9
Cite (ACL):
Shaunak Halbe. 2020. Exploring Weaknesses of VQA Models through Attribution Driven Insights. In Second Grand-Challenge and Workshop on Multimodal Language (Challenge-HML), pages 64–68, Seattle, USA. Association for Computational Linguistics.
Cite (Informal):
Exploring Weaknesses of VQA Models through Attribution Driven Insights (Halbe, Challenge-HML 2020)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2020.challengehml-1.9.pdf
Data
Visual Question Answering, Visual Question Answering v2.0, VizWiz