Do explanations make VQA models more predictable to a human?
Arjun Chandrasekaran, Viraj Prabhu, Deshraj Yadav, Prithvijit Chattopadhyay, Devi Parikh
Abstract
A rich line of research attempts to make deep neural networks more transparent by generating human-interpretable ‘explanations’ of their decision process, especially for interactive tasks like Visual Question Answering (VQA). In this work, we analyze if existing explanations indeed make a VQA model — its responses as well as failures — more predictable to a human. Surprisingly, we find that they do not. On the other hand, we find that human-in-the-loop approaches that treat the model as a black-box do.
- Anthology ID: D18-1128
- Volume: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
- Month: October-November
- Year: 2018
- Address: Brussels, Belgium
- Editors: Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
- Venue: EMNLP
- SIG: SIGDAT
- Publisher: Association for Computational Linguistics
- Pages: 1036–1042
- URL: https://aclanthology.org/D18-1128
- DOI: 10.18653/v1/D18-1128
- Cite (ACL): Arjun Chandrasekaran, Viraj Prabhu, Deshraj Yadav, Prithvijit Chattopadhyay, and Devi Parikh. 2018. Do explanations make VQA models more predictable to a human?. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1036–1042, Brussels, Belgium. Association for Computational Linguistics.
- Cite (Informal): Do explanations make VQA models more predictable to a human? (Chandrasekaran et al., EMNLP 2018)
- PDF: https://preview.aclanthology.org/naacl24-info/D18-1128.pdf
- Data: Visual Question Answering