Data Augmentation for Visual Question Answering

Kushal Kafle, Mohammed Yousefhussien, Christopher Kanan


Abstract
Data augmentation is widely used to train deep neural networks for image classification tasks. Simply flipping images can help learning tremendously by increasing the number of training images by a factor of two. However, little work has studied data augmentation in natural language processing. Here, we describe two methods for data augmentation for Visual Question Answering (VQA). The first uses existing semantic annotations to generate new questions. The second is a generative approach using recurrent neural networks. Experiments show that the proposed data augmentation improves the performance of both baseline and state-of-the-art VQA algorithms.
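To make the two ideas in the abstract concrete, here is a minimal sketch of (a) horizontal image flipping and (b) template-based question generation from existing semantic annotations. This is not the authors' exact pipeline: the annotation format (image id mapped to object labels), the template wording, and the naive pluralization are assumptions chosen to keep the example self-contained.

```python
# Illustrative sketch only; not the pipeline from the paper.
from typing import Dict, List, Tuple

import numpy as np


def flip_image(image: np.ndarray) -> np.ndarray:
    """Horizontal flip: the classic augmentation that doubles the image set.

    Spatially grounded QA pairs (e.g., answers of "left"/"right") would need
    their answers swapped after flipping; that step is omitted here.
    """
    return image[:, ::-1]


def questions_from_annotations(
    annotations: Dict[str, List[str]],  # assumed format: image_id -> object labels
) -> List[Tuple[str, str, str]]:
    """Turn object annotations into (image_id, question, answer) triples
    using simple counting and presence templates (naive pluralization)."""
    triples = []
    for image_id, labels in annotations.items():
        for label in set(labels):
            count = labels.count(label)
            triples.append(
                (image_id, f"How many {label}s are in the photo?", str(count))
            )
            triples.append((image_id, f"Is there a {label} in the photo?", "yes"))
    return triples


if __name__ == "__main__":
    toy_annotations = {"COCO_0001": ["dog", "dog", "frisbee"]}
    for triple in questions_from_annotations(toy_annotations):
        print(triple)
    print(flip_image(np.arange(6).reshape(1, 6)))  # columns reversed
```

One appeal of the template route is that answers come for free from ground-truth annotations, so the new QA pairs are guaranteed correct; the paper's second, RNN-based generative method (not sketched here) trades that guarantee for more varied question phrasing.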
Anthology ID:
W17-3529
Volume:
Proceedings of the 10th International Conference on Natural Language Generation
Month:
September
Year:
2017
Address:
Santiago de Compostela, Spain
Editors:
Jose M. Alonso, Alberto Bugarín, Ehud Reiter
Venue:
INLG
SIG:
SIGGEN
Publisher:
Association for Computational Linguistics
Pages:
198–202
URL:
https://aclanthology.org/W17-3529
DOI:
10.18653/v1/W17-3529
Bibkey:
kafle-etal-2017-data
Cite (ACL):
Kushal Kafle, Mohammed Yousefhussien, and Christopher Kanan. 2017. Data Augmentation for Visual Question Answering. In Proceedings of the 10th International Conference on Natural Language Generation, pages 198–202, Santiago de Compostela, Spain. Association for Computational Linguistics.
Cite (Informal):
Data Augmentation for Visual Question Answering (Kafle et al., INLG 2017)
PDF:
https://aclanthology.org/W17-3529.pdf
Data
COCO-QA, MS COCO, VQG, Visual Question Answering