Towards Robust Visual Question Answering: Making the Most of Biased Samples via Contrastive Learning
Qingyi Si, Yuanxin Liu, Fandong Meng, Zheng Lin, Peng Fu, Yanan Cao, Weiping Wang, Jie Zhou
Abstract
Models for Visual Question Answering (VQA) often rely on spurious correlations, i.e., the language priors that appear in the biased samples of the training set, which makes them brittle against out-of-distribution (OOD) test data. Recent methods have made promising progress in overcoming this problem by reducing the impact of biased samples on model training. However, these models reveal a trade-off: the improvements on OOD data come at the cost of a severe drop in performance on the in-distribution (ID) data (which is dominated by the biased samples). Therefore, we propose a novel contrastive learning approach, MMBS, for building robust VQA models by Making the Most of Biased Samples. Specifically, we construct positive samples for contrastive learning by eliminating the information related to spurious correlations from the original training samples, and we explore several strategies to use the constructed positive samples for training. Instead of undermining the importance of biased samples in model training, our approach precisely exploits the biased samples for unbiased information that contributes to reasoning. The proposed method is compatible with various VQA backbones. We validate our contributions by achieving competitive performance on the OOD dataset VQA-CP v2 while preserving robust performance on the ID dataset VQA v2.
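The abstract only describes the approach at a high level. As an illustration, below is a minimal, hypothetical sketch of the kind of contrastive objective it alludes to: a positive sample is built by stripping the question-type prefix (assumed here to be the main carrier of the language prior) and an InfoNCE-style loss pulls the original and positive representations together. All names in this sketch (remove_question_prefix, contrastive_loss, the prefix length, etc.) are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

# Hypothetical positive-sample construction: drop the question-type prefix
# (e.g. "what color is"), assumed here to carry the spurious language prior.
def remove_question_prefix(question_tokens, prefix_len=3):
    return question_tokens[prefix_len:]

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss: pull the debiased positive towards the anchor
    representation and push the other samples in the batch away."""
    anchor = F.normalize(anchor, dim=-1)        # [B, d] original sample
    positive = F.normalize(positive, dim=-1)    # [B, d] prefix-removed sample
    negatives = F.normalize(negatives, dim=-1)  # [B, K, d] other samples

    pos_logit = (anchor * positive).sum(-1, keepdim=True) / temperature       # [B, 1]
    neg_logits = torch.einsum('bd,bkd->bk', anchor, negatives) / temperature  # [B, K]

    logits = torch.cat([pos_logit, neg_logits], dim=1)  # positive at index 0
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)
```

Presumably such an auxiliary loss would be added to the standard VQA classification loss, so that biased samples still contribute their unbiased information rather than being down-weighted; the paper itself should be consulted for the actual positive-construction strategies and training schemes.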
- Anthology ID: 2022.findings-emnlp.495
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2022
- Month: December
- Year: 2022
- Address: Abu Dhabi, United Arab Emirates
- Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 6650–6662
- URL: https://aclanthology.org/2022.findings-emnlp.495
- DOI: 10.18653/v1/2022.findings-emnlp.495
- Cite (ACL): Qingyi Si, Yuanxin Liu, Fandong Meng, Zheng Lin, Peng Fu, Yanan Cao, Weiping Wang, and Jie Zhou. 2022. Towards Robust Visual Question Answering: Making the Most of Biased Samples via Contrastive Learning. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 6650–6662, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
- Cite (Informal): Towards Robust Visual Question Answering: Making the Most of Biased Samples via Contrastive Learning (Si et al., Findings 2022)
- PDF: https://preview.aclanthology.org/landing_page/2022.findings-emnlp.495.pdf