CrossVQA: Scalably Generating Benchmarks for Systematically Testing VQA Generalization
Arjun Akula | Soravit Changpinyo | Boqing Gong | Piyush Sharma | Song-Chun Zhu | Radu Soricut
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
One challenge in evaluating visual question answering (VQA) models in the cross-dataset adaptation setting is that the distribution shifts are multi-modal, making it difficult to identify whether it is the shifts in visual or language features that play the key role. In this paper, we propose a semi-automatic framework for generating disentangled shifts by introducing a controllable visual question-answer generation (VQAG) module that is capable of generating highly relevant and diverse question-answer pairs in the desired dataset style. We use it to create CrossVQA, a collection of test splits for assessing VQA generalization based on the VQA2, VizWiz, and Open Images datasets. We provide an analysis of our generated datasets and demonstrate their utility by using them to evaluate several state-of-the-art VQA systems. One important finding is that the visual shifts in cross-dataset VQA matter more than the language shifts. More broadly, we present a scalable framework for systematically evaluating machine performance with little human intervention.