Abstract
Recent years have witnessed impressive results of pre-trained vision-language models on knowledge-intensive tasks such as visual question answering (VQA). Despite the recent advances in VQA, existing methods mainly adopt a discriminative formulation that predicts answers within a pre-defined label set, leading to easy overfitting on low-resource domains (e.g., medicine) and poor generalization under domain shift to another dataset. To tackle this limitation, we propose a novel generative model enhanced by multimodal prompt retrieval (MPR) that integrates retrieved prompts and multimodal features to generate answers in free text. Our generative model enables rapid zero-shot dataset adaptation to unseen data distributions and open-set answer labels across datasets. Our experiments on medical VQA tasks show that MPR outperforms its non-retrieval counterpart by up to 30% accuracy points in a few-shot domain adaptation setting.
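The abstract describes MPR only at a high level. The sketch below is an illustrative, hypothetical rendering of the general retrieve-then-generate idea it names: embed the test image and question, retrieve similar training QA pairs by embedding similarity, splice them into a textual prompt, and let a generative decoder produce a free-text answer. The encoder and generator here are stub functions standing in for pretrained models (e.g., a CLIP-style encoder and a T5-style decoder); the function names, prompt template, and embedding sizes are assumptions for illustration, not the paper's actual components.

```python
import numpy as np

# Hypothetical stand-ins: in practice these would be a pretrained multimodal
# encoder and a pretrained generative language model. Shapes and names are
# illustrative assumptions only.
def encode_multimodal(image, question):
    """Return a joint embedding for an (image, question) pair (stub)."""
    rng = np.random.default_rng(abs(hash(question)) % (2**32))
    return rng.standard_normal(512)

def generate_answer(prompt):
    """Placeholder for a seq2seq decoder that emits a free-text answer."""
    return f"<answer generated from prompt: {prompt[:60]}...>"

def retrieve_prompts(query_emb, memory, k=3):
    """Retrieve the k most similar stored examples by cosine similarity."""
    embs = np.stack([m["emb"] for m in memory])
    sims = embs @ query_emb / (
        np.linalg.norm(embs, axis=1) * np.linalg.norm(query_emb) + 1e-8
    )
    top = np.argsort(-sims)[:k]
    return [memory[i] for i in top]

def answer(image, question, memory):
    """Retrieve similar QA pairs, build an in-context prompt, then generate."""
    query_emb = encode_multimodal(image, question)
    retrieved = retrieve_prompts(query_emb, memory, k=min(3, len(memory)))
    context = " ".join(
        f"question: {r['question']} answer: {r['answer']}" for r in retrieved
    )
    prompt = f"{context} question: {question} answer:"
    return generate_answer(prompt)

# Toy usage with a tiny retrieval memory of (question, answer) pairs.
memory = [
    {"emb": encode_multimodal(None, q), "question": q, "answer": a}
    for q, a in [("Is there a fracture in the image?", "no"),
                 ("What organ is shown?", "lung")]
]
print(answer(None, "What organ is abnormal?", memory))
```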
- Anthology ID: 2023.findings-acl.158
- Volume: Findings of the Association for Computational Linguistics: ACL 2023
- Month: July
- Year: 2023
- Address: Toronto, Canada
- Editors: Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 2518–2535
- URL: https://preview.aclanthology.org/add_missing_videos/2023.findings-acl.158/
- DOI: 10.18653/v1/2023.findings-acl.158
- Cite (ACL): Timothy Ossowski and Junjie Hu. 2023. Retrieving Multimodal Prompts for Generative Visual Question Answering. In Findings of the Association for Computational Linguistics: ACL 2023, pages 2518–2535, Toronto, Canada. Association for Computational Linguistics.
- Cite (Informal): Retrieving Multimodal Prompts for Generative Visual Question Answering (Ossowski & Hu, Findings 2023)
- PDF: https://preview.aclanthology.org/add_missing_videos/2023.findings-acl.158.pdf