SURf: Teaching Large Vision-Language Models to Selectively Utilize Retrieved Information
Jiashuo Sun, Jihai Zhang, Yucheng Zhou, Zhaochen Su, Xiaoye Qu, Yu Cheng
Abstract
Large Vision-Language Models (LVLMs) have become pivotal at the intersection of computer vision and natural language processing. However, the full potential of LVLMs’ Retrieval-Augmented Generation (RAG) capabilities remains underutilized. Existing works either focus solely on the text modality or are limited to specific tasks. Moreover, most LVLMs struggle to selectively utilize retrieved information and are sensitive to irrelevant or misleading references. To address these challenges, we propose a self-refinement framework designed to teach LVLMs to Selectively Utilize Retrieved Information (SURf). Specifically, when given questions that are incorrectly answered by the LVLM backbone, we obtain references that help correct the answers (positive references) and those that do not (negative references). We then fine-tune the LVLM backbone using a combination of these positive and negative references. Our experiments across three tasks and seven datasets demonstrate that our framework significantly enhances LVLMs’ ability to effectively utilize retrieved multimodal references and improves their robustness against irrelevant or misleading information. The source code is available at https://anonymous.4open.science/r/SURf-6433.
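The abstract outlines SURf's training-data construction: for questions the backbone answers incorrectly, retrieved references are split into positives (those whose inclusion corrects the answer) and negatives (those that do not help), and the backbone is then fine-tuned on a mix of both. The snippet below is a minimal, hypothetical sketch of that selection step under assumed interfaces; the `lvlm.generate` and `retriever.search` calls, the exact-match answer check, and all names are illustrative assumptions, not the paper's released implementation.

```python
# Hypothetical sketch of the positive/negative reference selection described in
# the abstract; the `lvlm` and `retriever` interfaces are assumptions, not the
# authors' released code.
from dataclasses import dataclass, field

@dataclass
class RagExample:
    question: str
    gold_answer: str
    positives: list = field(default_factory=list)   # references that fix the answer
    negatives: list = field(default_factory=list)   # references that do not help

def build_rag_example(lvlm, retriever, image, question, gold_answer, top_k=10):
    """Collect positive/negative retrieved references for one failed question."""
    # Only questions the backbone answers incorrectly without retrieval are kept.
    if lvlm.generate(image, question) == gold_answer:
        return None

    example = RagExample(question, gold_answer)
    for ref in retriever.search(image, question, top_k=top_k):
        # A reference counts as "positive" if prepending it lets the model
        # produce the gold answer, and "negative" otherwise.
        augmented_prompt = f"Reference: {ref.text}\nQuestion: {question}"
        prediction = lvlm.generate(image, augmented_prompt)
        (example.positives if prediction == gold_answer else example.negatives).append(ref)

    # Keep only examples containing both kinds of references, so fine-tuning can
    # contrast helpful context with irrelevant or misleading context.
    return example if example.positives and example.negatives else None
```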
- Anthology ID:
- 2024.emnlp-main.434
- Volume:
- Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
- Month:
- November
- Year:
- 2024
- Address:
- Miami, Florida, USA
- Editors:
- Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
- Venue:
- EMNLP
- Publisher:
- Association for Computational Linguistics
- Pages:
- 7611–7629
- URL:
- https://preview.aclanthology.org/add-emnlp-2024-awards/2024.emnlp-main.434/
- DOI:
- 10.18653/v1/2024.emnlp-main.434
- Cite (ACL):
- Jiashuo Sun, Jihai Zhang, Yucheng Zhou, Zhaochen Su, Xiaoye Qu, and Yu Cheng. 2024. SURf: Teaching Large Vision-Language Models to Selectively Utilize Retrieved Information. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 7611–7629, Miami, Florida, USA. Association for Computational Linguistics.
- Cite (Informal):
- SURf: Teaching Large Vision-Language Models to Selectively Utilize Retrieved Information (Sun et al., EMNLP 2024)
- PDF:
- https://preview.aclanthology.org/add-emnlp-2024-awards/2024.emnlp-main.434.pdf