Abstract
In Large Visual Language Models (LVLMs), the efficacy of In-Context Learning (ICL) remains limited by challenges in cross-modal interactions and representation disparities. To overcome these challenges, we introduce a novel Visual In-Context Learning (VICL) method comprising Visual Demonstration Retrieval, Intent-Oriented Image Summarization, and Intent-Oriented Demonstration Composition. Our approach retrieves images via a “Retrieval & Rerank” paradigm, summarizes images with task intent and task-specific visual parsing, and composes language-based demonstrations that reduce the token count and alleviate the cross-modal interaction problem. Experimental evaluations on five visual reasoning datasets demonstrate the effectiveness of our method. Moreover, our extensive experiments leverage information flow analysis to elucidate its effectiveness and investigate the impact of demonstration length and position for LVLMs. The use of in-context unlearning further shows promise in resetting specific model knowledge without retraining.
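The abstract describes a retrieve-then-rerank step followed by language-based demonstration composition. The sketch below is only a minimal illustration of that general pattern, not the authors' implementation: the candidate format, the `embed`-style placeholder inputs, and the intent-aware rerank score are assumptions made for illustration.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def retrieve_and_rerank(query_emb, candidates, intent_emb, k=20, m=4):
    """Stage 1: retrieve top-k demonstrations by visual similarity.
    Stage 2: rerank them with an intent-aware score and keep the top-m.
    `candidates` is a list of (image_id, image_emb, summary_emb) triples;
    how those embeddings are produced is assumed, not specified here."""
    topk = sorted(candidates, key=lambda c: cosine(query_emb, c[1]), reverse=True)[:k]
    reranked = sorted(
        topk,
        key=lambda c: cosine(query_emb, c[1]) + cosine(intent_emb, c[2]),
        reverse=True,
    )
    return reranked[:m]

def compose_demonstrations(demos, query_summary, task_intent):
    """Compose purely textual demonstrations so the LVLM receives short
    language-based examples instead of additional images.
    `demos` is a list of (image_id, intent_oriented_summary, answer)."""
    parts = [f"Task intent: {task_intent}"]
    for image_id, summary, answer in demos:
        parts.append(f"Example ({image_id}): {summary}\nAnswer: {answer}")
    parts.append(f"Query: {query_summary}\nAnswer:")
    return "\n\n".join(parts)
```

In this hypothetical setup, each retrieved image would be summarized with an intent-oriented caption before composition, so the final prompt passed to the LVLM is text-only and much shorter than a multi-image context.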
- Anthology ID: 2024.findings-acl.940
- Volume: Findings of the Association for Computational Linguistics: ACL 2024
- Month: August
- Year: 2024
- Address: Bangkok, Thailand
- Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 15890–15902
- URL: https://aclanthology.org/2024.findings-acl.940
- DOI: 10.18653/v1/2024.findings-acl.940
- Cite (ACL): Yucheng Zhou, Xiang Li, Qianning Wang, and Jianbing Shen. 2024. Visual In-Context Learning for Large Vision-Language Models. In Findings of the Association for Computational Linguistics: ACL 2024, pages 15890–15902, Bangkok, Thailand. Association for Computational Linguistics.
- Cite (Informal): Visual In-Context Learning for Large Vision-Language Models (Zhou et al., Findings 2024)
- PDF: https://preview.aclanthology.org/autopr/2024.findings-acl.940.pdf