Retrieval Visual Contrastive Decoding to Mitigate Object Hallucinations in Large Vision-Language Models

Jihoon Lee, Min Song


Abstract
Despite significant advancements in Large Vision-Language Models, Object Hallucination (OH) remains a persistent challenge. Building upon prior studies on contrastive decoding that address this issue without requiring additional model training, we introduce RVCD (Retrieval Visual Contrastive Decoding), an advanced method to suppress OH. RVCD leverages both negative and positive images at the logit level, explicitly referencing AI-generated images designed to represent a single concept. Our approach demonstrates substantial improvements over existing decoding-based methods.
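The abstract only summarizes the method; the exact RVCD formulation appears in the paper itself. As a rough illustration of what logit-level contrastive decoding with retrieved negative and positive reference images can look like, here is a minimal Python/PyTorch sketch. The function name, the alpha/beta weights, and the specific combination rule are illustrative assumptions, not the published RVCD algorithm.

    import torch

    def rvcd_style_adjust_logits(logits_query, logits_negative, logits_positive,
                                 alpha=1.0, beta=1.0):
        # Hypothetical logit-level adjustment: push the next-token distribution
        # away from logits conditioned on a retrieved negative reference image
        # and toward logits conditioned on a retrieved positive (single-concept)
        # reference image. The weights and formula are assumptions for illustration.
        return (logits_query
                + alpha * (logits_query - logits_negative)
                + beta * (logits_positive - logits_query))

    # Toy usage with random logits over a 10-token vocabulary.
    vocab = 10
    lo = torch.randn(vocab)  # logits given the original query image
    ln = torch.randn(vocab)  # logits given a negative reference image
    lp = torch.randn(vocab)  # logits given a positive reference image
    next_token = torch.argmax(rvcd_style_adjust_logits(lo, ln, lp))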
Anthology ID: 2025.findings-acl.430
Volume: Findings of the Association for Computational Linguistics: ACL 2025
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 8200–8219
URL: https://preview.aclanthology.org/display_plenaries/2025.findings-acl.430/
Cite (ACL): Jihoon Lee and Min Song. 2025. Retrieval Visual Contrastive Decoding to Mitigate Object Hallucinations in Large Vision-Language Models. In Findings of the Association for Computational Linguistics: ACL 2025, pages 8200–8219, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): Retrieval Visual Contrastive Decoding to Mitigate Object Hallucinations in Large Vision-Language Models (Lee & Song, Findings 2025)
PDF: https://preview.aclanthology.org/display_plenaries/2025.findings-acl.430.pdf