Mitigating Hallucination in Multimodal Large Language Model via Hallucination-targeted Direct Preference Optimization

Yuhan Fu, Ruobing Xie, Xingwu Sun, Zhanhui Kang, Xirong Li


Abstract
Multimodal Large Language Models (MLLMs) are known to hallucinate, which limits their practical applications. Recent works have attempted to apply Direct Preference Optimization (DPO) to enhance the performance of MLLMs, but have shown inconsistent improvements in mitigating hallucinations. To address this issue more effectively, we introduce Hallucination-targeted Direct Preference Optimization (HDPO) to reduce hallucinations in MLLMs. Unlike previous approaches, our method addresses hallucinations according to their diverse forms and underlying causes. Specifically, we develop three types of preference pair data targeting the following causes of MLLM hallucinations: (1) insufficient visual capabilities, (2) long-context generation, and (3) multimodal conflicts. Experimental results demonstrate that our method achieves superior performance across multiple hallucination evaluation datasets, surpassing most state-of-the-art (SOTA) methods and highlighting the potential of our approach. Ablation studies and in-depth analyses further confirm the effectiveness of our method and indicate room for further gains through scaling up.
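The abstract describes training on hallucination-targeted preference pairs with DPO. As a rough illustration of the underlying objective only, the sketch below implements the standard DPO loss in PyTorch; it is not the authors' code, and the HDPO-specific construction of the three pair types (visual capability, long-context, multimodal conflict) is only referenced in a comment as an assumption about where the chosen/rejected responses would come from.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO loss over a batch of (chosen, rejected) responses.

    Each argument is a tensor of summed token log-probabilities for the
    chosen or rejected response under the policy or frozen reference model.
    In an HDPO-style setup (assumption), the pairs would be drawn from the
    three hallucination-targeted constructions described in the abstract.
    """
    # Implicit rewards: scaled log-ratios against the reference model.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between chosen and rejected implicit rewards.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy usage with random log-probabilities for a batch of 4 preference pairs.
loss = dpo_loss(torch.randn(4), torch.randn(4), torch.randn(4), torch.randn(4))
```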
Anthology ID:
2025.findings-acl.850
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
16563–16577
URL:
https://preview.aclanthology.org/mtsummit-25-ingestion/2025.findings-acl.850/
DOI:
10.18653/v1/2025.findings-acl.850
Cite (ACL):
Yuhan Fu, Ruobing Xie, Xingwu Sun, Zhanhui Kang, and Xirong Li. 2025. Mitigating Hallucination in Multimodal Large Language Model via Hallucination-targeted Direct Preference Optimization. In Findings of the Association for Computational Linguistics: ACL 2025, pages 16563–16577, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Mitigating Hallucination in Multimodal Large Language Model via Hallucination-targeted Direct Preference Optimization (Fu et al., Findings 2025)
PDF:
https://preview.aclanthology.org/mtsummit-25-ingestion/2025.findings-acl.850.pdf