Can Hallucination Correction Improve Video-Language Alignment?

Lingjun Zhao, Mingyang Xie, Paola Cascante-Bonilla, Hal Daumé III, Kwonjoon Lee


Abstract
Large Vision-Language Models often generate hallucinated content that is not grounded in their visual inputs. While prior work focuses on mitigating hallucinations, we instead explore leveraging hallucination correction as a training objective to improve video-language alignment. We introduce HACA, a self-training framework that learns to correct hallucinations in descriptions that do not align with the video content. By identifying and correcting these inconsistencies, HACA enhances the model's ability to align video and textual representations for spatio-temporal reasoning. Our experimental results show consistent gains on video-caption binding and text-to-video retrieval tasks, demonstrating that hallucination correction-inspired tasks serve as an effective strategy for improving vision and language alignment.
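To make the training objective concrete, the following is a minimal illustrative sketch in Python of how hallucination-correction pairs for self-training could be constructed: a ground-truth caption is deliberately corrupted with objects or actions absent from the video, and the model is trained to recover the faithful caption given the video and the corrupted description. All names, substitution pools, and the prompt format here are hypothetical assumptions for illustration only, not the authors' implementation.

# Illustrative sketch (not the authors' code): building hallucination-
# correction training pairs. Substitution pools and prompt format are
# hypothetical; the paper's actual corruption strategy may differ.
import random

# Hypothetical word pools used to inject hallucinated content
# (objects/actions the video does not contain).
OBJECT_SWAPS = {"dog": "cat", "guitar": "piano", "ball": "frisbee"}
ACTION_SWAPS = {"running": "sitting", "throws": "catches"}

def corrupt_caption(caption: str, p: float = 0.5) -> str:
    """Randomly swap object/action words to create a hallucinated caption."""
    out = []
    for w in caption.split():
        swap = OBJECT_SWAPS.get(w) or ACTION_SWAPS.get(w)
        out.append(swap if swap and random.random() < p else w)
    return " ".join(out)

def make_training_pair(video_id: str, caption: str) -> dict:
    """One self-training example: the model sees the video plus a
    hallucinated description and must generate the faithful one."""
    hallucinated = corrupt_caption(caption)
    prompt = f"Correct this description of the video: {hallucinated}"
    return {"video": video_id, "input": prompt, "target": caption}

if __name__ == "__main__":
    random.seed(0)
    print(make_training_pair("vid_001", "a dog running with a ball"))

Under this framing, the correction objective forces the model to compare the described entities and actions against the actual video content, which is what plausibly drives the improved video-language alignment reported in the paper.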
Anthology ID:
2025.findings-acl.1314
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venues:
Findings | WS
Publisher:
Association for Computational Linguistics
Pages:
25636–25646
URL:
https://preview.aclanthology.org/ingestion-acl-25/2025.findings-acl.1314/
Cite (ACL):
Lingjun Zhao, Mingyang Xie, Paola Cascante-Bonilla, Hal Daumé III, and Kwonjoon Lee. 2025. Can Hallucination Correction Improve Video-Language Alignment?. In Findings of the Association for Computational Linguistics: ACL 2025, pages 25636–25646, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Can Hallucination Correction Improve Video-Language Alignment? (Zhao et al., Findings 2025)
PDF:
https://preview.aclanthology.org/ingestion-acl-25/2025.findings-acl.1314.pdf