MMRefine: Unveiling the Obstacles to Robust Refinement in Multimodal Large Language Models

Gio Paik, Geewook Kim, Jinbae Im


Abstract
This paper introduces MMRefine, a MultiModal Refinement benchmark designed to evaluate the error refinement capabilities of Multimodal Large Language Models (MLLMs). As the emphasis shifts toward enhancing reasoning during inference, MMRefine provides a framework that evaluates MLLMs’ abilities to detect and correct errors across six distinct scenarios, going beyond simply comparing final accuracy before and after refinement. Furthermore, the benchmark analyzes refinement performance by categorizing errors into six error types. Experiments with various open and closed MLLMs reveal bottlenecks and factors impeding refinement performance, highlighting areas for improvement toward effective reasoning enhancement. Our code and dataset are publicly available at https://github.com/naver-ai/MMRefine.
Anthology ID:
2025.findings-acl.1378
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venues:
Findings | WS
Publisher:
Association for Computational Linguistics
Pages:
26883–26904
URL:
https://preview.aclanthology.org/ingestion-acl-25/2025.findings-acl.1378/
Cite (ACL):
Gio Paik, Geewook Kim, and Jinbae Im. 2025. MMRefine: Unveiling the Obstacles to Robust Refinement in Multimodal Large Language Models. In Findings of the Association for Computational Linguistics: ACL 2025, pages 26883–26904, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
MMRefine: Unveiling the Obstacles to Robust Refinement in Multimodal Large Language Models (Paik et al., Findings 2025)
PDF:
https://preview.aclanthology.org/ingestion-acl-25/2025.findings-acl.1378.pdf