Multimodal Large Language Models for Text-rich Image Understanding: A Comprehensive Review
Pei Fu, Tongkun Guan, Zining Wang, Zhentao Guo, Chen Duan, Hao Sun, Boming Chen, Qianyi Jiang, Jiayao Ma, Kai Zhou, Junfeng Luo
Abstract
The recent emergence of Multi-modal Large Language Models (MLLMs) has introduced a new dimension to the Text-rich Image Understanding (TIU) field, with models demonstrating impressive and inspiring performance. However, their rapid evolution and widespread adoption have made it increasingly challenging to keep up with the latest advancements. To address this, we present a systematic and comprehensive survey to facilitate further research on TIU MLLMs. Initially, we outline the timeline, architecture, and pipeline of nearly all TIU MLLMs. Then, we review the performance of selected models on mainstream benchmarks. Finally, we explore promising directions, challenges, and limitations within the field.
- Anthology ID: 2025.findings-acl.1023
- Volume: Findings of the Association for Computational Linguistics: ACL 2025
- Month: July
- Year: 2025
- Address: Vienna, Austria
- Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 19941–19958
- URL: https://preview.aclanthology.org/display_plenaries/2025.findings-acl.1023/
- Cite (ACL): Pei Fu, Tongkun Guan, Zining Wang, Zhentao Guo, Chen Duan, Hao Sun, Boming Chen, Qianyi Jiang, Jiayao Ma, Kai Zhou, and Junfeng Luo. 2025. Multimodal Large Language Models for Text-rich Image Understanding: A Comprehensive Review. In Findings of the Association for Computational Linguistics: ACL 2025, pages 19941–19958, Vienna, Austria. Association for Computational Linguistics.
- Cite (Informal): Multimodal Large Language Models for Text-rich Image Understanding: A Comprehensive Review (Fu et al., Findings 2025)
- PDF: https://preview.aclanthology.org/display_plenaries/2025.findings-acl.1023.pdf
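For convenience, the bibliographic fields above can be assembled into a BibTeX entry as sketched below; the citation key (`fu-etal-2025-multimodal`) and the exact field formatting are assumptions and may differ from the official ACL Anthology export.

```bibtex
@inproceedings{fu-etal-2025-multimodal,
    title     = "Multimodal Large Language Models for Text-rich Image Understanding: A Comprehensive Review",
    author    = "Fu, Pei and Guan, Tongkun and Wang, Zining and Guo, Zhentao and Duan, Chen and Sun, Hao and Chen, Boming and Jiang, Qianyi and Ma, Jiayao and Zhou, Kai and Luo, Junfeng",
    editor    = "Che, Wanxiang and Nabende, Joyce and Shutova, Ekaterina and Pilehvar, Mohammad Taher",
    booktitle = "Findings of the Association for Computational Linguistics: ACL 2025",
    month     = jul,
    year      = "2025",
    address   = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
    url       = "https://preview.aclanthology.org/display_plenaries/2025.findings-acl.1023/",
    pages     = "19941--19958",
}
```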