Abstract
Multimodal pre-training with text, layout, and image has made significant progress for Visually Rich Document Understanding (VRDU), especially for fixed-layout documents such as scanned document images. However, a large number of digital documents have layouts that are not fixed and must be rendered interactively and dynamically for visualization, making existing layout-based pre-training approaches difficult to apply. In this paper, we propose MarkupLM for document understanding tasks where markup languages such as HTML and XML serve as the backbone, and text and markup information are jointly pre-trained. Experimental results show that the pre-trained MarkupLM significantly outperforms strong existing baselines on several document understanding tasks. The pre-trained model and code will be publicly available at https://aka.ms/markuplm.
- Anthology ID:
- 2022.acl-long.420
- Volume:
- Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
- Month:
- May
- Year:
- 2022
- Address:
- Dublin, Ireland
- Editors:
- Smaranda Muresan, Preslav Nakov, Aline Villavicencio
- Venue:
- ACL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 6078–6087
- URL:
- https://aclanthology.org/2022.acl-long.420
- DOI:
- 10.18653/v1/2022.acl-long.420
- Cite (ACL):
- Junlong Li, Yiheng Xu, Lei Cui, and Furu Wei. 2022. MarkupLM: Pre-training of Text and Markup Language for Visually Rich Document Understanding. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6078–6087, Dublin, Ireland. Association for Computational Linguistics.
- Cite (Informal):
- MarkupLM: Pre-training of Text and Markup Language for Visually Rich Document Understanding (Li et al., ACL 2022)
- PDF:
https://aclanthology.org/2022.acl-long.420.pdf
- Data:
- WebSRC
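
The abstract states that the pre-trained model and code are publicly available. As a minimal usage sketch, the snippet below runs extractive question answering over raw HTML with the Hugging Face Transformers port of MarkupLM; the checkpoint name `microsoft/markuplm-base-finetuned-websrc` (a variant fine-tuned on the WebSRC dataset listed above) and the Transformers class names are assumptions about the released artifacts, not details taken from the paper itself.

```python
# Minimal sketch: extractive QA over raw HTML with MarkupLM.
# Assumes the Hugging Face Transformers port of MarkupLM and the
# WebSRC-finetuned checkpoint "microsoft/markuplm-base-finetuned-websrc".
import torch
from transformers import MarkupLMProcessor, MarkupLMForQuestionAnswering

processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base-finetuned-websrc")
model = MarkupLMForQuestionAnswering.from_pretrained("microsoft/markuplm-base-finetuned-websrc")

# The processor parses the HTML, extracting text nodes and their XPath
# expressions, so both text and markup structure reach the model.
html = "<html><body><h1>MarkupLM</h1><p>Pre-training of text and markup language.</p></body></html>"
question = "What is MarkupLM about?"

encoding = processor(html, questions=question, return_tensors="pt")
with torch.no_grad():
    outputs = model(**encoding)

# Decode the highest-scoring answer span.
start = outputs.start_logits.argmax(-1).item()
end = outputs.end_logits.argmax(-1).item()
print(processor.decode(encoding.input_ids[0, start : end + 1]))
```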