Transferring General Multimodal Pretrained Models to Text Recognition

Junyang Lin, Xuancheng Ren, Yichang Zhang, Gao Liu, Peng Wang, An Yang, Chang Zhou


Abstract
This paper proposes a new method, OFA-OCR, to transfer multimodal pretrained models to text recognition. Specifically, we recast text recognition as image captioning and directly transfer a unified vision-language pretrained model to the end task. Without pretraining on large-scale annotated or synthetic text recognition data, OFA-OCR outperforms the baselines and achieves state-of-the-art performance on the Chinese text recognition benchmark. Additionally, we construct an OCR pipeline with OFA-OCR and demonstrate that it achieves performance competitive with a product-level API.
Anthology ID:
2023.findings-acl.37
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
588–597
URL:
https://aclanthology.org/2023.findings-acl.37
DOI:
10.18653/v1/2023.findings-acl.37
Cite (ACL):
Junyang Lin, Xuancheng Ren, Yichang Zhang, Gao Liu, Peng Wang, An Yang, and Chang Zhou. 2023. Transferring General Multimodal Pretrained Models to Text Recognition. In Findings of the Association for Computational Linguistics: ACL 2023, pages 588–597, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Transferring General Multimodal Pretrained Models to Text Recognition (Lin et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.37.pdf