Transferring General Multimodal Pretrained Models to Text Recognition
Junyang Lin | Xuancheng Ren | Yichang Zhang | Gao Liu | Peng Wang | An Yang | Chang Zhou
Findings of the Association for Computational Linguistics: ACL 2023
This paper proposes a new method, OFA-OCR, to transfer multimodal pretrained models to text recognition. Specifically, we recast text recognition as image captioning and directly transfer a unified vision-language pretrained model to the end task. Without pretraining on large-scale annotated or synthetic text recognition data, OFA-OCR outperforms the baselines and achieves state-of-the-art performance in the Chinese text recognition benchmark. Additionally, we construct an OCR pipeline with OFA-OCR, and we demonstrate that it can achieve competitive performance with the product-level API.