Lacuna Reconstruction: Self-Supervised Pre-Training for Low-Resource Historical Document Transcription

Nikolai Vogler, Jonathan Allen, Matthew Miller, Taylor Berg-Kirkpatrick


Abstract
We present a self-supervised pre-training approach for learning rich visual language representations for both handwritten and printed historical document transcription. After supervised fine-tuning of our pre-trained encoder representations for low-resource document transcription in two languages, (1) a heterogeneous set of handwritten Islamicate manuscript images and (2) early modern English printed documents, we show a meaningful improvement in recognition accuracy over the same supervised model trained from scratch with as few as 30 line image transcriptions for training. Our masked language model-style pre-training strategy, in which the model is trained to identify the true masked visual representation among distractors sampled from within the same line, encourages learning of robust contextualized language representations that are invariant to the scribal writing styles and printing noise present across documents.
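A minimal sketch of the contrastive masked-prediction objective the abstract describes, assuming a PyTorch-style setup. The function name, tensor shapes, and hyperparameters (num_negatives, temperature) below are illustrative assumptions, not the authors' released implementation: the positive is the true latent representation behind each masked frame, and distractors are other frames sampled from within the same line image.

import torch
import torch.nn.functional as F

def contrastive_masked_loss(context_preds, targets, mask, num_negatives=10, temperature=0.1):
    # context_preds: (B, T, D) context-encoder outputs at every frame position
    # targets:       (B, T, D) unmasked latent frame representations (positives)
    # mask:          (B, T) boolean tensor, True where a frame was masked out
    B, T, D = targets.shape
    losses = []
    for b in range(B):
        for t in mask[b].nonzero(as_tuple=True)[0].tolist():
            positive = targets[b, t]  # true representation behind the mask
            # distractors: other frame positions sampled from the same line image
            candidates = [i for i in range(T) if i != t]
            neg_idx = torch.tensor(candidates)[torch.randperm(len(candidates))[:num_negatives]]
            negatives = targets[b, neg_idx]                               # (K, D)
            options = torch.cat([positive.unsqueeze(0), negatives], 0)    # (K+1, D)
            logits = F.cosine_similarity(context_preds[b, t].unsqueeze(0), options) / temperature
            # index 0 is the true masked representation; distractors fill the rest
            losses.append(F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long)))
    return torch.stack(losses).mean()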
Anthology ID:
2022.findings-naacl.15
Volume:
Findings of the Association for Computational Linguistics: NAACL 2022
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
206–216
URL:
https://aclanthology.org/2022.findings-naacl.15
DOI:
10.18653/v1/2022.findings-naacl.15
Cite (ACL):
Nikolai Vogler, Jonathan Allen, Matthew Miller, and Taylor Berg-Kirkpatrick. 2022. Lacuna Reconstruction: Self-Supervised Pre-Training for Low-Resource Historical Document Transcription. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 206–216, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Lacuna Reconstruction: Self-Supervised Pre-Training for Low-Resource Historical Document Transcription (Vogler et al., Findings 2022)
PDF:
https://preview.aclanthology.org/nschneid-patch-2/2022.findings-naacl.15.pdf
Video:
https://preview.aclanthology.org/nschneid-patch-2/2022.findings-naacl.15.mp4