Abstract
The new pre-train-then-fine-tune paradigm in Natural Language Processing (NLP) has made important performance gains accessible to a wider audience. Once pre-trained, a large language model can be deployed with comparatively small infrastructure requirements, and it offers robust performance on many NLP tasks. The Digital Humanities community has been an early adopter of this paradigm. Yet a large part of this community is concerned with the application of NLP algorithms to historical texts, for which large models pre-trained on contemporary text may not provide optimal results. In this paper, we present “MacBERTh”, a transformer-based language model pre-trained on historical English, and exhaustively assess its benefits on a large set of relevant downstream tasks. Our experiments highlight that, despite some differences across target time periods, pre-training on historical language from scratch outperforms models pre-trained on present-day language and later adapted to historical language.
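As an illustration of how such a historically pre-trained masked language model can be queried, the sketch below uses the Hugging Face transformers fill-mask pipeline. The checkpoint identifier "emanjavacas/MacBERTh" is an assumption and should be replaced with the name under which the authors actually released the model.

```python
# Minimal sketch: querying a historically pre-trained masked language model
# with the Hugging Face `transformers` fill-mask pipeline.
# NOTE: the model id "emanjavacas/MacBERTh" is an assumption; substitute the
# checkpoint name actually published by the authors.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="emanjavacas/MacBERTh")

# Early Modern English sentence with one masked token; a model pre-trained on
# historical English should rank period-appropriate completions highly.
for prediction in fill_mask("Thou art a [MASK] knave."):
    print(f"{prediction['token_str']:>12}  {prediction['score']:.3f}")
```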
- Anthology ID:
- 2021.nlp4dh-1.4
- Volume:
- Proceedings of the Workshop on Natural Language Processing for Digital Humanities
- Month:
- December
- Year:
- 2021
- Address:
- NIT Silchar, India
- Venue:
- NLP4DH
- Publisher:
- NLP Association of India (NLPAI)
- Pages:
- 23–36
- URL:
- https://aclanthology.org/2021.nlp4dh-1.4
- Cite (ACL):
- Enrique Manjavacas Arevalo and Lauren Fonteyn. 2021. MacBERTh: Development and Evaluation of a Historically Pre-trained Language Model for English (1450-1950). In Proceedings of the Workshop on Natural Language Processing for Digital Humanities, pages 23–36, NIT Silchar, India. NLP Association of India (NLPAI).
- Cite (Informal):
- MacBERTh: Development and Evaluation of a Historically Pre-trained Language Model for English (1450-1950) (Manjavacas Arevalo & Fonteyn, NLP4DH 2021)
- PDF:
- https://preview.aclanthology.org/paclic-22-ingestion/2021.nlp4dh-1.4.pdf