Pre-training Data Quality and Quantity for a Low-Resource Language: New Corpus and BERT Models for Maltese

Kurt Micallef, Albert Gatt, Marc Tanti, Lonneke van der Plas, Claudia Borg


Abstract
Multilingual language models such as mBERT have seen impressive cross-lingual transfer to a variety of languages, but many languages remain excluded from these models. In this paper, we analyse the effect of pre-training with monolingual data for a low-resource language that is not included in mBERT – Maltese – with a range of pre-training setups. We conduct evaluations with the newly pre-trained models on three morphosyntactic tasks – dependency parsing, part-of-speech tagging, and named-entity recognition – and one semantic classification task – sentiment analysis. We also present a newly created corpus for Maltese, and determine the effect that the pre-training data size and domain have on the downstream performance. Our results show that using a mixture of pre-training domains is often superior to using Wikipedia text only. We also find that a fraction of this corpus is enough to make significant leaps in performance over Wikipedia-trained models. We pre-train and compare two models on the new corpus: a monolingual BERT model trained from scratch (BERTu), and a further pre-trained multilingual BERT (mBERTu). The models achieve state-of-the-art performance on these tasks, despite the new corpus being considerably smaller than typically used corpora for high-resourced languages. On average, BERTu outperforms or performs competitively with mBERTu, and the largest gains are observed for higher-level tasks.
Anthology ID: 2022.deeplo-1.10
Volume: Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing
Month: July
Year: 2022
Address: Hybrid
Venue: DeepLo
Publisher: Association for Computational Linguistics
Pages: 90–101
URL: https://aclanthology.org/2022.deeplo-1.10
DOI: 10.18653/v1/2022.deeplo-1.10
Cite (ACL):
Kurt Micallef, Albert Gatt, Marc Tanti, Lonneke van der Plas, and Claudia Borg. 2022. Pre-training Data Quality and Quantity for a Low-Resource Language: New Corpus and BERT Models for Maltese. In Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing, pages 90–101, Hybrid. Association for Computational Linguistics.
Cite (Informal):
Pre-training Data Quality and Quantity for a Low-Resource Language: New Corpus and BERT Models for Maltese (Micallef et al., DeepLo 2022)
PDF: https://preview.aclanthology.org/auto-file-uploads/2022.deeplo-1.10.pdf
Video: https://preview.aclanthology.org/auto-file-uploads/2022.deeplo-1.10.mp4
Code: mlrs/bertu
Data: Korpus Malti
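
The mlrs/bertu repository listed under Code hosts the pre-trained models described in the abstract. As a minimal illustrative sketch only, assuming the checkpoints are also published on the Hugging Face Hub under ids such as MLRS/BERTu and MLRS/mBERTu (these ids are not stated on this page), the models could be loaded with the transformers library as follows:

# Minimal sketch (not from the paper): loading the released Maltese BERT
# models with Hugging Face transformers. The Hub ids below are assumptions;
# adjust them if the actual repository names differ.
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline

model_name = "MLRS/BERTu"  # assumed Hub id; "MLRS/mBERTu" for the further pre-trained mBERT

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Illustrative masked-token prediction on a Maltese sentence
# ("Malta is an island in the [MASK].").
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
text = f"Malta hija gżira fil-{tokenizer.mask_token}."
for prediction in fill_mask(text):
    print(prediction["token_str"], round(prediction["score"], 3))

In the same spirit, such checkpoints would typically be fine-tuned on the downstream tasks evaluated in the paper (part-of-speech tagging, dependency parsing, named-entity recognition, and sentiment analysis) rather than used only for masked-token prediction.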