Lisa Veiber
2022
LuxemBERT: Simple and Practical Data Augmentation in Language Model Pre-Training for Luxembourgish
Cedric Lothritz | Bertrand Lebichot | Kevin Allix | Lisa Veiber | Tegawendé Bissyandé | Jacques Klein | Andrey Boytsov | Clément Lefebvre | Anne Goujon
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Pre-trained Language Models such as BERT have become ubiquitous in NLP, where they achieve state-of-the-art performance on most NLP tasks. While these models are readily available for English and other widely spoken languages, they remain scarce for low-resource languages such as Luxembourgish. In this paper, we present LuxemBERT, a BERT model for the Luxembourgish language that we create using the following approach: we augment the pre-training dataset with text data from a closely related language that we partially translate using a simple and straightforward method. We are then able to produce the LuxemBERT model, which we show to be effective for various NLP tasks: it outperforms a simple baseline built with the available Luxembourgish text data as well as the multilingual mBERT model, which is currently the only option for transformer-based language models in Luxembourgish. Furthermore, we present datasets for various downstream NLP tasks that we created for this study and will make available to researchers on request.
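The augmentation step described in the abstract can be pictured with a minimal sketch: tokens from a sentence in a closely related language are swapped for Luxembourgish equivalents whenever a bilingual dictionary covers them, yielding partially translated text for pre-training. The dictionary entries, the replacement rate, and the helper name below are illustrative assumptions, not the paper's exact recipe.

```python
# Hedged sketch of dictionary-based partial translation for data augmentation.
# The toy dictionary and replacement rule are assumptions for illustration only.
import random

# Hypothetical word-level dictionary from the related source language to Luxembourgish.
toy_dictionary = {
    "und": "an",
    "ich": "ech",
    "nicht": "net",
    "gehe": "ginn",
}

def partially_translate(sentence: str, dictionary: dict, rate: float = 0.5) -> str:
    """Replace dictionary-covered tokens with their Luxembourgish counterparts.

    Tokens without a dictionary entry are kept as-is, so the output mixes both
    languages -- the augmented sentence is only *partially* translated.
    """
    out = []
    for token in sentence.split():
        key = token.lower()
        if key in dictionary and random.random() < rate:
            out.append(dictionary[key])
        else:
            out.append(token)
    return " ".join(out)

if __name__ == "__main__":
    print(partially_translate("ich gehe nicht nach hause", toy_dictionary))
```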
2020
Evaluating Pretrained Transformer-based Models on the Task of Fine-Grained Named Entity Recognition
Cedric Lothritz | Kevin Allix | Lisa Veiber | Tegawendé F. Bissyandé | Jacques Klein
Proceedings of the 28th International Conference on Computational Linguistics
Named Entity Recognition (NER) is a fundamental Natural Language Processing (NLP) task and remains an active research field. In recent years, transformer models, and more specifically the BERT model developed at Google, revolutionised the field of NLP. While the performance of transformer-based approaches such as BERT has been studied for NER, there has not yet been a study of the fine-grained Named Entity Recognition (FG-NER) task. In this paper, we compare three transformer-based models (BERT, RoBERTa, and XLNet) to two non-transformer-based models (CRF and BiLSTM-CNN-CRF), applying each model to a multitude of distinct domains. We find that the transformer-based models incrementally outperform the non-transformer-based models in most domains with respect to the F1 score. Furthermore, we find that the choice of domain significantly influences performance, regardless of the respective data size or the model chosen.
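As a rough illustration of the transformer side of this comparison, the sketch below loads a BERT token-classification head for a toy fine-grained label set with the Hugging Face transformers library; the model name, label set, and example sentence are placeholder assumptions rather than the paper's actual experimental setup.

```python
# Minimal sketch of a transformer-based (FG-)NER setup with Hugging Face transformers.
# Model name, labels, and input are illustrative assumptions, not the paper's configuration.
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-PERSON", "I-PERSON", "B-LOCATION", "I-LOCATION"]  # toy fine-grained label set

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)

# Tokenise one sentence and predict a label per sub-word token.
# In a real fine-tuning run, word-level gold labels must first be aligned
# to these sub-word tokens before computing the token-classification loss.
encoding = tokenizer("Ada Lovelace was born in London", return_tensors="pt")
outputs = model(**encoding)
predicted_label_ids = outputs.logits.argmax(dim=-1)
print(predicted_label_ids)
```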