Cedric Lothritz


2024

Soft Prompt Tuning for Cross-Lingual Transfer: When Less is More
Fred Philippy | Siwen Guo | Shohreh Haddadan | Cedric Lothritz | Jacques Klein | Tegawendé F. Bissyandé
Proceedings of the 1st Workshop on Modular and Open Multilingual NLP (MOOMIN 2024)

Soft Prompt Tuning (SPT) is a parameter-efficient method for adapting pre-trained language models (PLMs) to specific tasks by inserting learnable embeddings, or soft prompts, at the input layer of the PLM, without modifying its parameters. This paper investigates the potential of SPT for cross-lingual transfer. Unlike previous studies on SPT for cross-lingual transfer, which often fine-tune both the soft prompt and the model parameters, we adhere to the original intent of SPT by keeping the model parameters frozen and training only the soft prompt. Not only does this reduce the computational cost and storage overhead compared to full-model fine-tuning, but we also demonstrate that the very parameter efficiency intrinsic to SPT can enhance cross-lingual transfer performance to linguistically distant languages. Moreover, we explore how different factors related to the prompt, such as its length or its reparameterization, affect cross-lingual transfer performance.
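A minimal sketch of the frozen-backbone setup the abstract describes: learnable prompt embeddings are prepended at the input layer while every pre-trained weight stays frozen. The model name, prompt length, pooling choice, and classification head below are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn
from transformers import AutoModel

class SoftPromptClassifier(nn.Module):
    def __init__(self, model_name="xlm-roberta-base", prompt_length=16, num_labels=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        for p in self.encoder.parameters():          # keep all PLM weights frozen
            p.requires_grad = False
        hidden = self.encoder.config.hidden_size
        # Learnable soft prompt inserted at the input layer (the only trained embeddings).
        self.soft_prompt = nn.Parameter(torch.randn(prompt_length, hidden) * 0.02)
        self.head = nn.Linear(hidden, num_labels)    # illustrative task head

    def forward(self, input_ids, attention_mask):
        word_emb = self.encoder.get_input_embeddings()(input_ids)        # (B, L, H)
        prompt = self.soft_prompt.unsqueeze(0).expand(input_ids.size(0), -1, -1)
        embeds = torch.cat([prompt, word_emb], dim=1)                    # prepend the prompt
        mask = torch.cat(
            [attention_mask.new_ones(attention_mask.size(0), prompt.size(1)), attention_mask],
            dim=1,
        )
        states = self.encoder(inputs_embeds=embeds, attention_mask=mask).last_hidden_state
        # Pool on the original <s>/[CLS] token, which now sits right after the prompt.
        return self.head(states[:, prompt.size(1)])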

2023

Comparing Pre-Training Schemes for Luxembourgish BERT Models
Cedric Lothritz | Saad Ezzini | Christoph Purschke | Tegawendé Bissyandé | Jacques Klein | Isabella Olariu | Andrey Boytsov | Clément LeFebvre | Anne Goujon
Proceedings of the 19th Conference on Natural Language Processing (KONVENS 2023)

Evaluating Data Augmentation Techniques for the Training of Luxembourgish Language Models
Isabella Olariu | Cedric Lothritz | Tegawendé Bissyandé | Jacques Klein
Proceedings of the 19th Conference on Natural Language Processing (KONVENS 2023)

Evaluating Parameter-Efficient Finetuning Approaches for Pre-trained Models on the Financial Domain
Isabella Olariu | Cedric Lothritz | Jacques Klein | Tegawendé Bissyandé | Siwen Guo | Shohreh Haddadan
Findings of the Association for Computational Linguistics: EMNLP 2023

Large-scale language models with millions, billions, or trillions of trainable parameters are becoming increasingly popular. However, they risk becoming rapidly over-parameterized, and the cost of adapting them through full fine-tuning increases significantly. Storing them also becomes progressively impractical, as it requires keeping a separate copy of all the fine-tuned weights for each task. By freezing all pre-trained weights during fine-tuning, parameter-efficient tuning approaches have become an appealing alternative to traditional fine-tuning. The performance of these approaches has been evaluated on common NLP tasks of the GLUE benchmark and shown to match full fine-tuning performance; however, their impact is less researched in domain-specific fields such as finance. This work compares the performance of a set of financial BERT-like models, adapted with different parameter-efficient tuning methods, to their fully fine-tuned counterparts. We see that results are comparable to traditional fine-tuning while gaining in time and resource efficiency.
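As a hedged illustration of the kind of setup this paper compares, the snippet below wraps a BERT-style classifier with one parameter-efficient method (LoRA) from the peft library, keeping the pre-trained weights frozen; the base checkpoint and hyperparameters are placeholder assumptions, not the financial models or the exact set of methods evaluated in the paper.

from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

# Placeholder base model; the paper works with financial BERT-like checkpoints.
base = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# One parameter-efficient method among several possible ones; hyperparameters are illustrative.
peft_config = LoraConfig(task_type=TaskType.SEQ_CLS, r=8, lora_alpha=16, lora_dropout=0.1)
model = get_peft_model(base, peft_config)

model.print_trainable_parameters()   # typically well under 1% of the parameters are trainable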

Evaluating the Impact of Text De-Identification on Downstream NLP Tasks
Cedric Lothritz | Bertrand Lebichot | Kevin Allix | Saad Ezzini | Tegawendé Bissyandé | Jacques Klein | Andrey Boytsov | Clément Lefebvre | Anne Goujon
Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)

Data anonymisation is often required to comply with regulations when transferring information across departments or entities. However, the risk is that this procedure can distort the data and jeopardise the models built on it. Intuitively, training an NLP model on anonymised data may lower the performance of the resulting model compared to a model trained on non-anonymised data. In this paper, we investigate the impact of de-identification on the performance of nine downstream NLP tasks. We focus on the anonymisation and pseudonymisation of personal names and compare six different anonymisation strategies for two state-of-the-art pre-trained models. Based on these experiments, we formulate recommendations on how de-identification should be performed to guarantee accurate NLP models. Our results reveal that de-identification does have a negative impact on the performance of NLP models, but this impact is relatively low. We also find that using pseudonymisation techniques involving random names leads to better performance across most tasks.
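To make the studied setting concrete, here is a hedged sketch of one pseudonymisation strategy in the spirit of the paper: detected personal names are replaced with randomly drawn substitute names. The spaCy model and the toy name pool are assumptions for illustration, not the paper's actual pipeline.

import random
import spacy

nlp = spacy.load("en_core_web_sm")                          # illustrative NER model
RANDOM_NAMES = ["Alex Morgan", "Jamie Lee", "Sam Carter"]   # toy substitute pool

def pseudonymise(text: str) -> str:
    # Replace every detected PERSON span with a random substitute name.
    doc = nlp(text)
    pieces, last = [], 0
    for ent in doc.ents:
        if ent.label_ == "PERSON":
            pieces.append(text[last:ent.start_char])
            pieces.append(random.choice(RANDOM_NAMES))
            last = ent.end_char
    pieces.append(text[last:])
    return "".join(pieces)

print(pseudonymise("John Smith approved the transfer requested by Maria Gonzalez."))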

2022

LuxemBERT: Simple and Practical Data Augmentation in Language Model Pre-Training for Luxembourgish
Cedric Lothritz | Bertrand Lebichot | Kevin Allix | Lisa Veiber | Tegawendé Bissyandé | Jacques Klein | Andrey Boytsov | Clément Lefebvre | Anne Goujon
Proceedings of the Thirteenth Language Resources and Evaluation Conference

Pre-trained Language Models such as BERT have become ubiquitous in NLP, where they have achieved state-of-the-art performance on most NLP tasks. While these models are readily available for English and other widely spoken languages, they remain scarce for low-resource languages such as Luxembourgish. In this paper, we present LuxemBERT, a BERT model for the Luxembourgish language that we create using the following approach: we augment the pre-training dataset with text data from a closely related language that we partially translate using a simple and straightforward method. We are then able to produce the LuxemBERT model, which we show to be effective for various NLP tasks: it outperforms a simple baseline built with the available Luxembourgish text data as well as the multilingual mBERT model, which is currently the only option for transformer-based language models in Luxembourgish. Furthermore, we present datasets for various downstream NLP tasks that we created for this study and will make available to researchers on request.
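As a rough, heavily simplified illustration of the augmentation idea (partially translating text from a closely related language into the low-resource target), the snippet below swaps only dictionary-covered German words for Luxembourgish ones and leaves the rest untouched; the toy dictionary and example are assumptions, not the paper's actual translation method.

# Toy German-to-Luxembourgish lookup; the real procedure is described in the paper.
DE_TO_LB = {"ich": "ech", "nicht": "net", "und": "an", "ist": "ass"}

def partially_translate(sentence: str) -> str:
    # Translate only the tokens covered by the dictionary; keep everything else as-is.
    return " ".join(DE_TO_LB.get(token.lower(), token) for token in sentence.split())

print(partially_translate("Das ist nicht schwer und ich lerne gern"))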

2020

Evaluating Pretrained Transformer-based Models on the Task of Fine-Grained Named Entity Recognition
Cedric Lothritz | Kevin Allix | Lisa Veiber | Tegawendé F. Bissyandé | Jacques Klein
Proceedings of the 28th International Conference on Computational Linguistics

Named Entity Recognition (NER) is a fundamental Natural Language Processing (NLP) task and has remained an active research field. In recent years, transformer models, and more specifically the BERT model developed at Google, revolutionised the field of NLP. While the performance of transformer-based approaches such as BERT has been studied for NER, there has not yet been a study for the fine-grained Named Entity Recognition (FG-NER) task. In this paper, we compare three transformer-based models (BERT, RoBERTa, and XLNet) to two non-transformer-based models (CRF and BiLSTM-CNN-CRF). Furthermore, we apply each model to a multitude of distinct domains. We find that transformer-based models incrementally outperform the studied non-transformer-based models in most domains with respect to the F1 score. We also find that the choice of domain significantly influences performance, regardless of the respective data size or the model chosen.
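For context on how such comparisons are typically scored, the snippet below computes entity-level F1 over predicted and gold tag sequences with seqeval; the tag sequences are toy examples, and seqeval is an assumption about tooling rather than the paper's stated evaluation code.

from seqeval.metrics import classification_report, f1_score

# Toy gold and predicted IOB tag sequences for a single sentence.
gold = [["B-person", "I-person", "O", "B-location", "O"]]
pred = [["B-person", "I-person", "O", "O", "O"]]

print(f1_score(gold, pred))              # entity-level F1 across all entity types
print(classification_report(gold, pred))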