Tegawendé F. Bissyandé

2024

Soft Prompt Tuning for Cross-Lingual Transfer: When Less is More
Fred Philippy | Siwen Guo | Shohreh Haddadan | Cedric Lothritz | Jacques Klein | Tegawendé F. Bissyandé
Proceedings of the 1st Workshop on Modular and Open Multilingual NLP (MOOMIN 2024)

Soft Prompt Tuning (SPT) is a parameter-efficient method for adapting pre-trained language models (PLMs) to specific tasks by inserting learnable embeddings, or soft prompts, at the input layer of the PLM without modifying its parameters. This paper investigates the potential of SPT for cross-lingual transfer. Unlike previous studies on SPT for cross-lingual transfer, which often fine-tune both the soft prompt and the model parameters, we adhere to the original intent of SPT by keeping the model parameters frozen and training only the soft prompt. This not only reduces the computational cost and storage overhead of full-model fine-tuning; we also demonstrate that the parameter efficiency intrinsic to SPT can enhance cross-lingual transfer performance to linguistically distant languages. Moreover, we explore how different factors related to the prompt, such as its length or its reparameterization, affect cross-lingual transfer performance.
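
A minimal sketch of the setup the abstract describes, assuming a PyTorch/Hugging Face implementation: the pre-trained model is frozen and only a prepended block of learnable prompt embeddings is optimised. The backbone name, prompt length, label count, and the decision to leave the (randomly initialised) classification head frozen are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "xlm-roberta-base"   # assumed multilingual backbone, not from the paper
prompt_length = 16                # illustrative soft-prompt length
num_labels = 3                    # e.g. an NLI-style classification task

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=num_labels)

# Freeze every parameter of the pre-trained model (including, for brevity,
# the randomly initialised classification head; how the task head is handled
# in the paper is not shown in this sketch).
for p in model.parameters():
    p.requires_grad = False

# Learnable soft prompt: prompt_length vectors with the model's hidden size,
# initialised from the embeddings of random vocabulary tokens.
embed = model.get_input_embeddings()
init_ids = torch.randint(0, embed.num_embeddings, (prompt_length,))
soft_prompt = nn.Parameter(embed(init_ids).detach().clone())

def forward_with_prompt(input_ids, attention_mask, labels=None):
    """Prepend the soft prompt to the token embeddings and run the frozen PLM."""
    token_embeds = embed(input_ids)                              # (B, L, H)
    batch = input_ids.size(0)
    prompt = soft_prompt.unsqueeze(0).expand(batch, -1, -1)      # (B, P, H)
    inputs_embeds = torch.cat([prompt, token_embeds], dim=1)     # (B, P+L, H)
    prompt_mask = torch.ones(batch, prompt_length, dtype=attention_mask.dtype)
    attention_mask = torch.cat([prompt_mask, attention_mask], dim=1)
    return model(inputs_embeds=inputs_embeds, attention_mask=attention_mask, labels=labels)

# Only the soft prompt is passed to the optimizer.
optimizer = torch.optim.AdamW([soft_prompt], lr=1e-3)

batch = tokenizer(["a toy training example"], return_tensors="pt", padding=True)
out = forward_with_prompt(batch["input_ids"], batch["attention_mask"],
                          labels=torch.tensor([0]))
out.loss.backward()   # gradients flow through the frozen PLM into the soft prompt
optimizer.step()
```

Because gradients only reach the prompt embeddings, the trainable state per task is a single (prompt_length, hidden_size) matrix, which is what keeps the per-language or per-task storage overhead small.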

2020

Evaluating Pretrained Transformer-based Models on the Task of Fine-Grained Named Entity Recognition
Cedric Lothritz | Kevin Allix | Lisa Veiber | Tegawendé F. Bissyandé | Jacques Klein
Proceedings of the 28th International Conference on Computational Linguistics

Named Entity Recognition (NER) is a fundamental Natural Language Processing (NLP) task and remains an active research field. In recent years, transformer models, and more specifically the BERT model developed at Google, have revolutionised NLP. While the performance of transformer-based approaches such as BERT has been studied for NER, there has not yet been a study of the fine-grained Named Entity Recognition (FG-NER) task. In this paper, we compare three transformer-based models (BERT, RoBERTa, and XLNet) with two non-transformer-based models (a CRF and a BiLSTM-CNN-CRF). Furthermore, we apply each model to a multitude of distinct domains. We find that transformer-based models incrementally outperform the studied non-transformer-based models in most domains with respect to the F1 score. We also find that the choice of domain significantly influences performance regardless of the respective data size or the model chosen.
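
A minimal sketch of the transformer-based setup evaluated in this paper, assuming a Hugging Face token-classification pipeline: a pre-trained encoder with a token-classification head is fine-tuned on word-level NER tags. The backbone, the toy tag set, and the sub-word alignment strategy are illustrative assumptions; the paper's FG-NER tag inventory and training configuration are not reproduced here.

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_name = "bert-base-cased"                               # assumed backbone
labels = ["O", "B-PERSON", "I-PERSON", "B-ORG", "I-ORG"]     # toy tag set, not the FG-NER inventory
label2id = {l: i for i, l in enumerate(labels)}

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name,
    num_labels=len(labels),
    id2label={i: l for l, i in label2id.items()},
    label2id=label2id,
)

# One toy sentence with word-level gold tags.
words = ["Ada", "Lovelace", "joined", "Analytical", "Engines", "Ltd"]
word_tags = ["B-PERSON", "I-PERSON", "O", "B-ORG", "I-ORG", "I-ORG"]

enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")

# Align word-level tags with sub-word tokens; special tokens get the ignore
# index (-100), and continuation sub-words inherit their word's tag here.
aligned = []
for word_id in enc.word_ids(batch_index=0):
    aligned.append(-100 if word_id is None else label2id[word_tags[word_id]])
labels_tensor = torch.tensor([aligned])

# Standard full fine-tuning: all encoder and head parameters are updated.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
out = model(**enc, labels=labels_tensor)   # cross-entropy over per-token labels
out.loss.backward()
optimizer.step()
```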