Juan Manuel Coria
2022
Analyzing BERT Cross-lingual Transfer Capabilities in Continual Sequence Labeling
Juan Manuel Coria | Mathilde Veron | Sahar Ghannay | Guillaume Bernard | Hervé Bredin | Olivier Galibert | Sophie Rosset
Proceedings of the First Workshop on Performance and Interpretability Evaluations of Multimodal, Multipurpose, Massive-Scale Models
Knowledge transfer between neural language models is a widely used technique that has proven to improve performance in a multitude of natural language tasks, in particular with the recent rise of large pre-trained language models like BERT. Similarly, strong cross-lingual transfer has been shown to occur in multilingual language models. Hence, it is of great importance to better understand this phenomenon as well as its limits. While most studies of cross-lingual transfer focus on training on independent and identically distributed (i.i.d.) samples, in this paper we study cross-lingual transfer in a continual learning setting on two sequence labeling tasks: slot filling and named entity recognition. We investigate this by training multilingual BERT on sequences of 9 languages, one language at a time, on the MultiATIS++ and MultiCoNER corpora. Our first finding is that forward transfer between languages is retained, although forgetting is present. Additional experiments show that lost performance can be recovered with as little as a single training epoch even when forgetting was high, which can be explained by a progressive shift of model parameters towards a better multilingual initialization. We also find that commonly used metrics might be insufficient to assess continual learning performance.
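A minimal sketch of the continual training setup described above: multilingual BERT is fine-tuned on one language at a time, and after each step the model is evaluated on every language to track forgetting and forward transfer. The language order, epoch count, label count, and the `load_split` / `evaluate_f1` helpers are assumptions for illustration, not the authors' released code.

```python
# Sketch: sequential (continual) fine-tuning of multilingual BERT on a sequence
# of languages, one at a time, with evaluation on all languages after each step.
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForTokenClassification, AutoTokenizer

LANGUAGES = ["en", "de", "es", "fr", "hi", "ja", "pt", "tr", "zh"]  # assumed order
NUM_LABELS = 9      # placeholder; depends on the slot/entity tag set
NUM_EPOCHS = 3      # assumption

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=NUM_LABELS
)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

results = {}  # results[(trained_up_to, evaluated_on)] -> F1 score
for lang in LANGUAGES:
    # load_split is a hypothetical helper returning a tokenized dataset.
    train_loader = DataLoader(load_split(lang, "train"), batch_size=32, shuffle=True)
    model.train()
    for _ in range(NUM_EPOCHS):
        for batch in train_loader:
            loss = model(**batch).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
    # After training on `lang`, evaluate on all languages to measure
    # forgetting (earlier languages) and forward transfer (later ones).
    model.eval()
    for eval_lang in LANGUAGES:
        results[(lang, eval_lang)] = evaluate_f1(model, load_split(eval_lang, "test"))  # hypothetical helper
```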
2020
A Metric Learning Approach to Misogyny Categorization
Juan Manuel Coria | Sahar Ghannay | Sophie Rosset | Hervé Bredin
Proceedings of the 5th Workshop on Representation Learning for NLP
The task of automatic misogyny identification and categorization has not received as much attention as other natural language tasks, even though it is crucial for identifying hate speech in social Internet interactions. In this work, we address this sentence classification task from a representation learning perspective, using both a bidirectional LSTM and BERT optimized with the following metric learning loss functions: contrastive loss, triplet loss, center loss, congenerous cosine loss and additive angular margin loss. We set a new state of the art for the task with our fine-tuned BERT, whose sentence embeddings can be compared with a simple cosine distance, and we release all our code as open source for easy reproducibility. Moreover, we find that almost every loss function performs equally well in this setting, matching the regular cross-entropy loss.
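To make the metric learning setup concrete, here is a hedged sketch of one of the losses named in the abstract, an additive angular margin loss applied on top of sentence embeddings, so that trained embeddings can be compared with plain cosine distance at inference. The scale and margin values are assumptions, and this is not the paper's released code.

```python
# Sketch: additive angular margin loss (ArcFace-style) for sentence classification.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAngularMarginLoss(nn.Module):
    def __init__(self, embedding_dim, num_classes, scale=30.0, margin=0.2):
        super().__init__()
        # One learnable class center per misogyny category.
        self.weight = nn.Parameter(torch.randn(num_classes, embedding_dim))
        self.scale = scale
        self.margin = margin

    def forward(self, embeddings, labels):
        # Cosine similarity between L2-normalized embeddings and class centers.
        cosine = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        # Add the angular margin to the target-class angle only.
        theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
        target_logits = torch.cos(theta + self.margin)
        one_hot = F.one_hot(labels, cosine.size(1)).float()
        logits = self.scale * (one_hot * target_logits + (1 - one_hot) * cosine)
        return F.cross_entropy(logits, labels)

# Usage: `embeddings` would come from a BERT sentence representation
# (e.g. the [CLS] token); at inference, sentences are compared directly
# with cosine distance between their embeddings.
```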