Laura Cabello Piqueras
2022
Challenges and Strategies in Cross-Cultural NLP
Daniel Hershcovich | Stella Frank | Heather Lent | Miryam de Lhoneux | Mostafa Abdou | Stephanie Brandl | Emanuele Bugliarello | Laura Cabello Piqueras | Ilias Chalkidis | Ruixiang Cui | Constanza Fierro | Katerina Margatina | Phillip Rust | Anders Søgaard
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Various efforts in the Natural Language Processing (NLP) community have been made to accommodate linguistic diversity and serve speakers of many different languages. However, it is important to acknowledge that speakers, and the content they produce and require, vary not just by language but also by culture. Although language and culture are tightly linked, there are important differences. Analogous to cross-lingual and multilingual NLP, cross-cultural and multicultural NLP considers these differences in order to better serve users of NLP systems. We propose a principled framework to frame these efforts, and survey existing and potential strategies.
Are Pretrained Multilingual Models Equally Fair across Languages?
Laura Cabello Piqueras | Anders Søgaard
Proceedings of the 29th International Conference on Computational Linguistics
Pretrained multilingual language models can help bridge the digital language divide, enabling high-quality NLP models for lower-resourced languages. Studies of multilingual models have so far focused on performance, consistency, and cross-lingual generalisation. However, with their widespread application in the wild and downstream societal impact, it is important to put multilingual models under the same scrutiny as monolingual models. This work investigates the group fairness of multilingual models, asking whether these models are equally fair across languages. To this end, we create a new four-way multilingual dataset of parallel cloze test examples (MozArt), equipped with demographic information (balanced with regard to gender and native tongue) about the test participants. We evaluate three multilingual models on MozArt (mBERT, XLM-R, and mT5) and show that across the four target languages, the three models exhibit different levels of group disparity, e.g., exhibiting near-equal risk for Spanish but high levels of disparity for German.
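As a rough illustration of the kind of evaluation the abstract describes (cloze filling with a masked language model, scored separately per demographic group), here is a minimal Python sketch. The cloze items, group labels, and the simple accuracy-gap "disparity" proxy are assumptions for demonstration only; they are not the MozArt data or the paper's actual fairness metrics, and only mBERT is shown (XLM-R and mT5 use different mask handling).

```python
# Illustrative sketch only: data, group labels, and the disparity proxy
# below are hypothetical, not the MozArt dataset or the paper's metrics.
from collections import defaultdict
from transformers import pipeline

# Masked-language-model cloze filling with mBERT, one of the three models
# evaluated in the paper.
fill = pipeline("fill-mask", model="bert-base-multilingual-cased")

# Hypothetical cloze items: (sentence with [MASK], gold answer, speaker group).
items = [
    ("The capital of France is [MASK].", "Paris", "group_a"),
    ("Water freezes at zero degrees [MASK].", "Celsius", "group_b"),
]

correct = defaultdict(int)
total = defaultdict(int)
for sentence, gold, group in items:
    top = fill(sentence, top_k=1)[0]  # highest-scoring prediction
    correct[group] += int(top["token_str"].strip() == gold)
    total[group] += 1

# Per-group accuracy; the gap between groups serves as a crude disparity proxy.
accuracy = {g: correct[g] / total[g] for g in total}
print(accuracy, "gap:", max(accuracy.values()) - min(accuracy.values()))
```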