2024
Do Multilingual Large Language Models Mitigate Stereotype Bias?
Shangrui Nie | Michael Fromm | Charles Welch | Rebekka Görge | Akbar Karimi | Joan Plepi | Nazia Mowmita | Nicolas Flores-Herr | Mehdi Ali | Lucie Flek
Proceedings of the 2nd Workshop on Cross-Cultural Considerations in NLP
While preliminary findings indicate that multilingual LLMs exhibit reduced bias compared to monolingual ones, a comprehensive understanding of the effect of multilingual training on bias mitigation is lacking. This study addresses this gap by systematically training six LLMs of identical size (2.6B parameters) and architecture: five monolingual models (English, German, French, Italian, and Spanish) and one multilingual model trained on an equal distribution of data across these languages, all using publicly available data. To ensure robust evaluation, standard bias benchmarks were automatically translated into the five target languages and verified for both translation quality and bias preservation by human annotators. Our results consistently demonstrate that multilingual training effectively mitigates bias. Moreover, we observe that multilingual models achieve not only lower bias but also superior prediction accuracy compared to monolingual models with the same amount of training data, model architecture, and size.
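The abstract does not name the specific bias benchmarks used; the sketch below assumes a CrowS-Pairs-style evaluation, where a causal LM's preference between a stereotypical and an anti-stereotypical sentence is measured via sentence log-likelihoods. The model name and sentence pair are placeholders, not the paper's actual 2.6B models or data.

```python
# Minimal sketch of a CrowS-Pairs-style stereotype preference score (assumed setup,
# not the paper's exact protocol). Model and sentence pair are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper's 2.6B models are not referenced here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def sentence_log_likelihood(sentence: str) -> float:
    """Summed token log-probability of a sentence under the causal LM."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean negative log-likelihood over the predicted tokens
    return -out.loss.item() * (ids.shape[1] - 1)

# Hypothetical stereotype / anti-stereotype pair (in the paper, such pairs are
# machine-translated into each target language and human-verified).
pairs = [("The nurse said she was tired.", "The nurse said he was tired.")]

stereo_preferred = sum(
    sentence_log_likelihood(s) > sentence_log_likelihood(a) for s, a in pairs
)
bias_score = stereo_preferred / len(pairs)  # 0.5 would indicate no systematic preference
print(f"stereotype preference rate: {bias_score:.2f}")
```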
Tokenizer Choice For LLM Training: Negligible or Crucial?
Mehdi Ali | Michael Fromm | Klaudia Thellmann | Richard Rutmann | Max Lübbering | Johannes Leveling | Katrin Klug | Jan Ebert | Niclas Doll | Jasper Buschhoff | Charvi Jain | Alexander Weber | Lena Jurkschat | Hammam Abdelwahab | Chelsea John | Pedro Ortiz Suarez | Malte Ostendorff | Samuel Weinbach | Rafet Sifa | Stefan Kesselheim | Nicolas Flores-Herr
Findings of the Association for Computational Linguistics: NAACL 2024
The recent success of large language models (LLMs) has been predominantly driven by curating the training dataset composition, scaling model architectures and dataset sizes, and advancements in pretraining objectives, leaving tokenizer influence as a blind spot. Shedding light on this underexplored area, we conduct a comprehensive study on the influence of tokenizer choice on LLM downstream performance by training 24 mono- and multilingual LLMs at a 2.6B parameter scale, ablating different tokenizer algorithms and parameterizations. Our studies highlight that the tokenizer choice can significantly impact the model’s downstream performance and training costs. In particular, we find that the common tokenizer evaluation metrics fertility and parity are not always predictive of model downstream performance, rendering these metrics a questionable proxy. Furthermore, we show that multilingual tokenizers trained on the five most frequent European languages require a vocabulary roughly three times larger than an English one. While English-centric tokenizers have been applied to the training of multilingual LLMs in the past, we find that this approach results in severe downstream performance degradation and additional training costs of up to 68%, due to an inefficient tokenization vocabulary.
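For readers unfamiliar with the fertility and parity metrics mentioned above, the following sketch shows one common way they are computed: fertility as the average number of subword tokens per whitespace-delimited word, and parity as the token-count ratio on parallel sentences relative to English. The tokenizer name and the toy parallel data are assumptions for illustration, not the paper's setup.

```python
# Minimal sketch of the fertility and parity tokenizer metrics (assumed definitions
# matching common usage; tokenizer and data below are placeholders).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")  # placeholder multilingual tokenizer

def fertility(texts: list[str]) -> float:
    """Average number of subword tokens produced per whitespace-delimited word."""
    n_tokens = sum(len(tokenizer.tokenize(t)) for t in texts)
    n_words = sum(len(t.split()) for t in texts)
    return n_tokens / n_words

def parity(texts_lang: list[str], texts_en: list[str]) -> float:
    """Token-count ratio on parallel sentences; values > 1 mean the language
    is tokenized less efficiently than English."""
    n_lang = sum(len(tokenizer.tokenize(t)) for t in texts_lang)
    n_en = sum(len(tokenizer.tokenize(t)) for t in texts_en)
    return n_lang / n_en

# Toy parallel sentences (illustrative only).
en = ["The weather is nice today."]
de = ["Das Wetter ist heute schön."]
print(f"fertility (de): {fertility(de):.2f}")
print(f"parity (de vs. en): {parity(de, en):.2f}")
```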