This paper studies gender bias in machine translation through the lens of Large Language Models (LLMs). Four widely used test sets are employed to benchmark various base LLMs, comparing their translation quality and gender bias against state-of-the-art Neural Machine Translation (NMT) models for English to Catalan (En → Ca) and English to Spanish (En → Es) translation directions. Our findings reveal pervasive gender bias across all models, with base LLMs exhibiting a higher degree of bias than NMT models. To combat this bias, we explore prompt engineering techniques applied to an instruction-tuned LLM. We identify a prompt structure that reduces gender bias by up to 12% on the WinoMT evaluation dataset compared to more straightforward prompts. These results substantially narrow the gender bias accuracy gap between LLMs and traditional NMT systems.
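The paper's actual prompt structure is not reproduced here; the sketch below is a minimal illustration, under assumed prompt wording, of the general idea of contrasting a straightforward translation prompt with one that explicitly asks an instruction-tuned model to resolve gender cues (as in WinoMT-style sentences) before translating.

```python
# Illustrative only: the prompt wording below is an assumption, not the
# paper's actual prompt structure. It contrasts a plain translation
# request with one that directs attention to gender cues in the source.

def straightforward_prompt(sentence: str) -> str:
    """Baseline prompt: a direct translation request."""
    return f"Translate the following sentence into Spanish:\n{sentence}"

def gender_aware_prompt(sentence: str) -> str:
    """Prompt that instructs the model to resolve coreference-based
    gender cues before translating, rather than defaulting to stereotypes."""
    return (
        "Translate the following English sentence into Spanish. "
        "First identify which entity each pronoun refers to, and make sure "
        "the grammatical gender of every profession noun matches that "
        "pronoun, not an occupational stereotype.\n"
        f"Sentence: {sentence}"
    )

# Example WinoMT-style input, where "her" marks the doctor as feminine:
src = "The doctor asked the nurse to help her with the procedure."
print(gender_aware_prompt(src))
```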
This paper presents a comprehensive evaluation of gender bias in English-Catalan machine translation, encompassing the creation of a novel language resource and an analysis of translation quality across four different tokenization methods. The study introduces a new dataset derived from the MuST-SHE corpus, focusing on gender-neutral English terms that require gendered translations in Catalan. The results reveal notable gender bias across all translation models, with a consistent preference for masculine forms. When context is available, BPE and SentencePiece Unigram tokenization outperform the other methods, achieving higher accuracy in gender translation. When no context is provided, Morfessor produces more feminine forms than the other tokenization methods, although these remain a small percentage of its output. The study also shows that stereotypes present in the training data are amplified in the translation output. Ultimately, this work serves as a valuable resource for addressing and mitigating gender bias in machine translation, emphasizing the need for improved awareness of and sensitivity to gender issues in natural language processing applications.
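As a sketch of how two of the compared subword segmentations can be produced, the snippet below trains BPE and Unigram models with the SentencePiece library and compares their segmentations of a Catalan sentence; the corpus path, vocabulary size, and example sentence are placeholders, not the paper's actual training configuration, and Morfessor (a separate package) is omitted.

```python
# Minimal sketch: train BPE and Unigram subword models with SentencePiece
# and compare their segmentations. File names and vocab size are
# placeholders, not the paper's actual setup.
import sentencepiece as spm

for model_type in ("bpe", "unigram"):
    spm.SentencePieceTrainer.train(
        input="train.ca.txt",          # hypothetical Catalan training corpus
        model_prefix=f"ca_{model_type}",
        model_type=model_type,
        vocab_size=16000,
    )

bpe = spm.SentencePieceProcessor(model_file="ca_bpe.model")
uni = spm.SentencePieceProcessor(model_file="ca_unigram.model")

sentence = "La doctora va examinar el pacient."
print("BPE:    ", bpe.encode(sentence, out_type=str))
print("Unigram:", uni.encode(sentence, out_type=str))
```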
This paper describes the BSC’s submission to the AmericasNLP 2024 Shared Task. We participated in the Spanish to Quechua and Spanish to Guarani tasks. We show that LoRA adapters achieve performance comparable to full-parameter fine-tuning while training only 14.2% of the total parameters. Our systems achieved the highest ChrF++ scores and ranked first in both directions in the final results, outperforming strong baseline systems on the provided development and test sets.
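A minimal sketch of attaching LoRA adapters to a pretrained sequence-to-sequence model with the Hugging Face peft library is shown below; the base checkpoint, adapter rank, and target modules are illustrative assumptions, not the submission's actual configuration.

```python
# Sketch of LoRA adapter setup with Hugging Face PEFT. The base model,
# rank, and target modules are assumptions for illustration; the
# submission's actual hyperparameters may differ.
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForSeq2SeqLM.from_pretrained(
    "facebook/nllb-200-distilled-600M"  # hypothetical multilingual base
)

lora_cfg = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16,                    # low-rank adapter dimension
    lora_alpha=32,           # scaling factor for adapter updates
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
)

model = get_peft_model(base, lora_cfg)
# Reports trainable vs. total parameters; with LoRA only a small fraction
# is trainable (the paper reports 14.2% for its configuration).
model.print_trainable_parameters()
```

The key design point is that the frozen base weights are shared across both translation directions, while each direction only needs its own lightweight adapter, which keeps training cost and storage low.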