Duarte M. Alves
2023
Steering Large Language Models for Machine Translation with Finetuning and In-Context Learning
Duarte M. Alves | Nuno M. Guerreiro | João Alves | José Pombal | Ricardo Rei | José G. C. de Souza | Pierre Colombo | André F. T. Martins
Findings of the Association for Computational Linguistics: EMNLP 2023
Large language models (LLMs) are a promising avenue for machine translation (MT). However, current LLM-based MT systems are brittle: their effectiveness depends heavily on the choice of few-shot examples, and they often require extra post-processing due to overgeneration. Alternatives such as finetuning on translation instructions are computationally expensive and may weaken in-context learning capabilities due to overspecialization. In this paper, we provide a closer look at this problem. We start by showing that adapter-based finetuning with LoRA matches the performance of traditional finetuning while reducing the number of training parameters by a factor of 50. This method also outperforms few-shot prompting and eliminates the need for post-processing or in-context examples. However, we show that finetuning generally degrades few-shot performance, hindering adaptation capabilities. Finally, to obtain the best of both worlds, we propose a simple approach that incorporates few-shot examples during finetuning. Experiments on 10 language pairs show that our proposed approach recovers the original few-shot capabilities while keeping the added benefits of finetuning.
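The parameter saving behind adapter-based finetuning can be sketched with the core LoRA update: the pretrained weight W stays frozen, and only two low-rank factors A and B are trained, with the effective weight W + (alpha / r) * B @ A. The dimensions, rank, and scaling below are illustrative assumptions, not the paper's actual configuration (the ~50x reduction reported depends on which layers receive adapters):

```python
import numpy as np

# Hypothetical layer size and LoRA hyperparameters (illustrative only).
d_out, d_in, r, alpha = 4096, 4096, 16, 32

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, init 0

def lora_forward(x):
    """Forward pass with the low-rank adapter added to the frozen weight."""
    return W @ x + (alpha / r) * (B @ (A @ x))

full_params = W.size           # parameters updated by full finetuning
lora_params = A.size + B.size  # parameters updated by LoRA
print(f"trainable reduction: {full_params // lora_params}x")
```

Because B is initialized to zero, the adapter starts as a no-op and the model's pretrained behavior is preserved at the beginning of finetuning.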
Hallucinations in Large Multilingual Translation Models
Nuno M. Guerreiro | Duarte M. Alves | Jonas Waldendorf | Barry Haddow | Alexandra Birch | Pierre Colombo | André F. T. Martins
Transactions of the Association for Computational Linguistics, Volume 11
Hallucinated translations can severely undermine user trust and raise safety issues when machine translation systems are deployed in the wild. Previous research on the topic focused on small bilingual models trained on high-resource languages, leaving a gap in our understanding of hallucinations in multilingual models across diverse translation scenarios. In this work, we fill this gap by conducting a comprehensive analysis, spanning over 100 language pairs across various resource levels and going beyond English-centric directions, on both the M2M neural machine translation (NMT) models and GPT large language models (LLMs). Among several insights, we highlight that models struggle with hallucinations primarily in low-resource directions and when translating out of English, where, critically, they may reveal toxic patterns that can be traced back to the training data. We also find that LLMs produce qualitatively different hallucinations from those of NMT models. Finally, we show that hallucinations are hard to reverse by merely scaling models trained with the same data. However, employing more diverse models, trained on different data or with different procedures, as fallback systems can improve translation quality and virtually eliminate certain pathologies.
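The fallback strategy described in the abstract can be sketched as a simple routing rule: when a quality estimator flags the primary system's output as a likely pathology, return the output of a more diverse fallback model instead. The function and helper names, the threshold, and the toy stand-in systems below are illustrative assumptions, not the paper's actual implementation:

```python
def translate_with_fallback(src, primary, fallback, score_fn, threshold=0.5):
    """Return the primary translation unless its quality score is too low."""
    hyp = primary(src)
    if score_fn(src, hyp) < threshold:  # likely hallucination or off-target output
        return fallback(src)
    return hyp

# Toy demonstration with stand-in systems (no real MT models involved):
primary = lambda s: "zzz"            # simulated degenerate output
fallback = lambda s: "hello world"   # diverse fallback system
score = lambda s, h: 0.1 if h == "zzz" else 0.9
result = translate_with_fallback("olá mundo", primary, fallback, score)
print(result)  # hello world
```

In practice the score function would be a reference-free quality estimator; the key design point from the paper is that the fallback model should differ from the primary in data or training procedure, since scaling the same recipe does not remove the pathologies.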