José de Souza
2023
Steering Large Language Models for Machine Translation with Finetuning and In-Context Learning
Duarte Alves | Nuno Guerreiro | João Alves | José Pombal | Ricardo Rei | José de Souza | Pierre Colombo | André Martins
Findings of the Association for Computational Linguistics: EMNLP 2023
Large language models (LLMs) are a promising avenue for machine translation (MT). However, current LLM-based MT systems are brittle: their effectiveness depends heavily on the choice of few-shot examples, and they often require extra post-processing due to overgeneration. Alternatives such as finetuning on translation instructions are computationally expensive and may weaken in-context learning capabilities due to overspecialization. In this paper, we provide a closer look at this problem. We start by showing that adapter-based finetuning with LoRA matches the performance of traditional finetuning while reducing the number of training parameters by a factor of 50. This method also outperforms few-shot prompting and eliminates the need for post-processing or in-context examples. However, we show that finetuning generally degrades few-shot performance, hindering adaptation capabilities. Finally, to obtain the best of both worlds, we propose a simple approach that incorporates few-shot examples during finetuning. Experiments on 10 language pairs show that our proposed approach recovers the original few-shot capabilities while keeping the added benefits of finetuning.
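The parameter savings from adapter-based finetuning mentioned in the abstract come from training only low-rank factors while freezing the pretrained weights. A minimal NumPy sketch of the generic LoRA formulation (h = Wx + (α/r)·BAx, with B zero-initialized so the layer starts out identical to the pretrained one) is below; the class name, dimensions, and hyperparameters are illustrative, not the paper's actual setup.

```python
import numpy as np

class LoRALinear:
    """Toy LoRA-adapted linear layer: frozen W plus trainable low-rank A, B."""

    def __init__(self, d_in: int, d_out: int, r: int = 8,
                 alpha: float = 16.0, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
        self.A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
        self.B = np.zeros((d_out, r))               # trainable up-projection, zero init
        self.scale = alpha / r

    def __call__(self, x: np.ndarray) -> np.ndarray:
        # h = W x + (alpha / r) * B A x; only A and B receive gradients.
        return self.W @ x + self.scale * (self.B @ (self.A @ x))

    def trainable_params(self) -> int:
        return self.A.size + self.B.size

layer = LoRALinear(d_in=4096, d_out=4096, r=8)
print(layer.W.size / layer.trainable_params())  # 256.0: far fewer trainable params
```

Because B starts at zero, the adapted layer initially reproduces the pretrained output exactly; the exact reduction factor depends on which layers are adapted and the chosen rank r.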
An Empirical Study of Translation Hypothesis Ensembling with Large Language Models
António Farinhas | José de Souza | André Martins
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Large language models (LLMs) are becoming a one-fits-many solution, but they sometimes hallucinate or produce unreliable output. In this paper, we investigate how hypothesis ensembling can improve the quality of the generated text for the specific problem of LLM-based machine translation. We experiment with several techniques for ensembling hypotheses produced by LLMs such as ChatGPT, LLaMA, and Alpaca. We provide a comprehensive study along multiple dimensions, including the method to generate hypotheses (multiple prompts, temperature-based sampling, and beam search) and the strategy to produce the final translation (instruction-based, quality-based reranking, and minimum Bayes risk (MBR) decoding). Our results show that MBR decoding is a very effective method, that translation quality can be improved using a small number of samples, and that instruction tuning has a strong impact on the relation between the diversity of the hypotheses and the sampling temperature.
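MBR decoding, which the abstract reports as very effective, selects the hypothesis with the highest expected utility against the other sampled hypotheses used as pseudo-references. A self-contained sketch follows; the paper's experiments use neural utilities such as COMET, so the token-overlap F1 here is only a dependency-free stand-in, and all names are illustrative.

```python
def token_f1(hyp: str, ref: str) -> float:
    """Symmetric token-overlap F1 between two strings (toy utility metric)."""
    h, r = hyp.split(), ref.split()
    if not h or not r:
        return 0.0
    overlap = len(set(h) & set(r))
    p, q = overlap / len(h), overlap / len(r)
    return 2 * p * q / (p + q) if p + q else 0.0

def mbr_decode(hypotheses: list[str], utility=token_f1) -> str:
    """Pick the hypothesis maximizing expected utility, treating the
    remaining hypotheses as pseudo-references with uniform weight."""
    def expected_utility(h: str) -> float:
        others = [r for r in hypotheses if r is not h]
        return sum(utility(h, r) for r in others) / max(len(others), 1)
    return max(hypotheses, key=expected_utility)

samples = [
    "the cat sat on the mat",
    "the cat sat on a mat",
    "a feline was seated upon the rug",
]
print(mbr_decode(samples))  # → "the cat sat on a mat" (closest to consensus)
```

The "consensual" hypothesis wins: the outlier paraphrase shares few tokens with the others, so its expected utility is low even though it may be a valid translation.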