Analyzing Context Contributions in LLM-based Machine Translation

Emmanouil Zaranis, Nuno M. Guerreiro, André Martins


Abstract
Large language models (LLMs) have achieved state-of-the-art performance in machine translation (MT) and demonstrated the ability to leverage in-context learning through few-shot examples. However, the mechanisms by which LLMs use different parts of the input context remain largely unexplored. In this work, we provide a comprehensive analysis of context utilization in MT, studying how LLMs use various context parts, such as few-shot examples and the source text, when generating translations. We highlight several key findings: (1) the source part of few-shot examples appears to contribute more than its corresponding targets, irrespective of translation direction; (2) finetuning LLMs with parallel data alters the contribution patterns of different context parts; and (3) there is a positional bias where earlier few-shot examples have higher contributions to the translated sequence. Finally, we demonstrate that inspecting anomalous context contributions can potentially uncover pathological translations, such as hallucinations. Our findings shed light on the internal workings of LLM-based MT, which go beyond those known for standard encoder-decoder MT models.
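
The abstract describes measuring how much each part of the prompt (few-shot example sources and targets, the source sentence) contributes to the generated translation. The sketch below is a minimal illustration of that kind of measurement, not the paper's actual attribution method: it scores each context part by the raw attention mass that target-side tokens place on it, which is only a rough stand-in for a proper contribution measure. The model name (gpt2), the toy one-shot prompt, and the segment names are placeholders chosen for illustration.

```python
# Hedged sketch: per-part context "contribution" via attention mass.
# Not the paper's method; raw attention is used as a simple proxy.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper studies translation-capable LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Toy one-shot translation prompt, split into the parts we attribute over.
parts = {
    "example_source": "English: The cat sleeps.\n",
    "example_target": "German: Die Katze schläft.\n",
    "source":         "English: The dog barks.\n",
    "target_prefix":  "German: Der Hund bellt.",
}

# Tokenize each part separately so we know which positions belong to it.
spans, input_ids, offset = {}, [], 0
for name, text in parts.items():
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    spans[name] = (offset, offset + len(ids))
    input_ids.extend(ids)
    offset += len(ids)
input_ids = torch.tensor([input_ids])

with torch.no_grad():
    out = model(input_ids, output_attentions=True)

# Average attention over layers and heads: shape (seq_len, seq_len).
attn = torch.stack(out.attentions).mean(dim=(0, 2))[0]

# Contribution of a context part = attention mass that target-prefix tokens
# place on that part's token span, averaged over target tokens.
tgt_start, tgt_end = spans["target_prefix"]
target_rows = attn[tgt_start:tgt_end]
for name, (s, e) in spans.items():
    if name == "target_prefix":
        continue
    contribution = target_rows[:, s:e].sum(dim=-1).mean().item()
    print(f"{name:15s} contribution ≈ {contribution:.3f}")
```

Comparing such per-part scores across few-shot example positions, or between example source and target spans, mirrors the questions the abstract raises; a faithful reproduction would replace raw attention with the contribution measure used in the paper.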
Anthology ID: 2024.findings-emnlp.876
Volume: Findings of the Association for Computational Linguistics: EMNLP 2024
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 14899–14924
URL: https://preview.aclanthology.org/jlcl-multiple-ingestion/2024.findings-emnlp.876/
DOI: 10.18653/v1/2024.findings-emnlp.876
Cite (ACL): Emmanouil Zaranis, Nuno M. Guerreiro, and André Martins. 2024. Analyzing Context Contributions in LLM-based Machine Translation. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 14899–14924, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): Analyzing Context Contributions in LLM-based Machine Translation (Zaranis et al., Findings 2024)
PDF: https://preview.aclanthology.org/jlcl-multiple-ingestion/2024.findings-emnlp.876.pdf