Context-Aware or Context-Insensitive? Assessing LLMs’ Performance in Document-Level Translation

Wafaa Mohammed, Vlad Niculae


Abstract
Large language models (LLMs) are increasingly strong contenders in machine translation. In this work, we focus on document-level translation, where some words cannot be translated without context from outside the sentence. Specifically, we investigate the ability of prominent LLMs to utilize the document context during translation through a perturbation analysis (analyzing models’ robustness to perturbed and randomized document context) and an attribution analysis (examining the contribution of relevant context to the translation). We conduct an extensive evaluation across nine LLMs from diverse model families and training paradigms, including translation-specialized LLMs, alongside two encoder-decoder transformer baselines. We find that LLMs’ improved document-translation performance compared to encoder-decoder models is not reflected in pronoun translation performance. Our analysis highlights the need for context-aware finetuning of LLMs, with a focus on relevant parts of the context, to improve their reliability for document-level translation.
Anthology ID:
2025.mtsummit-1.10
Volume:
Proceedings of Machine Translation Summit XX: Volume 1
Month:
June
Year:
2025
Address:
Geneva, Switzerland
Editors:
Pierrette Bouillon, Johanna Gerlach, Sabrina Girletti, Lise Volkart, Raphael Rubino, Rico Sennrich, Ana C. Farinha, Marco Gaido, Joke Daems, Dorothy Kenny, Helena Moniz, Sara Szoc
Venue:
MTSummit
Publisher:
European Association for Machine Translation
Pages:
126–137
URL:
https://preview.aclanthology.org/mtsummit-25-ingestion/2025.mtsummit-1.10/
Cite (ACL):
Wafaa Mohammed and Vlad Niculae. 2025. Context-Aware or Context-Insensitive? Assessing LLMs’ Performance in Document-Level Translation. In Proceedings of Machine Translation Summit XX: Volume 1, pages 126–137, Geneva, Switzerland. European Association for Machine Translation.
Cite (Informal):
Context-Aware or Context-Insensitive? Assessing LLMs’ Performance in Document-Level Translation (Mohammed & Niculae, MTSummit 2025)
PDF:
https://preview.aclanthology.org/mtsummit-25-ingestion/2025.mtsummit-1.10.pdf