Same evaluation, more tokens: On the effect of input length for machine translation evaluation using Large Language Models

Tobias Domhan, Dawei Zhu


Abstract
Accurately evaluating machine-translated text remains a long-standing challenge, particularly for long documents. Recent work has shown that large language models (LLMs) can serve as reliable and interpretable sentence-level translation evaluators via MQM error span annotations. With modern LLMs supporting larger context windows, a natural question arises: can we feed entire document translations into an LLM for quality assessment? Ideally, evaluation should be invariant to text length, producing consistent error spans regardless of input granularity. However, our analysis shows that text length significantly impacts evaluation: longer texts lead to fewer error spans and reduced system ranking accuracy. To address this limitation, we evaluate several strategies, including granularity-aligned prompting, Focus Sentence Prompting (FSP), and a fine-tuning approach to better align LLMs with the evaluation task. The latter two methods largely mitigate this length bias, making LLMs more reliable for long-form translation evaluation.
Anthology ID:
2025.emnlp-main.402
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
7940–7958
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.402/
Cite (ACL):
Tobias Domhan and Dawei Zhu. 2025. Same evaluation, more tokens: On the effect of input length for machine translation evaluation using Large Language Models. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 7940–7958, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Same evaluation, more tokens: On the effect of input length for machine translation evaluation using Large Language Models (Domhan & Zhu, EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.402.pdf
Checklist:
2025.emnlp-main.402.checklist.pdf