Please Translate Again: Two Simple Experiments on Whether Human-Like Reasoning Helps Translation

Di Wu, Seth Aycock, Christof Monz


Abstract
Large Language Models (LLMs) demonstrate strong reasoning capabilities on many tasks, often by explicitly decomposing a task via Chain-of-Thought (CoT) reasoning. Recent work on LLM-based translation designs hand-crafted prompts to decompose translation, or trains models to incorporate intermediate steps. _Translating Step-by-step_ (Briakou et al., 2024), for instance, introduces a multi-step prompt that decomposes and refines translation with LLMs, achieving state-of-the-art results on WMT24 test data. In this work, we scrutinise this strategy’s effectiveness. Empirically, we find no clear evidence that performance gains stem from explicitly decomposing the translation process via CoT, at least for the models under test; and we show that prompting LLMs to “translate again” and self-refine yields even better results than human-like step-by-step prompting. While the decomposition influences translation behaviour, faithfulness to the decomposition has both positive and negative effects on translation. Our analysis therefore suggests a divergence between the optimal translation strategies for humans and LLMs.
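The “translate again” strategy described in the abstract can be sketched as a simple self-refinement loop. This is a minimal illustration only: `call_llm` is a stub standing in for a real model API, and the prompt wording and round count are assumptions for demonstration, not the paper’s exact setup.

```python
def call_llm(prompt: str) -> str:
    # Placeholder for an actual LLM API call; returns a dummy string here.
    return f"<translation for prompt of {len(prompt)} chars>"

def translate_again(source: str, rounds: int = 2) -> list[str]:
    """Produce an initial translation, then repeatedly ask the model to
    'translate again', showing its previous attempt, for `rounds` total drafts."""
    drafts = []
    draft = call_llm(f"Translate into English:\n{source}")
    drafts.append(draft)
    for _ in range(rounds - 1):
        refine_prompt = (
            "Please translate again, improving on your previous attempt.\n"
            f"Source: {source}\n"
            f"Previous translation: {draft}"
        )
        draft = call_llm(refine_prompt)
        drafts.append(draft)
    return drafts

# Usage: one initial draft plus two refinement rounds.
drafts = translate_again("Bonjour le monde", rounds=3)
print(len(drafts))
```

The contrast with step-by-step prompting is that no hand-crafted decomposition (pre-translation research, drafting, post-editing) is specified; the model is simply asked to try again.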
Anthology ID:
2025.emnlp-main.1031
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
20435–20451
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1031/
Cite (ACL):
Di Wu, Seth Aycock, and Christof Monz. 2025. Please Translate Again: Two Simple Experiments on Whether Human-Like Reasoning Helps Translation. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 20435–20451, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Please Translate Again: Two Simple Experiments on Whether Human-Like Reasoning Helps Translation (Wu et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1031.pdf
Checklist:
2025.emnlp-main.1031.checklist.pdf