@inproceedings{wu-etal-2025-please,
    title = "Please Translate Again: Two Simple Experiments on Whether Human-Like Reasoning Helps Translation",
    author = "Wu, Di  and
      Aycock, Seth  and
      Monz, Christof",
    editor = "Christodoulopoulos, Christos  and
      Chakraborty, Tanmoy  and
      Rose, Carolyn  and
      Peng, Violet",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1031/",
    pages = "20435--20451",
    ISBN = "979-8-89176-332-6",
    abstract = "Large Language Models (LLMs) demonstrate strong reasoning capabilities for many tasks, often by explicitly decomposing the task via Chain-of-Thought (CoT) reasoning. Recent work on LLM-based translation designs hand-crafted prompts to decompose translation, or trains models to incorporate intermediate steps. \emph{Translating Step-by-step} (Briakou et al., 2024), for instance, introduces a multi-step prompt with decomposition and refinement of translation with LLMs, which achieved state-of-the-art results on WMT24 test data. In this work, we scrutinise this strategy{'}s effectiveness. Empirically, we find no clear evidence that performance gains stem from explicitly decomposing the translation process via CoT, at least for the models tested; and we show that prompting LLMs to ``translate again'' and self-refine yields even better results than human-like step-by-step prompting. While the decomposition influences translation behaviour, faithfulness to the decomposition has both positive and negative effects on translation. Our analysis therefore suggests a divergence between the optimal translation strategies for humans and LLMs."
}