A Fair Comparison without Translationese: English vs. Target-language Instructions for Multilingual LLMs

Taisei Enomoto, Hwichan Kim, Zhousi Chen, Mamoru Komachi


Abstract
Most large language models are multilingual instruction executors. Prior studies suggested that English instructions are more effective than target-language instructions even for non-English tasks; however, these studies often use datasets and instructions translated from English, which introduces biases known as translationese and hinders an unbiased comparison. To address this issue, we conduct a fair comparison between English and target-language instructions by eliminating translationese effects. Contrary to previous studies, our experiments across several tasks reveal that the advantage of adopting English instructions is not overwhelming. Additionally, we report on the characteristics of the generated texts and the instruction-following abilities observed with each instruction language.
Anthology ID:
2025.naacl-short.55
Volume:
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
649–670
URL:
https://preview.aclanthology.org/Ingest-2025-COMPUTEL/2025.naacl-short.55/
Cite (ACL):
Taisei Enomoto, Hwichan Kim, Zhousi Chen, and Mamoru Komachi. 2025. A Fair Comparison without Translationese: English vs. Target-language Instructions for Multilingual LLMs. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers), pages 649–670, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
A Fair Comparison without Translationese: English vs. Target-language Instructions for Multilingual LLMs (Enomoto et al., NAACL 2025)
PDF:
https://preview.aclanthology.org/Ingest-2025-COMPUTEL/2025.naacl-short.55.pdf