Proverbs Run in Pairs: Evaluating Proverb Translation Capability of Large Language Model

Minghan Wang, Viet Thanh Pham, Farhad Moghimifar, Thuy-Trang Vu


Abstract
Despite achieving remarkable performance, machine translation (MT) research has left the translation of cultural elements of language, such as idioms, proverbs, and colloquial expressions, underexplored. This paper investigates the capability of state-of-the-art neural machine translation (NMT) systems and large language models (LLMs) in translating proverbs, which are deeply rooted in cultural contexts. We construct a translation dataset of standalone proverbs and proverbs embedded in conversation for four language pairs. Our experiments show that the studied models can achieve good translation quality between languages with similar cultural backgrounds, and that LLMs generally outperform NMT models on proverb translation. Furthermore, we find that current automatic evaluation metrics such as BLEU, chrF++, and COMET are inadequate for reliably assessing the quality of proverb translation, highlighting the need for more culturally aware evaluation metrics.
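To make the metric-inadequacy point concrete, the sketch below scores a literal rendering of a proverb against an idiomatic reference using sacrebleu's BLEU and chrF++. The example sentences are hypothetical illustrations, not items from the paper's dataset, and COMET (Unbabel's comet package) is omitted since it requires downloading a neural checkpoint; this is a minimal sketch assuming sacrebleu is installed, not the paper's evaluation code.

```python
# Minimal sketch: surface-overlap metrics on a proverb translation.
# Assumes `pip install sacrebleu`; sentences are hypothetical examples.
from sacrebleu.metrics import BLEU, CHRF

# A literal rendering of a Chinese proverb vs. its idiomatic English reference.
hypothesis = ["Three cobblers together surpass Zhuge Liang."]
references = [["Two heads are better than one."]]

bleu = BLEU()
chrf = CHRF(word_order=2)  # word_order=2 corresponds to chrF++

# Both scores come out near zero despite the hypothesis conveying the
# proverb's meaning, illustrating why surface metrics can misjudge
# culturally grounded translations.
print(bleu.corpus_score(hypothesis, references))
print(chrf.corpus_score(hypothesis, references))
```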
Anthology ID:
2025.findings-acl.83
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1646–1662
URL:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.83/
Cite (ACL):
Minghan Wang, Viet Thanh Pham, Farhad Moghimifar, and Thuy-Trang Vu. 2025. Proverbs Run in Pairs: Evaluating Proverb Translation Capability of Large Language Model. In Findings of the Association for Computational Linguistics: ACL 2025, pages 1646–1662, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Proverbs Run in Pairs: Evaluating Proverb Translation Capability of Large Language Model (Wang et al., Findings 2025)
PDF:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.83.pdf