DMDTEval: An Evaluation and Analysis of LLMs on Disambiguation in Multi-domain Translation

Zhibo Man, Yuanmeng Chen, Yujie Zhang, Jinan Xu


Abstract
Large Language Models (LLMs) have achieved remarkable results in machine translation. However, their performance in multi-domain translation (MDT) is less satisfactory: the meaning of a word can vary across domains, making ambiguity a central challenge of MDT. Evaluating the disambiguation ability of LLMs in MDT therefore remains an open problem. To this end, we present DMDTEval, a systematic framework for evaluating and analyzing LLMs on disambiguation in multi-domain translation. The framework consists of three components: (1) a translation test set annotated with multi-domain ambiguous words, (2) a curated, diverse set of disambiguation prompt strategies, and (3) precise disambiguation metrics, with which we study the efficacy of the prompt strategies on multiple state-of-the-art LLMs. Our comprehensive experiments across 4 language pairs and 13 domains reveal several crucial findings that we believe will pave the way for, and facilitate, further research on improving the disambiguation ability of LLMs.
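The abstract does not spell out the disambiguation metrics. As a hedged illustration only, a minimal accuracy-style metric might score a translation as correctly disambiguated when it contains the domain-appropriate rendering of the annotated ambiguous word. The sketch below assumes a simple case-insensitive substring-matching rule; all names and the toy data are invented for illustration and are not the authors' published definition.

# Hypothetical sketch of a disambiguation-accuracy metric in the spirit
# of the paper's setup; the matching rule and field names are assumptions,
# not the metric defined in the paper.
from dataclasses import dataclass

@dataclass
class Example:
    source: str      # source sentence containing an ambiguous word
    domain: str      # one of the annotated domains
    gold_sense: str  # gold target-language rendering of the ambiguous word
    hypothesis: str  # LLM translation to score

def disambiguation_accuracy(examples: list[Example]) -> float:
    """Fraction of examples whose hypothesis contains the gold sense.

    A translation counts as correctly disambiguated when the
    domain-appropriate rendering of the ambiguous word appears in it
    (case-insensitive substring match, an assumption of this sketch).
    """
    if not examples:
        return 0.0
    hits = sum(
        ex.gold_sense.lower() in ex.hypothesis.lower() for ex in examples
    )
    return hits / len(examples)

# Toy usage: the English word "bank" must be rendered with its financial
# sense in the finance domain (illustrative data, not from the test set).
examples = [
    Example("She deposited cash at the bank.", "finance", "Bank",
            "Sie zahlte Bargeld bei der Bank ein."),
    Example("They walked along the river bank.", "geography", "Ufer",
            "Sie gingen an der Bank entlang."),
]
print(f"disambiguation accuracy: {disambiguation_accuracy(examples):.2f}")  # 0.50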
Anthology ID:
2025.emnlp-main.309
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
6065–6082
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.309/
Cite (ACL):
Zhibo Man, Yuanmeng Chen, Yujie Zhang, and Jinan Xu. 2025. DMDTEval: An Evaluation and Analysis of LLMs on Disambiguation in Multi-domain Translation. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 6065–6082, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
DMDTEval: An Evaluation and Analysis of LLMs on Disambiguation in Multi-domain Translation (Man et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.309.pdf
Checklist:
2025.emnlp-main.309.checklist.pdf