Leveraging Domain Knowledge at Inference Time for LLM Translation: Retrieval versus Generation

Bryan Li, Jiaming Luo, Eleftheria Briakou, Colin Cherry


Abstract
While large language models (LLMs) have been increasingly adopted for machine translation (MT), their performance in specialist domains such as medicine and law remains an open challenge. Prior work has shown that LLMs can be domain-adapted at test time by retrieving targeted few-shot demonstrations or terminologies for inclusion in the prompt. Meanwhile, for general-purpose LLM MT, recent studies have found some success in generating similarly useful domain knowledge from an LLM itself, prior to translation. Our work studies domain-adapted MT with LLMs through a careful prompting setup, finding that demonstrations consistently outperform terminology, and retrieval consistently outperforms generation. We find that generating demonstrations with weaker models can close the gap with a larger model's zero-shot performance. Given the effectiveness of demonstrations, we perform detailed analyses to understand their value. We find that domain-specificity is particularly important, and that the popular multi-domain benchmark tests adaptation to a particular writing style more than to a specific domain.
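The retrieval-based setup described above can be sketched in a few lines: given an input sentence, score a small in-domain datastore of (source, target) pairs by similarity, and prepend the top matches to the prompt as few-shot demonstrations. The lexical-overlap scorer, prompt template, and toy datastore below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of retrieval-based few-shot prompting for domain MT.
# The scoring function, prompt template, and datastore are hypothetical.

def overlap_score(a: str, b: str) -> float:
    """Crude lexical similarity: Jaccard overlap of whitespace tokens."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

def retrieve_demos(source: str, datastore, k: int = 2):
    """Pick the k (src, tgt) pairs whose source side is most similar to the input."""
    ranked = sorted(datastore, key=lambda p: overlap_score(source, p[0]), reverse=True)
    return ranked[:k]

def build_prompt(source: str, demos, src_lang="English", tgt_lang="German") -> str:
    """Assemble a few-shot translation prompt from the retrieved pairs."""
    lines = [f"Translate from {src_lang} to {tgt_lang}."]
    for s, t in demos:
        lines.append(f"{src_lang}: {s}\n{tgt_lang}: {t}")
    lines.append(f"{src_lang}: {source}\n{tgt_lang}:")
    return "\n\n".join(lines)

# Toy in-domain (medical) datastore, plus one out-of-domain (legal) pair.
datastore = [
    ("The patient was administered 5 mg of the drug.",
     "Dem Patienten wurden 5 mg des Medikaments verabreicht."),
    ("Side effects include nausea and dizziness.",
     "Zu den Nebenwirkungen gehören Übelkeit und Schwindel."),
    ("The contract is governed by German law.",
     "Der Vertrag unterliegt deutschem Recht."),
]

query = "The patient reported severe dizziness."
prompt = build_prompt(query, retrieve_demos(query, datastore))
print(prompt)
```

In practice the datastore would hold in-domain parallel data and the scorer would be a stronger retriever (e.g. BM25 or embedding similarity); the generation-based alternative studied in the paper instead asks an LLM to produce the demonstrations itself.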
Anthology ID:
2025.knowledgenlp-1.7
Volume:
Proceedings of the 4th International Workshop on Knowledge-Augmented Methods for Natural Language Processing
Month:
May
Year:
2025
Address:
Albuquerque, New Mexico, USA
Editors:
Weijia Shi, Wenhao Yu, Akari Asai, Meng Jiang, Greg Durrett, Hannaneh Hajishirzi, Luke Zettlemoyer
Venues:
KnowledgeNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
91–106
URL:
https://preview.aclanthology.org/Ingest-2025-COMPUTEL/2025.knowledgenlp-1.7/
Cite (ACL):
Bryan Li, Jiaming Luo, Eleftheria Briakou, and Colin Cherry. 2025. Leveraging Domain Knowledge at Inference Time for LLM Translation: Retrieval versus Generation. In Proceedings of the 4th International Workshop on Knowledge-Augmented Methods for Natural Language Processing, pages 91–106, Albuquerque, New Mexico, USA. Association for Computational Linguistics.
Cite (Informal):
Leveraging Domain Knowledge at Inference Time for LLM Translation: Retrieval versus Generation (Li et al., KnowledgeNLP 2025)
PDF:
https://preview.aclanthology.org/Ingest-2025-COMPUTEL/2025.knowledgenlp-1.7.pdf