Augmenting Large Language Model Translators via Translation Memories

Yongyu Mu, Abudurexiti Reheman, Zhiquan Cao, Yuchun Fan, Bei Li, Yinqiao Li, Tong Xiao, Chunliang Zhang, Jingbo Zhu


Abstract
Using translation memories (TMs) as prompts is a promising approach to in-context learning of machine translation models. In this work, we take a step towards prompting large language models (LLMs) with TMs and making them better translators. We find that the ability of LLMs to “understand” prompts is indeed helpful for making better use of TMs. Experiments show that the results of a pre-trained LLM translator can be greatly improved by using high-quality TM-based prompts. These results are even comparable to those of the state-of-the-art NMT systems which have access to large-scale in-domain bilingual data and are well tuned on the downstream tasks.
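To make the idea concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of how TM-based prompting of an LLM translator might look: TM entries whose source side is similar to the input sentence are retrieved and prepended as in-context examples before the sentence to be translated. The tiny English-German TM, the difflib-based similarity measure, the prompt template, and all function names are assumptions made purely for illustration.

from difflib import SequenceMatcher

# Hypothetical in-memory translation memory: (source, target) pairs.
# A real TM would be much larger and typically domain-specific.
TM = [
    ("The patient was treated with antibiotics.",
     "Der Patient wurde mit Antibiotika behandelt."),
    ("The patient was discharged after two days.",
     "Der Patient wurde nach zwei Tagen entlassen."),
]

def retrieve_tm_matches(source, tm, k=2):
    """Return the k TM entries whose source side is most similar to the input.

    Similarity here is difflib's character-level ratio, used only as a stand-in
    for whatever fuzzy-match score a TM system might provide.
    """
    scored = [(SequenceMatcher(None, source, s).ratio(), s, t) for s, t in tm]
    scored.sort(reverse=True)
    return [(s, t) for _, s, t in scored[:k]]

def build_prompt(source, tm):
    """Prepend retrieved TM pairs as in-context examples before the test sentence."""
    lines = []
    for s, t in retrieve_tm_matches(source, tm):
        lines.append(f"English: {s}\nGerman: {t}")
    lines.append(f"English: {source}\nGerman:")
    return "\n\n".join(lines)

if __name__ == "__main__":
    # The resulting string would be passed to an LLM as a few-shot prompt.
    print(build_prompt("The patient was treated with penicillin.", TM))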
Anthology ID:
2023.findings-acl.653
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
10287–10299
URL:
https://aclanthology.org/2023.findings-acl.653
DOI:
10.18653/v1/2023.findings-acl.653
Bibkey:
Cite (ACL):
Yongyu Mu, Abudurexiti Reheman, Zhiquan Cao, Yuchun Fan, Bei Li, Yinqiao Li, Tong Xiao, Chunliang Zhang, and Jingbo Zhu. 2023. Augmenting Large Language Model Translators via Translation Memories. In Findings of the Association for Computational Linguistics: ACL 2023, pages 10287–10299, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Augmenting Large Language Model Translators via Translation Memories (Mu et al., Findings 2023)
PDF:
https://preview.aclanthology.org/emnlp-22-attachments/2023.findings-acl.653.pdf
Video:
https://preview.aclanthology.org/emnlp-22-attachments/2023.findings-acl.653.mp4