Boosting LLM Translation Skills without General Ability Loss via Rationale Distillation

Junhong Wu, Yang Zhao, Yangyifan Xu, Bing Liu, Chengqing Zong

Abstract
Large Language Models (LLMs) have achieved impressive results across numerous NLP tasks, and fine-tuning them for Machine Translation (MT) further improves their translation performance. However, vanilla fine-tuning often leads to catastrophic forgetting, compromising the broad general abilities of LLMs and introducing potential security risks. Because these abilities are developed with proprietary, unavailable training data, simple data replay methods are ineffective. To overcome this issue, we propose Rationale Distillation (RaDis). RaDis harnesses the strong generative capabilities of LLMs to create rationales for the training data, which are then “replayed” to prevent forgetting. These rationales connect prior knowledge with the new task and act as self-distillation targets that regulate the training process. By jointly training on reference translations and self-generated rationales, the model learns new translation skills while preserving its general abilities on other tasks. Additionally, RaDis offers a fresh perspective on using rationales in the continual learning (CL) field and has the potential to serve as a general continual learning method for a variety of tasks.
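The abstract describes the recipe only at a high level: generate a rationale with the original model, then fine-tune on the reference translation concatenated with that rationale. The sketch below illustrates one way this could look in code; the model name, the English→German prompt wording, the rationale format, and the helper functions are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch of the RaDis idea (hypothetical prompts and model choice).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # assumed base LLM, not from the paper
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)


def generate_rationale(source: str, reference: str) -> str:
    """Step 1: let the *original* (pre-fine-tuning) LLM explain the reference.

    The self-generated rationale encodes the model's prior knowledge and is
    later "replayed" as a self-distillation target.
    """
    prompt = (
        "Translate the sentence into German and explain your reasoning.\n"
        f"Source: {source}\nTranslation: {reference}\nRationale:"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=128)
    # Keep only the newly generated rationale tokens.
    return tokenizer.decode(
        out[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )


def radis_loss(source: str, reference: str, rationale: str) -> torch.Tensor:
    """Step 2: joint target = reference translation + replayed rationale.

    Supervising both segments teaches the new translation skill while the
    rationale regularizes the model toward its prior behaviour.
    """
    prompt = f"Translate the sentence into German.\nSource: {source}\nTranslation:"
    target = f" {reference}\nRationale: {rationale}{tokenizer.eos_token}"
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    target_ids = tokenizer(
        target, return_tensors="pt", add_special_tokens=False
    ).input_ids
    input_ids = torch.cat([prompt_ids, target_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100  # supervise only the target span
    return model(input_ids=input_ids, labels=labels).loss
```

Under these assumptions, rationales would be generated once with the frozen pre-fine-tuning model and cached, and training would then minimize radis_loss over the MT corpus with a standard optimizer.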
Anthology ID:
2025.findings-acl.632
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venues:
Findings | WS
Publisher:
Association for Computational Linguistics
Pages:
12217–12236
URL:
https://preview.aclanthology.org/ingestion-acl-25/2025.findings-acl.632/
Cite (ACL):
Junhong Wu, Yang Zhao, Yangyifan Xu, Bing Liu, and Chengqing Zong. 2025. Boosting LLM Translation Skills without General Ability Loss via Rationale Distillation. In Findings of the Association for Computational Linguistics: ACL 2025, pages 12217–12236, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Boosting LLM Translation Skills without General Ability Loss via Rationale Distillation (Wu et al., Findings 2025)
PDF:
https://preview.aclanthology.org/ingestion-acl-25/2025.findings-acl.632.pdf