InstructionCP: A Simple yet Effective Approach for Transferring Large Language Models to Target Languages

Kuang-Ming Chen, Jenq-Neng Hwang, Hung-yi Lee


Abstract
The rapid development of large language models (LLMs) in recent years has largely focused on English, resulting in models that respond exclusively in English. To adapt these models to other languages, continual pre-training (CP) is often employed, followed by supervised fine-tuning (SFT) to maintain conversational abilities. However, CP and SFT can reduce a model’s ability to filter harmful content. We propose Instruction Continual Pre-training (InsCP), which integrates instruction tags—also known as chat templates—into the CP process to prevent loss of conversational proficiency while acquiring new languages. Our experiments demonstrate that InsCP retains conversational and Reinforcement Learning from Human Feedback (RLHF) abilities. Empirical evaluations on language alignment, reliability, and knowledge benchmarks confirm the efficacy of InsCP. Notably, this approach requires only 0.1 billion tokens of high-quality instruction-following data, thereby reducing resource consumption.
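The abstract's core idea is to format continual pre-training text with instruction tags (chat-template markers) so that CP data resembles instruction-following data. The sketch below illustrates one plausible way to prepare such data in Python; the Llama-2-style [INST]...[/INST] tag strings, the function name, and the example documents are illustrative assumptions, not the paper's exact recipe.

```python
# A minimal sketch of the InsCP data-preparation idea: wrap each raw
# target-language document in a chat template before tokenization, so
# that continual pre-training text looks like an instruction turn.
# Assumption: Llama-2-chat-style tags; the paper's exact template may differ.

def wrap_with_chat_template(document: str,
                            bos: str = "<s>",
                            inst_open: str = "[INST]",
                            inst_close: str = "[/INST]") -> str:
    """Format one raw target-language document as an instruction turn."""
    return f"{bos}{inst_open} {document.strip()} {inst_close}"

# Hypothetical CP corpus in a target language (here, Traditional Chinese).
corpus = [
    "今天的天氣很好。",
    "大型語言模型正在快速發展。",
]

# The wrapped strings would then be tokenized and used for continual
# pre-training in place of the raw documents.
for doc in corpus:
    print(wrap_with_chat_template(doc))
```

The design intuition, as described in the abstract, is that keeping the chat template present throughout CP prevents the model from drifting away from its instruction-following format while it acquires the new language.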
Anthology ID:
2025.sigtyp-1.1
Volume:
Proceedings of the 7th Workshop on Research in Computational Linguistic Typology and Multilingual NLP
Month:
August
Year:
2025
Address:
Vienna, Austria
Editors:
Michael Hahn, Priya Rani, Ritesh Kumar, Andreas Shcherbakov, Alexey Sorokin, Oleg Serikov, Ryan Cotterell, Ekaterina Vylomova
Venues:
SIGTYP | WS
Publisher:
Association for Computational Linguistics
Pages:
1–6
URL:
https://preview.aclanthology.org/acl25-workshop-ingestion/2025.sigtyp-1.1/
Cite (ACL):
Kuang-Ming Chen, Jenq-Neng Hwang, and Hung-yi Lee. 2025. InstructionCP: A Simple yet Effective Approach for Transferring Large Language Models to Target Languages. In Proceedings of the 7th Workshop on Research in Computational Linguistic Typology and Multilingual NLP, pages 1–6, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
InstructionCP: A Simple yet Effective Approach for Transferring Large Language Models to Target Languages (Chen et al., SIGTYP 2025)
PDF:
https://preview.aclanthology.org/acl25-workshop-ingestion/2025.sigtyp-1.1.pdf