Lingxun Meng
2025
Don’t Half-listen: Capturing Key-part Information in Continual Instruction Tuning
Yongquan He | Wenyuan Zhang | Xuancheng Huang | Peng Zhang | Lingxun Meng | Xiang Zhou | Ke Zeng | Xunliang Cai
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Instruction tuning can steer large language models (LLMs) to produce outputs consistent with human goals on specific downstream tasks. However, continual instruction tuning (CIT) may cause the catastrophic forgetting (CF) problem, where previously learned abilities degrade. Recent methods try to alleviate CF by modifying models or replaying data, but the resulting models may memorize only the surface-level patterns of instructions and become confused on held-out tasks. In this paper, we propose a novel continual instruction tuning method based on Key-part Information Gain (KPIG). Our method computes the information gain on masked key parts to dynamically replay data and refine the training objective, enabling LLMs to capture task-aware information relevant to the correct response and to avoid overfitting to general descriptions in instructions. In addition, we propose two metrics, P-score and V-score, to measure the generalization and instruction-following abilities of LLMs. Experiments demonstrate that our method achieves superior performance on both seen and held-out tasks.
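To make the masked-key-part idea concrete, here is a minimal sketch of measuring how much a key part of an instruction contributes to the likelihood of the correct response, assuming a HuggingFace causal LM. The model name (gpt2), the [MASK] masking scheme, and the mean log-likelihood-difference definition of the gain are illustrative assumptions for this sketch, not the paper's exact KPIG formulation.

```python
# Sketch: information gain from a masked key part of an instruction.
# Assumptions: gpt2 as a stand-in model, a simple [MASK] substitution,
# and gain = mean difference in response token log-probs (illustrative).
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper tunes larger LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def response_logprobs(instruction: str, response: str) -> torch.Tensor:
    """Per-token log-probs of the response conditioned on the instruction."""
    prompt_ids = tokenizer(instruction, return_tensors="pt").input_ids
    resp_ids = tokenizer(response, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, resp_ids], dim=-1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Logits at position t predict token t+1; slice the response span.
    resp_logits = logits[0, prompt_ids.shape[-1] - 1 : -1]
    logprobs = F.log_softmax(resp_logits, dim=-1)
    return logprobs.gather(-1, resp_ids[0].unsqueeze(-1)).squeeze(-1)

def key_part_information_gain(instruction: str,
                              masked_instruction: str,
                              response: str) -> float:
    """How much the key part raises the response likelihood.
    A large gain suggests the model relies on the key part rather than
    on surface patterns of the instruction."""
    full = response_logprobs(instruction, response)
    masked = response_logprobs(masked_instruction, response)
    return (full - masked).mean().item()

gain = key_part_information_gain(
    "Translate to French: good morning",   # full instruction
    "Translate to [MASK]: good morning",   # key part masked out
    " bonjour",
)
print(f"information gain: {gain:.4f}")
```

In a CIT loop, a score like this could flag examples where the gain is low (the model is "half-listening" to the key part) as candidates for replay, which is the intuition behind the dynamic replay described in the abstract.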