Continual Learning of Large Language Models

Tongtong Wu, Trang Vu, Linhao Luo, Gholamreza Haffari


Abstract
As large language models (LLMs) continue to expand in size and utility, keeping them current with evolving knowledge and shifting user preferences becomes an increasingly urgent yet challenging task. This tutorial offers a comprehensive exploration of continual learning (CL) in the context of LLMs, presenting a structured framework that spans continual pre-training, instruction tuning, and alignment. Grounded in recent survey work and empirical studies, we discuss emerging trends, key methods, and practical insights from both academic research and industry deployments. In addition, we highlight the new frontier of lifelong LLM agents, i.e., systems capable of autonomous, self-reflective, and tool-augmented adaptation. Participants will gain a deep understanding of the computational, algorithmic, and ethical challenges inherent to CL in LLMs, and learn about strategies to mitigate forgetting, manage data and evaluation pipelines, and design systems that can adapt responsibly and reliably over time. This tutorial will benefit researchers and practitioners interested in advancing the long-term effectiveness, adaptability, and safety of foundation models.
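The abstract mentions strategies to mitigate forgetting; as a minimal, illustrative sketch (not a method taken from the tutorial itself), the snippet below shows experience replay, a common continual-learning baseline in which a small buffer of past examples is rehearsed alongside each new batch during sequential fine-tuning. The `ReplayBuffer` class and the toy regression tasks are hypothetical stand-ins chosen only to keep the example self-contained and runnable.

```python
# Illustrative sketch of replay-based continual fine-tuning, a standard
# baseline for mitigating catastrophic forgetting. Model, tasks, and the
# ReplayBuffer helper are hypothetical placeholders, not the tutorial's method.
import random
import torch
import torch.nn as nn


class ReplayBuffer:
    """Reservoir-style buffer that keeps a bounded sample of past examples."""

    def __init__(self, capacity: int = 1000):
        self.capacity = capacity
        self.data: list[tuple[torch.Tensor, torch.Tensor]] = []
        self.seen = 0

    def add(self, x: torch.Tensor, y: torch.Tensor) -> None:
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            # Reservoir sampling keeps each seen example with equal probability.
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = (x, y)

    def sample(self, k: int) -> list[tuple[torch.Tensor, torch.Tensor]]:
        return random.sample(self.data, min(k, len(self.data)))


def train_on_task(model, optimizer, task_loader, buffer, replay_k=8):
    """Fine-tune on the current task while rehearsing a few stored examples."""
    loss_fn = nn.MSELoss()
    for x, y in task_loader:
        batch = [(x, y)] + buffer.sample(replay_k)   # current batch + replayed data
        optimizer.zero_grad()
        loss = sum(loss_fn(model(bx), by) for bx, by in batch) / len(batch)
        loss.backward()
        optimizer.step()
        buffer.add(x.detach(), y.detach())           # store for future tasks


if __name__ == "__main__":
    # Toy regression "tasks" stand in for a sequence of shifting data distributions.
    model = nn.Linear(4, 1)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
    buffer = ReplayBuffer(capacity=256)
    for task_id in range(3):
        task_loader = [(torch.randn(16, 4), torch.randn(16, 1)) for _ in range(50)]
        train_on_task(model, optimizer, task_loader, buffer)
        print(f"finished task {task_id}, buffer size = {len(buffer.data)}")
```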
Anthology ID: 2025.emnlp-tutorials.7
Volume: Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts
Month: November
Year: 2025
Address: Suzhou, China
Editors: Valentina Pyatkin, Andreas Vlachos
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 16–17
URL: https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-tutorials.7/
Cite (ACL): Tongtong Wu, Trang Vu, Linhao Luo, and Gholamreza Haffari. 2025. Continual Learning of Large Language Models. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts, pages 16–17, Suzhou, China. Association for Computational Linguistics.
Cite (Informal): Continual Learning of Large Language Models (Wu et al., EMNLP 2025)
PDF: https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-tutorials.7.pdf