@inproceedings{wu-etal-2025-continual,
    title = "Continual Learning of Large Language Models",
    author = "Wu, Tongtong  and
      Vu, Trang  and
      Luo, Linhao  and
      Haffari, Gholamreza",
    editor = "Pyatkin, Valentina  and
      Vlachos, Andreas",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-tutorials.7/",
    pages = "16--17",
    ISBN = "979-8-89176-336-4",
    abstract = "As large language models (LLMs) continue to expand in size and utility, keeping them current with evolving knowledge and shifting user preferences becomes an increasingly urgent yet challenging task. This tutorial offers a comprehensive exploration of continual learning (CL) in the context of LLMs, presenting a structured framework that spans continual pre-training, instruction tuning, and alignment. Grounded in recent survey work and empirical studies, we discuss emerging trends, key methods, and practical insights from both academic research and industry deployments. In addition, we highlight the new frontier of lifelong LLM agents, i.e., systems capable of autonomous, self-reflective, and tool-augmented adaptation. Participants will gain a deep understanding of the computational, algorithmic, and ethical challenges inherent to CL in LLMs, and learn about strategies to mitigate forgetting, manage data and evaluation pipelines, and design systems that can adapt responsibly and reliably over time. This tutorial will benefit researchers and practitioners interested in advancing the long-term effectiveness, adaptability, and safety of foundation models."
}