Learn or Recall? Revisiting Incremental Learning with Pre-trained Language Models

Junhao Zheng, Shengjie Qiu, Qianli Ma


Abstract
Incremental Learning (IL) has been a long-standing problem in both the vision and Natural Language Processing (NLP) communities. In recent years, as Pre-trained Language Models (PLMs) have achieved remarkable progress on various NLP downstream tasks, using PLMs as backbones has become common practice in recent IL research in NLP. Most studies assume that catastrophic forgetting is the biggest obstacle to achieving superior IL performance and propose various techniques to overcome this issue. However, we find that this assumption is problematic. Specifically, we revisit more than 20 methods on four classification tasks (Text Classification, Intent Classification, Relation Extraction, and Named Entity Recognition) under the two most popular IL settings (Class-Incremental and Task-Incremental) and reveal that most of them severely underestimate the inherent anti-forgetting ability of PLMs. Based on this observation, we propose a frustratingly easy method called SEQ* for IL with PLMs. The results show that SEQ* achieves competitive or superior performance compared with state-of-the-art (SOTA) IL methods while requiring considerably fewer trainable parameters and less training time. These findings urge us to revisit IL with PLMs and encourage future studies to build a fundamental understanding of catastrophic forgetting in PLMs.
Anthology ID:
2024.acl-long.794
Volume:
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
14848–14877
URL:
https://aclanthology.org/2024.acl-long.794
DOI:
10.18653/v1/2024.acl-long.794
Cite (ACL):
Junhao Zheng, Shengjie Qiu, and Qianli Ma. 2024. Learn or Recall? Revisiting Incremental Learning with Pre-trained Language Models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 14848–14877, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Learn or Recall? Revisiting Incremental Learning with Pre-trained Language Models (Zheng et al., ACL 2024)
PDF:
https://preview.aclanthology.org/dois-2013-emnlp/2024.acl-long.794.pdf