Weimin Xiong
2023
InfoCL: Alleviating Catastrophic Forgetting in Continual Text Classification from An Information Theoretic Perspective
Yifan Song | Peiyi Wang | Weimin Xiong | Dawei Zhu | Tianyu Liu | Zhifang Sui | Sujian Li
Findings of the Association for Computational Linguistics: EMNLP 2023
Continual learning (CL) aims to constantly learn new knowledge over time while avoiding catastrophic forgetting on old tasks. We focus on continual text classification under the class-incremental setting. Recent CL studies have identified the severe performance decrease on analogous classes as a key factor for catastrophic forgetting. In this paper, through an in-depth exploration of the representation learning process in CL, we discover that the compression effect of the information bottleneck leads to confusion on analogous classes. To enable the model to learn more sufficient representations, we propose a novel replay-based continual text classification method, InfoCL. Our approach utilizes fast-slow and current-past contrastive learning to perform mutual information maximization and better recover the previously learned representations. In addition, InfoCL incorporates an adversarial memory augmentation strategy to alleviate the overfitting problem of replay. Experimental results demonstrate that InfoCL effectively mitigates forgetting and achieves state-of-the-art performance on three text classification tasks.
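As a rough illustration of the contrastive mutual-information maximization mentioned in the abstract, the sketch below implements a generic InfoNCE-style loss between two views of the same examples (e.g., a "fast" and a "slow" encoder output). This is only a minimal sketch under assumed names (info_nce_loss, fast_repr, slow_repr, temperature), not the authors' InfoCL implementation.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(fast_repr: torch.Tensor,
                  slow_repr: torch.Tensor,
                  temperature: float = 0.1) -> torch.Tensor:
    """Generic InfoNCE-style contrastive loss (illustrative, not the paper's code).

    Each example's first view is pulled toward its own second view and pushed
    away from the other examples' views; this objective lower-bounds the mutual
    information between the two views.
    """
    # Normalize so the dot product becomes a cosine similarity.
    fast = F.normalize(fast_repr, dim=-1)
    slow = F.normalize(slow_repr, dim=-1)
    # Similarity matrix: entry (i, j) compares view-1 of example i with view-2 of example j.
    logits = fast @ slow.t() / temperature
    # Positives lie on the diagonal (same underlying example).
    targets = torch.arange(fast.size(0), device=fast.device)
    return F.cross_entropy(logits, targets)

# Toy usage with random features standing in for encoder outputs.
fast_batch = torch.randn(8, 256)
slow_batch = torch.randn(8, 256)
loss = info_nce_loss(fast_batch, slow_batch)
```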
Rationale-Enhanced Language Models are Better Continual Relation Learners
Weimin Xiong | Yifan Song | Peiyi Wang | Sujian Li
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Continual relation extraction (CRE) aims to solve the problem of catastrophic forgetting when learning a sequence of newly emerging relations. Recent CRE studies have found that catastrophic forgetting arises from the model’s lack of robustness against future analogous relations. To address the issue, we introduce rationales, i.e., explanations of relation classification results generated by Large Language Models (LLMs), into the CRE task. Specifically, we design a multi-task rationale tuning strategy to help the model learn current relations robustly. We also conduct contrastive rationale replay to further distinguish analogous relations. Experimental results on two standard benchmarks demonstrate that our method outperforms state-of-the-art CRE models.
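Multi-task rationale tuning of this kind is typically expressed as a weighted sum of a relation-classification loss and a rationale-generation loss; the formulation below is a generic sketch under assumed notation (the weight \lambda and the specific loss terms are not taken from the paper):

\mathcal{L}_{\text{total}} = \mathcal{L}_{\text{cls}} + \lambda \, \mathcal{L}_{\text{rationale}}

where \mathcal{L}_{\text{cls}} is the cross-entropy loss over relation labels and \mathcal{L}_{\text{rationale}} is a token-level language-modeling loss on the LLM-generated explanation.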
Co-authors
- Yifan Song 2
- Peiyi Wang 2
- Sujian Li 2
- Dawei Zhu 1
- Tianyu Liu 1