@inproceedings{wang-etal-2022-ask,
    title = "Ask Question First for Enhancing Lifelong Language Learning",
    author = "Wang, Han  and
      Fu, Ruiliu  and
      Zhang, Xuejun  and
      Zhou, Jun  and
      Zhao, Qingwei",
    editor = "Calzolari, Nicoletta  and
      Huang, Chu-Ren  and
      Kim, Hansaem  and
      Pustejovsky, James  and
      Wanner, Leo  and
      Choi, Key-Sun  and
      Ryu, Pum-Mo  and
      Chen, Hsin-Hsi  and
      Donatelli, Lucia  and
      Ji, Heng  and
      Kurohashi, Sadao  and
      Paggio, Patrizia  and
      Xue, Nianwen  and
      Kim, Seokhwan  and
      Hahm, Younggyun  and
      He, Zhong  and
      Lee, Tony Kyungil  and
      Santus, Enrico  and
      Bond, Francis  and
      Na, Seung-Hoon",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2022.coling-1.408/",
    pages = "4610--4621",
    abstract = "Lifelong language learning aims to stream learning NLP tasks while retaining knowledge of previous tasks. Previous works based on the language model and following data-free constraint approaches have explored formatting all data as ``begin token (B) + context (C) + question (Q) + answer (A)'' for different tasks. However, they still suffer from catastrophic forgetting and are exacerbated when the previous task{'}s pseudo data is insufficient for the following reasons: (1) The model has difficulty generating task-corresponding pseudo data, and (2) A is prone to error when A and C are separated by Q because the information of the C is diminished before generating A. Therefore, we propose the Ask Question First and Replay Question (AQF-RQ), including a novel data format ``BQCA'' and a new training task to train pseudo questions of previous tasks. Experimental results demonstrate that AQF-RQ makes it easier for the model to generate more pseudo data that match corresponding tasks, and is more robust to both sufficient and insufficient pseudo-data when the task boundary is both clear and unclear. AQF-RQ can achieve only 0.36{\%} lower performance than multi-task learning."
}