Learning Label Modular Prompts for Text Classification in the Wild

Hailin Chen, Amrita Saha, Shafiq Joty, Steven C.H. Hoi


Abstract
Machine learning models usually assume i.i.d. data during training and testing, but data and tasks in the real world often change over time. To emulate the transient nature of the real world, we propose a challenging but practical task: text classification in-the-wild, which introduces different non-stationary training/testing stages. Decomposing a complex task into modular components can enable robust generalisation under such a non-stationary environment. However, current modular approaches in NLP do not take advantage of recent advances in parameter-efficient tuning of pretrained language models. To close this gap, we propose ModularPrompt, a label-modular prompt tuning framework for text classification tasks. In ModularPrompt, the input prompt consists of a sequence of soft label prompts, each encoding modular knowledge related to the corresponding class label. In two of the most formidable settings, ModularPrompt outperforms relevant baselines by a large margin, demonstrating strong generalisation ability. We also conduct comprehensive analysis to validate whether the learned prompts satisfy properties of a modular representation.
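To make the core idea concrete, the following is a minimal sketch of label-modular prompt composition: each class label owns its own short sequence of soft prompt vectors, and the input to the model is formed by prepending the prompts of only the currently relevant labels. This is an illustrative toy in plain Python, not the authors' implementation; the function and variable names (`build_modular_prompt`, `label_prompts`) are hypothetical.

```python
import random

def build_modular_prompt(label_prompts, active_labels, input_embeddings):
    """Compose a model input by prepending the soft prompt vectors of the
    active labels (in order) to the token embeddings of the input text."""
    prompt = []
    for label in active_labels:
        # each label contributes its own learned block of prompt vectors
        prompt.extend(label_prompts[label])
    return prompt + input_embeddings

# Toy setup: 3 labels, each with 2 soft prompt vectors of dimension 4.
dim = 4
label_prompts = {
    label: [[random.random() for _ in range(dim)] for _ in range(2)]
    for label in ["sports", "politics", "tech"]
}
# Placeholder token embeddings for a 5-token input sentence.
input_embeddings = [[0.0] * dim for _ in range(5)]

# Only the labels present in the current (possibly shifted) stage are used.
seq = build_modular_prompt(label_prompts, ["sports", "tech"], input_embeddings)
print(len(seq))  # → 9 (2 labels × 2 prompt vectors + 5 input tokens)
```

Because each label's prompt block is independent, labels can be added, dropped, or recombined across non-stationary stages without retraining the others, which is the modularity property the paper analyses.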
Anthology ID:
2022.emnlp-main.109
Volume:
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
1677–1690
URL:
https://aclanthology.org/2022.emnlp-main.109
Cite (ACL):
Hailin Chen, Amrita Saha, Shafiq Joty, and Steven C.H. Hoi. 2022. Learning Label Modular Prompts for Text Classification in the Wild. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 1677–1690, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Learning Label Modular Prompts for Text Classification in the Wild (Chen et al., EMNLP 2022)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2022.emnlp-main.109.pdf