Adversarial Knowledge Stimulated Contrastive Prompting for Few-shot Language Learners

Kai Zheng, Qingfeng Sun, Yaming Yang, Tengchao Lv, Yeyong Pi, Changlin Zhao, Fei Xu, Qi Zhang


Abstract
Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-shot Natural Language Understanding (NLU) tasks by employing task-specific prompts. Yet, PLMs are unfamiliar with prompt-style expressions during pre-training, which limits few-shot learning performance on downstream tasks. It would be desirable if the models could stimulate prompting knowledge while adapting to specific NLU tasks. We present the Adversarial Knowledge Stimulated Contrastive Prompting (AKSCP) framework, which improves few-shot NLU performance by implicitly stimulating knowledge from the pre-trained language model. In AKSCP, a novel paradigm, the Cloze-driven prompt, is proposed for joint prompt tuning across the word cloze task and prompt-based learning, forcing PLMs to stimulate prompting knowledge. We further design an adversarial contrastive learning method to improve the generalization ability of PLMs across different downstream tasks. Experiments over a variety of NLU tasks show that AKSCP consistently outperforms the state of the art for prompt-based fine-tuning.
Anthology ID:
2023.findings-acl.852
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
13495–13507
URL:
https://preview.aclanthology.org/build-pipeline-with-new-library/2023.findings-acl.852/
DOI:
10.18653/v1/2023.findings-acl.852
Cite (ACL):
Kai Zheng, Qingfeng Sun, Yaming Yang, Tengchao Lv, Yeyong Pi, Changlin Zhao, Fei Xu, and Qi Zhang. 2023. Adversarial Knowledge Stimulated Contrastive Prompting for Few-shot Language Learners. In Findings of the Association for Computational Linguistics: ACL 2023, pages 13495–13507, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Adversarial Knowledge Stimulated Contrastive Prompting for Few-shot Language Learners (Zheng et al., Findings 2023)
PDF:
https://preview.aclanthology.org/build-pipeline-with-new-library/2023.findings-acl.852.pdf