Abstract
Prompt-based learning has achieved excellent performance in few-shot learning by mapping the outputs of a pre-trained language model to labels with the help of a label mapping component. Existing manual label mapping (MLM) methods achieve good results but rely heavily on expensive human knowledge. Automatic label mapping (ALM) methods, which learn the mapping functions with extra parameters, have shown their potential. However, no ALM model comparable to MLM methods has yet been developed, due to the limited data. In this paper, we propose a Latent Pseudo Label Mapping (LPLM) method that optimizes the label mapping without human knowledge or extra parameters. LPLM is built upon a probabilistic latent model and is iteratively self-improved with an EM-style algorithm. The empirical results demonstrate that our LPLM method is superior to mainstream ALM methods and significantly outperforms the SOTA method on few-shot classification tasks. Moreover, LPLM also shows impressively better performance than the vanilla MLM method, which requires extra task-specific prior knowledge.

- Anthology ID: 2022.findings-emnlp.291
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2022
- Month: December
- Year: 2022
- Address: Abu Dhabi, United Arab Emirates
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 3952–3962
- URL: https://aclanthology.org/2022.findings-emnlp.291
- Cite (ACL): Jirui Qi, Richong Zhang, Junfan Chen, Jaein Kim, and Yongyi Mao. 2022. Parameter-free Automatically Prompting: A Latent Pseudo Label Mapping Model for Prompt-based Learning. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 3952–3962, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
- Cite (Informal): Parameter-free Automatically Prompting: A Latent Pseudo Label Mapping Model for Prompt-based Learning (Qi et al., Findings 2022)
- PDF: https://preview.aclanthology.org/ingestion-script-update/2022.findings-emnlp.291.pdf