What Makes Pre-trained Language Models Better Zero-shot Learners?
Jinghui Lu, Dongsheng Zhu, Weidong Han, Rui Zhao, Brian Mac Namee, Fei Tan
Abstract
Current methods for prompt learning in zero-shot scenarios widely rely on a development set with sufficient human-annotated data to select the best-performing prompt template a posteriori. This is not ideal because, in a real-world zero-shot scenario of practical relevance, no labelled data is available. Thus, we propose a simple yet effective method for screening reasonable prompt templates in zero-shot text classification: Perplexity Selection (Perplection). We hypothesize that language discrepancy can be used to measure the efficacy of prompt templates, and thereby develop a substantiated perplexity-based scheme that forecasts the performance of prompt templates in advance. Experiments show that our method leads to improved prediction performance in a realistic zero-shot setting, eliminating the need for any labelled examples.
- Anthology ID: 2023.acl-long.128
- Volume: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
- Month: July
- Year: 2023
- Address: Toronto, Canada
- Editors: Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
- Venue: ACL
- Publisher: Association for Computational Linguistics
- Pages: 2288–2303
- URL: https://preview.aclanthology.org/jlcl-multiple-ingestion/2023.acl-long.128/
- DOI: 10.18653/v1/2023.acl-long.128
- Cite (ACL): Jinghui Lu, Dongsheng Zhu, Weidong Han, Rui Zhao, Brian Mac Namee, and Fei Tan. 2023. What Makes Pre-trained Language Models Better Zero-shot Learners?. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2288–2303, Toronto, Canada. Association for Computational Linguistics.
- Cite (Informal): What Makes Pre-trained Language Models Better Zero-shot Learners? (Lu et al., ACL 2023)
- PDF: https://preview.aclanthology.org/jlcl-multiple-ingestion/2023.acl-long.128.pdf
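To illustrate the core idea in the abstract, below is a minimal sketch of perplexity-based prompt-template selection. It is not the authors' released implementation: the model choice (GPT-2), the candidate templates, the unlabelled texts, and the way the label slot is verbalised are all illustrative assumptions; the only grounded element is the principle of ranking templates by the perplexity a pretrained language model assigns to the filled-in prompts, using no labelled data.

```python
# Illustrative sketch: rank candidate prompt templates by the average perplexity
# a pretrained causal LM assigns to the filled-in prompts, and keep the template
# with the lowest score. No labelled examples are used.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model_name = "gpt2"  # assumption: any causal LM can serve as the scorer
tokenizer = GPT2TokenizerFast.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the LM: exp of the mean token negative log-likelihood."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return math.exp(loss.item())

# Hypothetical candidate templates and unlabelled inputs (placeholders only).
templates = [
    "{text} All in all, it was a great experience.",
    "{text} In summary, the sentiment of this text is positive.",
]
unlabelled_texts = [
    "The film was gripping from start to finish.",
    "The battery barely lasts an hour.",
]

def template_score(template: str) -> float:
    # Average perplexity of the template instantiated with unlabelled texts.
    filled = [template.format(text=t) for t in unlabelled_texts]
    return sum(perplexity(f) for f in filled) / len(filled)

# Pick the template the LM finds most natural (lowest average perplexity).
best_template = min(templates, key=template_score)
print("Selected template:", best_template)
```

In this sketch, lower average perplexity is taken as a proxy for how natural the prompted text reads to the language model, which is the signal the paper proposes for forecasting template performance before any labels are seen.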