Enabling Natural Zero-Shot Prompting on Encoder Models via Statement-Tuning

Ahmed Elshabrawy, Yongxin Huang, Iryna Gurevych, Alham Fikri Aji


Abstract
While Large Language Models (LLMs) exhibit remarkable capabilities in zero-shot and few-shot scenarios, they often require computationally prohibitive sizes. Conversely, smaller Masked Language Models (MLMs) such as BERT and RoBERTa achieve state-of-the-art results through fine-tuning but struggle to extend to few-shot and zero-shot settings due to their architectural constraints. Hence, we propose Statement-Tuning, a technique that models discriminative tasks as a finite set of statements and trains an encoder model to discriminate between the potential statements to determine the label. We apply Statement-Tuning to multiple tasks to enable cross-task generalization. Experimental results demonstrate that Statement-Tuning achieves performance competitive with state-of-the-art LLMs while using significantly fewer parameters. Furthermore, we compare against previous encoder-based methodology and show that our method is more accurate and more robust to spurious patterns. Moreover, we investigate the impact of several design choices on few-shot and zero-shot generalization, finding that Statement-Tuning achieves strong performance with modest training data and benefits from task and statement diversity when generalizing to unseen tasks. We release all the code used to generate statement data and to train and evaluate our Statement-Tuned models.
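
To illustrate the idea, the snippet below is a minimal sketch (not the authors' released code) of how a Statement-Tuned encoder could be queried at inference time: each candidate label is verbalized into a natural-language statement, a binary encoder classifier scores how likely each statement is to be true, and the highest-scoring statement determines the prediction. The checkpoint name, statement templates, and helper function are illustrative assumptions rather than the paper's exact setup.

```python
# Minimal sketch of statement-based zero-shot inference with an encoder classifier.
# "roberta-base" is a placeholder; a Statement-Tuned checkpoint would be used instead.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "roberta-base"  # hypothetical placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def score_statement(statement: str) -> float:
    """Return the model's probability that the given statement is true."""
    inputs = tokenizer(statement, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

# Zero-shot sentiment classification: verbalize each label as a statement.
text = "The movie was a delightful surprise."
statements = {
    "positive": f"{text} This review expresses a positive sentiment.",
    "negative": f"{text} This review expresses a negative sentiment.",
}
prediction = max(statements, key=lambda label: score_statement(statements[label]))
print(prediction)
```

Because every discriminative task reduces to scoring a small, finite set of such statements, the same binary classification head can in principle be reused across tasks without task-specific output layers.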
Anthology ID:
2025.findings-naacl.465
Volume:
Findings of the Association for Computational Linguistics: NAACL 2025
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
8302–8321
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.465/
Cite (ACL):
Ahmed Elshabrawy, Yongxin Huang, Iryna Gurevych, and Alham Fikri Aji. 2025. Enabling Natural Zero-Shot Prompting on Encoder Models via Statement-Tuning. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 8302–8321, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
Enabling Natural Zero-Shot Prompting on Encoder Models via Statement-Tuning (Elshabrawy et al., Findings 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.465.pdf