Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning

Prasetya Utama, Nafise Sadat Moosavi, Victor Sanh, Iryna Gurevych


Abstract
Recent prompt-based approaches allow pretrained language models to achieve strong performance on few-shot finetuning by reformulating downstream tasks as language modeling problems. In this work, we demonstrate that, despite their advantages in low-data regimes, finetuned prompt-based models for sentence pair classification tasks still suffer from a common pitfall of adopting inference heuristics based on lexical overlap, e.g., a model incorrectly assuming that a sentence pair has the same meaning because the two sentences consist of the same set of words. Interestingly, we find that this particular inference heuristic is significantly less present in the zero-shot evaluation of the prompt-based model, indicating how finetuning can be destructive to useful knowledge learned during pretraining. We then show that adding a regularization that preserves pretraining weights is effective in mitigating this destructive tendency of few-shot finetuning. Our evaluation on three datasets demonstrates promising improvements on the three corresponding challenge datasets used to diagnose the inference heuristics.
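The abstract describes a regularizer that keeps finetuned weights close to their pretrained values. As an illustration only (the paper's exact formulation may differ), a penalty of this kind can be sketched as an L2 distance between the current parameters and a frozen copy of the pretrained parameters, added to the task loss; the function name and `alpha` coefficient below are hypothetical.

```python
def l2_sp_penalty(current_weights, pretrained_weights, alpha=0.01):
    """Penalize deviation of finetuned weights from pretrained weights.

    A minimal sketch of a pretraining-preserving regularizer: the total
    training objective would be task_loss + l2_sp_penalty(...). Both
    arguments are flat lists of floats here for simplicity; in practice
    they would be model parameter tensors.
    """
    return alpha * sum((c - p) ** 2 for c, p in zip(current_weights, pretrained_weights))


# Usage: zero penalty when weights have not moved, positive otherwise.
print(l2_sp_penalty([1.0, 2.0], [1.0, 2.0]))        # no deviation
print(l2_sp_penalty([2.0], [1.0], alpha=0.5))       # penalizes the drift
```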
Anthology ID:
2021.emnlp-main.713
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
9063–9074
URL:
https://aclanthology.org/2021.emnlp-main.713
DOI:
10.18653/v1/2021.emnlp-main.713
Cite (ACL):
Prasetya Utama, Nafise Sadat Moosavi, Victor Sanh, and Iryna Gurevych. 2021. Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9063–9074, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning (Utama et al., EMNLP 2021)
PDF:
https://preview.aclanthology.org/improve-issue-templates/2021.emnlp-main.713.pdf
Video:
https://preview.aclanthology.org/improve-issue-templates/2021.emnlp-main.713.mp4
Code:
ukplab/emnlp2021-prompt-ft-heuristics
Data:
GLUE, MultiNLI, PAWS, SNLI