On the Calibration of Pre-trained Language Models using Mixup Guided by Area Under the Margin and Saliency

Seo Yeon Park, Cornelia Caragea


Abstract
A well-calibrated neural model produces confidence estimates (probability outputs) that closely match its expected accuracy. While prior studies have shown that mixup training, a data augmentation technique, can improve model calibration on image classification tasks, little is known about using mixup for model calibration on natural language understanding (NLU) tasks. In this paper, we explore mixup for model calibration on several NLU tasks and propose a novel mixup strategy for pre-trained language models that further improves model calibration. Our proposed mixup is guided by both the Area Under the Margin (AUM) statistic (Pleiss et al., 2020) and the saliency map of each sample (Simonyan et al., 2013). Moreover, we combine our mixup strategy with model miscalibration correction techniques (i.e., label smoothing and temperature scaling) and provide detailed analyses of their impact on our proposed mixup. We systematically design experiments on three NLU tasks: natural language inference, paraphrase detection, and commonsense reasoning. Our method achieves the lowest expected calibration error compared to strong baselines on both in-domain and out-of-domain test samples while maintaining competitive accuracy.
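For readers who want the abstract's ingredients made concrete, the following minimal PyTorch sketch illustrates the generic building blocks: the per-epoch margin that the AUM statistic averages, vanilla mixup interpolation, and expected calibration error (ECE). This is an illustrative sketch, not the authors' released code; in particular, the paper selects mixup partners using AUM and saliency, whereas the sketch below uses the standard random-permutation pairing, and all function and variable names are assumptions.

```python
import torch

def margin(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Per-sample margin of Pleiss et al. (2020): the assigned-class logit
    minus the largest other logit. AUM is this margin averaged over the
    epochs of training; low-AUM samples behave like mislabeled/hard data."""
    assigned = logits.gather(1, labels.unsqueeze(1)).squeeze(1)
    others = logits.scatter(1, labels.unsqueeze(1), float("-inf"))
    return assigned - others.amax(dim=1)

def mixup(x: torch.Tensor, y_onehot: torch.Tensor, alpha: float = 0.4):
    """Vanilla mixup (Zhang et al., 2018): convex combination of a batch
    with a shuffled copy of itself. The paper instead pairs samples using
    AUM and saliency; the random permutation here is the generic baseline."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    idx = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[idx]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[idx]
    return x_mix, y_mix

def expected_calibration_error(probs: torch.Tensor, labels: torch.Tensor,
                               n_bins: int = 10) -> torch.Tensor:
    """ECE: bin predictions by confidence, then average the |accuracy -
    confidence| gap per bin, weighted by the fraction of samples in it."""
    conf, pred = probs.max(dim=1)
    correct = pred.eq(labels).float()
    edges = torch.linspace(0.0, 1.0, n_bins + 1)
    ece = torch.zeros(())
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            gap = (correct[in_bin].mean() - conf[in_bin].mean()).abs()
            ece = ece + in_bin.float().mean() * gap
    return ece
```

In this framing, a model is well calibrated when the ECE is small, i.e., samples assigned confidence near 0.8 are correct about 80% of the time; the paper's contribution is the AUM- and saliency-guided choice of mixup partners, which this generic sketch does not reproduce.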
Anthology ID:
2022.acl-long.368
Volume:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
5364–5374
URL:
https://aclanthology.org/2022.acl-long.368
DOI:
10.18653/v1/2022.acl-long.368
Cite (ACL):
Seo Yeon Park and Cornelia Caragea. 2022. On the Calibration of Pre-trained Language Models using Mixup Guided by Area Under the Margin and Saliency. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5364–5374, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
On the Calibration of Pre-trained Language Models using Mixup Guided by Area Under the Margin and Saliency (Park & Caragea, ACL 2022)
PDF:
https://aclanthology.org/2022.acl-long.368.pdf
Data
MultiNLI, SNLI, SWAG