Answer-level Calibration for Free-form Multiple Choice Question Answering

Sawan Kumar


Abstract
Pre-trained language models have recently shown that training on large corpora using the language modeling objective enables few-shot and zero-shot capabilities on a variety of NLP tasks, including commonsense reasoning tasks. This is achieved using text interactions with the model, usually by posing the task as a natural language text completion problem. While using language model probabilities to obtain task-specific scores has been generally useful, it often requires task-specific heuristics such as length normalization or probability calibration. In this work, we consider the question answering format, where we need to choose from a set of (free-form) textual choices of unspecified lengths given a context. We present ALC (Answer-Level Calibration), where our main suggestion is to model context-independent biases in terms of the probability of a choice without the associated context and to subsequently remove them using an unsupervised estimate of similarity with the full context. We show that our unsupervised answer-level calibration consistently improves over or is competitive with baselines using standard evaluation metrics on a variety of tasks, including commonsense reasoning tasks. Further, we show that popular datasets potentially favor models biased towards easy cues which are available independent of the context. We analyze such biases using an associated F1-score. Our analysis indicates that answer-level calibration is able to remove such biases and leads to a more robust measure of model capability.
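The core idea described in the abstract can be sketched as follows: instead of ranking answer choices by their conditional log-probability alone, subtract the answer-only (context-free) log-probability to cancel context-independent biases. This is a minimal sketch, not the paper's exact formulation; the `weight` parameter is a hypothetical stand-in for the paper's unsupervised similarity estimate, and the log-probabilities would in practice come from a language model.

```python
def alc_score(logp_with_context, logp_without_context, weight=1.0):
    """Calibrated answer score.

    logp_with_context: log p(choice | context) from the language model.
    logp_without_context: log p(choice) scored without the context.
    weight: how strongly to remove the context-independent term
            (stand-in for an unsupervised similarity estimate).
    """
    return logp_with_context - weight * logp_without_context


def pick_answer(logps_ctx, logps_no_ctx, weight=1.0):
    """Return the index of the highest calibrated score."""
    scores = [
        alc_score(a, b, weight) for a, b in zip(logps_ctx, logps_no_ctx)
    ]
    return max(range(len(scores)), key=scores.__getitem__)


# Toy example with made-up log-probabilities: choice 0 is a priori
# likely (a context-independent "easy cue"), but calibration prefers
# choice 1, which gains more from the context.
best = pick_answer(
    logps_ctx=[-2.0, -2.5],     # log p(choice | context)
    logps_no_ctx=[-1.0, -5.0],  # log p(choice) alone
)
```

Without calibration, the raw conditional log-probabilities would select choice 0 here; removing the answer-only term flips the decision to choice 1.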
Anthology ID:
2022.acl-long.49
Volume:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
May
Year:
2022
Address:
Dublin, Ireland
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
665–679
URL:
https://aclanthology.org/2022.acl-long.49
DOI:
10.18653/v1/2022.acl-long.49
Cite (ACL):
Sawan Kumar. 2022. Answer-level Calibration for Free-form Multiple Choice Question Answering. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 665–679, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Answer-level Calibration for Free-form Multiple Choice Question Answering (Kumar, ACL 2022)
PDF:
https://preview.aclanthology.org/nodalida-main-page/2022.acl-long.49.pdf
Video:
https://preview.aclanthology.org/nodalida-main-page/2022.acl-long.49.mp4
Code
sawankumar28/alc
Data
ARC, COPA, CommonsenseQA, DREAM, MC-TACO, PIQA, SWAG, WinoGrande