Abstract
We investigate the problem of determining the predictive confidence (or, conversely, uncertainty) of a neural classifier through the lens of low-resource languages. By training models on sub-sampled datasets in three different languages, we assess the quality of estimates from a wide array of approaches and their dependence on the amount of available data. We find that while approaches based on pre-trained models and ensembles achieve the best results overall, the quality of uncertainty estimates can surprisingly suffer with more data. We also perform a qualitative analysis of uncertainties on sequences, discovering that a model’s total uncertainty seems to be influenced to a large degree by its data uncertainty, not model uncertainty. All model implementations are open-sourced in a software package.
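For ensemble-style methods, the split of total uncertainty into data (aleatoric) and model (epistemic) uncertainty mentioned in the abstract is commonly computed via the standard entropy decomposition: total uncertainty is the entropy of the averaged predictive distribution, data uncertainty is the average entropy of the individual members, and model uncertainty is their difference (the mutual information). The sketch below illustrates that decomposition with NumPy; the function names and example probabilities are illustrative and not taken from the paper or its released package.

```python
import numpy as np

def entropy(probs: np.ndarray, axis: int = -1) -> np.ndarray:
    """Shannon entropy in nats along the given axis."""
    return -np.sum(probs * np.log(np.clip(probs, 1e-12, 1.0)), axis=axis)

def uncertainty_decomposition(member_probs: np.ndarray):
    """Decompose an ensemble's total predictive uncertainty for one input.

    member_probs: array of shape (num_members, num_classes), each row being
    one ensemble member's predictive distribution.

    Returns (total, data, model), where
      total = entropy of the averaged prediction,
      data  = mean entropy of the individual members (aleatoric),
      model = total - data (epistemic, the mutual information).
    """
    mean_probs = member_probs.mean(axis=0)
    total = entropy(mean_probs)
    data = entropy(member_probs, axis=-1).mean()
    model = total - data
    return total, data, model

# Example: three ensemble members over a three-class problem.
probs = np.array([
    [0.70, 0.20, 0.10],
    [0.60, 0.30, 0.10],
    [0.65, 0.25, 0.10],
])
print(uncertainty_decomposition(probs))
```

If the members largely agree (as above), model uncertainty is small and the total is dominated by data uncertainty, matching the qualitative pattern the abstract reports for sequences.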
- Anthology ID: 2022.findings-emnlp.198
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2022
- Month: December
- Year: 2022
- Address: Abu Dhabi, United Arab Emirates
- Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 2707–2735
- URL: https://aclanthology.org/2022.findings-emnlp.198
- DOI: 10.18653/v1/2022.findings-emnlp.198
- Cite (ACL): Dennis Ulmer, Jes Frellsen, and Christian Hardmeier. 2022. Exploring Predictive Uncertainty and Calibration in NLP: A Study on the Impact of Method & Data Scarcity. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 2707–2735, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
- Cite (Informal): Exploring Predictive Uncertainty and Calibration in NLP: A Study on the Impact of Method & Data Scarcity (Ulmer et al., Findings 2022)
- PDF: https://preview.aclanthology.org/improve-issue-templates/2022.findings-emnlp.198.pdf