Cornelia Gruber
2025
Revisiting Active Learning under (Human) Label Variation
Cornelia Gruber | Helen Alber | Bernd Bischl | Göran Kauermann | Barbara Plank | Matthias Aßenmacher
Proceedings of the 4th Workshop on Perspectivist Approaches to NLP
Access to high-quality labeled data remains a limiting factor in applied supervised learning. Active learning (AL), a popular approach to optimizing the use of limited annotation budgets in training ML models, often relies on at least one of several simplifying assumptions, which rarely hold in practice when acknowledging human label variation (HLV). Label variation (LV), i.e., differing labels for the same instance, is common, especially in natural language processing. Yet annotation frameworks often still rest on the assumption of a single ground truth, overlooking HLV, i.e., the occurrence of plausible differences in annotations, as an informative signal. In this paper, we examine foundational assumptions about truth and label nature, highlighting the need to decompose observed LV into signal (e.g., HLV) and noise (e.g., annotation error). We survey how the AL and (H)LV communities have addressed—or neglected—these distinctions and propose a conceptual framework for incorporating HLV throughout the AL loop, including instance selection, annotator choice, and label representation. We further discuss the integration of large language models (LLMs) as annotators. Our work aims to lay a conceptual foundation for (H)LV-aware active learning, better reflecting the complexities of real-world annotation.
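The sketch below is not the framework proposed in the paper, only a minimal illustration of the kind of AL loop the abstract describes: instances are selected by predictive entropy, several annotators are queried per instance, and their votes are kept as a soft label distribution instead of being collapsed to a single gold label. All names, sizes, and the simulated annotator behaviour are illustrative assumptions.

```python
# Minimal HLV-aware active-learning sketch (illustrative only, not the authors' method).
import numpy as np

rng = np.random.default_rng(0)
n_pool, n_classes, n_annotators = 200, 3, 5
budget, per_round = 60, 10

X = rng.normal(size=(n_pool, 4))                        # toy feature pool
true_dist = rng.dirichlet(np.ones(n_classes), n_pool)   # latent per-instance label distributions

def annotate(idx):
    """Simulated annotators: each draws a label from the instance's latent distribution."""
    votes = rng.choice(n_classes, size=n_annotators, p=true_dist[idx])
    counts = np.bincount(votes, minlength=n_classes)
    return counts / counts.sum()                        # soft label = normalized vote counts

def predict_proba(X, W):
    logits = X @ W
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def fit_soft(Xl, Yl, lr=0.1, steps=500):
    """Multinomial logistic regression trained against soft (vote-distribution) labels."""
    W = np.zeros((Xl.shape[1], n_classes))
    for _ in range(steps):
        W -= lr * Xl.T @ (predict_proba(Xl, W) - Yl) / len(Xl)
    return W

labeled = list(rng.choice(n_pool, size=per_round, replace=False))
soft_labels = {i: annotate(i) for i in labeled}

while len(labeled) < budget:
    Xl = X[labeled]
    Yl = np.stack([soft_labels[i] for i in labeled])
    W = fit_soft(Xl, Yl)
    probs = predict_proba(X, W)
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    entropy[labeled] = -np.inf                          # never re-query already-labeled instances
    for i in np.argsort(entropy)[-per_round:]:          # instance selection: highest predictive entropy
        labeled.append(int(i))
        soft_labels[int(i)] = annotate(int(i))

print(f"collected soft labels for {len(labeled)} instances")
```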
2024
More Labels or Cases? Assessing Label Variation in Natural Language Inference
Cornelia Gruber | Katharina Hechinger | Matthias Assenmacher | Göran Kauermann | Barbara Plank
Proceedings of the Third Workshop on Understanding Implicit and Underspecified Language
In this work, we analyze the uncertainty that is inherently present in the labels used for supervised machine learning in natural language inference (NLI). In cases where multiple annotations per instance are available, neither the majority vote nor the frequency of individual class votes is a trustworthy representation of the labeling uncertainty. We propose modeling the votes via a Bayesian mixture model to recover the data-generating process, i.e., the “true” latent classes, and thus gain insight into the class variations. This will enable a better understanding of the confusion happening during the annotation process. We also assess the stability of the proposed estimation procedure by systematically varying the numbers of i) instances and ii) labels. Thereby, we observe that few instances with many labels can predict the latent class borders reasonably well, while the estimation fails for many instances with only a few labels. This leads us to conclude that multiple labels are a crucial building block for properly analyzing label uncertainty.
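As a rough illustration of the modeling idea in the abstract, the sketch below fits a mixture of multinomials to per-instance vote counts with EM. It is a simplified stand-in, not the paper's Bayesian estimation procedure: each latent component plays the role of a "true" class with its own profile over the labels annotators actually give, and the posterior responsibilities indicate how ambiguous each instance is. All quantities below are simulated and illustrative.

```python
# EM for a multinomial mixture over annotation vote counts (illustrative stand-in).
import numpy as np

rng = np.random.default_rng(1)
n_instances, n_labels, n_votes, n_components = 300, 3, 10, 3

# Simulate annotation: each latent class has a characteristic label/confusion profile.
true_profiles = np.array([[0.80, 0.15, 0.05],
                          [0.10, 0.80, 0.10],
                          [0.05, 0.25, 0.70]])
z = rng.integers(n_components, size=n_instances)
counts = np.stack([rng.multinomial(n_votes, true_profiles[k]) for k in z])

pi = np.full(n_components, 1 / n_components)             # mixing weights
theta = rng.dirichlet(np.ones(n_labels), n_components)   # per-component label profiles

for _ in range(200):
    # E-step: responsibilities from the multinomial log-likelihood of each vote-count vector.
    log_lik = counts @ np.log(theta).T + np.log(pi)      # shape: (n_instances, n_components)
    log_lik -= log_lik.max(axis=1, keepdims=True)
    resp = np.exp(log_lik)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update mixing weights and label profiles from the soft assignments.
    pi = resp.mean(axis=0)
    theta = resp.T @ counts + 1e-6                       # small smoothing avoids log(0)
    theta /= theta.sum(axis=1, keepdims=True)

print("estimated mixing weights:", np.round(pi, 3))
print("estimated label profiles:\n", np.round(theta, 3))
```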