Measuring Label Ambiguity in Subjective Tasks using Predictive Uncertainty Estimation

Richard Alies, Elena Merdjanovska, Alan Akbik


Abstract
Human annotations in natural language corpora vary due to differing human perspectives, and this is especially prevalent in subjective tasks. In these datasets, certain samples are more prone to label variation and can be flagged as ambiguous.
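As a minimal illustration of the idea (not the paper's actual method, which uses predictive uncertainty estimation), per-sample label variation can be quantified with the Shannon entropy of the annotator label distribution; samples with higher entropy are candidates for the "ambiguous" label:

```python
from collections import Counter
from math import log2

def label_entropy(labels):
    """Shannon entropy of the annotator label distribution for one sample.

    Higher entropy means annotators disagree more, which serves as a
    simple proxy for label ambiguity.
    """
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Hypothetical annotations from five annotators per sample:
unambiguous = ["pos", "pos", "pos", "pos", "pos"]  # full agreement -> entropy 0.0
ambiguous = ["pos", "neg", "pos", "neg", "neu"]    # disagreement -> higher entropy

print(label_entropy(unambiguous))
print(label_entropy(ambiguous))
```

The annotator labels and label names here are invented for the sketch; the point is only that agreement collapses the entropy to zero, while a split vote pushes it toward the maximum for the given number of classes.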
Anthology ID:
2025.law-1.2
Volume:
Proceedings of the 19th Linguistic Annotation Workshop (LAW-XIX-2025)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Siyao Peng, Ines Rehbein
Venues:
LAW | WS
Publisher:
Association for Computational Linguistics
Pages:
21–34
URL:
https://preview.aclanthology.org/landing_page/2025.law-1.2/
Cite (ACL):
Richard Alies, Elena Merdjanovska, and Alan Akbik. 2025. Measuring Label Ambiguity in Subjective Tasks using Predictive Uncertainty Estimation. In Proceedings of the 19th Linguistic Annotation Workshop (LAW-XIX-2025), pages 21–34, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Measuring Label Ambiguity in Subjective Tasks using Predictive Uncertainty Estimation (Alies et al., LAW 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.law-1.2.pdf