We Need to Consider Disagreement in Evaluation

Valerio Basile, Michael Fell, Tommaso Fornaciari, Dirk Hovy, Silviu Paun, Barbara Plank, Massimo Poesio, Alexandra Uma


Abstract
Evaluation is of paramount importance in data-driven research fields such as Natural Language Processing (NLP) and Computer Vision (CV). Current evaluation practice largely hinges on the existence of a single “ground truth” against which we can meaningfully compare the prediction of a model. However, this comparison is flawed for two reasons. 1) In many cases, more than one answer is correct. 2) Even where there is a single answer, disagreement among annotators is ubiquitous, making it difficult to decide on a gold standard. We argue that the current methods of adjudication, agreement, and evaluation need serious reconsideration. Some researchers now propose to minimize disagreement and to fix datasets. We argue that this is a gross oversimplification, and likely to conceal the underlying complexity. Instead, we suggest that we need to better capture the sources of disagreement to improve today’s evaluation practice. We discuss three sources of disagreement: from the annotator, the data, and the context, and show how this affects even seemingly objective tasks. Datasets with multiple annotations are becoming more common, as are methods to integrate disagreement into modeling. The logical next step is to extend this to evaluation.
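
To make the abstract's proposal concrete, here is a minimal sketch (ours, not from the paper) of one way disagreement-aware evaluation can work: scoring a model's predicted label distribution against the empirical distribution of annotator labels (a "soft" gold standard) rather than against a single adjudicated label. The task, label names, and numbers are illustrative assumptions.

from collections import Counter
import math

def soft_label(annotations, labels):
    # Empirical label distribution over the label set, estimated
    # from the raw annotator judgments for one item.
    counts = Counter(annotations)
    total = sum(counts.values())
    return [counts[label] / total for label in labels]

def cross_entropy(target, predicted, eps=1e-12):
    # Cross-entropy of the model's distribution w.r.t. the soft target;
    # lower is better, and it penalizes overconfidence on contested items.
    return -sum(t * math.log(p + eps) for t, p in zip(target, predicted))

labels = ["offensive", "not_offensive"]
annotations = ["offensive", "offensive", "not_offensive"]  # three annotators, one dissents
target = soft_label(annotations, labels)  # [0.667, 0.333]

model_a = [0.95, 0.05]  # overconfident in the majority label
model_b = [0.65, 0.35]  # uncertainty mirrors the annotators'

print(round(cross_entropy(target, model_a), 2))  # 1.03
print(round(cross_entropy(target, model_b), 2))  # 0.64

Under majority-vote accuracy both models count as correct on this item; the soft-label cross-entropy instead rewards the model whose uncertainty tracks the human disagreement, which is the kind of distinction the paper argues evaluation should capture.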
Anthology ID: 2021.bppf-1.3
Volume: Proceedings of the 1st Workshop on Benchmarking: Past, Present and Future
Month: August
Year: 2021
Address: Online
Venue: BPPF
Publisher: Association for Computational Linguistics
Pages: 15–21
URL: https://aclanthology.org/2021.bppf-1.3
DOI: 10.18653/v1/2021.bppf-1.3
Cite (ACL):
Valerio Basile, Michael Fell, Tommaso Fornaciari, Dirk Hovy, Silviu Paun, Barbara Plank, Massimo Poesio, and Alexandra Uma. 2021. We Need to Consider Disagreement in Evaluation. In Proceedings of the 1st Workshop on Benchmarking: Past, Present and Future, pages 15–21, Online. Association for Computational Linguistics.
Cite (Informal):
We Need to Consider Disagreement in Evaluation (Basile et al., BPPF 2021)
PDF: https://preview.aclanthology.org/ingestion-script-update/2021.bppf-1.3.pdf