Order Effects in Annotation Tasks: Further Evidence of Annotation Sensitivity

Jacob Beck, Stephanie Eckman, Bolei Ma, Rob Chew, Frauke Kreuter


Abstract
The data-centric revolution in AI has revealed the importance of high-quality training data for developing successful AI models. However, annotations are sensitive to annotator characteristics, training materials, and the design and wording of the data collection instrument. This paper explores the impact of observation order on annotations. We find that annotators' judgments change based on the order in which they see observations. We use ideas from social psychology to motivate hypotheses about why this order effect occurs. We believe that insights from social science can help AI researchers improve data and model quality.
Anthology ID: 2024.uncertainlp-1.8
Volume: Proceedings of the 1st Workshop on Uncertainty-Aware NLP (UncertaiNLP 2024)
Month: March
Year: 2024
Address: St Julians, Malta
Editors: Raúl Vázquez, Hande Celikkanat, Dennis Ulmer, Jörg Tiedemann, Swabha Swayamdipta, Wilker Aziz, Barbara Plank, Joris Baan, Marie-Catherine de Marneffe
Venues: UncertaiNLP | WS
Publisher: Association for Computational Linguistics
Pages: 81–86
URL: https://aclanthology.org/2024.uncertainlp-1.8
Cite (ACL): Jacob Beck, Stephanie Eckman, Bolei Ma, Rob Chew, and Frauke Kreuter. 2024. Order Effects in Annotation Tasks: Further Evidence of Annotation Sensitivity. In Proceedings of the 1st Workshop on Uncertainty-Aware NLP (UncertaiNLP 2024), pages 81–86, St Julians, Malta. Association for Computational Linguistics.
Cite (Informal): Order Effects in Annotation Tasks: Further Evidence of Annotation Sensitivity (Beck et al., UncertaiNLP-WS 2024)
PDF: https://preview.aclanthology.org/nschneid-patch-4/2024.uncertainlp-1.8.pdf