Best-Worst Scaling More Reliable than Rating Scales: A Case Study on Sentiment Intensity Annotation

Svetlana Kiritchenko, Saif Mohammad


Abstract
Rating scales are a widely used method for data annotation; however, they present several challenges, such as difficulty in maintaining inter- and intra-annotator consistency. Best–worst scaling (BWS) is an alternative method of annotation that is claimed to produce high-quality annotations while keeping the required number of annotations similar to that of rating scales. However, the veracity of this claim has never been systematically established. Here, for the first time, we set up an experiment that directly compares the rating scale method with BWS. We show that, with the same total number of annotations, BWS produces significantly more reliable results than the rating scale.
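In BWS, annotators are shown small tuples of items (typically four) and pick only the best and worst item in each tuple; real-valued scores are then commonly derived by simple counting. The sketch below illustrates that counting procedure under the usual formulation (score = proportion of times an item was chosen best minus proportion of times it was chosen worst); the example terms and annotations are hypothetical, not data from the paper.

```python
from collections import defaultdict

def bws_scores(annotations):
    """Convert best-worst scaling annotations into real-valued scores.

    Each annotation is a (tuple_of_items, best_item, worst_item) triple.
    An item's score is (#times chosen best - #times chosen worst)
    divided by #times the item appeared in any tuple, so it lies in [-1, 1].
    """
    best = defaultdict(int)
    worst = defaultdict(int)
    seen = defaultdict(int)
    for items, b, w in annotations:
        for item in items:
            seen[item] += 1
        best[b] += 1
        worst[w] += 1
    return {item: (best[item] - worst[item]) / seen[item] for item in seen}

# Hypothetical sentiment-intensity annotations over 4-item tuples.
annotations = [
    (("great", "okay", "bad", "awful"), "great", "awful"),
    (("great", "okay", "bad", "awful"), "great", "bad"),
    (("great", "okay", "bad", "awful"), "okay", "awful"),
]
scores = bws_scores(annotations)  # e.g. "great" -> 2/3, "awful" -> -2/3
```

Because each tuple yields several implicit pairwise comparisons, relatively few best/worst judgments suffice to rank a large item set; the resulting scores can then be compared across annotation halves to estimate reliability.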
Anthology ID:
P17-2074
Volume:
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
July
Year:
2017
Address:
Vancouver, Canada
Editors:
Regina Barzilay, Min-Yen Kan
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
465–470
URL:
https://aclanthology.org/P17-2074
DOI:
10.18653/v1/P17-2074
Cite (ACL):
Svetlana Kiritchenko and Saif Mohammad. 2017. Best-Worst Scaling More Reliable than Rating Scales: A Case Study on Sentiment Intensity Annotation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 465–470, Vancouver, Canada. Association for Computational Linguistics.
Cite (Informal):
Best-Worst Scaling More Reliable than Rating Scales: A Case Study on Sentiment Intensity Annotation (Kiritchenko & Mohammad, ACL 2017)
PDF:
https://preview.aclanthology.org/naacl-24-ws-corrections/P17-2074.pdf