Collective Human Opinions in Semantic Textual Similarity
Yuxia Wang, Shimin Tao, Ning Xie, Hao Yang, Timothy Baldwin, Karin Verspoor
Abstract
Despite the subjective nature of semantic textual similarity (STS) and pervasive disagreements in STS annotation, existing benchmarks have used averaged human ratings as the gold standard. Averaging masks the true distribution of human opinions on examples with low agreement, and prevents models from capturing the semantic vagueness that the individual ratings represent. In this work, we introduce USTS, the first Uncertainty-aware STS dataset, with ∼15,000 Chinese sentence pairs and 150,000 labels, to study collective human opinions in STS. Analysis reveals that neither a scalar nor a single Gaussian adequately fits a set of observed judgments. We further show that current STS models cannot capture the variance caused by human disagreement on individual instances, but rather reflect the predictive confidence over the aggregate dataset.
- Anthology ID:
- 2023.tacl-1.56
- Volume:
- Transactions of the Association for Computational Linguistics, Volume 11
- Year:
- 2023
- Address:
- Cambridge, MA
- Venue:
- TACL
- Publisher:
- MIT Press
- Pages:
- 997–1013
- URL:
- https://aclanthology.org/2023.tacl-1.56
- DOI:
- 10.1162/tacl_a_00584
- Cite (ACL):
- Yuxia Wang, Shimin Tao, Ning Xie, Hao Yang, Timothy Baldwin, and Karin Verspoor. 2023. Collective Human Opinions in Semantic Textual Similarity. Transactions of the Association for Computational Linguistics, 11:997–1013.
- Cite (Informal):
- Collective Human Opinions in Semantic Textual Similarity (Wang et al., TACL 2023)
- PDF:
- https://preview.aclanthology.org/emnlp-22-attachments/2023.tacl-1.56.pdf