Annotating Training Data for Conditional Semantic Textual Similarity Measurement using Large Language Models

Gaifan Zhang, Yi Zhou, Danushka Bollegala


Abstract
Semantic similarity between two sentences depends on the aspects under which those sentences are compared. To study this phenomenon, Deshpande et al. (2023) proposed the Conditional Semantic Textual Similarity (C-STS) task and annotated a human-rated similarity dataset containing pairs of sentences compared under two different conditions. However, Tu et al. (2024) found various annotation issues in this dataset and showed that manually re-annotating a small portion of it leads to more accurate C-STS models. Despite these pioneering efforts, the lack of large and accurately annotated C-STS datasets remains an obstacle to progress on this task, as evidenced by the subpar performance of existing C-STS models. To address this need for training data, we use Large Language Models (LLMs) to correct the condition statements and similarity ratings in the original dataset proposed by Deshpande et al. (2023). Our proposed method re-annotates a large training dataset for the C-STS task with minimal manual effort. Importantly, by training a supervised C-STS model on our cleaned and re-annotated dataset, we achieve a statistically significant 5.4% improvement in Spearman correlation. The re-annotated dataset is available at https://LivNLP.github.io/CSTS-reannotation.
Anthology ID:
2025.emnlp-main.1373
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
27003–27015
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1373/
Cite (ACL):
Gaifan Zhang, Yi Zhou, and Danushka Bollegala. 2025. Annotating Training Data for Conditional Semantic Textual Similarity Measurement using Large Language Models. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 27003–27015, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Annotating Training Data for Conditional Semantic Textual Similarity Measurement using Large Language Models (Zhang et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1373.pdf
Checklist:
2025.emnlp-main.1373.checklist.pdf