Confidence and Stability of Global and Pairwise Scores in NLP Evaluation

Georgii Levtsov, Dmitry Ustalov

Abstract
With the advent of highly capable instruction-tuned neural language models, benchmarking in natural language processing (NLP) is increasingly shifting from traditional global pointwise scores (e.g., GLUE, BIG-bench, SWE-bench) towards pairwise comparison leaderboards such as LMSYS Arena. This paper empirically investigates the strengths and weaknesses of both global scores and pairwise comparisons to aid decision-making in selecting appropriate model evaluation strategies. Through computational experiments on synthetic and real-world datasets using standard global metrics and the popular Bradley–Terry model for pairwise comparisons, we found that while global scores provide more reliable overall rankings, they can underestimate strong models with rare, significant errors or low confidence. Conversely, pairwise comparisons are particularly effective for identifying strong contenders among models with lower global scores, especially where quality metrics are hard to define (e.g., text generation), though they require more comparisons to converge when ties are frequent. Our code and data are available at https://github.com/HSPyroblast/srw-ranking under a permissive license.
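For readers unfamiliar with how pairwise leaderboards are aggregated: the Bradley–Terry model mentioned in the abstract assigns each model a latent strength and estimates these strengths from win/loss counts. Below is a minimal sketch (not the paper's released code, which lives in the repository linked above) that fits the model with the classic minorization–maximization update; the toy win matrix is hypothetical, and ties are ignored for brevity.

```python
# Minimal Bradley–Terry fit via the MM update (Hunter, 2004).
# wins[i, j] = number of times model i beat model j (hypothetical toy data).
import numpy as np

def bradley_terry(wins: np.ndarray, iters: int = 200, tol: float = 1e-8) -> np.ndarray:
    games = wins + wins.T                 # total comparisons per pair
    w = wins.sum(axis=1).astype(float)    # total wins per model
    p = np.ones(wins.shape[0])            # initial strengths
    for _ in range(iters):
        # denom[i] = sum_j games[i, j] / (p[i] + p[j]); the diagonal
        # contributes 0 because games[i, i] == 0.
        denom = (games / (p[:, None] + p[None, :])).sum(axis=1)
        p_new = w / denom
        p_new /= p_new.sum()              # fix the scale: strengths sum to 1
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = p_new
    return p

wins = np.array([[0, 7, 9],
                 [3, 0, 6],
                 [1, 4, 0]])
print(np.argsort(-bradley_terry(wins)))   # ranking, best first -> [0 1 2]
```

In this toy example the ranking coincides with raw win totals, but in general Bradley–Terry also accounts for opponent strength, which is what lets it surface strong contenders that a global score would miss.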
Anthology ID:
2025.acl-srw.3
Volume:
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Jin Zhao, Mingyang Wang, Zhu Liu
Venues:
ACL | WS
Publisher:
Association for Computational Linguistics
Pages:
40–52
URL:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-srw.3/
Cite (ACL):
Georgii Levtsov and Dmitry Ustalov. 2025. Confidence and Stability of Global and Pairwise Scores in NLP Evaluation. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop), pages 40–52, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Confidence and Stability of Global and Pairwise Scores in NLP Evaluation (Levtsov & Ustalov, ACL 2025)
PDF:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-srw.3.pdf