Using tournaments to calculate AUROC for zero-shot classification with LLMs

WonJin Yoon, Ian Bulovic, Timothy A. Miller


Abstract
Large language models perform surprisingly well on many zero-shot classification tasks, but are difficult to fairly compare to supervised classifiers due to the lack of a modifiable decision boundary. In this work, we propose and evaluate a method that transforms binary classification tasks into pairwise comparisons between instances within a dataset, using LLMs to produce relative rankings of those instances. Repeated pairwise comparisons can be used to score instances using the Elo rating system (used in chess and other competitions), inducing a confidence ordering over instances in a dataset. We evaluate scheduling algorithms for their ability to minimize comparisons, and show that our proposed algorithm leads to improved classification performance, while also providing more information than traditional zero-shot classification.
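The core idea — repeated pairwise comparisons scored with Elo to induce a confidence ordering — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `compare(a, b)` is a hypothetical stand-in for an LLM judgment of which instance is more likely positive, and the random per-round pairing is a simplification of the scheduling algorithms the paper actually evaluates.

```python
import random

def elo_update(r_winner, r_loser, k=32.0):
    """Standard Elo update: expected score from the logistic curve, then a K-factor step."""
    expected_w = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))
    gain = k * (1.0 - expected_w)
    return r_winner + gain, r_loser - gain

def rank_by_elo(instances, compare, rounds=5, seed=0):
    """Rank instances by repeated random pairwise comparisons.

    `compare(a, b)` returns True if `a` is judged more likely positive
    than `b` (here, a placeholder for an LLM call). Returns the
    instances sorted most-confidently-positive first.
    """
    rng = random.Random(seed)
    ratings = {i: 1000.0 for i in instances}
    for _ in range(rounds):
        pool = list(instances)
        rng.shuffle(pool)
        for a, b in zip(pool[::2], pool[1::2]):  # pair off adjacent instances
            w, l = (a, b) if compare(a, b) else (b, a)
            ratings[w], ratings[l] = elo_update(ratings[w], ratings[l])
    return sorted(instances, key=ratings.get, reverse=True)
```

The resulting ordering over instances plays the role of a continuous score, so a threshold can be swept across it to trace an ROC curve and compute AUROC, which is what makes the zero-shot LLM comparable to supervised classifiers.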
Anthology ID:
2025.findings-emnlp.1281
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
23583–23591
URL:
https://preview.aclanthology.org/name-variant-enfa-fane/2025.findings-emnlp.1281/
DOI:
10.18653/v1/2025.findings-emnlp.1281
Cite (ACL):
WonJin Yoon, Ian Bulovic, and Timothy A. Miller. 2025. Using tournaments to calculate AUROC for zero-shot classification with LLMs. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 23583–23591, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Using tournaments to calculate AUROC for zero-shot classification with LLMs (Yoon et al., Findings 2025)
PDF:
https://preview.aclanthology.org/name-variant-enfa-fane/2025.findings-emnlp.1281.pdf
Checklist:
2025.findings-emnlp.1281.checklist.pdf