How Many Ratings per Item are Necessary for Reliable Significance Testing?

Christopher M Homan, Flip Korn, Deepak Pandita, Chris Welty


Abstract
A cornerstone of machine learning evaluation is the (often hidden) assumption that model and human responses are reliable enough to evaluate models against unitary, authoritative, “gold standard” data, via simple metrics such as accuracy, precision, and recall. The generative AI revolution would seem to explode this assumption, given the critical role stochastic inference plays. Yet, in spite of public demand for more transparency in AI—along with strong evidence that humans are unreliable judges—estimates of model reliability are conventionally based on, at most, a few output responses per input item. We adapt a method, previously used to evaluate the reliability of various metrics and estimators for machine learning evaluation, to determine whether an (existing or planned) dataset has enough responses per item to assure reliable null hypothesis statistical testing. We show that, for many common metrics, collecting even 5-10 responses per item (from each model and team of human evaluators) is not sufficient. We apply our methods to several of the very few extant gold standard test sets with multiple disaggregated responses per item and show that even these datasets lack enough responses per item. We show how our methods can help AI researchers make better decisions about how to collect data for AI evaluation.
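The core idea, as described in the abstract, is a resampling analysis: repeatedly subsample k responses per item, rerun the same null hypothesis test on each subsample, and check how stable the significance decision is as k varies. The sketch below is a minimal illustration of that general idea, not the authors' exact estimator; the function names (`subsample_mean`, `decision_flip_rate`), the choice of a paired Wilcoxon signed-rank test, and the toy Gaussian ratings are all assumptions made for the example.

```python
# Minimal sketch (assumed illustration, not the paper's exact procedure):
# probe whether k responses per item suffice for stable significance testing
# by subsampling k responses per item for two systems, rerunning the same
# paired test, and measuring how often the accept/reject decision flips.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def subsample_mean(responses, k, rng):
    """Per-item mean of k responses drawn without replacement.

    responses: array of shape (n_items, n_responses)."""
    n_items, n_resp = responses.shape
    # Independently shuffle each item's response indices, keep the first k.
    cols = rng.permuted(np.tile(np.arange(n_resp), (n_items, 1)), axis=1)[:, :k]
    return np.take_along_axis(responses, cols, axis=1).mean(axis=1)

def decision_flip_rate(resp_a, resp_b, k, trials=1000, alpha=0.05, rng=rng):
    """Fraction of subsample trials whose paired-test decision disagrees
    with the decision made on the full response sets."""
    full_p = stats.wilcoxon(resp_a.mean(axis=1), resp_b.mean(axis=1)).pvalue
    full_decision = full_p < alpha
    flips = 0
    for _ in range(trials):
        p = stats.wilcoxon(subsample_mean(resp_a, k, rng),
                           subsample_mean(resp_b, k, rng)).pvalue
        flips += (p < alpha) != full_decision
    return flips / trials

# Toy data (hypothetical): 200 items, 20 disaggregated ratings per item,
# two systems whose per-item means differ slightly. Replace with real
# (items x responses) matrices of model or annotator ratings.
n_items, n_resp = 200, 20
resp_a = rng.normal(0.00, 1.0, (n_items, n_resp))
resp_b = rng.normal(0.15, 1.0, (n_items, n_resp))

for k in (1, 5, 10, 20):
    print(f"k={k:2d}: decision flip rate = {decision_flip_rate(resp_a, resp_b, k):.2f}")
```

Under this setup, a high flip rate at k = 5 or 10 would indicate that the dataset cannot support a reliable significance decision at that budget, which is the kind of diagnosis the paper reports for several common metrics and extant test sets.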
Anthology ID: 2026.findings-eacl.223
Volume: Findings of the Association for Computational Linguistics: EACL 2026
Month: March
Year: 2026
Address: Rabat, Morocco
Editors: Vera Demberg, Kentaro Inui, Lluís Marquez
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 4258–4273
URL: https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.223/
Cite (ACL): Christopher M Homan, Flip Korn, Deepak Pandita, and Chris Welty. 2026. How Many Ratings per Item are Necessary for Reliable Significance Testing?. In Findings of the Association for Computational Linguistics: EACL 2026, pages 4258–4273, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal): How Many Ratings per Item are Necessary for Reliable Significance Testing? (Homan et al., Findings 2026)
PDF: https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.223.pdf
Checklist: 2026.findings-eacl.223.checklist.pdf