Keep Guessing? When Considering Inference Scaling, Mind the Baselines

Gal Yona, Or Honovich, Omer Levy, Roee Aharoni

Abstract
Scaling inference compute in large language models (LLMs) through repeated sampling consistently increases the coverage (fraction of problems solved) as the number of samples increases. We conjecture that this observed improvement is partially due to the answer distribution of standard evaluation benchmarks, which is skewed towards a relatively small set of common answers. To test this conjecture, we define a baseline that enumerates answers according to their prevalence in the training set. Experiments spanning two domains – mathematical reasoning and factual knowledge – reveal that this baseline outperforms repeated model sampling for some LLMs, while the coverage for others is on par with that of a mixture strategy that obtains k answers by using only 10 model samples and similarly guessing the remaining k-10 attempts via enumeration. Our baseline enables a more accurate measurement of how much repeated sampling improves coverage in such settings beyond prompt-agnostic guessing.
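The enumeration baseline and the mixture strategy described above lend themselves to a short illustration. Below is a minimal Python sketch, assuming answers are plain strings and that coverage counts a problem as solved if any of the k attempts matches the gold answer. The function names, the deduplication of model samples, and the n_model=10 default are illustrative assumptions, not the authors' implementation.

from collections import Counter

def enumeration_baseline(train_answers, k):
    """Return the k most common answers in the training set, ordered by
    prevalence -- the prompt-agnostic guessing baseline."""
    return [ans for ans, _ in Counter(train_answers).most_common(k)]

def mixture_guesses(model_samples, train_answers, k, n_model=10):
    """Mixture strategy: keep the first n_model (deduplicated) model samples,
    then fill the remaining attempts, up to k total, by enumerating frequent
    training-set answers."""
    guesses = list(dict.fromkeys(model_samples[:n_model]))  # dedupe, keep order
    for ans in enumeration_baseline(train_answers, k):
        if len(guesses) >= k:
            break
        if ans not in guesses:
            guesses.append(ans)
    return guesses

def coverage(guesses_per_problem, gold_answers):
    """Coverage: fraction of problems where any of the attempts matches the
    gold answer."""
    solved = sum(gold in guesses
                 for guesses, gold in zip(guesses_per_problem, gold_answers))
    return solved / len(gold_answers)

On a benchmark with a skewed answer distribution (say, many problems whose answer is 0 or 1), enumeration_baseline alone can already solve a nontrivial fraction of test problems, which is exactly the effect the proposed baseline is meant to isolate from genuine gains of repeated sampling.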
Anthology ID: 2025.findings-naacl.332
Volume: Findings of the Association for Computational Linguistics: NAACL 2025
Month: April
Year: 2025
Address: Albuquerque, New Mexico
Editors: Luis Chiruzzo, Alan Ritter, Lu Wang
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 5979–5991
URL: https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.332/
Cite (ACL): Gal Yona, Or Honovich, Omer Levy, and Roee Aharoni. 2025. Keep Guessing? When Considering Inference Scaling, Mind the Baselines. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 5979–5991, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal): Keep Guessing? When Considering Inference Scaling, Mind the Baselines (Yona et al., Findings 2025)
PDF: https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.332.pdf