Examining False Positives under Inference Scaling for Mathematical Reasoning

Yu Wang, Nan Yang, Liang Wang, Furu Wei, Fuli Feng


Abstract
Recent advancements in language models have led to significant improvements in mathematical reasoning across various benchmarks. However, most of these benchmarks rely on automatic evaluation methods that only compare final answers using heuristics, without verifying the underlying reasoning steps. This limitation results in false positive solutions, where models may produce correct final answers but with flawed deduction paths. In this paper, we systematically examine the prevalence of false positive solutions in mathematical problem solving for language models. We analyze the characteristics and extent of this issue across different open-source models, datasets of varying difficulty levels, and decoding strategies. Specifically, we explore how false positives influence the inference-time scaling behavior of language models. Our experimental results reveal that: (1) false positive solutions persist across different models, datasets, and decoding methods, (2) sampling-based inference-time scaling methods do not alleviate the problem, and (3) the pass@N evaluation metric is more susceptible to false positives, suggesting a significantly lower scaling ceiling than what automatic evaluations indicate. Additionally, we analyze specific instances of false positives and discuss potential limitations in self-improvement techniques and synthetic data generation under such conditions. Our data and code are publicly available at https://github.com/Wloner0809/False-Positives-in-Math.
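The gap the abstract describes, between final-answer matching and step-level correctness, can be made concrete with a small sketch. The snippet below is illustrative only and is not taken from the paper or its repository; the function names (answers_match, pass_at_n) and the toy samples are assumptions. It shows how a heuristic answer comparison marks a problem as solved under pass@N even when every sampled solution, including the one with the matching answer, follows a flawed deduction path.

# Minimal sketch (not from the paper) of final-answer-only evaluation
# and why pass@N is especially exposed to false positives.
# All names and the toy data are illustrative assumptions.

def answers_match(predicted: str, reference: str) -> bool:
    """Heuristic final-answer comparison: normalize and compare strings."""
    return predicted.strip().lower() == reference.strip().lower()

def pass_at_n(solutions, reference) -> bool:
    """pass@N: the problem counts as solved if ANY of the N sampled
    solutions has a matching final answer, regardless of its reasoning."""
    return any(answers_match(s["final_answer"], reference) for s in solutions)

# Toy example: N = 3 sampled solutions for one problem (reference answer "12").
samples = [
    {"final_answer": "10", "reasoning_is_valid": False},
    # False positive: correct answer reached through flawed reasoning.
    {"final_answer": "12", "reasoning_is_valid": False},
    {"final_answer": "15", "reasoning_is_valid": False},
]

print(pass_at_n(samples, "12"))                        # True  (automatic evaluation)
print(any(s["reasoning_is_valid"] for s in samples))   # False (no genuinely valid solution)

Under this kind of scoring, sampling more solutions can only raise the measured pass@N, even when the additional "correct" samples are false positives, which is consistent with the abstract's point that the true scaling ceiling is lower than automatic evaluation suggests.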
Anthology ID:
2025.emnlp-main.632
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
12512–12531
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.632/
Cite (ACL):
Yu Wang, Nan Yang, Liang Wang, Furu Wei, and Fuli Feng. 2025. Examining False Positives under Inference Scaling for Mathematical Reasoning. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 12512–12531, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Examining False Positives under Inference Scaling for Mathematical Reasoning (Wang et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.632.pdf
Checklist:
2025.emnlp-main.632.checklist.pdf