An Efficient and Precise Training Data Construction Framework for Process-supervised Reward Model in Mathematical Reasoning

Wei Sun, Qianlong Du, Fuwei Cui, Jiajun Zhang


Abstract
Enhancing the mathematical reasoning capabilities of Large Language Models (LLMs) is of great scientific and practical significance. Researchers typically employ process-supervised reward models (PRMs) to guide the reasoning process, effectively improving the models’ reasoning abilities. However, existing methods for constructing process supervision training data, such as manual annotation and per-step Monte Carlo estimation, are often costly or yield low-quality labels. To address these challenges, this paper introduces a framework called EpicPRM (Efficient, Precise, Cheap), which annotates each intermediate reasoning step based on its quantified contribution and uses an adaptive binary search algorithm to improve both annotation precision and efficiency. Using this approach, we efficiently construct a high-quality process supervision training dataset named Epic50k, consisting of 50k annotated intermediate steps. The PRM trained on Epic50k significantly outperforms PRMs trained on other publicly available datasets.
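The abstract highlights two mechanisms: a quantified contribution score for each intermediate step and an adaptive binary search over the reasoning chain. The paper's exact procedure is not reproduced on this page, so the following is only a minimal sketch of how such a binary search can locate the first erroneous step in a solution. It assumes, as a simplification, that correctness is monotone along the chain (every extension of a failing prefix also fails) and that a Monte Carlo rollout checker is available; the names `first_error_step`, `mc_is_correct`, and `sample_fn` are hypothetical, not from the paper.

```python
from typing import Callable, List

def first_error_step(
    steps: List[str],
    is_prefix_correct: Callable[[List[str]], bool],
) -> int:
    """Binary-search for the first erroneous step in a solution.

    Assumes monotonicity: once a prefix of the chain is judged
    incorrect, every longer prefix is judged incorrect too.
    Returns the index of the first bad step, or len(steps) if the
    whole chain passes.
    """
    lo, hi = 0, len(steps)              # first error lies in [lo, hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if is_prefix_correct(steps[: mid + 1]):
            lo = mid + 1                # steps[0..mid] are all fine
        else:
            hi = mid                    # error is at mid or earlier
    return lo

def mc_is_correct(problem: str,
                  prefix: List[str],
                  gold_answer: str,
                  sample_fn: Callable[[str, List[str]], str],
                  n_rollouts: int = 8) -> bool:
    """Hypothetical Monte Carlo check: from the given prefix, roll
    out n_rollouts completions and test whether any still reaches
    the gold answer (i.e. the prefix has not derailed the solution)."""
    return any(
        sample_fn(problem, prefix) == gold_answer
        for _ in range(n_rollouts)
    )
```

Because each correctness check costs a batch of model rollouts, searching a chain of L steps this way needs on the order of log L checks rather than the L checks of naive per-step Monte Carlo estimation, which is presumably where much of the efficiency gain comes from; steps before the returned index can then be labeled positive and the located step negative when building PRM training data.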
Anthology ID: 2025.acl-long.216
Volume: Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 4292–4305
URL: https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.216/
Cite (ACL): Wei Sun, Qianlong Du, Fuwei Cui, and Jiajun Zhang. 2025. An Efficient and Precise Training Data Construction Framework for Process-supervised Reward Model in Mathematical Reasoning. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4292–4305, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): An Efficient and Precise Training Data Construction Framework for Process-supervised Reward Model in Mathematical Reasoning (Sun et al., ACL 2025)
PDF: https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.216.pdf