From Outcomes to Processes: Guiding PRM Learning from ORM for Inference-Time Alignment

Bin Xie, Bingbing Xu, Yige Yuan, Shengmao Zhu, Huawei Shen


Abstract
Inference-time alignment methods have gained significant attention for their efficiency and effectiveness in aligning large language models (LLMs) with human preferences. However, the dominant approach, reward-guided search (RGS), suffers from a critical granularity mismatch: reward models (RMs) are trained on complete responses but applied to incomplete sequences during generation, leading to inconsistent scoring and suboptimal alignment. To address this challenge, we argue that an ideal RM should satisfy two objectives: Score Consistency, ensuring coherent evaluation across partial and complete responses, and Preference Consistency, aligning partial-sequence assessments with human preferences. To achieve these, we propose SPRM, a novel dual-consistency framework that integrates score consistency-based and preference consistency-based partial evaluation modules, which leverage the Bradley-Terry model and entropy-based reweighting to predict cumulative rewards and prioritize human-aligned sequences. Extensive experiments on dialogue, summarization, and reasoning tasks demonstrate the effectiveness of SPRM, significantly reducing granularity discrepancies by up to 11.7 on TL;DR Summarization and achieving a 3.6%–10.3% improvement in GPT-4 evaluation scores across all tasks. Code is publicly available at [this link](https://github.com/xiebin23/SPRM).
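
The sketch below illustrates the kind of training objective the abstract describes: a Bradley-Terry pairwise loss applied to truncated (partial) responses, with an optional reweighting term. It is a minimal illustration assuming a scalar reward model `rm`, hypothetical function and argument names, and entropy-derived weights; it is not the authors' released implementation.

```python
import torch.nn.functional as F

def bradley_terry_partial_loss(rm, chosen_prefix_ids, rejected_prefix_ids, weights=None):
    """Pairwise Bradley-Terry loss on partial sequences (illustrative sketch).

    rm: a scalar reward model mapping token ids of shape (batch, seq_len) to
        one reward per sequence, shape (batch,).
    chosen_prefix_ids / rejected_prefix_ids: prefixes of the preferred and
        dispreferred responses, truncated to the same generation step.
    weights: optional per-example weights, e.g. derived from the policy's
        token-level entropy at the truncation point (hypothetical reweighting).
    """
    r_chosen = rm(chosen_prefix_ids)      # (batch,) reward for preferred prefix
    r_rejected = rm(rejected_prefix_ids)  # (batch,) reward for dispreferred prefix

    # Bradley-Terry: P(chosen > rejected) = sigmoid(r_chosen - r_rejected)
    logits = r_chosen - r_rejected
    loss = F.softplus(-logits)            # = -log sigmoid(logits)

    if weights is not None:
        loss = loss * weights             # emphasize uncertain decision points
    return loss.mean()
```

Training on matched prefixes of preferred versus dispreferred responses is one way to make a reward model score incomplete generations consistently with its judgments of complete ones, which is the granularity gap the abstract targets.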
Anthology ID:
2025.acl-long.946
Volume:
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
19291–19307
URL:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.946/
Cite (ACL):
Bin Xie, Bingbing Xu, Yige Yuan, Shengmao Zhu, and Huawei Shen. 2025. From Outcomes to Processes: Guiding PRM Learning from ORM for Inference-Time Alignment. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 19291–19307, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
From Outcomes to Processes: Guiding PRM Learning from ORM for Inference-Time Alignment (Xie et al., ACL 2025)
PDF:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.946.pdf