Logical Reasoning with Outcome Reward Models for Test-Time Scaling

Ramya Keerthy Thatikonda, Wray Buntine, Ehsan Shareghi


Abstract
Logical reasoning is a critical benchmark for evaluating the capabilities of large language models (LLMs), as it reflects their ability to derive valid conclusions from given premises. While the combination of test-time scaling with dedicated outcome or process reward models has opened new avenues for enhancing LLM performance on complex reasoning tasks, this space remains under-explored in deductive logical reasoning. We present a set of Outcome Reward Models (ORMs) for deductive reasoning. To train the ORMs, we generate data primarily using Chain-of-Thought (CoT) prompting with single and multiple samples. Additionally, we propose a novel tactic to expand the types of errors covered in the ORM training data: an echo generation technique that leverages LLMs’ tendency to reflect incorrect assumptions made in prompts, yielding additional training data that covers previously unexplored error types. While a standard CoT chain contains only the errors the reasoner is naturally likely to make, the echo strategy deliberately steers the model toward incorrect reasoning. We show that ORMs trained on CoT and echo-augmented data achieve improved performance on the FOLIO, JustLogic, and ProverQA datasets across four different LLMs.
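The two mechanisms described above, echo generation of incorrect reasoning chains for ORM training data and ORM-guided best-of-N selection at test time, can be illustrated with the minimal sketch below. This is not the paper's implementation: the generate stub, the prompt templates, and the answer-matching label heuristic are all hypothetical placeholders standing in for a real LLM API and the paper's actual data pipeline.

```python
# Sketch of (1) echo generation for ORM training data and (2) ORM-guided
# best-of-N selection at test time. All names here are illustrative.
import random
from dataclasses import dataclass

@dataclass
class Example:
    chain: str   # a chain-of-thought produced by the reasoner LLM
    label: int   # 1 if the chain reaches the gold answer, else 0

def generate(prompt: str) -> str:
    """Stand-in for an LLM call; replace with a real API in practice."""
    return f"<reasoning for: {prompt!r}>"

def build_orm_data(premises: str, question: str, gold: str) -> list[Example]:
    data = []
    # Standard CoT sampling: errors are whatever the reasoner makes naturally.
    for _ in range(4):
        chain = generate(f"{premises}\nQ: {question}\nThink step by step.")
        data.append(Example(chain, int(chain.endswith(gold))))
    # Echo generation: plant a wrong assumption so the model "echoes" it,
    # yielding incorrect chains covering error types CoT sampling may miss.
    wrong = random.choice([a for a in ("True", "False", "Uncertain") if a != gold])
    chain = generate(
        f"{premises}\nQ: {question}\nAssume the answer is {wrong}; explain why."
    )
    data.append(Example(chain, 0))  # deliberately steered, so labeled incorrect
    return data

def best_of_n(premises: str, question: str, score_with_orm, n: int = 8) -> str:
    """Test-time scaling: sample n chains, keep the one the ORM scores highest."""
    chains = [
        generate(f"{premises}\nQ: {question}\nThink step by step.")
        for _ in range(n)
    ]
    return max(chains, key=score_with_orm)
```

In this reading, the ORM is a binary verifier trained on the labeled chains, and at inference it simply reranks sampled candidates; the echo-generated negatives broaden the error distribution the verifier sees during training.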
Anthology ID:
2025.emnlp-main.1326
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
26113–26123
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1326/
Cite (ACL):
Ramya Keerthy Thatikonda, Wray Buntine, and Ehsan Shareghi. 2025. Logical Reasoning with Outcome Reward Models for Test-Time Scaling. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 26113–26123, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Logical Reasoning with Outcome Reward Models for Test-Time Scaling (Thatikonda et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1326.pdf
Checklist:
 2025.emnlp-main.1326.checklist.pdf