Enhancing Reasoning Abilities of Small LLMs with Cognitive Alignment

Wenrui Cai, Chengyu Wang, Junbing Yan, Jun Huang, Xiangzhong Fang


Abstract
The reasoning capabilities of large language reasoning models (LRMs), such as OpenAI’s o1 and DeepSeek-R1, have advanced substantially through deep thinking. However, these enhancements come with significant resource demands, underscoring the need to train effective small reasoning models. A critical challenge is that small models possess reasoning capacities and cognitive trajectories different from those of their larger counterparts. Hence, directly distilling chain-of-thought (CoT) results from large LRMs to smaller ones can be ineffective and often requires a substantial amount of annotated data. In this paper, we first introduce a novel Critique-Rethink-Verify (CRV) system designed for training smaller yet powerful LRMs. Our CRV system consists of multiple LLM agents, each specializing in a unique ability: (i) critiquing CoT quality according to the cognitive capabilities of smaller models, (ii) rethinking and refining these CoTs based on the critiques, and (iii) verifying the correctness of the refined results. Based on the CRV system, we further propose the Cognitive Preference Optimization (CogPO) algorithm, which continuously enhances the reasoning abilities of smaller models by aligning their reasoning processes with their cognitive capacities. Comprehensive evaluations on challenging reasoning benchmarks demonstrate the efficacy of our CRV+CogPO framework, which outperforms other methods by a large margin.
Anthology ID:
2025.emnlp-main.377
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
7434–7449
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.377/
Cite (ACL):
Wenrui Cai, Chengyu Wang, Junbing Yan, Jun Huang, and Xiangzhong Fang. 2025. Enhancing Reasoning Abilities of Small LLMs with Cognitive Alignment. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 7434–7449, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Enhancing Reasoning Abilities of Small LLMs with Cognitive Alignment (Cai et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.377.pdf
Checklist:
 2025.emnlp-main.377.checklist.pdf