Let’s Reason Formally: Natural-Formal Hybrid Reasoning Enhances LLM’s Math Capability

Ruida Wang, Yuxin Li, Yi R. Fung, Tong Zhang


Abstract
Enhancing the mathematical reasoning capabilities of LLMs has garnered significant attention in both the mathematical and computer science communities. Recent works have made substantial progress in both Natural Language (NL) and Formal Language (FL) reasoning by leveraging pure Reinforcement Learning (RL) methods on base models. However, RL approaches struggle to impart capabilities not present in the base model, highlighting the need to effectively integrate additional knowledge, such as FL, into NL math reasoning. This integration is challenging, however, due to inherent disparities in problem structure and reasoning format between NL and FL. To address these challenges, we introduce **NL-FL HybridReasoning (NFL-HR)**, an end-to-end framework that incorporates an FL expert into NL math problem-solving. To bridge the NL-FL input format gap, we propose *NL-FL Problem Alignment*, which reformulates Question-Answering (QA) problems in NL as existence theorems in FL. Our *Mixed Problem Input* technique then enables the FL reasoner to handle QA and existence problems concurrently. Finally, we close the NL-FL output format gap with an LLM-based *Answer Extraction* mechanism. Comprehensive experiments demonstrate that the **NFL-HR** framework achieves **89.80%** and **84.34%** accuracy on the MATH-500 and AMC benchmarks, surpassing the NL baseline by 4.60% and 4.82%, respectively. Notably, some problems solved by our framework remain unsolved by the NL baseline even with a larger number of trials.
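To make the *NL-FL Problem Alignment* step concrete, the sketch below shows how an NL QA problem could be recast as an FL existence theorem. It assumes Lean 4 with Mathlib as the formal language and uses a toy problem of our own choosing; the paper's exact encoding and prover may differ. The proof witness (here `x = 2`) is what the *Answer Extraction* step would translate back into an NL answer.

```lean
import Mathlib

-- Hypothetical NL QA problem: "Solve 2x + 3 = 7 for x."
-- NL-FL Problem Alignment recasts the question as an existence theorem;
-- the witness supplied in the proof (x = 2) is the answer that an
-- LLM-based Answer Extraction step would read back into NL.
theorem qa_as_existence : ∃ x : ℝ, 2 * x + 3 = 7 :=
  ⟨2, by norm_num⟩
```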
Anthology ID:
2025.emnlp-main.850
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
16794–16820
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.850/
Cite (ACL):
Ruida Wang, Yuxin Li, Yi R. Fung, and Tong Zhang. 2025. Let’s Reason Formally: Natural-Formal Hybrid Reasoning Enhances LLM’s Math Capability. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 16794–16820, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Let’s Reason Formally: Natural-Formal Hybrid Reasoning Enhances LLM’s Math Capability (Wang et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.850.pdf
Checklist:
 2025.emnlp-main.850.checklist.pdf