LLM Reasoning Engine: Specialized Training for Enhanced Mathematical Reasoning

Shuguang Chen, Guang Lin


Abstract
Large Language Models (LLMs) have shown remarkable performance on various natural language processing tasks but face challenges in mathematical reasoning, where complex problem-solving requires both linguistic understanding and mathematical reasoning skills. Existing approaches to this challenge often rely on ensemble methods and suffer from data scarcity in target domains. In this work, we present a novel method to enhance the capabilities of LLMs in mathematical reasoning tasks. To bridge this gap, our approach incorporates a question paraphrase strategy, which diversifies the linguistic forms of mathematical questions to improve generalization. Additionally, specialized training objectives are employed to guide the model's learning process, focusing on enhancing its understanding of mathematical concepts and reasoning processes. We conduct experiments on four datasets using different LLMs and demonstrate the effectiveness of our approach in improving LLMs' performance on mathematical reasoning tasks. Our findings underscore the significance of our methodology in advancing large language models and its potential implications for real-world applications that require mathematical reasoning abilities.
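The question-paraphrase strategy described above can be illustrated with a minimal sketch. Everything here is hypothetical: the paper does not specify its paraphrasing mechanism, so this uses simple surface templates purely to show the data-augmentation shape (one question expanded into several linguistic variants, each paired with the original answer).

```python
# Hypothetical sketch of question-paraphrase augmentation: each training
# question is rewritten into several surface forms while the answer is
# reused, so the model sees varied phrasings of the same math problem.
# Template-based here for illustration only; the paper's actual strategy
# may use a learned paraphraser.

TEMPLATES = [
    "{q}",
    "Here is a math problem: {q}",
    "Solve the following question: {q}",
]


def paraphrase_question(question: str) -> list[str]:
    """Return several linguistic variants of one question."""
    return [t.format(q=question) for t in TEMPLATES]


def augment(dataset: list[dict]) -> list[dict]:
    """Expand (question, answer) pairs with paraphrased questions."""
    out = []
    for ex in dataset:
        for variant in paraphrase_question(ex["question"]):
            out.append({"question": variant, "answer": ex["answer"]})
    return out


data = [{"question": "If x + 3 = 7, what is x?", "answer": "4"}]
augmented = augment(data)
```

A real pipeline would replace the templates with a stronger paraphrase model, but the training loop consumes the augmented list the same way in either case.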
Anthology ID:
2025.knowledgenlp-1.9
Volume:
Proceedings of the 4th International Workshop on Knowledge-Augmented Methods for Natural Language Processing
Month:
May
Year:
2025
Address:
Albuquerque, New Mexico, USA
Editors:
Weijia Shi, Wenhao Yu, Akari Asai, Meng Jiang, Greg Durrett, Hannaneh Hajishirzi, Luke Zettlemoyer
Venues:
KnowledgeNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
118–128
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.knowledgenlp-1.9/
Cite (ACL):
Shuguang Chen and Guang Lin. 2025. LLM Reasoning Engine: Specialized Training for Enhanced Mathematical Reasoning. In Proceedings of the 4th International Workshop on Knowledge-Augmented Methods for Natural Language Processing, pages 118–128, Albuquerque, New Mexico, USA. Association for Computational Linguistics.
Cite (Informal):
LLM Reasoning Engine: Specialized Training for Enhanced Mathematical Reasoning (Chen & Lin, KnowledgeNLP 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.knowledgenlp-1.9.pdf