Code Execution as Grounded Supervision for LLM Reasoning

Dongwon Jung, Wenxuan Zhou, Muhao Chen


Abstract
Training large language models (LLMs) with chain-of-thought (CoT) supervision has proven effective for enhancing their reasoning abilities. However, obtaining reliable and accurate reasoning supervision remains a significant challenge. We propose a scalable method for generating a high-quality CoT supervision dataset by leveraging the determinism of program execution. Unlike existing reasoning dataset generation methods that rely on costly human annotations or error-prone LLM-generated CoT, our approach extracts verifiable, step-by-step reasoning traces from code execution and transforms them into natural language CoT reasoning. Experiments on reasoning benchmarks across various domains show that our method effectively equips LLMs with transferable reasoning abilities across diverse tasks. Furthermore, ablation studies confirm that our method produces highly accurate reasoning data and reduces overall token length during inference by curbing meaningless repetition and overthinking.
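The paper's own extraction pipeline is not reproduced on this page; purely as an illustration of the general idea (the tracing approach and all names below are this sketch's assumptions, not the authors' implementation), a Python trace hook can record per-line variable states during execution and verbalize them as step-by-step reasoning:

```python
import sys

def trace_execution(func, *args):
    """Run func(*args) while recording a snapshot of its local
    variables at every executed line (via sys.settrace)."""
    steps = []
    code = func.__code__

    def tracer(frame, event, arg):
        if frame.f_code is code and event == "line":
            steps.append(dict(frame.f_locals))  # copy current locals
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)
    return result, steps

def steps_to_cot(steps):
    """Turn successive variable-state snapshots into natural language
    reasoning sentences, mentioning only variables that changed."""
    sentences = []
    prev = {}
    for snapshot in steps:
        for name, value in snapshot.items():
            if prev.get(name) != value:
                sentences.append(f"Now {name} = {value!r}.")
        prev = snapshot
    return sentences

def sum_of_squares(n):
    total = 0
    for i in range(1, n + 1):
        total += i * i
    return total

result, steps = trace_execution(sum_of_squares, 3)
cot = steps_to_cot(steps)
```

Because every sentence is grounded in an actual execution state, the resulting trace is verifiable by construction, which is the property the abstract contrasts with error-prone LLM-generated CoT.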
Anthology ID:
2025.emnlp-main.1260
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
24822–24833
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1260/
Cite (ACL):
Dongwon Jung, Wenxuan Zhou, and Muhao Chen. 2025. Code Execution as Grounded Supervision for LLM Reasoning. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 24822–24833, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Code Execution as Grounded Supervision for LLM Reasoning (Jung et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1260.pdf
Checklist:
2025.emnlp-main.1260.checklist.pdf