Do Code Semantics Help? A Comprehensive Study on Execution Trace-Based Information for Code Large Language Models

Jian Jornbowrl Wang, Xiaofei Xie, Qiang Hu, Shangqing Liu, Yi Li


Abstract
Code Large Language Models (Code LLMs) have opened a new era in programming with their impressive capabilities. However, recent research has revealed critical limitations in their ability to reason about runtime behavior and to understand the actual functionality of programs, which poses significant challenges for their post-training and practical deployment. Specifically, Code LLMs face two principal issues: (1) a lack of proficiency in reasoning about program execution behavior, as they struggle to interpret what programs actually do at runtime, and (2) inconsistent and fragmented representations of semantic information, such as execution traces, across existing methods, which hinder their ability to generalize and reason effectively. These challenges underscore the need for more systematic approaches to enhancing the reasoning capabilities of Code LLMs. To address these issues, we introduce a generic framework for integrating semantic information (e.g., execution traces) into code task-relevant prompts, and conduct a comprehensive study of the role such semantic information plays in enhancing the reasoning ability of Code LLMs. Specifically, we investigate the usefulness of trace-based semantic information in boosting supervised fine-tuning (SFT) and post-training inference of Code LLMs. Surprisingly, the experimental results disagree with previous work and demonstrate that semantic information has limited usefulness for SFT and test-time scaling of Code LLMs.
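To make the idea of "trace-based semantic information" concrete, the following is a minimal sketch of how an execution trace might be captured and serialized into a code task-relevant prompt. It is an illustration only, not the paper's framework: the `capture_trace` helper, the `sum_to` example function, and the prompt template are all assumptions introduced here, using Python's standard `sys.settrace` hook.

```python
import sys

def capture_trace(func, *args):
    """Record (relative line offset, local variables) for each executed
    line of `func`, using the standard sys.settrace hook.
    Illustrative helper only; not from the paper."""
    trace = []

    def tracer(frame, event, arg):
        # Only record line events inside the target function.
        if event == "line" and frame.f_code is func.__code__:
            offset = frame.f_lineno - func.__code__.co_firstlineno
            trace.append((offset, dict(frame.f_locals)))
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)  # always detach the tracer
    return result, trace

def sum_to(n):
    # Toy program whose runtime behavior we want the model to reason about.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

result, trace = capture_trace(sum_to, 3)

# Serialize the trace into a prompt, one hypothetical formatting choice
# among many (the paper studies how such representations vary).
prompt = "Given the program and its execution trace, predict the output.\n"
for offset, local_vars in trace:
    prompt += f"line +{offset}: locals = {local_vars}\n"
```

The serialized `prompt` string could then be prepended or appended to the usual task instruction before querying a Code LLM, which is the kind of prompt augmentation the study evaluates.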
Anthology ID:
2025.findings-emnlp.548
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
10367–10385
URL:
https://preview.aclanthology.org/ingest-luhme/2025.findings-emnlp.548/
DOI:
10.18653/v1/2025.findings-emnlp.548
Cite (ACL):
Jian Jornbowrl Wang, Xiaofei Xie, Qiang Hu, Shangqing Liu, and Yi Li. 2025. Do Code Semantics Help? A Comprehensive Study on Execution Trace-Based Information for Code Large Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 10367–10385, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Do Code Semantics Help? A Comprehensive Study on Execution Trace-Based Information for Code Large Language Models (Wang et al., Findings 2025)
PDF:
https://preview.aclanthology.org/ingest-luhme/2025.findings-emnlp.548.pdf
Checklist:
2025.findings-emnlp.548.checklist.pdf