GaoXiong Cao
2025
Speed Up Your Code: Progressive Code Acceleration Through Bidirectional Tree Editing
Longhui Zhang | Jiahao Wang | Meishan Zhang | GaoXiong Cao | Ensheng Shi | Mayuchi Mayuchi | Jun Yu | Honghai Liu | Jing Li | Min Zhang
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Large language models (LLMs) have made significant strides in code acceleration (CA) tasks. Current works typically fine-tune LLMs using slow-fast code pairs mined from online programming platforms. Although these methods are widely recognized for their effectiveness, the training data often lack clear code acceleration patterns and offer only limited speed improvements. Moreover, existing training methods, such as direct instruction fine-tuning (IFT), tend to overlook the hierarchical relationships among acceleration patterns. In this work, we introduce BITE, a novel training paradigm designed to improve LLMs’ CA capabilities through two key innovations: (1) Bidirectional tree editing, which generates high-quality training data by incrementally transforming given code into both its most efficient and least efficient variants, and (2) Progressive code acceleration learning, which enables LLMs to internalize multi-level CA strategies by learning increasingly sophisticated acceleration patterns. Additionally, we introduce a new CA evaluation benchmark and metric for comprehensive assessment of model performance on CA tasks. Extensive experiments on both our benchmark and existing benchmarks demonstrate the effectiveness of our approach. Notably, BITE enables Qwen-1.5B to outperform prompt-enhanced GPT-4 and current training-based methods on average across five programming languages.
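To make the two ideas in the abstract concrete, below is a minimal, hypothetical sketch of bidirectional tree editing and progressive pairing. It is not the authors' implementation: the edit rules (`optimize_step`, `deoptimize_step`), the `Variant` structure, and the pairing heuristic are illustrative assumptions that only mimic the described workflow of expanding a seed program toward faster and slower variants and ordering training pairs from small to large speed gaps.

```python
# Hypothetical sketch (not the paper's code): expand a seed program in both
# directions with toy edit rules, then pair variants so that easier pairs
# (small efficiency gap) come before harder ones, as in a progressive curriculum.
from dataclasses import dataclass


@dataclass
class Variant:
    code: str
    level: int  # negative = de-optimized, 0 = seed, positive = optimized


def optimize_step(code: str) -> str:
    # Toy "acceleration" edit: replace an accumulating loop with sum().
    return code.replace(
        "total = 0\nfor x in xs:\n    total += x",
        "total = sum(xs)",
    )


def deoptimize_step(code: str) -> str:
    # Toy "deceleration" edit: copy the sequence on every iteration.
    return code.replace("for x in xs:", "for x in list(xs):")


def bidirectional_variants(seed: str, depth: int = 1) -> list[Variant]:
    """Expand the seed toward faster and slower variants, sorted by level."""
    variants = [Variant(seed, 0)]
    fast, slow = seed, seed
    for d in range(1, depth + 1):
        fast = optimize_step(fast)
        slow = deoptimize_step(slow)
        variants += [Variant(fast, d), Variant(slow, -d)]
    return sorted(variants, key=lambda v: v.level)


def progressive_pairs(variants: list[Variant]) -> list[tuple[str, str]]:
    """Pair each variant with every strictly faster one, smallest gap first."""
    pairs = []
    for i, slow in enumerate(variants):
        for fast in variants[i + 1:]:
            pairs.append((fast.level - slow.level, slow.code, fast.code))
    pairs.sort(key=lambda p: p[0])
    return [(s, f) for _, s, f in pairs]


if __name__ == "__main__":
    seed = "total = 0\nfor x in xs:\n    total += x"
    for slow, fast in progressive_pairs(bidirectional_variants(seed)):
        print("SLOW:\n" + slow + "\nFAST:\n" + fast + "\n---")
```

In this reading, each (slow, fast) pair would serve as one instruction-tuning example, and sorting by level gap stands in for the paper's idea of learning increasingly sophisticated acceleration patterns.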
Co-authors
- Jing Li (李婧) 1
- Honghai Liu 1
- Mayuchi Mayuchi 1
- Ensheng Shi 1
- Jiahao Wang 1
Venues
- ACL 1