LEANCODE: Understanding Models Better for Code Simplification of Pre-trained Large Language Models
Yan Wang, Ling Ding, Tien N Nguyen, Shaohua Wang, Yanan Zheng
Abstract
Large Language Models for code often incur substantial computational cost, which grows rapidly with the length of the input code sequence. We propose LeanCode, a code-simplification approach that reduces training and prediction time by leveraging code context and attention scores to estimate token importance. We advocate selectively removing tokens based on average context-aware attention scores rather than scores averaged across all inputs. LeanCode uses the attention scores of the 'CLS' token within the encoder for classification tasks such as code search, and it uses encoder-decoder attention scores to determine token significance for sequence-to-sequence tasks such as code summarization. Our evaluation shows LeanCode's superiority over the state-of-the-art approaches DietCode and SlimCode, with improvements of 60% and 16% for code search, and 29% and 27% for code summarization, respectively.
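To make the token-selection idea concrete, here is a minimal, hypothetical sketch (not the authors' released implementation): it ranks code tokens by the attention that the encoder's 'CLS' token pays to them and keeps only the highest-scoring ones. The function name `prune_by_cls_attention`, the toy token list, the `keep_ratio` parameter, and the randomly generated attention tensor are all assumptions made for illustration.

```python
# Hypothetical sketch: prune code tokens by the attention the encoder's
# 'CLS' token pays to them, keeping the highest-scoring positions.
import numpy as np

def prune_by_cls_attention(tokens, attn, keep_ratio=0.7):
    """tokens: list[str] of length n (position 0 is 'CLS');
    attn: (num_heads, n, n) attention weights from one encoder layer;
    returns the surviving tokens in their original order."""
    # Average over heads, then take the row for the 'CLS' query token:
    # how much 'CLS' attends to every position in the sequence.
    cls_scores = attn.mean(axis=0)[0]                      # shape (n,)
    n_keep = max(1, int(round(keep_ratio * (len(tokens) - 1))))
    # Rank the non-CLS positions by score, descending, and keep the top ones.
    ranked = np.argsort(cls_scores[1:])[::-1] + 1
    kept = sorted(ranked[:n_keep].tolist())
    return [tokens[0]] + [tokens[i] for i in kept]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    toks = ["CLS", "def", "add", "(", "a", ",", "b", ")", ":",
            "return", "a", "+", "b"]
    raw = rng.random((8, len(toks), len(toks)))
    attn = raw / raw.sum(axis=-1, keepdims=True)           # row-normalized like softmax
    print(prune_by_cls_attention(toks, attn, keep_ratio=0.6))
```

For sequence-to-sequence tasks, the same ranking could in principle be driven by encoder-decoder attention rows instead of the 'CLS' row, as the abstract describes.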
- Anthology ID: 2025.acl-long.78
- Volume: Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
- Month: July
- Year: 2025
- Address: Vienna, Austria
- Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
- Venue: ACL
- Publisher: Association for Computational Linguistics
- Pages: 1551–1567
- URL: https://preview.aclanthology.org/landing_page/2025.acl-long.78/
- Cite (ACL): Yan Wang, Ling Ding, Tien N Nguyen, Shaohua Wang, and Yanan Zheng. 2025. LEANCODE: Understanding Models Better for Code Simplification of Pre-trained Large Language Models. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1551–1567, Vienna, Austria. Association for Computational Linguistics.
- Cite (Informal): LEANCODE: Understanding Models Better for Code Simplification of Pre-trained Large Language Models (Wang et al., ACL 2025)
- PDF: https://preview.aclanthology.org/landing_page/2025.acl-long.78.pdf