What can Large Language Models Capture about Code Functional Equivalence?

Nickil Maveli, Antonio Vergari, Shay B Cohen


Abstract
Code-LLMs, LLMs pre-trained on large code corpora, have shown great progress in learning rich representations of the structure and syntax of code, successfully using them to generate or classify code fragments. At the same time, whether they do so because they capture code semantics, and how well, is still an open question. In this paper, we tackle this problem by introducing SeqCoBench, a benchmark for systematically assessing how well Code-LLMs capture code functional equivalence. SeqCoBench contains over 20 code transformations that either preserve or alter the semantics of Python programs. We conduct extensive evaluations in different settings, including zero-shot and parameter-efficient finetuning, on state-of-the-art (Code)-LLMs to see whether they can discern semantically equivalent or different pairs of programs in SeqCoBench. We find that the performance gap between these LLMs and classical match-based retrieval scores is minimal, with both approaches showing a concerning lack of depth in understanding code semantics.
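
To illustrate the kind of program pairs the benchmark evaluates, consider a minimal Python sketch below. The specific transformations shown are illustrative assumptions, not examples drawn from SeqCoBench itself: one variant preserves the original function's input-output behavior, while the other silently changes it.

# Original program.
def sum_even(xs):
    total = 0
    for x in xs:
        if x % 2 == 0:
            total += x
    return total

# Hypothetical semantics-preserving variant: the loop is rewritten as a
# generator expression, but the function returns the same value for every
# input. A model judging functional equivalence should label
# (sum_even, sum_even_equiv) as equivalent.
def sum_even_equiv(xs):
    return sum(x for x in xs if x % 2 == 0)

# Hypothetical semantics-altering variant: the parity test is flipped, so
# odd numbers are summed instead. The pair (sum_even, sum_even_broken)
# should be labeled as non-equivalent, even though the two programs are
# nearly identical token-wise, which is exactly where match-based
# retrieval scores fall short.
def sum_even_broken(xs):
    total = 0
    for x in xs:
        if x % 2 == 1:
            total += x
    return total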
Anthology ID:
2025.findings-naacl.382
Volume:
Findings of the Association for Computational Linguistics: NAACL 2025
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
6865–6903
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.382/
Cite (ACL):
Nickil Maveli, Antonio Vergari, and Shay B Cohen. 2025. What can Large Language Models Capture about Code Functional Equivalence? In Findings of the Association for Computational Linguistics: NAACL 2025, pages 6865–6903, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
What can Large Language Models Capture about Code Functional Equivalence? (Maveli et al., Findings 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.382.pdf