CodeSSM: Towards State Space Models for Code Understanding

Shweta Verma, Abhinav Anand, Mira Mezini


Abstract
Although transformers dominate many code-specific tasks, they have significant limitations. This paper explores State Space Models (SSMs) as a promising alternative for code understanding tasks such as retrieval, classification, and clone detection. We introduce CodeSSM, the first SSM-based model trained on code corpora, to assess its effectiveness. Our results demonstrate that SSMs are more sample-efficient and can extrapolate to contexts longer than the pretraining length. Extensive experiments show that SSMs offer a viable alternative to transformers, addressing several of their limitations. Additionally, CodeSSM reduces memory usage by up to 64% compared to transformers at a context length of 2048, with greater savings as context length grows. The code is available [here](https://github.com/abx04/CodeSSM).
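As background for readers unfamiliar with SSMs (a generic sketch of the standard discrete-time formulation, not the paper's specific parameterization): a state space layer maps an input sequence $x_t$ to an output $y_t$ through a latent state $h_t$,

$$h_t = \bar{A}\,h_{t-1} + \bar{B}\,x_t, \qquad y_t = C\,h_t,$$

where $\bar{A}$, $\bar{B}$, and $C$ are learned (discretized) parameters. The recurrence processes the sequence in linear time with a fixed-size state, which is why SSM-based models scale more gracefully in memory with context length than quadratic self-attention.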
Anthology ID:
2025.emnlp-main.1735
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
34207–34223
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1735/
Cite (ACL):
Shweta Verma, Abhinav Anand, and Mira Mezini. 2025. CodeSSM: Towards State Space Models for Code Understanding. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 34207–34223, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
CodeSSM: Towards State Space Models for Code Understanding (Verma et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1735.pdf
Checklist:
 2025.emnlp-main.1735.checklist.pdf