Robust Code Summarization

Debanjan Mondal, Abhilasha Lodha, Ankita Sahoo, Beena Kumari


Abstract
This paper examines code summarization with transformer-based language models. Through empirical studies, we evaluate the robustness of code summarization by altering function and variable names, probing whether models truly understand code semantics or merely rely on textual cues. We also introduce adversaries such as dead code and commented code across three programming languages (Python, JavaScript, and Java) to further scrutinize the models' understanding. Ultimately, our research aims to offer valuable insights into the inner workings of transformer-based LMs, enhancing their ability to understand code and contributing to more efficient software development and maintenance workflows.
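The perturbations described in the abstract, renaming identifiers and inserting dead code, can be sketched as a small program transformation. This is an illustrative reconstruction only, not the authors' actual pipeline; the class and function names below are invented for the example, and the paper applies such transformations across Python, JavaScript, and Java.

```python
import ast


class RenameIdentifiers(ast.NodeTransformer):
    """Replace function, argument, and variable names with opaque
    placeholders (var0, var1, ...) to strip textual cues."""

    def __init__(self):
        self.mapping = {}

    def _rename(self, name):
        if name not in self.mapping:
            self.mapping[name] = f"var{len(self.mapping)}"
        return self.mapping[name]

    def visit_FunctionDef(self, node):
        node.name = self._rename(node.name)
        self.generic_visit(node)  # also rename args and body names
        return node

    def visit_arg(self, node):
        node.arg = self._rename(node.arg)
        return node

    def visit_Name(self, node):
        node.id = self._rename(node.id)
        return node


def add_dead_code(source):
    """Append an unreachable branch as a dead-code adversary."""
    return source + "\nif False:\n    unused = 0\n"


src = "def area(width, height):\n    return width * height\n"
tree = ast.parse(src)
renamed = ast.unparse(RenameIdentifiers().visit(tree))
perturbed = add_dead_code(renamed)
print(perturbed)
```

A summarization model that relies on semantics should still describe the perturbed function as computing a product, while a model leaning on identifier names (`area`, `width`, `height`) would lose its main signal.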
Anthology ID:
2023.genbench-1.5
Volume:
Proceedings of the 1st GenBench Workshop on (Benchmarking) Generalisation in NLP
Month:
December
Year:
2023
Address:
Singapore
Editors:
Dieuwke Hupkes, Verna Dankers, Khuyagbaatar Batsuren, Koustuv Sinha, Amirhossein Kazemnejad, Christos Christodoulopoulos, Ryan Cotterell, Elia Bruni
Venues:
GenBench | WS
Publisher:
Association for Computational Linguistics
Pages:
65–75
URL:
https://aclanthology.org/2023.genbench-1.5
DOI:
10.18653/v1/2023.genbench-1.5
Cite (ACL):
Debanjan Mondal, Abhilasha Lodha, Ankita Sahoo, and Beena Kumari. 2023. Robust Code Summarization. In Proceedings of the 1st GenBench Workshop on (Benchmarking) Generalisation in NLP, pages 65–75, Singapore. Association for Computational Linguistics.
Cite (Informal):
Robust Code Summarization (Mondal et al., GenBench-WS 2023)
PDF:
https://preview.aclanthology.org/nschneid-patch-5/2023.genbench-1.5.pdf
Video:
https://preview.aclanthology.org/nschneid-patch-5/2023.genbench-1.5.mp4