Simple and Effective Baselines for Code Summarisation Evaluation

Jade Robinson, Jonathan K. Kummerfeld


Abstract
Code documentation is useful, but writing it is time-consuming. Different techniques for generating code summaries have emerged, but comparing them is difficult because human evaluation is expensive and automatic metrics are unreliable. In this paper, we introduce a simple new baseline in which we ask an LLM to give an overall score to a summary. Unlike n-gram and embedding-based baselines, our approach is able to consider the code when giving a score. This also allows us to make a variant that does not consider the reference summary at all, which could be used for other tasks, e.g., to evaluate the quality of documentation in code bases. We find that our method is as good as or better than prior metrics, though we recommend using it in conjunction with embedding-based methods to avoid the risk of LLM-specific bias.
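The direct-assessment idea described in the abstract can be sketched as a prompt builder plus a score parser. This is a minimal illustration with hypothetical wording and function names, not the authors' actual prompt or implementation; the reference-free variant simply omits the reference section of the prompt.

```python
import re
from typing import Optional

def build_prompt(code: str, summary: str, reference: Optional[str] = None) -> str:
    """Assemble a direct-assessment prompt for an LLM judge.
    The wording is illustrative, not the prompt used in the paper."""
    parts = [
        "Rate the quality of the following code summary "
        "on a scale from 1 (poor) to 5 (excellent).",
        f"Code:\n{code}",
        f"Candidate summary:\n{summary}",
    ]
    if reference is not None:
        # Reference-free variant: skip this section entirely.
        parts.append(f"Reference summary:\n{reference}")
    parts.append("Answer with a single integer.")
    return "\n\n".join(parts)

def parse_score(llm_output: str) -> Optional[int]:
    """Extract the first integer in 1-5 from the model's reply; None if absent."""
    match = re.search(r"\b([1-5])\b", llm_output)
    return int(match.group(1)) if match else None
```

The prompt string would be sent to whichever LLM is being used as the judge, and `parse_score` applied to its reply; averaging scores over a test set yields a system-level metric.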
Anthology ID:
2025.alta-main.9
Volume:
Proceedings of The 23rd Annual Workshop of the Australasian Language Technology Association
Month:
November
Year:
2025
Address:
Sydney, Australia
Editors:
Jonathan K. Kummerfeld, Aditya Joshi, Mark Dras
Venue:
ALTA
Publisher:
Association for Computational Linguistics
Pages:
112–131
URL:
https://preview.aclanthology.org/ingest-alta/2025.alta-main.9/
Cite (ACL):
Jade Robinson and Jonathan K. Kummerfeld. 2025. Simple and Effective Baselines for Code Summarisation Evaluation. In Proceedings of The 23rd Annual Workshop of the Australasian Language Technology Association, pages 112–131, Sydney, Australia. Association for Computational Linguistics.
Cite (Informal):
Simple and Effective Baselines for Code Summarisation Evaluation (Robinson & Kummerfeld, ALTA 2025)
PDF:
https://preview.aclanthology.org/ingest-alta/2025.alta-main.9.pdf