Pengyu Nie
2022
Impact of Evaluation Methodologies on Code Summarization
Pengyu Nie | Jiyang Zhang | Junyi Jessy Li | Ray Mooney | Milos Gligoric
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
There has been a growing interest in developing machine learning (ML) models for code summarization tasks, e.g., comment generation and method naming. Despite a substantial increase in the effectiveness of ML models, the evaluation methodologies, i.e., the way people split datasets into training, validation, and test sets, were not well studied. Specifically, no prior work on code summarization considered the timestamps of code and comments during evaluation. This may lead to evaluations that are inconsistent with the intended use cases. In this paper, we introduce the time-segmented evaluation methodology, which is novel to the code summarization research community, and compare it with the mixed-project and cross-project methodologies that have been commonly used. Each methodology can be mapped to some use cases, and we argue that the time-segmented methodology should be adopted when evaluating ML models for code summarization. To assess the impact of methodologies, we collect a dataset of (code, comment) pairs with timestamps to train and evaluate several recent ML models for code summarization. Our experiments show that different methodologies lead to conflicting evaluation results. We invite the community to expand the set of methodologies used in evaluations.
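As a rough illustration of the three evaluation methodologies contrasted in the abstract, the Python sketch below shows how a dataset of (code, comment) examples could be split in a mixed-project, cross-project, or time-segmented fashion. The Example record and its field names are assumptions made for illustration, not the paper's actual data schema or code.

```python
import random
from collections import namedtuple

# Hypothetical example record; field names are assumptions, not the paper's schema.
Example = namedtuple("Example", ["project", "timestamp", "code", "comment"])

def mixed_project_split(examples, train_frac=0.8, seed=0):
    """Mixed-project: randomly shuffle all examples, ignoring project and time."""
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

def cross_project_split(examples, held_out_projects):
    """Cross-project: hold out entire projects for testing."""
    train = [e for e in examples if e.project not in held_out_projects]
    test = [e for e in examples if e.project in held_out_projects]
    return train, test

def time_segmented_split(examples, cutoff):
    """Time-segmented: train only on examples created before the cutoff timestamp."""
    train = [e for e in examples if e.timestamp < cutoff]
    test = [e for e in examples if e.timestamp >= cutoff]
    return train, test
```

The key difference the paper studies is visible in the last function: a time-segmented split ensures that no test example predates any training example, mirroring how a deployed model would only see future code.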
2020
Learning to Update Natural Language Comments Based on Code Changes
Sheena Panthaplackel | Pengyu Nie | Milos Gligoric | Junyi Jessy Li | Raymond Mooney
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
We formulate the novel task of automatically updating an existing natural language comment based on changes in the body of code it accompanies. We propose an approach that learns to correlate changes across two distinct language representations, to generate a sequence of edits that are applied to the existing comment to reflect the source code modifications. We train and evaluate our model using a dataset that we collected from commit histories of open-source software projects, with each example consisting of a concurrent update to a method and its corresponding comment. We compare our approach against multiple baselines using both automatic metrics and human evaluation. Results reflect the challenge of this task and show that our model outperforms the baselines with respect to making edits.
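To illustrate the edit-based formulation described in the abstract, the Python sketch below applies a sequence of token-level edit operations to an existing comment. The (keep, delete, insert) edit format is an assumption for illustration only, not the paper's exact edit representation or model output.

```python
def apply_edits(old_comment_tokens, edits):
    """Apply a sequence of token-level edit operations to an existing comment.

    Each edit is a tuple: ("keep",), ("delete",), or ("insert", token).
    This edit format is assumed for illustration, not taken from the paper.
    """
    new_tokens = []
    i = 0  # position in the old comment
    for op in edits:
        if op[0] == "keep":
            new_tokens.append(old_comment_tokens[i])
            i += 1
        elif op[0] == "delete":
            i += 1  # skip the old token
        elif op[0] == "insert":
            new_tokens.append(op[1])
    return new_tokens

# Example: the method was changed from returning the maximum to the minimum.
old = ["Returns", "the", "maximum", "value", "."]
edits = [("keep",), ("keep",), ("delete",), ("insert", "minimum"), ("keep",), ("keep",)]
print(apply_edits(old, edits))  # ['Returns', 'the', 'minimum', 'value', '.']
```

Generating such an edit sequence, rather than rewriting the comment from scratch, lets the model preserve the unchanged parts of the existing comment.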