Exploring Data Augmentation for Code Generation Tasks

Pinzhen Chen, Gerasimos Lampouras


Abstract
Advances in natural language processing, such as transfer learning from pre-trained language models, have also influenced how models are trained for programming language tasks. Previous research primarily explored code pre-training and expanded it through multi-modality and multi-tasking, yet the data for downstream tasks remain modest in size. Focusing on data utilization for downstream tasks, we propose and adapt augmentation methods that yield consistent improvements of up to 6.9% in code translation and 7.5% in code summarization. Further analysis suggests that our methods work orthogonally and show benefits in output code style and numeric consistency. We also discuss test data imperfections.
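The paper's specific augmentation methods are detailed in the full text; purely as an illustration of the general family of techniques the abstract refers to, the Python sketch below shows one common semantics-preserving code augmentation, identifier renaming. Everything here (the _Renamer class, the rename_identifiers helper, the placeholder naming scheme) is a hypothetical example for exposition, not the paper's method.

import ast
import builtins

class _Renamer(ast.NodeTransformer):
    """Consistently map user-defined identifiers to placeholder names."""

    def __init__(self) -> None:
        self.mapping: dict[str, str] = {}

    def visit_Name(self, node: ast.Name) -> ast.Name:
        if node.id in vars(builtins):  # leave builtins such as print() intact
            return node
        node.id = self.mapping.setdefault(node.id, f"var_{len(self.mapping)}")
        return node

def rename_identifiers(source: str) -> str:
    """Return a semantics-preserving variant of `source` with renamed variables.

    Simplified for illustration: a production version would also need to
    handle scoping, attribute access, and keyword arguments.
    """
    tree = _Renamer().visit(ast.parse(source))
    return ast.unparse(tree)  # ast.unparse requires Python 3.9+

print(rename_identifiers("total = price * quantity\nprint(total)"))
# var_0 = var_1 * var_2
# print(var_0)

Applied to a parallel corpus, such a transformation yields additional training pairs whose surface form differs but whose semantics are unchanged, which is the general idea behind augmentation for code generation tasks.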
Anthology ID: 2023.findings-eacl.114
Volume: Findings of the Association for Computational Linguistics: EACL 2023
Month: May
Year: 2023
Address: Dubrovnik, Croatia
Editors: Andreas Vlachos, Isabelle Augenstein
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 1542–1550
URL: https://aclanthology.org/2023.findings-eacl.114
DOI: 10.18653/v1/2023.findings-eacl.114
Cite (ACL): Pinzhen Chen and Gerasimos Lampouras. 2023. Exploring Data Augmentation for Code Generation Tasks. In Findings of the Association for Computational Linguistics: EACL 2023, pages 1542–1550, Dubrovnik, Croatia. Association for Computational Linguistics.
Cite (Informal): Exploring Data Augmentation for Code Generation Tasks (Chen & Lampouras, Findings 2023)
PDF: https://preview.aclanthology.org/add_acl24_videos/2023.findings-eacl.114.pdf
Video: https://preview.aclanthology.org/add_acl24_videos/2023.findings-eacl.114.mp4