Chenxiao Liu
2023
Code Execution with Pre-trained Language Models
Chenxiao Liu, Shuai Lu, Weizhu Chen, Daxin Jiang, Alexey Svyatkovskiy, Shengyu Fu, Neel Sundaresan, Nan Duan
Findings of the Association for Computational Linguistics: ACL 2023
Code execution is a fundamental aspect of programming language semantics that reflects the exact behavior of the code. However, most pre-trained models for code intelligence ignore the execution trace and only rely on source code and syntactic structures. In this paper, we investigate how well pre-trained models can understand and perform code execution. We develop a mutation-based data augmentation technique to create a large-scale and realistic Python dataset and task for code execution, which challenges existing models such as Codex. We then present CodeExecutor, a Transformer model that leverages code execution pre-training and curriculum learning to enhance its semantic comprehension. We evaluate CodeExecutor on code execution and show its promising performance and limitations. We also demonstrate its potential benefits for code intelligence tasks such as zero-shot code-to-code search and text-to-code generation. Our analysis provides insights into the learning and generalization abilities of pre-trained models for code execution.
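The mutation-based data augmentation the abstract describes can be pictured with a minimal sketch (all names here are hypothetical, not the paper's actual pipeline): apply a single mutation operator to a Python snippet's AST, then execute both versions to observe how the mutation changes runtime behavior.

```python
import ast

# Hypothetical sketch of mutation-based augmentation for code execution:
# flip one Add operator to Sub, then run both versions and compare results.
SRC = "a = 3\nb = 4\nresult = a + b\n"

class FlipAddSub(ast.NodeTransformer):
    """Mutation operator: replace the first `+` with `-`."""
    def __init__(self):
        self.done = False

    def visit_BinOp(self, node):
        self.generic_visit(node)
        if not self.done and isinstance(node.op, ast.Add):
            node.op = ast.Sub()
            self.done = True
        return node

def run(src):
    # Execute the snippet and return its final `result` value
    # (a stand-in for recording a full execution trace).
    env = {}
    exec(compile(src, "<snippet>", "exec"), env)
    return env["result"]

tree = ast.parse(SRC)
mutant_src = ast.unparse(FlipAddSub().visit(tree))

print(run(SRC))         # 7
print(run(mutant_src))  # -1
```

Each (mutant, trace) pair then serves as one training instance, so a single seed program yields many semantically distinct execution examples.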
2021
CodeQA: A Question Answering Dataset for Source Code Comprehension
Chenxiao Liu, Xiaojun Wan
Findings of the Association for Computational Linguistics: EMNLP 2021
We propose CodeQA, a free-form question answering dataset for source code comprehension: given a code snippet and a question, a textual answer must be generated. CodeQA contains a Java dataset with 119,778 question-answer pairs and a Python dataset with 70,085 question-answer pairs. To obtain natural and faithful questions and answers, we apply syntactic rules and semantic analysis to transform code comments into question-answer pairs. We present the construction process and conduct a systematic analysis of our dataset. We report and discuss the results of several neural baselines on our dataset. While research on question answering and machine reading comprehension has developed rapidly, little prior work has drawn attention to code question answering. This new dataset can serve as a useful research benchmark for source code comprehension.
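The comment-to-QA transformation described above can be illustrated with a minimal sketch (the rule and function names are hypothetical, not the paper's actual rule set): a syntactic pattern over a code comment produces one question-answer pair.

```python
import re

# Hypothetical sketch of one syntactic rule for turning a code comment
# into a question-answer pair, in the spirit of comment-to-QA transformation.
def comment_to_qa(comment, code_unit):
    # Rule: a "returns X" comment becomes "What does <unit> return?" / "X".
    m = re.match(r"returns?\s+(.*)", comment.strip(), re.IGNORECASE)
    if m:
        return (f"What does {code_unit} return?", m.group(1))
    # Fallback: any other declarative comment answers "What does <unit> do?".
    return (f"What does {code_unit} do?", comment.strip())

q, a = comment_to_qa("Returns the maximum of two numbers", "max2")
print(q)  # What does max2 return?
print(a)  # the maximum of two numbers
```

A real pipeline would combine many such rules with semantic analysis of the code, but each rule follows this same pattern-match-and-rewrite shape.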