2023
Program Translation via Code Distillation
Yufan Huang | Mengnan Qi | Yongqiang Yao | Maoquan Wang | Bin Gu | Colin Clement | Neel Sundaresan
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Software version migration and program translation are an important and costly part of the lifecycle of large codebases. Traditional machine translation relies on parallel corpora for supervised translation, which is not feasible for program translation due to a dearth of aligned data. Recent unsupervised neural machine translation techniques have overcome data limitations by incorporating techniques such as back translation and low-level compiler intermediate representations (IRs). These methods face significant challenges due to the noise in code snippet alignment and the diversity of IRs, respectively. In this paper we propose a novel model called Code Distillation (CoDist), whereby we capture the semantic and structural equivalence of code in a language-agnostic intermediate representation. Distilled code serves as a translation pivot for any programming language, leading by construction to parallel corpora which scale to all available source code by simply applying the distillation compiler. We demonstrate that our approach achieves state-of-the-art performance on the CodeXGLUE and TransCoder GeeksforGeeks translation benchmarks, with an average absolute increase of 12.7% on the TransCoder GeeksforGeeks translation benchmark compared to TransCoder-ST.
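To make the pivot idea concrete, here is a minimal, hypothetical sketch of how a distillation pass could turn monolingual code into parallel (source, pivot) pairs by construction; the distill() normalization below is an illustrative stand-in, not the paper's actual distillation compiler.

```python
# Hypothetical sketch of the "translation pivot" idea: a distillation pass maps
# each snippet into a simplified, language-agnostic form, so every
# (snippet, distilled snippet) pair is parallel data by construction, and the
# distilled form can act as a pivot between languages. distill() is an
# illustrative stand-in, not the paper's distillation compiler.
import re

def distill(snippet: str) -> str:
    """Crudely normalize surface syntax into a shared pivot form."""
    s = snippet.strip()
    s = re.sub(r"System\.out\.println|printf|print", "EMIT", s)   # unify output calls
    s = re.sub(r"\b(int|long|float|double)\b", "NUM", s)          # erase type spellings
    s = re.sub(r"[;{}]", "", s)                                   # drop statement/block punctuation
    return re.sub(r"\s+", " ", s).strip()

def make_parallel_corpus(monolingual_corpus):
    """Pair every snippet with its distilled pivot -- parallel by construction."""
    return [(snippet, distill(snippet)) for snippet in monolingual_corpus]

java_pairs = make_parallel_corpus(["int y = x + 1;", "System.out.println(y);"])
py_pairs = make_parallel_corpus(["y = x + 1", "print(y)"])
# Both corpora now share the pivot vocabulary, e.g. 'EMIT(y)' appears in each,
# so a model trained source->pivot and pivot->target can bridge Java and Python.
print(java_pairs, py_pairs)
```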
SUT: Active Defects Probing for Transcompiler Models
Mengnan Qi | Yufan Huang | Maoquan Wang | Yongqiang Yao | Zihan Liu | Bin Gu | Colin Clement | Neel Sundaresan
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Automatic program translation has enormous application value and hence has been attracting significant interest from AI researchers. However, we observe that current program translation models still make elementary syntax errors, particularly when the target language lacks syntax elements present in the source language. Metrics like BLEU, CodeBLEU, and computational accuracy may not expose these issues. In this paper we introduce new metrics for programming language translation that address these basic syntax errors. We develop a novel active defects probing suite called Syntactic Unit Tests (SUT), which includes a highly interpretable evaluation harness for accuracy and test scoring. Experiments show that even powerful models like ChatGPT still make mistakes on these basic unit tests. Specifically, compared to previous program translation evaluation datasets, its pass rate on our unit tests decreases by 26.15%. Further, our evaluation harness reveals the syntactic element errors in which these models exhibit deficiencies.
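As an illustration of what a syntactic unit-test harness can look like, the sketch below pairs each syntax element with a tiny self-contained program and an expected output, executes a model's translation, and reports a per-element pass rate; the test cases, the translate() interface, and the scoring are assumptions for illustration, not the released SUT suite.

```python
# Hedged sketch of an active-defects-probing harness in the spirit of SUT:
# each test isolates one syntax element, the model's Python translation is
# executed, and its stdout is compared against the expected value.
import subprocess
import sys
import tempfile

SYNTACTIC_UNIT_TESTS = {
    # element name: (C-style source snippet, expected stdout of a correct translation)
    "ternary_operator": (r'int a = 5, b = 3; printf("%d\n", a > b ? a : b);', "5\n"),
    "integer_division": (r'printf("%d\n", 7 / 2);', "3\n"),
}

def run_python(code: str) -> str:
    """Execute a candidate Python translation and capture its stdout."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
    result = subprocess.run([sys.executable, f.name],
                            capture_output=True, text=True, timeout=10)
    return result.stdout

def score(translate):
    """Per-element pass/fail table plus an overall unit-test pass rate."""
    results = {}
    for element, (src, expected) in SYNTACTIC_UNIT_TESTS.items():
        try:
            results[element] = run_python(translate(src)) == expected
        except Exception:
            results[element] = False
    return results, sum(results.values()) / len(results)

# Example: a hand-written "translation" that ignores integer semantics passes
# the ternary probe but fails integer_division (7 / 2 == 3.5 in Python).
naive = lambda src: "print(5)" if "?" in src else "print(7 / 2)"
print(score(naive))  # ({'ternary_operator': True, 'integer_division': False}, 0.5)
```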
2022
Execution-based Evaluation for Data Science Code Generation Models
Junjie Huang | Chenglong Wang | Jipeng Zhang | Cong Yan | Haotian Cui | Jeevana Priya Inala | Colin Clement | Nan Duan
Proceedings of the Fourth Workshop on Data Science with Human-in-the-Loop (Language Advances)
Code generation models can benefit data scientists’ productivity by automatically generating code from context and text descriptions. An important measure of the modeling progress is whether a model can generate code that can correctly execute to solve the task. However, due to the lack of an evaluation dataset that directly supports execution-based model evaluation, existing work relies on code surface form similarity metrics (e.g., BLEU, CodeBLEU) for model selection, which can be inaccurate. To remedy this, we introduce ExeDS, an evaluation dataset for execution-based evaluation of data science code generation tasks. ExeDS contains a set of 534 problems from Jupyter Notebooks, each consisting of a code context, a task description, a reference program, and the desired execution output. With ExeDS, we evaluate the execution performance of five state-of-the-art code generation models that have achieved high surface-form evaluation scores. Our experiments show that models with high surface-form scores do not necessarily perform well on execution metrics, and execution-based metrics can better capture model code generation errors. All the code and data will be released upon acceptance.
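The sketch below illustrates the execution-based evaluation loop the abstract describes: the notebook context plus a generated completion are executed and the captured output is compared with the desired output; the field names and exact-match criterion are assumptions, not the released dataset schema or the paper's official metrics.

```python
# Minimal sketch of execution-based evaluation in the style of ExeDS: run the
# context together with a model's completion and compare the printed output
# against the problem's desired output.
import contextlib
import io

def execute(code: str) -> str:
    """Run code in a fresh namespace and capture what it prints."""
    buffer = io.StringIO()
    try:
        with contextlib.redirect_stdout(buffer):
            exec(code, {"__name__": "__main__"})   # sandbox this for untrusted code
    except Exception as err:
        return f"<error: {err}>"
    return buffer.getvalue()

def execution_accuracy(problems, generate):
    """Fraction of problems whose generated code reproduces the desired output."""
    hits = 0
    for p in problems:
        completion = generate(p["code_context"], p["task_description"])
        hits += execute(p["code_context"] + "\n" + completion) == p["desired_output"]
    return hits / len(problems)

# Example problem, in the spirit of a Jupyter-notebook cell:
problem = {
    "code_context": "values = [3, 1, 2]\n",
    "task_description": "Print the values in ascending order.",
    "desired_output": "[1, 2, 3]\n",
}
print(execution_accuracy([problem], lambda ctx, task: "print(sorted(values))"))  # 1.0
```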
2021
Long-Range Modeling of Source Code Files with eWASH: Extended Window Access by Syntax Hierarchy
Colin Clement | Shuai Lu | Xiaoyu Liu | Michele Tufano | Dawn Drain | Nan Duan | Neel Sundaresan | Alexey Svyatkovskiy
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Statistical language modeling and translation with transformers have found many successful applications in program understanding and generation tasks, setting high benchmarks for tools in modern software development environments. The finite context window of these neural models means, however, that they will be unable to leverage the entire relevant context of large files and packages for any given task. While there are many efforts to extend the context window, we introduce an architecture-independent approach that leverages the syntactic hierarchies of source code to incorporate entire file-level context into a fixed-length window. Using the concrete syntax tree of each source file, we extract syntactic hierarchies and integrate them into the context window by selectively removing from view the more specific, less relevant scopes for a given task. We evaluate this approach on code generation tasks and joint translation of natural language and source code in the Python programming language, achieving a new state of the art in code completion and summarization for Python in the CodeXGLUE benchmark. We also introduce new CodeXGLUE benchmarks for user-experience-motivated tasks: code completion with normalized literals, and method body completion/code summarization conditioned on file-level context.
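The following hedged sketch, built on Python's standard ast module, illustrates the underlying intuition of eliding more specific, less relevant scopes (here, the bodies of non-focal functions) so that whole-file context fits a fixed window; the real eWASH prioritization order and syntax-hierarchy categories differ from this simplification.

```python
# Hedged sketch of the eWASH intuition: keep the coarse syntactic hierarchy of
# a file (imports, signatures, docstrings) while eliding bodies of scopes that
# are not the focal one. Requires Python 3.9+ for ast.unparse.
import ast

def condensed_file_context(source: str, focal_function: str) -> str:
    """Render a file with every function body except the focal one elided."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)) and node.name != focal_function:
            doc = ast.get_docstring(node)
            kept = [node.body[0]] if doc else []                    # keep the docstring if present
            node.body = kept + [ast.Expr(ast.Constant(Ellipsis))]   # '...' placeholder for the body
    return ast.unparse(tree)

example = '''
import math

def area(r):
    """Return the area of a circle of radius r."""
    return math.pi * r * r

def circumference(r):
    return 2 * math.pi * r
'''
# Only `circumference` keeps its body; `area` is reduced to signature + docstring.
print(condensed_file_context(example, focal_function="circumference"))
```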
2020
PyMT5: multi-mode translation of natural language and Python code with transformers
Colin Clement | Dawn Drain | Jonathan Timcheck | Alexey Svyatkovskiy | Neel Sundaresan
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Simultaneously modeling source code and natural language has many exciting applications in automated software development and understanding. Pursuant to achieving such technology, we introduce PyMT5, the Python method text-to-text transfer transformer, which is trained to translate between all pairs of Python method feature combinations: a single model that can both predict whole methods from natural language documentation strings (docstrings) and summarize code into docstrings of any common style. We present an analysis and modeling effort of a large-scale parallel corpus of 26 million Python methods and 7.7 million method-docstring pairs, demonstrating that for docstring and method generation, PyMT5 outperforms similarly sized auto-regressive language models (GPT2) that were either pre-trained on English or randomly initialized. On the CodeSearchNet test set, our best model predicts 92.1% syntactically correct method bodies, achieves a BLEU score of 8.59 for method generation and 16.3 for docstring generation (summarization), and achieves a ROUGE-L F-score of 24.8 for method generation and 36.7 for docstring generation.
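To illustrate the multi-mode setup, the sketch below splits a Python method into signature, docstring, and body and emits a training pair for every source-to-target feature combination, with the requested targets named in a control prefix; the control-token format and the splitting details here are assumptions for illustration, not the paper's exact scheme.

```python
# Hedged sketch of multi-mode training data for a PyMT5-style model: a single
# seq-to-seq model sees every source->target feature pair, with the requested
# target features named in a (hypothetical) control prefix.
import ast
import itertools
import textwrap

def split_method(source: str) -> dict:
    """Split a Python method into signature, docstring, and body text."""
    fn = ast.parse(textwrap.dedent(source)).body[0]
    doc = ast.get_docstring(fn) or ""
    body = fn.body[1:] if doc else fn.body
    return {
        "signature": f"def {fn.name}({ast.unparse(fn.args)}):",
        "docstring": doc,
        "body": "\n".join(ast.unparse(stmt) for stmt in body),
    }

def multi_mode_examples(source: str):
    """Yield (input, output) pairs for every feature-set combination."""
    feats = split_method(source)
    names = list(feats)
    for k in range(1, len(names)):
        for src in itertools.combinations(names, k):
            tgt = [n for n in names if n not in src]
            prefix = "# target " + " and ".join(tgt)   # hypothetical control tokens
            yield (prefix + "\n" + "\n".join(feats[n] for n in src),
                   "\n".join(feats[n] for n in tgt))

method = '''
def add(a, b):
    """Return the sum of a and b."""
    return a + b
'''
for inp, out in multi_mode_examples(method):
    print("INPUT:\n" + inp + "\nOUTPUT:\n" + out + "\n---")
```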