Ion Stoica
2021
Grounded Graph Decoding Improves Compositional Generalization in Question Answering
Yu Gai | Paras Jain | Wendi Zhang | Joseph Gonzalez | Dawn Song | Ion Stoica
Findings of the Association for Computational Linguistics: EMNLP 2021
Question answering models struggle to generalize to novel compositions of training patterns. Current end-to-end models learn a flat input embedding which can lose input syntax context. Prior approaches improve generalization by learning permutation invariant models, but these methods do not scale to more complex train-test splits. We propose Grounded Graph Decoding, a method to improve compositional generalization of language representations by grounding structured predictions with an attention mechanism. Grounding enables the model to retain syntax information from the input that significantly improves generalization to complex inputs. By predicting a structured graph containing conjunctions of query clauses, we learn a group invariant representation without making assumptions on the target domain. Our model performs competitively on the Compositional Freebase Questions (CFQ) dataset, a challenging benchmark for compositional generalization in question answering. Notably, our model effectively solves the MCD1 split with 98% accuracy. All source is available at https://github.com/gaiyu0/cfq.
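To make the grounding idea concrete, below is a minimal, hypothetical PyTorch sketch of the mechanism the abstract describes: candidate query clauses attend over encoded input tokens, and each clause receives an on/off score for inclusion in the predicted conjunction. The `ClauseGrounder` module, its dimensions, and the clause vocabulary size are illustrative assumptions, not the released implementation (see the repository above for that).

```python
# Sketch: grounding structured clause predictions with attention over encoder states.
# All module names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

class ClauseGrounder(nn.Module):
    """Scores candidate query clauses by attending over input token encodings."""

    def __init__(self, d_model: int, num_clauses: int):
        super().__init__()
        self.clause_emb = nn.Embedding(num_clauses, d_model)  # one vector per candidate clause
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.score = nn.Linear(d_model, 1)                    # clause inclusion logit

    def forward(self, token_states: torch.Tensor) -> torch.Tensor:
        # token_states: (batch, seq_len, d_model) from any sentence encoder
        batch = token_states.size(0)
        queries = self.clause_emb.weight.unsqueeze(0).expand(batch, -1, -1)
        # Each clause query reads the input tokens, retaining syntax context.
        grounded, _ = self.attn(queries, token_states, token_states)
        return self.score(grounded).squeeze(-1)               # (batch, num_clauses) logits

# Toy usage: score which clauses form the conjunction for a batch of questions.
encoder_out = torch.randn(2, 12, 64)        # stand-in for encoded question tokens
grounder = ClauseGrounder(d_model=64, num_clauses=50)
clause_logits = grounder(encoder_out)       # train with BCE against gold clause sets
print(clause_logits.shape)                  # torch.Size([2, 50])
```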
Contrastive Code Representation Learning
Paras Jain | Ajay Jain | Tianjun Zhang | Pieter Abbeel | Joseph Gonzalez | Ion Stoica
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Recent work learns contextual representations of source code by reconstructing tokens from their context. For downstream semantic understanding tasks like code clone detection, these representations should ideally capture program functionality. However, we show that the popular reconstruction-based RoBERTa model is sensitive to source code edits, even when the edits preserve semantics. We propose ContraCode: a contrastive pre-training task that learns code functionality, not form. ContraCode pre-trains a neural network to identify functionally similar variants of a program among many non-equivalent distractors. We scalably generate these variants using an automated source-to-source compiler as a form of data augmentation. Contrastive pre-training outperforms RoBERTa on an adversarial code clone detection benchmark by 39% AUROC. Surprisingly, improved adversarial robustness translates to better accuracy over natural code; ContraCode improves summarization and TypeScript type inference accuracy by 2 to 13 percentage points over competitive baselines. All source is available at https://github.com/parasj/contracode.
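For illustration, the following is a minimal sketch of the contrastive objective described above: embeddings of compiler-generated, semantics-preserving variants of the same program are pulled together, while other programs in the batch act as non-equivalent distractors (an InfoNCE-style loss). The function name and temperature value are assumptions for this sketch, not the released ContraCode code.

```python
# Sketch of a contrastive (InfoNCE-style) loss over code embeddings; names are illustrative.
import torch
import torch.nn.functional as F

def contrastive_loss(anchor: torch.Tensor, positive: torch.Tensor, temperature: float = 0.07):
    """anchor, positive: (batch, dim) embeddings of two variants of the same programs.
    For row i, positive[i] is the matching variant; all other rows are distractors."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.t() / temperature                  # (batch, batch) similarities
    targets = torch.arange(anchor.size(0), device=anchor.device)  # i-th anchor matches i-th positive
    return F.cross_entropy(logits, targets)

# Toy usage with random stand-ins for a code encoder's outputs.
z_a = torch.randn(8, 128)   # embeddings of variant A of 8 programs
z_b = torch.randn(8, 128)   # embeddings of variant B (e.g., variables renamed, dead code removed)
print(contrastive_loss(z_a, z_b).item())
```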