Learning an Executable Neural Semantic Parser

Jianpeng Cheng, Siva Reddy, Vijay Saraswat, Mirella Lapata


Abstract
This article describes a neural semantic parser that maps natural language utterances onto logical forms that can be executed against a task-specific environment, such as a knowledge base or a database, to produce a response. The parser generates tree-structured logical forms with a transition-based approach, combining a generic tree-generation algorithm with a domain-general grammar defined by the logical language. The generation process is modeled by structured recurrent neural networks, which provide a rich encoding of the sentential context and generation history for making predictions. To tackle mismatches between natural language and logical form tokens, various attention mechanisms are explored. Finally, we consider different training settings for the neural semantic parser, including fully supervised training where annotated logical forms are given, weakly supervised training where denotations are provided, and distant supervision where only unlabeled sentences and a knowledge base are available. Experiments across a wide range of data sets demonstrate the effectiveness of our parser.
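To make the transition-based idea concrete, here is a minimal sketch of generating a tree-structured logical form with stack-based transitions. The action names (NT, TER, RED), the open-nonterminal marker, and the validity rules are illustrative assumptions for this sketch, not the paper's exact transition system or grammar.

```python
# Minimal sketch of transition-based tree generation for logical forms.
# Assumed (illustrative) actions:
#   NT(label) - open a nonterminal subtree, e.g. a predicate
#   TER(tok)  - emit a terminal symbol, e.g. an entity name
#   RED       - close the most recent open nonterminal

class TreeGenerator:
    def __init__(self):
        # Stack of partial structures; ("(", label) marks an open nonterminal.
        self.stack = []

    @staticmethod
    def _is_open(item):
        return isinstance(item, tuple) and len(item) == 2 and item[0] == "("

    def nt(self, label):
        # NT: push an open nonterminal onto the stack.
        self.stack.append(("(", label))

    def ter(self, token):
        # TER: push a terminal symbol onto the stack.
        self.stack.append(token)

    def red(self):
        # RED: pop completed children back to the nearest open nonterminal
        # and replace them with one finished subtree (label, children).
        children = []
        while not self._is_open(self.stack[-1]):
            children.append(self.stack.pop())
        _, label = self.stack.pop()
        self.stack.append((label, tuple(reversed(children))))

    def valid_actions(self):
        # Toy grammar constraints: TER and RED are only licensed inside an
        # open nonterminal; RED additionally needs at least one child on top.
        has_open = any(self._is_open(x) for x in self.stack)
        actions = ["NT"]
        if has_open:
            actions.append("TER")
            if not self._is_open(self.stack[-1]):
                actions.append("RED")
        return actions


# Usage: derive a logical form like answer(capital(scotland)).
gen = TreeGenerator()
gen.nt("answer")
gen.nt("capital")
gen.ter("scotland")
gen.red()
gen.red()
print(gen.stack[0])  # ('answer', (('capital', ('scotland',)),))
```

In the paper's setting, a neural network would score the actions at each step, with the grammar masking out invalid ones (as `valid_actions` does here).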
Anthology ID:
J19-1002
Volume:
Computational Linguistics, Volume 45, Issue 1 - March 2019
Month:
March
Year:
2019
Address:
Cambridge, MA
Venue:
CL
Publisher:
MIT Press
Pages:
59–94
URL:
https://aclanthology.org/J19-1002
DOI:
10.1162/coli_a_00342
Bibkey:
Cite (ACL):
Jianpeng Cheng, Siva Reddy, Vijay Saraswat, and Mirella Lapata. 2019. Learning an Executable Neural Semantic Parser. Computational Linguistics, 45(1):59–94.
Cite (Informal):
Learning an Executable Neural Semantic Parser (Cheng et al., CL 2019)
PDF:
https://preview.aclanthology.org/naacl-24-ws-corrections/J19-1002.pdf