Exploiting Rich Syntactic Information for Semantic Parsing with Graph-to-Sequence Model
Kun Xu, Lingfei Wu, Zhiguo Wang, Mo Yu, Liwei Chen, Vadim Sheinin
Abstract
Existing neural semantic parsers mainly utilize a sequence encoder, e.g., a sequential LSTM, to extract word order features while neglecting other valuable syntactic information such as dependency or constituent trees. In this paper, we first propose to use a syntactic graph to represent three types of syntactic information, i.e., word order, dependency, and constituency features; we then employ a graph-to-sequence model to encode the syntactic graph and decode a logical form. Experimental results on benchmark datasets show that our model is comparable to the state of the art on Jobs640, ATIS, and Geo880. Experimental results on adversarial examples demonstrate that the robustness of the model is also improved by encoding more syntactic information.
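The central construction in the abstract is a single graph that overlays three views of the input sentence: word order, dependency relations, and constituent spans. The sketch below illustrates one plausible way to assemble such a merged syntactic graph; the node naming, edge kinds, and the toy parses are assumptions for illustration, not the paper's exact scheme.

```python
# Minimal sketch (assumed conventions, not the authors' exact construction):
# build one graph that merges word-order, dependency, and constituency edges
# for the sentence "list flights to Boston".
import networkx as nx

words = ["list", "flights", "to", "Boston"]

# Hypothetical parses; in practice these would come from a parser.
dependencies = [(0, 1, "dobj"), (1, 2, "prep"), (2, 3, "pobj")]  # (head, dependent, label)
constituents = [("VP", [0, 1, 2, 3]), ("NP", [1]), ("PP", [2, 3])]  # (tag, word indices)

g = nx.DiGraph()
for i, w in enumerate(words):
    g.add_node(f"w{i}", text=w, kind="word")

# Word-order edges: connect adjacent words left to right.
for i in range(len(words) - 1):
    g.add_edge(f"w{i}", f"w{i + 1}", kind="next")

# Dependency edges: head -> dependent, labeled with the relation.
for head, dep, label in dependencies:
    g.add_edge(f"w{head}", f"w{dep}", kind="dep", label=label)

# Constituency edges: one node per phrase, linked to the words it spans.
for j, (tag, span) in enumerate(constituents):
    g.add_node(f"c{j}", text=tag, kind="constituent")
    for i in span:
        g.add_edge(f"c{j}", f"w{i}", kind="const")

# A graph encoder would then compute node embeddings over this merged graph
# before a sequence decoder emits the logical form.
print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")
```

Merging all three structures into one graph lets a single graph encoder attend to adjacency, dependency, and phrase-level signals jointly, rather than encoding each tree separately.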
- Anthology ID: D18-1110
- Volume: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
- Month: October-November
- Year: 2018
- Address: Brussels, Belgium
- Editors: Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
- Venue: EMNLP
- SIG: SIGDAT
- Publisher: Association for Computational Linguistics
- Pages: 918–924
- URL: https://aclanthology.org/D18-1110
- DOI: 10.18653/v1/D18-1110
- Cite (ACL): Kun Xu, Lingfei Wu, Zhiguo Wang, Mo Yu, Liwei Chen, and Vadim Sheinin. 2018. Exploiting Rich Syntactic Information for Semantic Parsing with Graph-to-Sequence Model. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 918–924, Brussels, Belgium. Association for Computational Linguistics.
- Cite (Informal): Exploiting Rich Syntactic Information for Semantic Parsing with Graph-to-Sequence Model (Xu et al., EMNLP 2018)
- PDF: https://preview.aclanthology.org/ingest-2024-clasp/D18-1110.pdf
- Code: IBM/Text-to-LogicForm