Addressing the Data Sparsity Issue in Neural AMR Parsing

Xiaochang Peng, Chuan Wang, Daniel Gildea, Nianwen Xue

Abstract
Neural attention models have achieved great success on a range of NLP tasks. However, they have not yet fulfilled their promise on AMR parsing because of the data sparsity issue. In this paper, we describe a sequence-to-sequence model for AMR parsing and present several ways to tackle the data sparsity problem. We show that our methods achieve a significant improvement over a baseline neural attention model, and that our results are competitive with state-of-the-art systems that do not use extra linguistic resources.
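The abstract names the architecture only at a high level. For orientation, below is a minimal PyTorch sketch of a sequence-to-sequence model with attention of the general kind described. The class name, layer sizes, and the dot-product attention variant are illustrative assumptions, not details taken from the paper.

# Minimal sketch of a seq2seq model with attention for graph
# linearization tasks such as AMR parsing. All names and
# hyperparameters here are illustrative, not the authors' settings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Seq2SeqAttention(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb_dim=256, hid_dim=512):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hid_dim,
                               batch_first=True, bidirectional=True)
        # Decoder consumes the previous target token plus the previous
        # attention context vector.
        self.decoder = nn.LSTMCell(emb_dim + 2 * hid_dim, 2 * hid_dim)
        self.out = nn.Linear(2 * hid_dim, tgt_vocab)

    def forward(self, src, tgt):
        # Encode the source sentence into contextual states: (B, S, 2H).
        enc_states, _ = self.encoder(self.src_emb(src))
        B, two_h = src.size(0), enc_states.size(-1)
        h = enc_states.new_zeros(B, two_h)
        c = enc_states.new_zeros(B, two_h)
        ctx = enc_states.new_zeros(B, two_h)
        logits = []
        for t in range(tgt.size(1)):
            # Teacher forcing: feed the gold previous token and context.
            inp = torch.cat([self.tgt_emb(tgt[:, t]), ctx], dim=-1)
            h, c = self.decoder(inp, (h, c))
            # Dot-product attention over the encoder states.
            scores = torch.bmm(enc_states, h.unsqueeze(-1)).squeeze(-1)
            weights = F.softmax(scores, dim=-1)
            ctx = torch.bmm(weights.unsqueeze(1), enc_states).squeeze(1)
            logits.append(self.out(h + ctx))
        return torch.stack(logits, dim=1)  # (B, T, tgt_vocab)

A model of this kind would be trained with token-level cross-entropy against linearized AMR graphs; the sparsity remedies the paper studies would apply as preprocessing of those token sequences (for example, collapsing rare tokens and named entities into category symbols), though the abstract itself does not enumerate them.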
Anthology ID: E17-1035
Volume: Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
Month: April
Year: 2017
Address: Valencia, Spain
Editors: Mirella Lapata, Phil Blunsom, Alexander Koller
Venue: EACL
Publisher: Association for Computational Linguistics
Pages: 366–375
URL: https://aclanthology.org/E17-1035
Cite (ACL):
Xiaochang Peng, Chuan Wang, Daniel Gildea, and Nianwen Xue. 2017. Addressing the Data Sparsity Issue in Neural AMR Parsing. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 366–375, Valencia, Spain. Association for Computational Linguistics.
Cite (Informal):
Addressing the Data Sparsity Issue in Neural AMR Parsing (Peng et al., EACL 2017)
PDF: https://preview.aclanthology.org/ingest-2024-clasp/E17-1035.pdf