Effective Inference for Generative Neural Parsing

Mitchell Stern, Daniel Fried, Dan Klein


Abstract
Generative neural models have recently achieved state-of-the-art results for constituency parsing. However, without a feasible search procedure, their use has so far been limited to reranking the output of external parsers in which decoding is more tractable. We describe an alternative to the conventional action-level beam search used for discriminative neural models that enables us to decode directly in these generative models. We then show that by improving our basic candidate selection strategy and using a coarse pruning function, we can improve accuracy while exploring significantly less of the search space. Applied to the model of Choe and Charniak (2016), our inference procedure obtains 92.56 F1 on section 23 of the Penn Treebank, surpassing prior state-of-the-art results for single-model systems.
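The abstract's central idea is to organize search around word boundaries rather than individual parser actions when decoding directly in a generative model. The sketch below illustrates that general idea only; the model interface (a hypothetical `score_actions` method returning (action, log-probability) pairs, with `("GEN", word)` actions that emit the next word) and the cap on structural actions between words are assumptions for illustration, not the authors' implementation, and the coarse pruning function mentioned in the abstract is omitted.

```python
# Minimal sketch of word-level beam search for a generative transition-based
# parser. The `model` object and its score_actions(actions, sentence) method
# are hypothetical; they stand in for any generative parsing model that
# assigns log-probabilities to the next action given the action history.

import heapq
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Candidate:
    log_prob: float                       # total log-probability so far
    actions: List[Tuple] = field(default_factory=list)


def word_level_beam_search(model, sentence, word_beam=10, action_beam=100,
                           max_structural=8):
    """Advance all candidates in lockstep from one word boundary to the next.

    Candidates only compete against others that have generated the same
    number of words, so cheap structural actions (opening or closing
    brackets) cannot crowd word-generating actions out of the beam.
    """
    beam = [Candidate(0.0)]
    for i, word in enumerate(sentence):
        frontier, advanced = list(beam), []
        # Expand structural actions until every candidate has emitted word i
        # (bounded by max_structural steps for safety in this sketch).
        for _ in range(max_structural):
            if not frontier:
                break
            expansions = []
            for cand in frontier:
                for action, logp in model.score_actions(cand.actions, sentence):
                    new = Candidate(cand.log_prob + logp, cand.actions + [action])
                    if action == ("GEN", word):
                        advanced.append(new)    # reached the next word boundary
                    else:
                        expansions.append(new)  # still between word boundaries
            frontier = heapq.nlargest(action_beam, expansions,
                                      key=lambda c: c.log_prob)
        # Word-level pruning: keep only the best candidates that consumed word i.
        beam = heapq.nlargest(word_beam, advanced, key=lambda c: c.log_prob)
    # A full parser would still apply any remaining closing actions here.
    return max(beam, key=lambda c: c.log_prob)
```

In this sketch the action-level beam (`action_beam`) only governs expansion between consecutive words, while the smaller word-level beam (`word_beam`) does the real pruning, which is what allows the search to explore far fewer states than a conventional action-level beam of comparable accuracy.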
Anthology ID:
D17-1178
Volume:
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Month:
September
Year:
2017
Address:
Copenhagen, Denmark
Editors:
Martha Palmer, Rebecca Hwa, Sebastian Riedel
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
1695–1700
URL:
https://aclanthology.org/D17-1178
DOI:
10.18653/v1/D17-1178
Cite (ACL):
Mitchell Stern, Daniel Fried, and Dan Klein. 2017. Effective Inference for Generative Neural Parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1695–1700, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal):
Effective Inference for Generative Neural Parsing (Stern et al., EMNLP 2017)
PDF:
https://aclanthology.org/D17-1178.pdf
Data
Penn Treebank