Constituency Parsing with a Self-Attentive Encoder

Nikita Kitaev, Dan Klein


Abstract
We demonstrate that replacing an LSTM encoder with a self-attentive architecture can lead to improvements to a state-of-the-art discriminative constituency parser. The use of attention makes explicit the manner in which information is propagated between different locations in the sentence, which we use to both analyze our model and propose potential improvements. For example, we find that separating positional and content information in the encoder can lead to improved parsing accuracy. Additionally, we evaluate different approaches for lexical representation. Our parser achieves new state-of-the-art results for single models trained on the Penn Treebank: 93.55 F1 without the use of any external data, and 95.13 F1 when using pre-trained word representations. Our parser also outperforms the previous best-published accuracy figures on 8 of the 9 languages in the SPMRL dataset.
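The abstract's central architectural idea, separating content and position information in the self-attentive encoder, can be illustrated with a minimal sketch. The PyTorch code below is not the authors' released implementation; the class and parameter names (FactoredSelfAttention, d_content, d_position) are illustrative assumptions. It shows a single attention layer in which content and position embeddings receive separate query/key/value projections and their attention logits are summed, so the two information streams never mix inside a single dot product.

# Minimal sketch (assumed names, not the authors' code) of factored
# self-attention that keeps content and position information separate.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class FactoredSelfAttention(nn.Module):
    def __init__(self, d_content: int, d_position: int):
        super().__init__()
        # Separate projections for the content and position halves.
        self.qc = nn.Linear(d_content, d_content)
        self.kc = nn.Linear(d_content, d_content)
        self.vc = nn.Linear(d_content, d_content)
        self.qp = nn.Linear(d_position, d_position)
        self.kp = nn.Linear(d_position, d_position)
        self.vp = nn.Linear(d_position, d_position)
        self.scale = math.sqrt(d_content + d_position)

    def forward(self, content: torch.Tensor, position: torch.Tensor):
        # content:  (seq_len, d_content)  word/tag representations
        # position: (seq_len, d_position) positional embeddings
        # Attention logits are the sum of a content-only and a
        # position-only term, rather than a single mixed dot product.
        logits = (self.qc(content) @ self.kc(content).T
                  + self.qp(position) @ self.kp(position).T) / self.scale
        weights = F.softmax(logits, dim=-1)
        # Values stay factored as well, then the halves are concatenated.
        out_c = weights @ self.vc(content)
        out_p = weights @ self.vp(position)
        return torch.cat([out_c, out_p], dim=-1)

if __name__ == "__main__":
    attn = FactoredSelfAttention(d_content=512, d_position=512)
    words = torch.randn(10, 512)      # stand-in content embeddings
    positions = torch.randn(10, 512)  # stand-in position embeddings
    print(attn(words, positions).shape)  # torch.Size([10, 1024])

In the paper, this factored formulation replaces the standard one in which a single projection acts on the sum of word and position embeddings, and it is this separation that the abstract credits with improved parsing accuracy.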
Anthology ID:
P18-1249
Volume:
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2018
Address:
Melbourne, Australia
Editors:
Iryna Gurevych, Yusuke Miyao
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
2676–2686
URL:
https://aclanthology.org/P18-1249
DOI:
10.18653/v1/P18-1249
Cite (ACL):
Nikita Kitaev and Dan Klein. 2018. Constituency Parsing with a Self-Attentive Encoder. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2676–2686, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal):
Constituency Parsing with a Self-Attentive Encoder (Kitaev & Klein, ACL 2018)
PDF:
https://aclanthology.org/P18-1249.pdf
Note:
P18-1249.Notes.pdf
Poster:
P18-1249.Poster.pdf
Code
nikitakit/self-attentive-parser + additional community code
Data
Penn Treebank