Abstract
Transition-based models can be fast and accurate for constituent parsing. Compared with chart-based models, they leverage richer features by extracting history information from a parser stack, which consists of a sequence of non-local constituents. On the other hand, during incremental parsing, constituent information on the right-hand side of the current word is not utilized, which is a relative weakness of shift-reduce parsing. To address this limitation, we leverage a fast neural model to extract lookahead features. In particular, we build a bidirectional LSTM model, which leverages full sentence information to predict the hierarchy of constituents that each word starts and ends. The results are then passed to a strong transition-based constituent parser as lookahead features. The resulting parser gives a 1.3% absolute improvement on WSJ and 2.3% on CTB compared to the baseline, giving the highest reported accuracies for fully-supervised parsing.
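As a rough illustration of the lookahead model the abstract describes, the sketch below shows a bidirectional LSTM that reads the full sentence and scores, for each word, the constituents it starts and the constituents it ends. This is a minimal assumption-laden sketch, not the authors' implementation: the class name, layer sizes, and the simplification to a single constituent label per word and direction are all illustrative (the paper predicts a full hierarchy of constituents per word).

```python
import torch
import torch.nn as nn

class LookaheadBiLSTM(nn.Module):
    """Minimal sketch of the lookahead feature model from the abstract.

    A BiLSTM encodes the whole sentence; two classifiers predict, per
    word, the constituent it starts and the constituent it ends.  The
    paper predicts a hierarchy (sequence) of constituents per word;
    this sketch simplifies to one label each.  All sizes and names are
    illustrative assumptions, not the authors' configuration.
    """

    def __init__(self, vocab_size, num_labels, embed_dim=100, hidden_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        # Separate classifiers for constituents each word starts / ends.
        self.start_out = nn.Linear(2 * hidden_dim, num_labels)
        self.end_out = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, word_ids):
        # word_ids: (batch, sentence_length)
        h, _ = self.bilstm(self.embed(word_ids))   # (batch, len, 2*hidden)
        return self.start_out(h), self.end_out(h)

# The predicted labels would then be handed to the shift-reduce parser
# as per-word lookahead features.
model = LookaheadBiLSTM(vocab_size=10000, num_labels=30)
words = torch.randint(0, 10000, (1, 8))            # one 8-word sentence
start_scores, end_scores = model(words)
start_labels = start_scores.argmax(-1)             # constituents each word starts
end_labels = end_scores.argmax(-1)                 # constituents each word ends
```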
- Anthology ID: Q17-1004
- Volume: Transactions of the Association for Computational Linguistics, Volume 5
- Year: 2017
- Address: Cambridge, MA
- Venue: TACL
- Publisher: MIT Press
- Pages: 45–58
- URL: https://aclanthology.org/Q17-1004
- DOI: 10.1162/tacl_a_00045
- Cite (ACL): Jiangming Liu and Yue Zhang. 2017. Shift-Reduce Constituent Parsing with Neural Lookahead Features. Transactions of the Association for Computational Linguistics, 5:45–58.
- Cite (Informal): Shift-Reduce Constituent Parsing with Neural Lookahead Features (Liu & Zhang, TACL 2017)
- PDF: https://aclanthology.org/Q17-1004.pdf
- Code: SUTDNLP/LookAheadConparser