Transition-based Parsing with Stack-Transformers

Ramón Fernandez Astudillo, Miguel Ballesteros, Tahira Naseem, Austin Blodgett, Radu Florian


Abstract
Modeling the parser state is key to good performance in transition-based parsing. Recurrent Neural Networks considerably improved the performance of transition-based systems by modeling the global state, e.g. stack-LSTM parsers, or local state modeling of contextualized features, e.g. Bi-LSTM parsers. Given the success of Transformer architectures in recent parsing systems, this work explores modifications of the sequence-to-sequence Transformer architecture to model either global or local parser states in transition-based parsing. We show that modifications of the cross attention mechanism of the Transformer considerably strengthen performance both on dependency and Abstract Meaning Representation (AMR) parsing tasks, particularly for smaller models or limited training data.
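
To make the architectural idea in the abstract concrete, the sketch below shows one way cross-attention can be specialized to encode parser state: an attention head is masked so it only attends to source tokens currently on the parser's stack (or in its buffer), while other heads keep ordinary, unrestricted cross-attention. This is a minimal illustration in PyTorch under stated assumptions; the function name, mask construction, and tensor shapes are expository choices, not the authors' released implementation.

    # Minimal sketch (assumptions noted above), not the paper's released code.
    import torch
    import torch.nn.functional as F

    def masked_cross_attention(query, keys, values, position_mask):
        """Single-head scaled dot-product cross-attention restricted by a mask.

        query:         (batch, 1, d)   decoder state at the current decoding step
        keys, values:  (batch, src, d) encoded source tokens
        position_mask: (batch, src)    bool; True where this head may attend,
                                       e.g. tokens currently on the stack or in
                                       the buffer. At least one position per row
                                       should be True (e.g. keep a sentinel token)
                                       to avoid an all-masked softmax.
        """
        d = query.size(-1)
        scores = torch.matmul(query, keys.transpose(-2, -1)) / d ** 0.5   # (batch, 1, src)
        scores = scores.masked_fill(~position_mask.unsqueeze(1), float("-inf"))
        attn = F.softmax(scores, dim=-1)
        return torch.matmul(attn, values)                                 # (batch, 1, d)

    # Toy usage: one head limited to the stack, another to the buffer.
    batch, src, d = 1, 5, 8
    enc = torch.randn(batch, src, d)            # encoder outputs
    dec_state = torch.randn(batch, 1, d)        # current decoder state
    stack_mask = torch.tensor([[True, True, False, False, False]])    # tokens on the stack
    buffer_mask = torch.tensor([[False, False, True, True, True]])    # tokens in the buffer
    stack_head = masked_cross_attention(dec_state, enc, enc, stack_mask)
    buffer_head = masked_cross_attention(dec_state, enc, enc, buffer_mask)
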
Anthology ID: 2020.findings-emnlp.89
Volume: Findings of the Association for Computational Linguistics: EMNLP 2020
Month: November
Year: 2020
Address: Online
Venues: EMNLP | Findings
Publisher: Association for Computational Linguistics
Pages: 1001–1007
URL: https://aclanthology.org/2020.findings-emnlp.89
DOI: 10.18653/v1/2020.findings-emnlp.89
Cite (ACL):
Ramón Fernandez Astudillo, Miguel Ballesteros, Tahira Naseem, Austin Blodgett, and Radu Florian. 2020. Transition-based Parsing with Stack-Transformers. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1001–1007, Online. Association for Computational Linguistics.
Cite (Informal):
Transition-based Parsing with Stack-Transformers (Fernandez Astudillo et al., Findings 2020)
PDF: https://aclanthology.org/2020.findings-emnlp.89.pdf
Data: Penn Treebank