Abstract
Neural encoders have allowed dependency parsers to shift from higher-order structured models to simpler first-order ones, making decoding faster and still achieving better accuracy than non-neural parsers. This has led to a belief that neural encoders can implicitly encode structural constraints, such as siblings and grandparents in a tree. We tested this hypothesis and found that neural parsers may benefit from higher-order features, even when employing a powerful pre-trained encoder, such as BERT. While the gains of higher-order features are small in the presence of a powerful encoder, they are consistent for long-range dependencies and long sentences. In particular, higher-order models are more accurate on full sentence parses and on the exact match of modifier lists, indicating that they deal better with larger, more complex structures.
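The abstract contrasts first-order parsers, which score each head–modifier arc independently, with higher-order ones that also score sibling and grandparent configurations. As a rough illustration of that distinction only, the sketch below computes a tree score under both factorizations; the toy tree, the scoring functions, and the factor definitions are assumptions chosen for clarity, not code or numbers from the paper.

```python
# Hypothetical sketch of first- vs. higher-order tree scoring (not from the paper).
# A dependency tree is a map modifier -> head; 0 is the artificial root.
heads = {1: 2, 2: 0, 3: 2, 4: 3}  # toy tree for a 4-word sentence

def first_order_score(heads, arc_score):
    """First-order model: sum of independent (head, modifier) arc scores."""
    return sum(arc_score(h, m) for m, h in heads.items())

def second_order_score(heads, arc_score, sib_score, gp_score):
    """Higher-order model: also scores grandparent and adjacent-sibling factors."""
    total = first_order_score(heads, arc_score)
    # Grandparent factors: (grandparent, head, modifier) triples.
    for m, h in heads.items():
        if h in heads:
            total += gp_score(heads[h], h, m)
    # Sibling factors: adjacent modifiers sharing the same head.
    for h in set(heads.values()):
        mods = sorted(m for m, hh in heads.items() if hh == h)
        for m1, m2 in zip(mods, mods[1:]):
            total += sib_score(h, m1, m2)
    return total

# Constant stand-ins for the neural scorers used in practice.
arc = lambda h, m: 1.0
sib = lambda h, m1, m2: 0.5
gp = lambda g, h, m: 0.5

print(first_order_score(heads, arc))            # 4.0: arcs only
print(second_order_score(heads, arc, sib, gp))  # 6.0: arcs + higher-order factors
```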
- Anthology ID: 2020.acl-main.776
- Volume: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
- Month: July
- Year: 2020
- Address: Online
- Editors: Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
- Venue: ACL
- Publisher: Association for Computational Linguistics
- Pages: 8795–8800
- URL: https://aclanthology.org/2020.acl-main.776
- DOI: 10.18653/v1/2020.acl-main.776
- Cite (ACL): Erick Fonseca and André F. T. Martins. 2020. Revisiting Higher-Order Dependency Parsers. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8795–8800, Online. Association for Computational Linguistics.
- Cite (Informal): Revisiting Higher-Order Dependency Parsers (Fonseca & Martins, ACL 2020)
- PDF: https://preview.aclanthology.org/nschneid-patch-1/2020.acl-main.776.pdf