Abstract
We introduce context embeddings, dense vectors derived from a language model that represent the left and right context of a word instance, and demonstrate that context embeddings significantly improve the accuracy of our transition-based parser. Our model consists of a bidirectional LSTM (BiLSTM) language model that is pre-trained to predict words in plain text, and a multi-layer perceptron (MLP) decision model that uses features from the language model to predict the correct actions for an ArcHybrid transition-based parser. We participated in the CoNLL 2017 UD Shared Task as the “Koç University” team, and our system was ranked 7th out of 33 systems that parsed 81 treebanks in 49 languages.

- Anthology ID: K17-3008
- Volume: Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies
- Month: August
- Year: 2017
- Address: Vancouver, Canada
- Editors: Jan Hajič, Dan Zeman
- Venue: CoNLL
- SIG: SIGNLL
- Publisher: Association for Computational Linguistics
- Pages: 80–87
- URL: https://aclanthology.org/K17-3008
- DOI: 10.18653/v1/K17-3008
- Cite (ACL): Ömer Kırnap, Berkay Furkan Önder, and Deniz Yuret. 2017. Parsing with Context Embeddings. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 80–87, Vancouver, Canada. Association for Computational Linguistics.
- Cite (Informal): Parsing with Context Embeddings (Kırnap et al., CoNLL 2017)
- PDF: https://preview.aclanthology.org/nschneid-patch-3/K17-3008.pdf
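The abstract's MLP decision model chooses among the actions of an ArcHybrid transition system. A minimal sketch of that transition system (shift, left-arc, right-arc) is below; the class name, method names, and example sentence are illustrative, not taken from the paper, and the paper's actual system additionally predicts dependency labels.

```python
class ArcHybridState:
    """Illustrative ArcHybrid parser state: a stack, a buffer, and the
    unlabeled arcs built so far (head index, dependent index)."""

    def __init__(self, words):
        self.stack = []                         # partially processed word indices
        self.buffer = list(range(len(words)))   # unread word indices, left to right
        self.arcs = []                          # (head, dependent) pairs

    def shift(self):
        # Move the front of the buffer onto the stack.
        self.stack.append(self.buffer.pop(0))

    def left_arc(self):
        # Attach the stack top as a dependent of the buffer front, then pop it.
        self.arcs.append((self.buffer[0], self.stack.pop()))

    def right_arc(self):
        # Attach the stack top as a dependent of the word below it, then pop it.
        dep = self.stack.pop()
        self.arcs.append((self.stack[-1], dep))


# Parse "she reads books" as reads -> she and reads -> books.
state = ArcHybridState(["she", "reads", "books"])
state.shift()      # stack: [she]
state.left_arc()   # arc: reads -> she
state.shift()      # stack: [reads]
state.shift()      # stack: [reads, books]
state.right_arc()  # arc: reads -> books
print(state.arcs)  # [(1, 0), (1, 2)]
```

At each step the decision model scores the legal actions from features of the current state; in this paper those features include the context embeddings of the words on the stack and buffer.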