Neural Probabilistic Model for Non-projective MST Parsing

Xuezhe Ma, Eduard Hovy


Abstract
In this paper, we propose a probabilistic parsing model that defines a proper conditional probability distribution over non-projective dependency trees for a given sentence, using neural representations as inputs. The neural network architecture is based on bi-directional LSTM-CNNs, which benefit automatically from both word- and character-level representations by combining bidirectional LSTMs with CNNs. On top of the neural network, we introduce a probabilistic structured layer that defines a conditional log-linear model over non-projective trees. By exploiting Kirchhoff’s Matrix-Tree Theorem (Tutte, 1984), the partition functions and marginals can be computed efficiently, yielding a straightforward end-to-end training procedure via back-propagation. We evaluate our model on 17 datasets across 14 languages. Our parser achieves state-of-the-art parsing performance on nine datasets.
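The Matrix-Tree computation underlying the abstract can be sketched as follows. This is a minimal illustration of Kirchhoff's Matrix-Tree Theorem for directed spanning trees (arborescences), not the paper's implementation: given exponentiated edge scores, the determinant of a Laplacian minor sums the weight products over all non-projective dependency trees rooted at an artificial ROOT node. The function name and the multiple-root-children convention are assumptions for this sketch.

```python
import numpy as np

def tree_partition(weights):
    """Partition function over directed spanning trees via the
    Matrix-Tree Theorem.

    weights[h, m] is the multiplicative weight (exponentiated score)
    of the edge head h -> modifier m, over nodes 0..n, where node 0
    is the artificial ROOT. Self-loops and edges into ROOT are ignored.
    """
    W = np.array(weights, dtype=float)
    np.fill_diagonal(W, 0.0)   # no self-loops
    W[:, 0] = 0.0              # no edges into ROOT
    # Graph Laplacian L = D - A, with D the diagonal of column
    # (incoming-weight) sums.
    L = np.diag(W.sum(axis=0)) - W
    # Matrix-Tree Theorem: the determinant of the minor obtained by
    # deleting ROOT's row and column equals the sum, over all
    # arborescences rooted at ROOT, of the product of edge weights.
    sign, logdet = np.linalg.slogdet(L[1:, 1:])
    return sign * np.exp(logdet)
```

With all weights set to 1, the function simply counts arborescences: for two words there are three dependency trees (each word headed by ROOT or by the other word, minus the 2-cycle), so `tree_partition(np.ones((3, 3)))` returns 3. Marginal edge probabilities, needed for back-propagation, follow from derivatives of this log-determinant, which is why training the structured layer end-to-end is straightforward.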
Anthology ID:
I17-1007
Volume:
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
Month:
November
Year:
2017
Address:
Taipei, Taiwan
Editors:
Greg Kondrak, Taro Watanabe
Venue:
IJCNLP
Publisher:
Asian Federation of Natural Language Processing
Pages:
59–69
URL:
https://aclanthology.org/I17-1007
Cite (ACL):
Xuezhe Ma and Eduard Hovy. 2017. Neural Probabilistic Model for Non-projective MST Parsing. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 59–69, Taipei, Taiwan. Asian Federation of Natural Language Processing.
Cite (Informal):
Neural Probabilistic Model for Non-projective MST Parsing (Ma & Hovy, IJCNLP 2017)
PDF:
https://preview.aclanthology.org/nschneid-patch-3/I17-1007.pdf
Data
Penn Treebank