Dependency Grammar Induction with Neural Lexicalization and Big Training Data

Wenjuan Han, Yong Jiang, Kewei Tu


Abstract
We study the impact of big models (in terms of the degree of lexicalization) and big data (in terms of the training corpus size) on dependency grammar induction. We experiment with L-DMV, a lexicalized version of the Dependency Model with Valence (Klein and Manning, 2004), and L-NDMV, our lexicalized extension of the Neural Dependency Model with Valence (Jiang et al., 2016). We find that L-DMV only benefits from very small degrees of lexicalization and moderate sizes of training corpora, whereas L-NDMV can benefit from big training data and greater degrees of lexicalization, especially when enhanced with good model initialization, and it achieves a result that is competitive with the current state of the art.
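The "degree of lexicalization" refers to how much word-form (as opposed to POS-only) information the model conditions on. As a hedged illustration only, not code from the paper, the sketch below assumes the degree is set by a word-frequency cutoff, with rare words backing off to their POS tags; the function name lexicalize, the toy corpus, and the cutoff semantics are hypothetical.

```python
from collections import Counter

def lexicalize(corpus, cutoff):
    """Back off rare words to their POS tags (hypothetical helper).

    corpus: list of sentences, each a list of (word, pos) pairs.
    cutoff: words occurring fewer than `cutoff` times are replaced by
            their POS tag, so a higher cutoff means a lower degree of
            lexicalization (a very large cutoff recovers a POS-only,
            unlexicalized model).
    """
    counts = Counter(word.lower() for sent in corpus for word, _ in sent)
    lexicalized = []
    for sent in corpus:
        tokens = []
        for word, pos in sent:
            # Keep the word form only if it is frequent enough.
            head = word.lower() if counts[word.lower()] >= cutoff else pos
            tokens.append((head, pos))
        lexicalized.append(tokens)
    return lexicalized

if __name__ == "__main__":
    toy = [[("the", "DT"), ("dog", "NN"), ("barks", "VBZ")],
           [("the", "DT"), ("cat", "NN"), ("sleeps", "VBZ")]]
    # With cutoff=2 only "the" stays lexicalized; all other tokens
    # back off to their POS tags.
    print(lexicalize(toy, cutoff=2))
```

In this reading, sweeping the cutoff is what varies the model size in the paper's experiments: a smaller cutoff lexicalizes more word types and yields a bigger grammar.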
Anthology ID:
D17-1176
Volume:
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Month:
September
Year:
2017
Address:
Copenhagen, Denmark
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
1683–1688
URL:
https://aclanthology.org/D17-1176
DOI:
10.18653/v1/D17-1176
Cite (ACL):
Wenjuan Han, Yong Jiang, and Kewei Tu. 2017. Dependency Grammar Induction with Neural Lexicalization and Big Training Data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1683–1688, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal):
Dependency Grammar Induction with Neural Lexicalization and Big Training Data (Han et al., EMNLP 2017)
PDF:
https://preview.aclanthology.org/update-css-js/D17-1176.pdf
Attachment:
D17-1176.Attachment.pdf