@inproceedings{han-etal-2017-dependency,
    title = "Dependency Grammar Induction with Neural Lexicalization and Big Training Data",
    author = "Han, Wenjuan  and
      Jiang, Yong  and
      Tu, Kewei",
    editor = "Palmer, Martha  and
      Hwa, Rebecca  and
      Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1176/",
    doi = "10.18653/v1/D17-1176",
    pages = "1683--1688",
    abstract = "We study the impact of big models (in terms of the degree of lexicalization) and big data (in terms of the training corpus size) on dependency grammar induction. We experimented with L-DMV, a lexicalized version of Dependency Model with Valence (Klein and Manning, 2004), and L-NDMV, our lexicalized extension of the Neural Dependency Model with Valence (Jiang et al., 2016). We find that L-DMV only benefits from very small degrees of lexicalization and moderate sizes of training corpora. L-NDMV can benefit from big training data and lexicalization of greater degrees, especially when enhanced with good model initialization, and it achieves a result that is competitive with the current state-of-the-art."
}