Wenyu Du
2021
Linguistic Dependencies and Statistical Dependence
Jacob Louis Hoover | Wenyu Du | Alessandro Sordoni | Timothy J. O’Donnell
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Are pairs of words that tend to occur together also likely to stand in a linguistic dependency? This empirical question is motivated by a long history of literature in cognitive science, psycholinguistics, and NLP. In this work we contribute an extensive analysis of the relationship between linguistic dependencies and statistical dependence between words. Improving on previous work, we introduce the use of large pretrained language models to compute contextualized estimates of the pointwise mutual information between words (CPMI). For multiple models and languages, we extract dependency trees which maximize CPMI, and compare them to gold-standard linguistic dependencies. Overall, we find that CPMI dependencies achieve an unlabelled undirected attachment score of at most ≈ 0.5. While far above chance, and consistently above a non-contextualized PMI baseline, this score is generally comparable to a simple baseline formed by connecting adjacent words. We analyze which kinds of linguistic dependencies are best captured in CPMI dependencies, and also find marked differences between the estimates of the large pretrained language models, illustrating how their different training schemes affect the type of dependencies they capture.
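To make the CPMI idea concrete, here is a minimal sketch of one plausible estimator using a masked language model: the contextualized PMI between two words is taken as the change in the model's log-probability of one word when the other is additionally masked. The choice of bert-base-cased, the single-wordpiece treatment of words, and the helper names are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of a CPMI-style estimator with a masked LM.
# Assumptions (not from the paper): bert-base-cased, words that map to a
# single wordpiece, and positions i, j indexing the tokenized sequence
# (which includes [CLS]/[SEP]).
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")
model.eval()

def masked_log_prob(input_ids, pos, target_id):
    """Log-probability the MLM assigns to target_id at position pos,
    with that position replaced by [MASK]."""
    masked = input_ids.clone()
    masked[0, pos] = tokenizer.mask_token_id
    with torch.no_grad():
        logits = model(masked).logits
    return torch.log_softmax(logits[0, pos], dim=-1)[target_id].item()

def cpmi(sentence, i, j):
    """CPMI(w_i; w_j): how much seeing w_j raises the model's
    log-probability of w_i, with the rest of the sentence held fixed."""
    input_ids = tokenizer(sentence, return_tensors="pt").input_ids
    target = input_ids[0, i].item()
    # log p(w_i | full context)  vs.  log p(w_i | context with w_j masked)
    lp_with_j = masked_log_prob(input_ids, i, target)
    without_j = input_ids.clone()
    without_j[0, j] = tokenizer.mask_token_id
    lp_without_j = masked_log_prob(without_j, i, target)
    return lp_with_j - lp_without_j
```

Given pairwise scores for a sentence, a CPMI dependency tree can then be extracted as a maximum spanning tree over the score matrix (e.g., with networkx's maximum_spanning_tree), mirroring the tree-extraction step the abstract describes.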
End-to-End AMR Coreference Resolution
Qiankun Fu | Linfeng Song | Wenyu Du | Yue Zhang
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
Although parsing to Abstract Meaning Representation (AMR) has become very popular and AMR has been shown effective on many sentence-level downstream tasks, little work has studied how to generate AMRs that represent multi-sentence information. We introduce the first end-to-end AMR coreference resolution model for building multi-sentence AMRs. Compared with previous pipeline and rule-based approaches, our model alleviates error propagation and is more robust in both in-domain and out-of-domain settings. In addition, the document-level AMRs produced by our model significantly improve over AMRs generated by a rule-based method (Liu et al., 2015) on text summarization.
2020
Exploiting Syntactic Structure for Better Language Modeling: A Syntactic Distance Approach
Wenyu Du | Zhouhan Lin | Yikang Shen | Timothy J. O’Donnell | Yoshua Bengio | Yue Zhang
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
It is commonly believed that knowledge of syntactic structure should improve language modeling. However, incorporating syntactic structure into neural language models both effectively and efficiently has remained challenging. In this paper, we use a multi-task objective: the model simultaneously predicts words and ground-truth parse trees encoded as “syntactic distances”, with the two objectives sharing the same intermediate representation. Experimental results on the Penn Treebank and Chinese Treebank datasets show that when ground-truth parse trees are provided as additional training signals, the model achieves lower perplexity and induces trees of better quality.
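As a rough illustration of this multi-task setup, the sketch below shares an encoder between a word-prediction head and a head that regresses per-position syntactic distances. The LSTM encoder, hidden sizes, MSE distance loss, and the weighting alpha are illustrative assumptions rather than the paper's exact architecture.

```python
# Hedged sketch of a multi-task LM: a shared encoder feeds two heads,
# one for next-word prediction and one for syntactic distances.
# Sizes and loss weighting are illustrative, not the paper's config.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SyntacticDistanceLM(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, hid_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        # Both objectives read the same intermediate representation.
        self.word_head = nn.Linear(hid_dim, vocab_size)  # next-word logits
        self.dist_head = nn.Linear(hid_dim, 1)           # syntactic distance

    def forward(self, tokens):                 # tokens: (batch, seq_len)
        hidden, _ = self.encoder(self.embed(tokens))
        return self.word_head(hidden), self.dist_head(hidden).squeeze(-1)

def multitask_loss(word_logits, distances, word_targets, dist_targets, alpha=1.0):
    """Cross-entropy for word prediction plus alpha-weighted regression
    against gold syntactic distances derived from the parse trees."""
    lm_loss = F.cross_entropy(word_logits.transpose(1, 2), word_targets)
    dist_loss = F.mse_loss(distances, dist_targets)
    return lm_loss + alpha * dist_loss
```

In this framing, the gold distances (one per adjacent word pair in the usual formulation; one per position here for simplicity) serve as the additional training signal, and a binary tree can be recovered from predicted distances by recursively splitting the sentence at the position with the largest distance.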
Co-authors
- Timothy O’Donnell 2
- Yue Zhang 2
- Jacob Louis Hoover 1
- Alessandro Sordoni 1
- Qiankun Fu 1