Yan Cong
2021

Pragmatic competence of pre-trained language models through the lens of discourse connectives
Lalchand Pandia | Yan Cong | Allyson Ettinger
Proceedings of the 25th Conference on Computational Natural Language Learning

As pre-trained language models (LMs) continue to dominate NLP, it is increasingly important that we understand the depth of language capabilities in these models. In this paper, we target pre-trained LMs’ competence in pragmatics, with a focus on pragmatics relating to discourse connectives. We formulate cloze-style tests using a combination of naturally-occurring data and controlled inputs drawn from psycholinguistics. We focus on testing models’ ability to use pragmatic cues to predict discourse connectives, models’ ability to understand implicatures relating to connectives, and the extent to which models show humanlike preferences regarding temporal dynamics of connectives. We find that although models predict connectives reasonably well in the context of naturally-occurring data, when we control contexts to isolate high-level pragmatic cues, model sensitivity is much lower. Models also do not show substantial humanlike temporal preferences. Overall, the findings suggest that at present, dominant pre-training paradigms do not result in substantial pragmatic competence in our models.
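
To make the cloze-style setup concrete, here is a minimal sketch of probing a masked LM for a discourse connective. This is an illustration only, not the authors' code: the model choice (bert-base-uncased) and the example sentence are assumptions, not items from the paper's stimuli or evaluated models.

```python
from transformers import pipeline

# Load a masked language model; "bert-base-uncased" is an arbitrary choice,
# not necessarily one of the models evaluated in the paper.
fill = pipeline("fill-mask", model="bert-base-uncased")

# A cloze frame where the masked position is a discourse connective.
# The pragmatic cue (a contrast between the two clauses) should favor
# a concessive connective such as "however".
context = "The forecast promised sunshine; [MASK], it rained all afternoon."

# Inspect the model's top predictions and their probabilities.
for prediction in fill(context, top_k=5):
    print(f"{prediction['token_str']:>12}  p={prediction['score']:.3f}")
```

Comparing the probability mass assigned to pragmatically appropriate connectives (e.g., "however") against inappropriate ones (e.g., "therefore") across controlled minimal pairs is one simple way to quantify the kind of sensitivity the paper investigates.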

2015

The Invertible Construction in Chinese
Yan Cong | Chu-Ren Huang | Lian-Hee Wee
Proceedings of the 29th Pacific Asia Conference on Language, Information and Computation