Shiva Upadhye
2020

Predicting Reference: What do Language Models Learn about Discourse Models?
Shiva Upadhye | Leon Bergen | Andrew Kehler
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Whereas there is a growing literature that probes neural language models to assess the degree to which they have latently acquired grammatical knowledge, little if any research has investigated their acquisition of discourse modeling ability. We address this question by drawing on a rich psycholinguistic literature that has established how different contexts affect referential biases concerning who is likely to be referred to next. The results reveal that, for the most part, the prediction behavior of neural language models does not resemble that of human language users.