Shiva Upadhye


2025

SPACER: A Parallel Dataset of Speech Production And Comprehension of Error Repairs
Shiva Upadhye | Jiaxuan Li | Richard Futrell
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics

Speech errors are a natural part of communication, yet they rarely lead to complete communicative failure because both speakers and comprehenders can detect and correct errors. Although prior research has examined error monitoring and correction in production and comprehension separately, integrated investigation of both systems has been impeded by the scarcity of parallel data. In this study, we present SPACER, a parallel dataset that captures how naturalistic speech errors are corrected by both speakers and comprehenders. We focus on single-word substitution errors extracted from the Switchboard speech corpus, accompanied by speakers’ self-repairs and comprehenders’ responses from an offline text-editing experiment. Our exploratory analysis suggests asymmetries in error correction strategies: speakers are more likely to repair errors that introduce greater semantic and phonemic deviations, whereas comprehenders tend to correct errors that are phonemically similar to more plausible alternatives or that do not fit the prior context. Our dataset enables future research on an integrated approach to language production and comprehension.

2020

Predicting Reference: What do Language Models Learn about Discourse Models?
Shiva Upadhye | Leon Bergen | Andrew Kehler
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Whereas there is a growing literature that probes neural language models to assess the degree to which they have latently acquired grammatical knowledge, little, if any, research has investigated their acquisition of discourse modeling ability. We address this question by drawing on a rich psycholinguistic literature that has established how different contexts affect referential biases concerning who is likely to be referred to next. The results reveal that, for the most part, the prediction behavior of neural language models does not resemble that of human language users.