Judith Degen


2024

Can Syntactic Log-Odds Ratio Predict Acceptability and Satiation?
Jiayi Lu | Jonathan Merchan | Lian Wang | Judith Degen
Proceedings of the Society for Computation in Linguistics 2024
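For readers unfamiliar with the measure named in the title: the syntactic log-odds ratio (SLOR), introduced by Pauls and Klein (2012) and used as an acceptability measure by Lau, Clark, and Lappin (2017), is standardly defined as the sentence's language-model log probability, corrected for unigram frequency and normalized by length. The form below is that standard definition; the paper itself may use a variant.

```latex
% Standard definition of SLOR (Pauls & Klein 2012): LM log probability,
% corrected for unigram log probability and normalized by sentence length.
\[
  \mathrm{SLOR}(s) \;=\; \frac{\log p_{\mathrm{LM}}(s) \;-\; \log p_{\mathrm{uni}}(s)}{|s|},
  \qquad p_{\mathrm{uni}}(s) \;=\; \prod_{w \in s} p(w)
\]
```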

2023

Expectations over Unspoken Alternatives Predict Pragmatic Inferences
Jennifer Hu | Roger Levy | Judith Degen | Sebastian Schuster
Transactions of the Association for Computational Linguistics, Volume 11

Scalar inferences (SI) are a signature example of how humans interpret language based on unspoken alternatives. While empirical studies have demonstrated that human SI rates are highly variable—both within instances of a single scale, and across different scales—there have been few proposals that quantitatively explain both cross- and within-scale variation. Furthermore, while it is generally assumed that SIs arise through reasoning about unspoken alternatives, it remains debated whether humans reason about alternatives as linguistic forms, or at the level of concepts. Here, we test a shared mechanism explaining SI rates within and across scales: context-driven expectations about the unspoken alternatives. Using neural language models to approximate human predictive distributions, we find that SI rates are captured by the expectedness of the strong scalemate as an alternative. Crucially, however, expectedness robustly predicts cross-scale variation only under a meaning-based view of alternatives. Our results suggest that pragmatic inferences arise from context-driven expectations over alternatives, and these expectations operate at the level of concepts.
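To make the expectedness idea concrete, the sketch below scores how strongly a language model expects the strong scalemate (e.g., "all") in the context where the weaker term (e.g., "some") was produced. This is a minimal illustration, not the paper's code: the choice of GPT-2, the single-token restriction, and the string-level (rather than concept-level) treatment of alternatives are all simplifying assumptions.

```python
# Minimal sketch: estimate the expectedness of an alternative word as its
# next-token probability under a neural LM, given the preceding context.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def alternative_expectedness(context: str, alternative: str) -> float:
    """Return P(alternative | context) under the LM (single-token alternatives only)."""
    context_ids = tokenizer.encode(context, return_tensors="pt")
    alt_ids = tokenizer.encode(" " + alternative)  # leading space matters for GPT-2's BPE
    assert len(alt_ids) == 1, "this sketch only handles single-token alternatives"
    with torch.no_grad():
        logits = model(context_ids).logits[0, -1]  # logits over the next token
    return torch.softmax(logits, dim=-1)[alt_ids[0]].item()

# The weak term "some" was uttered after "Chris ate ..."; how expected was
# the strong scalemate "all" in that same slot?
print(alternative_expectedness("Chris ate", "all"))
print(alternative_expectedness("Chris ate", "some"))
```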

2021

Modeling cross-linguistic production of referring expressions
Brandon Waldon | Judith Degen
Proceedings of the Society for Computation in Linguistics 2021

Predicting scalar inferences from “or” to “not both” using neural sentence encoders
Elissa Li | Sebastian Schuster | Judith Degen
Proceedings of the Society for Computation in Linguistics 2021

2020

Modeling Behavior in Truth Value Judgment Task Experiments
Brandon Waldon | Judith Degen
Proceedings of the Society for Computation in Linguistics 2020

Harnessing the linguistic signal to predict scalar inferences
Sebastian Schuster | Yuxing Chen | Judith Degen
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Pragmatic inferences often subtly depend on the presence or absence of linguistic features. For example, the presence of a partitive construction (of the) increases the strength of a so-called scalar inference: listeners perceive the inference that Chris did not eat all of the cookies to be stronger after hearing “Chris ate some of the cookies” than after hearing the same utterance without a partitive, “Chris ate some cookies”. In this work, we explore to what extent neural network sentence encoders can learn to predict the strength of scalar inferences. We first show that an LSTM-based sentence encoder trained on an English dataset of human inference strength ratings is able to predict ratings with high accuracy (r = 0.78). We then probe the model’s behavior using manually constructed minimal sentence pairs and corpus data. We find that the model inferred previously established associations between linguistic features and inference strength, suggesting that the model learns to use linguistic features to predict pragmatic inferences.
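To make the modeling setup concrete, here is a minimal sketch of an LSTM-based sentence encoder with a regression head of the kind described above, probed on the partitive minimal pair from the abstract. It is an illustrative toy, not the authors' implementation: the vocabulary size, layer dimensions, and hard-coded token ids are all made-up assumptions, and the model is untrained.

```python
# Minimal sketch: an LSTM sentence encoder whose final hidden state feeds a
# linear head predicting a scalar inference-strength rating.
import torch
import torch.nn as nn

class InferenceStrengthRegressor(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 100, hidden_dim: int = 128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        embedded = self.embedding(token_ids)    # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(embedded)       # final hidden state summarizes the sentence
        return self.head(h_n[-1]).squeeze(-1)   # (batch,) predicted strength

# Probe the model on a partitive minimal pair; token ids are invented for
# illustration -- a real run would use a vocabulary fit to the rating dataset.
model = InferenceStrengthRegressor(vocab_size=1000)
partitive = torch.tensor([[5, 7, 2, 9, 3, 11]])  # "Chris ate some of the cookies"
bare = torch.tensor([[5, 7, 2, 11]])             # "Chris ate some cookies"
print(model(partitive).item(), model(bare).item())
```

In the trained setting described in the abstract, comparing the two predictions would reveal whether the model has picked up the association between the partitive and stronger inferences.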