Margarita Ryzhova


2024

Large language models fail to derive atypicality inferences in a human-like manner
Charlotte Kurch | Margarita Ryzhova | Vera Demberg
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics

Recent studies have claimed that large language models (LLMs) are capable of drawing pragmatic inferences (Qiu et al., 2023; Hu et al., 2022; Barattieri di San Pietro et al., 2023). The present paper sets out to test LLMs' abilities on atypicality inferences, a type of pragmatic inference that is triggered through informational redundancy. We test several state-of-the-art LLMs in a zero-shot setting and find that they systematically fail to derive atypicality inferences. Our robustness analysis indicates that when inferences are seemingly derived in few-shot settings, these results can be attributed to shallow pattern matching rather than pragmatic inferencing. We also analyse the performance of the LLMs at the different derivation steps required for drawing atypicality inferences – our results show that models have access to script knowledge and can use it to identify redundancies and accommodate the atypicality inference. The failure instead seems to stem from not reacting to the subtle maxim-of-quantity violations introduced by the informationally redundant utterances.
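
A minimal sketch of what such a zero-shot probe might look like, assuming a Hugging Face causal LM (GPT-2 as a small stand-in for the LLMs tested) and an illustrative bus-fare story; the prompt wording, rating scale, and model choice are assumptions for illustration, not the paper's actual materials:

```python
# Hypothetical zero-shot probe for an atypicality inference.
# The story, question, and rating scale are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # stand-in for a larger LLM

story = "John took the bus to work. He paid the bus fare. "  # informationally redundant utterance
question = (
    "How likely is it that John does not usually pay the bus fare? "
    "Answer with a number from 1 (very unlikely) to 7 (very likely).\nAnswer:"
)

# Greedy decoding of a short continuation; a human-like atypicality inference
# would correspond to a high rating for the redundant version of the story.
output = generator(story + question, max_new_tokens=5, do_sample=False)
print(output[0]["generated_text"])
```

In the paradigm described in the abstract, the same question would also be asked for a control story without the redundant utterance, and the two ratings compared.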

Do large language models and humans have similar behaviours in causal inference with script knowledge?
Xudong Hong | Margarita Ryzhova | Daniel Biondi | Vera Demberg
Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)

Recently, large pre-trained language models (LLMs) have demonstrated superior language understanding abilities, including zero-shot causal reasoning. However, it is unclear to what extent their capabilities are similar to human ones. Here, we study the processing of an event B in a script-based story, which causally depends on a previous event A. In our manipulation, event A is stated, negated, or omitted in an earlier section of the text. We first conducted a self-paced reading experiment, which showed that humans exhibit significantly longer reading times when a causal conflict exists (¬A → B) than under the logical condition (A → B). However, reading times remain similar when cause A is not explicitly mentioned, indicating that humans can easily infer event B from their script knowledge. We then tested a variety of LLMs on the same data to check to what extent the models replicate human behavior. Our experiments show that (1) only recent LLMs, like GPT-3 or Vicuna, correlate with human behavior in the ¬A → B condition, and (2) despite this correlation, all models still fail to predict that nil → B is less surprising than ¬A → B, indicating that LLMs still have difficulties integrating script knowledge.
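
A minimal sketch of the kind of surprisal comparison described above, assuming GPT-2 via Hugging Face Transformers and an invented pizza-ordering story; the contexts, target sentence, and model are illustrative assumptions, not the paper's stimuli:

```python
# Comparing the surprisal of a target event B under three contexts
# (A stated, A negated, A omitted) with a causal LM.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def surprisal(context: str, target: str) -> float:
    """Sum of -log2 p(token | preceding text) over the target tokens."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    tgt_ids = tokenizer(target, return_tensors="pt").input_ids
    input_ids = torch.cat([ctx_ids, tgt_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # log-probabilities assigned to each next token
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    tgt_positions = range(ctx_ids.size(1) - 1, input_ids.size(1) - 1)
    nll = -sum(log_probs[pos, input_ids[0, pos + 1]] for pos in tgt_positions)
    return (nll / torch.log(torch.tensor(2.0))).item()

# Illustrative stimuli: B ("the delivery person handed him a pizza")
# causally depends on A ("he ordered a pizza").
contexts = {
    "A -> B":   "Tom was hungry, so he ordered a pizza.",
    "not A -> B": "Tom was hungry, but he did not order a pizza.",
    "nil -> B": "Tom was hungry in the evening.",
}
target = " Half an hour later, the delivery person handed him a warm pizza."
for label, ctx in contexts.items():
    print(label, round(surprisal(ctx, target), 2))
```

Under the human-like pattern reported in the abstract, surprisal for the target should be highest in the ¬A → B condition and remain comparatively low when A is merely omitted (nil → B).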