Do large language models and humans have similar behaviours in causal inference with script knowledge?

Xudong Hong, Margarita Ryzhova, Daniel Biondi, Vera Demberg


Abstract
Recently, large pre-trained language models (LLMs) have demonstrated superior language understanding abilities, including zero-shot causal reasoning. However, it is unclear to what extent their capabilities are similar to human ones. Here, we study the processing of an event B in a script-based story, which causally depends on a previous event A. In our manipulation, event A is stated, negated, or omitted in an earlier section of the text. We first conducted a self-paced reading experiment, which showed that humans exhibit significantly longer reading times when a causal conflict exists (¬ A → B) than under the logical condition (A → B). However, reading times remain similar when cause A is not explicitly mentioned (nil → B), indicating that humans can easily infer event B from their script knowledge. We then tested a variety of LLMs on the same data to check to what extent the models replicate human behavior. Our experiments show that 1) only recent LLMs, like GPT-3 or Vicuna, correlate with human behavior in the ¬ A → B condition; 2) despite this correlation, all models still fail to predict that nil → B is less surprising than ¬ A → B, indicating that LLMs still have difficulties integrating script knowledge.
Anthology ID:
2024.starsem-1.34
Volume:
Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Danushka Bollegala, Vered Shwartz
Venue:
*SEM
SIG:
SIGLEX
Publisher:
Association for Computational Linguistics
Note:
Pages:
421–437
URL:
https://aclanthology.org/2024.starsem-1.34
Cite (ACL):
Xudong Hong, Margarita Ryzhova, Daniel Biondi, and Vera Demberg. 2024. Do large language models and humans have similar behaviours in causal inference with script knowledge?. In Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024), pages 421–437, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Do large language models and humans have similar behaviours in causal inference with script knowledge? (Hong et al., *SEM 2024)
PDF:
https://preview.aclanthology.org/jeptaln-2024-ingestion/2024.starsem-1.34.pdf