Abstract
Recent studies have claimed that large language models (LLMs) are capable of drawing pragmatic inferences (Qiu et al., 2023; Hu et al., 2022; Barattieri di San Pietro et al., 2023). The present paper sets out to test LLMs' abilities on atypicality inferences, a type of pragmatic inference that is triggered by informational redundancy. We test several state-of-the-art LLMs in a zero-shot setting and find that they systematically fail to derive atypicality inferences. Our robustness analysis indicates that when inferences are seemingly derived in few-shot settings, these results can be attributed to shallow pattern matching rather than pragmatic inferencing. We also analyse the models' performance at the different derivation steps required for drawing atypicality inferences: our results show that the models have access to script knowledge and can use it to identify redundancies and accommodate the atypicality inference. The failure instead seems to stem from not reacting to the subtle maxim-of-quantity violations introduced by the informationally redundant utterances.
- Anthology ID:
- 2024.cmcl-1.8
- Volume:
- Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
- Month:
- August
- Year:
- 2024
- Address:
- Bangkok, Thailand
- Editors:
- Tatsuki Kuribayashi, Giulia Rambelli, Ece Takmaz, Philipp Wicke, Yohei Oseki
- Venues:
- CMCL | WS
- Publisher:
- Association for Computational Linguistics
- Pages:
- 86–100
- URL:
- https://aclanthology.org/2024.cmcl-1.8
- Cite (ACL):
- Charlotte Kurch, Margarita Ryzhova, and Vera Demberg. 2024. Large language models fail to derive atypicality inferences in a human-like manner. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, pages 86–100, Bangkok, Thailand. Association for Computational Linguistics.
- Cite (Informal):
- Large language models fail to derive atypicality inferences in a human-like manner (Kurch et al., CMCL-WS 2024)
- PDF:
- https://preview.aclanthology.org/nschneid-patch-4/2024.cmcl-1.8.pdf