Causal Understanding by LLMs: The Role of Uncertainty
Oscar William Lithgow-Serrano, Vani Kanjirangat, Alessandro Antonucci
Abstract
Recent papers show that LLMs achieve near-random accuracy in causal relation classification, raising the question of whether such failures arise from limited pretraining exposure or from deeper representational gaps. We investigate this under uncertainty-based evaluation, testing whether pretraining exposure to causal examples improves causal understanding, using more than 18K PubMed sentences (half from The Pile corpus, half post-2024) across seven models (Pythia-1.4B/7B/12B, GPT-J-6B, Dolly-7B/12B, Qwen-7B). We analyze model behavior through: (i) causal classification, where the model identifies causal relationships in text, and (ii) verbatim memorization probing, where we assess whether the model prefers previously seen causal statements over their paraphrases. Models perform four-way classification (direct/conditional/correlational/no-relationship) and select between originals and their generated paraphrases. Results show nearly identical accuracy on seen and unseen sentences (p>0.05), no memorization bias (24.8% original selection), and an almost flat output distribution over the answer options, with entropy near the maximum (1.35 vs. the 1.39 ceiling), confirming random guessing. Instruction-tuned models show severe miscalibration (Qwen: >95% confidence, 32.8% accuracy, ECE=0.49). Conditional relations induce the highest entropy (+11% vs. direct). These findings suggest that failures in causal understanding stem from the lack of structured causal representations rather than from insufficient exposure to causal examples during pretraining.
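The entropy and calibration figures in the abstract can be made concrete with a minimal sketch (simulated data only; the function names and numbers below are illustrative assumptions, not the authors' code). For a four-way task the maximum Shannon entropy is ln 4 ≈ 1.386 nats, which is the 1.39 ceiling the abstract refers to, and expected calibration error (ECE) measures the gap between a model's stated confidence and its observed accuracy.

```python
import numpy as np

def entropy(probs: np.ndarray) -> float:
    """Shannon entropy in nats of a discrete distribution."""
    probs = probs[probs > 0]
    return float(-np.sum(probs * np.log(probs)))

# Four-way task: direct / conditional / correlational / no-relationship.
# A perfectly flat answer distribution hits the entropy ceiling ln(4).
uniform = np.full(4, 0.25)
print(f"max entropy: {entropy(uniform):.3f} nats")  # ~1.386, i.e. the 1.39 ceiling

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: per-bin |confidence - accuracy| gap, weighted by bin size."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(confidences[in_bin].mean() - correct[in_bin].mean())
            ece += in_bin.mean() * gap
    return float(ece)

# Simulated miscalibration in the spirit of the reported Qwen numbers:
# ~95% confidence paired with ~32.8% accuracy.
rng = np.random.default_rng(0)
conf = rng.uniform(0.90, 1.00, size=2000)
hit = (rng.random(2000) < 0.328).astype(float)
print(f"ECE: {expected_calibration_error(conf, hit):.2f}")  # ~0.6 on this toy data
```

Under these simulated values the per-bin gap is roughly |0.95 − 0.33| ≈ 0.62, the same order of magnitude as the ECE of 0.49 the paper reports for Qwen.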
- Anthology ID:
- 2025.uncertainlp-main.19
- Volume:
- Proceedings of the 2nd Workshop on Uncertainty-Aware NLP (UncertaiNLP 2025)
- Month:
- November
- Year:
- 2025
- Address:
- Suzhou, China
- Venues:
- UncertaiNLP | WS
- Publisher:
- Association for Computational Linguistics
- Pages:
- 208–228
- URL:
- https://preview.aclanthology.org/ingest-emnlp/2025.uncertainlp-main.19/
- Cite (ACL):
- Oscar William Lithgow-Serrano, Vani Kanjirangat, and Alessandro Antonucci. 2025. Causal Understanding by LLMs: The Role of Uncertainty. In Proceedings of the 2nd Workshop on Uncertainty-Aware NLP (UncertaiNLP 2025), pages 208–228, Suzhou, China. Association for Computational Linguistics.
- Cite (Informal):
- Causal Understanding by LLMs: The Role of Uncertainty (William Lithgow-Serrano et al., UncertaiNLP 2025)
- PDF:
- https://preview.aclanthology.org/ingest-emnlp/2025.uncertainlp-main.19.pdf