Representations of Fact, Fiction and Forecast in Large Language Models: Epistemics and Attitudes

Meng Li, Michael Vrazitulis, David Schlangen


Abstract
Rational speakers are expected to know what they know and what they do not know, and to produce expressions whose strength matches that of their evidence. By contrast, current large language models still struggle to generate utterances that reflect their assessment of facts and their confidence in an uncertain real-world environment. While it has recently become popular to estimate and calibrate the confidence of LLMs via verbalized uncertainty, what is lacking is a careful examination of the linguistic knowledge of uncertainty encoded in the latent space of LLMs. In this paper, we draw on typological frameworks of epistemic expressions to evaluate LLMs’ knowledge of epistemic modality, using controlled stories. Our experiments show that the performance of LLMs in generating epistemic expressions is limited and not robust, and hence the expressions of uncertainty they generate are not always reliable. Building uncertainty-aware LLMs will require enriching the semantic knowledge of epistemic modality in LLMs.
Anthology ID:
2025.acl-long.1345
Volume:
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
27734–27757
URL:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.1345/
Cite (ACL):
Meng Li, Michael Vrazitulis, and David Schlangen. 2025. Representations of Fact, Fiction and Forecast in Large Language Models: Epistemics and Attitudes. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 27734–27757, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Representations of Fact, Fiction and Forecast in Large Language Models: Epistemics and Attitudes (Li et al., ACL 2025)
PDF:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.1345.pdf