@inproceedings{toney-wails-2025-certain,
    title = "Certain but not Probable? Differentiating Certainty from Probability in {LLM} Token Outputs for Probabilistic Scenarios",
    author = "Toney, Autumn  and
      Wails, Ryan",
    booktitle = "Proceedings of the 2nd Workshop on Uncertainty-Aware NLP (UncertaiNLP 2025)",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.uncertainlp-main.6/",
    pages = "51--60",
    ISBN = "979-8-89176-349-4",
    abstract = "Reliable uncertainty quantification (UQ) is essential for ensuring trustworthy downstream use of large language models, especially when they are deployed in decision-support and other knowledge-intensive applications. Model certainty can be estimated from token logits, with derived probability and entropy values offering insight into performance on the prompt task. However, this approach may be inadequate for probabilistic scenarios, where the probabilities of token outputs are expected to align with the theoretical probabilities of the possible outcomes. We investigate the relationship between token certainty and alignment with theoretical probability distributions in well-defined probabilistic scenarios. Using GPT-4.1 and DeepSeek-Chat, we evaluate model responses to ten prompts involving probability (e.g., roll a six-sided die), both with and without explicit probability cues in the prompt (e.g., roll a fair six-sided die). We measure two dimensions: (1) response validity with respect to scenario constraints, and (2) alignment between token-level output probabilities and theoretical probabilities. Our results indicate that, while both models achieve perfect in-domain response accuracy across all prompt scenarios, their token-level probability and entropy values consistently diverge from the corresponding theoretical distributions."
}