2025
Benchmarking Uncertainty Quantification Methods for Large Language Models with LM-Polygraph
Roman Vashurin | Ekaterina Fadeeva | Artem Vazhentsev | Lyudmila Rvanova | Daniil Vasilev | Akim Tsvigun | Sergey Petrakov | Rui Xing | Abdelrahman Sadallah | Kirill Grishchenkov | Alexander Panchenko | Timothy Baldwin | Preslav Nakov | Maxim Panov | Artem Shelmanov
Transactions of the Association for Computational Linguistics, Volume 13
The rapid proliferation of large language models (LLMs) has stimulated researchers to seek effective and efficient approaches to deal with LLM hallucinations and low-quality outputs. Uncertainty quantification (UQ) is a key element of machine learning applications for dealing with such challenges. However, research to date on UQ for LLMs has been fragmented in terms of techniques and evaluation methodologies. In this work, we address this issue by introducing a novel benchmark that implements a collection of state-of-the-art UQ baselines and offers an environment for controllable and consistent evaluation of novel UQ techniques over various text generation tasks. Our benchmark also supports the assessment of confidence normalization methods in terms of their ability to provide interpretable scores. Using our benchmark, we conduct a large-scale empirical investigation of UQ and normalization techniques across eleven tasks, identifying the most effective approaches.
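Benchmarks of this kind typically score a UQ method by how well its uncertainty values rank outputs by quality. A minimal self-contained sketch of that idea, using a common sequence-level baseline (mean negative log-probability of the generated tokens) and Spearman rank correlation against quality scores; the toy data and the choice of baseline are illustrative assumptions, not the paper's protocol:

```python
def sequence_uncertainty(token_logprobs):
    """Mean negative log-probability of generated tokens:
    a common sequence-level uncertainty baseline."""
    return -sum(token_logprobs) / len(token_logprobs)

def spearman(xs, ys):
    """Spearman rank correlation (assumes no ties)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Toy data: token log-probs per generation, paired with a quality score.
generations = [
    ([-0.1, -0.2, -0.1], 0.95),   # confident output, high quality
    ([-0.8, -1.2, -0.9], 0.60),
    ([-2.0, -1.8, -2.5], 0.20),   # uncertain output, low quality
]
uncertainties = [sequence_uncertainty(lp) for lp, _ in generations]
qualities = [q for _, q in generations]
# A good UQ method: uncertainty anti-correlates with quality.
rho = spearman(uncertainties, qualities)
```

On this toy set the baseline ranks the outputs perfectly, so the correlation is -1; real benchmarks use larger evaluation sets and metrics such as rejection curves rather than a single correlation.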
2024
Fact-Checking the Output of Large Language Models via Token-Level Uncertainty Quantification
Ekaterina Fadeeva | Aleksandr Rubashevskii | Artem Shelmanov | Sergey Petrakov | Haonan Li | Hamdy Mubarak | Evgenii Tsymbalov | Gleb Kuzmin | Alexander Panchenko | Timothy Baldwin | Preslav Nakov | Maxim Panov
Findings of the Association for Computational Linguistics: ACL 2024
Large language models (LLMs) are notorious for hallucinating, i.e., producing erroneous claims in their output. Such hallucinations can be dangerous, as occasional factual inaccuracies in the generated text might be obscured by the rest of the output being generally factually correct, making it extremely hard for users to spot them. Current services that leverage LLMs usually do not provide any means for detecting unreliable generations. Here, we aim to bridge this gap. In particular, we propose a novel fact-checking and hallucination detection pipeline based on token-level uncertainty quantification. Uncertainty scores leverage information encapsulated in the output of a neural network or its layers to detect unreliable predictions, and we show that they can be used to fact-check the atomic claims in the LLM output. Moreover, we present a novel token-level uncertainty quantification method that removes the impact of uncertainty about what claim to generate at the current step and what surface form to use. Our method, Claim Conditioned Probability (CCP), measures only the uncertainty of a particular claim value expressed by the model. Experiments on the task of biography generation demonstrate strong improvements for CCP compared to the baselines for seven different LLMs and four languages. Human evaluation reveals that the fact-checking pipeline based on uncertainty quantification is competitive with a fact-checking tool that leverages external knowledge.
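Such a pipeline scores each atomic claim by aggregating token-level uncertainties over the tokens that express that claim and flagging high-scoring claims. A minimal sketch of the aggregation step; the per-token score, the max-aggregation rule, and the threshold below are illustrative assumptions, not the paper's CCP formulation, which conditions on the claim's meaning:

```python
def token_uncertainties(token_probs):
    # Per-token uncertainty as 1 - p(token); CCP itself conditions on
    # the claim value, which this toy score does not attempt.
    return [1.0 - p for p in token_probs]

def claim_score(token_probs, claim_span):
    """Aggregate token uncertainties over a claim's token span
    (here via the maximum, an illustrative choice)."""
    start, end = claim_span
    return max(token_uncertainties(token_probs)[start:end])

# Toy output: 6 tokens forming two atomic claims, spans [0,3) and [3,6).
probs = [0.95, 0.90, 0.97, 0.40, 0.85, 0.30]
claims = {"claim_A": (0, 3), "claim_B": (3, 6)}
THRESHOLD = 0.5   # illustrative flagging threshold
flags = {name: claim_score(probs, span) > THRESHOLD
         for name, span in claims.items()}
```

Here claim_B contains two low-probability tokens and gets flagged as potentially unreliable, while claim_A, expressed with high-probability tokens, passes.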
2023
LM-Polygraph: Uncertainty Estimation for Language Models
Ekaterina Fadeeva | Roman Vashurin | Akim Tsvigun | Artem Vazhentsev | Sergey Petrakov | Kirill Fedyanin | Daniil Vasilev | Elizaveta Goncharova | Alexander Panchenko | Maxim Panov | Timothy Baldwin | Artem Shelmanov
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
Recent advancements in the capabilities of large language models (LLMs) have paved the way for a myriad of groundbreaking applications in various fields. However, a significant challenge arises as these models often “hallucinate”, i.e., fabricate facts without providing users an apparent means to discern the veracity of their statements. Uncertainty estimation (UE) methods are one path to safer, more responsible, and more effective use of LLMs. However, to date, research on UE methods for LLMs has been focused primarily on theoretical rather than engineering contributions. In this work, we tackle this issue by introducing LM-Polygraph, a framework with implementations of a battery of state-of-the-art UE methods for LLMs in text generation tasks, with unified program interfaces in Python. Additionally, it introduces an extendable benchmark for consistent evaluation of UE techniques by researchers, and a demo web application that enriches the standard chat dialog with confidence scores, empowering end-users to discern unreliable responses. LM-Polygraph is compatible with the most recent LLMs, including BLOOMz, LLaMA-2, ChatGPT, and GPT-4, and is designed to support future releases of similarly-styled LMs.
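Surfacing confidence scores in a chat dialog, as the demo application does, requires mapping an unbounded raw uncertainty onto a bounded, interpretable scale. A minimal sketch using min-max normalization over a calibration set; the normalization choice and calibration values are assumptions for illustration, not LM-Polygraph's own interfaces:

```python
def normalize_confidence(u, calib):
    """Map a raw uncertainty u to a confidence in [0, 1] by min-max
    scaling against calibration uncertainties and flipping the sign
    (higher uncertainty -> lower confidence)."""
    lo, hi = min(calib), max(calib)
    if hi == lo:
        return 0.5  # degenerate calibration set: no signal
    clipped = min(max(u, lo), hi)
    return 1.0 - (clipped - lo) / (hi - lo)

# Hypothetical raw uncertainties collected on a held-out calibration set.
calibration = [0.2, 0.5, 0.8, 1.4, 2.0]
conf_low_unc = normalize_confidence(0.2, calibration)    # most confident
conf_high_unc = normalize_confidence(2.0, calibration)   # least confident
```

Clipping keeps out-of-range uncertainties from producing confidences outside [0, 1], which matters when deployment inputs drift from the calibration distribution.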