Yiyou Sun
2025
Uncertainty Propagation on LLM Agent
Qiwei Zhao | Dong Li | Yanchi Liu | Wei Cheng | Yiyou Sun | Mika Oishi | Takao Osaki | Katsushi Matsuda | Huaxiu Yao | Chen Zhao | Haifeng Chen | Xujiang Zhao
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Large language models (LLMs) integrated into multi-step agent systems enable complex decision-making processes across various applications. However, their outputs often lack reliability, making uncertainty estimation crucial. Existing uncertainty estimation methods primarily focus on final-step outputs, which fail to account for cumulative uncertainty over the multi-step decision-making process and the dynamic interactions between agents and their environments. To address these limitations, we propose SAUP (Situation Awareness Uncertainty Propagation), a novel framework that propagates uncertainty through each step of an LLM-based agent’s reasoning process. SAUP incorporates situational awareness by assigning situational weights to each step’s uncertainty during the propagation. Our method, compatible with various one-step uncertainty estimation techniques, provides a comprehensive and accurate uncertainty measure. Extensive experiments on benchmark datasets demonstrate that SAUP significantly outperforms existing state-of-the-art methods, achieving up to 20% improvement in AUROC.
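The abstract describes propagating per-step uncertainties with situational weights, but it does not give the exact SAUP aggregation rule. The following is a minimal illustrative sketch, not the authors' method: the weighted-average form and the names propagate_uncertainty, step_uncertainties, and situational_weights are assumptions introduced only to make the idea concrete.

```python
# Illustrative sketch only: the weighted-average aggregation below is an
# assumption, not the SAUP formula from the paper.

def propagate_uncertainty(step_uncertainties, situational_weights):
    """Combine per-step uncertainty estimates into one agent-level score.

    step_uncertainties: one-step uncertainty estimates (e.g. from any
        single-step estimator), one value per reasoning step.
    situational_weights: non-negative weights reflecting how critical
        each step's situation is judged to be.
    """
    if len(step_uncertainties) != len(situational_weights):
        raise ValueError("one weight is expected per step")
    total_weight = sum(situational_weights)
    if total_weight == 0:
        return 0.0
    # Weighted average, so steps in more critical situations dominate.
    return sum(w * u for w, u in zip(situational_weights, step_uncertainties)) / total_weight


# Example: three reasoning steps, the last judged most critical.
print(propagate_uncertainty([0.2, 0.5, 0.9], [0.5, 1.0, 2.0]))
```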
2024
Uncertainty Quantification for In-Context Learning of Large Language Models
Chen Ling | Xujiang Zhao | Xuchao Zhang | Wei Cheng | Yanchi Liu | Yiyou Sun | Mika Oishi | Takao Osaki | Katsushi Matsuda | Jie Ji | Guangji Bai | Liang Zhao | Haifeng Chen
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
In-context learning has emerged as a groundbreaking ability of Large Language Models (LLMs) and revolutionized various fields by providing a few task-relevant demonstrations in the prompt. However, trustworthiness issues with LLM responses, such as hallucination, have also been actively discussed. Existing works have been devoted to quantifying the uncertainty in LLM responses, but they often overlook the complex nature of LLMs and the uniqueness of in-context learning. In this work, we delve into the predictive uncertainty of LLMs associated with in-context learning, highlighting that such uncertainties may stem from both the provided demonstrations (aleatoric uncertainty) and ambiguities tied to the model’s configurations (epistemic uncertainty). We propose a novel formulation and corresponding estimation method to quantify both types of uncertainties. The proposed method offers an unsupervised way to understand the prediction of in-context learning in a plug-and-play fashion. Extensive experiments are conducted to demonstrate the effectiveness of the decomposition. The code and data are available at: https://github.com/lingchen0331/UQ_ICL.
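The paper's exact estimator is in its repository (https://github.com/lingchen0331/UQ_ICL); the sketch below only illustrates the general idea of splitting predictive uncertainty into aleatoric and epistemic parts. The entropy-based decomposition (total = aleatoric + epistemic) is a common approximation assumed here for illustration, and the names decompose_uncertainty and predictive_dists are hypothetical.

```python
# Illustrative sketch only: an entropy-based uncertainty decomposition,
# assumed here as a stand-in for the paper's own estimator.
import math


def entropy(p):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(x * math.log(x) for x in p if x > 0)


def decompose_uncertainty(predictive_dists):
    """Split uncertainty across several sampled configurations / demonstration sets.

    predictive_dists: list of predictive distributions (one per sampled
        configuration), each a list of class probabilities.
    Returns (total, aleatoric, epistemic) in nats.
    """
    n_classes = len(predictive_dists[0])
    n_samples = len(predictive_dists)
    # Mean predictive distribution across configurations.
    mean_dist = [sum(d[c] for d in predictive_dists) / n_samples
                 for c in range(n_classes)]
    total = entropy(mean_dist)                                   # total predictive uncertainty
    aleatoric = sum(entropy(d) for d in predictive_dists) / n_samples  # expected per-sample entropy
    epistemic = total - aleatoric                                # disagreement (mutual information) term
    return total, aleatoric, epistemic


# Example: two configurations that disagree -> nonzero epistemic uncertainty.
print(decompose_uncertainty([[0.9, 0.1], [0.6, 0.4]]))
```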