Yisong Yue
2025
Beyond Numeric Rewards: In-Context Dueling Bandits with LLM Agents
Fanzeng Xia | Hao Liu | Yisong Yue | Tongxin Li
Findings of the Association for Computational Linguistics: ACL 2025
In-Context Reinforcement Learning (ICRL) is a frontier paradigm to solve Reinforcement Learning (RL) problems in the foundation-model era. While ICRL capabilities have been demonstrated in transformers through task-specific training, the potential of large language models (LLMs) out of the box remains largely unexplored. This paper investigates whether LLMs can generalize cross-domain to perform ICRL on the Dueling Bandits (DB) problem, a stateless preference-based RL setting. We find that top-performing LLMs exhibit a notable zero-shot capacity for relative decision-making, which translates to low short-term weak regret across all DB environments by quickly including the best arm in duels. However, an optimality gap still exists between LLMs and classic DB algorithms in terms of strong regret. LLMs struggle to converge and consistently exploit even when explicitly prompted to do so, and they are sensitive to prompt variations. To bridge this gap, we propose an agentic-flow framework—LLM with Enhanced Algorithmic Dueling (LEAD)—which integrates off-the-shelf DB algorithm support with LLM agents through fine-grained adaptive interplay. We show that LEAD inherits theoretical guarantees from classic DB algorithms on both weak and strong regret. We validate its efficacy and robustness even with noisy and adversarial prompts. The design of such an agentic framework sheds light on how to enhance the trustworthiness of general-purpose LLMs generalized to in-context decision-making tasks.
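The weak/strong regret distinction referenced in this abstract is standard in the dueling-bandits literature. As a hedged illustration (one common formulation, not necessarily the exact notation used in the paper): with Condorcet winner a* and pairwise gaps defined from comparison probabilities, the two cumulative regrets over T rounds are

```latex
% One common formulation of dueling-bandit regret (illustrative; notation assumed).
% At round t the learner duels arms (a_t, b_t); a^* is the Condorcet winner and
% \Delta(a^*, a) = P(a^* \succ a) - \tfrac{1}{2} is the pairwise gap.
R^{\mathrm{strong}}_T = \sum_{t=1}^{T} \Big[ \Delta(a^*, a_t) + \Delta(a^*, b_t) \Big],
\qquad
R^{\mathrm{weak}}_T = \sum_{t=1}^{T} \min\!\Big\{ \Delta(a^*, a_t),\, \Delta(a^*, b_t) \Big\}.
```

Under this formulation, weak regret vanishes as soon as the best arm appears in the duel, which is why quickly including a* in duels yields low weak regret, while strong regret additionally requires converging to dueling a* against itself (consistent exploitation).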
2024
Uncertainty Calibration for Tool-Using Language Agents
Hao Liu | Zi-Yi Dou | Yixin Wang | Nanyun Peng | Yisong Yue
Findings of the Association for Computational Linguistics: EMNLP 2024
There is increasing interest in equipping language models with the ability to leverage external tools for complex, goal-oriented tasks. However, interacting with external tools introduces inherent uncertainties due to imperfections and misalignments between the tools’ outputs and the agents’ internal models, often leading to suboptimal outcomes. We thus study the problem of tool-use calibration in language agents, and identify prompt design and execution trace selection as two primary areas that suffer from miscalibration. We then propose ProbeCal, which recalibrates the internal probabilities of tool-using language agents to better reflect the actual effectiveness of tools, and enables a more appropriate selection of prompts and execution paths. We empirically show that ProbeCal can significantly and consistently improve off-the-shelf language models in tool-using applications.
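To make the idea of "recalibrating internal probabilities" concrete, the sketch below shows a generic temperature-scaling recalibration against observed tool-call outcomes. This is not ProbeCal itself (the paper's procedure is not reproduced here); all names and the toy data are hypothetical.

```python
# Generic probability recalibration via temperature scaling (illustrative only;
# not the ProbeCal algorithm). Raw model confidences are rescaled so that they
# better match how often tool calls actually succeed.
import math

def temperature_scale(logprob: float, temperature: float) -> float:
    """Rescale a model log-probability by a temperature parameter."""
    return logprob / temperature

def fit_temperature(logprobs, successes, grid=None) -> float:
    """Grid-search the temperature minimizing negative log-likelihood of
    observed tool-call successes under the rescaled probabilities."""
    grid = grid or [0.5 + 0.1 * i for i in range(31)]  # candidate T in [0.5, 3.5]
    def nll(T):
        total = 0.0
        for lp, y in zip(logprobs, successes):
            p = math.exp(temperature_scale(lp, T))
            p = min(max(p, 1e-6), 1.0 - 1e-6)  # clamp for numerical safety
            total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
        return total
    return min(grid, key=nll)

# Toy usage: raw confidences vs. whether each tool call actually succeeded.
raw_logprobs = [math.log(p) for p in (0.95, 0.9, 0.85, 0.8, 0.6)]
outcomes = [1, 0, 1, 0, 1]
T = fit_temperature(raw_logprobs, outcomes)
calibrated = [math.exp(temperature_scale(lp, T)) for lp in raw_logprobs]
```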
2010
Multi-Level Structured Models for Document-Level Sentiment Classification
Ainur Yessenalina | Yisong Yue | Claire Cardie
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing