Cheryl Lee
2025
UniDebugger: Hierarchical Multi-Agent Framework for Unified Software Debugging
Cheryl Lee | Chunqiu Steven Xia | Longji Yang | Jen-tse Huang | Zhouruixing Zhu | Lingming Zhang | Michael R. Lyu
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Software debugging is a time-consuming endeavor involving a series of steps, such as fault localization and patch generation, each requiring thorough analysis and a deep understanding of the underlying logic. While large language models (LLMs) demonstrate promising potential in coding tasks, their performance in debugging remains limited. Current LLM-based methods often focus on isolated steps and struggle with complex bugs. In this paper, we propose the first end-to-end framework, UniDebugger, for unified debugging through multi-agent synergy. It mimics the entire cognitive process of developers, with each agent specialized as a particular component of this process rather than mirroring the actions of an independent expert as in previous multi-agent systems. Agents are coordinated through a three-level design, following a cognitive model of debugging, allowing adaptive handling of bugs of varying complexity. Experiments on extensive benchmarks demonstrate that UniDebugger significantly outperforms state-of-the-art repair methods, fixing 1.25x to 2.56x as many bugs on the repo-level benchmark Defects4J. This performance is achieved without requiring ground-truth root-cause code statements, unlike the baselines. Our source code is available at: https://github.com/BEbillionaireUSD/UniDebugger.
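The abstract describes the architecture only at a high level, so the sketch below is a hypothetical illustration of what a hierarchical, role-specialized debugging pipeline could look like, not the authors' UniDebugger implementation. All names here (Bug, Patch, locate_fault, generate_patch, review_patch, debug) and the escalation logic are invented for illustration.

```python
# Hypothetical sketch of a three-level, role-specialized debugging pipeline.
# Names and escalation logic are invented; this is NOT the paper's implementation.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Bug:
    report: str   # failing-test output or issue description
    source: str   # code under suspicion


@dataclass
class Patch:
    diff: str
    passes_tests: bool


def locate_fault(bug: Bug) -> str:
    """Level 1: a localization agent narrows the bug to suspicious code."""
    return bug.source  # placeholder: a real agent would rank statements


def generate_patch(bug: Bug, suspicious_code: str) -> Patch:
    """Level 2: a repair agent drafts a candidate fix for the located code."""
    return Patch(diff=f"--- candidate fix for: {suspicious_code[:40]}", passes_tests=False)


def review_patch(bug: Bug, patch: Patch) -> Patch:
    """Level 3: a reviewer agent validates the patch and requests rework if needed."""
    return Patch(diff=patch.diff + " (reviewed)", passes_tests=True)


def debug(bug: Bug, max_rounds: int = 3) -> Optional[Patch]:
    """Coordinate the three levels, escalating only when a cheaper level fails."""
    for _ in range(max_rounds):
        suspicious = locate_fault(bug)
        patch = generate_patch(bug, suspicious)
        if not patch.passes_tests:
            patch = review_patch(bug, patch)  # escalate to the reviewer level
        if patch.passes_tests:
            return patch
    return None


if __name__ == "__main__":
    fixed = debug(Bug(report="NullPointerException in parse()", source="return obj.field;"))
    print(fixed.diff if fixed else "no plausible patch found")
```

The point of the sketch is the coordination pattern: simpler bugs are resolved by the lower levels alone, and the more expensive reviewer level is invoked only when earlier attempts fail.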
Learning to Ask: When LLM Agents Meet Unclear Instruction
Wenxuan Wang | Shi Juluan | Zixuan Ling | Yuk-Kit Chan | Chaozheng Wang | Cheryl Lee | Youliang Yuan | Jen-tse Huang | Wenxiang Jiao | Michael R. Lyu
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Equipped with the capability to call functions, modern LLM agents can leverage external tools to address a range of tasks unattainable through language skills alone. However, the effective execution of these tools relies heavily not just on the advanced capabilities of LLM agents but also on precise user instructions, which often cannot be ensured in the real world. To evaluate LLM agents' tool use under imperfect instructions, we meticulously examine real-world instructions queried from users, analyze the error patterns, and build a challenging tool-use benchmark called Noisy ToolBench. We find that, due to the next-token prediction training objective, LLM agents tend to arbitrarily fill in missing arguments, which may lead to hallucinations and risks. To address this issue, we propose a novel framework, Ask-when-Needed, which prompts LLM agents to ask questions to users whenever they encounter obstacles due to unclear instructions. Moreover, to reduce the manual labor involved in user-LLM interaction and to assess LLM agents' tool use from both accuracy and efficiency perspectives, we design an automated evaluation tool named ToolEvaluator. Our experiments demonstrate that Ask-when-Needed significantly outperforms existing tool-learning frameworks on Noisy ToolBench. We will release all related code and datasets to support future research.
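As an illustration of the ask-when-needed idea, the minimal sketch below checks a user request against a tool's required arguments and asks a clarifying question instead of guessing a missing value. It is not the authors' implementation; the tool (book_flight), the schema, and the helpers (extract_args, ask_when_needed, ask_user) are hypothetical.

```python
# Hypothetical sketch of an "ask-when-needed" tool-calling loop: before calling a
# tool, verify the required arguments and ask the user for anything missing rather
# than hallucinating it. Not the paper's Ask-when-Needed implementation.
from typing import Callable, Dict, List


def book_flight(origin: str, destination: str, date: str) -> str:
    """Toy tool the agent can call once all arguments are known."""
    return f"Booked {origin} -> {destination} on {date}"


TOOL_SCHEMA: Dict[str, List[str]] = {"book_flight": ["origin", "destination", "date"]}


def extract_args(instruction: str) -> Dict[str, str]:
    """Stand-in for an LLM extracting arguments from a (possibly vague) request."""
    args: Dict[str, str] = {}
    if "to Tokyo" in instruction:
        args["destination"] = "Tokyo"
    if "from Hong Kong" in instruction:
        args["origin"] = "Hong Kong"
    return args  # "date" stays missing if the user never mentioned one


def ask_when_needed(instruction: str, ask_user: Callable[[str], str]) -> str:
    """Ask a clarifying question for each missing argument instead of guessing it."""
    args = extract_args(instruction)
    for name in TOOL_SCHEMA["book_flight"]:
        if name not in args:
            args[name] = ask_user(f"Could you tell me the {name} for your flight?")
    return book_flight(**args)


if __name__ == "__main__":
    # Simulated user who answers any clarifying question with a fixed date.
    print(ask_when_needed("Book me a flight from Hong Kong to Tokyo",
                          ask_user=lambda question: "2025-12-01"))
```

The design choice being illustrated is the guard before the tool call: the agent only proceeds once every schema-required argument is grounded in either the instruction or an explicit user answer.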