Enabling Self-Improving Agents to Learn at Test Time With Human-In-The-Loop Guidance
Yufei He | Ruoyu Li | Alex Chen | Yue Liu | Yulin Chen | Yuan Sui | Cheng Chen | Yi Zhu | Luca Luo | Frank Yang | Bryan Hooi
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Industry Track
Large language model (LLM) agents often struggle in environments where rules and required domain knowledge change frequently, such as regulatory compliance and user risk screening. To address this limitation, we propose the Adaptive Reflective Interactive Agent (ARIA), an LLM agent framework designed to continuously learn updated domain knowledge at test time. ARIA assesses its own uncertainty through structured self-dialogue, proactively identifies knowledge gaps, and requests targeted explanations or corrections from human experts. It then systematically updates an internal, timestamped knowledge repository with the provided guidance, detecting and resolving conflicting or outdated knowledge through comparisons and clarification queries. We evaluate ARIA on a realistic customer due diligence name-screening task on a global payment platform, alongside publicly available dynamic knowledge tasks. Results demonstrate significant gains in adaptability and accuracy over baselines that use standard offline fine-tuning and existing self-improving agents. ARIA has been deployed on a global payment platform serving over 150 million monthly active users.
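The abstract does not include code, but a minimal sketch may help illustrate the timestamped knowledge repository and conflict handling it describes. The Python below is an assumption-laden illustration: the names `KnowledgeEntry`, `KnowledgeRepository`, and the example topics are hypothetical and do not reflect ARIA's actual implementation or API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class KnowledgeEntry:
    """One piece of domain guidance with provenance and a timestamp (hypothetical)."""
    topic: str
    guidance: str
    source: str
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


class KnowledgeRepository:
    """Hypothetical timestamped store: newer expert guidance supersedes older
    entries on the same topic, and conflicts are queued for clarification."""

    def __init__(self) -> None:
        self._entries: dict[str, KnowledgeEntry] = {}
        self.pending_clarifications: list[tuple[KnowledgeEntry, KnowledgeEntry]] = []

    def update(self, new_entry: KnowledgeEntry) -> None:
        existing = self._entries.get(new_entry.topic)
        if existing is None or existing.guidance == new_entry.guidance:
            self._entries[new_entry.topic] = new_entry
            return
        # Conflicting guidance on the same topic: keep the newer entry but
        # queue the pair so the agent can ask a human expert to confirm.
        newer, older = sorted(
            (existing, new_entry), key=lambda e: e.recorded_at, reverse=True
        )
        self._entries[new_entry.topic] = newer
        self.pending_clarifications.append((older, newer))

    def lookup(self, topic: str) -> KnowledgeEntry | None:
        return self._entries.get(topic)


if __name__ == "__main__":
    repo = KnowledgeRepository()
    repo.update(KnowledgeEntry("sanctions_list", "Screen against list v3.", "expert_A"))
    repo.update(KnowledgeEntry("sanctions_list", "Screen against list v4.", "expert_B"))
    print(repo.lookup("sanctions_list").guidance)   # newest guidance wins
    print(len(repo.pending_clarifications))          # 1 conflict queued for review
```

In this toy version the most recent entry wins by default while the superseded entry is kept for a clarification query, mirroring the paper's described comparison-and-clarification step at a very coarse level.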