SAMULE: Self-Learning Agents Enhanced by Multi-level Reflection

Yubin Ge, Salvatore Romeo, Jason Cai, Monica Sunkara, Yi Zhang


Abstract
Despite rapid advancements in LLM agents, they still struggle to generate meaningful reflections, owing to inadequate error analysis and a reliance on rare successful trajectories, especially in complex tasks. In this work, we propose SAMULE, a new framework for self-learning agents powered by a retrospective language model trained via Multi-Level Reflection Synthesis. It first synthesizes high-quality reflections across three complementary levels: Single-Trajectory Learning (micro-level) for detailed error correction, Intra-Task Learning (meso-level) to build error taxonomies across multiple trials of the same task, and Inter-Task Learning (macro-level) to extract transferable insights from same-type errors across diverse task failures. We then fine-tune a language model, serving as the retrospective model, to generate reflections at inference time. We further extend our framework to interactive settings through a foresight-based reflection mechanism, enabling agents to proactively reflect and adapt during user interactions by comparing predicted and actual responses. Extensive experiments on three challenging benchmarks—TravelPlanner, NATURAL PLAN, and Tau-bench—demonstrate that our approach significantly outperforms reflection-based baselines. Our results highlight the critical role of well-designed reflection synthesis and failure-centric learning in building self-improving LLM agents.
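To make the three levels concrete, the sketch below shows one plausible way to organize failed trajectories for reflection synthesis. This is a hypothetical illustration, not the authors' implementation: the `Trajectory` dataclass, its fields, and the grouping logic are all assumptions; in the paper, reflections would be generated by an LLM rather than by the string templates used here.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Trajectory:
    """A single (hypothetical) agent trial: task, steps taken, and failure label."""
    task_id: str
    error_type: str                      # label from an assumed error taxonomy
    steps: list = field(default_factory=list)

def micro_reflection(traj: Trajectory) -> str:
    """Single-Trajectory Learning: a detailed correction for one failed trial."""
    return f"[micro] task {traj.task_id}: fix '{traj.error_type}' error"

def meso_taxonomy(trajs: list) -> dict:
    """Intra-Task Learning: collect the error types seen across trials of the same task."""
    by_task = defaultdict(set)
    for t in trajs:
        by_task[t.task_id].add(t.error_type)
    return {task: sorted(errs) for task, errs in by_task.items()}

def macro_insights(trajs: list) -> dict:
    """Inter-Task Learning: pool errors of the same type that recur across different tasks."""
    by_error = defaultdict(set)
    for t in trajs:
        by_error[t.error_type].add(t.task_id)
    # only error types spanning multiple tasks yield transferable insights
    return {err: sorted(tasks) for err, tasks in by_error.items() if len(tasks) > 1}

failures = [
    Trajectory("plan_trip_1", "budget_overflow"),
    Trajectory("plan_trip_1", "missing_constraint"),
    Trajectory("plan_trip_2", "budget_overflow"),
]
micro = [micro_reflection(t) for t in failures]
meso = meso_taxonomy(failures)
macro = macro_insights(failures)
```

In this toy setup, `meso` maps each task to its observed error types, while `macro` keeps only `budget_overflow`, since it recurs across two distinct tasks; the resulting reflections would then serve as supervision for fine-tuning the retrospective model.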
Anthology ID:
2025.emnlp-main.839
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
16602–16621
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.839/
Cite (ACL):
Yubin Ge, Salvatore Romeo, Jason Cai, Monica Sunkara, and Yi Zhang. 2025. SAMULE: Self-Learning Agents Enhanced by Multi-level Reflection. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 16602–16621, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
SAMULE: Self-Learning Agents Enhanced by Multi-level Reflection (Ge et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.839.pdf
Checklist:
2025.emnlp-main.839.checklist.pdf