Positive Experience Reflection for Agents in Interactive Text Environments

Philip Lippmann, Matthijs T. J. Spaan, Jie Yang


Abstract
Intelligent agents designed for interactive environments face significant challenges in text-based games, a domain that demands complex reasoning and adaptability. While agents based on large language models (LLMs) that use self-reflection have shown promise, they struggle when they are initially successful and exhibit reduced effectiveness when smaller LLMs are used. We introduce Sweet&Sour, a novel approach that addresses these limitations of existing reflection methods by incorporating positive experiences and managed memory to enrich the context available to the agent at decision time. Our comprehensive analysis spans both closed- and open-source LLMs and demonstrates the effectiveness of Sweet&Sour in improving agent performance, particularly in scenarios where previous approaches fall short.
Anthology ID:
2025.realm-1.10
Volume:
Proceedings of the 1st Workshop for Research on Agent Language Models (REALM 2025)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Ehsan Kamalloo, Nicolas Gontier, Xing Han Lu, Nouha Dziri, Shikhar Murty, Alexandre Lacoste
Venues:
REALM | WS
Publisher:
Association for Computational Linguistics
Pages:
131–142
URL:
https://preview.aclanthology.org/display_plenaries/2025.realm-1.10/
Cite (ACL):
Philip Lippmann, Matthijs T. J. Spaan, and Jie Yang. 2025. Positive Experience Reflection for Agents in Interactive Text Environments. In Proceedings of the 1st Workshop for Research on Agent Language Models (REALM 2025), pages 131–142, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Positive Experience Reflection for Agents in Interactive Text Environments (Lippmann et al., REALM 2025)
PDF:
https://preview.aclanthology.org/display_plenaries/2025.realm-1.10.pdf