Language Guided Exploration for RL Agents in Text Environments

Hitesh Golchha, Sahil Yerawar, Dhruvesh Patel, Soham Dan, Keerthiram Murugesan


Abstract
Real-world sequential decision making is characterized by sparse rewards and large decision spaces, posing significant difficulty for experiential learning systems like tabula rasa reinforcement learning (RL) agents. Large Language Models (LLMs), with a wealth of world knowledge, can help RL agents learn quickly and adapt to distribution shifts. In this work, we introduce the Language Guided Exploration (LGE) framework, which uses a pre-trained language model (called GUIDE) to provide decision-level guidance to an RL agent (called EXPLORER). We observe that on ScienceWorld (Wang et al., 2022), a challenging text environment, LGE significantly outperforms vanilla RL agents and also outperforms more sophisticated methods such as Behaviour Cloning and the Text Decision Transformer.
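The abstract only names the two components; as a hedged illustration of how decision-level guidance from a language model could shape an RL agent's exploration, the minimal sketch below re-weights an explorer's action distribution with scores from a guide. The keyword-overlap `guide_score`, the mixing weight `alpha`, and all function names are illustrative assumptions standing in for a pre-trained language model, not the paper's actual implementation.

```python
# Hypothetical sketch: mixing a GUIDE scorer with an EXPLORER policy.
# The guide here is a keyword-overlap stand-in; the paper's GUIDE is a
# pre-trained language model, and this mixing rule is an assumption.
import numpy as np

def guide_score(observation: str, actions: list[str]) -> np.ndarray:
    """Stand-in for an LM-based scorer: rates each candidate action
    by word overlap with the observation (illustrative only)."""
    obs_words = set(observation.lower().split())
    scores = np.array(
        [len(obs_words & set(a.lower().split())) + 1e-3 for a in actions]
    )
    return scores / scores.sum()

def explorer_policy(observation: str, actions: list[str]) -> np.ndarray:
    """Stand-in for the RL agent's action distribution
    (uniform here, i.e. before any learning)."""
    return np.full(len(actions), 1.0 / len(actions))

def guided_action(observation: str, actions: list[str],
                  alpha: float = 0.5, rng=None) -> str:
    """Mix the GUIDE and EXPLORER distributions and sample an action;
    alpha controls how strongly the guide shapes exploration."""
    rng = rng or np.random.default_rng(0)
    p = ((1 - alpha) * explorer_policy(observation, actions)
         + alpha * guide_score(observation, actions))
    p /= p.sum()
    return actions[rng.choice(len(actions), p=p)]

if __name__ == "__main__":
    obs = "You are in the kitchen. A thermometer is on the table."
    candidates = ["pick up thermometer", "go to hallway", "open fridge"]
    print(guided_action(obs, candidates))
```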
Anthology ID: 2024.findings-naacl.7
Volume: Findings of the Association for Computational Linguistics: NAACL 2024
Month: June
Year: 2024
Address: Mexico City, Mexico
Editors: Kevin Duh, Helena Gomez, Steven Bethard
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 93–102
URL: https://aclanthology.org/2024.findings-naacl.7
Cite (ACL): Hitesh Golchha, Sahil Yerawar, Dhruvesh Patel, Soham Dan, and Keerthiram Murugesan. 2024. Language Guided Exploration for RL Agents in Text Environments. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 93–102, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal): Language Guided Exploration for RL Agents in Text Environments (Golchha et al., Findings 2024)
PDF: https://preview.aclanthology.org/naacl24-info/2024.findings-naacl.7.pdf
Copyright: 2024.findings-naacl.7.copyright.pdf