Natural Language-based State Representation in Deep Reinforcement Learning

Md Masudur Rahman, Yexiang Xue


Abstract
This paper investigates the potential of using natural language descriptions as an alternative to direct image-based observations for learning policies in reinforcement learning. Because image-based observations are inherently difficult to manage, containing abundant and often irrelevant information, we propose a method that compresses images into a natural language form for state representation. This approach improves interpretability and leverages the processing capabilities of large language models. We conducted several experiments on tasks that require image-based observations. The results demonstrate that policies trained on natural language descriptions of images generalize better than those trained directly from images, emphasizing the potential of this approach in practical settings.
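A minimal sketch of the idea described in the abstract: an observation wrapper that replaces image frames with encoded natural-language descriptions before they reach the policy. This is not the paper's implementation; the gymnasium API usage is standard, but caption_fn, text_encoder, and DummyImageEnv are hypothetical stand-ins for whatever captioning and text-embedding components a real pipeline would use.

import numpy as np
import gymnasium as gym


class DummyImageEnv(gym.Env):
    """Tiny stand-in environment that emits random 64x64 RGB frames."""

    def __init__(self):
        self.observation_space = gym.spaces.Box(0, 255, shape=(64, 64, 3), dtype=np.uint8)
        self.action_space = gym.spaces.Discrete(2)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        return self.observation_space.sample(), {}

    def step(self, action):
        # obs, reward, terminated, truncated, info
        return self.observation_space.sample(), 0.0, False, False, {}


class CaptionObservationWrapper(gym.ObservationWrapper):
    """Replaces each image observation with an encoded natural-language description."""

    def __init__(self, env, caption_fn, text_encoder, embed_dim):
        super().__init__(env)
        self.caption_fn = caption_fn      # image -> description string (e.g., a captioning model)
        self.text_encoder = text_encoder  # string -> fixed-size vector the policy consumes
        self.observation_space = gym.spaces.Box(-np.inf, np.inf, shape=(embed_dim,), dtype=np.float32)

    def observation(self, image_obs):
        description = self.caption_fn(image_obs)  # e.g., "the ball is to the left of the paddle"
        return np.asarray(self.text_encoder(description), dtype=np.float32)


if __name__ == "__main__":
    # Placeholder stand-ins; a real pipeline would plug in a vision-language
    # captioner and a text-embedding model here (both hypothetical above).
    dummy_caption = lambda img: "an object near the center of the screen"
    dummy_encoder = lambda text: np.random.default_rng(abs(hash(text)) % 2**32).normal(size=16)

    env = CaptionObservationWrapper(DummyImageEnv(), dummy_caption, dummy_encoder, embed_dim=16)
    obs, _ = env.reset()
    print(obs.shape)  # (16,) -- a text embedding instead of a 64x64x3 image

Wrapping the environment this way keeps the downstream RL algorithm unchanged: the policy only ever sees the encoded description, so any standard agent can be trained on the language-based state representation.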
Anthology ID: 2024.findings-naacl.83
Volume: Findings of the Association for Computational Linguistics: NAACL 2024
Month: June
Year: 2024
Address: Mexico City, Mexico
Editors: Kevin Duh, Helena Gomez, Steven Bethard
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 1310–1319
URL: https://preview.aclanthology.org/build-pipeline-with-new-library/2024.findings-naacl.83/
DOI: 10.18653/v1/2024.findings-naacl.83
Cite (ACL): Md Masudur Rahman and Yexiang Xue. 2024. Natural Language-based State Representation in Deep Reinforcement Learning. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 1310–1319, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal): Natural Language-based State Representation in Deep Reinforcement Learning (Rahman & Xue, Findings 2024)
PDF: https://preview.aclanthology.org/build-pipeline-with-new-library/2024.findings-naacl.83.pdf