Dongwon Ryu
2023
A Minimal Approach for Natural Language Action Space in Text-based Games
Dongwon Ryu | Meng Fang | Gholamreza Haffari | Shirui Pan | Ehsan Shareghi
Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)
Text-based games (TGs) are language-based interactive environments for reinforcement learning. While language models (LMs) and knowledge graphs (KGs) are commonly used for handling the large action space in TGs, it is unclear whether these techniques are necessary or overused. In this paper, we revisit the challenge of exploring the action space in TGs and propose 𝜖-admissible exploration, a minimal approach that utilizes admissible actions during the training phase. Additionally, we present a text-based actor-critic (TAC) agent that produces textual commands for the game solely from game observations, without requiring any KG or LM. Our method, on average across 10 games from Jericho, outperforms strong baselines and state-of-the-art agents that use LMs and KGs. Our approach highlights that a much lighter model design, with a fresh perspective on utilizing the information within the environments, suffices for effective exploration of exponentially large action spaces.
2022
Fire Burns, Sword Cuts: Commonsense Inductive Bias for Exploration in Text-based Games
Dongwon Ryu | Ehsan Shareghi | Meng Fang | Yunqiu Xu | Shirui Pan | Reza Haf
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Text-based games (TGs) are exciting testbeds for developing deep reinforcement learning techniques due to their partially observed environments and large action spaces. In these games, the agent learns to explore the environment via natural language interactions with the game simulator. A fundamental challenge in TGs is the efficient exploration of the large action space when the agent has not yet acquired enough knowledge about the environment. We propose CommExpl, an exploration technique that injects external commonsense knowledge, via a pretrained language model (LM), into the agent during training when the agent is most uncertain about its next action. Our method improves the collected game scores during training in four out of nine games from Jericho. Additionally, the produced trajectories of actions exhibit lower perplexity when tested with a pretrained LM, indicating closer alignment with human language.
2021
Multilingual Simultaneous Neural Machine Translation
Philip Arthur | Dongwon Ryu | Gholamreza Haffari
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
Co-authors
- Gholamreza Haffari 2
- Meng Fang 2
- Shirui Pan 2
- Ehsan Shareghi 2
- Philip Arthur 1