
Generalization in Text-based Games via Hierarchical Reinforcement Learning

by   Yunqiu Xu, et al.

Deep reinforcement learning provides a promising approach to text-based games for studying natural language communication between humans and artificial agents. However, generalization remains a major challenge, as agents depend critically on the complexity and variety of the training tasks. In this paper, we address this problem by introducing a hierarchical framework built upon a knowledge-graph-based RL agent. At the high level, a meta-policy decomposes the whole game into a set of subtasks specified by textual goals and selects one of them based on the knowledge graph (KG). A low-level sub-policy then performs goal-conditioned reinforcement learning. We carry out experiments on games with various difficulty levels and show that the proposed method enjoys favorable generalizability.
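The two-level control loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class names, the word-overlap heuristics, and the dictionary-based observation format are all assumptions made for the example; the paper's meta-policy and sub-policy are learned networks.

```python
# Hypothetical sketch of a hierarchical agent for text-based games:
# a high-level meta-policy picks a textual subtask goal using the
# knowledge graph (KG), and a low-level sub-policy acts conditioned
# on that goal. All names and heuristics here are illustrative only.

class MetaPolicy:
    """High level: selects one textual subtask goal, conditioned on the KG."""
    def select_subtask(self, knowledge_graph, subtasks):
        # Toy heuristic standing in for a learned scorer: prefer the
        # subtask whose goal words overlap most with KG entities.
        def overlap(goal):
            return len(set(goal.split()) & set(knowledge_graph))
        return max(subtasks, key=overlap)

class SubPolicy:
    """Low level: goal-conditioned action selection."""
    def act(self, observation, goal):
        # Toy heuristic standing in for a learned policy: choose the
        # admissible action sharing the most words with the goal text.
        actions = observation["admissible_actions"]
        def score(action):
            return len(set(action.split()) & set(goal.split()))
        return max(actions, key=score)

def play_step(meta, sub, knowledge_graph, subtasks, observation):
    """One step of the hierarchy: pick a goal, then act toward it."""
    goal = meta.select_subtask(knowledge_graph, subtasks)
    action = sub.act(observation, goal)
    return goal, action
```

In the actual framework, the sub-policy is trained with goal-conditioned RL, so its reward depends on progress toward the selected textual goal rather than on a fixed word-overlap score.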



