Counting to Explore and Generalize in Text-based Games

06/29/2018
by Xingdi Yuan, et al.

We propose a recurrent RL agent with an episodic exploration mechanism that helps discover good policies in text-based game environments. We show promising results on a set of generated text-based games of varying difficulty where the goal is to collect a coin located at the end of a chain of rooms. In contrast to previous text-based RL approaches, we observe that our agent learns policies that generalize to unseen games of greater difficulty.
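The exploration mechanism is episodic and count-based: the agent earns an intrinsic bonus for observations it has seen rarely within the current episode, which encourages it to push forward through the chain of rooms. The sketch below is a minimal illustration of such an episodic count bonus, not the paper's implementation: the names EpisodicCountBonus and obs_hash are hypothetical, observations are assumed to be text strings hashed into state keys, and the 1/sqrt(n) decay is one common choice of shaping.

```python
from collections import defaultdict
import hashlib


def obs_hash(observation: str) -> str:
    """Hash the textual observation to identify a state within an episode."""
    return hashlib.sha1(observation.encode("utf-8")).hexdigest()


class EpisodicCountBonus:
    """Episodic count-based exploration bonus.

    Counts are reset at the start of every episode, so the agent is rewarded
    for reaching observations it has not (or rarely) visited in this episode.
    """

    def __init__(self, bonus: float = 1.0):
        self.bonus = bonus
        self.counts = defaultdict(int)

    def reset(self) -> None:
        """Call at the start of each episode."""
        self.counts.clear()

    def __call__(self, observation: str) -> float:
        key = obs_hash(observation)
        self.counts[key] += 1
        # Bonus decays with the within-episode visit count; 1/sqrt(n) is an
        # assumption here, the paper's exact shaping may differ.
        return self.bonus / (self.counts[key] ** 0.5)


# Schematic use inside a training loop:
# bonus_fn = EpisodicCountBonus(bonus=1.0)
# bonus_fn.reset()                                  # at episode start
# shaped_reward = env_reward + bonus_fn(observation_text)
```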

Related research

08/11/2021  An Approach to Partial Observability in Games: Learning to Both Act and Observe
Reinforcement learning (RL) is successful at learning to play games wher...

09/24/2020  Bootstrapped Q-learning with Context Relevant Observation Pruning to Generalize in Text-based Games
We show that Reinforcement Learning (RL) methods for solving Text-Based ...

10/16/2021  Case-based Reasoning for Better Generalization in Text-Adventure Games
Text-based games (TBG) have emerged as promising environments for drivin...

04/20/2018  Delegating via Quitting Games
Delegation allows an agent to request that another agent completes a tas...

02/21/2020  Learning Dynamic Knowledge Graphs to Generalize on Text-Based Games
Playing text-based games requires skill in processing natural language a...

03/30/2020  Agent57: Outperforming the Atari Human Benchmark
Atari games have been a long-standing benchmark in the reinforcement lea...

05/29/2018  Observe and Look Further: Achieving Consistent Performance on Atari
Despite significant advances in the field of deep Reinforcement Learning...
