A Consciousness-Inspired Planning Agent for Model-Based Reinforcement Learning

06/03/2021
by   Mingde Zhao, et al.

We present an end-to-end, model-based deep reinforcement learning agent that dynamically attends to relevant parts of its state in order to plan and to generalize better out of distribution. The agent's architecture uses a set representation and a bottleneck mechanism, forcing the number of entities the agent attends to at each planning step to be small. In experiments with customized MiniGrid environments with different dynamics, we observe that this design allows agents to learn to plan effectively by attending to the relevant objects, leading to better out-of-distribution generalization.
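The bottleneck described above can be sketched as a hard top-k attention step over a set of entity vectors: score every entity against a planning query, keep only the k highest-scoring ones, and normalize attention over that small subset. The function name, dimensions, and scoring rule below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def attention_bottleneck(entities, query, k=4):
    """Select the k most relevant entities from a set via attention scores.

    entities: (n, d) array, the set representation of the state
    query:    (d,) array, a planning-step query vector
    Returns the selected (k, d) subset and a soft attention over it.
    """
    scores = entities @ query                     # relevance score per entity
    top = np.argsort(scores)[-k:]                 # hard bottleneck: keep only k entities
    selected = entities[top]
    weights = np.exp(scores[top] - scores[top].max())
    weights /= weights.sum()                      # softmax over the selected subset only
    return selected, weights

# Toy usage: 8 entities of dimension 3, bottleneck of size 2.
rng = np.random.default_rng(0)
ents = rng.normal(size=(8, 3))
q = rng.normal(size=3)
subset, w = attention_bottleneck(ents, q, k=2)
```

The key design point is that downstream planning sees only the `k` selected entities, so the cost of each planning step is independent of the total number of objects in the state.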


Related research

10/24/2020
Planning with Exploration: Addressing Dynamics Bottleneck in Model-based Reinforcement Learning
Model-based reinforcement learning is a framework in which an agent lear...

11/08/2020
On the role of planning in model-based deep reinforcement learning
Model-based planning is often thought to be necessary for deep, careful ...

10/29/2019
Generalization of Reinforcement Learners with Working and Episodic Memory
Memory is an important aspect of intelligence and plays a role in many d...

01/08/2022
Assessing Policy, Loss and Planning Combinations in Reinforcement Learning using a New Modular Architecture
The model-based reinforcement learning paradigm, which uses planning alg...

04/09/2015
Projective simulation with generalization
The ability to generalize is an important feature of any intelligent age...

10/19/2018
Planification par fusions incrémentales de graphes (Planning by incremental graph merging)
In this paper, we introduce a generic and fresh model for distributed pl...

07/05/2020
Selective Dyna-style Planning Under Limited Model Capacity
In model-based reinforcement learning, planning with an imperfect model ...
