Learning model-based planning from scratch

by Razvan Pascanu et al.

Conventional wisdom holds that model-based planning is a powerful approach to sequential decision-making. It is often very challenging in practice, however, because while a model can be used to evaluate a plan, it does not prescribe how to construct a plan. Here we introduce the "Imagination-based Planner", the first model-based, sequential decision-making agent that can learn to construct, evaluate, and execute plans. Before any action, it can perform a variable number of imagination steps, which involve proposing an imagined action and evaluating it with its model-based imagination. All imagined actions and outcomes are aggregated, iteratively, into a "plan context" which conditions future real and imagined actions. The agent can even decide how to imagine: testing out alternative imagined actions, chaining sequences of actions together, or building a more complex "imagination tree" by navigating flexibly among the previously imagined states using a learned policy. And our agent can learn to plan economically, jointly optimizing for external rewards and computational costs associated with using its imagination. We show that our architecture can learn to solve a challenging continuous control problem, and also learn elaborate planning strategies in a discrete maze-solving task. Our work opens a new direction toward learning the components of a model-based planning system and how to use them.
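The loop the abstract describes — propose an imagined action, evaluate it with the learned model, aggregate the outcome into a plan context, then commit to a real action — can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: all names (`rollout_model`, `propose_action`, `plan_and_act`) and the stand-in model and policy are assumptions, and the real agent uses learned neural components and a learned manager policy to decide whether to act or keep imagining.

```python
import random

def rollout_model(state, action):
    # Stand-in for the learned environment model ("imagination"):
    # predicts the next state and the reward of taking `action`.
    next_state = state + action
    reward = -abs(next_state)  # toy objective: drive the state toward zero
    return next_state, reward

def propose_action(state, context):
    # Stand-in for the imagination/acting policy, which in the paper
    # is conditioned on the aggregated plan context.
    return random.uniform(-1.0, 1.0)

def plan_and_act(state, max_imaginations=3):
    # "Plan context": an iteratively built record of imagined
    # actions and their model-predicted outcomes.
    context = []
    for _ in range(max_imaginations):
        action = propose_action(state, context)
        imagined_state, imagined_reward = rollout_model(state, action)
        context.append((action, imagined_state, imagined_reward))
    # Execute the imagined action with the best predicted reward.
    best_action, _, _ = max(context, key=lambda entry: entry[2])
    return best_action
```

In the full architecture the number of imagination steps is not fixed: a learned manager trades off external reward against the computational cost of each extra imagination step, and imagined rollouts can branch from any previously imagined state to form the "imagination tree".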




