The Predictron: End-To-End Learning and Planning

by David Silver et al.

One of the key challenges of artificial intelligence is to learn models that are effective in the context of planning. In this document we introduce the predictron architecture. The predictron consists of a fully abstract model, represented by a Markov reward process, that can be rolled forward multiple "imagined" planning steps. Each forward pass of the predictron accumulates internal rewards and values over multiple planning depths. The predictron is trained end-to-end so as to make these accumulated values accurately approximate the true value function. We applied the predictron to procedurally generated random mazes and a simulator for the game of pool. The predictron yielded significantly more accurate predictions than conventional deep neural network architectures.
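To make the rollout concrete, here is a minimal numerical sketch of the predictron's "imagined" planning loop: an abstract state is repeatedly advanced by a learned model that emits a reward and a discount, and the accumulated k-step returns (preturns) are collected at every depth. The linear parameters, state size `D`, depth `K`, and the fixed discount are all hypothetical stand-ins for the neural networks trained end-to-end in the paper, and the uniform average stands in for the paper's λ-weighted mixture of preturns.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8          # abstract state dimension (hypothetical size)
K = 4          # number of "imagined" planning steps

# Stand-in linear parameters; in the paper these are neural
# networks trained end-to-end on the value-prediction loss.
W_s = rng.normal(scale=0.3, size=(D, D))   # state transition
w_r = rng.normal(size=D)                   # reward head
w_v = rng.normal(size=D)                   # value head

def model(s):
    """One imagined step: next abstract state, reward, discount."""
    s_next = np.tanh(W_s @ s)
    reward = float(w_r @ s)
    gamma = 0.95                            # fixed discount for this sketch
    return s_next, reward, gamma

def value(s):
    """Value estimate of an abstract state."""
    return float(w_v @ s)

def preturns(s0, k_max):
    """Accumulated k-step returns g^k = r_1 + γ_1 r_2 + ... + (Π γ) v(s_k)."""
    gs = [value(s0)]                        # g^0: value of the initial state
    s, ret, disc = s0, 0.0, 1.0
    for _ in range(k_max):
        s, r, gamma = model(s)
        ret += disc * r
        disc *= gamma
        gs.append(ret + disc * value(s))
    return gs

s0 = np.tanh(rng.normal(size=D))           # abstract state from an encoder
gs = preturns(s0, K)                        # preturns at depths 0..K
g_mix = float(np.mean(gs))                  # uniform mix, in place of the λ-preturn
```

Training the real predictron regresses every `gs[k]` (and the mixture) toward the observed return, so gradients flow through the model, reward, discount, and value heads jointly.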



Value Iteration Networks

We introduce the value iteration network (VIN): a fully differentiable n...

Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model

Constructing agents with planning capabilities has long been one of the ...

TreeQN and ATreeC: Differentiable Tree Planning for Deep Reinforcement Learning

Combining deep model-free reinforcement learning with on-line planning i...

Planning with Abstract Learned Models While Learning Transferable Subtasks

We introduce an algorithm for model-based hierarchical reinforcement lea...

Universal Planning Networks

A key challenge in complex visuomotor control is learning abstract repre...

QMDP-Net: Deep Learning for Planning under Partial Observability

This paper introduces the QMDP-net, a neural network architecture for pl...

Code Repositories


WIP implementation of "The Predictron: End-To-End Learning and Planning" in Chainer
