The Predictron: End-To-End Learning and Planning

by David Silver, et al.

One of the key challenges of artificial intelligence is to learn models that are effective in the context of planning. In this paper we introduce the predictron architecture. The predictron consists of a fully abstract model, represented by a Markov reward process, that can be rolled forward multiple "imagined" planning steps. Each forward pass of the predictron accumulates internal rewards and values over multiple planning depths. The predictron is trained end-to-end so that these accumulated values accurately approximate the true value function. We applied the predictron to procedurally generated random mazes and a simulator for the game of pool. The predictron yielded significantly more accurate predictions than conventional deep neural network architectures.
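The core mechanism described above, rolling an abstract Markov reward process forward and accumulating internal rewards, discounts, and values into a value estimate at each planning depth, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the linear "model" and heads (`W_s`, `w_r`, `w_g`, `w_v`) are hypothetical stand-ins for the learned networks, and dimensions are chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM = 8  # hypothetical size of the abstract state

# Toy linear parameters standing in for learned networks.
W_s = rng.normal(scale=0.1, size=(STATE_DIM, STATE_DIM))  # abstract transition
w_r = rng.normal(scale=0.1, size=STATE_DIM)               # internal reward head
w_g = rng.normal(scale=0.1, size=STATE_DIM)               # internal discount head
w_v = rng.normal(scale=0.1, size=STATE_DIM)               # value head

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def value(s):
    """Value estimate of an abstract state."""
    return float(w_v @ s)

def model_step(s):
    """One 'imagined' planning step of the abstract Markov reward process."""
    s_next = np.tanh(W_s @ s)              # next abstract state
    reward = float(w_r @ s)                # internal reward
    gamma = sigmoid(float(w_g @ s))        # internal discount in (0, 1)
    return s_next, reward, gamma

def k_step_preturns(s0, k):
    """Accumulated value estimates at planning depths 0..k.

    Depth-n estimate: r_1 + g_1*r_2 + ... + (g_1*...*g_n) * v(s_n).
    """
    preturns = [value(s0)]       # depth 0: value of the initial state
    s, acc_r, acc_g = s0, 0.0, 1.0
    for _ in range(k):
        s, r, g = model_step(s)
        acc_r += acc_g * r       # discounted internal rewards so far
        acc_g *= g               # running product of internal discounts
        preturns.append(acc_r + acc_g * value(s))
    return preturns

g = k_step_preturns(rng.normal(size=STATE_DIM), k=4)
print(len(g))  # 5 estimates, one per planning depth 0..4
```

In the paper these depth-wise estimates are combined (e.g. with learned lambda-weights) and all parameters are trained end-to-end so the accumulated values match observed returns; the sketch only shows the forward accumulation.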




Value Iteration Networks

We introduce the value iteration network (VIN): a fully differentiable n...

Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model

Constructing agents with planning capabilities has long been one of the ...

TreeQN and ATreeC: Differentiable Tree Planning for Deep Reinforcement Learning

Combining deep model-free reinforcement learning with on-line planning i...

Universal Planning Networks

A key challenge in complex visuomotor control is learning abstract repre...

Incorporating Orientations into End-to-end Driving Model for Steering Control

In this paper, we present a novel end-to-end deep neural network model f...

QMDP-Net: Deep Learning for Planning under Partial Observability

This paper introduces the QMDP-net, a neural network architecture for pl...

Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing

Rising concern for the societal implications of artificial intelligence ...

Code Repositories


WIP implementation of "The Predictron: End-To-End Learning and Planning" in Chainer

