Learning Latent Dynamics for Planning from Pixels

11/12/2018 · by Danijar Hafner et al.

Planning has been very successful for control tasks with known environment dynamics. To leverage planning in unknown environments, the agent needs to learn the dynamics from interactions with the world. However, learning dynamics models that are accurate enough for planning has been a long-standing challenge, especially in image-based domains. We propose the Deep Planning Network (PlaNet), a purely model-based agent that learns the environment dynamics from pixels and chooses actions through online planning in latent space. To achieve high performance, the dynamics model must accurately predict the rewards ahead for multiple time steps. We approach this problem using a latent dynamics model with both deterministic and stochastic transition components, together with a generalized variational inference objective that we name latent overshooting. Using only pixel observations, our agent solves continuous control tasks with contact dynamics, partial observability, and sparse rewards. PlaNet uses significantly fewer episodes and reaches final performance close to, and sometimes higher than, top model-free algorithms.
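The "online planning in latent space" mentioned above is done with the cross-entropy method (CEM): sample candidate action sequences, roll them out through the learned latent model, and refit the sampling distribution to the highest-return sequences. The sketch below illustrates this loop under simplifying assumptions; `transition` and `reward` are hypothetical stand-ins for the learned latent dynamics and reward heads, and the hyperparameter values are illustrative rather than the paper's.

```python
import numpy as np

def cem_plan(init_state, transition, reward, action_dim,
             horizon=12, candidates=100, top_k=10, iterations=5):
    """Cross-entropy method planner over a learned latent model.

    `transition(state, action)` and `reward(state)` stand in for the
    learned latent dynamics and reward models (names are illustrative).
    Returns the first action of the best plan found.
    """
    mean = np.zeros((horizon, action_dim))
    std = np.ones((horizon, action_dim))
    for _ in range(iterations):
        # Sample candidate action sequences from the current belief.
        plans = mean + std * np.random.randn(candidates, horizon, action_dim)
        # Evaluate each candidate by imagined rollout in latent space.
        returns = np.zeros(candidates)
        for i, plan in enumerate(plans):
            state = init_state
            for action in plan:
                state = transition(state, action)
                returns[i] += reward(state)
        # Refit the sampling distribution to the elite sequences.
        elite = plans[np.argsort(returns)[-top_k:]]
        mean, std = elite.mean(axis=0), elite.std(axis=0)
    return mean[0]  # execute only the first action, then replan
```

Because only the first action is executed before replanning from the next observation, this is a model-predictive control loop; the learned model is never unrolled in the real environment, only in imagination.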



Code Repositories

- PlaNet: Deep Planning Network: control from pixels by latent planning with learned dynamics
- PlaNet_PyTorch: Unofficial re-implementation of "Learning Latent Dynamics for Planning from Pixels" (https://arxiv.org/abs/1811.04551) with PyTorch
- cwvae: Clockwork Variational Autoencoder
- cwvae-jax: Clockwork VAEs in JAX/Flax
- softagent: Algorithms for deformable object manipulation benchmarked in SoftGym