Embed to Control: A Locally Linear Latent Dynamics Model for Control from Raw Images

06/24/2015
by Manuel Watter, et al.

We introduce Embed to Control (E2C), a method for model learning and control of non-linear dynamical systems from raw pixel images. E2C consists of a deep generative model, belonging to the family of variational autoencoders, that learns to generate image trajectories from a latent space in which the dynamics is constrained to be locally linear. Our model is derived directly from an optimal control formulation in latent space, supports long-term prediction of image sequences and exhibits strong performance on a variety of complex control problems.
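To make the "locally linear latent dynamics" idea concrete, here is a minimal sketch (not the authors' code) of the core transition structure E2C imposes: each latent step follows z_{t+1} ≈ A_t z_t + B_t u_t + o_t, where the matrices A_t, B_t and offset o_t depend on the current latent state. The `local_dynamics` function below is a hypothetical stand-in for the network head that would predict these quantities; the state-dependent matrices are fabricated purely for illustration.

```python
import numpy as np

latent_dim, action_dim = 4, 2

def local_dynamics(z):
    """Hypothetical stand-in for the network head that outputs A_t, B_t, o_t.
    The matrices here are fabricated, smooth functions of z for illustration;
    in E2C they would be predicted by the learned model."""
    A = np.eye(latent_dim) + 0.01 * np.outer(np.tanh(z), z)
    B = 0.1 * np.ones((latent_dim, action_dim))
    o = 0.01 * np.tanh(z)
    return A, B, o

def rollout(z0, actions):
    """Long-term latent prediction by chaining locally linear steps:
    z_{t+1} = A_t z_t + B_t u_t + o_t, re-linearizing at each state."""
    z, traj = z0, [z0]
    for u in actions:
        A, B, o = local_dynamics(z)
        z = A @ z + B @ u + o
        traj.append(z)
    return np.stack(traj)

rng = np.random.default_rng(0)
z0 = rng.standard_normal(latent_dim)
actions = rng.standard_normal((10, action_dim))
traj = rollout(z0, actions)
print(traj.shape)  # (11, 4): initial state plus 10 predicted steps
```

Because every step is linear in (z, u) around the current state, standard linear-quadratic control machinery can be applied in the latent space, which is what makes this parameterization attractive for control from pixels.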


Related research

09/04/2019  Prediction, Consistency, Curvature: Representation Learning for Locally-Linear Control
Many real-world sequential decision-making problems can be formulated as...

03/02/2020  Predictive Coding for Locally-Linear Control
High-dimensional observations and unknown dynamics are major challenges ...

03/26/2021  Deformable Linear Object Prediction Using Locally Linear Latent Dynamics
We propose a framework for deformable linear object prediction. Predicti...

09/06/2021  Supervised DKRC with Images for Offline System Identification
Koopman spectral theory has provided a new perspective in the field of d...

07/03/2020  First Steps: Latent-Space Control with Semantic Constraints for Quadruped Locomotion
Traditional approaches to quadruped control frequently employ simplified...

10/24/2021  DiffSRL: Learning Dynamic-aware State Representation for Deformable Object Control with Differentiable Simulator
Dynamic state representation learning is an important task in robot lear...

09/29/2022  Learning Parsimonious Dynamics for Generalization in Reinforcement Learning
Humans are skillful navigators: We aptly maneuver through new places, re...
