Control-Aware Representations for Model-based Reinforcement Learning

06/24/2020
by Brandon Cui, et al.

A major challenge in modern reinforcement learning (RL) is efficient control of dynamical systems from high-dimensional sensory observations. Learning controllable embedding (LCE) is a promising approach that addresses this challenge by embedding the observations into a lower-dimensional latent space, estimating the latent dynamics, and utilizing it to perform control in the latent space. Two important questions in this area are how to learn a representation that is amenable to the control problem at hand, and how to achieve an end-to-end framework for representation learning and control. In this paper, we take a few steps towards addressing these questions. We first formulate an LCE model to learn representations that are suitable for use by a policy iteration style algorithm in the latent space. We call this model control-aware representation learning (CARL). We derive a loss function for CARL that has a close connection to the prediction, consistency, and curvature (PCC) principle for representation learning. We derive three implementations of CARL. In the offline implementation, we replace the locally-linear control algorithm (e.g., iLQR) used by the existing LCE methods with an RL algorithm, namely model-based soft actor-critic, and show that it results in significant improvement. In online CARL, we interleave representation learning and control, and demonstrate further gain in performance. Finally, we propose value-guided CARL, a variation in which we optimize a weighted version of the CARL loss function, where the weights depend on the TD-error of the current policy. We evaluate the proposed algorithms with extensive experiments on benchmark tasks and compare them with several LCE baselines.
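The value-guided variant weights the representation loss by the TD-error of the current policy, so that the embedding is refined most where the value estimates are least accurate. The paper defines the exact weighting; the sketch below is only one plausible instantiation, assuming softmax weights over absolute TD-errors (the function names and the temperature parameter are hypothetical, not from the paper):

```python
import numpy as np

def td_error_weights(td_errors, temperature=1.0):
    """Softmax weights from absolute TD-errors: transitions where the
    current policy's value estimate is least accurate get more weight."""
    scores = np.abs(np.asarray(td_errors, dtype=float)) / temperature
    scores -= scores.max()            # subtract max for numerical stability
    w = np.exp(scores)
    return w / w.sum()                # weights sum to 1

def value_guided_loss(per_sample_losses, td_errors, temperature=1.0):
    """Weighted version of a per-sample representation loss, with weights
    driven by the TD-error of the current policy (value-guided idea)."""
    w = td_error_weights(td_errors, temperature)
    return float(np.sum(w * np.asarray(per_sample_losses, dtype=float)))

# A transition with a large TD-error dominates the weighted loss.
weights = td_error_weights([0.1, 0.1, 2.0])
loss = value_guided_loss([0.5, 0.5, 0.5], [0.1, 0.1, 2.0])
```

With equal per-sample losses the weighted loss reduces to the unweighted value (here 0.5), while unequal TD-errors shift the optimization pressure toward poorly-predicted transitions.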


