Learning Actionable Representations from Visual Observations

08/02/2018
by Debidatta Dwibedi, et al.

In this work we explore a new approach for robots to teach themselves about the world simply by observing it. In particular, we investigate the effectiveness of learning task-agnostic representations for continuous control tasks. We extend Time-Contrastive Networks (TCN), which learn from visual observations, by embedding multiple frames jointly in the embedding space rather than a single frame. We show that by doing so we can encode both position and velocity attributes significantly more accurately. We test the usefulness of this self-supervised approach in a reinforcement learning setting: the representations learned by agents observing themselves take random actions, or observing other agents perform tasks successfully, enable the learning of continuous control policies with algorithms such as Proximal Policy Optimization (PPO), using only the learned embeddings as input. We also demonstrate significant improvements on the real-world Pouring dataset, with a relative error reduction of 39.4% for static attributes compared to the single-frame baseline. Video results are available at https://sites.google.com/view/actionablerepresentations .
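To make the multi-frame time-contrastive idea concrete, here is a minimal sketch of a triplet-style loss in which the anchor and positive are stacks of several consecutive frames taken close together in time, and the negative is a stack from a temporally distant point in the same video. The linear encoder, offsets, and margin below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def embed(frames_stack, W):
    # Hypothetical linear encoder: flattens a stack of frames and projects
    # it into an embedding space (a stand-in for the TCN's deep network).
    x = frames_stack.reshape(-1)
    z = W @ x
    return z / (np.linalg.norm(z) + 1e-8)  # L2-normalize the embedding

def time_contrastive_triplet_loss(video, W, t, pos_offset=1, neg_offset=15,
                                  n_frames=3, margin=0.2):
    # Multi-frame time-contrastive loss: embeddings of temporally close
    # stacks are pulled together; a temporally distant stack is pushed away.
    stack = lambda s: video[s:s + n_frames]
    anchor = embed(stack(t), W)
    positive = embed(stack(t + pos_offset), W)
    negative = embed(stack(t + neg_offset), W)
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

# Toy usage: 40 frames of 8x8 "video", random projection weights.
rng = np.random.default_rng(0)
video = rng.standard_normal((40, 8, 8))
W = rng.standard_normal((32, 3 * 8 * 8))  # 3 frames x 64 pixels -> 32-dim
loss = time_contrastive_triplet_loss(video, W, t=5)
```

Because each embedding covers several frames rather than one, it can in principle capture velocity as well as position, which is the motivation for the multi-frame extension described above.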


Related research

04/16/2019 - Temporal Cycle-Consistency Learning
We introduce a self-supervised representation learning method based on t...

11/14/2020 - PLAS: Latent Action Space for Offline Reinforcement Learning
The goal of offline reinforcement learning is to learn a policy from a f...

06/07/2021 - XIRL: Cross-embodiment Inverse Reinforcement Learning
We investigate the visual cross-embodiment imitation setting, in which a...

10/27/2021 - DreamerPro: Reconstruction-Free Model-Based Reinforcement Learning with Prototypical Representations
Top-performing Model-Based Reinforcement Learning (MBRL) agents, such as...

04/26/2022 - Stochastic Coherence Over Attention Trajectory For Continuous Learning In Video Streams
Devising intelligent agents able to live in an environment and learn by ...

05/31/2020 - Motion2Vec: Semi-Supervised Representation Learning from Surgical Videos
Learning meaningful visual representations in an embedding space can fac...

01/30/2022 - Contrastive Learning from Demonstrations
This paper presents a framework for learning visual representations from...
