Representation Matters: Improving Perception and Exploration for Robotics

11/03/2020
by Markus Wulfmeier, et al.

Projecting high-dimensional environment observations into lower-dimensional structured representations can considerably improve data-efficiency for reinforcement learning in domains with limited data such as robotics. Can a single generally useful representation be found? In order to answer this question, it is important to understand how the representation will be used by the agent and what properties such a 'good' representation should have. In this paper we systematically evaluate a number of common learnt and hand-engineered representations in the context of three robotics tasks: lifting, stacking and pushing of 3D blocks. The representations are evaluated in two use-cases: as input to the agent, or as a source of auxiliary tasks. Furthermore, the value of each representation is evaluated in terms of three properties: dimensionality, observability and disentanglement. We can significantly improve performance in both use-cases and demonstrate that some representations can perform on par with simulator states as agent inputs. Finally, our results challenge common intuitions by demonstrating that: 1) dimensionality strongly matters for auxiliary task generation, but is negligible for inputs, 2) observability of task-relevant aspects mostly affects the input representation use-case, and 3) disentanglement leads to better auxiliary tasks, but has only limited benefits for input representations. This work serves as a step towards a more systematic understanding of what makes a 'good' representation for control in robotics, enabling practitioners to make more informed choices for developing new learned or hand-engineered representations.
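The two use-cases described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's method: the dimensions, the fixed linear "encoder" standing in for a learned or hand-engineered representation, and all function names are assumptions for exposition. Use-case 1 feeds the low-dimensional representation to the policy as input; use-case 2 keeps the raw observation as input but uses the representation as the target of an auxiliary prediction loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a 64-dim raw observation projected
# to an 8-dim structured representation (illustrative only).
OBS_DIM, REP_DIM, HIDDEN_DIM = 64, 8, 16

# Fixed linear "encoder" standing in for any learned or
# hand-engineered representation.
W_enc = rng.normal(scale=0.1, size=(REP_DIM, OBS_DIM))

def encode(obs):
    """Project a high-dimensional observation to a low-dim representation."""
    return W_enc @ obs

# Use-case 1: the representation replaces the raw observation
# as the agent's input. A tiny 2-action linear "policy":
W_pi = rng.normal(scale=0.1, size=(2, REP_DIM))

def policy_from_representation(obs):
    return W_pi @ encode(obs)

# Use-case 2: the agent consumes raw observations, and an auxiliary
# head on the shared features is trained to predict the representation.
W_feat = rng.normal(scale=0.1, size=(HIDDEN_DIM, OBS_DIM))
W_aux = rng.normal(scale=0.1, size=(REP_DIM, HIDDEN_DIM))

def auxiliary_loss(obs):
    """MSE between the auxiliary prediction and the target representation."""
    features = np.tanh(W_feat @ obs)   # shared agent features
    prediction = W_aux @ features      # auxiliary head output
    target = encode(obs)               # representation as auxiliary target
    return float(np.mean((prediction - target) ** 2))

obs = rng.normal(size=OBS_DIM)
action_logits = policy_from_representation(obs)  # shape (2,)
loss = auxiliary_loss(obs)                       # non-negative scalar
```

In the second use-case the auxiliary loss would be minimized jointly with the RL objective, so gradients through `W_feat` shape the agent's features; the paper's finding that disentanglement helps auxiliary tasks corresponds to choosing what `encode` exposes as separate targets.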


