Visuomotor Control in Multi-Object Scenes Using Object-Aware Representations

05/12/2022
by Negin Heravi, et al.

Perceptual understanding of a scene and of the relationships between its different components is important for the successful completion of robotic tasks. Representation learning has been shown to be a powerful technique for this, but most current methodologies learn task-specific representations that do not necessarily transfer well to other tasks. Furthermore, representations learned by supervised methods require large labeled datasets for each task, which are expensive to collect in the real world. Using self-supervised learning to obtain representations from unlabeled data can mitigate this problem. However, current self-supervised representation learning methods are mostly object-agnostic, and we demonstrate that the resulting representations are insufficient for general-purpose robotics tasks, as they fail to capture the complexity of scenes with many components. In this paper, we explore the effectiveness of object-aware representation learning techniques for robotic tasks. Our self-supervised representations are learned by observing the agent freely interacting with different parts of the environment and are queried in two different settings: (i) policy learning and (ii) object location prediction. We show that our model learns control policies in a sample-efficient manner and outperforms state-of-the-art object-agnostic techniques as well as methods trained on raw RGB images. Our results show a 20 percent increase in performance in low-data regimes (1,000 trajectories) when training policies with implicit behavioral cloning (IBC). Furthermore, our method outperforms the baselines on the task of object localization in multi-object scenes.
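As a rough illustration of the pipeline the abstract describes (object-aware features from a self-supervised encoder feeding a cloned policy), the sketch below shows one minimal way to wire this up in PyTorch. The class names `ObjectAwareEncoder` and `BCPolicy`, the slot-style feature bottleneck, the dimensions, and the explicit regression head standing in for the paper's implicit (energy-based) BC are all illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): per-object features from a frozen,
# self-supervised encoder are pooled and fed to a behavioral-cloning policy.
# All class names, shapes, and the explicit-BC head are assumptions.
import torch
import torch.nn as nn

class ObjectAwareEncoder(nn.Module):
    """Maps an RGB image to K per-object feature vectors (slot-style bottleneck)."""
    def __init__(self, num_slots=5, slot_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.to_slots = nn.Linear(64, num_slots * slot_dim)
        self.num_slots, self.slot_dim = num_slots, slot_dim

    def forward(self, img):                      # img: (B, 3, H, W)
        h = self.backbone(img)                   # (B, 64)
        return self.to_slots(h).view(-1, self.num_slots, self.slot_dim)

class BCPolicy(nn.Module):
    """Explicit behavioral-cloning head over pooled object features.
    (The paper uses implicit BC; this regression head is only a stand-in.)"""
    def __init__(self, slot_dim=64, action_dim=2):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(slot_dim, 128), nn.ReLU(),
                                  nn.Linear(128, action_dim))

    def forward(self, slots):                    # slots: (B, K, D)
        return self.head(slots.mean(dim=1))      # pool over objects, predict action

# Usage: pretrain the encoder self-supervised on interaction data (not shown),
# then keep it frozen and clone demonstrated actions with a small policy head.
encoder, policy = ObjectAwareEncoder(), BCPolicy()
imgs = torch.randn(8, 3, 64, 64)                 # dummy batch of observations
actions = torch.randn(8, 2)                      # dummy demonstrated actions
with torch.no_grad():
    slots = encoder(imgs)                        # frozen encoder features
loss = nn.functional.mse_loss(policy(slots), actions)
loss.backward()
```

The design choice being illustrated is the division of labor: the encoder is trained once, without labels, from the agent's free interaction with the scene, and downstream tasks (policy learning, object localization) only query its per-object features rather than raw RGB pixels.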


