Grasp2Vec: Learning Object Representations from Self-Supervised Grasping

11/16/2018
by Eric Jang et al.

Well-structured visual representations can make robot learning faster and can improve generalization. In this paper, we study how we can acquire effective object-centric representations for robotic manipulation tasks without human labeling by using autonomous robot interaction with the environment. Such representation learning methods can benefit from continuous refinement of the representation as the robot collects more experience, allowing them to scale effectively without human intervention. Our representation learning approach is based on object persistence: when a robot removes an object from a scene, the representation of that scene should change according to the features of the object that was removed. We formulate an arithmetic relationship between feature vectors from this observation, and use it to learn a representation of scenes and objects that can then be used to identify object instances, localize them in the scene, and perform goal-directed grasping tasks where the robot must retrieve commanded objects from a bin. The same grasping procedure can also be used to automatically collect training data for our method, by recording images of scenes, grasping and removing an object, and recording the outcome. Our experiments demonstrate that this self-supervised approach for tasked grasping substantially outperforms direct reinforcement learning from images and prior representation learning methods.
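Concretely, the arithmetic relationship from object persistence is that the scene embedding before a grasp, minus the scene embedding after the grasp, should match the embedding of the removed object: phi_scene(s_pre) - phi_scene(s_post) ~ phi_object(o). The sketch below illustrates that relation with NumPy stand-ins. The encoder names, the tied linear projection, and the toy feature vectors are illustrative assumptions, not the paper's actual networks or training pipeline.

```python
# Minimal sketch of the Grasp2Vec object-persistence arithmetic, with
# NumPy stand-ins for the learned encoders. All names here are
# illustrative assumptions, not the paper's architecture or code.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

rng = np.random.default_rng(0)
D_FEAT, D_EMB = 512, 64

# Stand-in encoders: in the paper these are separate convolutional
# networks for scenes and object close-ups; tying them to one linear
# map makes the persistence relation hold exactly on toy features.
W = rng.normal(size=(D_EMB, D_FEAT))

def phi_scene(x: np.ndarray) -> np.ndarray:
    return W @ x

def phi_object(x: np.ndarray) -> np.ndarray:
    return W @ x

# Toy data: the pre-grasp scene contains the grasped object plus the
# rest of the bin; the post-grasp scene is what remains after removal.
object_feat = rng.normal(size=D_FEAT)  # close-up of the grasped object
rest_feat = rng.normal(size=D_FEAT)    # everything else in the bin
s_pre, s_post = rest_feat + object_feat, rest_feat

# Object persistence: removing an object should subtract its embedding
# from the scene embedding:
#     phi_scene(s_pre) - phi_scene(s_post) ~ phi_object(o)
scene_diff = phi_scene(s_pre) - phi_scene(s_post)
positive = cosine_similarity(scene_diff, phi_object(object_feat))
negative = cosine_similarity(scene_diff, phi_object(rng.normal(size=D_FEAT)))

# Training (a contrastive, n-pairs-style objective) pushes the score for
# the true grasped object toward 1 and scores for other objects toward 0.
print(f"grasped object: {positive:.3f}, distractor object: {negative:.3f}")
```

On the toy features the similarity for the grasped object is exactly 1.0 while the distractor scores near 0, which is the signal a contrastive loss would exploit; with real images, the same difference vector can also be used to localize the commanded object in the scene.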

Related research

05/12/2022 · Visuomotor Control in Multi-Object Scenes Using Object-Aware Representations
Perceptual understanding of the scene and the relationship between its d...

09/28/2022 · Human-in-the-loop Robotic Grasping using BERT Scene Representation
Current NLP techniques have been greatly applied in different domains. I...

03/15/2020 · Active Perception and Representation for Robotic Manipulation
The vast majority of visual animals actively control their eyes, heads, ...

08/26/2020 · Self-Supervised Goal-Conditioned Pick and Place
Robots have the capability to collect large amounts of data autonomously...

06/16/2020 · Depth by Poking: Learning to Estimate Depth from Self-Supervised Grasping
Accurate depth estimation remains an open problem for robotic manipulati...

09/30/2020 · S3K: Self-Supervised Semantic Keypoints for Robotic Manipulation via Multi-View Consistency
A robot's ability to act is fundamentally constrained by what it can per...

11/15/2021 · Semantically Grounded Object Matching for Robust Robotic Scene Rearrangement
Object rearrangement has recently emerged as a key competency in robot m...
