Depth by Poking: Learning to Estimate Depth from Self-Supervised Grasping

06/16/2020
by Ben Goodrich, et al.

Accurate depth estimation remains an open problem for robotic manipulation; even state-of-the-art techniques such as structured light and LiDAR sensors fail on reflective or transparent surfaces. We address this problem by training a neural network to estimate depth from RGB-D images, using labels obtained from physical interactions between a robot and its environment. For each pixel in an input image, our network predicts the z position that the robot's end effector would reach if it attempted to grasp or poke at the corresponding location. Given an autonomous grasping policy, the approach is self-supervised: end effector position labels are recovered through forward kinematics, without human annotation. Although gathering such physical interaction data is expensive, it is already required for training and routine operation of state-of-the-art manipulation systems, so this depth estimator comes “for free” while collecting data for other tasks (e.g., grasping, pushing, placing). We show that our approach achieves significantly lower root mean squared error than traditional structured light sensors and unsupervised deep learning methods on difficult, industry-scale jumbled bin datasets.
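The labeling scheme described above, sparse per-pixel z targets recovered from the robot's own grasp or poke attempts via forward kinematics, maps naturally onto a masked per-pixel regression loss. Below is a minimal sketch in PyTorch, not the authors' implementation: it assumes a small fully convolutional network that maps an RGB-D image to a per-pixel z prediction and grasp attempts recorded as (image, pixel, reached z) tuples. All names here (PokeDepthNet, masked_z_loss) are hypothetical.

```python
# Minimal sketch (not the paper's architecture): train a fully convolutional
# regressor to predict, for every pixel, the z position the end effector
# would reach, supervised only at pixels where the robot actually interacted.

import torch
import torch.nn as nn


class PokeDepthNet(nn.Module):
    """Hypothetical encoder-decoder mapping RGB-D (4 channels) to a per-pixel z map."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, rgbd):
        # (B, 4, H, W) -> (B, H, W) predicted z for every pixel
        return self.decoder(self.encoder(rgbd)).squeeze(1)


def masked_z_loss(pred_z, label_z, label_mask):
    """Mean squared error computed only at pixels the robot touched.

    label_z:    (B, H, W) z reached by the end effector (from forward kinematics),
                valid only where label_mask is 1.
    label_mask: (B, H, W) 1 at the image pixel of each grasp/poke attempt, else 0.
    """
    sq_err = (pred_z - label_z) ** 2 * label_mask
    return sq_err.sum() / label_mask.sum().clamp(min=1)


# Toy training step with random stand-in data.
net = PokeDepthNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)

rgbd = torch.rand(2, 4, 64, 64)      # RGB-D input images
label_z = torch.zeros(2, 64, 64)     # sparse z labels
label_mask = torch.zeros(2, 64, 64)
label_z[:, 20, 30] = 0.15            # z reached at the attempted grasp pixel
label_mask[:, 20, 30] = 1.0

loss = masked_z_loss(net(rgbd), label_z, label_mask)
opt.zero_grad()
loss.backward()
opt.step()
```

The masked loss reflects the key property of this kind of self-supervision: each interaction labels only one pixel per image, so the network must generalize from sparse, automatically generated targets to dense depth predictions.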

Related research

05/31/2019 · 2.5D Image based Robotic Grasping
We consider the problem of robotic grasping using depth + RGB informatio...

11/16/2018 · Grasp2Vec: Learning Object Representations from Self-Supervised Grasping
Well structured visual representations can make robot learning faster an...

02/17/2022 · TransCG: A Large-Scale Real-World Dataset for Transparent Object Depth Completion and Grasping
Transparent objects are common in our daily life and frequently handled ...

11/06/2020 · Learning a Geometric Representation for Data-Efficient Depth Estimation via Gradient Field and Contrastive Loss
Estimating a depth map from a single RGB image has been investigated wid...

01/13/2023 · Co-manipulation of soft-materials estimating deformation from depth images
Human-robot co-manipulation of soft materials, such as fabrics, composit...

03/02/2019 · Robot Learning via Human Adversarial Games
Much work in robotics has focused on "human-in-the-loop" learning techni...

09/10/2020 · Self-supervised Depth Denoising Using Lower- and Higher-quality RGB-D sensors
Consumer-level depth cameras and depth sensors embedded in mobile device...
