Action Image Representation: Learning Scalable Deep Grasping Policies with Zero Real World Data

05/13/2020
by Mohi Khansari, et al.

This paper introduces Action Image, a new grasp proposal representation that allows learning an end-to-end deep grasping policy. Our model achieves 84% grasp success on 172 real-world objects while being trained only in simulation on 48 objects with just naive domain randomization. As in computer vision problems such as object detection, Action Image builds on the idea that object features are invariant to translation in image space. By the same reasoning, grasp quality is invariant under translation of the object-gripper pair: a successful grasp depends on the object's local context but is independent of the surrounding environment. Action Image represents a grasp proposal as an image and uses a deep convolutional network to infer grasp quality. We show that by using an Action Image representation, trained networks are able to extract local, salient features of grasping tasks that generalize across different objects and environments. We show that this representation works on a variety of inputs, including color images (RGB), depth images (D), and combined color-depth (RGB-D). Our experimental results demonstrate that networks utilizing an Action Image representation exhibit strong domain transfer between training on simulated data and inference on real-world sensor streams. Finally, our experiments show that a network trained with Action Image improves grasp success (84% vs. 53%) over a baseline model with the same structure, but using actions encoded as vectors.
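The core idea in the abstract, rendering a grasp proposal as an image and stacking it with the camera observation before feeding a convolutional network, can be illustrated with a minimal numpy sketch. This is a hypothetical encoding for illustration only: the keypoint choice (`grasp_px`), disk radius, and channel layout are assumptions, not the paper's exact rendering.

```python
import numpy as np

def render_action_image(grasp_px, height=64, width=64, radius=3):
    """Render a grasp proposal as a binary image.

    grasp_px: list of (row, col) keypoints, e.g. the projected fingertip
    positions of the gripper. (Hypothetical encoding; the paper's exact
    rendering may differ.)
    """
    canvas = np.zeros((height, width), dtype=np.float32)
    rr, cc = np.ogrid[:height, :width]  # broadcastable row/col grids
    for r, c in grasp_px:
        # Paint a filled disk at each grasp keypoint.
        canvas[(rr - r) ** 2 + (cc - c) ** 2 <= radius ** 2] = 1.0
    return canvas

def make_network_input(rgb, grasp_px):
    """Stack the rendered Action Image onto the RGB observation,
    yielding an (H, W, 4) tensor for a convolutional grasp-quality net."""
    h, w = rgb.shape[:2]
    action = render_action_image(grasp_px, h, w)
    return np.concatenate([rgb, action[..., None]], axis=-1)
```

Because the proposal lives in the same pixel grid as the observation, convolutional translation invariance applies to the object-gripper pair jointly, which is the property the abstract credits for sim-to-real generalization.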


