On-Policy Pixel-Level Grasping Across the Gap Between Simulation and Reality

04/08/2022
by Dexin Wang, et al.

Grasp detection in cluttered scenes is a very challenging task for robots. Generating synthetic grasping data is a popular way to train and test grasp methods, as in Dex-Net and GraspNet; however, these methods generate training grasps on 3D synthetic object models but evaluate on images or point clouds with a different distribution, which reduces performance on real scenes due to sparse grasp labels and covariate shift. To solve these problems, we propose a novel on-policy grasp detection method that trains and tests on the same distribution, with dense pixel-level grasp labels generated on RGB-D images. A Parallel-Depth Grasp Generation (PDG-Generation) method is proposed: it generates a parallel depth image through a new imaging model that projects points in parallel, then generates multiple candidate grasps for each pixel and obtains robust grasps through flatness detection, a force-closure metric, and collision detection. On this basis, a large, comprehensive Pixel-Level Grasp Pose Dataset (PLGP-Dataset) is constructed and released. Unlike previous datasets, which contain off-policy data and sparse grasp samples, this is the first pixel-level grasp dataset, with an on-policy distribution in which grasps are generated from the depth images themselves. Finally, we build and test a series of pixel-level grasp detection networks with a data-augmentation process for imbalanced training; the networks learn grasp poses in a decoupled manner from the input RGB-D images. Extensive experiments show that our on-policy grasp method largely overcomes the gap between simulation and reality and achieves state-of-the-art performance. Code and data are available at https://github.com/liuchunsense/PLGP-Dataset.
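The "imaging model of projecting points in parallel" mentioned above amounts to an orthographic z-buffer over a point cloud: every 3D point is projected along parallel rays onto a pixel grid, and each pixel keeps the depth of the nearest surface. The following is a minimal sketch of that idea, not the authors' implementation; the function name, grid parameters, and nearest-surface rule are illustrative assumptions.

```python
import numpy as np

def parallel_depth_image(points, x_range, y_range, resolution):
    """Orthographically project an (N, 3) point cloud into a parallel depth image.

    Each point (x, y, z) maps to the pixel covering (x, y); the pixel
    stores the smallest z seen, i.e. the closest surface along the ray.
    Pixels hit by no point remain +inf.
    """
    h = int(round((y_range[1] - y_range[0]) / resolution))
    w = int(round((x_range[1] - x_range[0]) / resolution))
    depth = np.full((h, w), np.inf)

    # Parallel projection: pixel indices depend only on (x, y), not on z.
    cols = ((points[:, 0] - x_range[0]) / resolution).astype(int)
    rows = ((points[:, 1] - y_range[0]) / resolution).astype(int)

    # Keep only points that land inside the image bounds.
    valid = (rows >= 0) & (rows < h) & (cols >= 0) & (cols < w)
    for r, c, z in zip(rows[valid], cols[valid], points[valid, 2]):
        if z < depth[r, c]:
            depth[r, c] = z  # z-buffer: retain the nearest surface
    return depth
```

Because the rays are parallel rather than converging at a camera center, pixel-level grasp labels computed on such an image align directly with metric positions on the support plane, which is what makes dense per-pixel grasp generation tractable.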
