Combining RGB and Points to Predict Grasping Region for Robotic Bin-Picking

04/16/2019
by   Quanquan Shao, et al.

This paper focuses on robotic picking tasks in cluttered scenarios. Because of the diversity of objects and the clutter created by their placement, it is difficult to recognize objects and estimate their poses before grasping. Here, we use U-Net, a special convolutional neural network (CNN), to combine RGB images and depth information and predict the picking region without recognition or pose estimation. The efficiencies of diverse visual inputs to the network were compared, including RGB, RGB-D, and RGB-Points, and we found that the RGB-Points input achieved a precision of 95.74%.
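As a rough illustration of the "RGB-Points" input the abstract describes, one common way to build such an input is to back-project the depth map into a per-pixel XYZ point map and concatenate it with the RGB channels, giving a 6-channel image the network can consume. The function name, camera intrinsics, and preprocessing below are assumptions for illustration only; the abstract does not specify the authors' exact pipeline.

```python
import numpy as np

def make_rgb_points_input(rgb, depth, fx, fy, cx, cy):
    """Back-project a depth map into per-pixel XYZ coordinates and
    concatenate with RGB to form a 6-channel RGB-Points input.
    rgb:   (H, W, 3) float array
    depth: (H, W) float array, metric depth per pixel
    fx, fy, cx, cy: pinhole camera intrinsics (illustrative values)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Standard pinhole back-projection: pixel -> camera-frame XYZ
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    xyz = np.stack([x, y, depth], axis=-1)        # (H, W, 3) point map
    return np.concatenate([rgb, xyz], axis=-1)    # (H, W, 6) network input

# Tiny example with dummy data
rgb = np.random.rand(4, 4, 3).astype(np.float32)
depth = np.ones((4, 4), dtype=np.float32)
inp = make_rgb_points_input(rgb, depth, fx=2.0, fy=2.0, cx=2.0, cy=2.0)
print(inp.shape)  # (4, 4, 6)
```

A U-Net-style encoder-decoder would then map this 6-channel tensor to a per-pixel picking-region score.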


Related research

04/16/2019 - Suction Grasp Region Prediction using Self-supervised Learning for Object Picking in Dense Clutter
This paper focuses on robotic picking tasks in cluttered scenario. Becau...

11/15/2022 - Grasping the Inconspicuous
Transparent objects are common in day-to-day life and hence find many ap...

01/08/2022 - Mushrooms Detection, Localization and 3D Pose Estimation using RGB-D Sensor for Robotic-picking Applications
In this paper, we propose mushrooms detection, localization and 3D pose ...

05/25/2020 - LyRN (Lyapunov Reaching Network): A Real-Time Closed Loop approach from Monocular Vision
We propose a closed-loop, multi-instance control algorithm for visually ...

10/01/2019 - Omnipush: accurate, diverse, real-world dataset of pushing dynamics with RGB-D video
Pushing is a fundamental robotic skill. Existing work has shown how to e...

09/24/2019 - 6D Pose Estimation with Correlation Fusion
6D object pose estimation is widely applied in robotic tasks such as gra...

10/21/2021 - Fuzzy-Depth Objects Grasping Based on FSG Algorithm and a Soft Robotic Hand
Autonomous grasping is an important factor for robots physically interac...
