GKNet: grasp keypoint network for grasp candidates detection

06/16/2021
by Ruinian Xu, et al.

Contemporary grasp detection approaches employ deep learning to achieve robustness to sensor and object model uncertainty. The two dominant approaches design either grasp-quality scoring or anchor-based grasp recognition networks. This paper presents a different approach to grasp detection by treating it as keypoint detection. The deep network detects each grasp candidate as a pair of keypoints, convertible to the grasp representation g = (x, y, w, θ)^T, rather than a triplet or quartet of corner points. Decreasing the detection difficulty by grouping keypoints into pairs boosts performance. To further promote dependencies between keypoints, a general non-local module is incorporated into the proposed learning framework. A final filtering strategy based on discrete and continuous orientation prediction removes false correspondences and further improves grasp detection performance. GKNet, the approach presented here, achieves the best balance of accuracy and speed on the Cornell and the abridged Jacquard datasets (96.9% and 98.39% accuracy at 41.67 and 23.26 fps, respectively). Follow-up experiments on a manipulator evaluate GKNet using four types of grasping experiments that reflect different nuisance sources: static grasping, dynamic grasping, grasping at varied camera angles, and bin picking. GKNet outperforms reference baselines in the static and dynamic grasping experiments while exhibiting robustness in the varied-camera-viewpoint and bin picking experiments. The results confirm the hypothesis that grasp keypoints are an effective output representation for deep grasp networks, providing robustness to expected nuisance factors.
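To make the output representation concrete, the sketch below (Python, not from the paper) shows how a detected keypoint pair, assumed here to mark the two gripper jaw tips, can be converted to g = (x, y, w, θ)^T, together with a hypothetical orientation-consistency check in the spirit of the discrete-plus-continuous orientation filtering the abstract describes. The function names, the bin count, and the tolerance are all illustrative assumptions.

```python
import math

# Hypothetical sketch (not the paper's code): convert a detected keypoint
# pair into the grasp representation g = (x, y, w, theta)^T, where the two
# keypoints are assumed to mark the two gripper jaw tips in the image.
def keypoints_to_grasp(p1, p2):
    """p1, p2: (u, v) pixel coordinates of the paired keypoints."""
    x = (p1[0] + p2[0]) / 2.0                         # grasp center: midpoint
    y = (p1[1] + p2[1]) / 2.0
    w = math.hypot(p2[0] - p1[0], p2[1] - p1[1])      # grasp width: pair distance
    theta = math.atan2(p2[1] - p1[1], p2[0] - p1[0])  # grasp orientation
    return (x, y, w, theta)

# Hypothetical filter loosely following the abstract's idea of rejecting
# false keypoint correspondences when the continuous orientation disagrees
# with the predicted discrete orientation bin (bin count and tolerance are
# assumptions, not values from the paper).
def is_consistent(theta_continuous, theta_bin, num_bins=18, tol=None):
    bin_width = math.pi / num_bins            # orientation bins over [0, pi)
    if tol is None:
        tol = bin_width                       # allow one bin of slack
    bin_center = (theta_bin + 0.5) * bin_width
    diff = abs((theta_continuous % math.pi) - bin_center)
    diff = min(diff, math.pi - diff)          # wrap-around angular distance
    return diff <= tol

grasp = keypoints_to_grasp((120, 80), (160, 110))
print(grasp)  # -> (140.0, 95.0, 50.0, 0.6435...)
```

Representing θ both as a coarse bin and as a continuous angle means a disagreement between the two can flag an unreliable keypoint pairing, which is the intuition behind the filtering step described above.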


research · 02/18/2019
MetaGrasp: Data Efficient Grasping by Affordance Interpreter Network
Data-driven approaches for grasping have shown significant advances recently. Bu...

research · 04/14/2018
Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach
This paper presents a real-time, object-independent grasp synthesis meth...

research · 04/29/2022
Eyes on the Prize: Improved Perception for Robust Dynamic Grasping
This paper is concerned with perception challenges for robust grasping i...

research · 06/24/2019
Learning Grasp Affordance Reasoning through Semantic Relations
Reasoning about object affordances allows an autonomous agent to perform...

research · 09/23/2018
Multi-View Picking: Next-best-view Reaching for Improved Grasping in Clutter
Camera viewpoint selection is an important aspect of visual grasp detect...

research · 07/05/2021
GraspME – Grasp Manifold Estimator
In this paper, we introduce a Grasp Manifold Estimator (GraspME) to dete...

research · 04/14/2022
GloCAL: Glocalized Curriculum-Aided Learning of Multiple Tasks with Application to Robotic Grasping
The domain of robotics is a challenging one in which to apply deep reinforcement learni...
