Transferring Category-based Functional Grasping Skills by Latent Space Non-Rigid Registration

09/14/2018
by Diego Rodriguez et al.

Objects within a category are often similar in shape and usage. As humans, when we want to grasp something, we transfer knowledge from past experiences and adapt it to novel objects. In this paper, we propose a new approach for transferring grasping skills that accumulates grasping knowledge into a category-level canonical model. Grasping motions for novel instances of the category are inferred from the geometric deformation between the observed instance and the canonical shape. Correspondences between the shapes are established by a non-rigid registration method that combines the Coherent Point Drift approach with subspace methods. By incorporating category-level information into the registration, we avoid unlikely shapes and focus on deformations actually observed within the category. Control poses for generating grasping motions are accumulated in the canonical model from the grasp definitions of known objects. According to the estimated shape parameters of a novel instance, these control poses are transformed to fit it. The category-level model makes our method particularly suitable for on-line grasping, where fully observed objects are rarely available. We demonstrate this in experiments in which objects with occluded handles are grasped successfully.
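To make the pipeline concrete, below is a minimal NumPy sketch of the idea under simplifying assumptions: a canonical point cloud with a linear deformation basis (e.g. learned from training deformations) serves as the latent shape space, latent parameters are fitted to a possibly partial observation, and grasp control poses defined on the canonical model are warped by the inferred deformation. This is not the authors' implementation: the function names (fit_latent_shape, warp_control_poses) are illustrative, a plain nearest-neighbour correspondence with regularized least squares stands in for the CPD-plus-subspace registration described in the paper, and control poses are reduced to positions for brevity.

```python
import numpy as np

def fit_latent_shape(canonical, basis, observed, n_iters=10, reg=1e-2):
    """Estimate latent shape parameters z such that
    canonical + (basis @ z).reshape(-1, 3) approximates the observation.

    canonical: (N, 3) canonical model points
    basis:     (3N, K) deformation basis (e.g. PCA of training deformations)
    observed:  (M, 3) observed instance points (possibly partial)
    """
    n = canonical.shape[0]
    z = np.zeros(basis.shape[1])
    for _ in range(n_iters):
        deformed = canonical + (basis @ z).reshape(n, 3)
        # nearest deformed canonical point for every observed point
        d2 = ((observed[:, None, :] - deformed[None, :, :]) ** 2).sum(-1)
        idx = d2.argmin(axis=1)
        # residuals of the current fit at the corresponding points
        r = (observed - deformed[idx]).reshape(-1)            # (3M,)
        # basis rows belonging to the corresponding points
        rows = (3 * idx[:, None] + np.arange(3)).reshape(-1)
        A = basis[rows]                                        # (3M, K)
        # regularized least-squares update of the latent parameters
        dz = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ r)
        z = z + dz
    return z

def warp_control_poses(positions, canonical, basis, z):
    """Move grasp control poses (positions only) with the displacement of
    their nearest canonical model point."""
    n = canonical.shape[0]
    disp = (basis @ z).reshape(n, 3)
    d2 = ((positions[:, None, :] - canonical[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)
    return positions + disp[idx]
```

In this sketch, fitting the latent parameters and warping the control poses are the two steps that transfer a grasp from the canonical model to a novel, partially observed instance; the paper's CPD-based registration plays the role of the crude correspondence search used here.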

Related research:

09/14/2018: Transferring Grasping Skills to Novel Instances by Latent Space Non-Rigid Registration
Robots acting in open environments need to be able to handle novel objec...

08/17/2020: Category-Level 3D Non-Rigid Registration from Single-View RGB Images
In this paper, we propose a novel approach to solve the 3D non-rigid reg...

10/18/2018: Learning Postural Synergies for Categorical Grasping through Shape Space Registration
Every time a person encounters an object with a given degree of familiar...

04/03/2022: Learning High-DOF Reaching-and-Grasping via Dynamic Representation of Gripper-Object Interaction
We approach the problem of high-DOF reaching-and-grasping via learning j...

10/21/2022: Neural Fields for Robotic Object Manipulation from a Single Image
We present a unified and compact representation for object rendering, 3D...

11/03/2020: Leaf-like Origami with Bistability for Self-Adaptive Grasping Motions
The leaf-like origami structure was inspired by geometric patterns found...

09/25/2016: Fast Blended Transformations for Partial Shape Registration
Automatic estimation of skinning transformations is a popular way to def...