Associating Grasp Configurations with Hierarchical Features in Convolutional Neural Networks

09/13/2016
by   Li Yang Ku, et al.

In this work, we provide a solution for posturing the anthropomorphic Robonaut-2 hand and arm for grasping based on visual information. A mapping from visual features extracted from a convolutional neural network (CNN) to grasp points is learned. We demonstrate that a CNN pre-trained for image classification can be adapted to a grasping task from only a small set of grasping examples. Our approach exploits the hierarchical nature of the CNN by identifying features that capture the hierarchical support relations between filters in different CNN layers, and by locating their 3D positions by tracing activations backwards through the network. When this backward trace terminates in the RGB-D image, important manipulable structures are localized. Features residing in different layers of the CNN are then associated with controllers that engage different kinematic subchains in the hand/arm system for grasping. A grasping dataset is collected from demonstrated hand/object relationships for Robonaut-2 to evaluate the proposed approach in terms of the precision of the resulting preshape postures. We show that this approach outperforms baseline approaches in cluttered scenarios on the grasping dataset, and outperforms a point-cloud-based approach on a grasping task with Robonaut-2.
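The backward trace described above can be illustrated with a simplified sketch. This is not the authors' implementation: the helper below assumes a toy receptive-field model (a fixed kernel size and stride between two layers) and simply follows the strongest activation in a higher layer down to the lower-layer location that most strongly supports it.

```python
import numpy as np

def backtrace_activation(hi_act, lo_act, stride=2, ksize=3):
    """Trace the strongest high-layer activation back to the lower layer.

    hi_act: (H, W) activation map of one filter in the higher layer.
    lo_act: (h, w) activation map of one supporting filter in the lower layer.
    Each high-layer unit is assumed to see a ksize x ksize window of the
    lower layer with the given stride (a simplified receptive-field model).
    Returns the lower-layer coordinate that most strongly supports the
    strongest high-layer unit.
    """
    # Location of the strongest response in the higher layer.
    hy, hx = np.unravel_index(np.argmax(hi_act), hi_act.shape)
    # Top-left corner of its receptive field in the lower layer.
    y0, x0 = hy * stride, hx * stride
    window = lo_act[y0:y0 + ksize, x0:x0 + ksize]
    # Strongest supporting activation inside that receptive field.
    wy, wx = np.unravel_index(np.argmax(window), window.shape)
    return (int(y0 + wy), int(x0 + wx))

# Toy usage: one peak in each layer; the trace lands on the lower peak.
lo = np.zeros((8, 8)); lo[3, 2] = 5.0
hi = np.zeros((3, 3)); hi[1, 1] = 1.0
print(backtrace_activation(hi, lo))  # (3, 2)
```

Repeating this step layer by layer until the trace reaches the RGB-D input is what localizes a feature's 3D position in the approach described above; real CNN layers additionally involve padding and per-channel filter weights, which this sketch omits.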


Related research

09/17/2018  PointNetGPD: Detecting Grasp Configurations from Point Sets
In this paper, we propose an end-to-end grasp evaluation model to addres...

08/16/2021  Intent-based Object Grasping by a Robot using Deep Learning
A robot needs to predict an ideal rectangle for optimal object grasping ...

02/16/2018  Improved GQ-CNN: Deep Learning Model for Planning Robust Grasps
Recent developments in the field of robot grasping have shown great impr...

02/27/2021  Object affordance as a guide for grasp-type recognition
Recognizing human grasping strategies is an important factor in robot te...

03/18/2022  Grasp Pre-shape Selection by Synthetic Training: Eye-in-hand Shared Control on the Hannes Prosthesis
We consider the task of object grasping with a prosthetic hand capable o...

06/19/2021  Grasping Benchmarks: Normalizing for Object Size & Approximating Hand Workspaces
The varied landscape of robotic hand designs makes it difficult to set a...

06/02/2016  Dictionary Learning for Robotic Grasp Recognition and Detection
The ability to grasp ordinary and potentially never-seen objects is an i...
