Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection

03/07/2016
by Sergey Levine, et al.

We describe a learning-based approach to hand-eye coordination for robotic grasping from monocular images. To learn hand-eye coordination for grasping, we trained a large convolutional neural network to predict the probability that task-space motion of the gripper will result in successful grasps, using only monocular camera images and independently of camera calibration or the current robot pose. This requires the network to observe the spatial relationship between the gripper and objects in the scene, thus learning hand-eye coordination. We then use this network to servo the gripper in real time to achieve successful grasps. To train our network, we collected over 800,000 grasp attempts over the course of two months, using between 6 and 14 robotic manipulators at any given time, with differences in camera placement and hardware. Our experimental evaluation demonstrates that our method achieves effective real-time control, can successfully grasp novel objects, and corrects mistakes by continuous servoing.
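At test time, the paper servos by repeatedly re-optimizing the motion command against the learned success predictor. The sketch below illustrates that inner optimization with the cross-entropy method (CEM); the trained CNN is replaced by a hypothetical stand-in scorer (`grasp_success_prob`, `TARGET` are assumptions, not the paper's model), so this is only a minimal illustration of the servoing idea, not the authors' implementation.

```python
import numpy as np

# Hypothetical stand-in for the trained network g(image, motion) -> P(success).
# A quadratic scorer peaked at a known displacement lets the sketch run
# without a trained model; in the paper this is the large CNN.
TARGET = np.array([0.05, -0.02, -0.10])  # assumed 3-D task-space displacement

def grasp_success_prob(image, motions):
    # motions: (N, 3) candidate end-effector displacements
    d = np.linalg.norm(motions - TARGET, axis=1)
    return np.exp(-20.0 * d ** 2)

def cem_servo_step(image, n_samples=64, n_elite=6, n_iters=3, seed=0):
    """One servoing step: pick the motion command that maximizes the
    predicted grasp-success probability via the cross-entropy method."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(3), 0.1 * np.ones(3)
    for _ in range(n_iters):
        samples = rng.normal(mean, std, size=(n_samples, 3))
        scores = grasp_success_prob(image, samples)
        elite = samples[np.argsort(scores)[-n_elite:]]       # keep best candidates
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean  # motion to execute before re-observing the scene

best_motion = cem_servo_step(image=None)
```

Because the command is re-optimized after every observation, the controller can correct mistakes mid-grasp, which is the continuous-servoing behavior the abstract describes.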


Related research

- Two-fingered Hand with Gear-type Synchronization Mechanism with Magnet for Improved Small and Offset Objects Grasping: F2 Hand (09/15/2023). A problem that plagues robotic grasping is the misalignment of the objec...
- Grasp Pre-shape Selection by Synthetic Training: Eye-in-hand Shared Control on the Hannes Prosthesis (03/18/2022). We consider the task of object grasping with a prosthetic hand capable o...
- Economical Precise Manipulation and Auto Eye-Hand Coordination with Binocular Visual Reinforcement Learning (05/12/2022). Precision robotic manipulation tasks (insertion, screwing, precisely pic...
- Real-Time Grasping Strategies Using Event Camera (07/15/2021). Robotic vision plays a key role for perceiving the environment in graspi...
- Evaluation of state representation methods in robot hand-eye coordination learning from demonstration (03/02/2019). We evaluate different state representation methods in robot hand-eye coo...
- From Hand-Perspective Visual Information to Grasp Type Probabilities: Deep Learning via Ranking Labels (03/08/2021). Limb deficiency severely affects the daily lives of amputees and drives ...
- EasyHeC: Accurate and Automatic Hand-eye Calibration via Differentiable Rendering and Space Exploration (05/02/2023). Hand-eye calibration is a critical task in robotics, as it directly affe...
