Real-Time Deep Learning Approach to Visual Servo Control and Grasp Detection for Autonomous Robotic Manipulation

10/13/2020
by   E. G. Ribeiro, et al.

To explore robotic grasping in unstructured and dynamic environments, this work addresses the visual perception phase of the task: processing visual data to obtain the location of the object to be grasped, its pose, and the points at which the robot's grippers must make contact to ensure a stable grasp. To this end, the Cornell Grasping dataset is used to train a convolutional neural network that, given an image of the robot's workspace containing an object, predicts a grasp rectangle representing the position, orientation, and opening of the robot's grippers prior to closing. In addition to this network, which runs in real time, a second network is designed to handle situations in which the object moves within the environment. This second network is trained to perform visual servo control, ensuring that the object remains in the robot's field of view. It predicts proportional values of the linear and angular velocities that the camera must have so that the object always appears in the image processed by the grasp network. The dataset used for training was generated automatically with a Kinova Gen3 manipulator. The same robot is used to evaluate real-time applicability and to obtain practical results from the designed algorithms. The offline results obtained on validation sets are also analyzed and discussed with respect to accuracy and processing speed. The developed controller achieved millimeter accuracy in the final position for a target object seen for the first time. To the best of our knowledge, no other work in the literature achieves such precision with a controller learned from scratch. This work thus presents a new system for autonomous robotic manipulation with high processing speed and the ability to generalize to many different objects.
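The grasp rectangle described above is commonly encoded by five parameters: the center (x, y) in image coordinates, the orientation angle, the gripper opening (rectangle width), and the jaw size (rectangle height). As a minimal sketch of this representation, the following hypothetical helper (not from the paper) converts the five-parameter form into the four corner points used to draw or evaluate a predicted grasp:

```python
import math

def grasp_rectangle_corners(x, y, theta, width, height):
    """Convert a five-parameter grasp rectangle (center, angle,
    gripper opening, jaw size) into its four corner points.

    x, y   : rectangle center in image coordinates
    theta  : orientation in radians (0 = axis-aligned)
    width  : gripper opening (rectangle width)
    height : gripper jaw size (rectangle height)
    """
    c, s = math.cos(theta), math.sin(theta)
    # Offsets of the four corners from the center, before rotation.
    offsets = [(-width / 2, -height / 2), (width / 2, -height / 2),
               (width / 2, height / 2), (-width / 2, height / 2)]
    # Rotate each offset by theta and translate to the center.
    return [(x + dx * c - dy * s, y + dx * s + dy * c)
            for dx, dy in offsets]
```

For example, an axis-aligned grasp centered at the origin with a 4-pixel opening and 2-pixel jaw size yields corners at (±2, ±1). Function and parameter names here are illustrative; the paper's actual network output format may differ.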


