LyRN (Lyapunov Reaching Network): A Real-Time Closed Loop approach from Monocular Vision

05/25/2020
by Zheyu Zhuang, et al.

We propose a closed-loop, multi-instance control algorithm for visually guided reaching based on novel learning principles. A control Lyapunov function methodology is used to design a reaching action for a complex multi-instance task in the case where full state information (poses of all potential reaching points) is available. The proposed algorithm uses monocular vision and manipulator joint angles as the input to a deep convolutional neural network that predicts the value of the control Lyapunov function (cLf) and the corresponding velocity control. The resulting network output is used in real time as visual control for the grasping task, with the multi-instance capability emerging naturally from the design of the control Lyapunov function. We demonstrate the proposed algorithm grasping mugs (textureless and symmetric objects) on a table-top from an over-the-shoulder monocular RGB camera. The manipulator dynamically converges to the best-suited target among multiple identical instances from any random initial pose within the workspace. The system, trained with only simulated data, achieves a 90.3% success rate in real-world experiments with up to 85 Hz closed-loop control on one GTX 1080Ti GPU, and significantly outperforms a Pose-Based Visual Servo (PBVS) grasping system adapted from a state-of-the-art single-shot RGB 6D pose estimation algorithm. A key contribution of the paper is the inclusion of a first-order differential constraint associated with the cLf as a regularisation term during learning, and we provide evidence that this leads to more robust and reliable reaching/grasping performance than vanilla regression on general control inputs.
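To make the final point concrete, one way the cLf's first-order differential constraint could enter the training loss is as a hinge penalty on violations of the Lyapunov decrease condition, dV/dt = ∇V · u ≤ −αV, added to vanilla regression on the control. The sketch below is an illustration only, not the paper's implementation: the hinge form, the decrease rate `alpha`, and the weight `lam` are all assumptions.

```python
import numpy as np

def clf_regularised_loss(v_pred, v_grad_pred, u_pred, u_target,
                         alpha=1.0, lam=0.1):
    """Hypothetical training loss: vanilla velocity regression plus a
    penalty on violating the cLf decrease condition
        dV/dt = grad(V) . u <= -alpha * V.
    All inputs are network predictions / supervision for one sample."""
    # Vanilla regression on the control input (what the paper compares against).
    reg = float(np.sum((u_pred - u_target) ** 2))
    # First-order differential term associated with the cLf.
    v_dot = float(v_grad_pred @ u_pred)
    # Hinge: zero when V decreases fast enough, positive otherwise.
    violation = max(0.0, v_dot + alpha * v_pred)
    return reg + lam * violation
```

A prediction that matches the target and already satisfies the decrease condition incurs zero loss; a control that drives V upward is penalised in proportion to the violation, which is the intuition behind regularising with the differential constraint rather than regressing the control alone.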


