Vision-based Teleoperation of Shadow Dexterous Hand using End-to-End Deep Neural Network

09/17/2018
by Shuang Li, et al.

In this paper, we present TeachNet, a novel neural network architecture for intuitive and markerless vision-based teleoperation of dexterous robotic hands. Robot joint angles that produce visually similar robot hand poses are generated directly from depth images of the human hand in an end-to-end fashion. The special structure of TeachNet, combined with a consistency loss function, handles the differences in appearance and anatomy between human and robotic hands. A synchronized human-robot training set is generated from an existing dataset of labeled depth images of the human hand and from simulated depth images of a robotic hand. The final training set comprises 400K pairwise depth images and the corresponding joint angles of a Shadow C6 robotic hand. The network evaluation results verify the superiority of TeachNet, especially under the high-precision condition. Imitation experiments and grasp tasks teleoperated by novice users demonstrate that TeachNet is more reliable and faster than the state-of-the-art vision-based teleoperation method.
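
To make the end-to-end mapping concrete, the sketch below shows a TeachNet-style two-branch regressor in PyTorch: a human branch and a robot branch each embed a depth image, a shared head regresses joint angles, and a consistency term pulls the two embeddings together so the human branch alone can drive the robot hand at test time. The encoder layout, the 22-joint output, and the loss weighting beta are illustrative assumptions, not the exact architecture or hyperparameters from the paper.

# Minimal PyTorch sketch of a TeachNet-style two-branch joint-angle regressor.
# Layer sizes, the number of joints (22 here), and the loss weights are
# illustrative assumptions, not the exact values from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthEncoder(nn.Module):
    """Small CNN that embeds a single-channel depth image."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, embed_dim)

    def forward(self, depth):
        x = self.features(depth).flatten(1)
        return self.fc(x)

class TeachNetSketch(nn.Module):
    """Human and robot branches share the output head; a consistency loss
    pulls their embeddings together so that, at test time, the human branch
    alone maps a human depth image to robot joint angles."""
    def __init__(self, num_joints=22, embed_dim=128):
        super().__init__()
        self.human_enc = DepthEncoder(embed_dim)
        self.robot_enc = DepthEncoder(embed_dim)
        self.head = nn.Linear(embed_dim, num_joints)

    def forward(self, human_depth, robot_depth):
        z_h = self.human_enc(human_depth)
        z_r = self.robot_enc(robot_depth)
        return self.head(z_h), self.head(z_r), z_h, z_r

def teachnet_loss(pred_h, pred_r, target_angles, z_h, z_r, beta=0.1):
    # Joint-angle regression for both branches plus an embedding
    # consistency term; beta is an assumed weighting.
    reg = F.mse_loss(pred_h, target_angles) + F.mse_loss(pred_r, target_angles)
    consistency = F.mse_loss(z_h, z_r)
    return reg + beta * consistency

# Example forward/backward pass on random data (batch of 8, 96x96 depth maps).
if __name__ == "__main__":
    model = TeachNetSketch()
    human = torch.randn(8, 1, 96, 96)
    robot = torch.randn(8, 1, 96, 96)
    angles = torch.randn(8, 22)
    pred_h, pred_r, z_h, z_r = model(human, robot)
    loss = teachnet_loss(pred_h, pred_r, angles, z_h, z_r)
    loss.backward()
    print(loss.item())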

Related research

03/11/2020
A Mobile Robot Hand-Arm Teleoperation System by Vision and IMU
In this paper, we present a multimodal mobile teleoperation system that ...

05/24/2021
User-oriented Natural Human-Robot Control with Thin-Plate Splines and LRCN
We propose a real-time vision-based teleoperation approach for robotic a...

02/24/2019
Vision Based Picking System for Automatic Express Package Dispatching
This paper presents a vision based robotic system to handle the picking ...

03/05/2022
A Modular Approach to the Embodiment of Hand Motions from Human Demonstrations
Manipulating objects with robotic hands is a complicated task. Not only ...

06/02/2020
Object-Independent Human-to-Robot Handovers using Real Time Robotic Vision
We present an approach for safe and object-independent human-to-robot ha...

04/15/2019
Learning Probabilistic Multi-Modal Actor Models for Vision-Based Robotic Grasping
Many previous works approach vision-based robotic grasping by training a...

02/09/2021
Where is my hand? Deep hand segmentation for visual self-recognition in humanoid robots
The ability to distinguish between the self and the background is of par...
