Where is my hand? Deep hand segmentation for visual self-recognition in humanoid robots

02/09/2021
by   Alexandre Almeida, et al.

The ability to distinguish between the self and the background is of paramount importance for robotic tasks. Hands in particular, as the end effectors of a robotic system that most often come into contact with other elements of the environment, must be perceived and tracked precisely so that the intended tasks can be executed with dexterity and without colliding with obstacles. They are fundamental for several applications, from Human-Robot Interaction tasks to object manipulation. Modern humanoid robots are characterized by a high number of degrees of freedom, which makes their forward kinematics models very sensitive to uncertainty. Visual sensing can therefore be the only way to endow these robots with a good perception of the self, allowing them to localize their body parts with precision.

In this paper, we propose the use of a Convolutional Neural Network (CNN) to segment the robot hand from an image in an egocentric view. CNNs are known to require a huge amount of data to be trained. To overcome the challenge of labeling real-world images, we propose the use of simulated datasets that exploit domain randomization techniques. We fine-tune the Mask R-CNN network for the specific task of segmenting the hand of the humanoid robot Vizzy. We focus on developing a methodology that achieves reasonable performance from low amounts of data, while giving detailed insight into how to properly generate variability in the training dataset. Moreover, we analyze the fine-tuning process within the complex Mask R-CNN model, determining which weights should be transferred to the new task of segmenting robot hands. Our final model was trained solely on synthetic images, achieving an average IoU of 82.56% after only 3 hours of training on a single GPU.
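The segmentation quality above is reported as intersection-over-union (IoU) between the predicted and ground-truth hand masks. A minimal sketch of how per-image mask IoU can be computed with NumPy (the function name and the empty-mask convention are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-Union between two boolean segmentation masks
    of the same shape. Returns a value in [0, 1]."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        # Both masks empty: no hand predicted, no hand present.
        # We define this as perfect agreement (an assumed convention).
        return 1.0
    intersection = np.logical_and(pred, gt).sum()
    return float(intersection) / float(union)

# Toy example: two 4x4 masks with 4 foreground pixels each,
# overlapping in 2 pixels -> intersection = 2, union = 6.
a = np.zeros((4, 4), dtype=bool)
a[0, :] = True              # row 0, all 4 columns
b = np.zeros((4, 4), dtype=bool)
b[0, 2:] = True             # row 0, columns 2-3
b[1, 2:] = True             # row 1, columns 2-3
print(mask_iou(a, b))       # 2 / 6 = 0.333...
```

Averaging this quantity over a test set yields the "average IoU" figure quoted in the abstract.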

