Multimodal Fusion of EMG and Vision for Human Grasp Intent Inference in Prosthetic Hand Control

04/08/2021
by   Mehrshad Zandigohar, et al.

For lower arm amputees, robotic prosthetic hands offer the promise of regaining the capability to perform fine object manipulation in activities of daily living. Current control methods based on physiological signals such as EEG and EMG are prone to poor inference outcomes due to motion artifacts, variability of skin-electrode junction impedance over time, muscle fatigue, and other factors. Visual evidence is also susceptible to its own artifacts, most often due to object occlusion, lighting changes, and variable object shapes depending on view angle, among other factors. Multimodal evidence fusion using physiological and vision sensor measurements is a natural approach due to the complementary strengths of these modalities. In this paper, we present a Bayesian evidence fusion framework for grasp intent inference using eye-view video, gaze, and EMG from the forearm, processed by neural network models. We analyze individual and fused performance as a function of time as the hand approaches the object to grasp it. For this purpose, we have also developed novel data processing and augmentation techniques to train the neural network components. Our experimental data analyses demonstrate that EMG and visual evidence show complementary strengths, and as a consequence, fusion of multimodal evidence can outperform each individual evidence modality at any given time. Specifically, results indicate that, on average, fusion improves the instantaneous upcoming grasp type classification accuracy during the reaching phase by 13.66% compared to EMG and visual evidence individually. An overall fusion accuracy of 95.3% over 13 grasp type labels (compared to a chance level of 7.7%) is achieved, and detailed analysis indicates that the correct grasp is inferred sufficiently early and with high confidence, compared to the top contender, to allow successful robot actuation to close the loop.
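The fusion rule at the heart of such a framework can be stated compactly. Below is a minimal sketch of Bayesian posterior fusion under a conditional-independence assumption between the two evidence streams; the function name, the uniform prior, and the example probability vectors are illustrative assumptions for this page, not the authors' released code.

```python
import numpy as np

def fuse_posteriors(p_emg, p_vis, prior=None, eps=1e-12):
    """Fuse per-modality grasp posteriors with Bayes' rule.

    Assuming EMG and visual evidence are conditionally independent
    given the grasp class g:
        p(g | emg, vis)  is proportional to  p(g | emg) * p(g | vis) / p(g)
    All inputs are length-K probability vectors over the K grasp types.
    """
    p_emg = np.asarray(p_emg, dtype=float)
    p_vis = np.asarray(p_vis, dtype=float)
    if prior is None:
        # Uniform prior over the K grasp types (hypothetical default).
        prior = np.full_like(p_emg, 1.0 / p_emg.size)
    fused = p_emg * p_vis / (prior + eps)
    return fused / fused.sum()  # renormalize to a valid distribution

# Example: 13 grasp classes, i.e. a chance level of 1/13 (about 7.7%).
# EMG favors class 2; vision is split between classes 2 and 5;
# fusion concentrates the posterior on the class both modalities support.
K = 13
p_emg = np.full(K, 0.03); p_emg[2] = 1.0 - 0.03 * (K - 1)
p_vis = np.full(K, 0.02); p_vis[2] = p_vis[5] = (1.0 - 0.02 * (K - 2)) / 2
print(fuse_posteriors(p_emg, p_vis).round(3))
```

Because the fused posterior is a product of the per-modality posteriors, a confident, correct prediction from either stream can dominate a noisy prediction from the other, which is consistent with the complementary-strengths behavior reported in the abstract.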


Related research

- HANDS: A Multimodal Dataset for Modeling Towards Human Grasp Intent Inference in Prosthetic Hands (03/08/2021)
  Upper limb and hand functionality is critical to many activities of dail...
- From Hand-Perspective Visual Information to Grasp Type Probabilities: Deep Learning via Ranking Labels (03/08/2021)
  Limb deficiency severely affects the daily lives of amputees and drives ...
- Towards Creating a Deployable Grasp Type Probability Estimator for a Prosthetic Hand (01/13/2021)
  For lower arm amputees, prosthetic hands promise to restore most of phys...
- The Feeling of Success: Does Touch Sensing Help Predict Grasp Outcomes? (10/16/2017)
  A successful grasp requires careful balancing of the contact forces. Ded...
- Force Feedback Control For Dexterous Robotic Hands Using Conditional Postural Synergies (03/10/2023)
  We present a force feedback controller for a dexterous robotic hand equi...
- Segmentation and Classification of EMG Time-Series During Reach-to-Grasp Motion (04/19/2021)
  The electromyography (EMG) signals have been widely utilized in human ro...
