Multimodal Fusion of EMG and Vision for Human Grasp Intent Inference in Prosthetic Hand Control

by Mehrshad Zandigohar, et al.

For lower arm amputees, robotic prosthetic hands offer the promise of regaining the capability to perform fine object manipulation in activities of daily living. Current control methods based on physiological signals such as EEG and EMG are prone to poor inference outcomes due to motion artifacts, variability of skin-electrode junction impedance over time, muscle fatigue, and other factors. Visual evidence is also susceptible to its own artifacts, most often due to object occlusion, lighting changes, and the variable appearance of objects depending on the viewing angle, among other factors. Multimodal evidence fusion using physiological and vision sensor measurements is therefore a natural approach, owing to the complementary strengths of these modalities. In this paper, we present a Bayesian evidence fusion framework for grasp intent inference using eye-view video, gaze, and forearm EMG, each processed by neural network models. We analyze individual and fused performance as a function of time as the hand approaches the object to grasp it. For this purpose, we have also developed novel data processing and augmentation techniques to train the neural network components. Our experimental data analyses demonstrate that EMG and visual evidence have complementary strengths, and as a consequence, fusion of the multimodal evidence can outperform each individual evidence modality at any given time. Specifically, results indicate that, on average, fusion improves the instantaneous upcoming grasp type classification accuracy during the reaching phase by 13.66% and 14.8%, relative to EMG and visual evidence individually, yielding an overall fusion accuracy of 95.3% over 13 labels (compared to a chance level of 7.7%). Our data analyses also indicate that the correct grasp is inferred sufficiently early, and with sufficiently high confidence relative to the top contender, to allow successful robot actuation that closes the loop.
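The abstract describes combining per-modality neural network outputs in a Bayesian evidence fusion framework. As a minimal illustrative sketch (not the paper's actual implementation), assuming the EMG and visual evidence are conditionally independent given the grasp class, the per-modality posteriors can be combined with a naive-Bayes product rule; the function name, three-class grasp set, and numbers below are hypothetical:

```python
import numpy as np

def fuse_posteriors(p_emg: np.ndarray, p_vis: np.ndarray, prior: np.ndarray) -> np.ndarray:
    """Naive-Bayes fusion of per-modality grasp-type posteriors.

    Under the (assumed) conditional independence of EMG and visual
    evidence given the grasp class g:
        p(g | emg, vis)  ∝  p(g | emg) * p(g | vis) / p(g)
    All arguments are length-K probability vectors over grasp types.
    """
    fused = p_emg * p_vis / prior           # unnormalized fused posterior
    return fused / fused.sum()              # renormalize to a distribution

# Hypothetical 3-class example (e.g., power, pinch, tripod grasps):
prior = np.full(3, 1.0 / 3.0)               # uniform prior over grasp types
p_emg = np.array([0.50, 0.30, 0.20])        # EMG network output at time t
p_vis = np.array([0.20, 0.70, 0.10])        # vision network output at time t
print(fuse_posteriors(p_emg, p_vis, prior)) # approx. [0.303, 0.636, 0.061]
```

In the paper's setting, such a fused posterior would be recomputed at each time step of the reaching phase, which is what the time-resolved accuracy analysis above refers to.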



