Commuting Conditional GANs for Robust Multi-Modal Fusion

06/10/2019
by Siddharth Roheda, et al.

This paper presents a data-driven approach to multi-modal fusion in which optimal features for each sensor are selected from a common hidden space shared between the modalities. The existence of this hidden space is then used to detect damaged sensors and safeguard the performance of the system. Experimental results show that the approach makes the system robust against noisy or damaged sensors, without requiring human intervention to inform the system of the damage.
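To make the idea of a common hidden space and sensor-damage detection concrete, below is a minimal sketch, not the authors' implementation: two per-modality encoders map sensor features into a shared latent space, a fusion head combines them, and a simple latent-discrepancy check flags a potentially damaged sensor. The encoder dimensions, the cosine-distance threshold, and all names here are illustrative assumptions.

```python
# Sketch of multi-modal fusion through a shared hidden space with a
# latent-discrepancy check for damaged sensors. Sizes and the threshold
# are illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn


class ModalityEncoder(nn.Module):
    """Maps one sensor's features into the common hidden space."""
    def __init__(self, in_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, hidden_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class FusionClassifier(nn.Module):
    """Fuses the per-modality hidden representations and classifies."""
    def __init__(self, hidden_dim: int = 64, n_classes: int = 10):
        super().__init__()
        self.head = nn.Linear(2 * hidden_dim, n_classes)

    def forward(self, z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
        return self.head(torch.cat([z1, z2], dim=-1))


def damaged_sensor_mask(z1: torch.Tensor, z2: torch.Tensor,
                        threshold: float = 0.5) -> torch.Tensor:
    """Flag samples whose modality embeddings disagree in the shared space.

    If the modalities truly share a hidden space, a large cosine distance
    between their embeddings suggests one sensor is noisy or damaged.
    The threshold is an arbitrary illustrative value.
    """
    cos = nn.functional.cosine_similarity(z1, z2, dim=-1)
    return (1.0 - cos) > threshold


if __name__ == "__main__":
    enc_video = ModalityEncoder(in_dim=512)   # e.g. visual features
    enc_audio = ModalityEncoder(in_dim=128)   # e.g. acoustic features
    fusion = FusionClassifier()

    video = torch.randn(8, 512)
    audio = torch.randn(8, 128)
    z_v, z_a = enc_video(video), enc_audio(audio)

    logits = fusion(z_v, z_a)              # fused prediction
    suspect = damaged_sensor_mask(z_v, z_a)  # per-sample damage flags
    print(logits.shape, suspect)
```

In the paper the shared space is learned with commuting conditional GANs rather than the plain encoders used here; the sketch only illustrates how a fused prediction and a damage flag can both be derived from the same latent representations.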
