SelectFusion: A Generic Framework to Selectively Learn Multisensory Fusion

12/30/2019
by Changhao Chen et al.

Autonomous vehicles and mobile robotic systems are typically equipped with multiple sensors to provide redundancy. By integrating observations from different sensors, these mobile agents can perceive the environment and estimate system states, e.g. locations and orientations. Although deep learning approaches for multimodal odometry estimation and localization have gained traction, they rarely address robust sensor fusion, a necessary consideration for dealing with noisy or incomplete sensor observations in the real world. Moreover, current deep odometry models suffer from a lack of interpretability. To this end, we propose SelectFusion, an end-to-end selective sensor fusion module that can be applied to complementary pairs of sensor modalities, such as monocular images and inertial measurements, or depth images and LIDAR point clouds. During prediction, the network assesses the reliability of the latent features from each sensor modality and estimates both trajectory at scale and global pose. In particular, we propose two fusion modules based on different attention strategies, deterministic soft fusion and stochastic hard fusion, and we offer a comprehensive comparison of these strategies against trivial direct fusion. We evaluate all fusion strategies both in ideal conditions and on progressively degraded datasets exhibiting occlusions, noisy and missing data, and temporal misalignment between sensors, and we investigate how effectively each strategy attends to the most reliable features, which in itself provides insight into the operation of the various models.
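The two attention strategies named in the abstract can be illustrated with a minimal pure-Python sketch. This is not the paper's implementation: the gate is a single placeholder linear layer (`weights`, `bias` would be learned end-to-end in practice), and the hard fusion path uses a standard two-class Gumbel-softmax relaxation to draw near-binary keep/drop decisions per feature channel, which is the usual way such stochastic gates are trained.

```python
import math
import random

random.seed(0)

def _sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def _gate_logits(joint, weights, bias):
    # one linear unit per output channel: logits[i] = w_i . joint + b_i
    return [sum(w * x for w, x in zip(row, joint)) + b
            for row, b in zip(weights, bias)]

def soft_fusion(feat_a, feat_b, weights, bias):
    """Deterministic soft fusion: a sigmoid gate, conditioned on both
    modalities, reweights every channel of the concatenated features."""
    joint = feat_a + feat_b
    mask = [_sigmoid(l) for l in _gate_logits(joint, weights, bias)]
    return [f * m for f, m in zip(joint, mask)]

def hard_fusion(feat_a, feat_b, weights, bias, temperature=0.5):
    """Stochastic hard fusion: per channel, a two-class Gumbel-softmax
    draws a near-binary keep/drop decision from the same gate logits."""
    joint = feat_a + feat_b
    mask = []
    for logit in _gate_logits(joint, weights, bias):
        # Gumbel noise for the "keep" and "drop" classes
        g_keep = -math.log(-math.log(max(random.random(), 1e-12)))
        g_drop = -math.log(-math.log(max(random.random(), 1e-12)))
        keep = math.exp((logit + g_keep) / temperature)
        drop = math.exp((-logit + g_drop) / temperature)
        mask.append(keep / (keep + drop))  # approaches 0/1 as temperature -> 0
    return [f * m for f, m in zip(joint, mask)]

# toy latent features for two modalities (e.g. visual and inertial)
feat_a = [0.5, -1.2, 0.3, 2.0]
feat_b = [1.1, 0.0, -0.7, 0.4]
n = len(feat_a) + len(feat_b)
weights = [[0.1] * n for _ in range(n)]  # placeholder for a learned gate
bias = [0.0] * n

fused_soft = soft_fusion(feat_a, feat_b, weights, bias)
fused_hard = hard_fusion(feat_a, feat_b, weights, bias, temperature=0.1)
```

Both strategies produce a fused vector of the same length as the concatenated inputs; the difference is that soft fusion scales each channel continuously, while hard fusion (at low temperature) effectively keeps or blocks channels outright, which is what makes the learned selection easier to interpret.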



Related research

03/04/2019 - Selective Sensor Fusion for Neural Visual-Inertial Odometry
Deep learning approaches for Visual-Inertial Odometry (VIO) have proven ...

03/08/2023 - Robust Multimodal Fusion for Human Activity Recognition
The proliferation of IoT and mobile devices equipped with heterogeneous ...

10/19/2022 - MMRNet: Improving Reliability for Multimodal Computer Vision for Bin Picking via Multimodal Redundancy
Recently, there has been tremendous interest in industry 4.0 infrastruct...

04/16/2023 - TransFusionOdom: Interpretable Transformer-based LiDAR-Inertial Fusion Odometry Estimation
Multi-modal fusion of sensors is a commonly used approach to enhance the...

08/23/2023 - Path-Constrained State Estimation for Rail Vehicles
Globally rising demand for transportation by rail is pushing existing in...

06/26/2022 - AFT-VO: Asynchronous Fusion Transformers for Multi-View Visual Odometry Estimation
Motion estimation approaches typically employ sensor fusion techniques, ...

11/16/2022 - Real Estate Attribute Prediction from Multiple Visual Modalities with Missing Data
The assessment and valuation of real estate requires large datasets with...
