Selective Sensor Fusion for Neural Visual-Inertial Odometry

03/04/2019
by Changhao Chen, et al.

Deep learning approaches to Visual-Inertial Odometry (VIO) have proven successful, but they rarely incorporate robust fusion strategies for dealing with imperfect input sensory data. We propose a novel end-to-end selective sensor fusion framework for monocular VIO, which fuses monocular images and inertial measurements to estimate the trajectory while improving robustness to real-life issues such as missing or corrupted data and poor sensor synchronization. In particular, we propose two fusion modalities based on different masking strategies: deterministic soft fusion and stochastic hard fusion, and we compare them against previously proposed direct fusion baselines. During testing, the network selectively processes the features of the available sensor modalities and produces a trajectory at scale. We present a thorough investigation of performance on three public datasets covering autonomous driving, Micro Aerial Vehicle (MAV) flight, and hand-held VIO. The results demonstrate the effectiveness of the selective fusion strategies, which outperform direct fusion, particularly in the presence of corrupted data. In addition, we study the interpretability of the fusion networks by visualising the masking layers in different scenarios and under varying data corruption, revealing interesting correlations between the fusion networks and imperfect sensory input data.
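To make the two masking strategies concrete, here is a minimal NumPy sketch of the core idea as described in the abstract: visual and inertial feature vectors are concatenated, a mask is generated from the fused features, and the mask re-weights (soft fusion) or selects/drops (hard fusion) each feature channel before pose regression. All names, dimensions, and the single-matrix mask generator are illustrative assumptions, not the paper's actual architecture; the hard mask below uses a Gumbel-softmax-style sample, which is one common way to realise stochastic binary gating.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_mask(fused, W):
    # Deterministic soft fusion: a sigmoid gate produces a continuous
    # weight in (0, 1) for every channel of the fused feature vector.
    return 1.0 / (1.0 + np.exp(-(fused @ W)))

def hard_mask(fused, W, temperature=1.0):
    # Stochastic hard fusion: sample a binary keep/drop decision per
    # channel using Gumbel noise on [keep, drop] logits (a
    # Gumbel-softmax-style sketch; the real model is trained end-to-end).
    scores = fused @ W
    logits = np.stack([scores, -scores], axis=-1)          # [keep, drop]
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + gumbel) / temperature
    return (y[..., 0] > y[..., 1]).astype(float)           # hard 0/1 mask

# Toy stand-ins for the visual (CNN) and inertial (RNN) feature encodings.
visual = rng.normal(size=(1, 4))
inertial = rng.normal(size=(1, 3))
fused = np.concatenate([visual, inertial], axis=-1)        # shape (1, 7)
W = rng.normal(size=(7, 7))                                # mask-generator weights (hypothetical)

soft = soft_mask(fused, W)      # continuous weights in (0, 1)
hard = hard_mask(fused, W)      # binary selection in {0, 1}
selected = fused * hard         # only selected features reach the pose regressor
```

Under this sketch, a corrupted modality would tend to receive near-zero soft weights (or a zero hard mask) on its channels, which is the behaviour the paper's masking-layer visualisations probe.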

