UNO: Uncertainty-aware Noisy-Or Multimodal Fusion for Unanticipated Input Degradation

11/06/2019
by   Junjiao Tian, et al.

The fusion of multiple sensor modalities, especially through deep learning architectures, has been an active area of study. However, an under-explored aspect of such work is whether the methods can be robust to degradations across their input modalities, especially when they must generalize to degradations not seen during training. In this work, we propose an uncertainty-aware fusion scheme to effectively fuse inputs that might suffer from a range of known and unknown degradations. Specifically, we analyze a number of uncertainty measures, each of which captures a different aspect of uncertainty, and we propose a novel way to fuse degraded inputs by scaling modality-specific output softmax probabilities. We additionally propose a novel data-dependent spatial temperature scaling method to complement these existing uncertainty measures. Finally, we integrate the uncertainty-scaled output from each modality using a probabilistic noisy-or fusion method. In a photo-realistic simulation environment (AirSim), we show that our method achieves significantly better results on a semantic segmentation task, compared to state-of-the-art fusion architectures, on a range of degradations (e.g. fog, snow, frost, and various other types of noise), some of which are unknown during training. We specifically improve upon the state-of-the-art [1] across 28 degradations.

[1] Abhinav Valada, Rohit Mohan, and Wolfram Burgard. Self-Supervised Model Adaptation for Multimodal Semantic Segmentation. arXiv:1808.03833 [cs.CV], Aug. 2018.
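The two core mechanics the abstract describes, temperature-scaled softmax probabilities per modality and a probabilistic noisy-or combination of those probabilities, can be illustrated with a short sketch. This is not the paper's implementation (the paper's temperature is spatial and data-dependent); the function names, the fixed per-modality temperatures, and the final renormalization step are assumptions made here for illustration.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: a higher temperature flattens the
    # distribution, expressing more uncertainty for a degraded modality.
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def noisy_or_fusion(probs_per_modality):
    # Noisy-or over modalities: P(c) = 1 - prod_m (1 - p_m(c)),
    # then renormalized over classes so the output is a distribution.
    stacked = np.stack(probs_per_modality)          # (M, ..., C)
    fused = 1.0 - np.prod(1.0 - stacked, axis=0)    # (..., C)
    return fused / fused.sum(axis=-1, keepdims=True)

# Example: two modalities (say RGB and depth) over three classes.
rgb_logits = np.array([2.0, 0.5, 0.1])
depth_logits = np.array([0.3, 1.5, 0.2])

# Give the (hypothetically degraded) depth modality a higher temperature,
# so its vote carries less weight in the fusion.
p_rgb = softmax(rgb_logits, temperature=1.0)
p_depth = softmax(depth_logits, temperature=2.0)

fused = noisy_or_fusion([p_rgb, p_depth])
```

With these numbers the confident RGB prediction dominates the flattened depth prediction, which is the intended behavior: scaling the softmax of an uncertain modality toward uniform lets the noisy-or be driven by the modalities that remain reliable.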


