Self-Supervised Model Adaptation for Multimodal Semantic Segmentation

08/11/2018
by Abhinav Valada, et al.

Learning to reliably perceive and understand the scene is an integral enabler for robots to operate in the real world. This problem is inherently challenging due to the multitude of object types as well as appearance changes caused by varying illumination and weather conditions. Leveraging complementary modalities can enable learning of semantically richer representations that are resilient to such perturbations. Despite the tremendous progress in recent years, most multimodal convolutional neural network approaches directly concatenate feature maps from individual modality streams, rendering the model incapable of focusing only on relevant complementary information for fusion. To address this limitation, we propose a multimodal semantic segmentation framework that dynamically adapts the fusion of modality-specific features while being sensitive to the object category, spatial location, and scene context in a self-supervised manner. Specifically, we propose an architecture consisting of two modality-specific encoder streams that fuse intermediate encoder representations into a single decoder using our proposed self-supervised model adaptation fusion mechanism, which optimally combines complementary features. As intermediate representations are not aligned across modalities, we introduce an attention scheme for better correlation. In addition, we propose a computationally efficient unimodal segmentation architecture termed AdapNet++ that incorporates a new encoder with multiscale residual units and an efficient atrous spatial pyramid pooling module that has a larger effective receptive field with more than 10x fewer parameters, complemented by a strong decoder with a multi-resolution supervision scheme that recovers high-resolution details. Comprehensive empirical evaluations on several benchmarks demonstrate that both our unimodal and multimodal architectures achieve state-of-the-art performance.
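To make the adaptive fusion idea concrete, the following is a minimal sketch in PyTorch: two modality-specific feature maps are concatenated, a small bottleneck predicts data-dependent channel weights from the combined representation, and the reweighted features are projected back to a single stream for the decoder. The class name, layer sizes, and reduction ratio here are illustrative assumptions, not the authors' reference implementation.

# Minimal sketch of an adaptive fusion block in the spirit of the proposed
# mechanism; hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn


class AdaptiveFusionBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Bottleneck over the concatenated (2 * channels) representation
        # that emits a gating weight for every channel of both modalities.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, (2 * channels) // reduction, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d((2 * channels) // reduction, 2 * channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )
        # Projects the reweighted concatenation back to a single stream.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        combined = torch.cat([feat_a, feat_b], dim=1)
        weights = self.gate(combined)          # data-dependent channel gating
        return self.fuse(combined * weights)   # fused single-stream features


# Usage: fuse two 256-channel intermediate encoder feature maps.
rgb_feat = torch.randn(1, 256, 48, 96)
depth_feat = torch.randn(1, 256, 48, 96)
fused = AdaptiveFusionBlock(channels=256)(rgb_feat, depth_feat)
print(fused.shape)  # torch.Size([1, 256, 48, 96])

Because the gating weights are computed from the concatenated features themselves, the relative contribution of each modality can vary with the input, which is the behavior the abstract describes as dynamically adapting the fusion.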

