Multi-Modal Attention-based Fusion Model for Semantic Segmentation of RGB-Depth Images

12/25/2019
by Fahimeh Fooladgar, et al.

3D scene understanding is a crucial requirement in computer vision and robotics applications, and semantic segmentation of RGB-Depth images is one of its high-level tasks. With the availability of RGB-D cameras, the accuracy of scene understanding can be improved by exploiting depth features along with appearance features. Because depth images are independent of illumination, they can improve the quality of semantic labeling alongside RGB images, and considering both the common and the modality-specific features of the two inputs improves segmentation performance. A central problem in RGB-Depth semantic segmentation is how to fuse the two modalities so that the advantages of each are exploited while remaining computationally efficient. Recently, methods based on deep convolutional neural networks have reached state-of-the-art results through early, late, and middle fusion strategies. In this paper, an efficient encoder-decoder model with an attention-based fusion block is proposed to integrate the mutual influences between the feature maps of the two modalities. This block explicitly extracts the interdependencies among the concatenated feature maps of the modalities to produce more powerful feature representations from RGB-Depth images. Extensive experiments on three challenging datasets, NYU-V2, SUN RGB-D, and Stanford 2D-3D-Semantic, show that the proposed network outperforms state-of-the-art models with respect to computational cost as well as model size. The results also illustrate the effectiveness of the proposed lightweight attention-based fusion model in terms of accuracy.
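
The abstract does not specify the internal design of the attention-based fusion block, so the following PyTorch sketch only illustrates the general idea it describes: concatenating RGB and depth feature maps and learning channel-wise attention over the concatenation before projecting back to a single fused map. The class name AttentionFusionBlock, the reduction ratio, and the 1x1 projection are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class AttentionFusionBlock(nn.Module):
    """Hypothetical channel-attention fusion over concatenated RGB-D features.

    Illustrative sketch only: squeeze the concatenated RGB and depth feature
    maps with global average pooling, compute per-channel attention weights,
    reweight the concatenation, then project back to the original channel count.
    """

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        fused = 2 * channels  # RGB and depth feature maps concatenated along channels
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                          # B x 2C x 1 x 1
            nn.Conv2d(fused, fused // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(fused // reduction, fused, kernel_size=1),
            nn.Sigmoid(),                                     # per-channel weights in (0, 1)
        )
        self.project = nn.Conv2d(fused, channels, kernel_size=1)

    def forward(self, rgb_feat: torch.Tensor, depth_feat: torch.Tensor) -> torch.Tensor:
        x = torch.cat([rgb_feat, depth_feat], dim=1)  # B x 2C x H x W
        x = x * self.attention(x)                     # reweight channels by learned attention
        return self.project(x)                        # fused map, B x C x H x W


# Usage: fuse 64-channel RGB and depth feature maps at one encoder stage.
rgb = torch.randn(2, 64, 60, 80)
depth = torch.randn(2, 64, 60, 80)
fused = AttentionFusionBlock(64)(rgb, depth)  # -> shape (2, 64, 60, 80)
```

Such a block could be inserted at one or more encoder stages to merge the RGB and depth streams before decoding; the exact placement and block details in the paper would need to be taken from the full text.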

Related research

12/17/2018  Learning Common Representation from RGB and Depth Images
We propose a new deep learning architecture for the tasks of semantic se...

05/24/2019  ACNet: Attention Based Network to Exploit Complementary Features for RGBD Semantic Segmentation
Compared to RGB semantic segmentation, RGBD semantic segmentation can ac...

03/31/2020  Attention-based Multi-modal Fusion Network for Semantic Scene Completion
This paper presents an end-to-end 3D convolutional network named attenti...

09/24/2018  Incorporating Luminance, Depth and Color Information by Fusion-based Networks for Semantic Segmentation
Semantic segmentation is paramount to accomplish many scene understandin...

04/10/2022  Scale Invariant Semantic Segmentation with RGB-D Fusion
In this paper, we propose a neural network architecture for scale-invari...

08/11/2018  Self-Supervised Model Adaptation for Multimodal Semantic Segmentation
Learning to reliably perceive and understand the scene is an integral en...

05/11/2023  EAML: Ensemble Self-Attention-based Mutual Learning Network for Document Image Classification
In the recent past, complex deep neural networks have received huge inte...
