Single Frame Semantic Segmentation Using Multi-Modal Spherical Images

08/18/2023
by Suresh Guttikonda, et al.

In recent years, the research community has shown considerable interest in panoramic images, which offer a 360-degree directional perspective. To fully realize their potential, multiple data modalities can be fed in and their complementary characteristics exploited for more robust and richer scene interpretation based on semantic segmentation. Existing research, however, has mostly concentrated on pinhole RGB-X semantic segmentation. In this study, we propose a transformer-based cross-modal fusion architecture to bridge the gap between multi-modal fusion and omnidirectional scene perception. We employ distortion-aware modules to address the extreme object deformations and panoramic distortions that result from the equirectangular representation. Additionally, we conduct cross-modal interactions for feature rectification and information exchange before merging the features, in order to communicate long-range contexts for bi-modal and tri-modal feature streams. In thorough tests using combinations of four different modality types on three indoor panoramic-view datasets, our technique achieved state-of-the-art mIoU performance: 60.60 on Stanford2D3DS (RGB-HHA) and 71.97 (RGB-D). We plan to release all code and trained models soon.
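The cross-modal feature rectification mentioned in the abstract can be sketched in miniature. The function below is a hypothetical, simplified illustration under assumed semantics (it is not the paper's implementation, and the name `rectify` is invented for this sketch): each modality's feature channels are corrected by a sigmoid-gated contribution from the other modality before the two streams are merged.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def rectify(feat_a, feat_b, weight=0.5):
    """Toy cross-modal feature rectification (illustrative only).

    feat_a, feat_b: per-channel feature values for two modalities
    (e.g. RGB and depth). Each channel of one modality receives a
    gated correction derived from the other modality's channel.
    """
    # gate for modality A, computed from modality B (and vice versa)
    gates_a = [sigmoid(b) for b in feat_b]
    gates_b = [sigmoid(a) for a in feat_a]
    # residual-style correction: keep the original feature and add
    # a weighted, gated contribution from the complementary stream
    rect_a = [a + weight * g * b for a, b, g in zip(feat_a, feat_b, gates_a)]
    rect_b = [b + weight * g * a for a, b, g in zip(feat_a, feat_b, gates_b)]
    return rect_a, rect_b

rgb_feat, depth_feat = [1.0, -2.0], [0.5, 0.0]
rect_rgb, rect_depth = rectify(rgb_feat, depth_feat)
```

In the actual architecture this exchange happens on full feature maps inside a transformer, with learned attention rather than a fixed sigmoid gate; the sketch only conveys the residual "correct one stream with the other" idea.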


Related research

03/09/2022
CMX: Cross-Modal Fusion for RGB-X Semantic Segmentation with Transformers
The performance of semantic segmentation of RGB images can be advanced b...

03/31/2020
Attention-based Multi-modal Fusion Network for Semantic Scene Completion
This paper presents an end-to-end 3D convolutional network named attenti...

02/08/2023
SwinCross: Cross-modal Swin Transformer for Head-and-Neck Tumor Segmentation in PET/CT Images
Radiotherapy (RT) combined with cetuximab is the standard treatment for ...

03/02/2023
Delivering Arbitrary-Modal Semantic Segmentation
Multimodal fusion can make semantic segmentation more robust. However, f...

07/17/2020
Bi-directional Cross-Modality Feature Propagation with Separation-and-Aggregation Gate for RGB-D Semantic Segmentation
Depth information has proven to be a useful cue in the semantic segmenta...

02/27/2022
DXM-TransFuse U-net: Dual Cross-Modal Transformer Fusion U-net for Automated Nerve Identification
Accurate nerve identification is critical during surgical procedures for...

08/15/2023
UniTR: A Unified and Efficient Multi-Modal Transformer for Bird's-Eye-View Representation
Jointly processing information from multiple sensors is crucial to achie...
