A Tri-attention Fusion Guided Multi-modal Segmentation Network

11/02/2021
by Tongxue Zhou, et al.

In multi-modal segmentation, the correlation between different modalities can be exploited to improve segmentation results. Motivated by the correlation between different MR modalities, we propose in this paper a multi-modality segmentation network guided by a novel tri-attention fusion. Our network consists of N model-independent encoding paths for N image sources, a tri-attention fusion block, a dual-attention fusion block, and a decoding path. The model-independent encoding paths capture modality-specific features from the N modalities. Since not all features extracted by the encoders are useful for segmentation, we propose a dual-attention-based fusion that re-weights the features along the modality and spatial dimensions, suppressing less informative features and emphasizing the useful ones for each modality at each position. Because a strong correlation exists between different modalities, we extend the dual-attention fusion block with a correlation attention module to form the tri-attention fusion block. In the correlation attention module, a correlation description block first learns the correlation between modalities, and a constraint based on this correlation then guides the network to learn latent correlated features that are more relevant for segmentation. Finally, the fused feature representation is projected by the decoder to produce the segmentation results. Experimental results on the BraTS 2018 brain tumor segmentation dataset demonstrate the effectiveness of the proposed method.
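
To make the fusion mechanism concrete, below is a minimal PyTorch sketch of the dual-attention re-weighting described in the abstract. Everything here is an assumption for illustration: the class name DualAttentionFusion, the channel sizes, and the exact way the modality and spatial attention weights are computed and combined are not taken from the paper, which may implement the block differently (and likely in 3D for MR volumes).

```python
# A minimal sketch of the dual-attention re-weighting idea, assuming a
# PyTorch implementation. Class and variable names are hypothetical; the
# paper's block may differ (e.g. it presumably operates on 3D MR volumes).
import torch
import torch.nn as nn


class DualAttentionFusion(nn.Module):
    """Fuses N per-modality feature maps by re-weighting them along the
    modality dimension (one weight per modality) and the spatial dimension
    (one weight per position), then summing the re-weighted maps."""

    def __init__(self, num_modalities: int, channels: int):
        super().__init__()
        # Modality attention: global average pooling -> MLP -> sigmoid,
        # producing one scalar weight per modality.
        self.modality_mlp = nn.Sequential(
            nn.Linear(num_modalities * channels, channels),
            nn.ReLU(inplace=True),
            nn.Linear(channels, num_modalities),
            nn.Sigmoid(),
        )
        # Spatial attention: 1x1 convolution over the concatenated maps,
        # producing one scalar weight per spatial position.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(num_modalities * channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feats):
        # feats: list of N tensors, each of shape (B, C, H, W).
        stacked = torch.stack(feats, dim=1)               # (B, N, C, H, W)
        b, n, c, h, w = stacked.shape
        flat = stacked.reshape(b, n * c, h, w)

        # One weight per modality, from globally pooled features.
        pooled = flat.mean(dim=(2, 3))                    # (B, N*C)
        m_weights = self.modality_mlp(pooled).view(b, n, 1, 1, 1)

        # One weight per position, shared across modalities and channels.
        s_weights = self.spatial_conv(flat).unsqueeze(1)  # (B, 1, 1, H, W)

        # Suppress less informative features, emphasize useful ones, fuse.
        return (stacked * m_weights * s_weights).sum(dim=1)  # (B, C, H, W)


# Usage with four MR modalities (e.g. T1, T1c, T2, FLAIR) at 64 channels:
fusion = DualAttentionFusion(num_modalities=4, channels=64)
feats = [torch.randn(2, 64, 32, 32) for _ in range(4)]
print(fusion(feats).shape)  # torch.Size([2, 64, 32, 32])
```

The correlation attention module described in the abstract would add a third branch on top of this block: a correlation description block that models the dependency between modality features, plus a training constraint derived from it. That part is omitted from the sketch because the abstract does not specify its exact form.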

Related research

02/05/2021 · 3D Medical Multi-modal Segmentation Network Guided by Multi-source Correlation Constraint
03/19/2020 · Brain tumor segmentation with missing modalities via latent multi-source correlation representation
09/05/2020 · Max-Fusion U-Net for Multi-Modal Pathology Segmentation with Attention and Dynamic Resampling
12/25/2018 · MMFNet: A Multi-modality MRI Fusion Network for Segmentation of Nasopharyngeal Carcinoma
04/08/2021 · Multimodal Fusion Refiner Networks
08/26/2022 · TFusion: Transformer based N-to-One Multimodal Fusion Block
08/04/2023 · Multi-interactive Feature Learning and a Full-time Multi-modality Benchmark for Image Fusion and Segmentation
