Max-Fusion U-Net for Multi-Modal Pathology Segmentation with Attention and Dynamic Resampling

09/05/2020
by Haochuan Jiang, et al.

Automatic segmentation of multi-sequence (multi-modal) cardiac MR (CMR) images plays a significant role in the diagnosis and management of a variety of cardiac diseases. However, the performance of such algorithms depends heavily on how the multi-modal information is fused. Furthermore, certain diseases, such as myocardial infarction, display irregular shapes and occupy small regions at random locations in the images. These facts make pathology segmentation of multi-modal CMR images a challenging task. In this paper, we present the Max-Fusion U-Net, which achieves improved pathology segmentation performance given aligned multi-modal images of the LGE, T2-weighted, and bSSFP modalities. Specifically, modality-specific features are extracted by dedicated encoders and then fused with a pixel-wise maximum operator. Together with the corresponding encoder features, these fused representations are propagated to the decoding layers through U-Net skip-connections. Furthermore, a spatial-attention module is applied in the last decoding layer to encourage the network to focus on the small, semantically meaningful pathological regions that trigger relatively high neuron responses. We also use a simple image-patch extraction strategy that dynamically resamples training examples with varying spatial and batch sizes. Under limited GPU memory, this strategy reduces class imbalance and forces the model to focus on regions around the pathology of interest, further improving segmentation accuracy and reducing the misclassification of pathology. We evaluate our method on the Myocardial Pathology Segmentation combining multi-sequence CMR (MyoPS) dataset, which involves these three modalities. Extensive experiments demonstrate the effectiveness of the proposed model, which outperforms related baselines.
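To make the fusion and attention steps concrete, below is a minimal PyTorch-style sketch of pixel-wise maximum fusion over modality-specific encoder features followed by a simple spatial-attention gate. All module names, channel sizes, and tensor shapes are illustrative assumptions and are not taken from the paper.

```python
import torch
import torch.nn as nn

class MaxFusion(nn.Module):
    """Pixel-wise maximum over modality-specific feature maps (illustrative)."""
    def forward(self, features):
        # features: list of tensors, one per modality, each of shape (B, C, H, W).
        return torch.stack(features, dim=0).max(dim=0).values

class SpatialAttention(nn.Module):
    """Simple spatial-attention gate for a decoder feature map (illustrative)."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):
        # Attention map in [0, 1]; re-weights each spatial location of x.
        attn = torch.sigmoid(self.conv(x))
        return x * attn

# Example: fuse encoder features from the LGE, T2-weighted, and bSSFP branches.
lge, t2, bssfp = (torch.randn(2, 64, 32, 32) for _ in range(3))
fused = MaxFusion()([lge, t2, bssfp])      # (2, 64, 32, 32)
attended = SpatialAttention(64)(fused)     # same shape, spatially re-weighted
```

A pixel-wise maximum is permutation-invariant over modalities and keeps, at each location, the strongest modality response, which makes it a natural choice when a pathology appears clearly in only some sequences.

The dynamic resampling can likewise be sketched as cropping patches of randomly chosen sizes centred near pathology pixels, pairing larger patches with smaller batches so that GPU memory use stays roughly constant. The helper below, including the function name sample_patch and the candidate patch sizes, is a hypothetical illustration rather than the authors' implementation.

```python
import numpy as np

def sample_patch(image, label, pathology_class, patch_sizes=(96, 128, 160)):
    """Crop a patch of a randomly chosen size, centred near a pathology pixel
    when one exists (hypothetical helper, not the authors' exact code)."""
    size = int(np.random.choice(patch_sizes))
    ys, xs = np.where(label == pathology_class)
    if len(ys) > 0:
        i = np.random.randint(len(ys))          # pick a pathology pixel
        cy, cx = ys[i], xs[i]
    else:
        cy = np.random.randint(image.shape[0])  # fall back to a random centre
        cx = np.random.randint(image.shape[1])
    y0 = int(np.clip(cy - size // 2, 0, max(image.shape[0] - size, 0)))
    x0 = int(np.clip(cx - size // 2, 0, max(image.shape[1] - size, 0)))
    return image[y0:y0 + size, x0:x0 + size], label[y0:y0 + size, x0:x0 + size]

image = np.random.rand(256, 256).astype(np.float32)
label = np.random.randint(0, 3, size=(256, 256))
patch_img, patch_lab = sample_patch(image, label, pathology_class=2)

# Pair larger patches with smaller batches so GPU memory stays roughly constant
# (the pixel budget of 4 * 160 * 160 is an arbitrary illustrative value).
batch_size = max(1, (4 * 160 * 160) // (patch_img.shape[0] * patch_img.shape[1]))
```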

Related research

02/11/2020
Hi-Net: Hybrid-fusion Network for Multi-modal MR Image Synthesis
Magnetic resonance imaging (MRI) is a widely used neuroimaging technique...

11/02/2021
A Tri-attention Fusion Guided Multi-modal Segmentation Network
In the field of multimodal segmentation, the correlation between differe...

02/05/2021
3D Medical Multi-modal Segmentation Network Guided by Multi-source Correlation Constraint
In the field of multimodal segmentation, the correlation between differe...

11/17/2020
Anatomy Prior Based U-net for Pathology Segmentation with Attention
Pathological area segmentation in cardiac magnetic resonance (MR) images...

01/06/2020
Unpaired Multi-modal Segmentation via Knowledge Distillation
Multi-modal learning is typically performed with network architectures c...

04/09/2018
HyperDense-Net: A hyper-densely connected CNN for multi-modal image segmentation
Recently, dense connections have attracted substantial attention in comp...

02/26/2015
Coercive Region-level Registration for Multi-modal Images
We propose a coercive approach to simultaneously register and segment mu...
