MMFNet: A Multi-modality MRI Fusion Network for Segmentation of Nasopharyngeal Carcinoma

12/25/2018
by Huai Chen et al.

Segmentation of nasopharyngeal carcinoma (NPC) from Magnetic Resonance Imaging (MRI) is a crucial step in radiotherapy planning, improving clinical outcomes and reducing radiation-associated toxicity. Manually delineating the boundary of NPC slice by slice is time-consuming and labor-intensive for radiologists. In addition, due to the complex anatomical structure of NPC, automatic algorithms based on single-modality MRI lack the capability to produce accurate delineations. To address the weak distinction between lesion regions and adjacent normal tissues in any single MRI modality, we propose a multi-modality MRI fusion network (MMFNet) that exploits three MRI modalities for NPC segmentation. The backbone is a multi-encoder network composed of several modality-specific encoders and a single decoder. Skip connections combine low-level features from the different MRI modalities with high-level features. Additionally, we propose a fusion block to effectively merge features from multi-modality MRI: it first highlights informative features and regions of interest, and the weighted features are then fused and further refined by a residual fusion block. Moreover, we propose a training strategy named self-transfer to initialize the encoders of the multi-encoder network, encouraging each encoder to fully mine its specific MRI modality. Our framework makes effective use of multi-modality medical datasets, and the proposed modules, such as the fusion block and self-transfer, generalize readily to other multi-modality tasks.
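The architecture described above can be sketched in miniature: three modality-specific encoders produce features, a fusion block re-weights each modality's features (highlighting informative channels), and the weighted sum is refined through a residual connection. This is a hedged illustration of the idea, not the authors' implementation; all names (`encoder`, `fusion_block`, the toy linear layers) and the choice of a sigmoid gate are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, w):
    """Toy modality-specific encoder: one linear map + ReLU."""
    return np.maximum(x @ w, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fusion_block(feats, gate_w, refine_w):
    """Gate each modality's features (attention-style weighting),
    sum them, then refine the result with a residual branch."""
    weighted = [f * sigmoid(f @ gw) for f, gw in zip(feats, gate_w)]
    fused = sum(weighted)
    return fused + np.maximum(fused @ refine_w, 0.0)  # residual refinement

C = 8  # feature channels
x = rng.standard_normal((4, C))  # 4 voxels/patches on a shared input grid
enc_w = [rng.standard_normal((C, C)) for _ in range(3)]   # e.g. T1, T2, contrast-enhanced T1
gate_w = [rng.standard_normal((C, C)) for _ in range(3)]
refine_w = rng.standard_normal((C, C))

feats = [encoder(x, w) for w in enc_w]  # three modality-specific feature maps
out = fusion_block(feats, gate_w, refine_w)
print(out.shape)  # (4, 8)
```

In the real network the encoders are deep convolutional stacks and the decoder upsamples the fused features back to the segmentation mask; the sketch only captures the fusion pattern of weight, sum, and residually refine.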


