Toward Unpaired Multi-modal Medical Image Segmentation via Learning Structured Semantic Consistency

06/21/2022
by   Jie Yang, et al.

Integrating multi-modal data to improve medical image analysis has received great attention recently. However, due to the modal discrepancy, how to use a single model to process data from multiple modalities remains an open issue. In this paper, we propose a novel scheme to achieve better pixel-level segmentation for unpaired multi-modal medical images. Unlike previous methods, which adopt both modality-specific and modality-shared modules to accommodate the appearance variance of different modalities while extracting common semantic information, our method is based on a single Transformer with a carefully designed External Attention Module (EAM) that learns structured semantic consistency (i.e., semantic class representations and their correlations) between modalities during training. In practice, this structured semantic consistency across modalities is achieved progressively by applying consistency regularization at the modality level and the image level, respectively. The proposed EAMs learn semantic consistency for representations at different scales and can be discarded once the model is optimized. Therefore, at test time we only need to maintain one Transformer for predictions on all modalities, which keeps the model both simple and easy to use. To demonstrate the effectiveness of the proposed method, we conduct experiments on two medical image segmentation scenarios: (1) cardiac structure segmentation and (2) abdominal multi-organ segmentation. Extensive results show that the proposed method outperforms state-of-the-art methods by a wide margin, and even achieves competitive performance with extremely limited training samples (e.g., 1 or 3 annotated CT or MRI images) for one specific modality.
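The abstract does not specify the internals of the paper's EAM, but the name points to the external-attention mechanism, in which tokens attend to a small learnable memory shared across all inputs rather than to each other. The sketch below shows that generic mechanism in NumPy; the memory sizes, variable names, and the interpretation of the memory slots as shared semantic-class representations are illustrative assumptions, not the authors' exact design.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax along the given axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def external_attention(x, m_k, m_v, eps=1e-9):
    """Generic external attention over a small shared memory.

    x   : (N, d) flattened pixel/token features from one modality
    m_k : (S, d) external key memory, shared across inputs (and,
          by assumption here, across modalities)
    m_v : (S, d) external value memory
    """
    attn = softmax(x @ m_k.T, axis=1)                       # (N, S): affinity of each token to each memory slot
    attn = attn / (attn.sum(axis=0, keepdims=True) + eps)   # double normalization over tokens
    return attn @ m_v                                       # (N, d): features rebuilt from the shared memory

rng = np.random.default_rng(0)
x = rng.standard_normal((196, 64))    # e.g. a 14x14 feature map, flattened
m_k = rng.standard_normal((8, 64))    # 8 memory slots (hypothetical semantic classes)
m_v = rng.standard_normal((8, 64))
out = external_attention(x, m_k, m_v)
print(out.shape)  # (196, 64)
```

Because the memory is shared rather than computed per image, features from unpaired CT and MRI inputs can be routed through the same slots, which is one plausible way a module like this could encourage cross-modal semantic consistency during training and then be discarded at test time.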

Related research

07/21/2021 · Modality-aware Mutual Learning for Multi-modal Medical Image Segmentation
Liver cancer is one of the most common cancers worldwide. Due to inconsp...

06/23/2022 · Toward Clinically Assisted Colorectal Polyp Recognition via Structured Cross-modal Representation Consistency
The colorectal polyps classification is a critical clinical examination....

01/06/2020 · Unpaired Multi-modal Segmentation via Knowledge Distillation
Multi-modal learning is typically performed with network architectures c...

07/26/2023 · Multi-modal Learning with Missing Modality via Shared-Specific Feature Modelling
The missing modality issue is critical but non-trivial to be solved by m...

11/04/2021 · Towards dynamic multi-modal phenotyping using chest radiographs and physiological data
The healthcare domain is characterized by heterogeneous data modalities,...

08/26/2023 · ReFuSeg: Regularized Multi-Modal Fusion for Precise Brain Tumour Segmentation
Semantic segmentation of brain tumours is a fundamental task in medical ...

02/08/2023 · Multi-Modal Evaluation Approach for Medical Image Segmentation
Manual segmentation of medical images (e.g., segmenting tumors in CT sca...