Unified cross-modality feature disentangler for unsupervised multi-domain MRI abdomen organs segmentation

07/19/2020
by   Jue Jiang, et al.

Our contribution is a unified cross-modality feature disentangling approach for multi-domain image translation and multiple organ segmentation. Using CT as the labeled source domain, our approach learns to segment multi-modal (T1-weighted and T2-weighted) MRI without any labeled MRI data. Our approach uses a variational auto-encoder (VAE) to disentangle image content from style. The VAE constrains the style feature encoding to match a universal prior (Gaussian) that is assumed to span the styles of all source and target modalities. The extracted image style is converted into a latent style scaling code, which, together with the target domain code, modulates the generator to produce multi-modality images from the image content features. Finally, we introduce a joint distribution matching discriminator that combines the translated images with task-relevant segmentation probability maps to further constrain and regularize image-to-image (I2I) translations. We performed extensive comparisons to multiple state-of-the-art I2I translation and segmentation methods. Our approach achieved the lowest average multi-domain image reconstruction error of 1.34±0.04. Our approach produced an average Dice similarity coefficient (DSC) of 0.85 for T1w and 0.90 for T2w MRI for multi-organ segmentation, which was highly comparable to a fully supervised MRI multi-organ segmentation network (DSC of 0.86 for T1w and 0.90 for T2w MRI).
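The two mechanisms the abstract leans on — constraining the style encoding toward a universal Gaussian prior, and turning the style into a scaling code that modulates the generator's content features — can be illustrated with a minimal numpy sketch. This is a hedged toy illustration, not the authors' implementation: the shapes, the AdaIN-style modulation, and all function names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, logvar, rng):
    # Sample a style code z ~ N(mu, sigma^2) via the reparameterization trick,
    # as in a standard VAE.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, I) ): the VAE penalty that pulls every
    # modality's style encoding toward the shared universal Gaussian prior.
    return -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))

def modulate(content, style_scale, style_shift):
    # AdaIN-style modulation (an assumption about how a "latent style scaling
    # code" could act): normalize content features per channel, then rescale
    # and shift them with the style code.
    mean = content.mean(axis=(1, 2), keepdims=True)
    std = content.std(axis=(1, 2), keepdims=True) + 1e-5
    normed = (content - mean) / std
    return style_scale[:, None, None] * normed + style_shift[:, None, None]

# Toy shapes: 8 content feature channels over a 4x4 map, 8-dim style code.
content = rng.standard_normal((8, 4, 4))
mu = 0.1 * rng.standard_normal(8)
logvar = 0.1 * rng.standard_normal(8)

z = reparameterize(mu, logvar, rng)
styled = modulate(content, style_scale=1.0 + z, style_shift=np.zeros(8))
kl = kl_to_standard_normal(mu, logvar)
print(styled.shape, kl)
```

In the full method, one such style code per target modality (selected via the domain code) would drive the shared generator, so a single content encoding can be rendered as CT, T1w, or T2w.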

