SynSeg-Net: Synthetic Segmentation Without Target Modality Ground Truth
A key limitation of deep convolutional neural network (DCNN) based image segmentation methods is their lack of generalizability. Manually traced training images are typically required when segmenting organs in a new imaging modality or from a distinct disease cohort. This manual effort can be alleviated if manually traced images in one imaging modality (e.g., MRI) can train a segmentation network for another imaging modality (e.g., CT). In this paper, we propose an end-to-end synthetic segmentation network (SynSeg-Net) that trains a segmentation network for a target imaging modality without manual labels in that modality. SynSeg-Net is trained using (1) unpaired intensity images from the source and target modalities, and (2) manual labels from the source modality only. SynSeg-Net builds on recent advances in cycle-consistent generative adversarial networks (CycleGAN) and DCNNs. We evaluate SynSeg-Net in two experiments: (1) MRI to CT synthetic segmentation of splenomegaly in abdominal images, and (2) CT to MRI synthetic segmentation of total intracranial volume (TICV) in brain images. The proposed end-to-end approach achieved superior performance to two-stage methods, and in certain scenarios matched the performance of a traditional segmentation network trained with target-modality labels. The source code of SynSeg-Net is publicly available (https://github.com/MASILab/SynSeg-Net).
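The abstract describes a single end-to-end objective that combines an adversarial term (make synthesized target-modality images realistic), a cycle-consistency term (a round-trip translation should recover the input), and a segmentation term supervised only by source-modality labels applied to the synthesized image. The sketch below illustrates how such an objective could be composed; the loss choices (least-squares GAN, L1 cycle, binary cross-entropy) and the weights `lam_cyc`/`lam_seg` are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

# Illustrative loss components of a SynSeg-Net-style objective.
# Images are float arrays; segmentation maps are probabilities in [0, 1].

def lsgan_loss(d_out, target):
    """Least-squares adversarial loss (the variant used by CycleGAN)."""
    return float(np.mean((d_out - target) ** 2))

def cycle_loss(x, x_cycled):
    """L1 cycle-consistency loss between an image and its round-trip
    reconstruction (source -> target -> source)."""
    return float(np.mean(np.abs(x - x_cycled)))

def seg_loss(pred, label, eps=1e-7):
    """Binary cross-entropy between the segmentation of the synthesized
    target image and the manual source-modality label."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(np.mean(-(label * np.log(pred)
                           + (1 - label) * np.log(1 - pred))))

def synseg_objective(d_fake_out, src, src_cycled, seg_pred, seg_label,
                     lam_cyc=10.0, lam_seg=1.0):
    """Generator-side objective: fool the target-modality discriminator,
    stay cycle-consistent, and match the source labels on the synthesized
    image's segmentation. Weights are hypothetical."""
    return (lsgan_loss(d_fake_out, 1.0)          # generator wants D(fake) -> 1
            + lam_cyc * cycle_loss(src, src_cycled)
            + lam_seg * seg_loss(seg_pred, seg_label))
```

Because all three terms share one backward pass in the end-to-end setting, the segmentation gradient also shapes the synthesis network — which is what distinguishes this from a two-stage pipeline that first trains CycleGAN and then trains a segmenter on its frozen outputs.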