Magnetic resonance (MR) imaging is widely used for diagnosis, as it is non-ionizing, non-invasive, and offers a range of contrast mechanisms. However, MR images do not directly provide the electron density information that is essential for some applications, such as MR-based radiotherapy treatment planning or attenuation correction in hybrid PET/MR scanners. A straightforward solution is to additionally acquire a computed tomography (CT) scan, but this is time-consuming, costly, potentially harmful to patients, and requires accurate MR/CT registration. Therefore, to avoid the CT scan, a variety of approaches have been proposed to synthesize CT images from available MR images [1, 4, 5, 6, 7]. For example, using paired MR and CT atlases, atlas-based methods first register multiple atlas MR images to a subject MR image, and then combine the warped atlas CT images to synthesize a subject CT image. Deep learning-based methods design different convolutional neural network (CNN) architectures to directly learn the MR-to-CT mapping.
Although these methods can produce good synthetic images, they rely on a large number of paired CT and MR images, which are hard to obtain in practice, especially for specific MR tissue contrasts. To relax the requirement of paired data, Wolterink et al. [6] and Chartsias et al. [1] used a cycleGAN [8] for MR-to-CT synthesis on unpaired data with promising results. They used a CNN to learn the MR-to-CT mapping with the help of an adversarial loss, which forces synthetic CT images to be indistinguishable from real CT images. To ensure that the synthetic CT image correctly corresponds to the input MR image, another CNN is used to map the synthetic CT image back to the MR domain, and the reconstructed image should be identical to the input MR image (i.e., the cycle-consistency loss).
However, due to the lack of a direct constraint between the synthetic and input images, the cycleGAN cannot guarantee structural consistency between them. As shown in Fig. 1, the reconstructed MR image is almost identical to the input MR image, indicating that cycle consistency is well maintained, yet the synthetic CT image differs considerably from the ground truth, especially in the skull region; this shows that the structure of the synthetic CT image is not consistent with that of the input MR image. To overcome this, Zhang et al. [7] trained two auxiliary CNNs to segment MR and CT images, respectively, and defined a loss that forces the segmentation of the synthetic image to match the ground-truth segmentation of the input image. However, this requires a training dataset with ground-truth segmentations of the MR and CT images, which further complicates the training data requirements.
In this work, we propose a structure-constrained cycleGAN that enforces structural consistency without requiring ground-truth segmentations. Using the modality independent neighborhood descriptor [3], we define a structure-consistency loss that forces the features extracted from the synthetic image to be voxel-wise close to those extracted from the input image. Additionally, we use a position-based selection strategy for choosing training images instead of a completely random selection scheme. Experimental results on synthesizing CT images from brain MR images show that our method achieves significantly better results than a conventional cycleGAN on various metrics, and approaches the performance of a cycleGAN trained with paired data.
In this section, we introduce our proposed structure-constrained cycleGAN. As shown in Fig. 2, our method contains two generators, G_CT and G_MR, which provide the MR-to-CT and CT-to-MR mappings, respectively. In addition, discriminator D_CT is used to distinguish between real and synthetic CT images, and discriminator D_MR does the same for MR images. Our training loss includes three types of terms: an adversarial loss [2] for matching the distribution of synthetic images to the target CT or MR domain; a cycle-consistency loss [8] to prevent the generators from producing synthetic images that are irrelevant to the inputs; and a structure-consistency loss to constrain structural consistency between input and synthetic images.
2.1 Adversarial loss
The adversarial loss [2] is applied to both generators. For the generator G_CT and its discriminator D_CT, the adversarial loss is defined as

\[ \mathcal{L}_{adv}(G_{CT}, D_{CT}) = \mathbb{E}_{x_{CT}}\!\left[\log D_{CT}(x_{CT})\right] + \mathbb{E}_{x_{MR}}\!\left[\log\left(1 - D_{CT}(G_{CT}(x_{MR}))\right)\right], \]

where x_CT and x_MR denote the unpaired input CT and MR images. During the training phase, G_CT tries to generate a synthetic CT image G_CT(x_MR) that is close to a real CT image, i.e., D_CT(G_CT(x_MR)) → 1, while D_CT tries to distinguish the synthetic CT image G_CT(x_MR) from a real image x_CT, i.e., D_CT(x_CT) → 1 and D_CT(G_CT(x_MR)) → 0. Similarly, the adversarial loss for G_MR and D_MR is defined as

\[ \mathcal{L}_{adv}(G_{MR}, D_{MR}) = \mathbb{E}_{x_{MR}}\!\left[\log D_{MR}(x_{MR})\right] + \mathbb{E}_{x_{CT}}\!\left[\log\left(1 - D_{MR}(G_{MR}(x_{CT}))\right)\right]. \]
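As a minimal NumPy sketch (not the paper's implementation), the two sides of this adversarial objective can be written as negated log-likelihoods for minimization; the function names and the small epsilon for numerical stability are our own additions:

```python
import numpy as np

def adversarial_loss_d(d_real, d_fake):
    """Discriminator view, negated for minimization:
    push D(x_CT) -> 1 on real CT images and D(G_CT(x_MR)) -> 0 on synthetic ones."""
    eps = 1e-8  # avoid log(0)
    return -np.mean(np.log(d_real + eps)) - np.mean(np.log(1.0 - d_fake + eps))

def adversarial_loss_g(d_fake):
    """Generator view: push D(G_CT(x_MR)) -> 1."""
    eps = 1e-8
    return -np.mean(np.log(d_fake + eps))
```

With a perfect discriminator (real scores 1, fake scores 0) the discriminator loss approaches zero, while an undecided discriminator output of 0.5 gives the generator a loss of log 2.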
2.2 Cycle-consistency loss
To prevent the generators from producing synthetic images that are irrelevant to the inputs, a cycle-consistency loss [8] is utilized, forcing the reconstructed images G_MR(G_CT(x_MR)) and G_CT(G_MR(x_CT)) to be identical to their inputs x_MR and x_CT. This loss is written as

\[ \mathcal{L}_{cyc}(G_{CT}, G_{MR}) = \mathbb{E}_{x_{MR}}\!\left[\left\| G_{MR}(G_{CT}(x_{MR})) - x_{MR} \right\|_1\right] + \mathbb{E}_{x_{CT}}\!\left[\left\| G_{CT}(G_{MR}(x_{CT})) - x_{CT} \right\|_1\right]. \]
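Given the two reconstructed images, this L1 cycle-consistency term reduces to a simple mean-absolute-difference computation, sketched below in NumPy (an illustrative helper, not the paper's code):

```python
import numpy as np

def cycle_consistency_loss(x_mr, rec_mr, x_ct, rec_ct):
    """L1 cycle-consistency: the reconstructions G_MR(G_CT(x_MR)) and
    G_CT(G_MR(x_CT)) should match the original inputs x_MR and x_CT."""
    return np.mean(np.abs(rec_mr - x_mr)) + np.mean(np.abs(rec_ct - x_ct))
```

The loss is zero exactly when both reconstructions are identical to their inputs.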
(Fig. 3 caption, fragment: (c) the CT image paired with the MR image in (a); (d) visual examples of MIND features extracted at voxels within the paired MR and CT images in (a) and (c).)
2.3 Structure-consistency loss
Since the cycle-consistency loss does not necessarily ensure structural consistency (as discussed in Sec. 1), our method uses an extra structure-consistency loss between the synthetic and input images. However, as these two images lie in the MR and CT domains, respectively, we first map them into a common feature domain using a modality-independent structural feature, and then measure the structural consistency between the synthetic and input images in this feature domain. In this work, we use the modality independent neighborhood descriptor (MIND) [3] as the structural feature. MIND is built on non-local patch-based self-similarity and depends on local image structure rather than intensity values. It has previously been applied to MR/CT image registration as a similarity metric. Figure 3(d) shows visual examples of MIND features extracted at different voxels in MR and CT images. In the following paragraphs, we introduce the MIND feature and our structure-consistency loss in detail.
The MIND feature extracts distinctive image structure by comparing each patch with all its neighbors in a non-local region. As shown in Fig. 3(a), for a voxel x in image I, the MIND feature MIND(I, x) is an |R|-length vector, where R denotes a non-local region around voxel x, and the component for each offset r ∈ R is defined as

\[ \mathrm{MIND}(I, x, r) = \frac{1}{Z} \exp\!\left( -\frac{D_p(I, x, x + r)}{V(I, x)} \right), \quad r \in R, \]

where Z is a normalization constant so that the maximal component of MIND(I, x) is 1, D_p(I, x, x+r) denotes the distance between the two image patches of size P centered at voxel x and voxel x+r in image I, and V(I, x) is an estimate of the local variance at voxel x, which can be written as

\[ V(I, x) = \frac{1}{4} \sum_{n \in N} D_p(I, x, x + n), \]

where N is the 4-neighborhood of voxel x. The patch distance D_p can be computed efficiently for all voxels at once as

\[ D_p(I, x, x + r) = \left[ C * (I - I_r)^2 \right](x), \]

where C is an all-one kernel of the same size as patch P, * denotes convolution, and I_r denotes I translated by r. By doing this, the structural feature can be extracted via several simple operations whose gradients are easy to compute.
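The convolution-based patch distance and the resulting MIND vector can be sketched in NumPy as follows. The 4-neighbourhood search region, 3×3 patch, and edge padding are illustrative choices of ours, since the paper's exact settings are not specified in this text:

```python
import numpy as np

def box_filter(img, patch=3):
    """Convolve with an all-one kernel C of size patch x patch (edge-padded),
    i.e. sum values over each local patch."""
    pad = patch // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for dy in range(patch):
        for dx in range(patch):
            out += padded[dy : dy + h, dx : dx + w]
    return out

def mind(img, offsets=((0, 1), (1, 0), (0, -1), (-1, 0)), patch=3):
    """MIND feature with one channel per offset r. For brevity the non-local
    region R is taken as the 4-neighbourhood, so the same distances also serve
    as the local variance estimate V(I, x)."""
    # D_p(I, x, x+r) = [C * (I - I_r)^2](x) for each offset r
    dists = np.stack(
        [box_filter((img - np.roll(img, off, axis=(0, 1))) ** 2, patch)
         for off in offsets]
    )
    variance = dists.mean(axis=0) + 1e-8        # V(I, x): mean over 4-neighbourhood
    feat = np.exp(-dists / variance)
    return feat / feat.max(axis=0, keepdims=True)  # normalise: max component = 1
```

On a constant image every patch distance is zero, so all components equal 1; on a structured image, the normalization makes the largest component exactly 1 at every voxel.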
Based on the MIND feature introduced above, the structure-consistency loss in our method is defined to enforce the MIND features extracted from the synthetic images G_CT(x_MR) or G_MR(x_CT) to be voxel-wise close to those extracted from their inputs x_MR or x_CT, which can be written as

\[ \mathcal{L}_{structure}(G_{CT}, G_{MR}) = \mathbb{E}_{x_{MR}}\!\left[ \frac{1}{N_{MR}} \left\| \mathrm{MIND}(G_{CT}(x_{MR})) - \mathrm{MIND}(x_{MR}) \right\|_1 \right] + \mathbb{E}_{x_{CT}}\!\left[ \frac{1}{N_{CT}} \left\| \mathrm{MIND}(G_{MR}(x_{CT})) - \mathrm{MIND}(x_{CT}) \right\|_1 \right], \]

where N_MR and N_CT respectively denote the number of voxels in the input images x_MR and x_CT, and ‖·‖_1 is the L1 norm. In this work, we use a fixed non-local region R and patch size P for computing the structure-consistency loss. Furthermore, instead of an all-one kernel C, we utilize a Gaussian kernel with standard deviation σ to reweight the importance of voxels within patch P when computing the patch distance D_p. In preliminary experiments, we tried different non-local regions, patch sizes, and σ values, but did not observe improved performance.
2.4 Training loss
Given the definitions of the adversarial, cycle-consistency, and structure-consistency losses above, the training loss of our proposed method is defined as

\[ \mathcal{L} = \mathcal{L}_{adv}(G_{CT}, D_{CT}) + \mathcal{L}_{adv}(G_{MR}, D_{MR}) + \lambda_1 \mathcal{L}_{cyc}(G_{CT}, G_{MR}) + \lambda_2 \mathcal{L}_{structure}(G_{CT}, G_{MR}), \]

where λ_1 and λ_2 weight the relative importance of the cycle-consistency and structure-consistency terms.
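To make the composition concrete, the weighted combination can be sketched as a one-line helper; the weight names and their default values below are hypothetical placeholders, not the paper's settings:

```python
def total_loss(l_adv_ct, l_adv_mr, l_cyc, l_structure,
               lam_cyc=10.0, lam_str=5.0):
    """Combine the four loss terms; lam_cyc and lam_str stand in for the
    weights lambda_1 and lambda_2 (values here are illustrative only)."""
    return l_adv_ct + l_adv_mr + lam_cyc * l_cyc + lam_str * l_structure
```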
2.5 Network structure
Our method is composed of four trainable neural networks, i.e., two generators, G_CT and G_MR, and two discriminators, D_CT and D_MR, and we use the same network structures as [8, 6] in this work. That is, the two generators, G_CT and G_MR, follow the generator architecture of [8, 6]. The two discriminators, D_CT and D_MR, are 2D FCNs consisting of five convolutional layers that classify whether overlapping image patches are real or synthetic. For further details, please refer to [8, 6].
2.6 Position-based selection strategy
Although our input MR and CT slices are unpaired, we know the positions of the slices within their volumes. Slices in the middle of a volume generally contain more brain tissue than peripheral slices. Thus, instead of feeding in slices at extremely different positions of the brain, e.g., a peripheral CT slice and a medial MR slice, we input training slices at similar positions; we refer to this as the position-based selection (PBS) strategy. That is, the MR and CT slices are linearly aligned according to their respective numbers of slices within the volumes, and given the i-th MR slice in its volume, the index j of the corresponding CT slice selected by our method is determined by

\[ j = \mathrm{round}\!\left( i \cdot \frac{N_{CT}}{N_{MR}} \right) + \epsilon, \]

where N_MR and N_CT respectively denote the number of slices in the unpaired MR and CT volumes, round(·) denotes the rounding function, and ε is a random integer drawn from a small fixed range. This strategy forces the discriminators to become stronger at distinguishing synthetic images from real ones, thus avoiding mode collapse; this in turn forces the generators to improve in order to fool the discriminators. We evaluate this position-based selection strategy in Sec. 3.
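The selection rule above amounts to a linear index mapping plus random jitter and a clamp to the valid slice range; a small sketch (the jitter range is a hypothetical placeholder, since ε's range is not given in this text):

```python
import random

def select_ct_slice(i, n_mr, n_ct, jitter=2):
    """Position-based selection: linearly map the i-th MR slice index onto the
    CT volume and perturb it by a random integer epsilon in [-jitter, jitter].
    The jitter default is illustrative, not the paper's value."""
    j = round(i * n_ct / n_mr) + random.randint(-jitter, jitter)
    return min(max(j, 0), n_ct - 1)  # clamp to a valid slice index
```

For volumes with equal slice counts and no jitter, the mapping is the identity; for unequal counts it scales the index proportionally.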
3.1 Data set
The MR and CT volumes are respectively obtained using a Siemens Magnetom Espree 1.5T scanner (Siemens Medical Solutions, Malvern, PA) and a Philips Brilliance Big Bore scanner (Philips Medical Systems, Netherlands) under a routine clinical protocol for brain cancer patients. Geometric distortions in MR volumes are corrected using a 3D correction algorithm in the Siemens Syngo console workstation. All MR volumes are N4 corrected and normalized by aligning the white matter peak identified by fuzzy C-means.
The data set contains the brain MR and CT volumes of 45 patients, which were divided into a training set containing the MR and CT volumes of 27 patients, a validation set of 3 patients for model and epoch selection, and a test set of 15 patients for performance evaluation. As in [6], the experiments were performed on 2D sagittal image slices. Each MR or CT volume contains about 270 sagittal images, which are resized and padded to a fixed size while maintaining the aspect ratio, and the intensities are clipped to fixed ranges for CT (in HU) and MR. To augment the training set, each image is padded to a slightly larger size and then randomly cropped back to the input size as a training sample.
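The pad-then-random-crop augmentation can be sketched as follows; the sizes 288 and 256 are hypothetical placeholders, as the exact sizes are not given in this text:

```python
import numpy as np

def augment(img, pad_to=288, crop_to=256, rng=None):
    """Zero-pad one 2D slice to pad_to x pad_to, then take a random
    crop_to x crop_to crop. Sizes are illustrative, not the paper's."""
    rng = rng or np.random.default_rng()
    h, w = img.shape
    py, px = pad_to - h, pad_to - w
    padded = np.pad(img, ((py // 2, py - py // 2), (px // 2, px - px // 2)))
    y = rng.integers(0, pad_to - crop_to + 1)
    x = rng.integers(0, pad_to - crop_to + 1)
    return padded[y : y + crop_to, x : x + crop_to]
```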
(Fig. 4 caption, fragment: statistical significance is assessed relative to the conventional cycleGAN using a paired-sample t-test.)
3.2 Experimental results
We compare the proposed method to the conventional cycleGAN [8, 6] (denoted as "cycleGAN") and a cycleGAN trained with paired data (denoted as "cycleGAN (paired)"), which represents the best that a cycleGAN can achieve. To evaluate the position-based selection strategy in Sec. 2.6, a cycleGAN using this strategy during training, denoted as "cycleGAN (PBS)", is also included in the comparison. As in [8, 6], the learning rate is set to 0.0002 for all compared methods.
To quantitatively compare these methods, we use the mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) between the ground-truth CT volume and the synthetic one, computed within the head region mask and averaged over the 15 test subjects. Furthermore, SSIM over regions with high gradient magnitudes (denoted as "SSIM(HG)") is also computed to measure the quality of bone regions in the synthetic images. The peak value in PSNR and the dynamic range in SSIM are both set to 4500, as our CT data span a 4500 HU range.
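The masked MAE and PSNR computations are straightforward; a minimal sketch (SSIM omitted for brevity, and the function name is our own), with the PSNR peak set to the 4500 HU dynamic range stated above:

```python
import numpy as np

def mae_psnr(ct_true, ct_syn, mask, peak=4500.0):
    """MAE and PSNR between ground-truth and synthetic CT, restricted to the
    head region mask; the PSNR peak defaults to the 4500 HU dynamic range."""
    diff = (ct_true - ct_syn)[mask]
    mae = np.mean(np.abs(diff))
    mse = np.mean(diff ** 2)
    psnr = 10.0 * np.log10(peak ** 2 / mse) if mse > 0 else np.inf
    return mae, psnr
```

For example, a constant 45 HU error everywhere inside the mask gives MAE = 45 and PSNR = 10·log10(4500²/45²) = 40 dB.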
As shown in Fig. 4, our proposed method achieves significantly better performance than the conventional cycleGAN on all the metrics and produces results similar to the cycleGAN trained with paired data. Compared to randomly selecting training slices at any position, our proposed position-based selection strategy produces a significantly higher SSIM(HG) score, with marginal improvement in the other three metrics. Figure 5 shows visual examples of synthetic CT images produced by the different methods for a test subject.
We propose a structure-constrained cycleGAN for brain MR-to-CT synthesis using unpaired data. Compared to the conventional cycleGAN [8, 6], we define an extra structure-consistency loss based on the modality independent neighborhood descriptor to constrain structural consistency and also introduce a position-based selection strategy for selecting training images. The experiments show that our method generates better synthetic CT images than the conventional cycleGAN and produces results similar to a cycleGAN trained with paired data.
This work is supported by the NSFC (11622106, 11690011, 61721002) and the China Scholarship Council.
-  Chartsias, A., Joyce, T., et al.: Adversarial image synthesis for unpaired multi-modal cardiac data. In: SASHIMI. pp. 3–13 (2017)
-  Goodfellow, I., et al.: Generative adversarial nets. In: NIPS. pp. 2672–2680 (2014)
-  Heinrich, M.P., et al.: MIND: Modality independent neighbourhood descriptor for multi-modal deformable registration. Med. Image Anal. 16(7), 1423–1435 (2012)
-  Hofmann, M., Bezrukov, I., et al.: MRI-based attenuation correction for whole-body PET/MRI: quantitative evaluation of segmentation- and atlas-based methods. J. Nucl. Med. 52(9), 1392–1399 (2011)
-  Roy, S., Butman, J.A., Pham, D.L.: Synthesizing CT from ultrashort echo-time MR images via convolutional neural networks. In: SASHIMI. pp. 24–32 (2017)
-  Wolterink, J.M., Dinkla, A.M., et al.: Deep MR to CT synthesis using unpaired data. In: SASHIMI. pp. 14–23 (2017)
-  Zhang, Z., et al.: Translating and segmenting multimodal medical volumes with cycle- and shape-consistency generative adversarial network. In: CVPR (2018)
-  Zhu, J.Y., Park, T., et al.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: ICCV. pp. 2242–2251 (2017)