Deep CT to MR Synthesis using Paired and Unpaired Data

05/28/2018 ∙ by Cheng-Bin Jin, et al.

Magnetic resonance (MR) imaging plays an increasingly important role in radiotherapy treatment planning for the segmentation of tumor volumes and organs. However, MR-based radiotherapy is limited by its high cost and by the growing use of metal implants, such as cardiac pacemakers and artificial joints, in an aging society. To improve the accuracy of CT-based radiotherapy planning, we propose a synthetic approach that translates a CT image into an MR image using both paired and unpaired training data. In contrast to current synthetic methods for medical images, which rely on either sparse pairwise-aligned data or plentiful unpaired data, the proposed approach alleviates the rigid-registration challenge of paired training and overcomes the context-misalignment problem of unpaired training. A generative adversarial network was trained to transform 2D brain CT image slices into 2D brain MR image slices, combining adversarial loss, dual cycle-consistent loss, and voxel-wise loss. The experiments were analyzed using CT and MR images of 202 patients. Qualitative and quantitative comparisons against independent paired-training and unpaired-training methods demonstrate the superiority of our approach.




I Introduction

CT-based radiotherapy is currently used in radiotherapy planning with good results. Recently, radiotherapy devices using magnetic resonance (MR) imaging have been developed, since MR imaging offers much better soft-tissue contrast than computed tomography (CT). In particular, the use of MR-based radiotherapy is increasing for brain tumors, and MR imaging will play a very important role in radiotherapy planning in the near future. However, MR imaging usually costs more than a CT scan, and a complete MR scan takes about 20 to 30 minutes, whereas a CT scan is usually completed within 5 minutes. In addition, CT scans can also differentiate soft tissue, especially with an intravenous contrast agent, and offer higher imaging resolution and fewer motion artifacts owing to their high imaging speed, which are advantages over MR imaging. Furthermore, the use of MR-based radiotherapy is limited in an aging society in which the use of metal implants, such as cardiac pacemakers and artificial joints, is increasing. Much of the concern about CT scans relates to the harm of radiation. However, the risk to patients is low; even a patient with lung tuberculosis who undergoes several X-ray examinations in one year is not at meaningful risk. The real risk is to professionals, namely technicians and radiologists; of course, this remains a controversial topic among experts. In this paper, we propose a synthetic approach to produce synthesized MR images from brain CT images. To the best of our knowledge, this is the first study that attempts to translate a CT image to an MR image.

The major contributions of this paper can be summarized as follows:

  • The proposed approach uses paired and unpaired data to overcome the context-misalignment issue of unpaired training, and to alleviate the rigid-registration burden and blurry results of paired training.

  • The paper introduces the MR-GAN framework, which combines adversarial loss, dual cycle-consistent loss, and voxel-wise loss for training on paired and unpaired data together.

  • The proposed approach can be easily extended to other data-synthesis tasks (MR-CT and CT-PET synthesis) to benefit the medical image community.

Recently, advances in deep learning and machine learning for medical computer-aided diagnosis (CAD) [1, 2] have allowed systems to provide information on potential abnormalities in medical images. Many methods have synthesized a CT image from an available MR image for MR-only radiotherapy treatment planning [3]. The MR-based synthetic CT generation method of Han [4] used a deep convolutional neural network (CNN) with paired data, which were rigidly aligned by minimizing the voxel-wise differences between CT and MR images. However, minimizing the voxel-wise loss between the synthesized image and the reference image during training may lead to blurry generated outputs. To obtain clearer results, Nie et al. [5] proposed a method that combined the voxel-wise loss with an adversarial loss in a generative adversarial network (GAN) [6]. Concurrent work [7] proposed a similar idea to synthesize positron emission tomography (PET) images from CT images using the multiple-channel information of the pix2pix framework of Isola et al. [8]. Ben-Cohen et al. [9] combined a fully convolutional network (FCN) [10] with the pix2pix model [8] to produce initial results, and blended the two outputs to generate a synthesized PET image from a CT image.

Although combining the voxel-wise loss with an adversarial loss addresses the problem of blurry synthesized outputs, the voxel-wise loss depends on the availability of large numbers of aligned CT and MR images. Obtaining rigidly aligned data can be difficult and expensive. However, most medical institutions hold considerable unpaired data that were scanned for different purposes and different radiotherapy treatments. Using unpaired data would increase the amount of training data substantially, and relax many of the constraints of current deep learning-based synthetic systems (Fig. 1). Unlike the paired-data methods in [4, 5, 7, 9], Wolterink et al. [11] used a CycleGAN model [12], an image-to-image translation approach trained on unpaired images, to synthesize CT images from MR images. In an unpaired GAN paradigm, we want the synthesized image not only to look real, but also to be paired up with the input image in a meaningful way. Therefore, a cycle-consistency loss is enforced to translate the synthesized image back to the original image domain, and to minimize the difference between the input and the reconstructed image as a regularization. Because of the large amount of unpaired data, the synthesized images are more realistic than the results of paired training methods. However, compared to the voxel-wise loss on paired data, the cycle-consistent loss still has limitations in correctly translating the contextual information of soft tissues and blood vessels.

Figure 1: Left: Deep networks trained with paired data, which include CT and MR slices taken from the same patient at the same anatomical location. Paired data must be intentionally collected and aligned, which is difficult; however, paired data provide the network with far more accurate regression constraints. Right: Deep networks trained with unpaired data, which include CT and MR slices taken from different patients at different anatomical locations. A considerable amount of unpaired data is available.

II Materials and Methods

II.1 Data acquisition

Our dataset consisted of the brain CT and MR images of 202 patients who were scanned for radiotherapy treatment planning for brain tumors. Among these patients, 98 patients had only CT images, and 84 patients had only MR images; these data constitute the unpaired data. For the remaining 20 patients, both CT and MR images were acquired during radiation treatment. CT images were acquired helically on a GE Revolution CT scanner (GE Healthcare, Chicago, Illinois, United States) at 120 kVp and 450 mA. T2 3D MR images (repetition time, 4320 ms; echo time, 95 ms; flip angle, 150°) were obtained with a Siemens 3.0T Trio TIM MR scanner (Siemens, Erlangen, Germany). To generate paired sets of CT and MR images, the CT and MR images of the same patient were aligned and registered using an affine transformation based on mutual information. CT and MR images were resampled to the same voxel size. Before registration, the skull area in the CT images was removed by masking all voxels above a manually selected threshold. Skull-stripped MR brain images were also registered. In this study, AFNI's 3dAllineate function was used for the registration process [13]. The affine transformation parameters obtained were then used to register the resampled CT and MR images with the skull. To maximize the information inside the brain area, CT images were windowed with a window width of 80 Hounsfield units (HU) and a window center of 40 HU. After registration (Fig. 2), the CT and MR images were spatially well-aligned.
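The HU windowing step described above can be sketched as follows; this is an illustrative implementation (the function name and the [0, 255] output range are assumptions, not taken from the paper), clipping intensities to the stated window of center 40 HU and width 80 HU before rescaling to grayscale.

```python
import numpy as np

def window_ct(hu_slice, center=40.0, width=80.0):
    """Clip a CT slice (in Hounsfield units) to a display window and
    rescale it to [0, 255] grayscale. The window center (40 HU) and
    width (80 HU) follow the paper; the output range is an assumption."""
    lo, hi = center - width / 2.0, center + width / 2.0
    clipped = np.clip(hu_slice, lo, hi)
    return (clipped - lo) / (hi - lo) * 255.0

# Toy slice: air (-1000 HU), water (0 HU), window center (40 HU), bone-ish (300 HU)
slice_hu = np.array([[-1000.0, 0.0], [40.0, 300.0]])
windowed = window_ct(slice_hu)
```

With this window, everything at or below 0 HU maps to black, the center (40 HU) maps to mid-gray, and everything at or above 80 HU saturates to white.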

Figure 2: Examples showing registration between CT and MR images after the mutual-information affine transform.

II.2 MR-GAN

The proposed approach has a structure similar to CycleGAN zhu2017unpaired , which contains a forward and a backward cycle. However, our model has a dual cycle-consistent term for paired and unpaired training data. The dual cycle-consistent term includes four cycles: forward unpaired-data, backward unpaired-data, forward paired-data, and backward paired-data cycles (Fig. 3).

The forward unpaired-data cycle contains three independent networks, each with a different goal. The synthesis network Syn_MR attempts to translate a CT image into a realistic MR image, so that the output cannot be distinguished from "real" MR images by the adversarially trained discriminator Dis_MR, which in turn is trained to discriminate the synthesized "fakes" as well as possible. In addition, to mitigate the well-known problem of mode collapse, a network Syn_CT is trained to translate the synthesized MR image back to the original CT domain. To improve training stability, the backward unpaired-data cycle is also enforced: it translates an MR image into a CT image and works with the opposite logic to the forward unpaired-data cycle. Unlike in the unpaired-data cycles, the discriminators in the paired-data cycles do not just discriminate between real and synthesized images; they also observe a pair of CT and MR images to differentiate between real and synthesized pairs. In addition, the voxel-wise loss between the synthesized image and the reference image is included in the paired-data cycles. The synthesis networks Syn_MR and Syn_CT in the paired-data cycles work exactly as in the unpaired-data cycles.

Figure 3: The dual cycle-consistent structure consists of (a) a forward unpaired-data cycle, (b) a backward unpaired-data cycle, (c) a forward paired-data cycle, and (d) a backward paired-data cycle. In the forward unpaired-data cycle, the input CT image is translated to an MR image by the synthesis network Syn_MR; the synthesized MR image is translated back to a CT image that approximates the original CT image, and Dis_MR is trained to distinguish between real and synthesized MR images. In the backward unpaired-data cycle, a CT image is instead synthesized from an input MR image by the network Syn_CT, Syn_MR reconstructs the MR image from the synthesized CT image, and Dis_CT is trained to distinguish between real and synthesized CT images. The forward and backward paired-data cycles are the same as the forward and backward unpaired-data cycles above, except that Dis_MR and Dis_CT do not just discriminate between real and synthesized images; they learn to classify between real and synthesized pairs. In addition, the voxel-wise loss between the synthesized image and the reference image is included in the paired-data cycles.

II.3 Objective

Both networks in the GAN were trained simultaneously, with the discriminators Dis_MR and Dis_CT estimating the probability that a sample came from the real data rather than from the synthesis networks, while the synthesis networks Syn_MR and Syn_CT were trained to produce realistic synthetic data that could not be distinguished from real data by the discriminators. We applied adversarial losses [6] to the synthesis network Syn_MR: CT → MR and its discriminator Dis_MR, and express the objective as:

L_adv(Syn_MR, Dis_MR) = E_{y_MR}[log Dis_MR(y_MR)]
  + E_{x_CT}[log(1 − Dis_MR(Syn_MR(x_CT)))]
  + E_{(x_CT, y_MR)}[log Dis_MR(x_CT, y_MR)]
  + E_{(x_CT, y_MR)}[log(1 − Dis_MR(x_CT, Syn_MR(x_CT)))]   (1)

where Syn_MR tries to translate a CT image x_CT into an image that looks similar to an image from the MR image domain. Through the first and second terms in Eq. (1), the discriminator Dis_MR aims to distinguish between synthesized and real MR images for the unpaired data. For the paired data, the discriminator also tries to discriminate between real and synthesized pairs, which provide Dis_MR with the synthesized MR image, via the third and fourth terms in Eq. (1). The synthesis network Syn_MR tries to minimize this objective against an adversarial Dis_MR that tries to maximize it, i.e., Syn_MR* = arg min_{Syn_MR} max_{Dis_MR} L_adv(Syn_MR, Dis_MR). The other synthesis network Syn_CT and its discriminator Dis_CT have a similar adversarial loss, L_adv(Syn_CT, Dis_CT).
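The four terms of the adversarial objective can be illustrated numerically. The sketch below (function name and scalar inputs are illustrative assumptions) evaluates the discriminator-side quantity: the first two arguments are Dis_MR scores on single unpaired images, the last two are scores on (CT, MR) pairs.

```python
import numpy as np

def adv_loss_terms(d_real_mr, d_fake_mr, d_real_pair, d_fake_pair, eps=1e-8):
    """Negative log-likelihood adversarial objective in the form of Eq. (1),
    written as the quantity the discriminator ascends. The first two terms
    score unpaired MR images; the last two score (CT, MR) pairs. Inputs are
    discriminator probabilities in (0, 1)."""
    unpaired = np.log(d_real_mr + eps) + np.log(1.0 - d_fake_mr + eps)
    paired = np.log(d_real_pair + eps) + np.log(1.0 - d_fake_pair + eps)
    return float(np.mean(unpaired + paired))
```

A well-trained discriminator (high scores on real samples, low on fakes) attains a larger value of this objective than an undecided one that outputs 0.5 everywhere, which is exactly what the max over Dis_MR seeks.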

To stabilize the training procedure, the negative log-likelihood objective on the unpaired data was replaced by a least-squares loss [14] in our work. Hence, the discriminator aims to output the label 1 for real MR images and the label 0 for synthesized MR images. However, we found that keeping the negative log-likelihood objective on the paired data generated higher-quality results. Eq. (1) then becomes:

L_adv(Syn_MR, Dis_MR) = E_{y_MR}[(Dis_MR(y_MR) − 1)²]
  + E_{x_CT}[Dis_MR(Syn_MR(x_CT))²]
  + E_{(x_CT, y_MR)}[log Dis_MR(x_CT, y_MR)]
  + E_{(x_CT, y_MR)}[log(1 − Dis_MR(x_CT, Syn_MR(x_CT)))]   (2)
The dual cycle-consistent loss is enforced to further reduce the space of possible mapping functions for the paired and unpaired training data. In the forward cycle, for each image x_CT from the CT domain, the image translation cycle should bring x_CT back to the original image, i.e., Syn_CT(Syn_MR(x_CT)) ≈ x_CT. Similarly, for each y_MR from the MR domain, Syn_MR and Syn_CT should also satisfy a backward cycle consistency: Syn_MR(Syn_CT(y_MR)) ≈ y_MR. The dual cycle-consistency loss is expressed as:

L_cyc(Syn_MR, Syn_CT) = E_{x_CT}[‖Syn_CT(Syn_MR(x_CT)) − x_CT‖₁]
  + E_{y_MR}[‖Syn_MR(Syn_CT(y_MR)) − y_MR‖₁]   (3)
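The cycle-consistency constraint above can be sketched directly: translating CT→MR→CT and MR→CT→MR should reproduce the inputs under an L1 penalty. The function below is a minimal illustration in which the synthesis networks are passed in as callables (stand-ins for the trained models).

```python
import numpy as np

def dual_cycle_loss(x_ct, y_mr, syn_mr, syn_ct):
    """Cycle-consistency loss in the form of Eq. (3): syn_mr and syn_ct are
    stand-ins for the CT-to-MR and MR-to-CT synthesis networks."""
    forward = np.mean(np.abs(syn_ct(syn_mr(x_ct)) - x_ct))   # CT -> MR -> CT
    backward = np.mean(np.abs(syn_mr(syn_ct(y_mr)) - y_mr))  # MR -> CT -> MR
    return float(forward + backward)
```

As a sanity check, a pair of mappings that invert each other perfectly (here, identity functions on toy arrays) incurs zero loss; any deviation in the round trip is penalized in proportion to its L1 magnitude.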
Previous approaches [15] have found it beneficial to combine the adversarial loss with a more traditional loss, such as the L1 distance. For a paired sample (x_CT, y_MR), the synthesis network Syn_MR is tasked not only with generating realistic MR images, but also with staying close to the reference y_MR of the input x_CT. Although we do not need the synthesis network Syn_CT as a final product, adding the same constraint to Syn_CT enables a higher quality of synthesized MR images. The L1 loss terms for Syn_MR and Syn_CT are:

L_L1(Syn_MR, Syn_CT) = E_{(x_CT, y_MR)}[‖y_MR − Syn_MR(x_CT)‖₁]
  + E_{(x_CT, y_MR)}[‖x_CT − Syn_CT(y_MR)‖₁]   (4)
The overall objective is:

L(Syn_MR, Syn_CT, Dis_MR, Dis_CT) = L_adv(Syn_MR, Dis_MR) + L_adv(Syn_CT, Dis_CT)
  + λ₁ L_cyc(Syn_MR, Syn_CT) + λ₂ L_L1(Syn_MR, Syn_CT)   (5)

where λ₁ and λ₂ control the relative importance of the adversarial loss, the dual cycle-consistent loss, and the voxel-wise loss. We aim to solve Eq. (6):

Syn_MR*, Syn_CT* = arg min_{Syn_MR, Syn_CT} max_{Dis_MR, Dis_CT} L(Syn_MR, Syn_CT, Dis_MR, Dis_CT)   (6)
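The weighted combination of Eq. (5) reduces to a simple linear form once the individual loss terms are computed. The sketch below shows that combination; the default weight values are placeholders, since the λ values used in the paper are not stated in this text.

```python
def mr_gan_objective(adv_mr, adv_ct, cyc, l1, lam1=10.0, lam2=5.0):
    """Overall objective in the form of Eq. (5): two adversarial terms plus a
    dual cycle-consistency term weighted by lam1 and a paired voxel-wise L1
    term weighted by lam2. The default weights are illustrative placeholders,
    not the (elided) values from the paper."""
    return adv_mr + adv_ct + lam1 * cyc + lam2 * l1
```

In practice the generators descend this objective while the discriminators ascend their adversarial terms, per the min-max of Eq. (6).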
The MR-GAN procedure is described in Algorithm 1.

Algorithm 1 MR-GAN, the proposed algorithm. All experiments in the paper used the default values for the numbers of unpaired and paired steps.

Require: the learning rate, the batch size, and the numbers of iterations of the unpaired/paired data.
1: for number of training iterations do
2:     for unpaired-data steps do
3:         Sample a batch from the unpaired CT data.
4:         Sample a batch from the unpaired MR data.
5:         Update the discriminator Dis_MR by ascending its stochastic gradient.
6:         Update the generator Syn_MR by descending its stochastic gradient.
7:         Update the discriminator Dis_CT by ascending its stochastic gradient.
8:         Update the generator Syn_CT by descending its stochastic gradient.
9:     end for
10:     for paired-data steps do
11:         Sample a batch from the paired data.
12:         Update the discriminator Dis_MR by ascending its stochastic gradient.
13:         Update the generator Syn_MR by descending its stochastic gradient.
14:         Update the discriminator Dis_CT by ascending its stochastic gradient.
15:         Update the generator Syn_CT by descending its stochastic gradient.
16:     end for
17: end for
18: return the trained synthesis networks Syn_MR and Syn_CT
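The alternating structure of Algorithm 1 can be sketched as a plain training loop. This is a schematic, not the paper's implementation: the sampler and update callables are placeholders for the actual data pipeline and the Adam-based gradient steps.

```python
# Schematic of Algorithm 1: each outer iteration takes n_unpaired gradient
# steps on unpaired batches, then n_paired steps on paired batches,
# alternating discriminator-ascent and generator-descent updates.
def train_mr_gan(sample_unpaired_ct, sample_unpaired_mr, sample_paired,
                 update_dis, update_syn, iterations, n_unpaired=1, n_paired=1):
    for _ in range(iterations):
        for _ in range(n_unpaired):
            x = sample_unpaired_ct()          # batch of unpaired CT slices
            y = sample_unpaired_mr()          # batch of unpaired MR slices
            update_dis("Dis_MR", x, y)        # ascend discriminator gradient
            update_syn("Syn_MR", x, y)        # descend generator gradient
            update_dis("Dis_CT", x, y)
            update_syn("Syn_CT", x, y)
        for _ in range(n_paired):
            pair = sample_paired()            # aligned (CT, MR) batch
            update_dis("Dis_MR", *pair)       # pair discriminator + voxel-wise loss
            update_syn("Syn_MR", *pair)
            update_dis("Dis_CT", *pair)
            update_syn("Syn_CT", *pair)
```

The four updates per step mirror lines 5-8 and 12-15 of the algorithm; the default step counts n_unpaired and n_paired stand in for the (elided) defaults in the paper.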
Figure 4: Flow diagram of the discriminator Dis_MR in the synthetic system. Dis_MR has extra head and extra tail convolutional layers for the different inputs and loss functions of the paired and unpaired data. The discriminator Dis_CT has the same architecture as Dis_MR.

II.4 Implementation

For the architecture of the synthesis networks Syn_MR and Syn_CT, we utilized the architecture from Johnson et al. [16]: a 2D fully convolutional network with one convolutional layer, followed by two strided convolutional layers, nine residual blocks [17], two fractionally-strided convolutional layers, and one final convolutional layer. Instance normalization [18] and ReLU followed each convolution except the last convolutional layer. The synthesis network takes a 2D CT slice as input and generates an output image of the same size.

For the discriminators Dis_MR and Dis_CT, we adapted PatchGANs [8], which try to classify each patch in an image as real or fake. In this way, the discriminators could better focus on high-frequency information in local image patches. The networks Dis_MR and Dis_CT used the same architecture, which had one convolution as an extra head for the different input data, four strided convolutions as a shared network, and two convolutions as an extra tail for the different tasks. Except for the first and last convolutions, each convolutional layer was followed by instance normalization [18] and leaky ReLU [19] (Fig. 4).
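Because each output cell of a PatchGAN corresponds to one image patch, the size of the output map follows directly from the stack of strided convolutions. The helper below illustrates this; the kernel size, stride, and padding are typical PatchGAN choices and are assumptions here, not values stated in the paper.

```python
def patchgan_output_size(input_size, n_strided=4, kernel=4, stride=2, pad=1):
    """Spatial size of a PatchGAN-style discriminator's output map after a
    stack of strided convolutions; each output cell classifies one local
    patch as real or fake. Conv hyperparameters are illustrative defaults."""
    size = input_size
    for _ in range(n_strided):
        size = (size + 2 * pad - kernel) // stride + 1  # standard conv arithmetic
    return size
```

For example, with four stride-2 convolutions a 256-pixel input is reduced to a 16x16 grid of patch decisions, which is what lets the discriminator concentrate on local high-frequency structure rather than a single global real/fake score.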

To optimize our networks, we used minibatch SGD with the Adam optimizer [20] and a batch size of 1. The learning rate was held constant for an initial number of iterations and then decayed linearly to zero over the remaining iterations. For all experiments, we set λ₁ and λ₂ in Eq. (5) empirically. At inference time, we ran only the synthesis network Syn_MR to generate an MR image from a CT image.
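The hold-then-linear-decay schedule just described can be written as a small function. The specific base rate and step counts are elided in the paper, so they appear here only as parameters with illustrative values in the example call.

```python
def learning_rate(step, base_lr, hold_steps, decay_steps):
    """Hold base_lr for hold_steps iterations, then decay linearly to zero
    over decay_steps further iterations. The actual values used in the
    paper are elided, so all quantities are parameters here."""
    if step < hold_steps:
        return base_lr
    frac = (step - hold_steps) / float(decay_steps)
    return max(0.0, base_lr * (1.0 - frac))

# Illustrative values only: 2e-4 held for 100 steps, decayed over 100 more.
lr_mid_decay = learning_rate(150, 2e-4, 100, 100)
```

Clamping at zero keeps the schedule well-defined if training runs past the decay horizon.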

III Results

III.1 Data preprocessing

Among the data of the 202 patients, all of the unpaired data (182 patients) were used as training data. The paired data of the remaining 20 patients were separated into a training set of 10 patients and a separate test set containing the CT images and corresponding reference MR images of the other 10 patients. Each CT or MR volume comprised a stack of 2D axial image slices. These were resampled to a common in-plane size and converted to grayscale, with intensities distributed uniformly over the HU window for the CT data and over the intensity range for the MR data.

For training, we augmented the training data with random online transforms:

  • Flip: batch data were horizontally flipped with a fixed probability.

  • Translation: batch data were randomly cropped from zero-padded versions of the images.

  • Rotation: batch data were rotated by a small random angle.

The paired CT and MR images were augmented using the same random factors, whereas in the unpaired data the CT and MR images were augmented independently. Training the proposed approach took several hours on a single GeForce GTX 1080Ti GPU. At inference time, the system required only a few milliseconds to translate a single CT slice into an MR slice.
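The distinction between paired and unpaired augmentation above can be made concrete: for paired data the same random factor must drive both modalities, which can be achieved by using identically seeded generators (or a shared generator state). The snippet below illustrates this with a horizontal flip; the flip probability of 0.5 and the toy arrays are assumptions for illustration.

```python
import numpy as np

def augment(img, rng):
    """One illustrative random transform: horizontal flip with probability
    0.5 (the actual flip probability used in the paper is elided)."""
    return img[:, ::-1] if rng.random() < 0.5 else img

ct = np.arange(16.0).reshape(4, 4)
mr = ct + 100.0  # toy "aligned" MR slice

# Paired data: identically seeded generators apply the same random factor
# to the CT and MR slices, preserving their alignment.
a = augment(ct, np.random.default_rng(7))
b = augment(mr, np.random.default_rng(7))

# Unpaired data: independently seeded generators, so the two modalities
# are transformed independently.
u = augment(ct, np.random.default_rng(1))
v = augment(mr, np.random.default_rng(2))
```

Whatever the flip outcome, the paired slices a and b stay aligned because both generators draw the same random number.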

III.2 Evaluation metrics

Real and synthesized MR images were compared using the mean absolute error (MAE):

MAE = (1/N) Σ_{i=1}^{N} |MR_ref(i) − MR_syn(i)|   (7)

where i is the index of the 2D axial image slice in the aligned voxels and N is the number of slices in the reference MR images. The MAE measures the average distance between each pixel of the synthesized and the reference MR image. In addition, the synthesized MR images were evaluated using the peak signal-to-noise ratio (PSNR), as proposed in [5, 7, 11]:

PSNR = 10 log₁₀(Q² / MSE)   (8)

where Q is the maximum possible intensity value. The PSNR measures the ratio between the maximum possible intensity value and the mean squared error (MSE) of the synthesized and reference MR images.
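The two metrics are straightforward to implement; the sketch below follows Eqs. (7) and (8), with Q defaulting to 255 under the assumption of 8-bit grayscale images (the value of Q used in the paper is elided).

```python
import numpy as np

def mae(ref, syn):
    """Mean absolute error between reference and synthesized images (Eq. 7)."""
    diff = ref.astype(np.float64) - syn.astype(np.float64)
    return float(np.mean(np.abs(diff)))

def psnr(ref, syn, q=255.0):
    """Peak signal-to-noise ratio in dB (Eq. 8); q is the maximum possible
    intensity value (255 assumed here for 8-bit grayscale)."""
    mse = np.mean((ref.astype(np.float64) - syn.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(q ** 2 / mse))
```

Note that a constant offset of 5 gray levels yields an MAE of exactly 5 and, with Q = 255, a PSNR of 10·log10(255²/25) ≈ 34.15 dB; lower MAE and higher PSNR both indicate a closer match to the reference.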

Figure 5: From left to right: input CT, synthesized MR, reference MR, and absolute error between the real and synthesized MR images.

III.3 Analysis of MR synthesis using paired and unpaired data

We first compared synthesized MR images with reference MR images that had been carefully registered to become paired data with CT images. For brevity, we refer to our method as MR-GAN. Fig. 5 shows four examples of an input CT image, synthesized MR image obtained by MR-GAN, reference MR image, and absolute difference maps between the synthesized and reference MR images. The MR-GAN learned to differentiate between different anatomical structures with similar pixel intensity in CT images, such as bones, gyri, and soft brain tissues. The largest differences are in the area of bony structures, and the smallest differences are found in the soft brain tissues. This may be partly due to the misalignment between the CT and reference MR images, and because the CT image provides more detail about bony structures to complement the shortcoming of the synthesized MR, which is focused on soft brain tissues.

Table 1 shows a quantitative evaluation using MAE and PSNR to compare the different methods on the test set. We compare the proposed method with independent training using paired and unpaired data. To train the paired-data system, a synthesis network and a discriminator network with the same architectures as ours were trained using a combination of adversarial loss and voxel-wise loss, as in the pix2pix framework [8]. To train the unpaired-data system [11], the cycle-consistent structure of the CycleGAN model [12] was used, which is the same as the forward and backward unpaired-data cycles of our approach, shown in Fig. 3. To ensure a fair comparison, we implemented all the baselines using the same architecture and implementation details as our method.

                       MAE                               PSNR
            Paired      Unpaired    Ours        Paired      Unpaired    Ours
Patient01   24.20       27.71       22.76       62.82       62.45       64.65
Patient02   17.82       24.12       18.27       64.91       63.05       65.93
Patient03   22.01       22.45       22.27       63.59       63.83       63.55
Patient04   18.23       23.64       16.75       65.28       63.44       65.76
Patient05   18.26       22.82       17.68       64.92       64.04       65.97
Patient06   20.52       20.41       17.57       64.87       64.78       65.92
Patient07   20.63       18.72       16.55       64.55       64.14       66.28
Patient08   19.42       22.77       18.30       64.10       63.22       65.82
Patient09   19.12       16.98       18.57       64.93       66.19       65.43
Patient10   23.23       29.76       24.91       63.81       62.60       64.17
Avg±sd      20.34±2.20  22.94±3.62  19.36±2.73  64.28±0.81  63.77±1.06  65.35±0.86
Table 1: MAE and PSNR evaluations between synthesized and real MR images when training with paired, unpaired, and paired with unpaired data (Ours).
Discriminator   D1               D2               D3                   D4            D5
Extra head      (64)             (64, 64)         (64, 64)             (64)          (64)
Shared network  (64, 128,        (64, 128,        (64, 128,            (128, 256,    (128, 256,
                256, 512)        256, 512)        256, 512)            …)            …)
Extra tail      (512, 1)         (512, 1)         (512, 512, 512, 1)   (512, 1)      (1)
Table 2: Network architecture of the discriminator variants.
Model           Least squares loss
                D1           D2           D3
MAE (Avg±sd)    21.07±2.98   42.95±3.03   37.25±2.58
PSNR (Avg±sd)   65.25±0.81   61.31±0.64   62.73±0.77
Table 3: Comparison of the MAE and PSNR for different discriminator networks with the least squares loss. The leading scores are displayed in bold font.
Model           Negative log-likelihood
                D1           D2           D3           D4           D5
MAE (Avg±sd)    19.36±2.73   49.70±3.10   59.06±3.27   19.35±2.56   20.57±2.82
PSNR (Avg±sd)   65.35±0.86   60.34±0.60   59.23±0.46   65.24±0.77   65.16±0.85
Table 4: Comparison of the MAE and PSNR for different discriminator networks with the negative log-likelihood. The leading scores are displayed in bold font.
Figure 6: From left to right: input CT image, synthesized MR image with paired training, synthesized MR image with unpaired training, synthesized MR image with paired and unpaired training (ours), and the reference MR image.
Figure 7: From left to right: input CT image, synthesized MR image, reconstructed CT image, and relative difference error between the input and reconstructed CT images.

Although it was trained with limited paired data, the model using paired training data outperformed the CycleGAN model trained on unpaired data in our experiments. Table 1 indicates that our approach of training with paired and unpaired data together had the best performance across all measurements, with the lowest MAE and highest PSNR compared to conventional paired and unpaired training. Fig. 6 shows a qualitative comparison between paired training, unpaired training, and our approach. The results of training with paired data looked reasonable but were blurry. The images obtained with unpaired training were realistic, but lost anatomical information in areas of soft brain tissue and contained artifacts in areas with bony structures. Although our method learns the translation from both paired and unpaired data, the quality of our results closely approximates the reference MR images, and some details in our results are even clearer than in the reference MR images.

We present comparison results for several discriminator models. As shown in Fig. 4, the discriminator consists of an extra head, a shared network, and an extra tail. The different discriminator models presented in Table 2 use standard padded convolutions with stride 1. Comparisons of the MAE and PSNR for the different discriminator networks and objective functions are given in Table 3 and Table 4. The results clearly indicate that discriminators trained with the negative log-likelihood performed better than those trained with the least-squares loss [14] in terms of MAE and PSNR. We observed that performance was positively correlated with the number of convolutional layers in the extra head and extra tail of the discriminator; with only a few convolutional layers in these two parts, the discriminator avoids overfitting in the paired and unpaired learning. We also noted that performance was uncorrelated with the number of convolutional layers in the shared network.

During the training of MR-GAN, dual cycle-consistency is explicitly imposed in a bidirectional way. Hence, an input CT image translated to an MR image by the model should be successfully translated back to the original CT domain. Fig. 7 shows an input CT image, the corresponding synthesized MR images from CycleGAN and MR-GAN, their reconstructed CT images, and their relative difference maps. We observed that the reconstructed CT images were very close to the input images. The relative differences are distributed along the contour of the bone, and the reconstructed CT image from MR-GAN is smoother than that of the CycleGAN model because of the more accurate Syn_MR(I_CT), which acts like a latent vector in an auto-encoder [21].

IV Discussion

This paper has shown that a synthetic system can be trained using paired and unpaired data to synthesize an MR image from a CT image. Unlike other methods, the proposed approach combines the adversarial loss from a discriminator network, a dual cycle-consistent loss using paired and unpaired training data, and a voxel-wise loss based on paired data to synthesize realistic-looking MR images. The quantitative evaluation results in Table 1 show that the average correspondence between synthesized and reference MR images in our approach is much better than in other methods; the synthesized images are closer to the reference, achieving the lowest MAE of 19.36 and the highest PSNR of 65.35. Slight misalignments between CT images and reference MR images may have a large effect on the quantitative evaluation. Although quantitative measurement may be the gold standard for assessing the performance of a method, we found that numerical differences in the quantitative evaluation do not correctly reflect the qualitative differences. In future work, we will evaluate the accuracy of synthesized MR images through perceptual studies with medical experts.

A synthetic system using a CycleGAN model [12] and trained with unpaired data generated realistic results. However, the results had poor anatomical definition compared with the corresponding CT images, as exemplified in Fig. 6. We found that even though it was trained with limited paired data, the pix2pix model [8] outperformed the CycleGAN model trained on unpaired data in our experiments. The limitation of paired training is blurry output due to the voxel-wise loss. Qualitative analysis showed that the MR images obtained by MR-GAN look more realistic and contain less blurring than those of the other methods. This is likely due to the dual cycle-consistent and voxel-wise losses on the paired data.

The experimental results have implications for accurate CT-based radiotherapy treatment for patients who are contraindicated for an MR scan because of cardiac pacemakers or metal implants, and for patients who live in areas with poor medical services. Our synthetic system can be trained using any kind of data: paired, unpaired, or both. Using paired and unpaired data together yields higher-quality synthesized images than using one kind of data alone.

V Conclusion

We propose a system for synthesizing MR images from CT images. Our approach uses paired and unpaired data to solve the context-misalignment problem of unpaired training, and to alleviate the rigid-registration task and blurred results of paired training. Unpaired data are plentiful, and together with limited paired data they can be used for effective synthesis in many cases. Our results on the test set demonstrate that the output of MR-GAN was much closer to the reference MR images than that of the other methods. The preliminary results indicate that the synthetic system is able to efficiently translate structures within complicated 2D brain slices, such as soft brain tissues, blood vessels, gyri, and bones. In future work, we will investigate the 3D information of anatomical structures in CT and MR brain sequences to further improve performance based on paired and unpaired data. We suggest that our approach can potentially increase the quality of synthesized images for any synthetic system that combines supervised and unsupervised settings, and it can also be extended to support other applications, such as MR-CT and CT-PET synthesis.


  • (1) Son Jaemin, Park Sang Jun, Jung Kyu-Hwan. Retinal vessel segmentation in fundoscopic images with generative adversarial networks. arXiv preprint arXiv:1706.09318. 2017.
  • (2) Chen Hao, Qi Xiaojuan, Yu Lequan, Dou Qi, Qin Jing, Heng Pheng-Ann. DCAN: Deep contour-aware networks for object instance segmentation from histology images. Medical Image Analysis. 2017;36:135–146.
  • (3) Edmund Jens M, Nyholm Tufve. A review of substitute CT generation for MRI-only radiation therapy. Radiation Oncology. 2017;12:28.
  • (4) Han Xiao. MR-based synthetic CT generation using a deep convolutional neural network method. Medical Physics. 2017;44:1408–1419.
  • (5) Nie Dong, Trullo Roger, Lian Jun, et al. Medical image synthesis with context-aware generative adversarial networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention: 417–425. Springer 2017.
  • (6) Goodfellow Ian, Pouget-Abadie Jean, Mirza Mehdi, et al. Generative adversarial nets. In: Advances in Neural Information Processing Systems: 2672–2680. 2014.
  • (7) Bi Lei, Kim Jinman, Kumar Ashnil, Feng Dagan, Fulham Michael. Synthesis of positron emission tomography (PET) images via multi-channel generative adversarial networks (GANs). In: Molecular Imaging, Reconstruction and Analysis of Moving Body Organs, and Stroke Imaging and Treatment: 43–51. Springer 2017.
  • (8) Isola Phillip, Zhu Jun-Yan, Zhou Tinghui, Efros Alexei A. Image-to-image translation with conditional adversarial networks. arXiv preprint. 2017.
  • (9) Ben-Cohen Avi, Klang Eyal, Raskin Stephen P, Amitai Michal Marianne, Greenspan Hayit. Virtual PET images from CT data using deep convolutional networks: Initial results. In: International Workshop on Simulation and Synthesis in Medical Imaging: 49–57. Springer 2017.
  • (10) Long Jonathan, Shelhamer Evan, Darrell Trevor. Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition: 3431–3440. 2015.
  • (11) Wolterink Jelmer M, Dinkla Anna M, Savenije Mark HF, Seevinck Peter R, van den Berg Cornelis AT, Išgum Ivana. Deep MR to CT synthesis using unpaired data. In: International Workshop on Simulation and Synthesis in Medical Imaging: 14–23. Springer 2017.
  • (12) Zhu Jun-Yan, Park Taesung, Isola Phillip, Efros Alexei A. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint. 2017.
  • (13) Saad Ziad S, Glen Daniel R, Chen Gang, Beauchamp Michael S, Desai Rutvik, Cox Robert W. A new method for improving functional-to-structural MRI alignment using local Pearson correlation. NeuroImage. 2009;44:839–848.
  • (14) Mao Xudong, Li Qing, Xie Haoran, Lau Raymond YK, Wang Zhen, Smolley Stephen Paul. Least squares generative adversarial networks. In: 2017 IEEE International Conference on Computer Vision (ICCV): 2813–2821. IEEE 2017.
  • (15) Pathak Deepak, Krahenbuhl Philipp, Donahue Jeff, Darrell Trevor, Efros Alexei A. Context encoders: Feature learning by inpainting. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition: 2536–2544. 2016.
  • (16) Johnson Justin, Alahi Alexandre, Fei-Fei Li. Perceptual losses for real-time style transfer and super-resolution. In: European Conference on Computer Vision: 694–711. Springer 2016.
  • (17) He Kaiming, Zhang Xiangyu, Ren Shaoqing, Sun Jian. Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition: 770–778. 2016.
  • (18) Ulyanov Dmitry, Vedaldi Andrea, Lempitsky Victor S. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022. 2016.
  • (19) Xu Bing, Wang Naiyan, Chen Tianqi, Li Mu. Empirical evaluation of rectified activations in convolutional network. arXiv preprint arXiv:1505.00853. 2015.
  • (20) Kingma Diederik P, Ba Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. 2014.
  • (21) Hinton Geoffrey E, Salakhutdinov Ruslan R. Reducing the dimensionality of data with neural networks. Science. 2006;313:504–507.