3D-StyleGAN: A Style-Based Generative Adversarial Network for Generative Modeling of Three-Dimensional Medical Images

07/20/2021 ∙ by Sungmin Hong, et al.

Image synthesis of three-dimensional (3D) medical images via Generative Adversarial Networks (GANs) has great potential for many medical applications, such as image enhancement and disease progression modeling. However, current GAN technologies for 3D medical image synthesis must be significantly improved to be readily adapted to real-world medical problems. In this paper, we extend the state-of-the-art StyleGAN2 model, which natively works with two-dimensional images, to enable 3D image synthesis. In addition to image synthesis, we investigate the controllability and interpretability of the 3D-StyleGAN via style vectors inherited from the original StyleGAN2, which are highly suitable for medical applications: (i) the latent space projection and reconstruction of unseen real images, and (ii) style mixing. We demonstrate the 3D-StyleGAN's performance and feasibility with 12,000 three-dimensional full-brain MR T1 images, although it can be applied to any 3D volumetric image. Furthermore, we explore different configurations of hyperparameters to investigate potential improvements of the image synthesis with larger networks. The code and pre-trained networks are available online: https://github.com/sh4174/3DStyleGAN.


1 Introduction

Generative modeling via Generative Adversarial Networks (GANs) has achieved remarkable improvements in the quality of generated images [11, 3, 4, 32, 21]. StyleGAN2, a style-based generative adversarial network, was recently proposed for synthesizing highly realistic and diverse natural images. It progressively accounts for multi-resolution information of images during training, and controls image synthesis using style vectors that are fed to each block of a style-based generator network [18, 20, 21, 19]. It showed outstanding quality of generated images and enhanced control and interpretability of image synthesis compared to previous generative models [4, 3, 20, 21]. Although it has great potential for medical applications due to its performance and controllability, to the best of our knowledge it has not been extended to the generative modeling of 3D medical images.

One of the challenges of using generative models such as GANs for medical applications is that medical images are often three-dimensional (3D) and have a significantly higher number of voxels than two-dimensional natural images. Due to the high memory requirements of GANs, it is often not feasible to directly apply large networks to 3D image synthesis [4, 22, 30]. To address this issue, some models for 3D image synthesis generate the image as successive 2D slices, which are then combined into a 3D image while accounting for slice-wise relationships [30]. However, these methods often result in discontinuities between slices, and the interpretation and manipulation of the slice-specific latent vectors is complicated in this setting [30].

In this paper, we present 3D-StyleGAN, which enables the synthesis of high-quality 3D medical images by extending StyleGAN2. We made several changes to the original StyleGAN2 architecture: (1) we replaced 2D operations, layers, and noise inputs with their 3D counterparts, and (2) we significantly decreased the depths of filter maps and the latent vector sizes. We trained different configurations of 3D-StyleGAN on a collection of 12,000 T1 MRI brain scans from [6]. We additionally make the following contributions: (1) we show the possibility of synthesizing realistic 3D brain MRIs at 2mm resolution, corresponding to 80×96×112 voxels, (2) we show that unseen test images can be projected to the latent space, yielding reconstructions of high fidelity to the input images via a projection function suitable for medical images, and (3) we demonstrate how StyleGAN2's style mixing can be used to "transfer" anatomical variation across images. We discuss the performance and feasibility of the 3D-StyleGAN with limited filter depths and latent vector sizes for 1mm isotropic resolution full-brain T1 MR images. The source code and pre-trained networks are publicly available at https://github.com/sh4174/3DStyleGAN.

2 Methods

Figure 2: The style-based generator architecture of 3D-StyleGAN. The mapping network f, made of 8 fully-connected layers, maps a normalized latent vector z to an intermediate latent vector w. This is then transformed by a learned feature-wise affine transform at each layer, and further used by modulation (Mod) and demodulation (Demod) operations that are applied to the weights of each convolution layer [17, 8]. The synthesis network starts with a constant 5×6×7 block and applies successive convolutions with modulated weights, up-sampling, and activations with Leaky ReLU (LReLU). 3D noise maps, downscaled to the corresponding layer resolutions, are added before each LReLU activation. At the final layer, the output is convolved by a 1×1×1 convolution to generate the final image. The size of the base layer is 5×6×7. The discriminator network (3D-ResNet) is omitted due to space constraints.

Figure 2 illustrates the architecture of the 3D-StyleGAN. A style-based generator was first suggested in [20] and updated in [21]. We will briefly introduce the style-based generator suggested in [21] and then summarize the changes we made for enabling 3D image synthesis.

StyleGAN2: A latent vector z is first normalized and then mapped by a mapping network f to an intermediate latent vector w = f(z). A synthesis network starts with a constant layer of a small base size in each of the d spatial dimensions, where d is the input dimension (two for natural images). The transformed w is modulated (Mod) with the trainable convolution weights and then demodulated (Demod); together these act as instance normalization and reduce artifacts caused by arbitrary amplification of certain feature maps. Noise inputs, scaled by a trainable factor, and a bias are added to the convolution output at each style block. After the noise addition, the Leaky Rectified Linear Unit (LReLU) is applied as the nonlinear activation [23]. At the final layer in the highest-resolution block, the output is fed to a 1×1 convolution filter to generate an image.
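The Mod/Demod operations can be sketched in a few lines of numpy. This is our own illustration with hypothetical shape conventions (kernel depth/height/width, input channels, output channels), not the released implementation:

```python
import numpy as np

def modulate_demodulate(weights, style, eps=1e-8):
    """weights: (kd, kh, kw, in_ch, out_ch); style: (in_ch,) per-sample scales."""
    # Mod: scale each input-channel slice of the kernel by the style.
    w = weights * style[None, None, None, :, None]
    # Demod: normalize each output filter to unit L2 norm, which acts
    # like instance normalization baked into the weights.
    norm = np.sqrt((w ** 2).sum(axis=(0, 1, 2, 3), keepdims=True) + eps)
    return w / norm

rng = np.random.default_rng(0)
w = rng.standard_normal((3, 3, 3, 4, 8))  # 3x3x3 kernel, 4 in, 8 out channels
s = rng.standard_normal(4)
w_out = modulate_demodulate(w, s)
# Each output filter now has (approximately) unit L2 norm.
print(np.allclose((w_out ** 2).sum(axis=(0, 1, 2, 3)), 1.0, atol=1e-4))
```

The division by the per-filter norm is what makes the demodulated weights behave like instance normalization without computing feature statistics at run time.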

Loss Functions and Optimization:

For the generator loss function, StyleGAN2 uses the logistic loss function with the path length regularization [21]:

    L_pl = E_{w, y ~ N(0, I)} [ ( || J_w^T y ||_2 - a )^2 ],    (1)

where w is a mapped latent vector, J_w = ∂g(w)/∂w is the Jacobian of the generator g, y is a random image whose intensities follow a normal distribution with identity covariance matrix, and a is a dynamic constant learned as the running exponential average of || J_w^T y ||_2 over iterations [21]. The regularizer encourages the gradient magnitude of a generated image, projected onto y, to stay close to this running average, which makes the (mapped) latent space smoother. The generator regularization was applied once every 16 minibatches, following the lazy regularization strategy. For the discriminator loss function, StyleGAN2 uses the standard logistic loss function with R1 or R2 gradient regularization.
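A toy numpy sketch of the path length regularizer in Eq. (1), using a linear map as a stand-in generator so that the Jacobian-vector product is explicit (the real model obtains it via backpropagation through the synthesis network); all shapes and the decay constant here are our own choices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 16))   # toy "generator": latent dim 16 -> image dim 100
# For g(w) = A @ w the Jacobian is A, so J_w^T y = A^T y.

a = 0.0          # the dynamic constant, tracked as a running exponential average
decay = 0.99
for step in range(200):
    y = rng.standard_normal(100)           # random image with N(0, I) intensities
    grad_norm = np.linalg.norm(A.T @ y)    # || J_w^T y ||_2
    penalty = (grad_norm - a) ** 2         # path length penalty of Eq. (1)
    a = decay * a + (1 - decay) * grad_norm  # update the running average
```

In training, `penalty` is added to the generator loss (once every 16 minibatches under lazy regularization), pulling gradient magnitudes toward the typical value tracked by `a`.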

2.1 3D-StyleGAN

We modified StyleGAN2 for 3D image synthesis by replacing the 2D convolutions, upsampling, and downsampling with 3D operations. We started from StyleGAN2 configuration F (see [21]), but switched back to the original StyleGAN generator, as it showed the best performance in preliminary results for 3D images [21, 14, 13]. We used a 3D residual network for the discriminator. For the discriminator loss function, we used the standard logistic loss function without regularization, which showed the best empirical results.

Image Projection: An image (either generated or real) can be projected to the latent space by finding the intermediate latent vector and the stochastic noise inputs at each layer that minimize a distance between the input image and the generated image. In the original StyleGAN2, the LPIPS distance was used [31]. However, the LPIPS distance uses a VGG16 network trained on 2D natural images, which is not straightforwardly applicable to 3D images [29]. Instead, we used two mean squared error (MSE) losses, one computed at full resolution and the other at an 8× downsampled resolution. The second loss was added for stability, and to ensure that the optimization does not converge to a poor local minimum.
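The two-term projection distance can be sketched as follows; this is our numpy illustration (the block-averaging downsampler and the volume shape are assumptions, not the paper's exact operators):

```python
import numpy as np

def downsample8(vol):
    """8x downsampling of a 3D volume by simple block averaging."""
    d, h, w = (s // 8 for s in vol.shape)
    return vol[:d*8, :h*8, :w*8].reshape(d, 8, h, 8, w, 8).mean(axis=(1, 3, 5))

def projection_loss(target, generated):
    """Full-resolution MSE plus MSE at an 8x-downsampled resolution."""
    full = np.mean((target - generated) ** 2)
    coarse = np.mean((downsample8(target) - downsample8(generated)) ** 2)
    return full + coarse

rng = np.random.default_rng(0)
x = rng.standard_normal((80, 96, 112))   # a 2mm-resolution-sized volume
print(projection_loss(x, x))             # identical volumes -> 0.0
```

In the actual projection, this distance would be minimized over the latent vector and noise maps by gradient descent; the coarse term dominates early and steers the optimization toward globally consistent anatomy.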

Configurations: We tested different resolutions, filter depths, latent vector sizes, and minibatch sizes, all summarized in Table 1. Because of the high dimensionality of 3D images, the filter map depths of the 3D-StyleGAN needed to be significantly lower than in the 2D version. We tested four different filter depths: 16, 32, 64, and 96, with 2mm isotropic resolution brain MR images, to investigate how different filter depths affect the quality of generated images. For 1mm isotropic images, we used a filter depth of 16 for the generator and discriminator, a filter depth of 32 for the mapping network, and a latent vector size of 32, the largest our computational resources allowed.

Slice-wise Fréchet Inception Distance Metric: Conventionally, the Fréchet Inception Distance (FID) score is measured by comparing the distributions of randomly sampled real images from a training set and generated images [15]. Because FID relies on an Inception V3 network pre-trained on 2D natural images, it is not directly applicable to 3D images [29]. Instead, we measured FID scores on the middle slices of the axial, coronal, and sagittal planes of the generated 3D images [15].
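For reference, the Fréchet distance underlying FID reduces to a computation on feature means and covariances. The sketch below is our own numpy illustration operating on precomputed feature matrices; in the paper, the features come from Inception V3 applied to middle 2D slices:

```python
import numpy as np

def frechet_distance(feats_a, feats_b):
    """Fréchet distance between Gaussians fit to two feature matrices (n, dim)."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    # tr(sqrtm(cov_a @ cov_b)) equals the sum of square roots of the
    # (real, nonnegative) eigenvalues of cov_a @ cov_b.
    eigvals = np.linalg.eigvals(cov_a @ cov_b)
    covmean_trace = np.sqrt(np.clip(eigvals.real, 0, None)).sum()
    return (np.sum((mu_a - mu_b) ** 2)
            + np.trace(cov_a) + np.trace(cov_b) - 2 * covmean_trace)

rng = np.random.default_rng(0)
a = rng.standard_normal((500, 16))
b = rng.standard_normal((500, 16)) + 1.0  # shifted distribution
print(frechet_distance(a, a) < frechet_distance(a, b))  # True
```

Common FID implementations use a matrix square root (e.g. `scipy.linalg.sqrtm`) instead of the eigenvalue shortcut; we use the latter here only to keep the sketch dependency-free.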

Additional Evaluation Metrics:

In addition to the slice-wise FID metric, we evaluated the quality of generated images with the batch-wise squared Maximum Mean Discrepancy (bMMD) as suggested in [22] and [30]. Briefly, MMD measures the distance between two distributions via finite-sample estimates with kernel functions in a reproducing kernel Hilbert space [12]. The bMMD measures the discrepancy between images in a batch using a dot product as the kernel [22, 30]. To measure the diversity of generated images, we used the pair-wise Multi-Scale Structural Similarity (MS-SSIM) [22, 30]. It measures the perceptual diversity of generated images as the mean of the MS-SSIM scores over pairs of generated images [28].
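With a dot-product kernel, the batch-wise squared MMD has a simple closed form (it equals the squared distance between the batch means). A numpy sketch under our own hypothetical batch shapes:

```python
import numpy as np

def bmmd_squared(batch_a, batch_b):
    """Biased squared MMD with a dot-product (linear) kernel."""
    xa = batch_a.reshape(len(batch_a), -1)  # flatten each 3D image
    xb = batch_b.reshape(len(batch_b), -1)
    k_aa = (xa @ xa.T).mean()               # mean within-batch kernel values
    k_bb = (xb @ xb.T).mean()
    k_ab = (xa @ xb.T).mean()               # mean cross-batch kernel values
    return k_aa + k_bb - 2 * k_ab

rng = np.random.default_rng(0)
real = rng.standard_normal((8, 20, 24, 28))           # toy batch of 3D images
fake_close = real + 0.01 * rng.standard_normal(real.shape)
fake_far = real + 1.0                                  # intensity-shifted batch
print(bmmd_squared(real, fake_close) < bmmd_squared(real, fake_far))
```

Because the linear-kernel bMMD reduces to a mean-difference statistic, it is sensitive to global intensity shifts but blind to higher-order differences, which is one reason the paper reports it alongside FID and MS-SSIM.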

Config.    Filter Depth   Latent Vector Size   Minibatch Size         # Trainable Params
2mm-fd96   96             96                   {32, 32, 16, 16}       6.6M
2mm-fd64   64             64                   {32, 32, 16, 16}       3.0M
2mm-fd32   32             128                  {32, 32, 16, 16}       0.9M
2mm-fd16   16             64                   {32, 32, 16, 16}       0.2M
1mm-fd16   16             64                   {64, 64, 32, 16, 16}   0.2M

Table 1: The list of configurations of 3D-StyleGAN with different filter map depths, latent vector sizes, and minibatch sizes. Minibatch sizes are listed per resolution level.

3 Results

For all experiments, we used 11,820 brain MR T1 images from multiple publicly available datasets: ADNI, OASIS, ABIDE, ADHD200, MCIC, PPMI, HABS, and Harvard GSP [27, 24, 7, 26, 10, 25, 5, 16, 6]. All images were skull-stripped, affine-aligned, resampled to 1mm isotropic resolution, and trimmed to a size of 160×192×224 using FreeSurfer [6, 9]. For the experiments at 2mm isotropic resolution, the images were resampled to a size of 80×96×112. Among those images, 7,392 were used for training; the remaining 4,329 images were used for evaluation. The base layer size was set to 5×6×7 to account for the input image size. For each experiment, we used four NVIDIA Titan Xp GPUs for training with 2mm-resolution images, and eight GPUs for training with 1mm-resolution images. The code was implemented with TensorFlow 1.15 and Python 3.6 [1].
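As a sanity check on the stated sizes (our illustration; the number of upsampling blocks is inferred from the image sizes, not taken from the paper's code), repeated 2× upsampling from the 5×6×7 base reproduces both the 2mm size 80×96×112 and the 1mm size 160×192×224:

```python
# Each synthesis block doubles the spatial dimensions, so the constant
# 5x6x7 base reaches the stated image sizes after repeated 2x upsampling.
base = (5, 6, 7)
shape = base
shapes = [base]
for _ in range(5):
    shape = tuple(2 * s for s in shape)
    shapes.append(shape)

print(shapes[4])  # 2mm size after four doublings: (80, 96, 112)
print(shapes[5])  # 1mm size after five doublings: (160, 192, 224)
```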

Figure 4: Uncurated, randomly generated 3D images from the 3D-StyleGAN. The images were generated by the network trained with configuration 2mm-fd96. The middle slices of the sagittal, axial, and coronal axes of five 3D images are presented.
(a) 2mm-fd96
(b) 2mm-fd64
Figure 7: A qualitative comparison between the networks trained with filter map depths of (a) 96 (2mm-fd96) and (b) 64 (2mm-fd64). While the generated images of 2mm-fd64 are sharper, e.g., in the cerebellum and sulci, the overall brain structures are not as coherent as those in the images generated with 2mm-fd96.

 

Figure 8: Results of projecting real, unseen 3D images to the latent space. The top row shows the real images ("Real"), and the bottom row shows the reconstructions from the projected embeddings ("Proj."). The configuration used was 2mm-fd96. The middle sagittal, axial, and coronal slices of the 3D images are displayed. The differences between the real and reconstructed images are indistinguishable apart from lattice-like artifacts caused by the generator.

3D Image Synthesis at 2mm Isotropic Resolution: Figure 4 shows randomly generated 2mm-resolution 3D images from the 3D-StyleGAN trained with configuration 2mm-fd96, i.e., with a filter map depth of 96 (see Table 1 for all configurations). Each model was trained for an average of 2 days. We observe that anatomical structures are generated correctly and are coherently located.

In Table 2, we show the bMMD, MS-SSIM, and FID scores for the middle sagittal (FID-Sag), axial (FID-Ax), and coronal (FID-Cor) slices, calculated using 4,000 generated images with respect to the unseen test images. One interesting observation was that the slice-wise FIDs of the middle sagittal and axial slices of the network trained with a filter map depth of 64 were lower than those of the network with a filter map depth of 96. Figure 7 shows a qualitative comparison between images generated by networks with filter map depths of 64 and 96. The network with a filter depth of 64 produces sharper boundaries, e.g., in the cerebellum and sulci, but the overall brain structures are not as coherent as in the images from the network with a filter depth of 96. This is possibly caused by the FID score focusing on the texture of the generated images rather than accounting for the overall structure of the brain [31]. The network with a filter map depth of 32 showed the best bMMD result, although the distributions of the metrics largely overlapped between fd-32, fd-64, and fd-96. The MS-SSIM showed that the network with a filter map depth of 64 produced the most diverse images. This may indicate that the better FID scores of the fd-64 configuration were due to the diversity of its generated images, and that the fd-96 configuration may have overfitted and failed to cover the large variability of real images.

 

Configurations   bMMD            MS-SSIM        FID-Sag   FID-Ax   FID-Cor
2mm-fd16         7026 (555.79)   0.96 (0.03)    145.6     153.6    164.1
2mm-fd32         4470 (532.65)   0.94 (0.07)    129.3     144.3    128.8
2mm-fd64         4497 (898.53)   0.93 (0.12)    106.9     71.3     90.2
2mm-fd96         4475 (539.38)   0.96 (0.04)    138.3     83.2     88.5
2mm-Real         449 (121.43)    0.85 (0.002)   3.0       2.1      2.9

Table 2: The quantitative evaluation of generated images with different configurations. The batch-wise squared Maximum Mean Discrepancy (bMMD) and Multi-Scale Structural Similarity (MS-SSIM) were calculated on each set of generated 3D images and real training images. The slice-wise Fréchet Inception Distance (FID) of the middle slices (Sag: sagittal, Ax: axial, Cor: coronal) of generated and real training images was computed with respect to unseen real test images.

Image Projection: Figure 8 shows the results of projecting unseen images into the space of latent vectors and noise maps. The top row shows the real, unseen images, while the bottom row shows the reconstructions from the projected embeddings. Reconstructed images are almost identical to the input images, a result that has also been observed by [2] on other datasets, likely due to the over-parameterization of the latent space of StyleGAN2.

Style Mixing: Figure 11 shows the results of using 3D-StyleGAN's style-mixing capability. The images in the top row and the first column are randomly generated by the 3D-StyleGAN trained with configuration 2mm-fd96. The style vectors of the images in the top row were fed into the high-resolution style blocks (i.e., the 5th to 9th), and the style vector of the image in the first column was fed into the low-resolution style blocks (i.e., the 1st to 4th). We observed that high-level anatomical variations, such as the size of a ventricle and the brain outline, were controlled by the style vectors of the first-column image mixed into the lower-resolution style blocks. These results indicate that style vectors at different resolution levels could potentially control different levels of anatomical variability, although our results are preliminary and need further investigation.
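Operationally, style mixing amounts to swapping per-block style vectors between two latents before synthesis. A minimal sketch (our illustration; the 9-block layout and 32-dimensional styles are assumptions matching the description above):

```python
import numpy as np

def mix_styles(w_structure, w_detail, crossover=4):
    """w_*: (num_blocks, style_dim) per-block style vectors.

    Blocks before `crossover` (low resolution) come from the "structure"
    source; blocks from `crossover` on (high resolution) come from the
    "detail" source.
    """
    mixed = w_detail.copy()
    mixed[:crossover] = w_structure[:crossover]  # blocks 1-4: coarse anatomy
    return mixed

rng = np.random.default_rng(0)
w_a = rng.standard_normal((9, 32))  # 9 style blocks, 32-dim styles
w_b = rng.standard_normal((9, 32))
w_mix = mix_styles(w_a, w_b)
print(np.array_equal(w_mix[:4], w_a[:4]) and np.array_equal(w_mix[4:], w_b[4:]))
```

Feeding `w_mix` to the synthesis network would then produce an image with the coarse anatomy of the first source and the fine detail of the second.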

Failure Cases: We trained the 3D-StyleGAN for the synthesis of 1mm isotropic resolution images. The filter map depth and latent vector size were set to 16 and 32, respectively, because of the limited GPU memory (12GB). The model was trained for 5 days. Figure 14 (a) shows examples of generated 1mm-resolution images in the sagittal view after 320 iterations. The results showed substantially lower quality compared to the images generated by the networks trained with deeper filter maps on 2mm-resolution images. Figure 14 (b) shows the image generation results for 2mm-resolution images with the same filter depth of 16. Compared to the generated images shown in Figure 4, the quality is also substantially lower. This experiment showed that the filter map depths of the generator and discriminator networks need to be above a certain threshold to ensure high-quality generated images.

(a) Sagittal View
(b) Coronal View
Figure 11: Style-mixing example. The style vectors of the two 3D images in the top row were fed into the high-resolution (5th-9th) style blocks, and the style vectors of the image in the first column were fed into the low-resolution (1st-4th) style blocks, to generate new images. Higher-level anatomical variations, such as the size of a ventricle and the corpus callosum (red boxes), were controlled by the lower-resolution style blocks, and detailed variations, such as sulci and cerebellum structures (yellow boxes), by the higher-resolution style blocks. The three input images were randomly generated full 3D images, displayed in the (a) sagittal and (b) coronal views of their respective middle slices.
(a) 1mm resolution
(b) 2mm resolution
Figure 14: Failure cases at 1mm and 2mm isotropic resolution with a filter map depth of 16. The low filter map depth resulted in low-quality images.

4 Discussion

We presented 3D-StyleGAN, an extension of StyleGAN2 for the generative modeling of 3D medical images. In addition to the high quality of the images generated by 3D-StyleGAN, we believe the latent space projection of unseen real images and the control of anatomical variability through style vectors can be utilized to tackle clinically important problems. One limitation of our paper is the use of limited evaluation metrics: the metrics did not always match qualitative assessments of generated image quality, and the FIDs we used only operate on 2D slices because of their reliance on 2D pre-trained networks. We plan to address 3D quantitative evaluation of perceptual similarity with a large 3D network in future work. Another limitation of our work is the limited size of the training data and of the trained networks. We plan to address this in future work through extensive augmentation of the training images and by investigating memory-efficient network structures. We believe our work will enable important downstream tasks to be performed directly in 3D via generative modeling, such as super-resolution, motion correction, conditional atlas estimation, and image progression modeling with respect to clinical attributes.

Acknowledgements This research was supported by NIH-NINDS MRI-GENIE: R01NS086905; K23NS064052, R01NS082285, NIH NIBIB NAC P41EB015902, NIH NINDS U19NS115388, Wistron, and MIT-IBM Watson AI Lab. A.K.B. was supported by MGH ECOR Fund for Medical Discovery (FMD) Clinical Research Fellowship Award. M.B. was supported by the ISITE and Planiol foundations and the Sociétés Françaises de Neuroradiologie et Radiologie.

References

  • [1] Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G.S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jia, Y., Jozefowicz, R., Kaiser, L., Kudlur, M., Levenberg, J., Mané, D., Monga, R., Moore, S., Murray, D., Olah, C., Schuster, M., Shlens, J., Steiner, B., Sutskever, I., Talwar, K., Tucker, P., Vanhoucke, V., Vasudevan, V., Viégas, F., Vinyals, O., Warden, P., Wattenberg, M., Wicke, M., Yu, Y., Zheng, X.: TensorFlow: Large-scale machine learning on heterogeneous systems (2015), https://www.tensorflow.org/, software available from tensorflow.org
  • [2] Abdal, R., Qin, Y., Wonka, P.: Image2StyleGAN: How to embed images into the StyleGAN latent space? In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 4432–4441 (2019)
  • [3] Arjovsky, M., Chintala, S., Bottou, L.: Wasserstein generative adversarial networks. In: International Conference on Machine Learning. pp. 214–223. PMLR (2017)
  • [4] Brock, A., Donahue, J., Simonyan, K.: Large scale GAN training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096 (2018)
  • [5] Dagley, A., LaPoint, M., Huijbers, W., Hedden, T., McLaren, D.G., Chatwal, J.P., Papp, K.V., Amariglio, R.E., Blacker, D., Rentz, D.M., et al.: Harvard Aging Brain Study: dataset and accessibility. NeuroImage 144, 255–258 (2017)
  • [6] Dalca, A.V., Guttag, J., Sabuncu, M.R.: Anatomical priors in convolutional networks for unsupervised biomedical segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 9290–9299 (2018)
  • [7] Di Martino, A., Yan, C.G., Li, Q., Denio, E., Castellanos, F.X., Alaerts, K., Anderson, J.S., Assaf, M., Bookheimer, S.Y., Dapretto, M., et al.: The autism brain imaging data exchange: towards a large-scale evaluation of the intrinsic brain architecture in autism. Molecular Psychiatry 19(6), 659–667 (2014)
  • [8] Dumoulin, V., Perez, E., Schucher, N., Strub, F., Vries, H.d., Courville, A., Bengio, Y.: Feature-wise transformations. Distill 3(7), e11 (2018)
  • [9] Fischl, B.: FreeSurfer. NeuroImage 62(2), 774–781 (2012)
  • [10] Gollub, R.L., Shoemaker, J.M., King, M.D., White, T., Ehrlich, S., Sponheim, S.R., Clark, V.P., Turner, J.A., Mueller, B.A., Magnotta, V., et al.: The MCIC collection: a shared repository of multi-modal, multi-site brain image data from a clinical investigation of schizophrenia. Neuroinformatics 11(3), 367–388 (2013)
  • [11] Goodfellow, I.J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. arXiv preprint arXiv:1406.2661 (2014)
  • [12] Gretton, A., Borgwardt, K.M., Rasch, M.J., Schölkopf, B., Smola, A.: A kernel two-sample test. The Journal of Machine Learning Research 13(1), 723–773 (2012)
  • [13] Hara, K., Kataoka, H., Satoh, Y.: Can spatiotemporal 3D CNNs retrace the history of 2D CNNs and ImageNet? In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 6546–6555 (2018)
  • [14] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 770–778 (2016)
  • [15] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. arXiv preprint arXiv:1706.08500 (2017)
  • [16] Holmes, A.J., Hollinshead, M.O., O'Keefe, T.M., Petrov, V.I., Fariello, G.R., Wald, L.L., Fischl, B., Rosen, B.R., Mair, R.W., Roffman, J.L., et al.: Brain Genomics Superstruct Project initial data release with structural, functional, and behavioral measures. Scientific Data 2(1), 1–16 (2015)
  • [17] Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 1501–1510 (2017)
  • [18] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive growing of GANs for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196 (2017)
  • [19] Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., Aila, T.: Training generative adversarial networks with limited data. arXiv preprint arXiv:2006.06676 (2020)
  • [20] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 4401–4410 (2019)
  • [21] Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of StyleGAN. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 8110–8119 (2020)
  • [22] Kwon, G., Han, C., Kim, D.s.: Generation of 3D brain MRI using auto-encoding generative adversarial networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 118–126. Springer (2019)
  • [23] Maas, A.L., Hannun, A.Y., Ng, A.Y.: Rectifier nonlinearities improve neural network acoustic models. In: Proc. ICML. vol. 30, p. 3. Citeseer (2013)
  • [24] Marcus, D.S., Wang, T.H., Parker, J., Csernansky, J.G., Morris, J.C., Buckner, R.L.: Open Access Series of Imaging Studies (OASIS): cross-sectional MRI data in young, middle aged, nondemented, and demented older adults. Journal of Cognitive Neuroscience 19(9), 1498–1507 (2007)
  • [25] Marek, K., Jennings, D., Lasch, S., Siderowf, A., Tanner, C., Simuni, T., Coffey, C., Kieburtz, K., Flagg, E., Chowdhury, S., et al.: The Parkinson Progression Marker Initiative (PPMI). Progress in Neurobiology 95(4), 629–635 (2011)
  • [26] Milham, M.P., Fair, D., Mennes, M., Mostofsky, S.H., et al.: The ADHD-200 consortium: a model to advance the translational potential of neuroimaging in clinical neuroscience. Frontiers in Systems Neuroscience 6, 62 (2012)
  • [27] Mueller, S.G., Weiner, M.W., Thal, L.J., Petersen, R.C., Jack, C.R., Jagust, W., Trojanowski, J.Q., Toga, A.W., Beckett, L.: Ways toward an early diagnosis in Alzheimer's disease: the Alzheimer's Disease Neuroimaging Initiative (ADNI). Alzheimer's & Dementia 1(1), 55–66 (2005)
  • [28] Odena, A., Olah, C., Shlens, J.: Conditional image synthesis with auxiliary classifier GANs. In: International Conference on Machine Learning. pp. 2642–2651. PMLR (2017)
  • [29] Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
  • [30] Volokitin, A., Erdil, E., Karani, N., Tezcan, K.C., Chen, X., Van Gool, L., Konukoglu, E.: Modelling the distribution of 3D brain MRI using a 2D slice VAE. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 657–666. Springer (2020)
  • [31] Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 586–595 (2018)
  • [32] Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 2223–2232 (2017)