Manipulating Medical Image Translation with Manifold Disentanglement

11/27/2020, by Siyu Liu et al.

Medical image translation (e.g. CT to MR) is a challenging task as it requires I) faithful translation of domain-invariant features (e.g. shape information of anatomical structures) and II) realistic synthesis of target-domain features (e.g. tissue appearance in MR). In this work, we propose Manifold Disentanglement Generative Adversarial Network (MDGAN), a novel image translation framework that explicitly models these two types of features. It employs a fully convolutional generator to model domain-invariant features, and it uses style codes to separately model target-domain features as a manifold. This design aims to explicitly disentangle domain-invariant features and domain-specific features while gaining individual control of both. The image translation process is formulated as a stylisation task, where the input is "stylised" (translated) into diverse target-domain images based on style codes sampled from the learnt manifold. We test MDGAN for multi-modal medical image translation, where we create two domain-specific manifold clusters on the manifold to translate segmentation maps into pseudo-CT and pseudo-MR images, respectively. We show that by traversing a path across the MR manifold cluster, the target output can be manipulated while still retaining the shape information from the input.


I Introduction

Generative Adversarial Networks (GANs) [8] and conditional GANs [28] have been rising in popularity for medical image synthesis. Conditional GANs are currently the dominant method for cross-modality medical image translation. For example, in MR-only radiotherapy treatment [4], a conditional GAN can be used to “retrieve" missing CT images from other available imaging modalities. The generative aspect of conditional GANs can also be useful in medical imaging analysis research; for example, a robust GAN capable of generating realistic and diverse examples can be used as a data augmentation tool to improve the performance of other models [32, 6].

A GAN-based image translation framework consists of a mapping function from the source domain to the target domain. In medical image translation, since different imaging domains (modalities) contain mutual as well as exclusive features, it is paramount that this mapping learns to preserve important domain-invariant features (e.g. anatomical structures and shape information). At the same time, it also needs to learn diverse features (e.g. tissue appearance and image contrast) specific to the target domain, such that the output visually resembles real target-domain images. In most image translation GANs, these two types of features are intertwined and cannot be individually controlled. In medical image translation, we argue it is desirable to disentangle domain-invariant features and domain-specific features such that they can be manipulated individually. For example, in a CT-MR translation task, we may want to diversify features specific to the MR domain (such as contrast and tissue appearance) while leaving the underlying anatomical structures intact. We may also want to alter the target domain of the output (e.g. PET instead of MR) under the same constraint.

Fig. 1: Proposed manifold disentanglement GAN for (style-based, multi-modal) medical image domain translation. The single generator supports multiple modalities as styles across domains.

The StyleGAN [18, 19] framework is a natural candidate for our objective. A StyleGAN generator relies on style code injection to manipulate the output, and the style codes essentially form a manifold that controls the output. We can therefore formulate our objective as a stylisation problem. As in Figure 1, a shared encoder-decoder generator network is used to extract and retain domain-invariant features. At the same time, we learn a disentangled manifold which provides style codes to “stylise” (translate) the input to the correct target domain. Based on this idea, we propose MDGAN, a powerful style-based generative framework for medical image translation. The contributions of this framework can be summarised as follows:

  • We harness the StyleGAN framework to explicitly disentangle domain-invariant features and domain-specific features. The generator implicitly learns a manifold of target-domain features for image translation, and the domain-specific style codes form clusters embedded on this manifold. The manifold provides control over the translated images, while domain-invariant features from the source input are faithfully retained as a result of feature disentanglement.

  • The generator is trained to interpret and generate multi-modal images based on multiple manifold clusters. This property enables multi-modal medical image translation with a shared generator and separately learnt manifold clusters. In our case, MDGAN learns a CT manifold cluster and an MR manifold cluster to generate pseudo-CT and pseudo-MR images from input segmentation maps.

  • By sampling the manifold, we can explore and interpolate the latent space to generate diverse images. The generated images differ in appearance without violating the domain-invariant features of the source input.

We use a shared generator to pass domain-invariant information, and inject style codes from two separate manifold cluster networks to synthesise realistic MR and CT images. The proposed framework also goes beyond one-to-one domain mapping and can produce diverse outputs for a single input segmentation. Finally, we perform dimensionality reduction on the style codes to reveal two well-formed manifold clusters. By sampling style codes along geodesic paths across the MR manifold cluster, we observe smooth and systematic transitions in tissue appearance, while keeping the shape information of the anatomical structures consistent with the input segmentation.

II Related Work

II-A Generative Adversarial Networks

The vanilla GAN [8] is composed of a generator network and a discriminator network. During training, the generator’s performance is improved by competing against the discriminator, which is given the opposite objective. The min-max objective of a basic GAN is defined as

$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big] \quad (1)$

The generator and the discriminator achieve optimal performance when the GAN reaches a state of Nash equilibrium. In practice, the training is often unstable and characterised by failures such as mode collapse, non-convergence and vanishing gradients. Efforts have been made to stabilise the training process of GANs: notable works, including LSGAN [24], WGAN [1], WGAN-GP [10] and PGGAN [17], have proposed methods to alleviate these failure modes and improve performance. Nevertheless, training a GAN remains very much an empirical process, and the training procedures can be highly domain sensitive.

Conditional GAN [28] is a type of GAN that synthesises data based on some conditional input. Usually, the input to the generator is highly correlated to the output and can be exploited to gain control of the generated data. Conditional GANs are best known for their success in image domain translation, for example, from sketches to photo-realistic images of the sketched objects [14]. These networks often rely on auxiliary losses such as L1, L2 and perceptual losses [16] to enforce correlation and consistency across the source and target domains.
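As an illustration, a minimal PyTorch-style sketch of such a conditional generator update (Pix2Pix-like, with an L1 auxiliary term) might look as follows. The networks G and D, the paired batch (x_src, y_tgt) and the weighting lambda_l1 are placeholders for illustration, not the exact losses used in this work.

```python
import torch
import torch.nn.functional as F

def generator_step(G, D, x_src, y_tgt, opt_G, lambda_l1=100.0):
    """One conditional-GAN generator update combining a non-saturating
    adversarial term with an L1 reconstruction term (Pix2Pix-style)."""
    y_fake = G(x_src)                              # translate source -> target
    logits_fake = D(x_src, y_fake)                 # conditional discriminator (placeholder interface)
    loss_adv = F.binary_cross_entropy_with_logits(
        logits_fake, torch.ones_like(logits_fake))  # generator tries to fool D
    loss_l1 = F.l1_loss(y_fake, y_tgt)             # pixel-wise consistency with the paired target
    loss = loss_adv + lambda_l1 * loss_l1
    opt_G.zero_grad()
    loss.backward()
    opt_G.step()
    return loss.item()
```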

II-B Style-Based GAN

Style-based GAN [18] stands on its own as an alternative approach to unconditional image synthesis, rethinking image synthesis from the perspective of style transfer. Huang et al. [13] explored the profound effect of activation normalisation in Convolutional Neural Networks (CNNs) and proposed Adaptive Instance Normalisation (AdaIN) as a means of real-time style transfer. Compared to traditional gradient-based style transfer methods [7], AdaIN exhibits superior versatility and control over the output style. StyleGAN [18] was the first to harness the power of AdaIN in a generative adversarial framework. Instead of starting with a latent noise vector, the StyleGAN generator applies AdaIN at various points of the network to inject a learnt latent (style) vector. This approach to image generation achieves unprecedented control over the generated image at all scales (from global structure to local details). Additionally, the network uses a progressively growing training scheme, mini-batch standard deviation [17] and style-mixing regularisation to optimise performance and stabilise training. The successor to StyleGAN, StyleGAN2 [19], was published with several significant refinements. First, AdaIN was removed in favour of modulated convolution to alleviate normalisation artefacts in the output. Second, the progressively growing networks were simplified to residual architectures for easier training. Lastly, a path length regularisation was introduced to improve image quality and network invertibility. The main limitation of the original StyleGAN and StyleGAN2 is their unconditional nature, which is unsuitable for image translation tasks. Recently, Pixel2Style2Pixel [31] proposed an image encoder network as an extension to the StyleGAN framework. The encoder maps input images to style codes, which enables domain translation within the StyleGAN framework. Another prominent conditional extension to StyleGAN is StarGAN v2 [3], which employs AdaIN and cycle consistency to achieve unpaired multi-domain translation.
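For reference, AdaIN aligns the channel-wise mean and standard deviation of the content features to those of the style features. A minimal PyTorch sketch is shown below; note that MDGAN itself adopts StyleGAN2-style modulated convolution instead, so this is background rather than the method used here.

```python
import torch

def adain(content: torch.Tensor, style: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Adaptive Instance Normalisation (Huang & Belongie, 2017).

    content, style: feature maps of shape (N, C, H, W). The content features
    are normalised per channel, then rescaled and shifted with the style
    features' channel-wise statistics.
    """
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content - c_mean) / c_std + s_mean
```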

II-C Generative Adversarial Networks in Medical Imaging Analysis

Both conditional and unconditional GANs have been widely adopted for medical imaging analysis [41]. Medical image synthesis using GANs has been shown to be effective for data augmentation, which may enhance existing deep learning models facing data scarcity. For example, [32] uses GAN-generated CT scans to improve segmentation accuracy, and [6] uses three generative networks to synthesise three types of lesions to improve classification accuracy. Conditional GANs are useful for a multitude of applications beyond data augmentation thanks to their versatility. The most common application of conditional GANs is image domain translation [2], for example, segmentation map to medical image [33, 9], MR reconstruction [34, 30], image denoising [36, 40] and cross-modality translation [25, 29, 15]. Many of these methods are based on the popular Pix2Pix framework, which relies on paired data across domains. When pair-wise labels are not available, CycleGAN [42] and UNIT [23] are used for semi-supervised learning on unpaired images [12, 39, 38, 22, 35].

II-D MDGAN

Compared to conventional GANs, the StyleGAN framework introduces a more profound approach to manipulating outputs using an external style input. As described in the Introduction, it fits naturally with our objective of creating a versatile medical image translation framework based on feature disentanglement. However, the original StyleGAN framework is fundamentally unconditional, and its manifold does not provide disentanglement of domain-related features. While Pixel2Style2Pixel is one step closer to our objective due to its conditional input and diverse outputs, it does not provide explicit disentanglement of domain-invariant features from a domain-specific manifold for independent control, and it is not designed for multi-modal applications. StarGAN v2 was also considered a candidate for our task as it is a powerful multi-domain image translation network; however, it does not learn a manifold of domain-specific features. Most other existing approaches for medical image translation are not multi-modal and thus require a designated network for each target domain. An immediate advantage of the proposed stylisation approach is the sharing of domain-invariant knowledge: only a small manifold cluster network is learnt for each domain, while the entire generator is shared. Moreover, most methods in the medical imaging context only perform one-to-one mappings on a given input, which does not capture the true dynamics of image translation tasks; for example, one segmentation map can theoretically be mapped to infinitely many valid target images. We are also not aware of other work that formulates multi-domain medical image translation as a general stylisation problem with disentangled manifolds. The closest use of a style-based GAN in medical imaging analysis is [5], which uses the original StyleGAN to explore the latent space of medical images. We acknowledge that unpaired image translation enabled by CycleGAN and UNIT can be more desirable for some applications, but to the best of our knowledge, there has been no method, especially in medical image analysis, based on exploitable manifolds of disentangled features. We consider a cycle-consistent version of this framework a potential future extension.

Fig. 2: Network architectures for the proposed framework. The conditional encoder and the image synthesiser are shared generative networks. The synthesiser uses modulated convolution for stylisation based on an external style code. The multiple style code manifold networks represent multiple manifold clusters on the overall manifold.

III Methods

The objective of the proposed method is to achieve disentangled representations of domain-invariant features and domain-specific features, where the domain-invariant features are translated to the target domain according to style codes sampled from learnt manifold clusters. In this section, we formulate a framework for one-to-many medical image translation using this idea.

III-A Proposed Framework

Improving upon the foundation of StyleGAN, the proposed MDGAN consists of four types of networks: a conditional encoder, a style-based image synthesiser, manifold cluster networks and discriminators. The encoder and the synthesiser are shared networks which encapsulate domain-invariant information (such as anatomical structure and shape information). For each target domain, a corresponding manifold cluster network and discriminator are trained to encourage the formation of manifold clusters.

The sub-network interactions are illustrated in Figure 2. Given a source-domain input and a desired target domain, the conditional encoder produces a shared latent representation of the input which captures domain-invariant features. The image synthesiser is supplied with both this latent representation and a domain-specific style code as conditional inputs to synthesise the translated image. Note that the latent representation and the style code are deliberately separated to disentangle shared features from domain-specific features. The role of the style code is to modulate the convolutional weights in the synthesiser to achieve the desired output style. Compared to a random noise vector, the style code is more interpretable to the synthesiser as it is fundamentally drawn from a manifold cluster of the features specific to the target domain. Finally, the discriminator for each target domain is a binary classifier which aims to distinguish real data from the translated (fake) images.
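A minimal PyTorch-style sketch of this forward pass is given below. The module names (encoder, synthesiser, manifold_nets) follow the description above, and the 384-d noise dimensionality is taken from Section III-B, but the exact interfaces are assumptions for illustration.

```python
import torch

def translate(encoder, synthesiser, manifold_nets, x_src, domain, z_dim=384):
    """Translate a source image x_src into the requested target domain.

    encoder:       shared fully convolutional network extracting domain-invariant features.
    synthesiser:   shared style-based decoder with modulated convolutions.
    manifold_nets: dict mapping domain name -> manifold cluster network (MLP).
    """
    h = encoder(x_src)                                # domain-invariant latent representation
    z = torch.randn(x_src.size(0), z_dim)             # random noise vector
    style = manifold_nets[domain](z)                  # style code on the domain's manifold cluster
    return synthesiser(h, style)                      # "stylise" (translate) into the target domain

# usage sketch: pseudo-MR and pseudo-CT from the same segmentation input
# fake_mr = translate(E, S, M, seg_onehot, domain="MR")
# fake_ct = translate(E, S, M, seg_onehot, domain="CT")
```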

For each training step, we randomly sample a target domain and train the corresponding manifold cluster network and discriminator together with the shared image synthesiser and conditional encoder.

III-B Network Architecture Details

Each manifold cluster network is a fully-connected network with four 384-unit hidden layers. It learns a mapping from a 384-d noise vector to a 384-d intermediate style code on its manifold cluster. The generator (encoder plus synthesiser) is fully convolutional and is a re-implementation of StyleGAN2 [19] with conditional inputs. The conditional encoder contains four convolutional blocks (16, 16, 32 and 64 filters) which progressively down-sample the input while expanding the feature depth. The image synthesiser contains convolutional blocks of depths 256, 128, 64 and 48, and its structure mirrors that of the encoder to recover the original scale of the input. Residual connections [19] are used in both the encoder and the synthesiser to improve the connectivity between neighbouring blocks. Like the original StyleGAN, we also incorporate noise feature maps in the synthesiser to introduce fine-grained variations. The discriminator uses a similar structure to the encoder, but its filter depths are increased to 48, 64, 128 and 256. The final feature maps of the discriminator are mapped to a confidence score using a densely connected layer.
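A minimal sketch of one manifold cluster network, assuming the stated 384-unit, four-hidden-layer fully-connected design (the activation choice is an assumption):

```python
import torch.nn as nn

class ManifoldClusterNetwork(nn.Module):
    """Maps a 384-d noise vector to a 384-d style code on one domain's
    manifold cluster (analogous to StyleGAN's mapping network)."""
    def __init__(self, dim: int = 384, n_hidden: int = 4):
        super().__init__()
        layers = []
        for _ in range(n_hidden):
            layers += [nn.Linear(dim, dim), nn.LeakyReLU(0.2)]  # activation assumed
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        return self.net(z)
```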

All of the convolutional layers in the image synthesiser are modulated convolutions, as used in StyleGAN2. Modulated convolution performs weight re-normalisation based on an affine-transformed external style vector:

$w'_{ijk} = s_i \cdot w_{ijk}, \qquad w''_{ijk} = w'_{ijk} \Big/ \sqrt{\textstyle\sum_{i,k} (w'_{ijk})^2 + \epsilon}$

As in the modulated convolution of StyleGAN2, the scale $s_i$ comes from an external style input, and $i$, $j$ and $k$ enumerate the input feature maps, output feature maps and spatial dimensions, respectively. In the proposed framework, this re-normalisation procedure provides the mechanism for freely switching among multiple target domains as well as producing diverse outputs.
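A simplified PyTorch sketch of weight modulation and demodulation follows StyleGAN2's formulation; the grouped-convolution trick and the exact tensor layout are illustrative reimplementation details, not the authors' code.

```python
import torch
import torch.nn.functional as F

def modulated_conv2d(x, weight, style, eps=1e-8):
    """x: (N, C_in, H, W); weight: (C_out, C_in, k, k); style: (N, C_in).

    Scales the convolution weights per sample by the style vector (modulation),
    then re-normalises them per output channel (demodulation), as in StyleGAN2.
    """
    n, c_in, h, width = x.shape
    c_out = weight.shape[0]
    w = weight.unsqueeze(0) * style.view(n, 1, c_in, 1, 1)            # modulate: w'_ijk = s_i * w_ijk
    demod = torch.rsqrt((w ** 2).sum(dim=(2, 3, 4), keepdim=True) + eps)
    w = w * demod                                                      # demodulate
    x = x.reshape(1, n * c_in, h, width)                               # grouped conv: one group per sample
    w = w.reshape(n * c_out, c_in, *weight.shape[2:])
    out = F.conv2d(x, w, padding=weight.shape[-1] // 2, groups=n)
    return out.reshape(n, c_out, h, width)
```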

III-C Proposed Training Procedure

Fig. 3: The perceptual loss uses the intermediate outputs of the discriminator; the compared features are the outputs of its convolutional layers (from the third layer onward).

The primary losses of the proposed framework include a non-saturating adversarial loss [8], a perceptual reconstruction loss and a gradient penalty term [27]. When computing the perceptual reconstruction loss, the perceptual network for a target domain is its corresponding discriminator. As shown in Figure 3, the loss captures the perceptual difference based on the intermediate outputs of the discriminator from the third convolutional layer onward.
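A sketch of how such a discriminator-feature perceptual loss might be computed, assuming the discriminator can return its intermediate activations (the return_features flag and the L1 distance are assumptions for illustration):

```python
import torch.nn.functional as F

def perceptual_loss(discriminator, y_real, y_fake, start_layer=2):
    """Perceptual reconstruction loss using the target domain's discriminator
    as the perceptual network. Features from the third convolutional layer
    onward (index 2) are compared between real and generated images."""
    feats_real = discriminator(y_real, return_features=True)  # list of per-layer activations (assumed API)
    feats_fake = discriminator(y_fake, return_features=True)
    loss = 0.0
    for fr, ff in zip(feats_real[start_layer:], feats_fake[start_layer:]):
        loss = loss + F.l1_loss(ff, fr.detach())
    return loss
```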

Most GAN frameworks used for image translation are one-to-one mapping networks, which arguably resembles the undesirable effect of mode collapse. With the proposed framework, we achieve one-to-many image translation by ensuring the output is both diverse and valid. This is done by imposing an additional diversification regulariser (similar to [37]) to ensure the output is well conditioned on the style codes, as sketched below. Our experiments show that the model tends to collapse and produce similar outputs without this regulariser.
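For reference, the diversity-sensitive regulariser of [37] rewards outputs that change when the latent (style) code changes. A hedged sketch of such a term is shown below, where $G$ denotes the shared generator, $x$ the source input and $s_1, s_2$ two style codes drawn from the same manifold cluster network; the exact formulation used in MDGAN may differ. The term is maximised during training (equivalently, its negative is added to the generator loss).

```latex
% Diversity-sensitive regulariser, following Yang et al. [37]; a sketch, not the exact MDGAN term.
\mathcal{L}_{div} = \mathbb{E}_{x,\, s_1,\, s_2}
    \left[ \frac{\lVert G(x, s_1) - G(x, s_2) \rVert_1}{\lVert s_1 - s_2 \rVert_1} \right]
```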

The total loss is defined as the weighted sum of these terms, with scaling factors for the reconstruction loss, the gradient penalty term and the diversification loss. Unlike StarGAN v2 [3], we avoid decaying the diversification weight, as decay results in mode collapse in our case.
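Putting the terms together, the total objective plausibly takes the weighted form sketched below; the $\lambda$ values are the scaling factors described above (their numerical values are not reproduced here), and the diversification term enters with a negative sign because it is maximised.

```latex
% Sketch of the total training objective; a plausible form, not a verbatim reproduction.
\mathcal{L}_{total} = \mathcal{L}_{adv}
    + \lambda_{rec}\,\mathcal{L}_{rec}
    + \lambda_{gp}\,\mathcal{L}_{gp}
    - \lambda_{div}\,\mathcal{L}_{div}
```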

For training, we use the Adam [20] optimiser with a learning rate of 0.0001 for the shared generative networks and 0.000001 for all the other networks. The models are trained for 72 hours on an NVIDIA P100 GPU with 16 GB of VRAM, processing a batch of 8 images at a time.

III-D Experiment

We test the proposed framework by performing domain translation from segmentation maps to MR and CT scans. The dataset is a manually segmented 3D prostate dataset with 211 MR and 42 CT scans, collected in a prostate cancer treatment study of 42 patients over the course of 8 weeks [4]. Each 3D image is manually labelled with five foreground classes: body, bone, bladder, rectum and prostate. During training, we randomly sample images from the centre 40 slices of the coronal plane. All the input images are preprocessed by mapping their pixel intensity ranges to [0, 1]. To critically test the capability of MDGAN, we deliberately avoid any data augmentation in the training process.
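A small NumPy sketch of the described preprocessing, i.e. sampling coronal slices from the central 40 positions and rescaling intensities to [0, 1]; the array layout, the coronal axis index and per-slice (rather than per-volume) normalisation are assumptions.

```python
import numpy as np

def sample_training_slice(volume: np.ndarray, rng: np.random.Generator, n_centre: int = 40) -> np.ndarray:
    """Pick a random coronal slice from the central `n_centre` positions and
    rescale its intensities to [0, 1]. Assumes the coronal axis is axis 1."""
    mid = volume.shape[1] // 2
    idx = rng.integers(mid - n_centre // 2, mid + n_centre // 2)
    sl = volume[:, idx, :].astype(np.float32)
    lo, hi = sl.min(), sl.max()
    return (sl - lo) / (hi - lo + 1e-8)
```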

Our framework takes the segmentation maps (one-hot encoded) as the input and stylises them into the target domains (MR and CT). As described in III-A, the generative part of the framework only requires two separately learnt manifold cluster networks (one per target domain), which are inexpensive to train; the two expensive components, the conditional encoder and the image synthesiser, are fully shared. The discriminators are also separately trained networks, but they do not contribute to inference and can be discarded after training.

Finally, we perform dimensionality reduction on the style codes to explore the learnt manifold of each domain. This is done by sampling 10,000 style codes from each of the two manifold cluster networks and mapping them to 2D space using UMAP [26] (minimum distance of 0.2 and 5 neighbours). Manifold interpolation was performed on the MR style codes (instead of CT, because MR scans contain more complex features) to observe visual transitions in the MR domain. For a given segmentation input, a geodesic path with 36 points across the manifold is selected, and 36 images are generated.
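A sketch of this visualisation step with umap-learn, using the stated parameters. The path construction shown here is a simplified stand-in (straight-line interpolation between two chosen codes) rather than the geodesic path used in the paper.

```python
import numpy as np
import torch
import umap  # umap-learn

def visualise_manifold(manifold_net_mr, manifold_net_ct, n=10_000, z_dim=384):
    """Sample style codes from both manifold cluster networks and embed them in 2D."""
    with torch.no_grad():
        codes_mr = manifold_net_mr(torch.randn(n, z_dim)).numpy()
        codes_ct = manifold_net_ct(torch.randn(n, z_dim)).numpy()
    reducer = umap.UMAP(n_neighbors=5, min_dist=0.2)
    embedding = reducer.fit_transform(np.vstack([codes_mr, codes_ct]))  # (2n, 2)
    return embedding, codes_mr, codes_ct

def path_codes(start, end, n_points=36):
    """Simplified stand-in for the geodesic path: linear interpolation in code space."""
    t = np.linspace(0.0, 1.0, n_points)[:, None]
    return (1 - t) * start + t * end
```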

Fig. 4: Representative results from the proposed MDGAN framework for pseudo-CT and pseudo-MR generation.

IV Results

In this section, we present the results of the medical image domain translation task. Our analysis focuses mainly on the generative capabilities of MDGAN, as well as on the inner workings of the disentangled manifold. Further results on other datasets are provided in the supplementary material.

IV-A Generator Results

Figure 4 presents representative results generated using MDGAN. All the generated images are acquired using a shared instance of the proposed style-based generator. It can be seen that the MR and CT outputs retain consistent shape information from the input segmentation maps. We also tested the robustness of the proposed model by elastically deforming the input segmentations and sampling a large number of different noise inputs; no noticeable failure cases were observed. This can be attributed to the learnt manifold clusters, which map random noise inputs to a more “interpretable" latent space [18], thus avoiding invalid combinations of features.

IV-B Quantitative Results

Fréchet Inception Distance (FID) [11] is a measure of the similarity between two sets of images (usually a set of real images and a set of GAN-generated images). This metric was used to assess the quality of the generated MR. The CT results are excluded from this analysis because of the much smaller dataset size, and because CT is easier to translate than MR due to its lack of rich contrast information. Since there are no existing benchmark results on this MR dataset, we use the real data as the gold standard. As shown in Figure 5, we split the MR dataset into two subsets containing 1/3 and 2/3 of the data, and compute the gold-standard FID based on 50,000 slices sampled from each subset (with overlap within a subset). We then generate a fake replica of one subset using its segmentation maps (also 50,000 slices, without overlap since the output is non-deterministic), and compute the FID between this replica and the other real subset as MDGAN's performance metric. We take a 4-fold validation approach to this evaluation and the results are shown in Table I. As a baseline, the FID between the MR dataset and the CT dataset is 219.65. Our results are therefore close to the gold standard and within its margin of error.
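For reference, the Fréchet distance between two Gaussians fitted to the feature statistics (mean and covariance) of the two image sets can be computed as below; feature extraction with an Inception network is omitted, so this is a minimal NumPy/SciPy sketch rather than the full evaluation pipeline.

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """FID between two Gaussians N(mu1, sigma1) and N(mu2, sigma2) fitted to
    Inception features of the real and generated image sets."""
    diff = mu1 - mu2
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)  # matrix square root
    if np.iscomplexobj(covmean):
        covmean = covmean.real                              # discard tiny imaginary parts
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```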

                    Split 1   Split 2   Split 3   Split 4
Gold Standard MR     22.77     15.78     17.74     33.47
MDGAN MR             20.30     19.56     22.71     14.81
TABLE I: MDGAN FID results (4-fold validation)
Fig. 5: The dataset is divided into two subsets. The gold standard FID is computed on the two real subsets. The FID of MDGAN is computed based on one real subset and the generated copy of the other subset.
Fig. 6: Diverse outputs generated using different style codes within the proposed MDGAN from the same labelled image input.
Fig. 7: Left: 2D UMAP manifold mapping from 10,000 MR (orange) and 10,000 CT (blue) style codes. The manifold clusters are naturally separated in 2D. The geodesic path chosen to explore the MR manifold is indicated in green. Right: images generated using the 1st, 9th, 16th, 27th and 36th points from the manifold path.
Fig. 8: Change in tissue structures before (n=1) and after (n=36) transition.

IV-C Diversity of Results

To test MDGAN for output diversification, we perform image translation on a given input segmentation in combination with different style codes. The representative results in Figure 6 suggest that the generator is well conditioned on both the segmentation map and the style code input. The explicit disentanglement of domain-invariant and domain-specific features allows us to “edit” the tissues in the generated MR while keeping the mutual shape information intact. We also observe that the diversification scaling factor has a positive association with the magnitude of diversification, though large values take significantly longer to converge and may occasionally produce invalid outputs.

IV-D Manifold of MR Features

Figure 7 presents the results of the MR manifold geodesic path or “walk”. As shown, the style codes of MR (orange) and CT (blue) are embedded as two separate clusters on the manifold of the generator. The clear separation between the two clusters acts as a boundary that explicitly prevents “feature mix-ups" between the two exclusive domains. This suggests the proposed style-based generator is capable of learning and interpreting multiple manifold clusters for different imaging modalities.

Traversing the chosen geodesic path in the MR manifold (green), the images generated using the style code sequence (and a chosen segmentation map) show smooth and systematic transitions. We observe that these transitions result in consistent and meaningful changes in tissue structure for all valid segmentation maps. Examples of this finding are shown in Figure 8, with the changes before and after the transition highlighted in colour. The style codes appear to have a similar global influence on all segmentation maps, even for the heavily distorted case (Figure 7E) that was not part of the training set, though the features are localised differently in each image due to the shape and validity constraints. The diversification regulariser plays a profound role in forming these well-distributed manifold clusters: our experiments without this term resulted in mode collapse, where all style codes produce the same output for a given input and chosen target domain. Future work will involve using the proposed manifold disentanglement to construct meaningful manifolds in order to understand human diseases via MR and CT images from large studies.

Finally, all the images faithfully preserve the shape information prescribed by the segmentation maps. We therefore believe the learnt manifold is disentangled from the domain-invariant features while remaining semantically conditioned on these features at the same time.

V Conclusion

In this paper, we introduce MDGAN, a style-based framework for medical image domain translation. Besides its robust generative performance, the framework explicitly models domain-invariant features and domain-specific features. We model domain-invariant features with a fully convolutional network, and domain-specific features as a disentangled manifold. We embed two manifold clusters onto this manifold using two style code manifold networks, which provide style codes for multi-modal (segmentation to MR and CT) medical image translation. These manifold clusters are found to determine the target domain as well as the features specific to that domain in the image translation process. This valuable property could facilitate detailed manifold learning of human diseases investigated with radiological techniques such as MR imaging.

References

  • [1] M. Arjovsky, S. Chintala, and L. Bottou (2017-01) Wasserstein GAN. arXiv e-prints, pp. arXiv:1701.07875. External Links: 1701.07875 Cited by: §II-A.
  • [2] K. Armanious, C. Jiang, M. Fischer, T. Küstner, T. Hepp, K. Nikolaou, S. Gatidis, and B. Yang (2020) MedGAN: medical image translation using gans. Computerized Medical Imaging and Graphics 79, pp. 101684. External Links: ISSN 0895-6111, Document, Link Cited by: §II-C.
  • [3] Y. Choi, Y. Uh, J. Yoo, and J. Ha (2019-12) StarGAN v2: Diverse Image Synthesis for Multiple Domains. arXiv e-prints, pp. arXiv:1912.01865. External Links: 1912.01865 Cited by: §II-B, §III-C.
  • [4] J. A. Dowling, J. Sun, P. Pichler, D. Rivest-Hénault, S. Ghose, H. Richardson, C. Wratten, J. Martin, J. Arm, L. Best, S. S. Chandra, J. Fripp, F. W. Menk, and P. B. Greer (2015) Automatic substitute CT generation and contouring for MRI-alone external beam radiation therapy from standard MRI sequences. International Journal of Radiation Oncology Biology Physics 93 (5), pp. 1144–1153 (English). External Links: ISSN 0360-3016, Document Cited by: §I, §III-D.
  • [5] L. Fetty, M. Bylund, P. Kuess, G. Heilemann, T. Nyholm, D. Georg, and T. Löfstedt (2020) Latent space manipulation for high-resolution medical image synthesis via the stylegan. Zeitschrift für Medizinische Physik. External Links: ISSN 0939-3889, Document, Link Cited by: §II-D.
  • [6] M. Frid-Adar, I. Diamant, E. Klang, M. Amitai, J. Goldberger, and H. Greenspan (2018-03) GAN-based Synthetic Medical Image Augmentation for increased CNN Performance in Liver Lesion Classification. arXiv e-prints, pp. arXiv:1803.01229. External Links: 1803.01229 Cited by: §I, §II-C.
  • [7] L. A. Gatys, A. S. Ecker, and M. Bethge (2015-08) A Neural Algorithm of Artistic Style. arXiv e-prints, pp. arXiv:1508.06576. External Links: 1508.06576 Cited by: §II-B.
  • [8] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in Neural Information Processing Systems 27, pp. 2672–2680. Cited by: §I, §II-A, §III-C.
  • [9] J. T. Guibas, T. S. Virdi, and P. S. Li (2017-09) Synthetic Medical Images from Dual Generative Adversarial Networks. arXiv e-prints, pp. arXiv:1709.01872. External Links: 1709.01872 Cited by: §II-C.
  • [10] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. Courville (2017-03) Improved Training of Wasserstein GANs. arXiv e-prints, pp. arXiv:1704.00028. External Links: 1704.00028 Cited by: §II-A.
  • [11] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter (2017-06) GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium. arXiv e-prints, pp. arXiv:1706.08500. External Links: 1706.08500 Cited by: §IV-B.
  • [12] Y. Hiasa, Y. Otake, M. Takao, T. Matsuoka, K. Takashima, J. Prince, N. Sugano, and Y. Sato (2018-03) Cross-modality image synthesis from unpaired data using cyclegan: effects of gradient consistency loss and training data size. pp. . Cited by: §II-C.
  • [13] X. Huang and S. Belongie (2017-03) Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization. arXiv e-prints, pp. arXiv:1703.06868. External Links: 1703.06868 Cited by: §II-B.
  • [14] P. Isola, J. Zhu, T. Zhou, and A. A. Efros (2016-11) Image-to-Image Translation with Conditional Adversarial Networks. arXiv e-prints, pp. arXiv:1611.07004. External Links: 1611.07004 Cited by: §II-A.
  • [15] J. Jiang, Y. C. Hu, N. Tyagi, P. Zhang, A. Rimner, G. S. Mageras, J. O. Deasy, and H. Veeraraghavan (2018-09) Tumor-aware, Adversarial Domain Adaptation from CT to MRI for Lung Cancer Segmentation. Med Image Comput Comput Assist Interv 11071, pp. 777–785. Cited by: §II-C.
  • [16] J. Johnson, A. Alahi, and L. Fei-Fei (2016-03) Perceptual Losses for Real-Time Style Transfer and Super-Resolution. arXiv e-prints, pp. arXiv:1603.08155. External Links: 1603.08155 Cited by: §II-A.
  • [17] T. Karras, T. Aila, S. Laine, and J. Lehtinen (2017-10) Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints, pp. arXiv:1710.10196. External Links: 1710.10196 Cited by: §II-A, §II-B.
  • [18] T. Karras, S. Laine, and T. Aila (2018-12) A Style-Based Generator Architecture for Generative Adversarial Networks. arXiv e-prints, pp. arXiv:1812.04948. External Links: 1812.04948 Cited by: §I, §II-B, §IV-A.
  • [19] T. Karras, S. Laine, M. Aittala, J. Hellsten, J. Lehtinen, and T. Aila (2019-12) Analyzing and Improving the Image Quality of StyleGAN. arXiv e-prints, pp. arXiv:1912.04958. External Links: 1912.04958 Cited by: §I, §II-B, §III-B.
  • [20] D. P. Kingma and J. Ba (2014-12) Adam: A Method for Stochastic Optimization. arXiv e-prints, pp. arXiv:1412.6980. External Links: 1412.6980 Cited by: §III-C.
  • [21] Y. LeCun and C. Cortes (2010) MNIST handwritten digit database. Note: http://yann.lecun.com/exdb/mnist/ External Links: Link Cited by: §VI-A.
  • [22] F. Liu (2019-05) SUSAN: segment unannotated image structure using adversarial network. Magn Reson Med 81 (5), pp. 3330–3345. Cited by: §II-C.
  • [23] M. Liu, T. Breuel, and J. Kautz (2017-03) Unsupervised Image-to-Image Translation Networks. arXiv e-prints, pp. arXiv:1703.00848. External Links: 1703.00848 Cited by: §II-C.
  • [24] X. Mao, Q. Li, H. Xie, R. Y. K. Lau, Z. Wang, and S. P. Smolley (2016-11) Least Squares Generative Adversarial Networks. arXiv e-prints, pp. arXiv:1611.04076. External Links: 1611.04076 Cited by: §II-A.
  • [25] M. Maspero, M. H. F. Savenije, A. M. Dinkla, P. R. Seevinck, M. P. W. Intven, I. M. Jurgenliemk-Schulz, L. G. W. Kerkmeijer, and C. A. T. van den Berg (2018-09) Dose evaluation of fast synthetic-CT generation using a generative adversarial network for general pelvis MR-only radiotherapy. Phys Med Biol 63 (18), pp. 185001. Cited by: §II-C.
  • [26] L. McInnes, J. Healy, and J. Melville (2018-02) UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction. arXiv e-prints, pp. arXiv:1802.03426. External Links: 1802.03426 Cited by: §III-D, §VI-A.
  • [27] L. Mescheder, A. Geiger, and S. Nowozin (2018-01) Which Training Methods for GANs do actually Converge?. arXiv e-prints, pp. arXiv:1801.04406. External Links: 1801.04406 Cited by: §III-C.
  • [28] M. Mirza and S. Osindero (2014-11) Conditional Generative Adversarial Nets. arXiv e-prints, pp. arXiv:1411.1784. External Links: 1411.1784 Cited by: §I, §II-A.
  • [29] D. Nie, R. Trullo, C. Petitjean, S. Ruan, and D. Shen (2016-12) Medical Image Synthesis with Context-Aware Generative Adversarial Networks. arXiv e-prints, pp. arXiv:1612.05362. External Links: 1612.05362 Cited by: §II-C.
  • [30] T. M. Quan, T. Nguyen-Duc, and W. Jeong (2017-09) Compressed Sensing MRI Reconstruction using a Generative Adversarial Network with a Cyclic Loss. arXiv e-prints, pp. arXiv:1709.00753. External Links: 1709.00753 Cited by: §II-C.
  • [31] E. Richardson, Y. Alaluf, O. Patashnik, Y. Nitzan, Y. Azar, S. Shapiro, and D. Cohen-Or (2020-08) Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation. arXiv e-prints, pp. arXiv:2008.00951. External Links: 2008.00951 Cited by: §II-B.
  • [32] V. Sandfort, K. Yan, P. J. Pickhardt, and R. M. Summers (2019-11-15) Data augmentation using generative adversarial networks (cyclegan) to improve generalizability in ct segmentation tasks. Scientific Reports 9 (1), pp. 16884. External Links: ISSN 2045-2322, Document, Link Cited by: §I, §II-C.
  • [33] H. Shin, N. A. Tenenholtz, J. K. Rogers, C. G. Schwarz, M. L. Senjem, J. L. Gunter, K. P. Andriole, and M. Michalski (2018) Medical image synthesis for data augmentation and anonymization using generative adversarial networks. In Simulation and Synthesis in Medical Imaging, Cham, pp. 1–11. External Links: ISBN 978-3-030-00536-8 Cited by: §II-C.
  • [34] O. Shitrit and T. Riklin Raviv (2017) Accelerated magnetic resonance imaging by adversarial neural network. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, Cham, pp. 30–38. External Links: ISBN 978-3-319-67558-9 Cited by: §II-C.
  • [35] P. Welander, S. Karlsson, and A. Eklund (2018-06) Generative Adversarial Networks for Image-to-Image Translation on Multi-Contrast MR Images - A Comparison of CycleGAN and UNIT. arXiv e-prints, pp. arXiv:1806.07777. External Links: 1806.07777 Cited by: §II-C.
  • [36] J. M. Wolterink, T. Leiner, M. A. Viergever, and I. Išgum (2017) Generative adversarial networks for noise reduction in low-dose ct. IEEE Transactions on Medical Imaging 36 (12), pp. 2536–2545. Cited by: §II-C.
  • [37] D. Yang, S. Hong, Y. Jang, T. Zhao, and H. Lee (2019-01) Diversity-Sensitive Conditional Generative Adversarial Networks. arXiv e-prints, pp. arXiv:1901.09024. External Links: 1901.09024 Cited by: §III-C.
  • [38] H. Yang, J. Sun, A. Carass, C. Zhao, J. Lee, Z. Xu, and J. Prince (2018) Unpaired brain mr-to-ct synthesis using a structure-constrained cyclegan. In DLMIA/ML-CDS@MICCAI, Cited by: §II-C.
  • [39] H. Yang, J. Sun, A. Carass, C. Zhao, J. Lee, Z. Xu, and J. Prince (2018-09) Unpaired Brain MR-to-CT Synthesis using a Structure-Constrained CycleGAN. arXiv e-prints, pp. arXiv:1809.04536. External Links: 1809.04536 Cited by: §II-C.
  • [40] Q. Yang, P. Yan, Y. Zhang, H. Yu, Y. Shi, X. Mou, M. K. Kalra, Y. Zhang, L. Sun, and G. Wang (2018-06) Low-Dose CT Image Denoising Using a Generative Adversarial Network With Wasserstein Distance and Perceptual Loss. IEEE Trans Med Imaging 37 (6), pp. 1348–1357. Cited by: §II-C.
  • [41] X. Yi, E. Walia, and P. Babyn (2019) Generative adversarial network in medical imaging: a review. Medical Image Analysis 58, pp. 101552. External Links: ISSN 1361-8415, Document, Link Cited by: §II-C.
  • [42] J. Zhu, T. Park, P. Isola, and A. A. Efros (2017-03) Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks. arXiv e-prints, pp. arXiv:1703.10593. External Links: 1703.10593 Cited by: §II-C.

VI Supplementary Experiments

This material presents supplementary experiments to further explore the properties and potential applications of MDGAN.

VI-A Diverse MNIST Digit Generation from Labels

This section presents a supplementary experiment using MDGAN to generate hand-written digits [21] from class labels. The goal is to test MDGAN for more general conditional GAN applications, where the input can be an abstract vectorised format with no spatial information (in this case a one-hot encoded label vector). We show that the fundamental properties of manifold disentanglement discussed in the main paper still hold, and that they can be exploited to meaningfully manipulate and diversify the output.

Fig. 9: Modified MDGAN architecture. The conditional encoder is now a densely connected network. The discriminator is a CNN for multi-class realness assessment.

A heavily shrunk-down version of MDGAN with minor modifications was used for this task. The modified architecture is shown in Figure 9. The conditional encoder now takes one-hot encoded labels (10-d) as the input and is a 3-layer densely connected network with 64 hidden units. Its output is linearly projected and reshaped into an image-representation format. Two up-sampling blocks (as in the main paper) with 64 and 32 filters are used to recover the original resolution. A single manifold cluster network provides style codes to manipulate the output without modifying the domain-invariant information (the class label in this case). Finally, the discriminator is now multi-class and outputs a 10-d realness score (one for each class label, with all zeros representing fake). Like the generator, it is also shrunk down to include only two down-sampling blocks of 32 and 64 filters. The same loss functions were used without modification except the adversarial loss, which is now applied to every output class.
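A minimal sketch of this modified conditional encoder, assuming the stated 3-layer, 64-unit design; the projected spatial size before up-sampling (7x7, matching two 2x up-sampling blocks for 28x28 MNIST) and the activation are assumptions.

```python
import torch
import torch.nn as nn

class LabelEncoder(nn.Module):
    """Densely connected conditional encoder for the MNIST experiment:
    maps a 10-d one-hot label to a small spatial feature map that the
    style-based synthesiser then up-samples (7x7 base size assumed)."""
    def __init__(self, n_classes: int = 10, hidden: int = 64, base: int = 7, channels: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_classes, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, hidden), nn.LeakyReLU(0.2),
        )
        self.project = nn.Linear(hidden, channels * base * base)
        self.base, self.channels = base, channels

    def forward(self, onehot: torch.Tensor) -> torch.Tensor:
        h = self.project(self.mlp(onehot))
        return h.view(-1, self.channels, self.base, self.base)
```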

Fig. 10: Images generated (for all 10 digits) along the path of traversal.
Fig. 11: 2-D visualisation of the manifold formation. The points along the chosen path of traversal are highlighted in red.

Similar to Section IV of the main paper, we use UMAP [26] (with the same parameters) to perform dimensionality reduction on 10,000 sampled style codes. The manifold formation is visualised in 2D in Figure 11. A path (a series of points highlighted in red) is traversed to observe the transition of the generated images, and the generated images (for all 10 digits) are shown in Figure 10. Consistent with the findings in the main paper, the learnt manifold of MDGAN appears to be smooth, as the generated images undergo a smooth transition along the chosen path. The transition also appears to be systematic: for example, the strokes appear thinner at the start for all the digits and become “fuzzier" towards the end. However, some of these trends can only be easily interpreted by the generator. Because the disentanglement of the manifold is an important property enforced by MDGAN, the input vector can be “stylised" into diverse outputs of its class, but the validity and domain-invariant features (in this case the respective class label) are never violated. This shows MDGAN can generalise beyond image-to-image translation tasks and can learn to construct manifold clusters based on other forms of conditional inputs.

VI-B Diverse MNIST Digit Generation from Separate Manifold Clusters

This section presents another supplementary experiment using MDGAN to generate hand-written digits from labels. The goal is to provide a further in-depth analysis of the localisation of spatial features (from the input) and domain-specific (manifold) features.

Instead of starting with one-hot encoded labels, a noisy image is used as the seed for the conditional encoder. We use the labels (0 to 9) to index one of 10 separately learnt manifold cluster networks, which determines the output class. The rest of the architecture and the loss functions remain unchanged from Section VI-A.

5,000 style codes are sampled from each of the 10 manifold cluster networks, and they are mapped to 2D (Figure 12) using UMAP. As shown, the manifold clusters are distinctly separated. To investigate the localisation of domain-invariant features, 16 seeds and 10 style codes (one from each class) are sampled to generate outputs, and the results are shown in Figure 13. It can be seen that while the style codes ultimately control the output class, the seeds determine more subtle domain-invariant properties such as “font italicisation"; for example, all the digits generated from a given seed may be slanted to the right. As shown in Figure 14, for a given seed, different style codes sampled from a single manifold cluster can moderately alter the output while keeping the overall structure unchanged. This is because the structural information is prescribed by the seed, which is a dense spatial input (less abstract than a one-hot encoded label) containing domain-invariant features. The style codes are thus localised as minor deviations from the main structure, with relatively less “freedom".

Fig. 12: 2-D visualisation of the manifold formation. The manifold clusters modelled are shown as 10 labelled clusters.
Fig. 13: Images generated using 16 seeds and randomly-sampled style codes (one from each class). The seeds appear to determine properties such as “font italicisation".
Fig. 14: Images generated using a given seed and 100 style codes from a single manifold cluster. All the images differ from each other, but the overall structure remains unchanged.