Normalization of breast MRIs using Cycle-Consistent Generative Adversarial Networks

12/16/2019
by   Gourav Modanwal, et al.

Dynamic Contrast Enhanced-Magnetic Resonance Imaging (DCE-MRI) is widely used to complement ultrasound examinations and x-ray mammography during the early detection and diagnosis of breast cancer. However, images generated by various MRI scanners (e.g. GE Healthcare vs Siemens) differ both in intensity and noise distribution, preventing algorithms trained on MRIs from one scanner from generalizing successfully to data from other scanners. We propose a method for image normalization to solve this problem. MRI normalization is challenging because it requires both normalizing intensity values and mapping between the noise distributions of different scanners. We utilize a cycle-consistent generative adversarial network to learn a bidirectional mapping between MRIs produced by GE Healthcare and Siemens scanners. This allows us to learn the mapping between two different scanner types without matched data, which are not commonly available. To ensure the preservation of breast shape and structures within the breast, we propose two technical innovations. First, we incorporate a mutual information loss with the CycleGAN architecture to ensure that the structure of the breast is maintained. Second, we propose a modified discriminator architecture which utilizes a smaller field-of-view to ensure the preservation of finer details in the breast tissue. Quantitative and qualitative evaluations show that the second proposed method was able to consistently preserve a high level of detail in the breast structure while also performing the proper intensity normalization and noise mapping. Our results demonstrate that the proposed model can successfully learn a bidirectional mapping between MRIs produced by different vendors, potentially enabling improved accuracy of downstream computational algorithms for diagnosis and detection of breast cancer.


I Introduction

Breast cancer is one of the leading causes of death among women around the globe [1]. Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is widely used to complement mammography and ultrasound when evaluating breast cancer, particularly when assessing the extent of cancer before surgery. In some high-risk cases, it is also used for screening.

A significant challenge related to the use of DCE-MRI is the lack of standardized imaging protocols [2]. Different MRI scanners use different parameters, which previous research [3] has shown to drastically alter image appearance, quality, and even radiological interpretation. When the same patient is imaged using a different scanner, or even the same scanner with different parameters, the produced MR images may vary significantly [4]. Inconsistencies in the radio-frequency (RF) coil produce intensity variations in the underlying tissue across the scanned image [5]. Additionally, varying scanner parameters alters the noise distribution of the images. The difference in intensity and noise distribution between images obtained from two different MRI scanner manufacturers (GE Healthcare and Siemens) is illustrated in Fig. 1.

Fig. 1: Example of images from two modalities displaying differences in intensity and noise distribution (a) GE Healthcare (b) Siemens.

The high degree of inter-scanner variation proves to be a large obstacle to the effective use of DCE-MRI. In the context of radiomics, where a multitude of features are extracted from images for further processing, features from different modalities may turn out to be incomparable and therefore useless for classification and prediction. The impact of scanner parameters on breast MRI radiomic features was demonstrated in [3]. Variability in images has also been shown to affect the training of deep learning models [6]: algorithms trained on images from one scanner may not perform well on exams at a different institution that were acquired using a different scanner. Finally, the inconsistency between images from different scanners may affect the outcome of computer-aided diagnosis as well as the interpretation by radiologists. The ability to translate between images acquired by different vendors with different scanner parameters would therefore have tremendous positive consequences. It would enable quantitative comparison of image features across different institutions, and it would also improve generalization, as deep models trained on one dataset could still perform inference on new datasets generated by different scanners.

In order to address this issue, we frame the problem of mapping between images generated by different MRI scanners as an application of unpaired image-to-image translation. Most of the literature in the domain of MRI preprocessing [4, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16] has focused on normalizing intensities but does not account for noise patterns. To our knowledge, no one has yet proposed a method for MRI vendor normalization. This process is challenging because it requires both normalizing the intensity and learning the mapping between the noise patterns. In this work, we present a vendor normalization method that attempts to perform intensity normalization as well as noise distribution mapping between MRIs obtained from different scanners. The major contributions of this work can be summarized as follows:

  • We present a method for MRI vendor normalization that performs unpaired bidirectional translation between DCE-MRIs produced by different scanner models.

  • We investigate the challenges of the standard CycleGAN approach for normalization of medical images, primarily the difficulty in maintaining the breast shape and structures within the breast between the original image and the translated image. Then, we propose and evaluate two technical solutions to this issue, as described below.

    • We propose the incorporation of a mutual information loss with the standard CycleGAN architecture in order to ensure that the structure of the breast and dense tissue is maintained.

    • We propose a modified discriminator capable of preserving the breast shape as well as the dense tissues, and evaluate the effect of changing the field-of-view on performance.

  • We present and compare the performance of the proposed vendor normalization methods using both quantitative and qualitative approaches.

  • We highlight how the proposed work can potentially enable the synthesis of larger and richer data-sets which mitigate issues related to class imbalance.

Fig. 2: CycleGAN network configuration.

The remainder of the paper is organized as follows. Section II describes the related work. Section III presents details about the dataset, and the proposed methods are detailed in Section IV. Information about training is furnished in Section V and Section VI presents the metrics for the evaluation of the proposed method. Section VII reports the experimental results and discussions. Finally, Section VIII concludes with a summary.

II Related Work

Unlike other imaging modalities, MRIs span a wide, non-linear spectrum of raw intensity values. They lack uniformity and often exhibit high variance between subjects. Even within a single subject, intensity variations of 10-40% have been observed [17]. This heterogeneity makes it difficult to effectively train robust medical image analysis algorithms with MRIs.

In response, various statistical approaches have been proposed for MRI intensity normalization. These include histogram equalization [4, 7], intensity scaling based on regions of interest [8] and landmarks [9]. However, histogram-based methods rely on discrete approximations of intensity distributions, leading to high levels of inexactness [10]. Meanwhile, obtaining a high level of accuracy with landmark-based algorithms requires obtaining multiple landmarks from various tissue types in the image. Designing algorithms to perform this landmark selection task is difficult and time-consuming [9].

Another limitation of many MRI normalization methods [11], [12] is that they require auxiliary inputs such as segmentation masks. This adds an intrinsic reliance on the models that perform these preprocessing tasks. Alternatively, one technique [13] attempts to leverage the physics of MR acquisition in order to develop intensity-invariant segmentation algorithms. However, this type of approach requires integrating explicit physics-based embeddings into the segmentation algorithm, thus limiting the system's ability to generalize to other downstream tasks.

Additionally, some of the methods discussed above [4, 8, 9] attempt to perform intensity transformation between two fixed imaging settings. That is, they assume that the intensity relationship of the tissues is constant between the target group and the reference group, which is not always true [15]. If intensity standardization needs to be done for images coming from multiple centers, multiple transformation models need to be established. As a result, these methods cannot process new images that are not from an MR image group already included in their training data. This severely limits their usability.

Recently, GANs have been applied to a variety of problems in the domain of medical imaging. Yi et al. [18] present a review of GANs' recent applications in the medical domain. Most of the previous work [19, 20, 21] has focused on using GANs for multimodal translation which, in turn, improved diagnosis across several modalities (e.g. PET, CT, MRI). Additionally, GANs have been used to generate synthetic images [16] to augment training datasets for algorithms that perform downstream tasks—diagnosis, prognosis, segmentation, and registration. Among GANs, CycleGAN has an intrinsic ambiguity with respect to geometric transformations [22]. It does not account for anatomical changes in the transformed images: the shape of the training data is arbitrary, and the standard discriminator disregards changes in anatomical structures because they do not affect the realness of the image. Multimodal translation and synthesis in medical imaging using CycleGAN should therefore ensure shape consistency, as anatomical structures are crucial in many computer-aided cancer detection tasks.

Recently, CycleGAN was used for normalizing MRIs across different scanners. Gao et al. [15] proposed a universal intensity standardization method for brain MRIs using GANs. Dar et al. [16] also applied CycleGAN to normalization of brain MRI. However, these methods fail to learn the noise distribution pattern—e.g. the noisy halo around the breast. Another work [22] attempted to solve the problem by introducing a segmentor model alongside the CycleGAN networks to ensure shape consistency. However, ground truth is needed to compute the shape-consistency loss, and ground truth for dense tissues is not available in a typical application. Other papers [19, 20, 21] have used CycleGAN for translation across different modalities (e.g. PET, CT and MRI), with some of the same issues present.

In this article, we present a fully unpaired strategy for image translation using CycleGAN. We address the previously unsolved issue of maintaining the structure of the organ. Since our method does not rely on the availability of registered or even paired images, it is applicable to various organs, not only rigid ones where simple registration is possible.

                  GE Healthcare (GE)   Siemens (SE)
Train Set               5045               2776
Test Set                1563                843
Validation Set           173                 93
TABLE I: Details of the training, test, and validation datasets

III Dataset and Pre-processing

In this study, we experimented with DCE-MRI data obtained using GE Healthcare (GE) and Siemens (SE) scanners (1.5 T) in the axial plane. Our database consisted of 124 subjects: 77 were scanned using a GE Healthcare scanner while the remaining 47 were scanned using a Siemens scanner. Each MR volume contains more than 160 2D axial image slices. The top 1% of pixel values in the entire dataset were assigned the value of 255, and the remaining intensities were linearly scaled to the 0-255 pixel range. The data were randomly divided at the patient level: 75% were used to produce the training set; of the remaining images, 10% were kept as a validation set and the remaining 90% were used as a test set. Only slices from the middle 50% of each patient volume were used in our study. Details regarding the number of slices used for training, testing, and validation are given in Table I.
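A minimal sketch of this preprocessing and patient-level split is given below. The function names and the dataset-wide 99th-percentile clipping threshold are our own illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch of the preprocessing and patient-level split
# described above; names are hypothetical.

def normalize_intensities(volume, clip_value):
    """Map the top 1% of pixel values to 255 and linearly scale the rest.

    `clip_value` is assumed to be the 99th percentile of pixel values over
    the entire dataset, e.g. np.percentile(all_pixels, 99).
    """
    return np.minimum(volume, clip_value) * (255.0 / clip_value)

def middle_slices(volume):
    """Keep only the middle 50% of axial slices of a patient volume."""
    n = volume.shape[0]
    return volume[n // 4 : n - n // 4]

def patient_level_split(patient_ids, seed=0):
    """Split patients (not slices): 75% train; of the rest, 10% val, 90% test."""
    rng = np.random.default_rng(seed)
    ids = list(rng.permutation(patient_ids))
    n_train = int(0.75 * len(ids))
    rest = ids[n_train:]
    n_val = int(0.10 * len(rest))
    return ids[:n_train], rest[:n_val], rest[n_val:]
```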

IV Methods

In this section, we present various frameworks to perform cross-modal translation between MRI images acquired by GE Healthcare (GE) and Siemens (SE) scanners.

IV-A CycleGAN

We utilize CycleGAN [23]—a bidirectional image-to-image translation method—for the transformation between the GE and SE MRIs. It consists of two generators ($G$ and $F$) and two discriminators ($D_X$ and $D_Y$). Each generator has a corresponding discriminator, and they are trained in an adversarial setting in which the two networks compete against each other to fool their counterparts. Fig. 2 illustrates the CycleGAN network configuration, where $x$ and $y$ are training samples from the domains $X$ (GE) and $Y$ (SE), respectively. Generator $G$ maps from $X$ to $Y$ while $F$ maps from $Y$ to $X$.

The discriminator $D_Y$ distinguishes between images generated by $G$ and real target images $y$, while generator $G$ tries to improve the quality of the transformed image so that it can fool $D_Y$. Similarly, $D_X$ discriminates between images generated by $F$ and real target images $x$, while $F$ tries to transform $y$ effectively enough to fool $D_X$. The above task is formulated as a min-max optimization problem.

IV-A1 Network Architectures

The architecture for the generators is adapted from Johnson et al. [24]. The generator consists of an encoder, a transformer, and a decoder. The encoder uses convolutional down-sampling to shrink the size of the input representation and increase the number of channels. It is followed by a transformation block, which retains the size of the representation using residual convolution blocks. Finally, a decoder block upsamples the representation back to the input resolution using deconvolutions.
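As a concrete illustration, the PyTorch sketch below shows a generator of this encoder-transformer-decoder type. The channel widths, the nine residual blocks, and the Tanh output (which assumes images scaled to [-1, 1]) follow common CycleGAN defaults and are assumptions, not the authors' exact configuration.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Residual block used in the transformer stage (Johnson et al. style)."""
    def __init__(self, ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.ReflectionPad2d(1), nn.Conv2d(ch, ch, 3),
            nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),
            nn.ReflectionPad2d(1), nn.Conv2d(ch, ch, 3),
            nn.InstanceNorm2d(ch),
        )

    def forward(self, x):
        return x + self.block(x)

class ResnetGenerator(nn.Module):
    """Encoder -> residual transformer -> decoder, for 1-channel MRI slices."""
    def __init__(self, channels=1, base=64, n_blocks=9):
        super().__init__()
        layers = [  # encoder: shrink spatial size, widen channels
            nn.ReflectionPad2d(3),
            nn.Conv2d(channels, base, 7), nn.InstanceNorm2d(base), nn.ReLU(True),
            nn.Conv2d(base, base * 2, 3, stride=2, padding=1),
            nn.InstanceNorm2d(base * 2), nn.ReLU(True),
            nn.Conv2d(base * 2, base * 4, 3, stride=2, padding=1),
            nn.InstanceNorm2d(base * 4), nn.ReLU(True),
        ]
        layers += [ResidualBlock(base * 4) for _ in range(n_blocks)]  # transformer
        layers += [  # decoder: upsample back to the input resolution
            nn.ConvTranspose2d(base * 4, base * 2, 3, stride=2,
                               padding=1, output_padding=1),
            nn.InstanceNorm2d(base * 2), nn.ReLU(True),
            nn.ConvTranspose2d(base * 2, base, 3, stride=2,
                               padding=1, output_padding=1),
            nn.InstanceNorm2d(base), nn.ReLU(True),
            nn.ReflectionPad2d(3), nn.Conv2d(base, channels, 7), nn.Tanh(),
        ]
        self.model = nn.Sequential(*layers)

    def forward(self, x):
        return self.model(x)
```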

The discriminator network uses a classical PatchGAN [25]. It is a fully convolutional neural network that processes overlapping patches of the input image instead of the entire input image. The output of the discriminator is a matrix of binary classifications indicating whether each patch is real or fake. A standard PatchGAN has a field of view (FOV), or patch size, of $70 \times 70$ pixels. We also experimented with numerous other discriminator architectures with varying FOV. Those results are detailed in Section VII.
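For reference, a standard 70×70 PatchGAN discriminator can be sketched as follows, matching the layer structure of Table IV in the appendix; the 4×4 kernels are the usual PatchGAN convention of Isola et al. [25] and an assumption here.

```python
import torch.nn as nn

# Sketch of a standard 70x70 PatchGAN discriminator: a fully convolutional
# network whose output is a map of per-patch real/fake scores.

def patchgan_70(in_channels=1, base=64):
    def block(cin, cout, stride, norm=True):
        layers = [nn.Conv2d(cin, cout, kernel_size=4, stride=stride, padding=1)]
        if norm:
            layers.append(nn.InstanceNorm2d(cout))
        layers.append(nn.LeakyReLU(0.2, inplace=True))
        return layers

    return nn.Sequential(
        *block(in_channels, base, 2, norm=False),  # InstanceNorm skipped on layer 1
        *block(base, base * 2, 2),
        *block(base * 2, base * 4, 2),
        *block(base * 4, base * 8, 1),
        # Final conv produces the one-channel patch score map.
        nn.Conv2d(base * 8, 1, kernel_size=4, stride=1, padding=1),
    )
```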

IV-A2 Losses

The objective function contains two loss terms: the adversarial loss $\mathcal{L}_{GAN}$ and the cycle-consistency loss $\mathcal{L}_{cyc}$. The adversarial loss [26] ensures that the generated images belong to the data distribution of the target domain. It is formulated as below:

$$\mathcal{L}_{GAN}(G, D_Y, X, Y) = \mathbb{E}_{y \sim p_{data}(y)}[\log D_Y(y)] + \mathbb{E}_{x \sim p_{data}(x)}[\log(1 - D_Y(G(x)))] \quad (1)$$

$$\mathcal{L}_{GAN}(F, D_X, Y, X) = \mathbb{E}_{x \sim p_{data}(x)}[\log D_X(x)] + \mathbb{E}_{y \sim p_{data}(y)}[\log(1 - D_X(F(y)))] \quad (2)$$

The generators try to minimize the above adversarial loss while the discriminators try to maximize it. However, the adversarial loss alone is not sufficient to produce good target images: it enforces that the transformed output belongs to the appropriate domain, but not that the input and output are recognizably the same. Thus an additional cycle-consistency loss is added to the overall objective. The cycle-consistency loss ensures that the translated image looks like the input image by enforcing $G$ and $F$ to be inverses of each other, i.e. $F(G(x)) \approx x$ and $G(F(y)) \approx y$:

$$\mathcal{L}_{cyc}(G, F) = \mathbb{E}_{x \sim p_{data}(x)}\left[\|F(G(x)) - x\|_1\right] + \mathbb{E}_{y \sim p_{data}(y)}\left[\|G(F(y)) - y\|_1\right] \quad (3)$$

The overall objective is given below, where $\lambda$ is the weighting factor for the cycle-consistency loss:

$$\mathcal{L}(G, F, D_X, D_Y) = \mathcal{L}_{GAN}(G, D_Y, X, Y) + \mathcal{L}_{GAN}(F, D_X, Y, X) + \lambda \, \mathcal{L}_{cyc}(G, F) \quad (4)$$
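A compact PyTorch sketch of this objective is given below. Note that, as described in Section V, the adversarial terms are implemented with mean squared error (the least-squares GAN of Mao et al. [32]) rather than the log-likelihood form of eqs. (1)-(2); the weight λ = 10 is the common CycleGAN default and, like the function names, an assumption rather than the authors' code.

```python
import torch
import torch.nn as nn

# Sketch of the CycleGAN objective, eqs. (1)-(4). G: X -> Y, F: Y -> X;
# D_X and D_Y are the corresponding discriminators.

adv = nn.MSELoss()  # least-squares adversarial criterion [32]
l1 = nn.L1Loss()    # cycle-consistency criterion

def generator_loss(G, F, D_X, D_Y, x, y, lam=10.0):
    fake_y, fake_x = G(x), F(y)
    pred_y, pred_x = D_Y(fake_y), D_X(fake_x)
    # Adversarial terms: generators try to make discriminators output "real".
    loss_adv = (adv(pred_y, torch.ones_like(pred_y))
                + adv(pred_x, torch.ones_like(pred_x)))
    # Cycle-consistency terms, eq. (3): F(G(x)) ~ x and G(F(y)) ~ y.
    loss_cyc = l1(F(fake_y), x) + l1(G(fake_x), y)
    return loss_adv + lam * loss_cyc

def discriminator_loss(D, real, fake):
    # Real patches are pushed toward 1, generated patches toward 0.
    pred_real, pred_fake = D(real), D(fake.detach())
    return 0.5 * (adv(pred_real, torch.ones_like(pred_real))
                  + adv(pred_fake, torch.zeros_like(pred_fake)))
```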

IV-B CycleGAN with Mutual Information

The standard CycleGAN architecture detailed above, when used for translation between GE and SE breast MRIs, may fail to preserve the breast shape and tissue characteristics. To address this, we propose to maximize mutual information between the real images and the generated images, as shown in Fig. 3. Our rationale is that while the intensity and texture of the image may change, high mutual information indicates that the shape of the breast and the structure of the dense tissue remain the same, which is desired in our application.

In practice, estimating mutual information between images is challenging, as we only have access to samples rather than the underlying distributions [27, 28]. Additionally, previous sample-based estimators are brittle and do not scale well to higher dimensions [29]. Recently, Mutual Information Neural Estimation (MINE) [30] was introduced to approximate mutual information from observed samples even when the true distribution is unknown, and it scales to higher dimensions. Hence, we adopted this method to estimate and maximize mutual information. To the best of our knowledge, the proposed model is the first of its kind to utilize mutual information alongside adversarial and cycle-consistency losses.

Fig. 3: An illustration of the proposed mutual information loss.

The mutual information is equivalent to the Kullback-Leibler (KL) divergence between the joint distribution $\mathbb{P}_{XZ}$ and the product of the marginal distributions $\mathbb{P}_X$ and $\mathbb{P}_Z$, as expressed below:

$$I(X; Z) = D_{KL}\left(\mathbb{P}_{XZ} \,\|\, \mathbb{P}_X \otimes \mathbb{P}_Z\right) \quad (5)$$

where $D_{KL}$ is defined as

$$D_{KL}(\mathbb{P} \,\|\, \mathbb{Q}) = \mathbb{E}_{\mathbb{P}}\left[\log \frac{d\mathbb{P}}{d\mathbb{Q}}\right] \quad (6)$$

MINE uses the Donsker–Varadhan (DV) representation [31] of the KL divergence, which leads to the following definition of approximate mutual information:

$$I_{\Theta}(X; Z) = \sup_{\theta \in \Theta} \; \mathbb{E}_{\mathbb{P}_{XZ}}[T_{\theta}] - \log\left(\mathbb{E}_{\mathbb{P}_X \otimes \mathbb{P}_Z}\left[e^{T_{\theta}}\right]\right) \quad (7)$$

The approximate mutual information is obtained by maximizing the lower bound in eq. 7. The maximization is carried out by a neural network with parameters $\theta$, optimized using gradient descent to characterize a family of functions $T_{\theta}$ that ultimately maximizes the lower bound of the above objective.

To enforce and preserve breast shape and tissue characteristics, we propose to include a mutual information loss in the overall objective as specified below:

$$\mathcal{L}_{total} = \mathcal{L}(G, F, D_X, D_Y) + \beta \, \mathcal{L}_{MI} \quad (8)$$

where $\mathcal{L}(G, F, D_X, D_Y)$ is the CycleGAN objective of eq. (4), $\mathcal{L}_{MI}$ is the negative of the estimated mutual information between each input image and its translation, and $\beta$ is the weight factor for the mutual information loss.
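The sketch below illustrates how a MINE-style estimator can be wired in: a statistics network $T_{\theta}$ scores image pairs, and the DV bound of eq. (7) is estimated over a batch by shuffling one side to simulate the product of marginals. The architecture of $T_{\theta}$ here is an illustrative assumption, not the authors' exact design.

```python
import math
import torch
import torch.nn as nn

# Sketch of a MINE-style estimator [30] for the loss in eq. (8).

class StatisticsNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )

    def forward(self, x, z):
        # T_theta(x, z): stack the two single-channel images as channels.
        return self.net(torch.cat([x, z], dim=1))

def mine_lower_bound(T, x, z):
    """Donsker-Varadhan lower bound on I(X; Z), eq. (7), over a batch."""
    joint = T(x, z).mean()                       # E_{P_XZ}[T_theta]
    z_shuffled = z[torch.randperm(z.size(0))]    # simulates P_X x P_Z
    marginal = (torch.logsumexp(T(x, z_shuffled), dim=0)
                - math.log(z.size(0)))           # log E[e^{T_theta}]
    return joint - marginal

# T_theta ascends this bound by gradient steps on its negative, while the
# generator objective subtracts beta times the bound to maximize MI (eq. 8).
```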

IV-C CycleGAN with modified discriminator

We experimented with various fields of view (FOV)—the amount of input that affects a single pixel of the output map. Based on the results, we propose a modification to the discriminator architecture that prioritizes the morphological features of the breast tissue during training. The modified discriminator architecture is shown in Fig. 4. The proposed discriminator classifies smaller patches ($34 \times 34$) of the image as real/fake instead of the $70 \times 70$ patches suggested in Isola et al. [25]. A smaller patch size encourages the transformation learned by the generator to maintain sharp, high-frequency detail, which is required in order to adequately preserve both the overall structure of the breast and the structure of the dense tissue regions inside the breast. Details about the various architectures with different FOVs are presented in Appendix A; a small helper for computing the FOV is sketched below.
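The FOV of such fully convolutional discriminators follows from the kernel sizes and strides alone. The helper below works backward from one output pixel; the configurations shown reproduce the FOVs of Tables IV-VII under the kernel sizes listed there (reconstructed from the stated FOVs: 4×4 throughout, except 5×5 for the 45×45 variant and 1×1 for PixelGAN).

```python
# Compute a fully convolutional discriminator's field of view from its
# kernel sizes and strides, working backward from one output pixel.

def field_of_view(kernels, strides):
    rf = 1
    for k, s in zip(reversed(kernels), reversed(strides)):
        rf = rf * s + (k - s)  # standard receptive-field recurrence
    return rf

print(field_of_view([4, 4, 4, 4, 4], [2, 2, 2, 1, 1]))  # 70  (Table IV)
print(field_of_view([5, 5, 5, 5], [2, 2, 1, 1]))        # 45  (Table V)
print(field_of_view([4, 4, 4, 4], [2, 2, 1, 1]))        # 34  (Table VI)
print(field_of_view([1, 1, 1], [1, 1, 1]))              # 1   (PixelGAN)
```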

Fig. 4: Proposed discriminator architecture.

IV-D CycleGAN with modified discriminator + Mutual Information

We also applied the Mutual Information loss to the modified CycleGAN framework obtained by altering the discriminator architecture.

V Training

We optimize the network using mean squared error (MSE) instead of cross-entropy, as suggested in Mao et al. [32]. As a result, training becomes more stable and higher-quality images are produced. Additionally, to prevent the model from oscillating, the discriminator is fed a history of the 50 most recently generated images rather than solely the most recently generated image. The Adam optimizer was used to train the network with $\beta_1 = 0.5$ and $\beta_2 = 0.999$.
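The 50-image history can be implemented as a small replay buffer, as in the original CycleGAN training procedure; the sketch below is a minimal version with hypothetical naming.

```python
import random

# Minimal sketch of the 50-image history buffer described above: the
# discriminator sees a mixture of current and past generator outputs.

class ImagePool:
    def __init__(self, size=50):
        self.size = size
        self.images = []

    def query(self, image):
        """Return the new image or, half the time, swap it for a stored one."""
        if len(self.images) < self.size:
            self.images.append(image)
            return image
        if random.random() < 0.5:
            idx = random.randrange(self.size)
            old, self.images[idx] = self.images[idx], image
            return old
        return image

# Usage: train the discriminator on pool.query(fake) instead of fake.
```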

Fig. 5: Effect of mutual information loss (a) GE Healthcare to Siemens (b) Siemens to GE Healthcare. The difference between the input and transformed image is shown using a composite image, where magenta shows negative values and green shows positive values.
Fig. 6: Representative results of the proposed image translation (a) GE Healthcare to Siemens (b) Siemens to GE Healthcare.

VI Evaluation Metrics

The quantitative evaluation of the transformed images is difficult in the case of unpaired images [23] as there is no standard/universal metric to evaluate the accuracy of transformed images [33]. Hence, evaluating the quality of synthesized images is an open and challenging problem for which metrics vary depending on the specific needs of the application. Most previously published work relies either on the visual examination of the transformed images by human subjects or some application-specific metrics. Visual evaluation of the transformed image is still the most common and intuitive method for determining the quality of the transformed images.

In this work, the evaluation of our algorithms was done in two ways. First, we performed a combination of quantitative and qualitative analysis to determine the robustness of the transformation. In the quantitative analysis, we manually annotated a breast mask for 20 images from the test set before and after transformation and computed the Dice coefficient between the two masks. A higher Dice coefficient suggests the breast shape was preserved during the transformation, while a lower value suggests distortion of the breast shape. To evaluate the preservation of dense tissue, we performed qualitative analysis through visual observation to determine whether the shape of the dense tissue was preserved.
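For completeness, the Dice coefficient between two binary breast masks can be computed as in the following sketch.

```python
import numpy as np

# Dice coefficient between binary masks annotated before and after
# transformation; higher values indicate better shape preservation.

def dice_coefficient(mask_a, mask_b, eps=1e-8):
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum() + eps)
```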

Second, we evaluated the intensity transformation. Specifically, we manually annotated the dense tissue (10 cases) to estimate the mean intensity value before and after translation. The expectation is that while the mean intensities of dense tissue differ significantly between GE and Siemens before the transformation, they should be similar after the transformation.
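This check reduces to averaging pixel values inside the annotated dense-tissue mask; a minimal sketch:

```python
import numpy as np

# Mean pixel value inside a manually annotated dense-tissue mask,
# compared before and after translation.

def mean_dense_tissue_intensity(image, dense_mask):
    return float(image[dense_mask.astype(bool)].mean())
```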

Fig. 7: Experiment with the field of view (FOV) in the discriminator architecture (a) GE Healthcare to Siemens (b) Siemens to GE Healthcare.

VII Results and Discussion

The results of the proposed vendor normalization using CycleGAN are presented in Fig. 6. Qualitatively, it can be observed that the standard CycleGAN model is unable to preserve the shape of the breast and dense tissue. Our proposed modified discriminator framework performed the best of all explored algorithms. A surprising result visible in Fig. 6 is that the introduction of the mutual information loss was not able to preserve even the shape of the breast. We analyzed this phenomenon further and found that it was caused by the noise pattern in GE images. Specifically, the mutual information neural estimator (MINE) network tries to maximize the mutual information by matching the shape of the breast to the noisy “halo” around the breast, and therefore it increases the size of the breast. Similarly, for the Siemens to GE transformation, it maximizes the mutual information by decreasing the size of the breast. This is illustrated in Fig. 5.

The proposed modified CycleGAN framework was obtained by modifying the discriminator architecture to place more emphasis on features pertaining to breast tissue. We experimented with various FOVs in the discriminator architecture; the effect of the FOV on performance is presented in Fig. 7. It can be observed that the $70 \times 70$ FOV frequently modifies the dense tissues of the breast. It also modifies the shape of the breast, which is apparent from the lower Dice coefficients (Table II). A $1 \times 1$ FOV, i.e. PixelGAN, has no effect on spatial statistics and is thus unable to learn the mapping between the noise distributions of the two domains; additionally, the transformed images look extremely pixelated and exhibit a checkerboard pattern. The performance of a $45 \times 45$ FOV was comparatively better than the $70 \times 70$ FOV in terms of preserving the shape of the breast as well as the dense tissues. However, after visual inspection, it was observed that a $34 \times 34$ FOV preserves the dense tissue better and produces sharper images compared to $45 \times 45$. The Dice coefficients confirmed that the $34 \times 34$ FOV was able to preserve the shape of the breast as well. Additionally, the $34 \times 34$ discriminator has fewer parameters. For all these reasons, $34 \times 34$ was used as the optimal FOV in the proposed discriminator architecture.

        GE to SE              SE to GE
FOV     Mean      Std         Mean      Std
1×1     -         -           -         -
34×34   0.97621   0.00911     0.97944   0.00700
45×45   0.92357   0.01630     0.93098   0.01774
70×70   0.91381   0.05770     0.90209   0.04426
TABLE II: Dice coefficients between breast masks before and after transformation, obtained on validation data

Quantitative results are presented in Table III. It can be observed that during GE to Siemens translation, the Dice coefficient of the breast masks is highest for the CycleGAN framework obtained by modifying the discriminator architecture. It is also apparent from Table III that applying the mutual information loss to the proposed discriminator causes a reduction in the Dice coefficient (from 0.980 to 0.908) due to a decrease in the size of the breast. However, the standard CycleGAN model and its variant with mutual information both have comparable Dice coefficients, which can be explained by both methods' inability to preserve the breast shape. Similar observations can be made for the Siemens to GE translation. This confirms that the modified architecture with a field of view of 34×34 results in superior performance.

                          GE to SE              SE to GE
Models                    Mean      Std         Mean      Std
Std. CycleGAN             0.89130   0.09408     0.90887   0.04709
Std. CycleGAN + MINE      0.89762   0.05103     0.89486   0.03908
Proposed Discrim          0.98005   0.00610     0.98129   0.00487
Proposed Discrim + MINE   0.90815   0.07137     0.89118   0.07063
TABLE III: Quantitative results: Dice coefficients between breast masks before and after transformation
Fig. 8: Mean intensity value distribution of dense tissues in (a) original GE and original Siemens, (b) original GE and transformed Siemens, and (c) original Siemens and transformed GE.

In summary, from a qualitative point of view, the standard CycleGAN with mutual information leads to the worst results (see Fig. 6). This is also reflected in the quantitative results, where it achieves nearly the lowest Dice coefficient. On the other hand, the proposed modified CycleGAN framework, obtained by altering the discriminator architecture, consistently preserves the dense tissue as well as the breast shape. These observations align with the quantitative results presented in Table III.

In order to evaluate the intensity transformation, we manually annotated the dense tissue (10 cases) to estimate its mean intensity value before and after translation. The results are presented in Fig. 8, where Fig. 8(a) illustrates the mean intensity distribution of the dense tissue in GE and Siemens before the transformation. It can be observed from Fig. 8(b) that the mean intensity distribution of the original GE is comparable to that of the transformed Siemens. A similar observation can be made from Fig. 8(c) for the original Siemens and the transformed GE. This demonstrates that the proposed method successfully adjusts the intensity of the image as it pertains to dense tissue. It should also be noted that the proposed method not only performs the intensity adjustment but also learns the mapping of noise, which is a crucial aspect of vendor normalization. The proposed vendor normalization method can thus potentially increase the robustness of downstream models that lack adequate training data from multiple vendors by enabling the synthesis of larger and richer datasets, mitigating issues related to class imbalance.

VIII Conclusions

In this article, we have shown that a fully convolutional neural network can be trained to perform vendor normalization by translating between DCE-MRI images generated from different scanners (GE Healthcare & Siemens). In contrast to previous works, our proposed method not only performs intensity normalization but also learns the noise distribution pattern.

Our evaluation showed that the standard CycleGAN, when applied to this task, matches the desired intensity of images but struggles to recreate the shape of the breast and the shape of the dense tissue. This is caused by the limited constraints on the images generated by the GAN and, in turn, the liberty it takes in freely generating breast images. In response, we proposed two solutions. The first was to incorporate mutual information into the loss function; our rationale was that this would ensure that the structure of the breast is maintained between the input and the output of the generator. This first solution did not solve the problem due to a very specific characteristic of the data: a noise “halo” around the breast. Incorporating mutual information into a CycleGAN is not a trivial task, and we believe that the method of doing so proposed in this paper will be helpful for other similar tasks in medical imaging and beyond. The second solution to the problem of maintaining the structure of the breast is a modification to the discriminator. This solution was highly successful for this task, as verified by our experiments.

Our study had some limitations. One limitation of this work is that it provides the capability of translation using 2D images only; however, while some effort in network design and parameter optimization would be needed, the proposed methods naturally lend themselves to 3D networks. Another limitation is that we tested our algorithm on two vendors only, and the number of patients was limited. While we believe that the dataset used in this study is representative of the real-life problem faced in analyses of breast MRIs, further studies are needed to show that the proposed method generalizes beyond the data presented here. In summary, we proposed a framework for normalization of breast MRIs based on CycleGAN. In response to the challenges of applying this framework, we proposed technical innovations that allowed it to be applied successfully to the task at hand. While the framework was tested using breast MRIs, it naturally lends itself to other medical imaging tasks where no matched data are available.

Appendix A Architectures for the Discriminator

Discriminator architectures with various fields of view are presented in this section. Each model applies a convolution after the last layer to produce a one-channel output map. The InstanceNorm layer was not applied to the first layer of each architecture. The slope for LeakyReLU was 0.2.

Layer         Input Ch.   Output Ch.   Filter Size (k)   Stride (S)   Activation
Convolution       1           64            4×4              2        Leaky ReLU
Convolution      64          128            4×4              2        Leaky ReLU
Convolution     128          256            4×4              2        Leaky ReLU
Convolution     256          512            4×4              1        Leaky ReLU
Convolution     512            1            4×4              1        -
TABLE IV: Discriminator architecture (70×70 FOV)
Layer         Input Ch.   Output Ch.   Filter Size (k)   Stride (S)   Activation
Convolution       1           64            5×5              2        Leaky ReLU
Convolution      64          128            5×5              2        Leaky ReLU
Convolution     128          256            5×5              1        Leaky ReLU
Convolution     256            1            5×5              1        -
TABLE V: Discriminator architecture (45×45 FOV)
Layer         Input Ch.   Output Ch.   Filter Size (k)   Stride (S)   Activation
Convolution       1           64            4×4              2        Leaky ReLU
Convolution      64          128            4×4              2        Leaky ReLU
Convolution     128          256            4×4              1        Leaky ReLU
Convolution     256            1            4×4              1        -
TABLE VI: Discriminator architecture (34×34 FOV)
Layer         Input Ch.   Output Ch.   Filter Size (k)   Stride (S)   Activation
Convolution       1           64            1×1              1        Leaky ReLU
Convolution      64          128            1×1              1        Leaky ReLU
Convolution     128            1            1×1              1        -
TABLE VII: Discriminator architecture (1×1 FOV, PixelGAN)

Acknowledgment

The authors would like to thank Mr. Mateusz Buda for the insightful discussions.

References

  • [1] American Cancer Society, “Breast Cancer Facts & Figures 2019-2020,” American Cancer Society, Inc. 2019.
  • [2] P. B. Sachs, K. Hunt, F. Mansoubi, and J. Borgstede, “CT and MR protocol standardization across a large health system: Providing a consistent radiologist, patient, and referring provider experience,” Journal of digital imaging, vol. 30, no. 1, pp. 11–16, 2017.
  • [3] A. Saha, X. Yu, D. Sahoo, and M. A. Mazurowski, “Effects of MRI scanner parameters on breast cancer radiomics,” Expert systems with applications, vol. 87, pp. 384–391, 2017.
  • [4] L. G. Nyul, J. K. Udupa, and Xuan Zhang, “New variants of a method of MRI scale standardization,” IEEE Transactions on Medical Imaging, vol. 19, no. 2, pp. 143–150, Feb 2000.
  • [5] S. Roy, A. Carass, P.-L. Bazin, and J. L. Prince, “Intensity inhomogeneity correction of magnetic resonance images using patches,” in Medical Imaging 2011: Image Processing, vol. 7962.   International Society for Optics and Photonics, 2011, p. 79621F.
  • [6] E. A. AlBadawy, A. Saha, and M. A. Mazurowski, “Deep learning for segmentation of brain tumors: Impact of cross-institutional training and testing,” Medical physics, vol. 45, no. 3, pp. 1150–1158, 2018.
  • [7] X. Sun, L. Shi, Y. Luo, W. Yang, H. Li, P. Liang, K. Li, V. C. Mok, W. C. Chu, and D. Wang, “Histogram-based normalization technique on human brain magnetic resonance images from different acquisitions,” Biomedical engineering online, vol. 14, no. 1, p. 73, 2015.
  • [8] G. Collewet, M. Strzelecki, and F. Mariette, “Influence of MRI acquisition protocols and image intensity normalization methods on texture classification,” Magnetic resonance imaging, vol. 22, no. 1, pp. 81–91, 2004.
  • [9] A. Madabhushi and J. K. Udupa, “New methods of MR image intensity standardization via generalized scale,” Medical physics, vol. 33, no. 9, pp. 3426–3434, 2006.
  • [10] S. Roy, A. Carass, and J. Prince, “A compressed sensing approach for mr tissue contrast synthesis,” in Biennial International Conference on Information Processing in Medical Imaging.   Springer, 2011, pp. 371–383.
  • [11] R. T. Shinohara, E. M. Sweeney, J. Goldsmith, N. Shiee, F. J. Mateen, P. A. Calabresi, S. Jarso, D. L. Pham, D. S. Reich, C. M. Crainiceanu et al., “Statistical normalization techniques for magnetic resonance imaging,” NeuroImage: Clinical, vol. 6, pp. 9–19, 2014.
  • [12] J. Zhang, A. Saha, B. J. Soher, and M. A. Mazurowski, “Automatic deep learning-based normalization of breast dynamic contrast-enhanced magnetic resonance images,” 2018.
  • [13] B. Fischl, D. H. Salat, A. J. Van Der Kouwe, N. Makris, F. Ségonne, B. T. Quinn, and A. M. Dale, “Sequence-independent segmentation of magnetic resonance images,” Neuroimage, vol. 23, pp. S69–S84, 2004.
  • [14] G. Lemaître, M. Rastgoo, J. Massich, J. C. Vilanova, P. M. Walker, J. Freixenet, A. Meyer-Baese, F. Mériaudeau, and R. Martí, “Normalization of t2w-mri prostate images using rician a priori,” in Medical Imaging 2016: Computer-Aided Diagnosis, vol. 9785.   International Society for Optics and Photonics, 2016, p. 978529.
  • [15] Y. Gao, Y. Liu, Y. Wang, Z. Shi, and J. Yu, “A universal intensity standardization method based on a many-to-one weak-paired cycle generative adversarial network for magnetic resonance images,” IEEE transactions on medical imaging, 2019.
  • [16] S. U. Dar, M. Yurt, L. Karacan, A. Erdem, E. Erdem, and T. Çukur, “Image synthesis in multi-contrast mri with conditional generative adversarial networks,” IEEE transactions on medical imaging, 2019.
  • [17] A. Simkó, T. Löfstedt, A. Garpebring, T. Nyholm, and J. Jonsson, “A generalized network for mri intensity normalization.” 2019.
  • [18] X. Yi, E. Walia, and P. Babyn, “Generative adversarial network in medical imaging: A review,” Medical image analysis, p. 101552, 2019.
  • [19] D. Nie, R. Trullo, J. Lian, L. Wang, C. Petitjean, S. Ruan, Q. Wang, and D. Shen, “Medical image synthesis with deep convolutional adversarial networks,” IEEE Transactions on Biomedical Engineering, vol. 65, no. 12, pp. 2720–2730, 2018.
  • [20] C.-B. Jin, H. Kim, M. Liu, W. Jung, S. Joo, E. Park, Y. S. Ahn, I. H. Han, J. I. Lee, and X. Cui, “Deep CT to MR synthesis using paired and unpaired data,” Sensors, vol. 19, no. 10, p. 2361, 2019.
  • [21] K. Armanious, C. Jiang, S. Abdulatif, T. Küstner, S. Gatidis, and B. Yang, “Unsupervised medical image translation using cycle-medgan,” arXiv preprint arXiv:1903.03374, 2019.
  • [22] Z. Zhang, L. Yang, and Y. Zheng, “Translating and segmenting multimodal medical volumes with cycle-and shape-consistency generative adversarial network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 9242–9251.
  • [23] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in The IEEE International Conference on Computer Vision (ICCV), Oct 2017.
  • [24] J. Johnson, A. Alahi, and F. Li, “Perceptual losses for real-time style transfer and super-resolution,” CoRR, vol. abs/1603.08155, 2016. [Online]. Available: http://arxiv.org/abs/1603.08155
  • [25] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 1125–1134.
  • [26] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in neural information processing systems, 2014, pp. 2672–2680.
  • [27] B. Poole, S. Ozair, A. van den Oord, A. A. Alemi, and G. Tucker, “On variational bounds of mutual information,” in ICML, 2019.
  • [28] D. McAllester and K. Statos, “Formal limitations on the measurement of mutual information,” arXiv preprint arXiv:1811.04251, 2018.
  • [29] A. M. Saxe, Y. Bansal, J. Dapello, M. Advani, A. Kolchinsky, B. D. Tracey, and D. D. Cox, “On the information bottleneck theory of deep learning,” 2018.
  • [30] I. Belghazi, S. Rajeswar, A. Baratin, R. D. Hjelm, and A. C. Courville, “Mine: Mutual information neural estimation,” ArXiv, vol. abs/1801.04062, 2018.
  • [31] M. D. Donsker and S. S. Varadhan, “Asymptotic evaluation of certain markov process expectations for large time. iv,” Communications on Pure and Applied Mathematics, vol. 36, no. 2, pp. 183–212, 1983.
  • [32] X. Mao, Q. Li, H. Xie, R. Y. Lau, Z. Wang, and S. Paul Smolley, “Least squares generative adversarial networks,” in The IEEE International Conference on Computer Vision (ICCV), Oct 2017.
  • [33] A. Borji, “Pros and cons of gan evaluation measures,” Computer Vision and Image Understanding, vol. 179, pp. 41–65, 2019.