Deep convolutional neural networks have previously shown their power in the context of medical imaging tasks [1]; however, the successful training of deep networks significantly relies on large-scale and balanced datasets. Since pathological data represent only a very small portion of all available medical imaging data and many patients decline to share their data due to privacy concerns, it is difficult to obtain enough samples for certain rare medical conditions, even in rather sizeable pathological datasets. As a result, most available training datasets remain unbalanced and small, constraining the overall accuracy of learned medical imaging models. Thus, effective data augmentation strategies are of great interest to the medical imaging community.
Traditional augmentation techniques, such as rotation, flipping, and cropping, are commonly applied to enlarge training sets. Nevertheless, the amount of augmented data is still limited, the augmented data are often highly correlated, and augmented samples can become meaningless for model training if essential information is removed during cropping. In the particular context of this paper, since prostate diffusion imaging data are relatively small in size and low-resolution, typical data augmentation methods, unfortunately, have very little to offer for improving the accuracy of deep learning-driven prostate cancer analysis.
Recently, Generative Adversarial Networks (GANs) [5] and their variants [6, 7] have been used to increase the performance of deep networks for medical imaging tasks, due to their ability to synthesize realistic images [8, 9, 10, 11].
One limitation, however, is that GANs cannot fully make use of the annotated images because they cannot control the class of the generated images. In response, many medical imaging researchers have more recently used Conditional GANs (CGANs) [12], allowing the model to be conditioned on additional information such as class labels or even data from other modalities [13, 14]. However, none of the current strategies have applied GANs to synthesizing diffusion imaging data for prostate cancer analysis; rather, state-of-the-art methods have focused on image translation between magnetic resonance imaging (MRI) and computed tomography (CT), and on the analysis of liver, brain, or lung cancer. In addition, the generative network is typically trained without considering any class information, which is problematic in situations of image scarcity for certain classes.
In this paper, we propose ProstateGAN, a GAN-based model for synthesizing prostate diffusion imaging data, which can be used to mitigate the data bias present in machine learning-driven prostate cancer analysis. ProstateGAN has a generator and a discriminator competing with one another:
The generator accepts random noise with embedded Gleason scores as the class information, and makes use of transposed convolutions to expand the noise samples into synthetic prostate diffusion images. The discriminator evaluates the generator: the synthetic prostate diffusion images, with Gleason scores embedded, are passed into the discriminator, which outputs a score between zero and one indicating the probability of the input image being real (not synthetic).
Essentially, the better the generator, the more likely the discriminator is to be fooled. At the same time, the discriminator is trained to improve its ability to distinguish synthetic from real content by minimizing the loss between its output score and the ground truth. The contributions of this work include the synthesis of high-quality focal prostate diffusion images using generative adversarial networks (GANs) conditioned on corresponding labels. The generated synthetic data can be used to augment and balance training sets for deep networks to improve prostate cancer classification.
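The paper does not specify exactly how the Gleason score is embedded into the generator input; a minimal sketch of one common choice (a one-hot label concatenated with the noise vector, as in the original CGAN formulation) might look as follows, where the dimensions are illustrative rather than taken from the paper:

```python
import random

def one_hot(label, num_classes=10):
    """Encode a Gleason score (0-9) as a one-hot vector."""
    vec = [0.0] * num_classes
    vec[label] = 1.0
    return vec

def joint_input(noise_dim=100, gleason_score=7, num_classes=10):
    """Concatenate uniform noise with the embedded class label,
    forming the joint representation fed to the generator."""
    z = [random.uniform(-1.0, 1.0) for _ in range(noise_dim)]
    y = one_hot(gleason_score, num_classes)
    return z + y  # length = noise_dim + num_classes

x = joint_input()
print(len(x))  # 110
```

Other embeddings (e.g., a learned embedding layer) would serve the same purpose: both networks receive the class label alongside their usual input.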
With the proposed ProstateGAN, we combine the ideas behind DCGAN (deep convolutional GAN) [6] and CGAN (conditional GAN) [12] for the purpose of synthesizing labeled prostate diffusion imaging data, resulting in what is essentially a conditional deep convolutional GAN architecture. In both DCGAN and CGAN, the framework consists of a generative model $G$ that estimates the data distribution from a given image set and a discriminative model $D$ that attempts to differentiate synthetically generated samples from those in the training data. The objective of $G$ is to maximize the probability of $D$ being wrong, i.e., of $D$ being fooled by $G$. The $G$ and $D$ of DCGAN are both deep CNNs without max pooling or fully connected layers, where $G$ uses transposed convolutions for upsampling and $D$ uses strided convolutions for downsampling. CGAN further changes the $G$ and $D$ of DCGAN by adding the label $y$ as an additional parameter, allowing the generator to generate the corresponding images and the discriminator to distinguish real images better.
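This upsampling/downsampling symmetry can be traced with the standard output-size formulas for (transposed) convolutions: the generator grows a noise vector into an image while the discriminator shrinks it back down. The 4x4 kernels and the 32x32 target resolution below are assumptions for illustration, not values stated in the paper:

```python
def conv_out(size, kernel, stride, pad):
    """Spatial output size of a strided convolution (downsampling in D)."""
    return (size + 2 * pad - kernel) // stride + 1

def tconv_out(size, kernel, stride, pad):
    """Spatial output size of a transposed convolution (upsampling in G)."""
    return (size - 1) * stride - 2 * pad + kernel

# Generator: project the noise, then double the resolution at each layer.
g = tconv_out(1, kernel=4, stride=1, pad=0)      # 1x1 -> 4x4
for _ in range(3):
    g = tconv_out(g, kernel=4, stride=2, pad=1)  # 4 -> 8 -> 16 -> 32
print(g)  # 32

# Discriminator: halve the resolution at each layer on the way back down.
d = 32
for _ in range(3):
    d = conv_out(d, kernel=4, stride=2, pad=1)   # 32 -> 16 -> 8 -> 4
print(d)  # 4
```

With these (assumed) settings, each stride-2 transposed convolution exactly doubles the spatial size, and each stride-2 convolution halves it, which is why the two networks mirror one another.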
An overview of the proposed ProstateGAN is shown in Figure 1. On the left is the generator network, taking 100 random noise samples $z$ with distribution $p_z(z)$ (here uniform) and class label $y$, to generate the prostate image $G(z \mid y)$ through a stack of transposed convolutions, each followed by batch normalization and a ReLU activation, except for the last layer, which uses a $\tanh$ activation function instead. The conditional adversarial loss of $G$, i.e., what the generator is trained to minimize, is defined as
$$\mathcal{L}_G = \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z \mid y) \mid y)\big)\big],$$
where the generator is conditioned on the Gleason score $y$, and $z$ and $y$ are combined in a joint representation. The right part of Figure 1 is the discriminator network $D$, which takes the generator output $G(z \mid y)$, together with the Gleason score $y$, to output the likelihood $D(G(z \mid y) \mid y)$ indicating the probability that the input is real, deciding the fidelity of $G(z \mid y)$. The discriminator feeds the generated images
into a series of strided convolutions, with batch normalization and leaky ReLU functions applied to each layer until the last layer, where the activation is replaced with a sigmoid function. The conditional loss for $D$ is defined as
$$\mathcal{L}_D = -\,\mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x \mid y)\big] - \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z \mid y) \mid y)\big)\big],$$
where $x$ is drawn from the true image distribution $p_{\mathrm{data}}(x)$.
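Both losses reduce to simple averages of log-probabilities over a batch. A minimal numeric sketch (with hand-picked values standing in for the discriminator's outputs) shows that the generator loss drops as the discriminator is fooled:

```python
import math

def generator_loss(d_fake):
    """L_G: mean of log(1 - D(G(z|y)|y)); smaller when D(fake) -> 1."""
    return sum(math.log(1.0 - p) for p in d_fake) / len(d_fake)

def discriminator_loss(d_real, d_fake):
    """L_D: -mean log D(x|y) - mean log(1 - D(G(z|y)|y))."""
    real_term = -sum(math.log(p) for p in d_real) / len(d_real)
    fake_term = -sum(math.log(1.0 - p) for p in d_fake) / len(d_fake)
    return real_term + fake_term

# A generator that fools D (high D(fake)) achieves a lower loss
# than one whose samples D confidently rejects:
print(generator_loss([0.9, 0.8]) < generator_loss([0.1, 0.2]))  # True
```

At the theoretical equilibrium, where the discriminator outputs 0.5 everywhere, the discriminator loss equals $2\log 2 \approx 1.39$.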
3.1 Experimental Setup
Our prostate dataset consists of 1490 prostate diffusion images of 104 patients, with corresponding Gleason scores (0 to 9) and PI-RADS scores (2 to 5). The diffusion data were acquired using a Philips Achieva 3.0T machine at a hospital in Toronto, Ontario, with institutional research ethics board approval; the requirement for written informed consent was waived by the hospital's research ethics board. For our goal of generating realistic diffusion imaging data that can be used to train improved prostate cancer classifiers, we used image-class data pairs $(x_i, y_i)$, where $x_i$ is the greyscale prostate image and $y_i$ is the corresponding Gleason score of image $x_i$.
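Before training on such image-class pairs, it is useful to quantify the class imbalance that motivates this work. A sketch of that diagnostic step follows; the labels here are randomly generated stand-ins with a skewed distribution, not the actual annotations of the dataset:

```python
from collections import Counter
import random

random.seed(0)
# Hypothetical Gleason-score labels for 1490 images; the skewed weights
# mimic the imbalance (rare high grades) that ProstateGAN aims to mitigate.
labels = random.choices(range(10), weights=[30, 1, 5, 2, 8, 2, 25, 15, 8, 4], k=1490)

counts = Counter(labels)
print(sorted(counts.items()))
print("rarest class:", min(counts, key=counts.get))
```

Classes with very few samples are exactly those for which synthetic, class-conditioned images are most valuable.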
In our experiments, we set the number of epochs to 100 and the batch size to 64. The noise samples $z$ have size 100 and are uniformly distributed. The images are first augmented by rotating, normalizing, and flipping before training. The learning rate is 0.0002, and $\beta_1$ for the Adam optimizer is 0.5. Weights are initialized with a mean of 0 and a standard deviation of 0.02. The slope of the leaky ReLU is 0.2.
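The training configuration above can be sketched directly; the dictionary key names below are illustrative, and only the numeric values come from this section:

```python
import random
import statistics

random.seed(7)

# Hyperparameters reported in Section 3.1 (key names are illustrative):
config = {
    "epochs": 100,
    "batch_size": 64,
    "noise_dim": 100,
    "learning_rate": 2e-4,
    "adam_beta1": 0.5,
    "leaky_relu_slope": 0.2,
    "weight_init_mean": 0.0,
    "weight_init_std": 0.02,
}

def init_weights(n, mean=config["weight_init_mean"], std=config["weight_init_std"]):
    """Sample n initial weights from N(mean, std^2), DCGAN-style."""
    return [random.gauss(mean, std) for _ in range(n)]

w = init_weights(10_000)
print(round(statistics.mean(w), 4), round(statistics.stdev(w), 4))
```

The sample mean and standard deviation of the drawn weights land near the configured 0 and 0.02, as expected for a large sample.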
3.2 Experimental Results
We trained the ProstateGAN model on our prostate dataset of diffusion imaging data and the corresponding Gleason scores. In epochs near 100, the discriminator loss converges around 1, and the generator loss converges to between 1 and 2. Figure 2 shows the visualized training progress of the ProstateGAN generator, showing the synthetic images generated from $G$ for six different epochs, contrasted with real images in the right-most column, and where within each row the Gleason score is held constant. In addition, since the input noise samples for the generator are sampled before training, for each row the image at different epochs is generated using the same noise sample $z$. It can be observed that, because $G$ uses deconvolutional kernels with a stride of 2, the generator fails to coordinate these kernels in overlapping regions, generating noise artifacts with apparent grid lines. In the first few epochs, these gaps inside the generated images evolve into mosaic patterns and then fade away after dozens of epochs.
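These grid-line artifacts have a simple arithmetic origin: when a transposed convolution's kernel size is not divisible by its stride, different output positions receive contributions from different numbers of kernel taps. A small sketch makes the uneven coverage explicit; the kernel sizes are assumptions for illustration (the text above only states the stride of 2):

```python
def overlap_counts(out_len, kernel, stride):
    """Count how many kernel taps of a transposed convolution contribute
    to each output position; uneven counts cause checkerboard artifacts."""
    counts = [0] * out_len
    j = 0  # input position
    while j * stride < out_len:
        for t in range(kernel):
            pos = j * stride + t
            if pos < out_len:
                counts[pos] += 1
        j += 1
    return counts

# kernel 5, stride 2: interior contributions alternate 3, 2, 3, 2, ...
print(overlap_counts(12, kernel=5, stride=2))
# kernel 4, stride 2: kernel divisible by stride -> uniform interior coverage
print(overlap_counts(12, kernel=4, stride=2))
```

When kernel size is a multiple of the stride, interior coverage is uniform, which is one common mitigation for the mosaic patterns observed in early epochs.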
The qualitative comparison between real and synthetic images is shown in Figure 3. The generated images are, by design, of a fixed size, while the lengths and widths of the real images vary from 10 to 35 pixels. The synthetic images generated by $G$ have similar structure and boundaries to those of real diffusion MR images. Most notably, and the goal of this work, ProstateGAN can generate synthetic images that exhibit characteristics indicative of prostate cancer, precisely the samples needed for classifier training. In particular, the abnormal darkened regions that appear in real diffusion images containing prostate cancer (i.e., Gleason scores of six or higher) can also be found in synthesized diffusion images of similar classification.
In this paper, we presented ProstateGAN, a model for generating realistic synthetic prostate diffusion images using a conditional deep convolutional GAN architecture, and demonstrated its ability to synthesize high-quality prostate diffusion image data by taking Gleason score into consideration during the training process. Results show that ProstateGAN can generate synthetic diffusion images corresponding to positive cancer grades (i.e., Gleason scores of six or higher) that exhibit characteristics indicative of prostate cancer. As such, ProstateGAN can potentially be used to augment and balance training datasets, an important step in mitigating data bias in prostate cancer classification.
This work was supported by the Natural Sciences and Engineering Research Council of Canada and Canada Research Chairs Program. The authors also thank Nvidia for the GPU hardware used in this study through the Nvidia Hardware Grant Program.
- (1) Q. Li, W. Cai, X. Wang, Y. Zhou, D. D. Feng, and M. Chen, “Medical image classification with convolutional neural network,” in 2014 13th International Conference on Control Automation Robotics Vision (ICARCV), 2014, pp. 844–848.
- (2) M. Drozdzal, G. Chartrand, E. Vorontsov, M. Shakeri, L. D. Jorio, A. Tang, A. Romero, Y. Bengio, C. Pal, and S. Kadoury, “Learning normalized inputs for iterative estimation in medical image segmentation,” Medical Image Analysis, vol. 44, pp. 1 – 13, 2018.
- (3) H. R. Roth, L. Lu, J. Liu, J. Yao, A. Seff, K. Cherry, L. Kim, and R. M. Summers, “Improving computer-aided detection using convolutional neural networks and random view aggregation,” IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1170–1181, 2016.
- (4) A. A. A. Setio, F. Ciompi, G. Litjens, P. Gerke, C. Jacobs, S. J. van Riel, M. M. W. Wille, M. Naqibullah, C. I. Sánchez, and B. van Ginneken, “Pulmonary nodule detection in CT images: False positive reduction using multi-view convolutional networks,” IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1160–1169, 2016.
- (5) I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems 27. Curran Associates, Inc., 2014, pp. 2672–2680.
- (6) A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” CoRR, vol. abs/1511.06434, 2015.
- (7) X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel, “InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets,” CoRR, vol. abs/1606.03657, 2016.
- (8) H.-C. Shin, N. A. Tenenholtz, J. K. Rogers, C. G. Schwarz, M. L. Senjem, J. L. Gunter, K. P. Andriole, and M. Michalski, “Medical image synthesis for data augmentation and anonymization using generative adversarial networks,” in Simulation and Synthesis in Medical Imaging, A. Gooya, O. Goksel, I. Oguz, and N. Burgos, Eds., 2018, pp. 1–11.
- (9) A. Beers, J. M. Brown, K. Chang, J. P. Campbell, S. Ostmo, M. F. Chiang, and J. Kalpathy-Cramer, “High-resolution medical image synthesis using progressively grown generative adversarial networks,” CoRR, vol. abs/1805.03144, 2018.
- (10) M. Frid-Adar, I. Diamant, E. Klang, M. Amitai, J. Goldberger, and H. Greenspan, “GAN-based synthetic medical image augmentation for increased CNN performance in liver lesion classification,” CoRR, vol. abs/1803.01229, 2018.
- (11) T. C. W. Mok and A. C. S. Chung, “Learning data augmentation for brain tumor segmentation with coarse-to-fine generative adversarial networks,” CoRR, vol. abs/1805.11291, 2018.
- (12) M. Mirza and S. Osindero, “Conditional generative adversarial nets,” CoRR, vol. abs/1411.1784, 2014.
- (13) J. M. Wolterink, A. M. Dinkla, M. H. F. Savenije, P. R. Seevinck, C. A. T. van den Berg, and I. Išgum, “Deep MR to CT synthesis using unpaired data,” in Simulation and Synthesis in Medical Imaging, S. A. Tsaftaris, A. Gooya, A. F. Frangi, and J. L. Prince, Eds. Cham: Springer International Publishing, 2017, pp. 14–23.
- (14) S. Kohl, D. Bonekamp, H. Schlemmer, K. Yaqubi, M. Hohenfellner, B. Hadaschik, J. Radtke, and K. H. Maier-Hein, “Adversarial networks for the detection of aggressive prostate cancer,” CoRR, vol. abs/1702.08014, 2017.