A Survey on Training Challenges in Generative Adversarial Networks for Biomedical Image Analysis

01/19/2022
by   Muhammad Muneeb Saad, et al.

In biomedical image analysis, the applicability of deep learning methods is directly impacted by the quantity of available image data, because deep learning models require large image datasets to provide high-level performance. Generative Adversarial Networks (GANs) have been widely utilized to address data limitations through the generation of synthetic biomedical images. GANs consist of two models: a generator, which learns to produce synthetic images based on the feedback it receives, and a discriminator, which classifies an image as synthetic or real and provides that feedback to the generator. Throughout the training process, a GAN can experience several technical challenges that impede the generation of suitable synthetic imagery. First, the mode collapse problem, whereby the generator produces either identical images or a uniform image from distinct input features. Second, the non-convergence problem, whereby the gradient descent optimizer fails to reach a Nash equilibrium. Third, the vanishing gradient problem, whereby unstable training behavior occurs because the discriminator achieves optimal classification performance, so no meaningful feedback is provided to the generator. These problems result in synthetic imagery that is blurry, unrealistic, and less diverse. To date, there has been no survey article outlining the impact of these technical challenges in the context of the biomedical imagery domain. This work presents a review and taxonomy based on solutions to the training problems of GANs in the biomedical imaging domain. This survey highlights important challenges and outlines future research directions for the training of GANs in the domain of biomedical imagery.


I Introduction

Generative adversarial networks (GANs) refer to the class of generative models that generate synthetic data by learning a probability distribution [2]. GANs are designed with generator and discriminator models. The generator produces realistic-looking synthetic data from random input vectors, while the discriminator's task is to distinguish real data from generated (synthetic) data. GANs use an objective function as a joint loss function with minimax optimization. The generator aims to produce realistic data that misguides the discriminator into classifying it as real. Conversely, the discriminator aims to classify synthetic data as fake and real data as real. Ideally, the training of a GAN should continue until it reaches a Nash equilibrium, at which point the actions of the generator and discriminator models no longer affect each other's performance.

In healthcare technology, GANs have been widely utilized for several tasks such as pattern analysis of biomedical imagery [3] [4] [5], electronic health records [6], and drug discovery [7]. Recently, GANs have also contributed in the context of Coronavirus disease (COVID-19), e.g., disease detection from chest radiography [8]. In the domain of biomedical imagery, the availability of data is an obstacle to the application of deep learning. Deep learning models are composed of deep neural networks that require large training datasets for reliable predictive analytics [3]. Thus, enlarging biomedical datasets is a challenging problem. Another dilemma in the biomedical imaging domain is class-imbalanced datasets, i.e., datasets with skewed class proportions across multiple disease classes. With class-imbalanced datasets, deep neural networks train better on classes with a large number of images than on classes with a limited number of images [9]. Data augmentation is one of the potential solutions to address the class imbalance and data limitation problems [10].

The utility of GANs in biomedical image analysis has been extensively investigated for image recognition [11], image synthesis [12], image reconstruction [13], and image segmentation [14]. GANs have demonstrated a capacity to support deep learning models through the generation of synthetic images, thus enlarging the size of biomedical datasets [15] [16] [17]. However, GANs suffer from training challenges such as mode collapse, non-convergence, and instability. With these limitations, GANs can generate unrealistic, blurry, and less diverse images. The mode collapse problem occurs when the generator produces similar output images from different input features. In the domain of biomedical imaging, the mode collapse problem of GANs has been addressed by using minibatch discrimination [18], skip connections [19], VAEGAN [20], varying layers of the generator and discriminator [4], spectral normalization [21], perceptual image hashing [22], a Gaussian mixture model as generator [23], a discriminator with a conditional information vector [24], and a self-attention mechanism [25]. The non-convergence problem occurs due to the GAN's inability to reach a Nash equilibrium. This problem has been addressed by using modified training updates of the generator and discriminator [26], the Whale optimization algorithm [27], and the two time-scale update rule [25]. The instability problem of GANs occurs due to the vanishing gradient problem. The Wasserstein loss [18] [19] [20] [28], residual connections [29], and multi-scale generator [30] techniques have been identified to address the instability problem in biomedical imagery.

Several survey articles have identified technical solutions to address the problems of mode collapse, non-convergence, and instability [31] [32] [33] [34]. In the general imaging domain, a few survey articles discuss each problem with solutions based on objective functions and modified GAN architectures while omitting the definition, identification, and quantification methodologies. The quantification methods are discussed as evaluation metrics in two survey articles [35] [36], which cover almost all aspects of each problem. The existing literature discusses these training challenges of GANs in general and does not cover the significant solutions that address them in the domain of biomedical imaging. Only two survey articles [37] [38] cover these challenges, and only their definitions, in the biomedical imaging domain. These articles outline application-based problems of GANs and provide no information about the identification, quantification, and solutions of the training challenges of GANs in the biomedical imaging domain. In this survey article, we describe each training problem of GANs with its definition, identification, quantification, and existing solutions. A detailed comparison of this work with the existing survey articles is presented in Table I.

I-A Contributions of this Paper

The main contributions of this survey article are listed as follows:

  • We discuss the training challenges of GANs, namely mode collapse, non-convergence, and instability, in detail.

  • We classify each of these training challenges into four different categories, i.e., definition, identification, quantification, and available solutions.

  • We review the existing approaches in terms of different biomedical imaging modalities and classify them into application-based taxonomies for each problem.

  • This survey identifies research gaps and provides future research directions for GANs in the domain of biomedical imagery.

Main Domain References Year Mode Collapse Non-Convergence Instability
Definition Identification Quantification Solution Definition Identification Quantification Solution Definition Identification Quantification Solution
General Imagery Pan et al. [35] 2019 x x x
Hong et al. [39] 2019 x x x
Wiatrak et al. [31] 2019 x x x
Jabbar et al. [32] 2020 x x x
Lee et al. [40] 2020 x x x x x x x x x x
Gui et al. [36] 2020
Shamsolmoali et al. [41] 2021 x x x
Wang et al. [42] 2021 x x x x x
Sampath et al. [33] 2021 x x x
Saxena et al. [34] 2021 x x x
Biomedical Imagery Kazeminia et al. [37] 2020 x x x x x x x x x x
Singh et al. [38] 2021 x x x x x x x x x x
This Work 2022
TABLE I: An overview of existing survey articles discussing three training problems of GANs based on definition, identification, quantification, and solution from the technical literature in the general and biomedical imagery domain.

I-B Organization of the Paper

The rest of the article is organized as follows. Section II presents the detailed working of GANs, including background, basic architecture, and popular variants. Section III highlights the applications of GANs in biomedical imagery. Section IV discusses the mode collapse problem's definition, identification, quantification, and existing solutions. Section V elaborates on the non-convergence problem in the training of GANs, its identification methods, how the problem can be quantified, and possible existing solutions. Section VI explains the instability problem in the training of GANs while providing a literature review of existing identification, quantification, and solution approaches in biomedical imagery. Section VII discusses the important challenges and future research directions. Finally, Section VIII concludes the paper.

Training Challenges of GANs in Biomedical Image Analysis
  • Mode Collapse
    – Definition (Sec IV-A)
    – Identification (Sec IV-B): Unrealistic Images; Less Diverse Images
    – Quantification (Sec IV-C): Inception Score; Maximum Mean Discrepancy; Multi-scale Structural Similarity Index Measure
    – Solutions to the Problem (Sec IV-D)
  • Non-convergence
    – Definition (Sec V-A)
    – Identification (Sec V-B): Blurry Images; Images with Artifacts
    – Quantification (Sec V-C): Peak Signal to Noise Ratio; Fréchet Inception Distance Score
    – Solutions to the Problem (Sec V-D)
  • Instability
    – Definition (Sec VI-A)
    – Identification (Sec VI-B): Low Quality Images; Images with Artifacts
    – Quantification (Sec VI-C): Inception Score; Peak Signal to Noise Ratio; Fréchet Inception Distance Score
    – Solutions to the Problem (Sec VI-D)

Fig. 1: Taxonomy of training challenges in GANs for biomedical image analysis.

II Generative Adversarial Networks

GANs belong to the class of models known as generative models, i.e., models that learn the probability distribution of a training sample set and can draw new samples from it. The primary application of a GAN is to generate synthetic images, i.e., plausible fake copies of the training images. To gain an understanding of GANs, the architecture, training, objective function, and GAN variants are elaborated as follows:

II-A Architecture of GANs

In GANs, the generator G attempts to create synthetic (fake) samples by mimicking the distribution of real samples. The generator takes as input a random vector z with probability distribution p_z and transforms it into samples with probability distribution p_g. The discriminator D distinguishes fake samples from real ones. It is a binary classifier that uses labels 1 and 0 for real and fake data, respectively. The baseline architecture is known as the vanilla GAN and is shown in Fig. 2. This GAN utilizes multi-layer perceptron (MLP) neural networks to implement the generator and the discriminator [2].

Fig. 2: Architecture of the vanilla GAN. The generator G and the discriminator D are trained in an adversarial manner so that G can generate plausible fake samples while D can classify them from real samples. G uses a random vector input z for generating fake samples. The G loss is described as \log(1 - D(G(z))) while the D loss is -(\log D(x) + \log(1 - D(G(z)))).

II-B Training of GANs

In GANs, the discriminator is trained on real samples and on the synthetic samples received from the generator. The generator is trained on the feedback given by the discriminator: the discriminator sends gradients as feedback, by which the generator updates its weights to adapt to the probability distribution of the real data. The generator and the discriminator can be trained sequentially as well as simultaneously. The generator tries to deceive the discriminator by producing realistic-looking synthetic samples so that the discriminator recognizes them as real, while the discriminator tries to improve its performance by recognizing synthetic samples as fake. This training behavior of the two models, each trying to outwit the other, is known as adversarial training [43].
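The alternating objectives above can be made concrete with a small numerical sketch (a toy illustration, not the authors' implementation; the discriminator outputs below are assumed values, not produced by a trained network):

```python
import numpy as np

# d_real / d_fake stand in for hypothetical discriminator outputs D(x) and
# D(G(z)) on a small batch; the networks themselves are not implemented here.

def discriminator_loss(d_real, d_fake, eps=1e-12):
    """Binary cross-entropy the discriminator minimizes:
    -(log D(x) + log(1 - D(G(z)))), averaged over the batch."""
    return -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))

def generator_loss(d_fake, eps=1e-12):
    """Original minimax generator loss: log(1 - D(G(z))), averaged."""
    return np.mean(np.log(1.0 - d_fake + eps))

# A confident discriminator (D(x) near 1, D(G(z)) near 0) has a small loss;
# as the generator improves, d_fake rises and the generator loss decreases.
d_real = np.array([0.9, 0.95, 0.85])
d_fake_weak = np.array([0.05, 0.1, 0.08])   # generator rarely fools D
d_fake_strong = np.array([0.6, 0.7, 0.65])  # generator often fools D

assert discriminator_loss(d_real, d_fake_weak) < discriminator_loss(d_real, d_fake_strong)
assert generator_loss(d_fake_strong) < generator_loss(d_fake_weak)
```

The opposing directions of the two assertions mirror the adversarial dynamic: the same change in D(G(z)) that raises the discriminator's loss lowers the generator's.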

II-C Objective Function of GANs

The objective function of a GAN is defined by the distance between the probability distribution of the generated samples (p_g) and the probability distribution of the real samples (p_r). The binary cross-entropy loss is used to evaluate the objective function. The binary cross-entropy is a joint loss function of the discriminator and the generator. It minimizes the Jensen-Shannon divergence (JSD) between the generated data distribution and the real data distribution. The JSD is defined in Eq. (1) [2].

JSD(p_r \,\|\, p_g) = \tfrac{1}{2} KL\big(p_r \,\|\, \tfrac{p_r + p_g}{2}\big) + \tfrac{1}{2} KL\big(p_g \,\|\, \tfrac{p_r + p_g}{2}\big)    (1)

In Eq. (1), KL denotes the Kullback-Leibler divergence, p_r and p_g represent the real and generated data distributions, and (p_r + p_g)/2 denotes the average of the real and generated distributions. The objective function becomes a minimax over D and G, as presented in Eq. (2), reproduced from [36].

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_r}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]    (2)

In Eq. (2), the minimax is considered as a game in the context of GANs. Generally, minimax is an optimization problem that aims to optimize the objective function under the given constraints of the D loss and the G loss. Using plain gradient descent to optimize the objective function is discouraged, as it may drive the function to a saddle point. At a saddle point, the objective function attains a minimal value with respect to one model's weight parameters and a maximal value with respect to the other model's. Hence, the objective function is optimized as a minimax game in search of a Nash equilibrium.
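The saddle-point difficulty can be seen even on the simplest bilinear game V(x, y) = x·y, whose Nash equilibrium is (0, 0): simultaneous gradient descent-ascent spirals away from the equilibrium rather than converging (a toy illustration of this behavior, not a GAN implementation):

```python
import numpy as np

# Toy minimax game V(x, y) = x * y; x minimizes, y maximizes.
# The unique Nash equilibrium is (x, y) = (0, 0).
x, y = 1.0, 1.0
lr = 0.1
norms = [np.hypot(x, y)]
for _ in range(100):
    gx, gy = y, x                      # dV/dx = y, dV/dy = x
    x, y = x - lr * gx, y + lr * gy    # simultaneous descent-ascent step
    norms.append(np.hypot(x, y))

# The distance from the equilibrium grows at every step: the updates
# rotate and expand instead of converging, mirroring GAN non-convergence.
assert norms[-1] > norms[0]
```

Each step multiplies the squared distance from the origin by (1 + lr²), so the iterates provably diverge; this is why naive gradient descent on the GAN objective is discouraged in favor of game-aware training schemes.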

II-D Variants of GANs

The underlying training problems of GANs result in unrealistic, less diverse, low-resolution, or blurry images and images with artifacts. Several GAN variants have been proposed to address these problems.

In this section, we discuss the three most practiced GAN variants, which introduce advancements in architecture and loss function over the vanilla GAN.

II-D1 Deep Convolutional GAN (DCGAN)

One of the popular variants of GANs is the deep convolutional GAN (DCGAN) [44]. The DCGAN adopted convolutional neural networks for the generator and the discriminator, instead of the fully connected networks used in the vanilla GAN. Besides, batch normalization is used in most of the layers, and the ADAM optimizer [45] is adopted instead of SGD. DCGAN provides a meaningful improvement in training stability compared to the vanilla GAN. However, DCGAN still struggles to generate diverse, realistic, artifact-free images, which are fundamental challenges requiring more advanced solutions.

II-D2 Conditional GAN (CGAN)

In the vanilla GAN, the generator produces synthetic images based only on the latent input z, which is limited information for high-performance image synthesis. The authors of [46] proposed the conditional GAN (CGAN), which feeds additional information y together with the random vector input z to the generator, as well as to the discriminator. Here y can be a class label or any other conditional information that acts as an additional input to both the generator and the discriminator. The CGAN architecture is presented in Fig. 3, and the modified objective function is shown in Eq. (3), reproduced from [36]. The idea of CGAN has proven advantageous for image synthesis, as it can generate realistic and diverse images, and CGAN shows more stable training behavior than the vanilla GAN and DCGAN.

Fig. 3: Architecture of CGANs. The generator G and the discriminator D are trained in an adversarial manner so that G can generate plausible fake samples while D can classify them from real samples. y is a class label or any additional information conditioned on the input samples for G and D. The G loss is described as \log(1 - D(G(z|y))) while the D loss is -(\log D(x|y) + \log(1 - D(G(z|y)))).

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_r}[\log D(x|y)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z|y)))]    (3)
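A common way to realize the conditioning, sketched below with NumPy, is to concatenate a one-hot encoding of the label y with the noise vector z before feeding the generator (an illustrative convention; the batch size, latent dimension, and class count here are assumptions, not values from the surveyed papers):

```python
import numpy as np

def one_hot(labels, num_classes):
    """One-hot encode integer class labels, shape (batch, num_classes)."""
    out = np.zeros((len(labels), num_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out

def conditional_generator_input(z, labels, num_classes):
    """Concatenate noise z (batch, z_dim) with one-hot labels, giving the
    (batch, z_dim + num_classes) input a CGAN generator would receive."""
    return np.concatenate([z, one_hot(labels, num_classes)], axis=1)

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 100))       # batch of 4 latent vectors
labels = np.array([0, 2, 1, 2])     # e.g. hypothetical disease-class labels
g_in = conditional_generator_input(z, labels, num_classes=3)

assert g_in.shape == (4, 103)       # 100 latent dims + 3 label dims
assert g_in[1, 100 + 2] == 1.0      # label 2 set in the one-hot part
```

The discriminator receives the same label encoding concatenated with its image input, so both players condition on y as in Eq. (3).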

II-D3 Wasserstein GAN (WGAN)

To address the instability problem in vanilla GANs caused by the use of the Jensen-Shannon divergence, the authors of [47] proposed measuring the distance between the two data distributions instead of minimizing the divergence. To this end, the Earth-Mover (EM), or Wasserstein-1, distance is introduced in the Wasserstein GAN (WGAN). The Wasserstein-1 distance is used as a metric, instead of cross-entropy, to measure the loss for optimizing the objective function. The objective function of the WGAN is shown in Eq. (4), reproduced from [42].

W(p_r, p_g) = \inf_{\gamma \in \Pi(p_r, p_g)} \mathbb{E}_{(x, y) \sim \gamma}[\lVert x - y \rVert]    (4)

In Eq. (4), \Pi(p_r, p_g) denotes the set of all joint distributions \gamma(x, y) whose marginals are p_r and p_g. During the training of a GAN, when there is no overlap between p_r and p_g, the Jensen-Shannon divergence provides no useful value, whereas the EM distance still reflects the distance between the distributions continuously. Thus, WGAN can propagate meaningful gradient feedback to train the generator and avoid vanishing gradient problems. The main contribution of the WGAN is the use of the discriminator as a regressor (critic) instead of a binary classifier.
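For equal-sized 1D samples, the Wasserstein-1 distance has a simple closed form: sort both samples and average the absolute differences. The sketch below (an illustration of the metric itself, not of WGAN training) shows that W1 keeps giving a useful, continuously varying value even when the two samples do not overlap:

```python
import numpy as np

def wasserstein_1d(a, b):
    """Wasserstein-1 distance between equal-sized 1D empirical samples:
    the mean absolute difference of the sorted samples."""
    return float(np.mean(np.abs(np.sort(a) - np.sort(b))))

real = np.array([0.0, 1.0, 2.0])
fake_near = np.array([1.0, 2.0, 3.0])   # shifted by 1, same shape
fake_far = np.array([5.0, 6.0, 7.0])    # disjoint support, shifted by 5

assert wasserstein_1d(real, fake_near) == 1.0
assert wasserstein_1d(real, fake_far) == 5.0  # still informative when disjoint
```

By contrast, the JSD between two distributions with disjoint support saturates at a constant, which is exactly the vanishing-gradient failure mode the EM distance avoids.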

III Applications of GANs in Biomedical Image Analysis

In the domain of biomedical imaging, GANs have been utilized in several applications such as image synthesis, image segmentation, image reconstruction, image detection, image denoising, image super-resolution, and image registration. The performance of these applications is affected by the training challenges of GANs. This section presents a high-level discussion of the above-mentioned applications of GANs in biomedical image analysis and of how the training challenges affect them. A few state-of-the-art survey papers, shown in Fig. 4, are identified to give readers insight into these applications.

Applications of GANs in Biomedical Image Analysis
  • Image Reconstruction: [48] (2020)
  • Image Segmentation: [49] (2019), [50] (2019), [51] (2020)
  • Image Denoising: [37] (2020), [52] (2020)
  • Image Synthesis
    – Conditional Image Synthesis: [38] (2021)
    – Unconditional Image Synthesis: [49] (2019), [37] (2020)
  • Image Detection: [49] (2019), [37] (2020)
  • Image Registration: [53] (2020)
  • Image Super Resolution: [54] (2020)

Fig. 4: Applications of GANs in biomedical image analysis

III-A Image Synthesis

In the domain of biomedical imaging, the availability of annotated datasets (training images with labels) for training deep learning models is very limited. Sometimes, no labels are available for disease patterns in biomedical images, and manually annotating images is a daunting task. An automated method is therefore required either to annotate images or to produce more copies of annotated images. GANs are used to generate synthetic counterparts of training images; since GANs were introduced as unsupervised models, they can also be leveraged on un-annotated image datasets. Synthesizing training images using GANs is known as image synthesis, where the term synthesis denotes generating new plausible synthetic images that look like the actual ones. Training challenges of GANs can degrade the images produced during image synthesis; for example, the generation of similar synthetic images for distinct inputs, blurry images, or low-quality images indicates these training challenges. GANs have been used for two types of image synthesis, unconditional and conditional [38] [37] [49], each of which is discussed as follows:

III-A1 Unconditional Image Synthesis

In unconditional image synthesis, GANs rely only on random noisy inputs in the latent space, without any prior conditions, to generate new synthetic image samples. Unconditional synthesis of biomedical images is affected most by the training challenges of GANs, such as mode collapse and training instability: because there is no condition on the inputs and GANs take only random input to generate plausible images, mode collapse occurs. For example, direct generation of magnetic resonance images, computed tomography images, cell images, and dermoscopic images encounters these training challenges. Being an unsupervised framework, this approach has been widely utilized in biomedical image analysis to address data limitation and class imbalance issues. A detailed discussion and technical papers can be found in [37] [49].

III-A2 Conditional Image Synthesis

In conditional image synthesis, GANs consider some prior conditional information together with the latent input z to generate new synthetic images. This type of image synthesis faces the training challenges of GANs during image-to-image translation tasks. When a GAN generates a biomedical image from a same-modality or cross-modality input image, it can miss salient features of the input images while learning to translate them into new images, and due to the instability problem, the quality of the synthetic biomedical images can suffer. There are two types of applications in conditional image synthesis: generation of new images from real images with some prior conditions within the same modality, such as CT to CT, MRI to MRI, and PET to PET; and generation of new images across modalities, such as MRI to CT and MRI to PET. The survey paper [38] discusses these applications in detail.

III-B Image Segmentation

In the domain of biomedical imagery, the segmentation of objects is an important tool for image analysis. It helps to prepare the data for several tasks like detection, classification, and pattern recognition. The main goal of image segmentation is to delineate anatomical or pathological structures in biomedical images. These delineations provide a clearer anatomical view of the images and aid in learning disease patterns more precisely.

GANs provide a significant contribution to image segmentation tasks in the domain of biomedical imagery. They have been utilized for the segmentation of tumors, pathology, and lesions from different body parts such as the brain or liver. GANs use segmentation masks together with input images to generate synthetic images with the target masks segmented. Sometimes the segmentation masks are difficult to learn during training, and GANs generate poorly segmented or low-quality synthetic images. The literature [49] [50] [51] can be explored for more discussion on biomedical image segmentation.

III-C Image Reconstruction

In the domain of biomedical imagery, image reconstruction refers to the process of translating signals into images. These signals are acquired from different sensors, and every imaging modality utilizes signals in distinct bands of the electromagnetic spectrum, such as PET, CT, and X-rays from gamma rays, and microscopy or endoscopy from visible light. Reconstruction algorithms aim to transform a set of signals into 2-, 3-, or 4-dimensional images.

GANs are being incorporated to improve the quality of reconstructed images, for example estimating full-dose CT images from low-dose CT images with reduced aliasing artifacts. However, GANs often fail to reduce these aliasing artifacts effectively due to the training instability problem, and they face difficulty generating plausible images reconstructed from training images of poor quality. Mode collapse can also occur during training while learning the distribution of low-quality images. The reader is referred to the survey paper [48] for a detailed insight into biomedical image reconstruction using GANs.

III-D Image Detection

The accurate, high-performance detection of anomalies in biomedical images using supervised learning models faces data limitation problems. Supervised learning methods can only attend to anomalies they were trained on with annotations, and the limited availability of annotated data in biomedical imagery makes reliable image detection difficult.

GANs have been incorporated for unsupervised anomaly detection in biomedical imagery. The discriminator model can be used to detect anomalies such as lesions or tumors by learning the probability distribution of the training images; in this way, the discriminator can separate healthy images from abnormal biomedical images. This contribution makes it possible to work with un-annotated data and addresses the problem of reliable anomaly detection. GANs can face limitations if the generator cannot learn these anomalies faithfully or the discriminator is not optimal during the training process, which can affect the detection of anomalies and yield a low classification score. The survey papers [37] [49] are identified for more detail on this GAN application.

III-E Image Denoising

Biomedical devices capture digital image scans that often contain a certain amount of lighting and noise corruption. Noise is introduced during the compression or transmission of images when low-quality digital sources are used. It hides the actual latent information and makes the images blurred or introduces artifacts.

Image denoising techniques are required to remove this noise and recover the original latent information from the noisy images. GANs can serve as an excellent tool to produce sharp, plausible, and noise-free images. A powerful GAN model is required to denoise biomedical images, because the denoising task itself is affected by the training challenges of GANs: when a GAN cannot learn from low-quality or noisy images effectively, the resulting output images are poor. A more detailed overview of biomedical image denoising techniques utilizing GANs can be found in the literature [37] [52].

III-F Image Super Resolution

In the domain of biomedical imaging, super-resolution techniques are defined to enhance the spatial resolution of biomedical images such as MRI, CT, X-ray, and PET. High- or super-resolution images provide clearer feature visualizations compared to low-resolution images.

GANs can be utilized to produce super-resolution images from low-resolution images: the generator produces super-resolution images while the discriminator's job is to distinguish the artificial super-resolution images from real high-resolution images. The training instability problem must be fully addressed to achieve better high-resolution biomedical images, as the optimality of a GAN is difficult to achieve, and the mode collapse and non-convergence problems can also degrade the quality of the synthetic images. GANs have performed various super-resolution tasks in biomedical image analysis, and the reader can find a detailed review of those tasks in the review paper [54].

III-G Image Registration

In the biomedical imaging domain, image registration refers to the alignment of two or more images such that the location of an organ or object is aligned across them. Usually, biomedical images are acquired as 3D volumes. There are different methods for this task; in one adapted approach, similarity measures are calculated between desired image patches and used to register the dedicated image sets.

Conventional registration techniques suffer from parameter dependency problems and a high optimization load. GANs have good image transformation capabilities and can serve as an excellent candidate for extracting a more optimal registration mapping. However, GANs are limited by their training challenges, as they can miss the location of an object or feature in a biomedical image during the registration process. 3D volumes of biomedical images face these challenges in particular, as the generator may not learn 3D volumes effectively enough to generate diverse, un-blurred, and high-quality synthetic images. More details can be found in the survey paper [53].

IV The Mode Collapse Problem

IV-A Definition

The basic purpose of a GAN is to produce realistic and varied synthetic output images; the synthetic images should be of different styles (modes of the distribution) for each random input. In practice, the generator learns to produce synthetic images just to misguide the discriminator into classifying them as real. Once the generator finds the best way to fool the discriminator by producing particular plausible images, it focuses on generating similar images repetitively. The discriminator gets fooled each time, classifies the synthetic images as real, and eventually becomes stuck in this trap. Consequently, the generator keeps producing a similar style of images. This problem is known as mode collapse [43].

IV-B Identification

Mode collapse can be identified during the training of GANs by inspecting the generated images: unrealistic output and low diversity in the generated images indicate mode collapse. The problem can be divided into two categories based on the number of classes within the dataset [55]. First, when the generator produces a similar style of output images for multi-class inputs, the inter-class diversity is affected and the problem is known as inter-class mode collapse. Second, when the generator produces a similar style of output images for single-class inputs, the problem is termed intra-class mode collapse and affects the intra-class diversity.
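A crude numerical symptom of the collapse just described is a near-zero average pairwise distance between generated samples in a batch. The sketch below is an illustrative heuristic, not one of the surveyed methods, and the random arrays stand in for generated images:

```python
import numpy as np

def mean_pairwise_distance(batch):
    """Average Euclidean distance between all pairs of flattened images."""
    flat = batch.reshape(len(batch), -1)
    dists = [np.linalg.norm(flat[i] - flat[j])
             for i in range(len(flat)) for j in range(i + 1, len(flat))]
    return float(np.mean(dists))

rng = np.random.default_rng(0)
diverse = rng.normal(size=(8, 16, 16))                        # varied fake "images"
collapsed = np.tile(rng.normal(size=(1, 16, 16)), (8, 1, 1))  # one mode, repeated

assert mean_pairwise_distance(collapsed) == 0.0
assert mean_pairwise_distance(diverse) > 1.0
```

The metrics in the next subsection (IS, MMD, MS-SSIM) are principled refinements of this idea, measuring diversity in a perceptual or distributional sense rather than raw pixel distance.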

IV-C Quantification

The diversity and similarity of generated images can be computed by several quantitative measures. A number of evaluation metrics have been proposed, such as the Inception Score (IS) [56], Maximum Mean Discrepancy (MMD) [57], and Multi-scale Structural Similarity Index Measure (MS-SSIM) [58].

IV-C1 Inception Score (IS)

The Inception Score is a metric used for the evaluation of GANs [59]. It assesses generated images for high-quality and diverse characteristics. IS utilizes a pre-trained Inception-Net [60] and measures the KL divergence between the class conditional probability distribution of a generated sample and the marginal probability distribution obtained from a set of generated images.

$\mathrm{IS}(G) = \exp\!\big(\mathbb{E}_{x \sim p_g}\, D_{KL}(p(y|x)\,\|\,p(y))\big)$   (5)

In Eq. (5), reproduced from [61], $p(y|x)$ is the class conditional probability distribution for image $x$, $p(y)$ is the marginal probability distribution, and $D_{KL}$ denotes the KL divergence between the two [61]. The lowest possible IS is 1, while the highest score depends on the number of classes in the dataset. A higher IS indicates that the model can generate high-quality as well as diverse images.
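The score can be illustrated with a small NumPy sketch. This is a toy computation on hand-made class probabilities; in practice $p(y|x)$ comes from a pre-trained Inception-Net applied to the generated images.

```python
import numpy as np

def inception_score(p_yx, eps=1e-12):
    """Toy Inception Score from class-conditional probabilities.

    p_yx: array of shape (n_images, n_classes), each row p(y|x).
    """
    p_y = p_yx.mean(axis=0)                      # marginal p(y)
    # average KL(p(y|x) || p(y)) over images, then exponentiate
    kl = p_yx * (np.log(p_yx + eps) - np.log(p_y + eps))
    return float(np.exp(kl.sum(axis=1).mean()))

# Confident, diverse predictions -> high IS (upper bound = n_classes)
diverse = np.eye(3)                              # 3 images, 3 distinct classes
collapsed = np.tile([1.0, 0.0, 0.0], (3, 1))     # every image in one class
print(inception_score(diverse))    # close to 3.0
print(inception_score(collapsed))  # close to 1.0
```

A fully collapsed generator drives the score toward its minimum of 1, which is exactly the behavior the metric is used to detect.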

IV-C2 Maximum Mean Discrepancy (MMD)

The maximum mean discrepancy measures the dissimilarity between the real image distribution $p_r$ and the generated image distribution $p_g$ [57]. A high MMD value indicates that the generator is collapsing and does not generate realistic and diverse images.

$\mathrm{MMD}^2(p_r, p_g) = \big\| \mu_{p_r} - \mu_{p_g} \big\|^2_{\mathcal{H}}$   (6)

Mathematically, MMD is defined in a Hilbert space of functions, in which two functions are point-wise close if they are close in the norm [19]. MMD can thus be calculated as the squared distance between the embeddings $\mu_{p_r}$ and $\mu_{p_g}$ of the two distributions, as shown in Eq. (6), reproduced from [61].
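A common kernel-based estimator of the squared MMD can be sketched as follows. The RBF kernel and its bandwidth are illustrative choices, not taken from the surveyed papers.

```python
import numpy as np

def mmd2_rbf(X, Y, sigma=1.0):
    """Biased estimator of squared MMD with an RBF (Gaussian) kernel:
    mean k(x,x') + mean k(y,y') - 2 mean k(x,y)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return float(k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean())

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(200, 2))
same = rng.normal(0.0, 1.0, size=(200, 2))   # same distribution as real
far  = rng.normal(5.0, 1.0, size=(200, 2))   # shifted distribution
print(mmd2_rbf(real, same))  # near 0
print(mmd2_rbf(real, far))   # much larger
```

Samples from matching distributions give an MMD near zero, while a collapsed or shifted generated distribution produces a clearly larger value.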

IV-C3 Multi-scale Structural Similarity Index Measure (MS-SSIM)

MS-SSIM is a metric used to assess the diversity of synthetic images in GANs. It was introduced to measure a similarity score modeled on human perceptual similarity analysis. It computes the similarity between two images from their pixels and structures [62]. MS-SSIM considers luminance (the perceived brightness of a color), contrast, and structure estimations for the metric score. Luminance $l(x,y)$, contrast $c(x,y)$, and structure $s(x,y)$ are computed using Eq. (7), reproduced from [61]:

$l(x,y) = \dfrac{2\mu_x\mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1}, \quad c(x,y) = \dfrac{2\sigma_x\sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2}, \quad s(x,y) = \dfrac{\sigma_{xy} + C_3}{\sigma_x\sigma_y + C_3}$   (7)

In Eq. (7), $x$ and $y$ are two images. $\mu_x$ and $\mu_y$ represent the means, whereas $\sigma_x$ and $\sigma_y$ denote the standard deviations of pixel intensities. The correlation between corresponding pixels is represented by $\sigma_{xy}$. For numerical stability of the fractions, the constants $C_1$, $C_2$, and $C_3$ are added to the three quantities. The single-scale similarity index is then computed by Eq. (8) (reproduced from [61]), which assumes a fixed viewing distance as well as a fixed sampling density of the images [63]:

$\mathrm{SSIM}(x,y) = [l(x,y)]^{\alpha}\,[c(x,y)]^{\beta}\,[s(x,y)]^{\gamma}$   (8)

The multi-scale SSIM is a variant of the single-scale SSIM metric. It computes contrast and structure scores at every scale of iteratively downsampled images, whereas the luminance term is measured only at the last iteration, known as the coarsest scale $M$. In this way, it gives weight to contrast and structure at each scale. MS-SSIM is computed by Eq. (9), reproduced from [61]:

$\mathrm{MS\text{-}SSIM}(x,y) = [l_M(x,y)]^{\alpha_M} \prod_{j=1}^{M} [c_j(x,y)]^{\beta_j}\,[s_j(x,y)]^{\gamma_j}$   (9)

MS-SSIM scores lie between 0.0 and 1.0. An important point to note is that a higher MS-SSIM score indicates lower diversity between images of the same class. The metric is therefore useful for evaluating GANs by computing the diversity between generated images of a single class.
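The luminance/contrast/structure terms can be illustrated with a simplified single-scale SSIM computed from global image statistics. The standard metric of Eq. (8) uses local sliding windows and exponents $\alpha=\beta=\gamma=1$; this global version is only a sketch of the combined formula.

```python
import numpy as np

def ssim_global(x, y, max_val=255.0):
    """Single-scale SSIM from global image statistics (a simplification:
    the standard metric averages the same formula over local windows)."""
    C1, C2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return (((2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)) /
            ((mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2)))

rng = np.random.default_rng(1)
img = rng.uniform(0, 255, size=(64, 64))
noisy = np.clip(img + rng.normal(0, 25, size=(64, 64)), 0, 255)
print(ssim_global(img, img))    # 1.0 for identical images
print(ssim_global(img, noisy))  # below 1.0
```

Identical images score exactly 1.0, which is why near-1.0 MS-SSIM between generated samples of the same class signals low diversity.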

Proposed solutions for the mode collapse problem in GANs:
- Regularization
  - Weight Normalization: SNSRGAN [21] (2020)
- Modified Architecture
  - Generator: MD-GAN [23] (2018); SL-StyleGAN [4] (2020)
  - Discriminator: CycleGAN [24] (2021); Modified CGAN [18] (2019); CGAN [64] (2019)
  - Generator-Discriminator Combined: Auto-Encoding GAN [20] (2019); DCR Auto-Encoding Alpha GAN [19] (2020); SPGGAN [25] (2021)
- Adversarial Training
  - Buffer Strategy: ScarGAN [65] (2018)
  - Image Hashing: DCGAN [22] (2017)

Fig. 5: Taxonomy of different proposed solutions for addressing the mode collapse problem of GANs in biomedical imagery analysis

Mode collapse problem in different applications of GANs:
- Image Segmentation
  - MR Images: ScarGAN [65] (2018)
  - CT Images: DCGAN [22] (2017)
- Image Synthesis
  - Conditional Image Synthesis
  - Unconditional Image Synthesis
    - MR Images: Auto-Encoding GAN [20] (2019); DCR Auto-Encoding Alpha GAN [19] (2020)
    - Dermoscopic Images: SL-StyleGAN [4] (2020); SPGGAN [25] (2021)
    - Other Image Modalities: MD-GAN [23] (2018); Modified CGAN [18] (2019)
- Image Super Resolution
  - CT Images: CGAN [64] (2019)
- Image-Image Translation
  - MR-MR Images: CycleGAN [24] (2021)

Fig. 6: An application-based taxonomy of different approaches for addressing the mode collapse problem of GANs in biomedical imagery analysis

IV-D Solutions to the Problem

IV-D1 Regularization

In deep learning models, we aim to minimize the loss, which is difficult to achieve when weights grow large: the model overfits the data and yields poor predictions. To alleviate this, a regularization term is used to reduce the network's weight magnitudes or limit the model's capacity [66]. In GANs, neural networks are used for both the generator and the discriminator. When the discriminator continuously backpropagates ambiguous gradients as feedback, the generator learns to generate similar images again and again to fool the discriminator, which leads to mode collapse. Here, regularization takes the form of weight normalization.

Weight Normalization (WN)

In GANs, weight normalization (WN) uses specialized training algorithms to update the weight matrices regularly during training. WN does not introduce an additional loss; it backpropagates gradients computed with respect to the normalized weights during GAN training [40].

Spectral normalization is a type of weight normalization that constrains the spectral norm of the weight matrices while training GANs. The spectral norm is the matrix L2 norm and corresponds to the largest singular value, which bounds the Lipschitz constant of the layer. Xu et al. [21] use spectral normalization for super-resolution of low-dose X-ray images. Spectral normalization is applied to the weight matrices of the discriminator, constraining its Lipschitz constant to 1. In this way, diverse super-resolution synthetic images can be generated. The Spectral Normalization Super Resolution GAN (SNSRGAN) outperforms baseline GAN models such as SRGAN [67], with an MS-SSIM score of 0.986 and an IS of 6.56.
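The core operation, dividing a weight matrix by its largest singular value, can be sketched in NumPy via power iteration. Frameworks apply this per layer at every training step with a single persisted iteration; here we iterate to convergence for clarity.

```python
import numpy as np

def spectral_normalize(W, n_iter=50):
    """Normalize a weight matrix by its spectral norm (largest singular
    value), estimated with power iteration."""
    u = np.random.default_rng(0).normal(size=W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v          # estimated largest singular value
    return W / sigma

W = np.random.default_rng(2).normal(size=(8, 5))
W_sn = spectral_normalize(W)
# the normalized matrix has spectral norm ~1, i.e. a 1-Lipschitz linear map
print(np.linalg.svd(W_sn, compute_uv=False)[0])
```

After normalization the layer's Lipschitz constant is at most 1, which is exactly the constraint SNSRGAN imposes on its discriminator.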

IV-D2 Modified Architecture

In GANs, if a new architecture is defined with an alternative generator, discriminator, or both relative to the vanilla GAN, we describe it as a modified architecture.

Generator

A GAN architecture that introduces an alternative generator is described as having a modified generator. To avoid the mode collapse problem, a widely adopted approach is to use multiple generators instead of the single generator of the vanilla GAN, which has proved effective in alleviating the problem [68]. However, optimizing multiple generators is complicated and computationally expensive. [23] proposed using multiple distributions instead of multiple generators to synthesize human cell images. The authors adopted a Gaussian mixture model (GMM) and utilized it as the generator model. A Gaussian mixture model can cover each data distribution in the latent space and helps generate diverse image samples from a mixture of distributions. Moreover, the paper argued that using more distributions can generate more diverse synthetic samples, but such a construction incurs a large computational cost. The generated human cell images are then used to augment the dataset for classification tasks. No quantitative analysis of the generated images is reported in the paper, but the authors note that the synthetic images aid data augmentation and improve the classification precision of a CNN by 4.6%.
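The idea of sampling latent codes from a mixture rather than a single Gaussian can be sketched as follows. The one-dimensional mixture, its component means, and its widths are illustrative, not taken from the paper.

```python
import numpy as np

def sample_gmm(n, means, sigmas, rng):
    """Draw latent samples from a Gaussian mixture: each component can
    cover a different mode of the data distribution."""
    comp = rng.integers(len(means), size=n)          # pick a component
    return rng.normal(np.take(means, comp), np.take(sigmas, comp))

rng = np.random.default_rng(6)
z = sample_gmm(10000, means=[-4.0, 0.0, 4.0], sigmas=[0.5, 0.5, 0.5], rng=rng)
# roughly a third of the samples fall near each mode
print((z < -2).mean(), ((z > -2) & (z < 2)).mean(), (z > 2).mean())
```

Because every component is sampled with equal probability, the generator's input already spans several modes, which is the mechanism the paper relies on to discourage collapse.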

The selection of an appropriate generator structure has a great impact on the quality of synthetic images. To explore this idea, [4] proposed an extension of StyleGAN, skin-lesion StyleGAN (SL-StyleGAN), for synthesizing skin lesion images. In [4], the authors discussed how changing the number of fully connected layers in the generator's mapping network can control the generation of different modes of images. In the baseline StyleGAN [69], the generator contains a non-linear mapping network, implemented as an MLP, that maps the latent input z to an intermediate latent space before passing the information to the original generator model. The authors experimented with 2, 4, 6, and 8 fully connected layers and compared the generated images. They found that generators with 6 and 8 fully connected layers tend to generate only single-mode images, indicating mode collapse. The generator with 2 fully connected layers generates relatively more diverse images than 6 or 8 layers, but introduces scattered defects such as artifacts. The generator with 4 fully connected layers generates relatively diverse images with no artifacts or other defects. The synthesized images are evaluated with the Recall score. SL-StyleGAN with the 4-layer mapping network achieved a Recall of 0.263, higher than all other layer configurations. The authors concluded that the final synthetic images are still not fully diverse, as indicated by the Recall score, and that further work is needed to address this.

Discriminator

A GAN architecture that introduces an alternative discriminator is said to have a modified discriminator. In GANs, when the generator collapses to a single mode and produces identical image samples, the discriminator backpropagates identical gradients for several generator inputs. There is no coordination between the discriminator and its gradients because it treats each training sample independently, so no mechanism guides the generator to produce dissimilar or diverse image samples. To address this problem in MR-to-MR image translation of breast slices, Modanwal et al. [24] use a small field of view of 34x34, instead of the 70x70 of the standard Patch discriminator, in CycleGAN. The small field of view encourages the transformation learned by the generator to preserve sharp, high-frequency details. This modification of CycleGAN preserves the structural information of breast and dense tissues during training for the image translation task.

The generated images are evaluated with the Dice coefficient and compared with the standard CycleGAN. GE Healthcare (GE) and Siemens (SE) are the two source scanners for image acquisition. The standard CycleGAN achieves a mean of 0.8913 with a standard deviation of 0.0941 for GE-to-SE translation, and a mean of 0.9089 with a standard deviation of 0.0471 for SE-to-GE translation. The authors achieve improved means of 0.9801 (standard deviation 0.0061) for GE-to-SE translation and 0.9813 (standard deviation 0.0049) for SE-to-GE translation on the test data.

To address mode collapse in synthesizing cervical histopathology images, the authors in [18] utilize minibatch discrimination in the discriminator of a CGAN to generate realistic, diverse samples. Minibatch discrimination penalizes the generator if it collapses to a single mode and regulates it to produce diverse images [59]; it creates coordination between the discriminator's gradients and the training samples. The synthetic images are not evaluated by any diversity or similarity metric against the real images; they are used to augment the dataset for classification tasks.
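The intuition behind minibatch discrimination can be sketched with a simplified batch-similarity feature. The real technique learns a tensor projection of the features [59]; the fixed L1-based similarity below only illustrates how a collapsed batch becomes detectable by the discriminator.

```python
import numpy as np

def minibatch_similarity(features):
    """Per-sample closeness to the rest of the batch (a simplified stand-in
    for minibatch discrimination): collapsed batches yield high similarity
    that the discriminator can learn to penalize."""
    d = np.abs(features[:, None, :] - features[None, :, :]).sum(-1)
    return np.exp(-d).sum(1) - 1.0   # exclude self-similarity

rng = np.random.default_rng(7)
diverse = rng.normal(size=(8, 16))                   # varied batch
collapsed = np.tile(rng.normal(size=(1, 16)), (8, 1))  # identical samples
print(minibatch_similarity(diverse).mean())    # near 0
print(minibatch_similarity(collapsed).mean())  # 7.0 (all 7 others identical)
```

Appending such a feature to each sample lets the discriminator see the whole batch at once, so identical generator outputs stop being individually plausible.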

A similar problem of generating diverse synthetic image samples occurs in CGANs when dealing with CT scans of distinct body parts. To address this problem, a CGAN with a modified discriminator is proposed for a super-resolution task [64]. A 3-dimensional fully convolutional neural network is used as the discriminator, together with a conditional information vector that encodes the input image region, such as leg, head, abdomen, or chest. This information is used by the discriminator to evaluate the generated CT slices and encourages the generator to produce diverse image samples. The generated super-resolution images are evaluated with SSIM and PSNR. The highest SSIM (0.933) and PSNR (35.73) are achieved, compared to the CGAN without the conditional vector. The SSIM score reflects the similarity and realistic nature of the generated images with respect to the ground truth images.

Generator-Discriminator Combined

In this section, we describe GAN architectures in which both the generator and the discriminator are modified. Synthesizing 3-dimensional (3D) Magnetic Resonance images in diverse modes with GANs is challenging due to the complexity of generating 3D data. To address this problem, the authors in [20] adopted an α-GAN with a few modifications to the activation functions, batch normalization, and loss function. The α-GAN is composed of a Variational Auto-encoder (VAE) and a code discriminator network. A VAE is a generative model that explicitly learns the likelihood distribution of the training data, rather than relying on another model's feedback as GANs do, to generate synthetic image samples [70]. This property of the VAE helps address mode collapse and generate diverse images; in contrast, VAEs tend to generate blurry images. The α-GAN exploits the VAE's advantage in alleviating mode collapse for 3D MR image generation. The authors of [20] proposed an Auto-encoding GAN and generated 3D MR images with different latent input sizes of 100, 1000, and 2048. With a latent vector of size 1000, the proposed Auto-encoding GAN generates diverse image samples, while it fails to escape mode collapse with too small (100) or too large (2048) latent vector sizes.

To evaluate synthetic image performance, [20] calculated average MMD and MS-SSIM scores. The results show that the proposed GAN performs best with a latent input size of 1000, with an average MMD score of 0.072 and an MS-SSIM of 0.829, against an MS-SSIM of 0.846 for the real data. The MS-SSIM of the synthetic 3D MR images shows a good similarity measure with the real data, making the model a good candidate for generating diverse images. However, there remains a gap in generating more robust and diverse images that are smooth and artifact-free.

The authors in [19] extend this work by applying a refiner network based on ResNet blocks [71] to generate realistic 3D MR images. The ResNet uses skip connections, which allow some layers to be bypassed, to smooth the shapes of generated images and make them more realistic. However, this work reports a high MS-SSIM of 0.9991 between generated images, which indicates low diversity. The proposed deep convolutional refiner GAN [19] achieved an improved MMD of 0.2240 ± 0.0008, compared to the previous MMD of 0.5932 ± 0.0004, demonstrating the realistic nature of the generated images.

The authors in [25] adapted a self-attention mechanism with the progressive growing GAN (PGGAN) to generate diverse synthetic skin lesion images. They note that most image synthesis tasks in biomedical imagery use PGGANs built from convolutional layers, whose filters depend on local neighborhood information. Relying only on convolutional layers makes it computationally inefficient to capture long-range dependencies in images. A self-attention mechanism is therefore adapted that enables the discriminator to preserve image features with activations relevant to the task. It also helps the generator produce synthetic images in which fine details at every location are coordinated with fine details in distant portions of the image, and allows the discriminator to judge the consistency of highly detailed features in distant regions. In this way, the generator becomes capable of producing diverse image samples using a self-attention mechanism in PGGAN (SPGGAN).

Different feature-level maps are used to evaluate the performance of the self-attention mechanism for image synthesis at a resolution of 128 x 128 pixels, monitoring the (N-1)-to-N stages of SPGGAN and PGGAN. SPGGAN achieves a higher feature-map score than PGGAN on both the training and test sets, though both remain below the 78.2% of the real dataset. This shows that the proposed SPGGAN attains better diversity and more realistic image synthesis than PGGAN, yet is still distant from real images.

IV-D3 Adversarial Training

This section discusses alterations made during the training of GANs, such as adding a buffer storage [65] or using perceptual image hashing [22], to identify and address the mode collapse problem.

Buffer Storage Scheme

Generating or simulating diverse scar tissue in the myocardium of the left ventricle from a segmented healthy late-gadolinium enhancement (LGE) imaging scan using GANs is a challenging task. Scar tissue is fibrotic tissue that appears when healthy tissue is destroyed by disease. [65] proposed a GAN variant, ScarGAN, composed of convolutional U-Net-based architectures [72] in both the generator and the discriminator. In ScarGAN, an experience replay buffer scheme [73] is used to prevent the generator from producing similar shapes of scar tissue. In this scheme, half of the generated masks are stored in a buffer for experience replay; the discriminator randomly draws half of its training batches from this buffer to check previously generated scar tissue samples and prevent the generator from repeating shapes.
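An experience replay buffer of this kind can be sketched as follows. This is a minimal illustration in the spirit of ScarGAN's scheme; the 50% replay probability, the capacity, and the `mix` interface are illustrative assumptions, not the paper's implementation.

```python
import random

class ReplayBuffer:
    """Buffer of previously generated samples: part of the discriminator's
    fake batch is drawn from history, discouraging repeated shapes."""
    def __init__(self, capacity=50):
        self.capacity, self.items = capacity, []

    def mix(self, batch):
        out = []
        for item in batch:
            if len(self.items) < self.capacity:
                self.items.append(item)     # fill the buffer first
                out.append(item)
            elif random.random() < 0.5:     # replay an old sample
                i = random.randrange(len(self.items))
                out.append(self.items[i])
                self.items[i] = item        # swap the new one in
            else:
                out.append(item)
        return out

random.seed(0)
buf = ReplayBuffer(capacity=4)
print(buf.mix(["a", "b", "c", "d"]))  # buffer fills with the first batch
print(buf.mix(["e", "f", "g", "h"]))  # mixture of old and new samples
```

Because the discriminator keeps seeing older generator outputs, simply re-emitting a shape that once fooled it stops working.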

The generated images from ScarGAN [65] are evaluated by experienced physicians, who are shown a mixed dataset of 15 generated and 15 real images. They classify them with an accuracy of 53%, which indicates good realism of the generated images. However, the authors concluded that ScarGAN still generates scar tissue with limited shape diversity, i.e. similar shapes, which remains to be addressed in future work.

Perceptual Image Hashing

Generating new segmentation masks and ground-truth images separately with GANs is a time-consuming task. To generate new chest radiographs together with segmentation masks, [22] proposed a DCGAN variant that forces the generator to produce a segmentation mask alongside the ground-truth image. During adversarial training, the generator starts producing identical image-segmentation pairs with few artifacts, leading to mode collapse. To address this, the authors use a perceptual image hash function to remove identical generated image-segmentation pairs. Perceptual image hash functions compute hash values of real and generated images based on specific image features [74]; these hash values are then compared to measure the difference between generated and real images.
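The principle can be illustrated with a toy average hash, one of the simplest perceptual hashes: near-duplicate images map to nearly identical bit strings, so a small Hamming distance flags a repeated sample. Real perceptual hash functions [74] are more elaborate than this sketch.

```python
import numpy as np

def average_hash(img, hash_size=8):
    """Toy perceptual (average) hash: block-downsample the image and
    threshold each block at the mean, yielding a 64-bit signature."""
    h, w = img.shape
    small = img[: h - h % hash_size, : w - w % hash_size]
    small = small.reshape(hash_size, h // hash_size,
                          hash_size, w // hash_size).mean(axis=(1, 3))
    return (small > small.mean()).astype(np.uint8).ravel()

rng = np.random.default_rng(3)
img = rng.uniform(0, 255, size=(64, 64))
dup = img + rng.normal(0, 1, size=(64, 64))    # near duplicate
other = rng.uniform(0, 255, size=(64, 64))     # unrelated image
print((average_hash(img) != average_hash(dup)).sum())    # small Hamming distance
print((average_hash(img) != average_hash(other)).sum())  # much larger distance
```

Thresholding the Hamming distance between hashes lets identical generated image-segmentation pairs be filtered out cheaply, without pixel-wise comparison.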

The generated image-segmentation pairs are evaluated through data augmentation for a segmentation task. A U-Net is trained on 30 real and 120 generated images, achieving the lowest Hausdorff distance of 7.2885 compared with training on only real or only generated images. However, the authors concluded that a mild form of mode collapse still occurred, resulting in less diverse images.

Training Problem References GAN Variant Image Modality Proposed Solution Evaluation Metric
Mode Collapse Qin et al. [4] SL-StyleGAN Dermoscopic Images Varying number of fully-connected layers Recall
Lau et al. [65] ScarGAN MR Images Experience replay buffer -
Wu et al. [23] MDGAN Chromosome Cell Images Gaussian Mixture Model as generator -
Modanwal et al. [24] CycleGAN MR Images Patch discriminator (34x34) dice coefficient
Xue et al. [18] Modified CGAN Histopathology Images Minibatch discrimination -
Kudo et al. [64] CGAN CT Images Discriminator based on 3D CNN with conditional information vector -
Segato et al. [19] DCR Auto-Encoding Alpha GAN MR Images Skip connections MMD and MS-SSIM
Kwon et al. [20] Auto-Encoding GAN MR Images VAEGAN MMD and MS-SSIM
Abdelhalim et al. [25] SPGGAN Dermoscopic Images Self attention mechanism Feature level maps
Neff et al. [22] DCGAN CT Images Perceptual image hashing -
Xu et al. [21] SNSRGAN X-ray Images Spectral Normalization -
Non-convergence Abdelhalim et al. [25] SPGGAN-TTUR Dermoscopic Images Two Time-scale Update Rule (TTUR) Paired t-test

Goel et al. [27] Optimized GAN CT Images Whale optimization algorithm -
Biswas et al. [26] uGAN Retinal Images Modified training updates of generator and discriminator SSIM
Instability Xue et al. [18] Modified CGAN Histopathology Images WGAN-GP loss -
Segato et al. [19] DCR Auto-Encoding Alpha GAN MR Images WGAN-GP loss -
Kwon et al. [20] Auto-Encoding GAN MR Images WGAN-GP loss -
Wei et al. [29] CF-SAGAN MR Images Residual connections PSNR
Wu et al. [30] ciGAN Mammography Images Multi-scale generator -
Zhao et al. [75] S-CycleGAN PET Images WGAN loss learned perceptual image patch similarity (LPIPS) score
Deepak et al. [28] MSG-GAN MR Images WGAN-GP loss -
TABLE II: A comparative analysis of contributing papers highlighting training problems of GANs based on GAN variant, proposed solution, image modality, and evaluation metric.

IV-D4 Summary

In this section, technical papers addressing the mode collapse problem in the biomedical imagery domain are reviewed. Mode collapse can be alleviated using different methods such as regularization, modified architectures, and adversarial training, which are reviewed here as solutions to the problem in biomedical imagery. A taxonomy based on these solutions is shown in Fig. 5, in which each sub-category is further divided into different methods: regularization covers weight normalization, and modified architectures are divided into generator, discriminator, and generator-discriminator combined approaches. Similarly, adversarial training is divided into buffer schemes and perceptual image hashing. An application-based taxonomy is also presented in Fig. 6, which helps analyze the effect of mode collapse for specific types of biomedical images. The reviewed literature shows that the approaches in these papers only partially alleviate mode collapse in biomedical imagery. Table II provides a comparative analysis of the contributing papers. The Auto-encoding GAN [20] provides relatively more diverse synthetic images while addressing the problem in biomedical imagery.

V The Non-convergence Problem

V-A Definition

In GANs, it is important that the training of the generator and the discriminator converges to a global point (a Nash equilibrium). GAN training is performed as a minimax game to reach this equilibrium, so both models should be trained with the best strategies available. As the generator's performance improves, it becomes harder for the discriminator to distinguish synthetic images from real ones. When the generator is producing maximally plausible (realistic-looking) images, the discriminator's classification accuracy drops to 50%. Consequently, the discriminator provides no meaningful feedback for updating the generator's weights, which affects the quality of the synthetic images. As a result, GAN training can fail to reach equilibrium, known as the non-convergence problem [76].

V-B Identification

The non-convergence problem has a direct effect on the quality of generated images, and can therefore be identified by analyzing that quality. Non-convergence leads the generator to produce blurry images. A further symptom is synthetic images containing artifacts, such as noise or additional objects that were never intended to be generated.

V-C Quantification

To evaluate the non-convergence problem in GANs, several evaluation metrics have been proposed to quantify the quality of generated images, such as the peak signal-to-noise ratio (PSNR) and the Fréchet Inception Distance (FID) [77], [78].

V-C1 Peak signal-to-noise ratio (PSNR)

In GANs, PSNR is used to check the quality of synthetic images against the corresponding real images. PSNR is applied to monochrome images and is measured in decibels (dB); a higher PSNR indicates better quality of the synthetic images. PSNR is computed as shown in Eq. (10), reproduced from [61]:

$\mathrm{PSNR} = 10 \cdot \log_{10}\!\left(\dfrac{\mathrm{MAX}_I^2}{\mathrm{MSE}}\right)$   (10)

By simplifying,

$\mathrm{PSNR} = 20 \cdot \log_{10}(\mathrm{MAX}_I) - 10 \cdot \log_{10}(\mathrm{MSE})$   (11)

whereas

$\mathrm{MSE} = \dfrac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\big[I(i,j) - K(i,j)\big]^2$   (12)

Eqs. (10), (11), and (12) are reported in [61]. $I$ and $K$ represent two monochrome images of size $m \times n$. $\mathrm{MAX}_I$ denotes the highest possible pixel value of the image, e.g. 255 for an 8-bit representation.
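The three equations above reduce to a few lines of NumPy; the noise levels in the demo below are illustrative.

```python
import numpy as np

def psnr(I, K, max_i=255.0):
    """PSNR in dB between a reference image I and a test image K
    (Eqs. (10)-(12): MSE, then 20*log10(MAX_I) - 10*log10(MSE))."""
    mse = np.mean((I.astype(np.float64) - K.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")        # identical images
    return 20 * np.log10(max_i) - 10 * np.log10(mse)

rng = np.random.default_rng(4)
ref = rng.uniform(0, 255, size=(32, 32))
light = np.clip(ref + rng.normal(0, 2, size=ref.shape), 0, 255)
heavy = np.clip(ref + rng.normal(0, 40, size=ref.shape), 0, 255)
print(psnr(ref, light))  # high PSNR (light noise)
print(psnr(ref, heavy))  # lower PSNR (heavy noise)
```

Lightly corrupted images score tens of dB, while heavier corruption drives the score down, matching the "higher is better" reading used in the surveyed papers.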

V-C2 Fréchet Inception Distance (FID)

FID is an evaluation metric used to assess the quality of synthetic images, proposed by Heusel et al. [77]. It embeds sets of real and synthetic images using a layer of the Inception-Net and models each embedding set as a continuous multivariate Gaussian, computing the mean and covariance of the synthetic and real images as shown in Eq. (13), reproduced from [61]:

$\mathrm{FID} = \|\mu_r - \mu_s\|^2 + \mathrm{Tr}\big(\Sigma_r + \Sigma_s - 2(\Sigma_r \Sigma_s)^{1/2}\big)$   (13)

In Eq. (13), the subscripts $r$ and $s$ denote real and synthetic images, while $\mu_r, \mu_s$ and $\Sigma_r, \Sigma_s$ denote the corresponding means and covariances. The FID score measures the distance between the real and synthetic image distributions; a higher FID indicates a larger distance between them [61].
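Eq. (13) can be computed directly once feature vectors are available. In the sketch below, synthetic Gaussian vectors stand in for the Inception-Net embeddings of real and generated images.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feat_r, feat_s):
    """Fréchet distance between Gaussians fitted to two feature sets
    (in practice, Inception embeddings of real/synthetic images)."""
    mu_r, mu_s = feat_r.mean(0), feat_s.mean(0)
    cov_r = np.cov(feat_r, rowvar=False)
    cov_s = np.cov(feat_s, rowvar=False)
    covmean = sqrtm(cov_r @ cov_s)
    if np.iscomplexobj(covmean):   # drop tiny imaginary numerical residue
        covmean = covmean.real
    diff = mu_r - mu_s
    return float(diff @ diff + np.trace(cov_r + cov_s - 2 * covmean))

rng = np.random.default_rng(5)
real = rng.normal(0, 1, size=(500, 4))
close = rng.normal(0, 1, size=(500, 4))   # same distribution
far = rng.normal(3, 1, size=(500, 4))     # shifted distribution
print(fid(real, close))  # near 0
print(fid(real, far))    # much larger
```

Matching distributions give a near-zero FID, while a shifted generated distribution inflates it, illustrating the "higher FID, larger distance" interpretation above.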

Proposed solutions for the non-convergence problem in GANs:
- Nash Equilibrium
  - Updating Algorithm: uGAN [26]
  - Learning Rate: SPGGAN-TTUR [25] (2021)
  - Hyperparameter Optimization: Optimized GAN [27] (2021)

Fig. 7: Taxonomy of different proposed solutions for addressing the non-convergence problem of GANs in biomedical imagery analysis

Non-convergence problem in different applications of GANs:
- Image Synthesis
  - Unconditional Image Synthesis
    - CT Images: Optimized GAN [27]
    - Dermoscopic Images: SPGGAN-TTUR [25] (2021)
    - Retinal Images: uGAN [26]

Fig. 8: An application-based taxonomy of different approaches for addressing the non-convergence problem of GANs in biomedical imagery analysis

V-D Solutions to the Problem

V-D1 Nash Equilibrium

This section discusses possible solutions for reaching a Nash equilibrium, in terms of optimization algorithms and controlling the number of training updates of each model.

In the vanilla GAN [2], Goodfellow demonstrated that an equilibrium can be achieved with an optimal discriminator during training. However, this is an ideal case that GANs do not meet in practice. The author [2] therefore proposed an algorithm that updates the discriminator multiple times per generator update to keep the discriminator close to optimal; in the vanilla GAN experiments themselves, the discriminator is updated only once per generator update, which was suitable for that specific setting. Similarly, WGAN [47] uses five discriminator (critic) updates per generator update to attain an equilibrium state.

Updating Algorithm

The uGAN work [26] proposed controlling the number of training updates of the discriminator and of the generator per iteration. The authors [26] adapted this approach to address the non-convergence problem while reaching a Nash equilibrium, in experiments synthesizing high-quality retinal images at 256 x 256 resolution. They report that using the same number of updates for both models yields high-quality images, and that a larger number of discriminator updates can still produce high-quality realistic images when the generator's update count is kept fixed. In contrast, noisy images are generated when the generator is updated more often than the discriminator.
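The interleaving of updates discussed above can be sketched as a simple schedule. The function and its parameters are illustrative, not uGAN's actual training code.

```python
def train_schedule(steps, k_d=1, k_g=1):
    """Interleaving of discriminator (D) and generator (G) updates:
    each outer step performs k_d D-updates followed by k_g G-updates."""
    trace = []
    for _ in range(steps):
        trace += ["D"] * k_d + ["G"] * k_g
    return trace

# Balanced updates (k_d = k_g) vs. WGAN-style extra critic updates
print(train_schedule(2, k_d=1, k_g=1))  # ['D', 'G', 'D', 'G']
print(train_schedule(1, k_d=5, k_g=1))  # ['D', 'D', 'D', 'D', 'D', 'G']
```

In an actual training loop, each "D" entry would correspond to one discriminator gradient step on a real/fake batch and each "G" to one generator step against the frozen discriminator.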

The synthetic images are evaluated with the SSIM metric: the mean, maximum, and mean-maximum SSIM values between synthetic and real images are measured to check quality and similarity, where a higher SSIM indicates higher similarity and quality. A mean SSIM of 0.61, a maximum SSIM of 0.73, and a mean-maximum SSIM of 0.81 are achieved.

Learning Rate

To address the non-convergence problem, [77] proposed the Two Time-scale Update Rule (TTUR). The authors argued that a local Nash equilibrium can be reached by using distinct learning rates for the discriminator and the generator instead of multiple-update algorithms. However, the choice of appropriate learning rates depends on the GAN architecture, the type of experiment, and the nature of the dataset.

[25] investigated the combined use of TTUR [77] and multiple discriminator updates in SPGGAN for skin lesion image synthesis, updating the discriminator 5 times for every single generator update. The update algorithm slows down the training process, while TTUR helps balance it to generate noise-free images.

SPGGAN-TTUR [25] produces visually more appealing generated images than SPGGAN. The results are evaluated with a paired t-test, which gives the mean difference between two sample observations. The p-value of the t-test (PVT) is calculated to check the performance of SPGGAN-TTUR on generating synthetic training- and test-set images, and SPGGAN-TTUR outperforms SPGGAN on both. However, SPGGAN-TTUR [25] still suffers from artifacts in the generated images, which require further research.

Hyperparameter Optimization

In GANs, the choice of appropriate hyperparameters to control the discriminator and the generator is a challenging task. To address this problem, optimization techniques can be used to obtain adaptive losses for updating the weights of the generator.

[27] proposed an Optimized GAN to generate synthetic chest CT images of COVID-19 disease. The Optimized GAN utilizes a CGAN with the Whale Optimization Algorithm (WOA) [79] to optimize its hyperparameters. WOA mimics the hunting behavior of humpback whales in locating prey, and this behavior is used to determine the generator's best search agents given the discriminator. To update the positions of the search agents, the optimization follows three rules. First, the leader whale finds the prey's position and encircles it; correspondingly, the generator's search agents calculate the fitness function at each iteration to find the best position and then update their positions. Second, the distance between the prey and the generator's search agents is measured, and the search agents update their positions based on this measure. Third, the same as the first rule, except that the search agents update their positions based on a random search instead of the best search. The Optimized GAN [27] improves the performance of the discriminator and can generate adaptive losses to update the weights of the generator to produce good-quality, diverse images.

The performance of the Optimized GAN [27] is compared with the baseline CGAN. The generated images are combined with the training images for classification tasks. An accuracy of 98.78% is achieved with the Optimized GAN, while 91.60% accuracy and a 90.99% F1-score are achieved with the baseline CGAN. This shows that the Optimized GAN performs better on accuracy and F1-score measures, as well as in optimizing hyperparameters for a balanced GAN.
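The three update rules above can be sketched as follows. This is a minimal, generic WOA that minimizes a toy sphere function rather than scoring GAN hyperparameters, and all parameter values (population size, iteration count, bounds) are illustrative assumptions.

```python
import math, random

# Simplified Whale Optimization Algorithm (Mirjalili & Lewis) sketch,
# here minimizing a toy sphere function. In the Optimized GAN setting
# the fitness would instead score a hyperparameter configuration of the
# generator against the given discriminator.

def woa_minimize(fitness, dim=2, n_agents=10, n_iter=50, bound=5.0, seed=0):
    rng = random.Random(seed)
    X = [[rng.uniform(-bound, bound) for _ in range(dim)] for _ in range(n_agents)]
    best = min(X, key=fitness)[:]
    for t in range(n_iter):
        a = 2 - 2 * t / n_iter                       # a decreases linearly 2 -> 0
        for i in range(n_agents):
            r, p = rng.random(), rng.random()
            A, C = 2 * a * r - a, 2 * rng.random()
            if p < 0.5:
                # rules 1 and 3: encircle the best agent (|A| < 1)
                # or a randomly chosen agent (|A| >= 1)
                target = best if abs(A) < 1 else X[rng.randrange(n_agents)]
                X[i] = [tj - A * abs(C * tj - xj) for tj, xj in zip(target, X[i])]
            else:
                # rule 2: spiral (bubble-net) move toward the best agent
                l = rng.uniform(-1, 1)
                X[i] = [abs(bj - xj) * math.exp(l) * math.cos(2 * math.pi * l) + bj
                        for bj, xj in zip(best, X[i])]
            if fitness(X[i]) < fitness(best):        # keep the best-so-far agent
                best = X[i][:]
    return best

best = woa_minimize(lambda x: sum(v * v for v in x))
```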

V-D2 Summary

In this section, technical papers on GANs are reviewed that address the non-convergence problem in the domain of biomedical imagery. Achieving a Nash equilibrium during the training of GANs is a remedy to the non-convergence problem [43], but training GANs at an equilibrium state is not an easy task. With this concept in mind, the reviewed papers are classified into three categories, as shown in Fig. 7: first, updating algorithms [26]; second, learning rate [25]; and third, hyperparameter optimization [27]. Another taxonomy is also proposed based on biomedical imaging applications, as shown in Fig. 8. This is further classified into image modality types such as dermoscopic [25], CT [27], and retinal images [26].

The updating algorithm is reviewed for the vanilla GAN [2], WGAN [47], and then the state-of-the-art uGAN [26]. The updating algorithms in the vanilla GAN [2] and WGAN [47] are proposed for the general imagery domain, while the updating algorithm in uGAN [26] is proposed for the biomedical imagery domain. All of these propose strategies for setting the number of discriminator updates per generator update during training, and show that their solutions work better in attaining an equilibrium state while training GANs.

Another idea for achieving equilibrium in training GANs is proposed by [77], which uses adaptive learning rates for the discriminator and the generator. This technique is used by Abdelhalim et al. [25] to address the non-convergence problem in the biomedical domain. The hyperparameter optimization approach is also helpful in reaching the Nash equilibrium; for this, Goel et al. [27] investigated the use of optimization algorithms such as the Whale Optimization Algorithm (WOA) [79] for biomedical imagery.

To summarize this section, Table II shows a comparison of the techniques adapted by the contributing papers based on the underlying problem. All of the technical papers address image synthesis for CT, dermoscopic, and retinal image modalities. Among the contributed solutions, the TTUR [77] scheme provides relatively good performance in addressing the non-convergence problem in the biomedical imaging domain, and high-quality realistic images can be achieved using this approach.

Vi The Instability Problem

Vi-a Definition

The training of GANs can become unstable due to the vanishing gradient problem. The vanishing gradient problem occurs when the discriminator becomes an optimal classifier and produces very small gradient values (approaching zero) for back-propagation. These gradients are unable to update the weights of the generator, so the generator stops producing new images and the overall training of the GAN becomes unstable [43].
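This can be seen numerically with a sigmoid discriminator and the saturating generator loss log(1 - D(G(z))); the setup below is an illustrative derivative computation, not a full GAN.

```python
import math

# The generator's saturating loss log(1 - D(G(z))) with a sigmoid
# discriminator D = sigmoid(a) has gradient w.r.t. the logit a:
# d/da log(1 - sigmoid(a)) = -sigmoid(a), which vanishes as the
# discriminator grows confident the sample is fake (a -> -inf).

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def generator_grad_wrt_logit(a):
    return -sigmoid(a)

weak_D_grad = generator_grad_wrt_logit(0.0)       # undecided D: gradient -0.5
optimal_D_grad = generator_grad_wrt_logit(-10.0)  # near-optimal D: ~ -4.5e-5
```

With a near-optimal discriminator, the gradient reaching the generator is roughly four orders of magnitude smaller, which is exactly the stalled-update behavior described above.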

Vi-B Identification

The unstable training behavior can be identified by checking the quality of generated images. Moreover, training under this problem takes an excessive amount of time without terminating, and results in poor-quality generated images.

Vi-C Quantification

The instability problem of training GANs can be evaluated by the same metrics that are used for mode collapse and non-convergence problems such as single-scale SSIM, FID, and PSNR. The generated images can be evaluated in terms of similarity measures.
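As a concrete example of one of these measures, a minimal PSNR computation might look as follows; the 8-bit grayscale range and the toy arrays are assumptions for illustration.

```python
import numpy as np

# Minimal PSNR between a real and a generated image (8-bit range assumed).
# Higher PSNR means the generated image is closer to the reference.

def psnr(real, generated, max_val=255.0):
    mse = np.mean((real.astype(np.float64) - generated.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                      # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

real = np.full((8, 8), 100, dtype=np.uint8)      # toy "real" image
fake = real.copy()
fake[0, 0] = 110                                 # one pixel off by 10
score = psnr(real, fake)
```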

[Fig. 9 shows a taxonomy tree. Proposed solutions for the instability problem in GANs: (i) Modified Architecture (Generator: ciGAN [30] (2018), CF-SAGAN [29] (2020)); (ii) Loss Function (Adversarial: S-CycleGAN [75] (2020); Regularization: Modified CGAN [18] (2019), AEGAN [20] (2019), DCR AE Alpha GAN [19] (2020), MSG-GAN [28] (2020)).]

Fig. 9: Taxonomy of different proposed solutions for addressing the instability problem of GANs in biomedical imagery analysis

[Fig. 10 shows a taxonomy tree. Instability problem in different applications of GANs: (i) Image Segmentation (Mammography images: ciGAN [30] (2018)); (ii) Image Synthesis (Conditional image synthesis: CF-SAGAN [29] (2020); Unconditional image synthesis, MR images: MSG-GAN [28] (2020)); (iii) Image Reconstruction (PET images: S-CycleGAN [75] (2020)).]

Fig. 10: An application-based taxonomy of different approaches for addressing the instability problem of GANs in biomedical imagery analysis

Vi-D Solutions to the Problem

In synthetic image generation using GANs, the stability of the GAN is an important aspect to consider. If the training of a GAN becomes unstable, the network cannot generate high-resolution realistic images. To alleviate this problem, the following solutions have been proposed for the domain of biomedical imagery.

Vi-D1 Modified Architecture

The architecture of a GAN plays a key role in avoiding the vanishing gradient problem, and the design of the generator and the discriminator has a great impact on training performance. To synthesize PET images from multi-sequence MR images, a Refined CF-SAGAN is proposed [29]. In the proposed architecture [29], the vanishing gradient problem occurs when long skip connections are used in the generator to recover the spatial information lost during down-sampling operations. Short skip connections, known as residual connections [80], are then used to handle this problem. A residual connection helps to mitigate the vanishing gradient problem by providing an alternative shortcut path for the gradient to flow through, and it also enhances feature exchange across layers. The generated synthetic PET images are evaluated with PSNR for image quality, and the proposed Refined CF-SAGAN achieved a significantly higher PSNR (p < 0.05).
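The short-skip idea can be illustrated with a minimal residual block; the transform below is a stand-in for the generator's convolutional layers.

```python
import numpy as np

# A residual (short skip) connection: output = x + F(x). Because the
# identity path contributes a unit term to the Jacobian, gradients can
# flow backward even when the learned transform F saturates.

def residual_block(x, transform):
    return x + transform(x)

x = np.ones(4)
out = residual_block(x, lambda v: 0.1 * v)   # toy learned transform F
```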

The generation of high-dimensional synthetic images is a challenging task for GANs. To address this problem in biomedical imaging, a modified architecture named ciGAN is proposed [30]. The ciGAN [30] utilizes a multi-scale generator architecture to infill a segmented area in a target breast mammography image. The proposed generator uses a cascaded refinement network that generates features at multiple scales before they are concatenated, which improves training stability at high resolutions. The generated synthetic images are used for data augmentation in a cancer detection task using ResNet-50, alongside traditional augmentation techniques such as rotation, flipping, and rescaling. The proposed ciGAN with traditional augmentation achieved an area under the curve (AUC) score of 0.896, while the real dataset with no augmentation achieved a 0.882 AUC score.

Vi-D2 Loss Function

Adversarial

Vanilla GANs use a cross-entropy loss, usually described as the adversarial loss, which can cause the vanishing gradient problem. To address this, the WGAN loss can be utilized as the adversarial loss instead (please refer to Section 2.4.3 (WGAN) for more detail). A similar approach is found in the task of reconstructing low-dose PET images from full-dose PET images [75]. The authors [75] use the 1-Wasserstein distance instead of cross-entropy in a supervised CycleGAN, namely S-CycleGAN, to improve the training stability of the proposed network. To evaluate the quality of the generated low-dose images, the authors [75] utilized the learned perceptual image patch similarity (LPIPS) score, where a lower value indicates better image quality with respect to the actual image patches. The S-CycleGAN achieved an LPIPS score of 0.026, which is small compared to 0.035 for the actual low-dose PET images. The results show the better performance of S-CycleGAN regarding training stability.
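For illustration, the Wasserstein critic and generator objectives replace the saturating cross-entropy terms with unbounded linear scores; the score values below are illustrative critic outputs, not values from [75].

```python
# Wasserstein (critic) objectives: unbounded linear scores replace the
# saturating cross-entropy terms of the vanilla adversarial loss.

def wgan_critic_loss(real_scores, fake_scores):
    """Critic maximizes E[D(real)] - E[D(fake)]; negated to a minimized loss."""
    return -(sum(real_scores) / len(real_scores)
             - sum(fake_scores) / len(fake_scores))

def wgan_generator_loss(fake_scores):
    """Generator maximizes E[D(fake)]; the linear loss never saturates."""
    return -sum(fake_scores) / len(fake_scores)

critic_loss = wgan_critic_loss([2.0, 3.0], [-1.0, 0.0])
gen_loss = wgan_generator_loss([-1.0, 0.0])
```

Because the generator loss is linear in the critic's score, its gradient does not shrink as the critic becomes confident, in contrast to the cross-entropy case.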

Regularization

This section elaborates on the use of regularization terms with additional loss functions in GANs to stabilize the training of GANs.

Gradient penalization (GP) is used to force the discriminator to produce meaningful gradients. For this, the discriminator D is enforced to be Lipschitz continuous [81]; GP drives the norm of the discriminator's gradient toward 1. Lipschitz continuity is defined as shown in Eq. (14), reproduced from [40]:

|D(x1) - D(x2)| <= K ||x1 - x2||  (14)

In Eq. (14), the left side denotes the change in the discriminator's output, and K is a real constant known as the Lipschitz constant [40]; the inequality must hold for all inputs x1 and x2. To address the training instability problem, GP is applied as a regularization term using the L2 norm, defined as the expected squared deviation of the gradient norm from one, lambda * E[(||grad D(x_hat)||_2 - 1)^2]. In this way, gradients whose norms deviate from one are penalized.

The gradient penalty regularization term is investigated by Gulrajani et al. [81] with the WGAN loss to improve the training stability of the network.

In the biomedical imaging domain, the WGAN-GP loss is used as an additional loss in many biomedical image analysis tasks, such as the synthesis of cervical histopathology images [18] and MR images [19] [20], to improve training stability. In the multi-scale gradient GAN (MSG-GAN) [28], a WGAN-GP loss is used to train the network and improves training stability.
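The penalty term can be sketched as follows; the gradient norms are illustrative stand-ins for autograd results, and lambda = 10 follows the common WGAN-GP default.

```python
import random

# WGAN-GP style penalty: push the critic's gradient norm at interpolated
# samples toward 1 (two-sided penalty, lam * E[(||grad|| - 1)^2]).

def gradient_penalty(grad_norms, lam=10.0):
    return lam * sum((g - 1.0) ** 2 for g in grad_norms) / len(grad_norms)

def interpolate(real, fake, rng=random):
    """x_hat = eps * real + (1 - eps) * fake, eps ~ U(0, 1), per sample pair."""
    eps = rng.random()
    return [eps * r + (1 - eps) * f for r, f in zip(real, fake)]

penalty = gradient_penalty([1.0, 1.5, 0.5])   # norms above and below 1 are penalized
```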

Vi-D3 Summary

In this section, technical papers on GANs are reviewed that address the instability problem in the domain of biomedical imagery. Unstable training is triggered by the vanishing gradient problem, when the discriminator becomes optimal and sends no feedback to update the generator's weights, as shown in Fig. 2. To stabilize the training of GANs, the generator should receive significant feedback in the form of gradients from the discriminator in order to produce high-quality realistic images. With this aim, many solutions have been proposed in the domain of biomedical imaging, and the technical papers are classified into two taxonomies. The first is based on solutions in terms of modified architectures and loss functions, as shown in Fig. 9. The second is based on applications with different image modalities, as shown in Fig. 10.

Under modified architectures, technical papers provide solutions by changing either layers of the generator, as in [29], or the complete generator, as in [30]. Both solutions provide stable conditioned training of the proposed GANs, but some artifacts are found in the output images. The loss function plays a key role in addressing the vanishing gradient problem, because it backpropagates feedback in the form of gradients to update the generator's weights; when the discriminator becomes optimal, its loss approaches zero and cannot provide feedback to the generator. The reviewed technical papers providing solutions in biomedical imagery are further classified into adversarial loss [75] and regularization loss [18] [19] [20]. The WGAN loss is used as an adversarial loss in [75], while the WGAN-GP loss is used as a regularization loss in [18] [19] [20] [28] to address the instability problem in different application-based solutions.

To address the instability problem in the biomedical imaging domain, Table II shows a comparative analysis of the different approaches in the literature. The analysis suggests that the WGAN-GP loss [81] can be a suitable candidate for addressing the training instability problem in biomedical imagery, as it works with various GAN architectures to alleviate the problem, and high-quality, realistic generated images can be obtained with it.

Vii Challenges and Future Research Directions

Vii-a The Mode Collapse Problem

In biomedical image analysis, mode collapse is one of the most severe problems that occurs during the training of GANs, with a direct impact on the diversity of the synthetic images: they lack diversity compared to the real images. Due to this problem, the generator misses salient features of the image and repeats the same features when generating new synthetic images. It is challenging for researchers to train a GAN while completely avoiding the mode collapse problem and its subsequent impact on the synthetic images. The underlying problem behaves differently across GAN-based applications of biomedical image analysis. For example, mode collapse occurs when a GAN uses a segmented mask with ground-truth chest radiographs to generate segmented radiographs. Similarly, significant features of cell images can be affected and missed during GAN-based generation of synthetic images, and the mode collapse problem also arises from the complexity of 3-dimensional brain MR images in image synthesis. Modifications to GANs such as perceptual image hashing [22], a mixture of distributions in the generator [23], and VAEGAN-based architectures [19] have been used to alleviate the mode collapse problem.

Several techniques have been used to address the mode collapse problem in biomedical image analysis. It is critical to train the generator and the discriminator in such a manner that the generator learns the complete distribution of features and anatomical structures of biomedical images while the discriminator returns constructive feedback to the generator. Modifications to the generator or discriminator architectures or their loss functions can alleviate mode collapse but do not solve the problem completely. Thus, there is a research gap for a solution, based on either architecture or loss function, that is capable of fully addressing the mode collapse problem in biomedical image analysis. Proposed solutions may consider the performance of the generated images to analyze the effect of mode collapse, as such analysis can better direct researchers toward an effective solution. It is important to address the mode collapse problem during training so that GAN-based applications can be utilized effectively in biomedical image analysis. Future research directions include modified architectures based on state-of-the-art attention networks, novel regularization techniques, capsule networks, and advanced normalization techniques for biomedical image generation. Autoencoders are also recognized as a significant technique to address the mode collapse problem in GANs, but they generate blurry images; nevertheless, autoencoders with powerful discriminators can improve the existing solutions in the biomedical imaging domain.

Vii-B The Non-convergence Problem

In GANs, non-convergence is a failure of the generator and the discriminator models to reach a balanced state. When the training of a GAN becomes unbalanced, there is a direct impact on the synthetic images, which can be blurry or contain artifacts. It is critical to train a GAN so that both models remain in a balanced state throughout training. One solution is to reach a Nash equilibrium, which is very difficult in practice: the GAN can become stuck at a saddle point where the objective function yields minimal weight parameters for one model and maximal weight parameters for the other. However, a minimax game can be used to find a Nash equilibrium. In biomedical image analysis, researchers have devised methodologies to address the non-convergence problem, such as optimization algorithms like Whale Optimization, improved learning rates, and novel updating algorithms for training the generator and the discriminator.

The non-convergence problem remains a potential challenge for GANs during training. Updating algorithms such as the one proposed for the vanilla GAN are limited to its initial experiments, and the updating algorithm of WGAN works for only a few applications in achieving a Nash equilibrium. Similarly, TTUR and hyperparameter optimization techniques work for limited architectures and lack generalization ability, so there is a need for a compact and generalized solution for achieving the Nash equilibrium during the training of GANs. Non-convergence is a generic problem for GANs, and researchers use JS divergence to find a balanced state during training, which is difficult to achieve in practice. Different techniques have been proposed to cope with this problem, such as f-divergences and improved Wasserstein loss functions, which still need improvement; these approaches can be used with different GAN architectures to address the underlying problem in biomedical image analysis. Future research directions should focus on advancing JS divergence to balance the training of GANs while considering different optimization techniques such as stochastic gradient descent and Pareto optimality. Novel game theories with divergences can also be explored based on existing schemes to help GANs address the non-convergence problem.

Vii-C Instability Problem

The training stability of GANs is important to achieve for any GAN-based application of biomedical image analysis. The problem occurs due to the vanishing gradient, and the proposed solutions include modified architectures and modified loss functions. The loss function has a great impact on stabilizing the training of GANs, so the WGAN-GP loss [81] is analyzed in almost all of the reviewed technical papers. The WGAN-GP loss helps to acquire stable training in the reviewed solutions, but there is no guarantee or generalization criterion regarding its suitability and utility for other applications and other imaging modalities. It is important to consider that if a GAN manages its training strategy to achieve the Nash equilibrium and approaches an optimal discriminator, then the vanishing gradient problem is triggered by the discriminator's optimality, as discussed in Section VI-A. Training stability is also suspected to depend on the mode collapse and non-convergence problems, yet sometimes an architecture trains in a stable condition while still being affected by mode collapse, which raises questions about the performance of GANs. Therefore, all of these technical training challenges must be addressed in biomedical image analysis.

Future research directions should consider the above-mentioned constraints and propose novel techniques to address the instability problem in the biomedical imaging domain. Several approaches have been experimented with in GANs to stabilize training while addressing the vanishing gradient problem, but there is a need to devise novel regularization, normalization, and game-theoretic techniques that are as yet unexplored in GANs. WGAN-GP is a widely used loss for coping with this problem in the general imaging domain, yet it requires more work and modification to reach stable training of GANs. Hybrid multiple-GAN architectures based on the WGAN-GP loss, attention mechanisms, novel regularization, and optimization techniques can also be explored to address the underlying problem.

Vii-D Evaluation Metrics

In GANs, evaluation metrics play a key role in representing the performance of GANs. These metrics quantify problems such as mode collapse, non-convergence, and training instability during the training of GANs. Evaluation metrics like IS, FID, MS-SSIM, MMD, and PSNR have been used to evaluate the performance of GANs based on the generated images. Nevertheless, these metrics are application-dependent and lack the capacity to reveal the occurrence of these challenges during training.

In relation to the training challenges of GANs, evaluation metrics are used to capture the diversity and quality of the generated images. Generally, for the mode collapse problem, the diversity of images is quantified by the IS, MS-SSIM, and MMD metrics, while for the non-convergence and instability problems, PSNR and FID are used. IS and FID are frequently used to evaluate the quality of generated images; both rely on a network pretrained on the ImageNet dataset [82]. The ImageNet dataset lacks classes of biomedical images, so the IS and FID metrics are not recommended for the biomedical imaging domain. Similarly, MS-SSIM is a human-perception-based metric that only considers luminance and contrast estimations to measure the similarity of image features between two images, and PSNR is a widely used metric for image quality but is limited to monochrome images. In biomedical image analysis, the performance parameters vary with the type of imagery, as images from each domain have different features and properties.

In biomedical image analysis, researchers utilize traditional pixel-wise evaluation metrics to quantify the performance of GANs. Most traditional metrics are suitable for supervised learning tasks that require reference images, but in the biomedical imagery domain the availability of reference images is limited due to privacy issues and inaccurate manual annotation, which encourages the use of unsupervised learning. Furthermore, it is also important to evaluate the training performance of GANs because of the randomization of initialization, optimization, and the technical challenges discussed above. The evaluation of generated images against real images remains challenging and needs to be explored. A list of metrics for evaluating the performance of GANs is reported in [61]. In spite of all these proposed metrics, there is still a research gap in finding a metric that can capture salient features such as the texture and shape of objects in biomedical images. It is important to analyze the symptoms of each individual training problem of GANs across applications in biomedical image analysis, and an evaluation metric that can capture pre- and post-training dynamics of a GAN model is worth investigating. Such a metric should work with most image modalities, such as X-ray, MR, dermoscopic, ultrasound, and PET images, to measure the efficacy of GANs in the domain of biomedical imaging.

Viii Conclusion

In this survey, the training challenges of GANs, namely mode collapse, non-convergence, and instability, have been reviewed in detail for the domain of biomedical imagery. The three challenges are discussed via definitions, identification, quantification, and possible solutions. To address these training challenges in the biomedical imagery domain, the technical literature has been discussed through application-based and solution-based taxonomies. The existing literature shows that addressing these challenges entirely is a difficult task, but several techniques have been proposed that can partially alleviate them. Moreover, this survey elaborated on how each training problem can affect the quality of generated biomedical images in terms of realism, diversity, resolution, and artifacts, and how to cope with these challenges to generate high-quality images. It is concluded that all three technical challenges faced during the training of GANs need more research to bridge this gap for biomedical image analysis, which should motivate researchers to propose advanced solutions to address the underlying training challenges of GANs in the domain of biomedical imagery.

References

  • [2] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative Adversarial Nets,” in Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, ser. NIPS’14.    Cambridge, MA, USA: MIT Press, 2014, p. 2672–2680.
  • [3] D. Bhattacharya, S. Banerjee, S. Bhattacharya, B. U. Shankar, and S. Mitra, “GAN-Based Novel Approach for Data Augmentation with Improved Disease Classification,” in Advancement of Machine Intelligence in Interactive Medical Image Analysis.    Springer, 2020, pp. 229–239.
  • [4] Z. Qin, Z. Liu, P. Zhu, and Y. Xue, “A GAN-based image synthesis method for skin lesion classification,” Computer Methods and Programs in Biomedicine, vol. 195, p. 105568, 2020.
  • [5] G. Shi, J. Wang, Y. Qiang, X. Yang, J. Zhao, R. Hao, W. Yang, Q. Du, and N. G.-F. Kazihise, “Knowledge-guided synthetic medical image adversarial augmentation for ultrasonography thyroid nodule classification,” Computer Methods and Programs in Biomedicine, vol. 196, p. 105611, 2020.
  • [6] D. Lee, H. Yu, X. Jiang, D. Rogith, M. Gudala, M. Tejani, Q. Zhang, and L. Xiong, “Generating sequential electronic health records using dual adversarial autoencoder,” Journal of the American Medical Informatics Association, vol. 27, no. 9, pp. 1411–1419, 2020.
  • [7] L. Zhao, J. Wang, L. Pang, Y. Liu, and J. Zhang, “GANsDTA: predicting drug-target binding affinity using GANs,” Frontiers in genetics, vol. 10, p. 1243, 2020.
  • [8] A. Waheed, M. Goyal, D. Gupta, A. Khanna, F. Al-Turjman, and P. R. Pinheiro, “CovidGAN: Data Augmentation Using Auxiliary Classifier GAN for Improved Covid-19 Detection,” IEEE Access, vol. 8, pp. 91 916–91 923, 2020.
  • [9] M. Saini and S. Susan, “Deep transfer with minority data augmentation for imbalanced breast cancer dataset,” Applied Soft Computing, vol. 97, p. 106759, 2020.
  • [10] A. B. Qasim, I. Ezhov, S. Shit, O. Schoppe, J. C. Paetzold, A. Sekuboyina, F. Kofler, J. Lipkova, H. Li, and B. Menze, “Red-GAN: Attacking class imbalance via conditioned generation. Yet another medical imaging perspective.” in Medical Imaging with Deep Learning.    PMLR, 2020, pp. 655–668.
  • [11] Y. Mao, F.-F. Xue, R. Wang, J. Zhang, W.-S. Zheng, and H. Liu, “Abnormality Detection in Chest X-Ray Images Using Uncertainty Prediction Autoencoders,” in International Conference on Medical Image Computing and Computer-Assisted Intervention.    Springer, 2020, pp. 529–538.
  • [12] T. Zhou, H. Fu, G. Chen, J. Shen, and L. Shao, “Hi-net: hybrid-fusion network for multi-modal MR image synthesis,” IEEE transactions on medical imaging, vol. 39, no. 9, pp. 2772–2781, 2020.
  • [13] Y. Li, J. Li, F. Ma, S. Du, and Y. Liu, “High quality and fast compressed sensing MRI reconstruction via edge-enhanced dual discriminator generative adversarial network,” Magnetic Resonance Imaging, vol. 77, pp. 124–136, 2020.
  • [14] S. Liu, J. Hong, X. Lu, X. Jia, Z. Lin, Y. Zhou, Y. Liu, and H. Zhang, “Joint optic disc and cup segmentation using semi-supervised conditional GANs,” Computers in biology and medicine, vol. 115, p. 103485, 2019.
  • [15] N. H. N. Tegang, J.-R. Fouefack, B. Borotikar, V. Burdin, T. S. Douglas, and T. E. Mutsvangwa, “A Gaussian Process Model Based Generative Framework for Data Augmentation of Multi-modal 3D Image Volumes,” in International Workshop on Simulation and Synthesis in Medical Imaging.    Springer, 2020, pp. 90–100.
  • [16] C. Han, L. Rundo, R. Araki, Y. Furukawa, G. Mauri, H. Nakayama, and H. Hayashi, “Infinite brain MR images: PGGAN-based data augmentation for tumor detection,” in Neural approaches to dynamics of signal exchanges.    Springer, 2020, pp. 291–303.
  • [17] F. Pollastri, F. Bolelli, R. Paredes, and C. Grana, “Augmenting data with GANs to segment melanoma skin lesions,” Multimedia Tools and Applications, vol. 79, no. 21, pp. 15 575–15 592, 2020.
  • [18] Y. Xue, Q. Zhou, J. Ye, L. R. Long, S. Antani, C. Cornwell, Z. Xue, and X. Huang, “Synthetic augmentation and feature-based filtering for improved cervical histopathology image classification,” in International conference on medical image computing and computer-assisted intervention.    Springer, 2019, pp. 387–396.
  • [19] A. Segato, V. Corbetta, M. Di Marzo, L. Pozzi, and E. De Momi, “Data augmentation of 3D brain environment using Deep Convolutional Refined Auto-Encoding Alpha GAN,” IEEE Transactions on Medical Robotics and Bionics, 2020.
  • [20] G. Kwon, C. Han, and D.-s. Kim, “Generation of 3D Brain MRI Using Auto-Encoding Generative Adversarial Networks,” in International Conference on Medical Image Computing and Computer-Assisted Intervention.    Springer, 2019, pp. 118–126.
  • [21] L. Xu, X. Zeng, Z. Huang, W. Li, and H. Zhang, “Low-dose chest X-ray image super-resolution using generative adversarial nets with spectral normalization,” Biomedical Signal Processing and Control, vol. 55, p. 101600, 2020.
  • [22] T. Neff, C. Payer, D. Stern, and M. Urschler, “Generative Adversarial Network based Synthesis for Supervised Medical Image Segmentation,” in Proceedings of the OAGM&ARW Joint Workshop 2017.    Verlag der Technischen Universität Graz, 5 2017, pp. 140–145.
  • [23] Y. Wu, Y. Yue, X. Tan, W. Wang, and T. Lu, “End-to-end chromosome Karyotyping with data augmentation using GAN,” in 2018 25th IEEE International Conference on Image Processing (ICIP).    IEEE, 2018, pp. 2456–2460.
  • [24] G. Modanwal, A. Vellal, and M. A. Mazurowski, “Normalization of breast mris using cycle-consistent generative adversarial networks,” Computer Methods and Programs in Biomedicine, p. 106225, 2021.
  • [25] I. S. A. Abdelhalim, M. F. Mohamed, and Y. B. Mahdy, “Data augmentation for skin lesion using self-attention based progressive generative adversarial network,” Expert Systems with Applications, vol. 165, p. 113922, 2021.
  • [26] S. Biswas, J. Rohdin, and M. Drahanskỳ, “Synthetic Retinal Images from Unconditional GANs,” in 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC).    IEEE, 2019, pp. 2736–2739.
  • [27] T. Goel, R. Murugan, S. Mirjalili, and D. K. Chakrabartty, “Automatic Screening of COVID-19 Using an Optimized Generative Adversarial Network,” Cognitive Computation, pp. 1–16, 2021.
  • [28] S. Deepak and P. Ameer, “MSG-GAN Based Synthesis of Brain MRI with Meningioma for Data Augmentation,” in 2020 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT).    IEEE, 2020, pp. 1–6.
  • [29] W. Wei, E. Poirion, B. Bodini, M. Tonietto, S. Durrleman, O. Colliot, B. Stankoff, and N. Ayache, “Predicting PET-derived myelin content from multisequence MRI for individual longitudinal analysis in multiple sclerosis,” NeuroImage, vol. 223, p. 117308, 2020.
  • [30] E. Wu, K. Wu, D. Cox, and W. Lotter, “Conditional Infilling GANs for Data Augmentation in Mammogram Classification,” in Image Analysis for Moving Organ, Breast, and Thoracic Images, D. Stoyanov, Z. Taylor, B. Kainz, G. Maicas, R. R. Beichel, A. Martel, L. Maier-Hein, K. Bhatia, T. Vercauteren, O. Oktay, G. Carneiro, A. P. Bradley, J. Nascimento, H. Min, M. S. Brown, C. Jacobs, B. Lassen-Schmidt, K. Mori, J. Petersen, R. San José Estépar, A. Schmidt-Richberg, and C. Veiga, Eds.    Cham: Springer International Publishing, 2018, pp. 98–106.
  • [31] M. Wiatrak, S. V. Albrecht, and A. Nystrom, “Stabilizing Generative Adversarial Networks: A Survey,” arXiv preprint arXiv:1910.00927, 2019.
  • [32] A. Jabbar, X. Li, and B. Omar, “A Survey on Generative Adversarial Networks: Variants, Applications, and Training,” arXiv preprint arXiv:2006.05132, 2020.
  • [33]

    J. J. A. M. . A. G. Vignesh Sampath, Iñaki Maurtua, “A survey on generative adversarial networks for imbalance problems in computer vision tasks,”

    Journal of Big Data, vol. 8, no. 27, 2021.
  • [34] D. Saxena and J. Cao, “Generative Adversarial Networks (GANs): Challenges, Solutions, and Future Directions,” ACM Computing Surveys (CSUR), vol. 54, no. 3, pp. 1–42, 2021.
  • [35] Z. Pan, W. Yu, X. Yi, A. Khan, F. Yuan, and Y. Zheng, “Recent progress on generative adversarial networks (GANs): A survey,” IEEE Access, vol. 7, pp. 36322–36333, 2019.
  • [36] J. Gui, Z. Sun, Y. Wen, D. Tao, and J. Ye, “A Review on Generative Adversarial Networks: Algorithms, Theory, and Applications,” arXiv preprint arXiv:2001.06937, 2020.
  • [37] S. Kazeminia, C. Baur, A. Kuijper, B. van Ginneken, N. Navab, S. Albarqouni, and A. Mukhopadhyay, “GANs for medical image analysis,” Artificial Intelligence in Medicine, vol. 109, p. 101938, 2020.
  • [38] N. K. Singh and K. Raza, “Medical Image Generation Using Generative Adversarial Networks: A Review,” Health Informatics: A Computational Perspective in Healthcare, pp. 77–96, 2021.
  • [39] Y. Hong, U. Hwang, J. Yoo, and S. Yoon, “How Generative Adversarial Networks and Their Variants Work: An Overview,” ACM Computing Surveys (CSUR), vol. 52, no. 1, pp. 1–43, 2019.
  • [40] M. Lee and J. Seok, “Regularization Methods for Generative Adversarial Networks: An Overview of Recent Studies,” arXiv preprint arXiv:2005.09165, 2020.
  • [41] P. Shamsolmoali, M. Zareapoor, E. Granger, H. Zhou, R. Wang, M. E. Celebi, and J. Yang, “Image synthesis with adversarial networks: A comprehensive survey and case studies,” Information Fusion, 2021.
  • [42] Z. Wang, Q. She, and T. E. Ward, “Generative Adversarial Networks in Computer Vision: A Survey and Taxonomy,” ACM Computing Surveys (CSUR), vol. 54, no. 2, pp. 1–38, 2021.
  • [43] I. Goodfellow, “NIPS 2016 Tutorial: Generative Adversarial Networks,” arXiv preprint arXiv:1701.00160, 2016.
  • [44] A. Radford, L. Metz, and S. Chintala, “Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks,” arXiv preprint arXiv:1511.06434, 2015.
  • [45] D. P. Kingma and J. Ba, “Adam: A Method for Stochastic Optimization,” arXiv preprint arXiv:1412.6980, 2014.
  • [46] M. Mirza and S. Osindero, “Conditional Generative Adversarial Nets,” arXiv preprint arXiv:1411.1784, 2014.
  • [47] M. Arjovsky, S. Chintala, and L. Bottou, “Wasserstein generative adversarial networks,” in International Conference on Machine Learning.    PMLR, 2017, pp. 214–223.
  • [48] H. B. Yedder, B. Cardoen, and G. Hamarneh, “Deep learning for biomedical image reconstruction: A survey,” Artificial Intelligence Review, pp. 1–37, 2020.
  • [49] X. Yi, E. Walia, and P. Babyn, “Generative adversarial network in medical imaging: A review,” Medical image analysis, vol. 58, p. 101552, 2019.
  • [50] J. Nalepa, M. Marcinkiewicz, and M. Kawulok, “Data Augmentation for Brain-Tumor Segmentation: A Review,” Frontiers in computational neuroscience, vol. 13, p. 83, 2019.
  • [51] K. L.-L. Román, M. I. G. Ocaña, N. L. Urzelai, M. Á. G. Ballester, and I. M. Oliver, “Medical Image Segmentation Using Deep Learning,” in Deep Learning in Healthcare.    Springer, 2020, pp. 17–31.
  • [52] C. Tian, L. Fei, W. Zheng, Y. Xu, W. Zuo, and C.-W. Lin, “Deep learning on image denoising: An overview,” Neural Networks, vol. 131, pp. 251–275, 2020. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0893608020302665
  • [53] G. Haskins, U. Kruger, and P. Yan, “Deep learning in medical image registration: a survey,” Machine Vision and Applications, vol. 31, no. 1, pp. 1–18, 2020.
  • [54] Y. Li, B. Sixou, and F. Peyrin, “A Review of the Deep Learning Methods for Medical Images Super Resolution Problems,” IRBM, 2020.
  • [55] A. Alotaibi, “Deep Generative Adversarial Networks for Image-to-Image Translation: A Review,” Symmetry, vol. 12, no. 10, p. 1705, 2020.
  • [56] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, X. Chen, and X. Chen, “Improved techniques for training gans,” in Advances in Neural Information Processing Systems, D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett, Eds., vol. 29.    Curran Associates, Inc., 2016. [Online]. Available: https://proceedings.neurips.cc/paper/2016/file/8a3363abe792db2d8761d6403605aeb7-Paper.pdf
  • [57] A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. Smola, “A kernel two-sample test,” The Journal of Machine Learning Research, vol. 13, no. 1, pp. 723–773, 2012.
  • [58] Z. Wang, E. P. Simoncelli, and A. C. Bovik, “Multiscale structural similarity for image quality assessment,” in The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003, vol. 2.    IEEE, 2003, pp. 1398–1402.
  • [59] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen, “Improved Techniques for Training GANs,” arXiv preprint arXiv:1606.03498, 2016.
  • [60] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the Inception Architecture for Computer Vision,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 2818–2826.
  • [61] A. Borji, “Pros and cons of GAN evaluation measures,” Computer Vision and Image Understanding, vol. 179, pp. 41–65, 2019.
  • [62] A. Odena, C. Olah, and J. Shlens, “Conditional Image Synthesis with Auxiliary Classifier GANs,” in International conference on machine learning.    PMLR, 2017, pp. 2642–2651.
  • [63] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE transactions on image processing, vol. 13, no. 4, pp. 600–612, 2004.
  • [64] A. Kudo, Y. Kitamura, Y. Li, S. Iizuka, and E. Simo-Serra, “Virtual thin slice: 3D conditional GAN-based Super-resolution for CT slice interval,” in International Workshop on Machine Learning for Medical Image Reconstruction.    Springer, 2019, pp. 91–100.
  • [65] F. Lau, T. Hendriks, J. Lieman-Sifry, S. Sall, and D. Golden, “ScarGAN: Chained Generative Adversarial Networks to Simulate Pathological Tissue on Cardiovascular MR Scans,” in Deep learning in medical image analysis and multimodal learning for clinical decision support.    Springer, 2018, pp. 343–350.
  • [66] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning.    MIT Press, 2016.
  • [67] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang et al., “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 4681–4690.
  • [68] Q. Hoang, T. D. Nguyen, T. Le, and D. Phung, “MGAN: Training Generative Adversarial Nets with Multiple Generators,” in International Conference on Learning Representations, 2018. [Online]. Available: https://openreview.net/forum?id=rkmu5b0a-
  • [69] T. Karras, S. Laine, and T. Aila, “A Style-Based Generator Architecture for Generative Adversarial Networks,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 4401–4410.
  • [70] D. P. Kingma and M. Welling, “Stochastic gradient VB and the variational auto-encoder,” in Second International Conference on Learning Representations, ICLR, vol. 19, 2014.
  • [71] S. Targ, D. Almeida, and K. Lyman, “Resnet in Resnet: Generalizing Residual Architectures,” arXiv preprint arXiv:1603.08029, 2016.
  • [72] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” in International Conference on Medical image computing and computer-assisted intervention.    Springer, 2015, pp. 234–241.
  • [73] A. Shrivastava, T. Pfister, O. Tuzel, J. Susskind, W. Wang, and R. Webb, “Learning from simulated and unsupervised images through adversarial training,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 2107–2116.
  • [74] L. Du, A. T. Ho, and R. Cong, “Perceptual hashing for image authentication: A survey,” Signal Processing: Image Communication, vol. 81, p. 115713, 2020.
  • [75] K. Zhao, L. Zhou, S. Gao, X. Wang, Y. Wang, X. Zhao, H. Wang, K. Liu, Y. Zhu, and H. Ye, “Study of low-dose PET image recovery using supervised learning with CycleGAN,” PloS one, vol. 15, no. 9, p. e0238455, 2020.
  • [76] M. Arjovsky and L. Bottou, “Towards Principled Methods for Training Generative Adversarial Networks,” arXiv preprint arXiv:1701.04862, 2017.
  • [77] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter, “GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium,” in Proceedings of the 31st International Conference on Neural Information Processing Systems, ser. NIPS’17.    Red Hook, NY, USA: Curran Associates Inc., 2017, p. 6629–6640.
  • [78] S. Xiang and H. Li, “On the effects of batch and weight normalization in generative adversarial networks,” arXiv preprint arXiv:1704.03971, 2017.
  • [79] S. Mirjalili and A. Lewis, “The Whale Optimization Algorithm,” Advances in engineering software, vol. 95, pp. 51–67, 2016.
  • [80] K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
  • [81] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. Courville, “Improved Training of Wasserstein GANs,” in Proceedings of the 31st International Conference on Neural Information Processing Systems, ser. NIPS’17.    Red Hook, NY, USA: Curran Associates Inc., 2017, p. 5769–5779.
  • [82] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A large-scale hierarchical image database,” in 2009 IEEE Conference on Computer Vision and Pattern Recognition.    IEEE, 2009, pp. 248–255.