In this paper, we propose a novel normalization method called gradient normalization (GN) to tackle the training instability of Generative Adversarial Networks (GANs) caused by the sharp gradient space. Unlike existing work such as gradient penalty and spectral normalization, the proposed GN only imposes a hard 1-Lipschitz constraint on the discriminator function, which increases the capacity of the discriminator. Moreover, the proposed gradient normalization can be applied to different GAN architectures with little modification. Extensive experiments on four datasets show that GANs trained with gradient normalization outperform existing methods in terms of both Frechet Inception Distance and Inception Score.
Generative Adversarial Networks (GANs) have recently achieved great success in synthesizing new data from a given prior distribution, which facilitates a variety of applications, e.g., super-resolution imaging and style transfer between domains [24, 36]. In the original definition, GANs consist of two networks: the generator aims to construct realistic samples to fool the discriminator, while the discriminator learns to discriminate real samples from the synthetic samples produced by the generator.
Although state-of-the-art GANs generate high-fidelity images that easily fool humans, the unstable training process remains a challenging problem. Therefore, a recent line of study focuses on overcoming the unstable training issue [2, 3, 10, 25, 27]. For example, one cause of unstable GAN training is the sharp gradient space of the discriminator, which leads to mode collapse in the training process of the generator. Although L2 normalization and weight clipping are simple but effective methods for stabilizing GANs, these additional constraints limit the model capacity of the discriminator. As a result, the generator is inclined to fool the discriminator before learning to generate realistic images. Another popular kind of approach formulates the discriminator as a Lipschitz continuous function bounded under a fixed Lipschitz constant by applying regularization or normalization to the discriminator [10, 22, 26, 27]. As such, the discriminator gradient space becomes smoother without significantly sacrificing the performance of the discriminator.
Imposing a Lipschitz constraint on the discriminator can be characterized by three properties. 1) Model- or module-wise constraint. If the constraint objective depends on the full model instead of the summation of internal modules, we define such methods to be model-wise constraints and the converse to be module-wise constraints. For instance, the gradient penalty (1-GP) is a model-wise constraint, while spectral normalization (SN) is a module-wise (layer-wise) constraint, i.e., the Lipschitz constant of the full model is bounded by the product of the per-layer Lipschitz constants, ‖f‖_Lip ≤ ∏_k ‖f_k‖_Lip. We argue that model-wise constraints are better since module-wise constraints limit the layer capacities and thus reduce the multiplicative power of neural networks. 2) Sampling-based or non-sampling-based constraint. If the approach requires sampling data from a fixed pool, such a method is defined as a sampling-based constraint. For example, 1-GP is a sampling-based constraint due to its regularization term, while SN is a non-sampling-based constraint since the normalization depends only on the model architecture. Non-sampling-based methods are expected to perform better than sampling-based methods since sampling-based constraints might not be effective for data that have not been sampled before. 3) Hard or soft constraint. If the Lipschitz constant of any function in the function space of the constrained discriminators is not greater than a fixed finite value, such an approach is defined as a hard constraint and the converse as a soft constraint. For example, SN is a hard constraint whose fixed finite value is equal to 1, while 1-GP is a soft constraint since the tightness of the constraint fluctuates with the regularization and thus the upper bound is not limited. A hard constraint is expected to perform better since a consistent Lipschitz constant can guarantee gradient stability on unseen data.
To the best of our knowledge, none of the constraints in previous work is model-wise, non-sampling-based, and hard at the same time. In this paper, we propose a new normalization method, named gradient normalization (GN), which enforces a Lipschitz constant of 1 on the discriminator model by dividing its output by its gradient norm. Unlike SN, the resulting Lipschitz constant does not decay through the multiplicative form of neural networks, since we treat the discriminator as a general function approximator and the calculated normalization term is independent of the internal layers. The proposed gradient normalization enjoys two favorable properties. 1) The normalization simultaneously satisfies the three properties above, i.e., it is a model-wise, non-sampling-based, and hard constraint, and it introduces no additional hyperparameters. 2) The implementation of GN is simple and compatible with different kinds of network architectures.
The contributions of our paper are summarized as follows.
In this paper, we propose a novel gradient normalization for GANs that strikes a good balance between stabilizing the training process and increasing the generation ability. To the best of our knowledge, this is the first work simultaneously satisfying the above-mentioned three properties.
We theoretically prove that the proposed gradient normalization is 1-Lipschitz constrained. This property helps the generator to avoid the gradient explosion or vanishing, and thus stabilizes the training process.
Experimental results show the proposed gradient normalization consistently outperforms the state-of-the-art methods with the same GAN architectures in terms of both Inception Score and Frechet Inception Distance. Our implementation is available at: https://github.com/basiclab/GNGAN-PyTorch.
Previous works addressing the issue of stabilizing GAN training mainly focus on two kinds of approaches: regularization and normalization of the discriminator. The idea behind these solutions is to prevent the discriminator from producing sharp gradients during the training process. Specifically, regularization-based methods add regularization terms to the optimization process to stabilize training. For example, gradient penalty-based approaches [10, 27, 31, 32] compute the gradient norm of random samples as the penalty term. Lipschitz regularization approximates the maximum perturbation by power iteration. Moreover, consistency regularization regularizes the discriminator outputs of two augmented samples. Orthogonal regularization forces the product of each layer's weight matrix and its transpose to be an identity matrix.
Normalization-based methods normalize each layer by different kinds of layer norms. For example, spectral normalization-based approaches [22, 19, 13] normalize layer weights by the spectral norm, and weight normalization normalizes layer weights by the L2 norm. It is worth noting that all the normalization-based approaches are non-sampling-based and are thus more general for stabilizing the training of GANs than regularization-based methods. For example, spectral normalization divides each discriminator weight matrix by its largest singular value so that the layer satisfies a Lipschitz constraint of 1 for all inputs. However, due to the multiplicative nature of neural networks, the final Lipschitz constant of the discriminator decays to a small value if the Lipschitz constant of each layer is smaller than 1. This limitation restricts the capacity of the discriminator and thus deteriorates the performance of the generative model. It is worth noting that GANs are inclined to collapse when using spectral normalization with the Wasserstein loss, which is also shown in the comments from the author of spectral normalization. We believe that this failure is caused by the imprecise dual-form solution of WGAN, since the discriminator function has a limited Lipschitz constant.
The game between the generator G and the discriminator D can be formulated as a minimax objective, i.e.,

min_G max_D E_{x∼p_r}[log D(x)] + E_{z∼p_z}[log(1 − D(G(z)))],  (1)

where p_r is the distribution of real data and p_g is the distribution defined by G (p_g = G#p_z is the pushforward measure and p_z is the prior distribution on the latent space). In this setting, the generator is guaranteed to converge to the real distribution if the discriminator is always optimal. However, training GANs suffers from many difficulties, including but not limited to gradient vanishing and gradient explosion. There are two major reasons causing the unstable training issues. First, optimizing the objective function (1) is equivalent to minimizing the Jensen–Shannon divergence (JS-divergence) between p_r and p_g [2, 3]. If p_r and p_g do not overlap, the JS-divergence is a constant value, which results in gradient vanishing. Second, finite real samples often make the discriminator overfit, which indirectly causes gradient explosion around real samples.
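The gradient-vanishing argument above can be checked numerically: for two distributions with disjoint supports, the JS-divergence stays at log 2 no matter how far apart the supports are, so it provides no useful learning signal. Below is a minimal sketch with discrete histograms; the helper name and bin layout are ours, for illustration only.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    # Jensen-Shannon divergence between two discrete distributions p and q.
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

bins = 100
for shift in (10, 30, 60):
    p = np.zeros(bins); p[0:5] = 0.2              # "real" distribution
    q = np.zeros(bins); q[shift:shift + 5] = 0.2  # "generated", disjoint support
    print(shift, js_divergence(p, q))  # ~log(2) = 0.693 for every shift
```

Moving the generated distribution closer or farther does not change the divergence until the supports overlap, which is exactly why the original objective gives the generator no gradient in this regime.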
WGAN is proposed to optimize GANs by minimizing the Wasserstein-1 distance between p_r and p_g, i.e.,

max_D E_{x∼p_r}[D(x)] − E_{x̃∼p_g}[D(x̃)],  (2)

where K is the Lipschitz constant of the discriminator D. ‖D‖_Lip is defined as follows:

‖D‖_Lip := sup_{x≠y} ‖D(x) − D(y)‖ / ‖x − y‖.  (3)

In other words, ‖D‖_Lip is the minimum real number L such that:

‖D(x) − D(y)‖ ≤ L‖x − y‖, ∀x, y.  (4)

It is worth noting that the metric ‖·‖ can be any vector norm. The discriminator in WGAN aims to approximate the Wasserstein distance by maximizing the objective function (2) under the Lipschitz constraint (4). Indeed, different choices of the Lipschitz constant do not affect the results, since the Lipschitz constant of a function can easily be scaled by multiplying the function by a scaling factor. Moreover, the Wasserstein metric has been proved to be more sensible than KL metrics when the learned distributions are supported by low-dimensional manifolds. Obviously, the approximation error is related to the discriminator capacity: if a discriminator can search in a larger function space, it can approximate the Wasserstein metric more precisely, and hence helps the generator better model the real distribution. Meanwhile, the Lipschitz constraint limits the steepness of the value surface and therefore alleviates the overfitting of the discriminator.
However, it is still a great challenge to impose the Lipschitz constraint on neural networks, since striking a good balance between the Lipschitz constraint and network capacity is a hard task. Many approaches have been proposed to achieve this constraint. Some of them directly limit the Lipschitz constant of each layer, but sacrifice the function space and obtain limited network capacity. On the contrary, weight clipping or regularization [3, 10] allows networks to search in a larger function space, but loosens the constraint. In the next section, we prove that the Lipschitz constant of a layer-wise Lipschitz constrained network is upper-bounded by that of any of its first k-layer subnetworks, and propose a normalization method to solve this issue.
Let f be a K-layer network, which can be formulated as a composition of affine transformations and activations:

f(x) = φ_K(W_K φ_{K−1}(⋯ φ_1(W_1 x + b_1) ⋯) + b_K),

where W_k and b_k are the parameters of the k-th layer, d_k is the target dimension of the k-th layer, and φ_k is the non-linear element-wise activation function at layer k. Let f_k denote the first k-layer subnetwork.
In order to analyze the behavior of a layer-wise constrained network, we also define layer-wise Lipschitz networks as follows.
Let f be a K-layer network. f is defined to be layer-wise k-Lipschitz constrained if there exists k such that k is the Lipschitz constant of every layer, i.e.,

‖φ_i(W_i x + b_i) − φ_i(W_i y + b_i)‖ ≤ k‖x − y‖, ∀i, ∀x, y.
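As a concrete illustration of a layer-wise constraint, the Lipschitz constant of a linear layer under the 2-norm is its largest singular value, which SN estimates by power iteration. Below is a minimal numpy sketch; the function names are ours and not taken from any official SN implementation.

```python
import numpy as np

def spectral_norm(W, n_iter=100):
    # Largest singular value of W via power iteration; this equals the
    # Lipschitz constant of the linear map x -> W @ x under the 2-norm.
    u = np.random.default_rng(0).standard_normal(W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    return float(u @ (W @ v))

rng = np.random.default_rng(1)
W = rng.standard_normal((64, 128))
# Dividing the weights by the spectral norm makes the layer 1-Lipschitz,
# which is exactly the layer-wise constraint used by SN.
W_sn = W / spectral_norm(W)
print(spectral_norm(W_sn))  # ~1.0
```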
Since the definition of the Lipschitz constraint (Inequality (4)) is a pairwise relation which associates only two data points at a time, sampling-based regularizations based on Inequality (4) cannot ensure that the Lipschitz constraint is tight everywhere for the discriminator. Such approaches are sampling-based and soft, and may cause gradient explosion due to a non-smooth decision boundary. Consequently, a non-pairwise condition of the Lipschitz constraint is needed to associate the Lipschitz constraint with gradients. Therefore, we propose the following lemma, which associates the Lipschitz constant with the gradient norm of the discriminator.
Let f: Rⁿ → R be a continuously differentiable function and L be the Lipschitz constant of f. Then the Lipschitz constraint (4) is equivalent to

‖∇_x f(x)‖ ≤ L, ∀x.
It is worth noting that the equivalence in Lemma 3 holds only when the underlying function is continuously differentiable from the perspective of multivariate functions. Practically, almost all neural networks have finitely many non-differentiable points and are therefore continuous functions. We extend this observation and give a more useful assumption for characterizing such thorny points.
Let f be a continuous function which is modeled by a neural network, and let all the activation functions of the network f be piecewise linear. Then the function f is differentiable almost everywhere.
This can be heuristically justified since the implementations of neural networks on a computer are subject to numerical error anyway. For complex arithmetic operations, e.g., matrix multiplication, the numerical errors accumulate, and therefore the output values are perturbed away from the non-differentiable points.
Lemma 3 motivates us to design a normalization technique by directly constraining the gradient norm. We first show that the Lipschitz constant of a layer-wise 1-Lipschitz constrained network, e.g., SN-GAN, may significantly decrease when the number of layers increases, which inspires the concept of the proposed gradient normalization. In the following, we assume that the activation functions are 1-Lipschitz functions (commonly used activation functions, e.g., ReLU, Leaky ReLU, Softplus, Tanh, Sigmoid, ArcTan, and Softsign, are 1-Lipschitz) and prove that the Lipschitz constant of a deeper network is bounded by that of its shallower subnetworks.
Let f be a layer-wise 1-Lipschitz constrained K-layer network. The Lipschitz constant of the first k-layer subnetwork f_k is upper-bounded by that of the first (k−1)-layer subnetwork, i.e.,

‖f_k‖_Lip ≤ ‖f_{k−1}‖_Lip.
It is difficult to ensure that the equality holds, especially when the network is optimized by stochastic-gradient-based methods. If the equality does not hold at the k-th layer, the Lipschitz constant of the first k-layer subnetwork is strictly less than the Lipschitz constant of the first (k−1)-layer subnetwork. By applying this rule iteratively, we have

‖f_K‖_Lip ≤ ‖f_{K−1}‖_Lip ≤ ⋯ ≤ ‖f_1‖_Lip ≤ 1.
Therefore, the Lipschitz constant may be drastically reduced when the number of layers increases. It is worth noting that this is a potential reason why SN-GAN fails to integrate with the Wasserstein distance when gradient-based regularization is not applied. On the other hand, Lipschitz constrained networks do not need to be layer-wise Lipschitz constrained, i.e., it is possible to build a network whose Lipschitz constraint comes from a model-wise characteristic of the network instead of a module-wise constraint.
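This depth-wise decay is easy to observe empirically. The sketch below (ours, not from the paper's code) stacks randomly initialized layers whose spectral norm is rescaled to exactly 1, as in SN, and estimates the end-to-end Lipschitz constant from random input pairs; the estimate shrinks as layers are added even though every layer is exactly 1-Lipschitz.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_sn_layer(d):
    # Random weights rescaled so the largest singular value is exactly 1.
    W = rng.standard_normal((d, d))
    return W / np.linalg.svd(W, compute_uv=False)[0]

def forward(x, layers):
    for W in layers:
        x = np.maximum(W @ x, 0.0)  # ReLU is 1-Lipschitz, so each layer is too
    return x

def empirical_lipschitz(layers, d, n_pairs=2000):
    # Lower bound on the true Lipschitz constant, estimated from random pairs.
    best = 0.0
    for _ in range(n_pairs):
        x, y = rng.standard_normal(d), rng.standard_normal(d)
        ratio = (np.linalg.norm(forward(x, layers) - forward(y, layers))
                 / np.linalg.norm(x - y))
        best = max(best, ratio)
    return best

d, layers = 16, [make_sn_layer(16) for _ in range(9)]
for k in (1, 3, 9):
    print(k, empirical_lipschitz(layers[:k], d))  # shrinks as depth grows
```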
Inspired by this observation, we propose a normalization method which strictly limits the Lipschitz constant while maintaining a high discriminator capacity. Specifically, as shown in Lemma 3, the Lipschitz constant is associated with the gradient norm. We then propose a new solution, gradient normalization (GN), to make the network search in a function space induced by constraint (7). Let f be a continuously differentiable function; the proposed GN normalizes f so that the gradient norm and the output of the normalized function f̂ are bounded simultaneously:

f̂(x) = f(x) / (‖∇_x f(x)‖ + φ(x)),  (11)

where φ(x) is a universal term, which can be associated with f(x) or set to a constant, introduced to avoid the situation that |f̂(x)| approaches infinity when ‖∇_x f(x)‖ approaches 0. Here, we propose to set φ(x) to |f(x)| and prove that this gradient normalization is still 1-Lipschitz constrained. After that, the variants of φ are discussed.
Let f be a continuous function which is modeled by a neural network, and let all the activation functions of the network be piecewise linear. The normalized function f̂(x) = f(x)/(‖∇_x f(x)‖ + |f(x)|) is 1-Lipschitz constrained, i.e.,

‖f̂(x) − f̂(y)‖ ≤ ‖x − y‖, ∀x, y.
The potential issues of the two basic choices of φ, i.e., φ(x) = 0 and φ(x) = c for a constant c, are summarized in Table 1. When a gradient normalized discriminator overfits, it gives confident predictions for both real and fake samples. From Eq. (11), confident predictions correspond to large |f(x)| and small ‖∇_x f(x)‖, which are the conditions shown in Table 1. Since the function value is not directly related to the gradient norm, confident predictions may make the normalized gradient norm and the normalized function value explode (the empirical results are discussed in Section 5.5). To deal with this problem, we therefore propose the formulation which sets φ(x) to |f(x)|. In this setting, when the discriminator saturates due to overfitting, the normalized gradient norm (12) is close to 0. This self-control mechanism prevents the generator from receiving an exploded gradient and consequently stabilizes the training process of GANs. The pseudocode of the proposed GN is presented in Algorithm 1.
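The normalization step can be realized in a few lines with automatic differentiation. Below is a minimal PyTorch sketch, assuming a scalar-output discriminator net; the function name and the toy discriminator are ours, and training details (losses, optimizers) are omitted.

```python
import torch

def gradient_normalize(net, x):
    # f_hat(x) = f(x) / (||grad_x f(x)|| + |f(x)|), i.e., Eq. (11) with phi = |f|.
    x = x.requires_grad_(True)
    f = net(x).flatten()                       # raw outputs, one scalar per sample
    grad = torch.autograd.grad(f, x, grad_outputs=torch.ones_like(f),
                               create_graph=True)[0]  # keep graph for training
    grad_norm = grad.flatten(start_dim=1).norm(p=2, dim=1)
    return f / (grad_norm + f.abs())

# f_hat simply replaces the raw discriminator output in any GAN loss.
disc = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.ReLU(),
                           torch.nn.Linear(32, 1))
f_hat = gradient_normalize(disc, torch.randn(4, 8))
print(f_hat.shape)  # torch.Size([4]); every value lies in [-1, 1]
```

Because the gradient is taken through the normalization (create_graph=True), the discriminator loss backpropagates through both the raw output and the gradient-norm term.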
Gradient Analysis of Gradient Normalization. The gradient of the normalized discriminator f̂ is derived as follows. Please note that φ(x) is set to |f(x)| and function arguments are omitted here for simplicity.
Interestingly, according to Eq. (13c), gradient normalization is a special form of adaptive gradient regularization. More specifically, in Eq. (13a), the first term is the gradient of the GAN objective, which improves the discriminating power, while the second term is a regularization which penalizes the gradient norm of f with an adaptive regularization coefficient. Compared with 0-GP and 1-GP, the gradient penalty in GN is more flexible and can automatically negotiate with the GAN loss. Consequently, this self-balancing mechanism makes GN a hard Lipschitz constraint.
Comparisons of Different Approaches. Table 2 summarizes these three properties for several well-known methods. For the regularization-based methods [4, 10, 27, 35], the constraints are always soft due to the trade-off between the regularization and the objective of GANs. Furthermore, the non-sampling-based approaches [4, 22] consider the network as a function composed of multiple layers and impose constraints on individual layers to achieve the Lipschitz constraint on the full model. By Theorem 5, these layer-wise constraints potentially reduce the capacity of the discriminator and therefore sacrifice the generation quality. In contrast, gradient normalization does not depend on a specific subset of data and is applicable to the full model. Accordingly, by Theorem 6, such normalization is a model-wise, non-sampling-based, and hard constraint.
To evaluate gradient normalization, we first conduct experiments on unconditional and conditional image generation on two standard datasets, namely the CIFAR-10 and STL-10 datasets. CIFAR-10 contains 60k images of size 32×32, partitioned into 50k training instances and 10k testing instances. STL-10 is designed for developing unsupervised feature learning and contains 5k training images, 8k testing images, and 100k unlabeled images of size 96×96. Moreover, we also test the proposed method on two datasets with a higher resolution, CelebA-HQ and LSUN Church Outdoor. CelebA-HQ contains 30k human faces, and LSUN Church Outdoor is a subset of the LSUN dataset containing 126k church outdoor scenes; both are used at a resolution of 256×256.
Inception Score and FID with unconditional image generation on CIFAR-10 and STL-10. We report the average and standard deviation of the results trained with 5 different random seeds. Note that “-” denotes that the result is not reported in the original paper. For methods whose original papers do not provide an evaluation on STL-10, we provide an implementation for reference; we also provide re-implementation results for reliable comparison.
Two popular evaluation metrics for generative models, i.e., Inception Score (IS) and Frechet Inception Distance (FID), are used to quantitatively evaluate the proposed method. For a fair comparison, all evaluations are computed by the official implementations of the Inception Score and FID. In order to compare with previous works which did not carefully follow the standard evaluation protocol, our evaluation covers several different configurations for calculating FID [8, 17]. Furthermore, we record the best model checkpoint in terms of FID throughout training and report the averaged results. For more details of the evaluation, see Appendix C in the supplementary material.
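For reference, FID is the Frechet distance between two Gaussians fitted to feature statistics: ‖μ_a − μ_b‖² + Tr(Σ_a + Σ_b − 2(Σ_a Σ_b)^{1/2}). A minimal numpy sketch on synthetic features is shown below; in practice the features come from an Inception network, and the toy data here are ours.

```python
import numpy as np

def fid(feat_a, feat_b):
    # Frechet distance between Gaussians fitted to two feature sets:
    # ||mu_a - mu_b||^2 + Tr(S_a + S_b - 2 (S_a S_b)^{1/2})
    mu_a, mu_b = feat_a.mean(0), feat_b.mean(0)
    s_a = np.cov(feat_a, rowvar=False)
    s_b = np.cov(feat_b, rowvar=False)
    # Tr((S_a S_b)^{1/2}) via eigenvalues of S_a S_b (nonnegative for PSD factors)
    eigs = np.linalg.eigvals(s_a @ s_b)
    tr_sqrt = np.sqrt(np.clip(eigs.real, 0, None)).sum()
    return float(((mu_a - mu_b) ** 2).sum()
                 + np.trace(s_a) + np.trace(s_b) - 2 * tr_sqrt)

rng = np.random.default_rng(0)
real = rng.standard_normal((5000, 8))
close = rng.standard_normal((5000, 8)) * 1.05        # nearly the same distribution
far = rng.standard_normal((5000, 8)) * 1.05 + 3.0    # clearly shifted distribution
print(fid(real, close), fid(real, far))  # the shifted set scores much higher
```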
Table 3 compares the proposed GN-GAN with several state-of-the-art models, i.e., SN-GAN, WGAN-GP, and CR-GAN. The results manifest that the proposed GN-GAN outperforms existing normalization methods in terms of Inception Score and FID. We also combine GN-GAN with consistency regularization (CR), named GN-GAN-CR, by directly replacing spectral normalization with GN. The result indicates that GN-GAN-CR can further improve on SN-GAN-CR in both Inception Score and FID by simply replacing the normalization method. It is worth noting that the Wasserstein loss degenerates into the hinge loss in the proposed GN-GAN since GN makes the output of the network range within [−1, 1]. Therefore, we only test our model with two loss functions: the hinge loss and the non-saturating (NS) loss. We obtain the best performance by using the hinge loss with ResNet and the NS loss with the Standard CNN. The architectures and most of the optimization parameters in our experiments are the same as in previous work. We stop training after 200k generator update steps for all the experiments shown in Table 3. The learning rate of the discriminator is slightly increased for the ResNet architecture. In GN-GAN-CR, we test several regularization weights on CIFAR-10 with ResNet and adopt the best-performing one.
We further conduct experiments on the CIFAR-10 dataset with the same architecture proposed by BigGAN. Similarly, SN in the discriminator is replaced by GN (following BigGAN, SN is still used in the generator). Table 4 shows that GN can further improve BigGAN by 31.8% in terms of FID. It is worth noting that we increase the discriminator learning rate to two times that of the generator during training. This modification is motivated by the self-control mechanism of GN, which makes the outputs of the discriminator saturate and thus requires more steps to give confident predictions.
To show that the proposed gradient normalization is able to generate high-resolution images, we leverage the architecture proposed by SN-GAN for generating 256×256 images on CelebA-HQ and LSUN Church Outdoor. Similarly, SN is replaced by GN in the experiments. In Figures 1 and 2, the generated results show competitive quality. Due to the space constraint, more samples and quantitative results are provided in Appendix D of the supplementary materials.
According to Theorem 5, the Lipschitz constant decreases as the number of layers increases. To explain the results clearly, we first introduce Corollary 1 in WGAN-GP, which states that if the GAN is trained with the Wasserstein loss, there exists at least one optimal discriminator whose gradient norm is 1 almost everywhere under the support of p_r and p_g. Accordingly, by Lemma 3, the Lipschitz constant of such an optimal discriminator is equal to 1 almost surely. We therefore design experiments to test SN and GN with the Wasserstein loss. Figures 2(a) and 2(b) respectively show the Inception Scores and the Lipschitz constants of discriminators with regard to the training iterations for different approaches on the CIFAR-10 dataset, where kL means both the generator and the discriminator are modeled with k convolutional layers, and the Lipschitz constant is estimated by sampling 50k data points from each of p_r and p_g. However, Figures 2(a) and 2(b) show that the generators of SN-GANs do not reach a high IS and the Lipschitz constants of the discriminators are much smaller than 1. Moreover, the discriminators of SN-GANs cannot well approximate the Wasserstein distance under the Lipschitz constraint even though the above corollary guarantees the existence of an optimal discriminator. We believe the reason is that SN over-restricts the Lipschitz constants of discriminators, and hence the discriminators cannot increase their Lipschitz constants to approach 1. This situation becomes worse when the number of layers is increased. Conversely, the magnitude of the Lipschitz constants of GN-GANs is invariant to the depth of the model and leads to better performance, which indeed matches Theorem 5.
We further investigate the Lipschitz constants of internal layers. Figure 2(c) shows the layer-level Lipschitz constants of GN-9L and SN-9L on the CIFAR-10 dataset. It is worth noting that GN achieves the 1-Lipschitz constraint without constraining the internal layers. Thus, the Lipschitz constant of each layer in GN-9L is unlimited and more flexible than in a module-wise approach, e.g., SN-9L. The multiplicative power of the discriminator is therefore not limited, which is the potential superiority that makes GN converge well with the Wasserstein loss.
Activation Function. Theorem 6 only provides the upper bound of the gradient norm under the assumption that the activation functions of the discriminator are piecewise linear. However, we hypothesize that GN works for most activation functions. Therefore, we reproduce the Standard CNN training reported by SN-GAN, WGAN-GP, and vanilla GAN with different activation functions. Figure 4 shows the performance of GAN, WGAN-GP, SN-GAN, and GN-GAN with different activation functions in terms of IS and FID, which indicates that GN achieves the best scores on ELU and ReLU, while getting competitive scores on Softplus with a large β. It is worth noting that Softplus becomes similar to ReLU as β increases; as such, GN performs better on Softplus with a large β than on the default Softplus.
Variants of Gradient Normalization. As discussed in Section 4, we set φ(x) to |f(x)| in all experiments for stability reasons. Here, we compare three variants of gradient normalization, φ(x) = 0, φ(x) = c, and φ(x) = |f(x)|, by applying them to unconditional image generation. Figure 5 shows the results of the three normalization variants across different model architectures and datasets with the total number of iterations set to 50k. The hyperparameters used for the different φ are the same as in Section 5.1. Moreover, we repeat the training process 4 times with different random seeds and report the average performance of the last checkpoints. The results manifest that the variance of the Inception Score and FID for φ(x) = 0 is large across different architectures and datasets. Even if we add a constant to the denominator of (11) by setting φ to a non-zero constant c, the training results of φ(x) = c are still inferior to the proposed φ(x) = |f(x)|. These results match our discussion in Section 4.
In this paper, we propose a novel gradient normalization method for stabilizing the training of GANs, which can facilitate a variety of applications. The proposed GN is simple to implement, is theoretically proven to satisfy a hard Lipschitz constraint, and can effectively make any discriminator a Lipschitz continuous function. Also, we apply GN to several different architectures on different datasets, and most of them achieve state-of-the-art results. In the future, we plan to replace the gradient norm in the denominator term with a quasi formulation to further reduce the computation. Another interesting direction is to apply GN to other GAN-related tasks such as style transfer, super-resolution, and video generation.
This work is supported in part by the Ministry of Science and Technology (MOST) of Taiwan under the grants MOST-109-2221-E-009-114-MY3 and MOST-110-2218-E-A49-018. This work was also supported by the Higher Education Sprout Project of the National Yang Ming Chiao Tung University and Ministry of Education (MOE), Taiwan. We are grateful to the National Center for High-performance Computing for computer time and facilities.
We first define the Lipschitz constant again for better readability. Let f be a mapping function. Then ‖f‖_Lip is the minimum real number L such that:

‖f(x) − f(y)‖ ≤ L‖x − y‖, ∀x, y.  (14)
Lemma 3. Let f: Rⁿ → R be a continuously differentiable function and L be the Lipschitz constant of f. Then the Lipschitz constraint (14) is equivalent to

‖∇_x f(x)‖ ≤ L, ∀x.
We first prove the sufficient condition. From the definition of the Lipschitz constraint (14), we know that

|f(x + hv) − f(x)| ≤ L‖hv‖ = L|h|

for any unit vector v and any scalar h. Now, we consider the norm of the directional derivative at x along the direction v:

|⟨∇_x f(x), v⟩| = lim_{h→0} |f(x + hv) − f(x)| / |h| ≤ L,

where ⟨·,·⟩ is the inner product. Since the norm of the gradient is the maximum norm of the directional derivative, we have ‖∇_x f(x)‖ ≤ L.
We then prove the necessary condition.
By the assumption, f is continuous and differentiable. Therefore, the conditions of the gradient theorem are satisfied, and thus it suffices to consider the line integral along the straight line from x to y:

|f(y) − f(x)| = |∫₀¹ ⟨∇f(x + t(y − x)), y − x⟩ dt| ≤ ∫₀¹ ‖∇f(x + t(y − x))‖ ‖y − x‖ dt ≤ L‖y − x‖.
The theorem follows. ∎
Theorem 5. Let f be a layer-wise 1-Lipschitz constrained network with K layers. Then the Lipschitz constant of the first k-layer subnetwork f_k is upper-bounded by that of f_{k−1}, i.e., ‖f_k‖_Lip ≤ ‖f_{k−1}‖_Lip.

Since all the layers, including activation functions, are 1-Lipschitz constrained, i.e.,

‖φ_k(W_k x + b_k) − φ_k(W_k y + b_k)‖ ≤ ‖x − y‖, ∀x, y,  (21)

we can infer the upper bound of the feature distance at layer k by Eq. (21):

‖f_k(x) − f_k(y)‖ = ‖φ_k(W_k f_{k−1}(x) + b_k) − φ_k(W_k f_{k−1}(y) + b_k)‖ ≤ ‖f_{k−1}(x) − f_{k−1}(y)‖.

This result implies

‖f_k‖_Lip = sup_{x≠y} ‖f_k(x) − f_k(y)‖ / ‖x − y‖ ≤ sup_{x≠y} ‖f_{k−1}(x) − f_{k−1}(y)‖ / ‖x − y‖ = ‖f_{k−1}‖_Lip.
The theorem follows. ∎
Theorem 6. Let f be a continuously differentiable function which is modeled by a neural network, and let all the activation functions of the network be piecewise linear. Then the normalized function f̂(x) = f(x)/(‖∇_x f(x)‖ + |f(x)|) is 1-Lipschitz constrained, i.e., ‖f̂(x) − f̂(y)‖ ≤ ‖x − y‖ for all x, y.

For simplicity, function arguments are omitted here. By definition, the gradient of f̂ is:

∇f̂ = [∇f · (‖∇f‖ + |f|) − f · (∇‖∇f‖ + sign(f)∇f)] / (‖∇f‖ + |f|)².  (25b)

By the simple chain rule, ∇‖∇f‖ involves the second derivatives of f. Since the network contains only piecewise linear activation functions, the Hessian matrix of f is a zero matrix almost everywhere, so ∇‖∇f‖ = 0 and Eq. (25b) can be simplified:

∇f̂ = [∇f · (‖∇f‖ + |f|) − |f| · ∇f] / (‖∇f‖ + |f|)² = ∇f · ‖∇f‖ / (‖∇f‖ + |f|)².

Therefore, ‖∇f̂‖ = ‖∇f‖² / (‖∇f‖ + |f|)² ≤ 1, and by Lemma 3 the normalized function f̂ is 1-Lipschitz constrained.
The theorem follows. ∎
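The closed-form gradient at the end of the proof is easy to check numerically. The sketch below (ours) builds a small ReLU network whose raw gradients are deliberately large, then evaluates ‖∇f̂‖ = ‖∇f‖²/(‖∇f‖ + |f|)² at random points; the normalized gradient norm never exceeds 1 even though the raw one does.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((32, 8)) * 3.0   # deliberately far from 1-Lipschitz
b = rng.standard_normal(32)
a = rng.standard_normal(32)

def f_and_grad(x):
    # f(x) = a^T relu(W x + b) and its exact (piecewise constant) gradient.
    pre = W @ x + b
    return a @ np.maximum(pre, 0.0), W.T @ (a * (pre > 0))

raw, normalized = [], []
for _ in range(1000):
    f, g = f_and_grad(rng.standard_normal(8))
    gn = np.linalg.norm(g)
    raw.append(gn)
    # Closed form from the proof: ||grad f_hat|| = ||grad f||^2 / (||grad f|| + |f|)^2
    normalized.append(gn * gn / (gn + abs(f)) ** 2)
print(max(raw), max(normalized))  # raw norms exceed 1; normalized ones never do
```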
Please note that the source codes are archived in the supplementary materials for verification.
Figure 6 compares the effectiveness of different activation functions in terms of IS and FID on the CIFAR-10 dataset. The results show that the ReLU activation function achieves the best IS and FID for the different approaches. Moreover, the ReLU activation function with the proposed GN outperforms other state-of-the-art normalization and regularization approaches. It is worth noting that the original Softplus activation function achieves low IS and high FID for the different approaches. However, by setting β to 20, the results become significantly better since Softplus becomes similar to ReLU as β increases. Moreover, Figure 7 compares the effectiveness of different φ in terms of IS and FID on the CIFAR-10 dataset. The results indicate that the variance of the Inception Score and FID for φ(x) = 0 is large across different architectures and datasets. The proposed φ(x) = |f(x)| outperforms the alternatives, which is consistent with the experiments on the STL-10 dataset.
We conduct an experiment similar to previous work for the visualization. The value surfaces of binary classification tasks are demonstrated in Figure 8. The results demonstrate that the value surface of vanilla GAN contains steep cliffs near the decision boundary, which causes gradient explosion when synthetic samples are located in this area. With regularization or normalization applied to the discriminator, the value surface becomes smoother to varying degrees, as shown in the remaining panels of Figure 8.
Table 5 shows the training speed of different approaches with ResNet as the backbone network on the CIFAR-10 dataset. All the training processes are performed on an NVIDIA RTX 2080Ti five times, and we report the average results in terms of update iterations per second. The results show that the different approaches require additional computation compared to the vanilla GAN. It is worth noting that although the training speed of the proposed GN is only comparable to that of 1-GP, the proposed GN outperforms the other approaches in terms of IS and FID. In other words, even with more computation, the other approaches cannot improve their results. On the other hand, the training process is offline, while the inference speed is the same for all approaches.
|Method | Generator (it/s) | Discriminator (it/s)|
We further investigate the performance of the proposed GN with different loss functions. Notably, Gradient Normalization saturates the discriminator outputs in the range (-1, 1), so the sigmoid at the end of the discriminator can be eliminated when the non-saturating loss is used. Moreover, the hinge loss is equivalent to the Wasserstein loss from the perspective of gradients when GN is used: since |\hat{D}(x)| < 1, the hinge terms are never clipped, i.e., \max(0, 1 - \hat{D}(x)) = 1 - \hat{D}(x) and \max(0, 1 + \hat{D}(G(z))) = 1 + \hat{D}(G(z)), so the two losses differ only by a constant and yield identical gradients.
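The bounded output range can be checked on a toy discriminator whose gradient is known in closed form. Assuming the paper's normalization \hat{D}(x) = D(x) / (||∇_x D(x)|| + |D(x)|), and using a linear D of our own choosing for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.5

def D(x):
    # Toy linear discriminator D(x) = w.x + b (illustrative assumption).
    return x @ w + b

def grad_D(x):
    # Its gradient w.r.t. x is the constant vector w.
    return w

def D_hat(x):
    # Gradient normalization: D(x) / (||grad D(x)|| + |D(x)|).
    # Since ||grad D|| > 0, the output magnitude is strictly below 1.
    f = D(x)
    g = np.linalg.norm(grad_D(x))
    return f / (g + np.abs(f))

xs = rng.normal(size=(1000, 8))
outs = np.array([D_hat(x) for x in xs])
```

Every normalized output lies in (-1, 1), which is why the hinge terms above are never clipped.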
Table 6 shows the evaluation results of different loss functions on CIFAR-10 in terms of the Inception Score and FID, for both the ResNet and CNN architectures. Since the Wasserstein loss is equivalent to the hinge loss under GN, it is not listed separately. The performance of GN-GANs is consistent across the different loss functions.
Inception Score. For the Inception Score (IS), we divide the 50k generated images into 10 partitions and calculate the mean and standard deviation of the Inception Score over the partitions. The final results are averaged over different training sessions.
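The per-partition computation can be sketched as follows, with random softmax outputs standing in for the Inception network's predictions on the 50k generated images (the 10-class setup and random inputs are our illustrative assumptions):

```python
import numpy as np

def inception_score(probs, splits=10):
    # probs: (N, C) softmax outputs of the Inception network for N images.
    # Per split: IS = exp( mean_x KL(p(y|x) || p(y)) ); report mean/std over splits.
    scores = []
    for part in np.array_split(probs, splits):
        p_y = part.mean(axis=0, keepdims=True)  # marginal label distribution
        kl = (part * (np.log(part + 1e-12) - np.log(p_y + 1e-12))).sum(axis=1)
        scores.append(np.exp(kl.mean()))
    return float(np.mean(scores)), float(np.std(scores))

# Hypothetical stand-in for Inception softmax outputs on 50k generated images:
rng = np.random.default_rng(0)
logits = rng.normal(size=(50_000, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
mean_is, std_is = inception_score(probs)
```

Since KL divergence is non-negative, the score is always at least 1; higher values indicate both confident per-image predictions and diverse labels overall.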
Frechet Inception Distance. The configurations of FID are described as follows. For the CIFAR-10 dataset, we use 50k generated samples vs. 50k training images and 10k generated samples vs. 10k test images. For the STL-10 dataset, we use 50k generated samples vs. 100k unlabeled images and 10k generated samples vs. 100k unlabeled images. For CelebA-HQ, we use 30k generated samples vs. 30k training images. For LSUN Church Outdoor, we use 50k generated samples vs. 126k training images. In the training process, models are trained on the CIFAR-10 training set, the STL-10 unlabeled images, the CelebA-HQ training set, and the LSUN Church Outdoor training set.
Unconditional Image Generation on CIFAR-10 and STL-10. For a fair comparison, we use the ResNet architecture as well as the standard CNN used in . The last layer of ResNet, i.e., global sum pooling, is replaced by global average pooling. All the weights of the fully-connected and CNN layers are initialized by Kaiming normal initialization , and the biases are initialized to zero. We use Adam  as the optimizer, and the learning rate linearly decays to zero over the course of training. The generator is updated once for every 5 discriminator update steps, and training stops after the generator has been updated for 200k steps. For data augmentation, random horizontal flipping is applied for every method (including our method and re-implementations). The augmentation setting in Table 7 is used for Consistency Regularization . For more qualitative results, please refer to Figures 9 and 10.
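The linear decay schedule amounts to lr(t) = lr0 · (1 − t/T); a minimal sketch, where the initial rate 2e-4 is a hypothetical placeholder (the exact value is not given here) and T matches the 200k generator updates above:

```python
def linear_decay(lr0, step, total_steps):
    # Learning rate decays linearly from lr0 to zero over training.
    return lr0 * max(0.0, 1.0 - step / total_steps)

lr0, total = 2e-4, 200_000   # lr0 is an illustrative assumption
start = linear_decay(lr0, 0, total)        # = lr0
halfway = linear_decay(lr0, total // 2, total)  # = lr0 / 2
end = linear_decay(lr0, total, total)      # = 0.0
```

In practice this would be applied per optimizer step, e.g. via a LambdaLR-style scheduler.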
Conditional Image Generation on CIFAR-10. To show the results of conditional image generation on the CIFAR-10 dataset, we compare BigGAN , BigGAN with Consistency Regularization (CR), BigGAN with the proposed GN, and BigGAN with both GN and CR. Here, the discriminator in the conditional GAN is treated as a conditional function, i.e., D_y(x), instead of a multi-variable function, i.e., D(x, y). Therefore, the Gradient Normalization can be formulated as follows:

\hat{D}_y(x) = \frac{D_y(x)}{\lVert \nabla_x D_y(x) \rVert + \lvert D_y(x) \rvert},

where D_y is the discriminator conditioned on y. Similarly, by Theorem 5, \hat{D}_y is a Lipschitz-constrained network with respect to x.
Moreover, we take the official implementation of BigGAN  as a reference. We use Adam as the optimizer; the generator is updated once for every 4 discriminator update steps, and training stops after a fixed number of generator updates. The real images are augmented by random horizontal flipping. Following the previous setting [14, 20], we employ exponential moving averages on the generator weights. The pipeline for CR is shown in Table 7. Table 8 reports the performance of the different approaches in terms of IS, FID (train), and FID (test). The results indicate that BigGAN with the proposed GN is better than BigGAN with CR, while BigGAN with both GN and CR achieves the best performance. For more qualitative results, please refer to Figure 11.
Unconditional Image Generation on CelebA-HQ and LSUN Church Outdoor. We further evaluate the proposed Gradient Normalization on two high-resolution image datasets, i.e., CelebA-HQ and LSUN Church Outdoor. Random horizontal flipping is adopted as augmentation for both datasets. We use the architecture proposed by SN-GAN  for generating images, with Adam as the optimizer. The generator is updated once for every 5 discriminator update steps, and training stops after the generator has been updated for 100k steps. We employ exponential moving averages on the generator weights with a decay of 0.9999. The Inception Score and FID are shown in Table 9. It is worth noting that the performance can be further improved with a better architecture. For more qualitative results, please refer to Figures 12 and 13.
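The generator weight averaging is a running exponential mean; a minimal plain-Python sketch with the decay of 0.9999 used above, where scalar floats stand in for the weight tensors:

```python
def ema_update(ema_params, params, decay=0.9999):
    # Exponential moving average: ema <- decay * ema + (1 - decay) * current.
    return [decay * e + (1.0 - decay) * p for e, p in zip(ema_params, params)]

# Toy illustration: the EMA of a weight that jumps from 0.0 to 1.0 moves
# slowly; after n updates it equals 1 - decay**n.
ema = [0.0]
for _ in range(10_000):
    ema = ema_update(ema, [1.0])
# ema[0] is approximately 1 - 0.9999**10000, i.e. about 0.632
```

The averaged weights are used only for evaluation and sampling; the raw weights continue to be trained.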
|Dataset | FID (ours) | FID (baseline)|
|CelebA-HQ 128 | 14.78 | 25.95 (from )|
|LSUN Church 256 | 5.41 | 8.44|
Experiments on Progressive Growing Architecture. We further test StyleGAN  with the proposed Gradient Normalization on CelebA-HQ . Note that the R1 regularization and Gradient Penalty are replaced with GN in our experiment. We use the hinge loss as the objective function and Adam as the optimizer; the generator and discriminator share the same learning rate, which is scheduled by resolution. For the other settings, we use the same parameters as StyleGAN. The FID of GN-StyleGAN is calculated with 50k generated images vs. 30k training images. The generated samples are shown in Figures 14-17.
Proceedings of the 34th International Conference on Machine Learning (ICML), pages 214–223, 2017.
Proceedings of the 14th International Conference on Artificial Intelligence and Statistics (AISTATS), pages 215–223, 2011.
IEEE International Conference on Computer Vision (ICCV), 2019.
Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In IEEE International Conference on Computer Vision (ICCV), pages 1026–1034, 2015.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
Off-policy reinforcement learning for efficient and effective GAN architecture search. In Proceedings of the 16th European Conference on Computer Vision (ECCV), 2020.
80 million tiny images: A large data set for nonparametric object and scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30:1958–1970, 2008.
Lipschitz regularity of deep neural networks: analysis and efficient estimation. In Proceedings of the 32nd International Conference on Neural Information Processing Systems (NIPS), 2018.
Unpaired image-to-image translation using cycle-consistent adversarial networks. In IEEE International Conference on Computer Vision (ICCV), pages 2242–2251, 2017.