Bridging adversarial samples and adversarial networks

12/20/2019 · by Faqiang Liu, et al.

Generative adversarial networks (GANs) have achieved remarkable performance on various tasks but suffer from training instability. In this paper, we investigate this problem from the perspective of adversarial samples. We find that adversarial training on fake samples is already implicit in vanilla GANs, whereas adversarial training on real samples is absent, making the adversarial training asymmetric. Consequently, the discriminator is vulnerable to adversarial perturbation, and the gradient it provides contains uninformative adversarial noise. Such noise cannot improve the fidelity of generated samples, yet it can drastically change the discriminator's predictions, which hinders the generator from capturing the pattern of real samples and destabilizes training. To this end, we incorporate adversarial training of the discriminator on real samples into vanilla GANs. This scheme makes adversarial training symmetric and the discriminator more robust. A robust discriminator provides a more informative gradient with less adversarial noise, which stabilizes training and accelerates convergence. We validate the proposed method on image generation on CIFAR-10, CelebA, and LSUN with varied network architectures. Experiments show that training is stabilized and that FID scores of generated samples are improved by 10%–50% relative to the baseline, at an additional 25% computation cost.

1 Introduction

Generative adversarial networks (GANs) have been applied successfully in various research fields such as natural image modeling Radford2015Unsupervised, image translation Isola2016Image; Zhu2017Unpaired, cross-modal image generation Dash2017TAC, image super-resolution Ledig2016Photo, semi-supervised learning Odena2016Semi, and sequential data modeling Mogren2016C; Yu2016SeqGAN. Different from models based on explicit density estimation kingma2014semi; Oord2016Pixel; hinton2012a, GANs are implicit generative models in which two neural networks play a min-max game to find a map from random noise to the target distribution: the generator tries to generate fake samples to fool the discriminator, and the discriminator tries to distinguish them from real samples Goodfellow2014Generative. In the original GAN formulation, the optimal discriminator measures the Jensen-Shannon divergence between the real data distribution and the generated distribution. This discrepancy measure can be generalized to f-divergences nowozin2016f or replaced by the earth-mover distance arjovsky2017wasserstein. Despite their success, GANs are notoriously difficult to train kodali2018on; arjovsky2017towards and are very sensitive to hyper-parameters. When the supports of the two distributions are approximately disjoint, the gradient given by a discriminator with the standard objective may vanish, and training becomes unstable arjovsky2017wasserstein. More seriously, the generated distribution can fail to cover the whole data distribution and collapse to a single mode in some cases dumoulin2017adversarially; che2016mode.

The condition of the discriminator determines training stability and performance to a great extent. From a practical standpoint, the representation capacity of a discriminator realized by a neural network is not infinite. Meanwhile, the discriminator is usually not optimal for measuring the true discrepancy when trained in an alternating manner. On the other hand, the discriminator, as a binary classifier, is also vulnerable to adversarial samples szegedy2014intriguing; goodfellow2015explaining, which can be validated by the experiments shown in Figure 1. Real samples perturbed imperceptibly, e.g., with a norm of 1/255, can mislead the classifier into giving wrong predictions. Adversarial samples can be easily crafted by gradient-based methods such as the Fast Gradient Sign Method (FGSM) goodfellow2015explaining and the Basic Iterative Method (BIM) kurakin2017adversarial. It should be noted that the gradient given by the discriminator that guides the update of the generator is exactly the gradient used to craft adversarial samples of the discriminator. In other words, this gradient contains uninformative adversarial noise that is imperceptible but can mislead the generator.
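As a concrete illustration (a minimal sketch, not the authors' released code), FGSM against a binary discriminator in PyTorch, assuming the discriminator outputs a sigmoid probability and images are scaled to [-1, 1]:

```python
import torch

def fgsm_attack(discriminator, x_real, eps=1.0 / 255):
    """Craft adversarial samples of the discriminator from real images.

    The perturbation direction is the sign of the gradient of the attack
    loss w.r.t. the input, scaled by eps: an l_inf-bounded step obtained
    by linearizing the objective (Goodfellow et al., 2015).
    """
    x = x_real.clone().detach().requires_grad_(True)
    # Loss an attacker wants to increase: the discriminator's error
    # on real samples, i.e. -log D(x).
    loss = -torch.log(discriminator(x) + 1e-8).mean()
    grad, = torch.autograd.grad(loss, x)
    x_adv = (x + eps * grad.sign()).clamp(-1.0, 1.0)  # keep pixels in [-1, 1]
    return x_adv.detach()
```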

Figure 1: Benign samples (odd rows) and adversarial samples of the standard discriminator (even rows). Confidence is shown in the corner. The standard discriminator is extremely vulnerable to imperceptible perturbation; the perturbation level is 1/255.

However, in the vanilla GAN training procedure, the generator still succeeds in generating meaningful samples instead of adversarial noise. This is because the discriminator is adversarially trained with diverse generated fake samples, which partly alleviates the misleading effect of adversarial noise, although training is usually unstable. Nevertheless, from the perspective of symmetry, adversarial training on real samples, i.e., training with adversarial samples of real data, does not exist. Intuitively, we can improve GANs with adversarial training on real samples to further reduce adversarial noise. In other words, we propose to augment the training set of the discriminator with adversarial samples of real data. As shown in Figure 2, we visualize the gradient of the standard discriminator and of the discriminator further adversarially trained on real samples. The gradient of the standard discriminator looks like uninformative noise, whereas the gradient of the adversarially trained discriminator contains more semantic information, e.g., the profile of a face. Therefore, the proposed method can indeed eliminate adversarial noise contained in the gradient of the discriminator.

Moreover, we validate the proposed method on image generation with the widely adopted DCGAN Radford2015Unsupervised and ResNet he2015deep; gulrajani2017improved architectures, which shows consistent improvement of training stability and acceleration of convergence. More importantly, FID scores of generated samples are improved by 10%–50% relative to the baseline on CIFAR-10, CelebA, and LSUN. The computation overhead of the additional adversarial training is about 25%. We term the proposed method adversarial symmetric GAN (AS-GAN), since its key point is adversarial training on both fake and real samples.

Figure 2: Visualization of the gradient of the DCGAN discriminator with respect to input images. The first row shows samples from the CelebA dataset. The second and third rows show gradients of the standard discriminator and of the adversarially symmetrically trained discriminator, respectively. We clip gradients to within 3 standard deviations of their mean and take the average absolute value over the three channels for visualization. The adversarially trained discriminator provides a more informative gradient with less adversarial noise.

2 Related Work

There is a large body of work on how to stabilize GAN training and alleviate mode collapse. arjovsky2017towards proved that the widely adopted non-saturating loss for the generator can be decomposed into a Kullback–Leibler divergence minus two Jensen-Shannon divergences when the discriminator is trained to optimality, which accounts for training instability and mode dropping during GAN training. metz2016unrolled proposed to unroll the optimization of the discriminator as a surrogate objective to guide the update of the generator, which improves training stability at relatively large computation overhead. kodali2018on claimed that the existence of undesirable local equilibria is responsible for mode collapse and proposed to regularize the discriminator around real data samples with a gradient penalty.

GANs based on integral probability metrics (IPMs), such as Wasserstein GAN arjovsky2017wasserstein and its variants gulrajani2017improved; wu2018wasserstein, can solve gradient vanishing in GAN training theoretically, but in practice it is not simple to make the discriminator 1-Lipschitz as required by the duality conversion. Wasserstein GAN arjovsky2017wasserstein adopts the earth-mover (EM) distance as a measure of discrepancy between two distributions and uses weight clipping to keep the discriminator 1-Lipschitz. WGAN-GP gulrajani2017improved adopts a gradient penalty to regularize the discriminator in a less rigorous way, but it requires calculating second-order derivatives with remarkable computation overhead. Spectral normalization on the weights of the discriminator, proposed by Miyato2018Spectral, can enforce the 1-Lipschitz constraint efficiently, but the capacity of the discriminator is significantly constrained.

Adversarial vulnerability is an intriguing property of neural-network-based classifiers szegedy2014intriguing. A well-trained neural network can give totally wrong predictions on adversarial samples that humans recognize accurately. Small-magnitude adversarial perturbations added to benign data can be easily calculated from gradients goodfellow2015explaining; carlini2017towards; dong2018boosting. goodfellow2015explaining proposed to augment training data with adversarial samples to improve the robustness of neural networks, which smooths the decision boundary of the classifier around training samples. The gradient of an adversarially trained classifier contains more semantic information and less adversarial noise tsipras2018robustness; Kim2019Bridging.

Some work has used GANs to craft or defend against adversarial samples. Xiao2018Generating proposed to generate adversarial samples efficiently with GANs, in which a generator produces adversarial perturbations for a target classifier given original samples. Shen2017AE proposed AE-GAN to eliminate adversarial perturbations in an adversarial training manner, which can generate images with better perceptual quality. Different from these motivations, our work aims at improving the robustness of the discriminator by introducing adversarial training on real samples, which does not exist in classic GANs. zhou2018dont proposed to perform additional adversarial training on fake samples to robustify the discriminator; in contrast, our work clarifies that standard GAN training is already approximately equivalent to adversarial training on fake samples.

3 Method

Figure 3: Schematic of the proposed AS-GAN. The standard GAN training procedure is illustrated as the first forward and backward pass. In addition, we introduce adversarial training of the discriminator on real samples, illustrated as the second forward and backward pass, which is equivalent to training the discriminator with robust optimization.

3.1 Vanilla GAN

In the GAN proposed by Goodfellow2014Generative, the generator G parameterized by \theta_G tries to generate fake samples to fool the discriminator, and the discriminator D parameterized by \theta_D tries to distinguish between generated and real samples. The formulation as a min-max optimization is:

\min_G \max_D V(D, G),    (1)

where V(D, G) is the objective function. Equation 1 can be formulated as a binary classification problem with cross-entropy loss:

V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))],    (2)

where p_{\mathrm{data}} is the real data distribution and the noise z obeys a standard Gaussian distribution. When the discriminator is trained to optimality, the training objective for the generator can be reformulated as the Jensen-Shannon divergence, which measures the dissimilarity between the two distributions.
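For reference, the standard derivation behind this claim from Goodfellow2014Generative, written out:

```latex
% Optimal discriminator for a fixed generator G:
D^*(x) = \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{data}}(x) + p_g(x)}

% Substituting D^* back into V(D, G) gives the generator's objective:
C(G) = \max_D V(D, G) = -\log 4 + 2 \, \mathrm{JSD}\!\left(p_{\mathrm{data}} \,\|\, p_g\right)

% C(G) is minimized exactly when p_g = p_data, where the JSD vanishes.
```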

In practice, we use mini-batch gradient descent to optimize the generator and the discriminator alternately. At each iteration, the update rules are:

\theta_D \leftarrow \theta_D + \alpha_D \nabla_{\theta_D} \hat{V}, \qquad \theta_G \leftarrow \theta_G - \alpha_G \nabla_{\theta_G} \hat{V},    (3)

where \alpha_D and \alpha_G are the learning rates of the discriminator and the generator, respectively, and \hat{V} denotes the objective on a mini-batch of m real samples x^{(i)} and m fake samples \tilde{x}^{(i)} = G(z^{(i)}):

\hat{V} = \frac{1}{m} \sum_{i=1}^{m} \left[ \log D(x^{(i)}) + \log\left(1 - D(\tilde{x}^{(i)})\right) \right].    (4)

After the parameters are updated, the fake samples generated by G change according to the chain rule:

\tilde{x}' = G_{\theta_G - \alpha_G \nabla_{\theta_G} \hat{V}}(z) \approx \tilde{x} - \alpha_G J J^{\top} \nabla_{\tilde{x}} \hat{V},    (5)

where J = \partial G_{\theta_G}(z) / \partial \theta_G is a Jacobian matrix. The updated \tilde{x}' can be seen as an adversarial sample of the discriminator at this iteration because \alpha_G is usually small. These samples are fed into the discriminator at future iterations, performing adversarial training. From this point of view, vanilla GANs already include adversarial training on fake samples, illustrated as the first pass in Figure 3. Nevertheless, adversarial training of the discriminator on real samples does not exist in this framework, which makes adversarial training asymmetric and unbalanced. Adversarial noise contained in the gradient of a non-robust discriminator can make training unstable because of the unsmoothed decision boundary of the discriminator.
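To make this concrete, here is a minimal PyTorch sketch of one vanilla GAN iteration with the minimax loss of Equation 2 (an illustrative sketch, not the paper's released code); note in the final comment how the generator's backward pass reuses the discriminator's input gradient:

```python
import torch

def gan_step(D, G, opt_d, opt_g, x_real, z):
    # Discriminator ascent on V (Eqs. 3-4), implemented as descent on -V.
    x_fake = G(z).detach()
    loss_d = -(torch.log(D(x_real) + 1e-8).mean()
               + torch.log(1.0 - D(x_fake) + 1e-8).mean())
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator descent on V.
    loss_g = torch.log(1.0 - D(G(z)) + 1e-8).mean()
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    # Backpropagation into G passes through dV/dx_fake, the same input
    # gradient of D that FGSM uses, so the updated fakes move in an
    # adversarial direction for the current discriminator (Eq. 5).
```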

3.2 Adversarial training on real samples

To further robustify the discriminator, we incorporate adversarial training on real samples into vanilla GANs. Specifically, we perform adversarial training after Equation 3 at each iteration:

\theta_D \leftarrow \theta_D + \alpha_D \nabla_{\theta_D} \frac{1}{m} \sum_{i=1}^{m} \log D(\hat{x}^{(i)}),    (6)

where \hat{x}^{(i)} is an adversarial sample of the discriminator whose perturbation follows the gradient of the discriminator with respect to x^{(i)}. The adversarial sample is calculated with a constant perturbation level \epsilon as:

\hat{x}^{(i)} = x^{(i)} - \epsilon \cdot \mathrm{sign}\left(\nabla_{x^{(i)}} \log D(x^{(i)})\right).    (7)

This adversarial training formulation is adopted from goodfellow2015explaining, which calculates an \ell_\infty-norm-constrained perturbation by linearizing the objective function. Adversarial training of the discriminator on real samples is illustrated as the second pass in Figure 3, where -\nabla denotes the negative of the gradient. The reason we adopt this training scheme is twofold. First, the gradient used to craft the perturbation can be obtained conveniently from the first backward pass, so the overall computation overhead of the additional training is relatively low, i.e., about 25%. Second, this simple scheme already provides significant improvement; we do not adopt more complicated adversarial training schemes such as Projected Gradient Descent (PGD) MadryTowards because the further improvement is marginal. We could feed both real samples and adversarial samples in one pass, but this would require an additional pass to calculate the adversarial perturbation, which is computationally inefficient. Therefore, we train the discriminator with real samples and adversarial samples in separate passes. Please refer to Algorithm 1 for more details about symmetric adversarial training.

1:  for number of training iterations do
2:     Sample a mini-batch of m noise samples {z^{(1)}, ..., z^{(m)}} from the Gaussian distribution
3:     Sample a mini-batch of m data samples {x^{(1)}, ..., x^{(m)}} from the real data distribution
4:     Update the discriminator by gradient ascent:
       \theta_D \leftarrow \theta_D + \alpha_D \nabla_{\theta_D} \frac{1}{m} \sum_{i=1}^{m} \left[ \log D(x^{(i)}) + \log\left(1 - D(G(z^{(i)}))\right) \right]    (8)
5:     Craft adversarial samples of the real samples for the discriminator:
       \hat{x}^{(i)} = x^{(i)} - \epsilon \cdot \mathrm{sign}\left(\nabla_{x^{(i)}} \log D(x^{(i)})\right)    (9)
6:     Perform adversarial training of the discriminator on the real samples:
       \theta_D \leftarrow \theta_D + \alpha_D \nabla_{\theta_D} \frac{1}{m} \sum_{i=1}^{m} \log D(\hat{x}^{(i)})    (10)
7:     Update the generator by gradient descent:
       \theta_G \leftarrow \theta_G - \alpha_G \nabla_{\theta_G} \frac{1}{m} \sum_{i=1}^{m} \log\left(1 - D(G(z^{(i)}))\right)    (11)
8:  end for
Algorithm 1: Mini-batch stochastic gradient descent training of AS-GAN. We set the perturbation \epsilon to a fixed default for image generation tasks (the choice of \epsilon is studied in Section 4.1).
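Algorithm 1 maps directly onto a training step; below is a hedged PyTorch sketch of one AS-GAN iteration. For clarity it recomputes the input gradient in a separate pass, whereas the paper reuses the gradient from the first backward pass; D is assumed to output a sigmoid probability and images to lie in [-1, 1]:

```python
import torch

def as_gan_step(D, G, opt_d, opt_g, x_real, z, eps):
    # Step 4: standard discriminator update (Eq. 8), as descent on -V.
    x_fake = G(z).detach()
    loss_d = -(torch.log(D(x_real) + 1e-8).mean()
               + torch.log(1.0 - D(x_fake) + 1e-8).mean())
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Step 5: craft adversarial versions of the real batch (Eq. 9).
    x = x_real.clone().detach().requires_grad_(True)
    log_conf = torch.log(D(x) + 1e-8).mean()
    grad, = torch.autograd.grad(log_conf, x)
    x_adv = (x - eps * grad.sign()).clamp(-1.0, 1.0).detach()

    # Step 6: adversarial training of D on real samples (Eq. 10).
    loss_adv = -torch.log(D(x_adv) + 1e-8).mean()
    opt_d.zero_grad()
    loss_adv.backward()
    opt_d.step()

    # Step 7: generator update (Eq. 11).
    loss_g = torch.log(1.0 - D(G(z)) + 1e-8).mean()
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```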

3.3 Robust optimization

By introducing adversarial training on real samples, we generalize the original GAN objective to the following one, which forces the discriminator to be robust:

\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\left[ \min_{\|\delta\|_\infty \le \epsilon} \log D(x + \delta) \right] + \mathbb{E}_{z \sim p_z}\left[ \log\left(1 - D(G(z))\right) \right].    (12)

In fact, when the amount of real training data is infinite and the real data distribution is continuous, the above objective is approximately equivalent to the original one. In practice, however, contrary to the fake data, the number of real samples is always limited, which partly accounts for the existence of adversarial samples. With the proposed objective, the discriminator is not only required to classify real data correctly but also should not be vulnerable to small perturbations. In a sense, adversarial training on real samples regularizes the discriminator by augmenting the training data, which smooths its decision boundary. Moreover, the capabilities of the two networks become more balanced, which prevents the discriminator from becoming too strong and thereby alleviates training collapse.

3.4 Effective perturbation level

It is crucial to set an appropriate perturbation level \epsilon to make adversarial training effective. When the perturbation is set to zero, the proposed method degrades to updating the discriminator twice on the same real data. When the perturbation is set too large, the real data are drastically perturbed: their semantic information and quality change, which can mislead the discriminator into incorrectly recognizing degraded samples as real data. In addition, we suggest setting the perturbation to zero at the beginning of training in case the discriminator is too weak. We conduct extensive experiments on how the perturbation level affects training in the next section.
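The paper only specifies that \epsilon should start at zero; one possible implementation is a simple warm-up schedule like the sketch below, where the linear ramp and the constants are illustrative assumptions of ours:

```python
def eps_schedule(iteration, warmup_iters=1000, eps_max=2.0 / 255):
    """Keep eps at zero early so a weak discriminator is not over-regularized,
    then ramp linearly to eps_max over another warmup_iters steps.
    The linear ramp and both constants are illustrative, not from the paper."""
    if iteration < warmup_iters:
        return 0.0
    return min(eps_max, eps_max * (iteration - warmup_iters) / warmup_iters)
```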

4 Experiments

To evaluate our method and investigate the reason behind its efficacy, we test our adversarial training method on image generation on CIFAR-10, CelebA, and LSUN with DCGAN and ResNet architectures. CIFAR-10 is a well-studied dataset of 32 x 32 natural images, containing 10 categories of objects and 60k samples. CelebA is a face attributes dataset with more than 200k images. LSUN is a large-scale scene understanding dataset with 10 categories; we choose 3000k images labeled as bedroom for training. For fast validation, we resize images in CelebA and LSUN to 64 x 64. Before being fed into the discriminator, images are rescaled to [-1, 1]. The dimension of the latent vector is set to 100 for all implementations.

In the ResNet architecture, the residual block is organized as BatchNorm-ReLU-Resample-Conv-BatchNorm-ReLU-Conv with a skip connection. We use bilinear interpolation for upsampling and average pooling for downsampling. Batch normalization Ioffe2015Batch is used for both the generator and the discriminator. Network parameters are initialized by the Xavier method glorot2010understanding.

In the DCGAN architecture, the basic block is organized as Conv-BatchNorm-LeakyReLU for the discriminator and ConvTransposed-BatchNorm-ReLU for the generator. Convolution weights are initialized from a normal distribution with zero mean and 0.02 standard deviation. We do not use bias for convolutions.
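For concreteness, the described blocks might be assembled as follows in PyTorch; the kernel size, stride, and channel arguments are standard DCGAN choices that the paper does not spell out:

```python
import torch.nn as nn

def d_block(c_in, c_out):
    # Discriminator block: Conv-BatchNorm-LeakyReLU with stride-2 downsampling.
    conv = nn.Conv2d(c_in, c_out, kernel_size=4, stride=2, padding=1, bias=False)
    nn.init.normal_(conv.weight, mean=0.0, std=0.02)  # N(0, 0.02) init, no bias
    return nn.Sequential(conv, nn.BatchNorm2d(c_out), nn.LeakyReLU(0.2, inplace=True))

def g_block(c_in, c_out):
    # Generator block: ConvTransposed-BatchNorm-ReLU with stride-2 upsampling.
    deconv = nn.ConvTranspose2d(c_in, c_out, kernel_size=4, stride=2, padding=1, bias=False)
    nn.init.normal_(deconv.weight, mean=0.0, std=0.02)
    return nn.Sequential(deconv, nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))
```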

We implement the models in PyTorch with acceleration on RTX 2080 Ti GPUs. We train the networks with the Adam optimizer kingma2014adam with learning rate 2e-4; \beta_1 is set to 0.5 and \beta_2 is set to 0.999. Because training a standard GAN on CelebA is unstable, we decrease the learning rate of the discriminator to 5e-5 to balance training, following the TTUR training strategy Heusel2017GANs. We train models on CelebA for 100 epochs, on CIFAR-10 for 200 epochs, and on LSUN for 10 epochs.

In this paper, we use the Fréchet inception distance (FID) Heusel2017GANs and the inception score Salimans2016Improved to measure model performance, both of which are well-studied metrics of image modeling. Our source code is available on GitHub: https://github.com/CirclePassFilter/AS_GAN.

4.1 Evaluation with different perturbation level

In this section, we conduct experiments on how the perturbation level affects performance. Specifically, we perform unsupervised image generation on CIFAR-10 with the DCGAN architecture under different perturbation levels. Due to the large search space, we select several typical values, namely {0, 1, 2, 3, 4}/255. All experiments are run three times independently to reduce the effect of randomness.

As shown in Figure 4, our method performs better than the baseline when \epsilon lies in an appropriate interval, and the FID score is improved markedly at the best perturbation setting. However, when the perturbation level is too tiny, the method improves on the original model only marginally; in this regime the effect of adversarial training is limited. Remarkably, when the imposed perturbation is too strong, the model performs even worse than the baseline, because the discriminator recognizes degraded samples as real data and cannot provide a correct gradient to update the generator.

Overall, with an appropriate perturbation, the discriminator can be regularized to be more robust, enabling it to produce a more accurate and informative gradient. In this way, the generator obtains a more reliable gradient, which alleviates training collapse and improves the fidelity of generated samples.

Figure 4: (a) Best FID scores of three independent runs in different settings (lower is better). (b) Mean FID scores over the last 20 epochs of three independent runs in different settings. 'Gaussian' means that the gradient used to craft adversarial samples is replaced by Gaussian noise.

4.2 Ablation study

In addition, we plot the training curve of FID on CIFAR-10 in Figure 5. In particular, when \epsilon is zero, the proposed method degrades to updating the discriminator twice on the same real data, whose result is comparable to the baseline and much worse than the setting with an appropriate perturbation. This indicates that the performance improvement of the proposed method is not attributable to the additional discriminator update. Furthermore, in another experiment, the gradient used to craft the perturbation is replaced by Gaussian noise; the FID score in this setting is slightly worse than the baseline, which indicates that perturbing along the gradient direction rather than a random direction is a key factor in the method's effectiveness.

Figure 5: Training curves of FID in different settings. Updating the discriminator twice on the same data (\epsilon = 0) or perturbing samples with random noise does not work, which indicates that the comparison between the proposed method and the baseline is fair.

4.3 Evaluation with different architectures

To explore the compatibility of the proposed method, we test it with the widely adopted DCGAN and ResNet architectures on CIFAR-10 and CelebA. Figure 6 plots the comparison results. With the proposed method, FID scores on CelebA are improved significantly and convergence is accelerated. Even in settings where vanilla GANs collapse, our model still converges stably.

In addition, we further test the proposed method with spectral normalization. Results show that the proposed method works well with spectral normalization, achieving an FID of 5.88 on CelebA and exceeding the FID of 11.71 achieved by the proposed method alone. Therefore, the two schemes can be combined to obtain better performance in practice.

As shown in Table 1, with unsupervised training on CIFAR-10, CelebA, and bedroom in LSUN, the proposed method with spectral normalization achieves performance comparable to the state of the art. Note that the six rows at the bottom show the results of our implementation. Generated samples are shown in Figure 7.

Figure 6: Training curves of FID on CIFAR-10 (upper) and CelebA (lower) with DCGAN (left) and ResNet (right). Results show that the proposed method can accelerate convergence and achieve better FID. Meanwhile, it stabilizes training with less sensitivity to network architecture and hyper-parameter settings.
Figure 7: (a) 64 x 64 results generated by AS-DCGAN on bedroom in LSUN. (b) 64 x 64 CelebA samples generated by AS-ResNet.
Figure 8: (a) Collapsed samples generated by a standard GAN trained on CIFAR-10. (b) Samples generated by AS-ResNet trained on CIFAR-10.
Method                  IS (CIFAR-10)   FID (CIFAR-10)   FID (CelebA)   FID (LSUN)
WGAN-GP (standard CNN)  6.68 ± .06      40.2             21.2           –
SN-GAN (standard CNN)   7.58 ± .12      25.5             –              –
WGAN-GP (ResNet)        7.86 ± .07      18.8             18.4           26.8
WGAN-div (ResNet)       –               18.1             15.2           15.9
DCGAN                   7.05 ± .14      28.05            20.45          25.36
AS-DCGAN                7.21 ± .02      25.50            10.90          18.08
AS-SN-DCGAN             7.24 ± .14      24.50            10.60          21.84
ResNet                  7.35 ± .16      22.92            25.72          175.70
AS-GAN (ResNet)         7.65 ± .15      21.84            11.71          45.96
AS-SN-GAN (ResNet)      7.84 ± .17      22.26            5.88           8.00
Table 1: Inception scores and FIDs for unsupervised image generation on CIFAR-10, CelebA, and LSUN. Baseline results are quoted from Miyato2018Spectral, Wu2017Wasserstein, and gulrajani2017improved; the six rows at the bottom are our implementation.

5 Analysis

5.1 Computation overhead

The training computation overhead of the proposed method is about 25% relative to the baseline, which is comparable to that of spectral normalization and much smaller than that of the gradient penalty. A comparison of the average training time per epoch for different methods is shown in Table 2.

Setting         DCGAN     AS-DCGAN (ours)   SN-DCGAN   DCGAN-GP
Training time   19.83 s   26.40 s           24.50 s    31.57 s
Table 2: Average training time per epoch for different methods.

5.2 Gradient visualization

The gradient given by the discriminator is the key to updating the generator. Hence, we visualize this gradient in Figure 2, which shows that the gradient of the adversarially trained discriminator contains more semantic information, e.g., the profile of a face, whereas the gradient of the standard discriminator looks like uninformative noise. We further show the histogram of the discriminator's gradient with respect to real samples in Figure 9a. Our method obtains a sparser gradient with a lower L1 norm (Figure 9b) as training proceeds, which means the adversarial noise in the gradient is partly eliminated. Moreover, the proposed method augments the training data and smooths the decision boundary of the discriminator, which alleviates mode collapse to a large extent, as shown in Figure 8. By means of adversarial training evenly on both real and fake samples, we make the training scheme symmetric and stable.
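A small sketch of how such a visualization can be produced, following the post-processing described in Figure 2 (clip to 3 standard deviations, average absolute value over channels); the helper name and the final normalization step are ours:

```python
import torch

def grad_heatmap(D, x_real):
    """Gradient of the discriminator's log-confidence w.r.t. input images,
    post-processed for visualization as described in Figure 2."""
    x = x_real.clone().detach().requires_grad_(True)
    torch.log(D(x) + 1e-8).sum().backward()
    g = x.grad
    # Clip to within 3 standard deviations of the mean ...
    mean, std = g.mean().item(), g.std().item()
    g = g.clamp(mean - 3 * std, mean + 3 * std)
    # ... then average the absolute value over the three channels.
    heat = g.abs().mean(dim=1)          # shape (N, H, W)
    heat = heat / (heat.max() + 1e-8)   # normalize to [0, 1] for display (ours)
    return heat
```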

Figure 9: (a) Histograms of the discriminator's gradient with respect to input images: the baseline (left) and the adversarially symmetrically trained discriminator (right). (b) Evolution of the L1 norm of the discriminator's gradient during training. The gradient of the adversarially trained discriminator is sparser and contains less uninformative noise.

5.3 Training loss

With the proposed method, the confidence and the loss of the discriminator become smoother and more stable during training, as depicted in Figure 10. Moreover, when the adversarial perturbation is appropriate for training, the confidence on adversarial samples is at first lower than that on real data by a large margin, because the discriminator is sensitive to adversarial samples in the beginning; with more iterations, the discriminator becomes robust to them. As shown in Table 3, accuracy on real samples under an FGSM attack with perturbation 1/255 is improved significantly compared to the baseline. Similarly, the loss is also stabilized with our algorithm (Figure 10b).
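Table 3's accuracies can be measured with a procedure like the following sketch, where the decision threshold of 0.5 on the discriminator's sigmoid output and the helper names are our assumptions:

```python
import torch

@torch.no_grad()
def accuracy_on(D, x, threshold=0.5):
    # Fraction of samples the discriminator classifies as real.
    return (D(x) > threshold).float().mean().item()

def eval_robustness(D, x_real, eps=1.0 / 255):
    # Standard accuracy on clean real samples.
    std_acc = accuracy_on(D, x_real)
    # FGSM: perturb the real samples against the discriminator.
    x = x_real.clone().detach().requires_grad_(True)
    torch.log(D(x) + 1e-8).sum().backward()
    x_adv = (x - eps * x.grad.sign()).clamp(-1.0, 1.0).detach()
    adv_acc = accuracy_on(D, x_adv)
    return std_acc, adv_acc
```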

Figure 10: (a) Confidence of the discriminator on real data and on adversarial samples of real data during training. (b) Evolution of the discriminator's loss in different settings.
Model             Standard accuracy   Adversarial accuracy
GAN (ResNet)      0.98                0.50
AS-GAN (ResNet)   0.99                0.93
Table 3: Accuracy on real samples under an FGSM attack with perturbation 1/255.

6 Conclusion

The relationship between GANs and adversarial samples has been an open question since both emerged. In this paper, we show that adversarial training on fake samples is already implicit in standard GAN training, while adversarial training on real samples does not exist, which can make training unbalanced and unstable: the gradient given by a non-robust discriminator contains more adversarial noise, which can mislead the update of the generator. To make the training scheme symmetric and the discriminator more robust, we introduce adversarial training on real samples. We validate the proposed method on image generation on CIFAR-10, CelebA, and LSUN with varied network architectures. Experiments show that the gradient of the adversarially trained discriminator contains less adversarial noise, that convergence speed and performance are improved with modest additional computation overhead, and that mode collapse is alleviated. The proposed method with spectral normalization achieves FID comparable to the state of the art on these datasets.

References