Deep neural networks (DNNs) have achieved significant success on a variety of challenging machine-learning tasks. For example, DNNs have obtained state-of-the-art accuracy on large-scale image classification (He et al., 2016b; Huang et al., 2017). At the same time, vulnerability to adversarial examples, an undesired property of DNNs, has drawn attention in the deep-learning community (Szegedy et al., 2013; Goodfellow et al., 2014). Generally speaking, adversarial examples are perturbed versions of the original data that successfully fool a classifier. In the image domain, for example, adversarial examples are images transformed from natural images with visually negligible changes that nevertheless lead to wrong predictions (Goodfellow et al., 2014). The existence of adversarial examples has raised many concerns, especially in scenarios with a high risk of misclassification, such as autonomous driving.
A variety of defensive methods against adversarial attacks have been proposed, yet most remain vulnerable to adaptive attacks (Szegedy et al., 2013; Goodfellow et al., 2014; Moosavi-Dezfooli et al., 2016; Papernot et al., 2016; Kurakin et al., 2016; Carlini & Wagner, 2017; Brendel et al., 2017; Athalye et al., 2018). One type of adversarial defense that has demonstrated good performance against strong attacks is adversarial training (Goodfellow et al., 2014; Madry et al., 2017; Zhang et al., 2019), which constructs a defense model by augmenting the training set with adversarial examples. Though simple, this strategy has achieved great success in adversarial defense.
The strength of an attack is commonly quantified by the distance between adversarial examples and natural examples. One desired property of a defense model is being norm-agnostic, i.e., robust against attacks constrained by a variety of $\ell_p$ norms. Recently, a more general attack mechanism called unrestricted adversarial attacks was introduced by Brown et al. (2018), where adversarial examples need not be close to a natural image as long as they are semantically similar. To achieve robustness against unrestricted attacks, being norm-agnostic is a minimum requirement.
In this paper, we propose a new attack method and show that adversarial training, the most successful family of adversarial defense models, is not norm-agnostic. Previously, it was reported both in (Madry et al., 2017) and (Zhang et al., 2019) that adversarial training is robust against $\ell_\infty$ attacks. Our experiments, however, suggest these models fail to defend against $\ell_\infty$ and $\ell_2$ adversarial examples simultaneously.
2 Background and Related Work
Adversarial training constructs adversarial examples that are included in the training set to train a new and more robust classifier. This method is intuitive and has achieved great success in defense (Szegedy et al., 2013; Goodfellow et al., 2014; Madry et al., 2017; Zhang et al., 2019). Madry et al. (2017) showed that using iterative attacks during training yields strong defense models against white-box attacks (Athalye et al., 2018). More recently, another defense model based on adversarial training (Zhang et al., 2019) won first place in the defense track of the NIPS 2018 Adversarial Vision Challenge (Brendel et al., 2018).
Although adversarial training has so far been one of the most successful defense methods, it has limitations. In (Tramèr et al., 2017), it was pointed out that single-step adversarial training, where a single-step method (e.g., FGSM (Goodfellow et al., 2014)) is used for constructing adversarial examples, suffers from the "degenerate global minimum" issue and thus is not robust. To mitigate this issue, they propose ensemble adversarial training to improve the generalization of adversarial training. More recently, Song et al. (2018) suggested using domain adaptation to improve ensemble adversarial training, leading to better robustness. However, both works only consider adversarial training based on single-step attacks, while the most advanced adversarial training models are based on multi-step attacks. Tramèr et al. (2017) state that incorporating multi-step attacks during training could fix the degenerate-global-minimum issue. In this paper, we show that multi-step adversarial training still suffers from this issue.
3.1 Adversarial Examples
Given a classifier $f$ for an image $x$, an adversarial example $x'$ satisfies $d(x, x') \le \epsilon$ for some small $\epsilon > 0$ and $f(x') \ne f(x)$, where $d(\cdot, \cdot)$ is some distance metric, i.e., $x'$ is close to $x$ but yields a different classification result. The distance is often described in terms of an $\ell_p$ metric, and in most of the literature the $\ell_\infty$ and $\ell_2$ metrics are considered.
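To make the definition concrete, here is a minimal NumPy sketch of checking the two conditions, with a hypothetical `predict` function standing in for the classifier (the toy threshold classifier and all values below are illustrative, not from the paper):

```python
import numpy as np

def is_adversarial(predict, x, x_adv, eps, p):
    """Check the two defining conditions: d(x, x') <= eps and f(x') != f(x)."""
    dist = np.linalg.norm((x_adv - x).ravel(), ord=p)
    return bool(dist <= eps and predict(x) != predict(x_adv))

# Toy "classifier": sign of the mean pixel intensity.
predict = lambda x: int(x.mean() > 0.5)

x = np.full(4, 0.45)       # natural example, predicted class 0
x_adv = x + 0.1            # small shift pushes the mean over the threshold
print(is_adversarial(predict, x, x_adv, eps=0.2, p=np.inf))  # True
```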
One of the simplest and most widely used attack methods is a single-step method, the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2014), which perturbs the input along the sign of the gradient of the loss with respect to the input:
$$x' = \Pi_{B_\epsilon(x)}\big(x + \epsilon \cdot \mathrm{sign}(\nabla_x L(\theta, x, y))\big),$$
where $\Pi_{B_\epsilon(x)}$ is the projection operation that ensures adversarial examples stay in the $\epsilon$-ball $B_\epsilon(x)$ around $x$.
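As a minimal illustration (not the paper's implementation), a single FGSM step on a toy quadratic loss with an analytic gradient:

```python
import numpy as np

def fgsm(x, grad_fn, eps):
    # One signed-gradient step of size eps; the result lies on the boundary
    # of the eps l_inf ball, so the projection is a no-op for a single step.
    return x + eps * np.sign(grad_fn(x))

# Toy loss L(x) = ||x - t||^2, whose gradient is 2(x - t).
t = np.array([1.0, -1.0, 0.5])
loss = lambda x: float(np.sum((x - t) ** 2))
grad_fn = lambda x: 2.0 * (x - t)

x = np.zeros(3)
x_adv = fgsm(x, grad_fn, eps=0.1)
print(np.max(np.abs(x_adv - x)))   # 0.1: on the boundary of the l_inf ball
print(loss(x_adv) > loss(x))       # True: the loss increased
```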
Its multi-step variant, FGSM$^k$, is more powerful and has been shown to be equivalent to exploring adversarial examples with the projected gradient descent (PGD) method (Madry et al., 2017):
$$x^{t+1} = \Pi_{B_\epsilon(x)}\big(x^t + \alpha \cdot \mathrm{sign}(\nabla_x L(\theta, x^t, y))\big),$$
where $\alpha$ is the step size.
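A corresponding sketch of the multi-step update, with the projection onto the $\ell_\infty$ ball implemented as coordinate-wise clipping (toy loss and hyperparameters are illustrative):

```python
import numpy as np

def pgd(x, grad_fn, eps, alpha, steps):
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))  # signed gradient step
        x_adv = np.clip(x_adv, x - eps, x + eps)         # projection Pi
    return x_adv

# Same toy loss as before: L(x) = ||x - t||^2.
t = np.array([1.0, -1.0, 0.5])
loss = lambda x: float(np.sum((x - t) ** 2))
grad_fn = lambda x: 2.0 * (x - t)

x = np.zeros(3)
x_adv = pgd(x, grad_fn, eps=0.1, alpha=0.03, steps=10)
print(np.max(np.abs(x_adv - x)) <= 0.1 + 1e-12)   # True: stays in the ball
print(loss(x_adv) > loss(x))                       # True: the loss increased
```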
3.2 Adversarial Training
The motivation behind adversarial training is that finding a robust model against adversarial examples is equivalent to solving the saddle-point problem
$$\min_\theta \; \mathbb{E}_{(x, y) \sim \mathcal{D}} \Big[ \max_{x' : d(x, x') \le \epsilon} L(\theta, x', y) \Big].$$
The inner maximization is equivalent to constructing adversarial examples, while the outer minimization can be performed by a standard training procedure for loss minimization.
Therefore, to achieve robustness to adversarial examples, adversarial training augments the training data with adversarial examples constructed during training, as an approximation to the inner maximization procedure.
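The alternation between inner maximization (attack) and outer minimization (SGD) can be sketched on a toy logistic-regression model; the data, model, and hyperparameters below are illustrative, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_x(w, x, y):   # gradient of the logistic loss w.r.t. the input
    return (sigmoid(x @ w) - y) * w

def grad_w(w, x, y):   # gradient of the logistic loss w.r.t. the weights
    return (sigmoid(x @ w) - y) * x

def pgd_attack(w, x, y, eps=0.1, alpha=0.03, steps=5):
    x_adv = x.copy()   # inner maximization, approximated by PGD
    for _ in range(steps):
        x_adv = np.clip(x_adv + alpha * np.sign(grad_x(w, x_adv, y)),
                        x - eps, x + eps)
    return x_adv

# Synthetic linearly separable data.
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
Y = (X @ w_true > 0).astype(float)

w = np.zeros(5)
for epoch in range(20):
    for x, y in zip(X, Y):
        x_adv = pgd_attack(w, x, y)     # augment with an adversarial example
        w -= 0.1 * grad_w(w, x_adv, y)  # outer minimization: SGD step on it
print(np.mean((X @ w > 0) == (Y > 0.5)))  # clean accuracy after training
```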
3.3 Degenerate Global Minimum
In (Tramèr et al., 2017), it is pointed out that if $x^{\mathrm{adv}}$ denotes the adversarial example generated by FGSM, adversarial training ideally results in a robust classification model $\theta^*$ such that
$$L(\theta^*, x^{\mathrm{adv}}, y) \approx \max_{x' : d(x, x') \le \epsilon} L(\theta^*, x', y).$$
However, the training procedure may instead discover a "degenerate global minimum" $\theta^*$ for which
$$L(\theta^*, x^{\mathrm{adv}}, y) \ll \max_{x' : d(x, x') \le \epsilon} L(\theta^*, x', y).$$
In other words, the training procedure may produce a model that makes finding adversarial examples difficult for FGSM, instead of a truly robust model.
(Tramèr et al., 2017) proposes two possible solutions for mitigating this issue. One is to use strong multi-step adversarial training, such as PGD, at the cost of an increased computational burden. The other is ensemble adversarial training, that is, incorporating adversarial examples generated from multiple pre-trained classifiers different from the one being trained. In this way, the construction of adversarial examples is decoupled from training, preventing the "degenerate global minimum", while the robustness of adversarial training is retained due to the transferability of adversarial perturbations across models (Goodfellow et al., 2014).
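The decoupling idea can be sketched on the same toy logistic model: the adversarial examples come from a fixed, separately trained "static" classifier rather than from the model being updated. The static model is simulated here by perturbing the true weights; all names and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_x(w, x, y):   # gradient of the logistic loss w.r.t. the input
    return (sigmoid(x @ w) - y) * w

def grad_w(w, x, y):   # gradient of the logistic loss w.r.t. the weights
    return (sigmoid(x @ w) - y) * x

X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
Y = (X @ w_true > 0).astype(float)

# A fixed "static" classifier supplies the attacks, decoupling example
# construction from the model being trained.
w_static = w_true + 0.3 * rng.normal(size=5)

w = np.zeros(5)
for epoch in range(20):
    for x, y in zip(X, Y):
        x_adv = x + 0.1 * np.sign(grad_x(w_static, x, y))  # FGSM on static model
        w -= 0.1 * grad_w(w, x_adv, y)
print(np.mean((X @ w > 0) == (Y > 0.5)))  # clean accuracy after training
```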
4 Second-Order Attack
We propose a new attack motivated by the "degenerate global minimum". Note that adversarial training is equivalent to solving the optimization problem
$$\min_\theta \; \mathbb{E}_{(x, y) \sim \mathcal{D}} \Big[ \max_{x' : d(x, x') \le \epsilon} L(\theta, x', y) \Big].$$
Its solution is a saddle point of $L$, i.e., the gradient with respect to the input ideally vanishes at the solution, $\nabla_x L(\theta^*, x, y) \approx 0$. In practice, adversarial training therefore often finds a model $\theta^*$ that makes the loss function flat in the neighborhood of a natural example, which leads to inefficient exploration for adversarial examples when performing attacks. This is intuitively the cause of the "degenerate global minimum".
Most current attack methods construct adversarial examples based on the gradient of a loss function. However, according to the analysis above, the first-order derivative is not effective for attacks if the defense model is trained adversarially. This motivates utilizing the second-order derivative of the loss function to construct adversarial examples.
To this end, assume the loss function $L$ is twice differentiable with respect to $x$. Using a Taylor expansion of the difference between the losses on the original and perturbed samples, and assuming the gradient vanishes, we have
$$L(x + r, y) - L(x, y) \approx \tfrac{1}{2} r^\top H r,$$
with $r$ being the perturbation and $H = \nabla_x^2 L(x, y)$ the Hessian matrix of the loss function. Our goal is to find a small perturbation $r$ that maximizes the difference $\tfrac{1}{2} r^\top H r$. Our idea is based on the observation that the optimal perturbation direction should be in the same direction as the first dominant eigenvector, $v_1$, of $H$, that is, $r = c\, v_1$ for some constant $c$. However, computing the eigenvectors of the Hessian matrix requires $O(d^3)$ runtime, with $d$ the dimension of the data. To tackle this issue, we adopt the fast approximation method from (Miyato et al., 2017), which is essentially a combination of the power-iteration method and the finite-difference method, to efficiently find the direction of the eigenvector. Based on this method, the optimal direction, denoted $r^*$, is approximated (detailed derivations are provided in the Supplementary Material) by
$$r^* \approx \frac{\nabla_x L(x + \xi d, y)}{\|\nabla_x L(x + \xi d, y)\|_2}, \qquad (3)$$
where $d$ is a randomly sampled unit vector and $\xi$ is a manually chosen step size. In practice, $d$ is drawn from a centered Gaussian distribution and normalized such that its $\ell_2$ norm is $1$.
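Equation (3) can be sketched numerically. We use a quadratic toy loss whose gradient vanishes at the natural example, mimicking the flat point found by adversarial training; the Hessian and the value of $\xi$ are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy loss L(z) = 0.5 z^T H z: its gradient H z vanishes at z = 0,
# and its Hessian is exactly H (dominant eigenvector e_0).
H = np.diag([10.0, 1.0, 0.5, 0.1])
grad = lambda z: H @ z

def so_direction(grad, x, xi=1e-3):
    d = rng.normal(size=x.shape)   # centered Gaussian noise ...
    d /= np.linalg.norm(d)         # ... normalized to unit l_2 norm
    g = grad(x + xi * d)           # ~ xi * H d: one implicit power step
    return g / np.linalg.norm(g)   # unit-norm attack direction r*

r = so_direction(grad, np.zeros(4))
print(np.linalg.norm(r))           # unit norm by construction
```

Note that a plain gradient step from $x$ itself would have nothing to follow here, since $\nabla_x L(x) = 0$; the Gaussian offset $\xi d$ is what supplies a usable direction.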
This procedure is essentially a stochastic approximation to the optimal second-order direction, where the randomness comes from $d$. To reduce the variance of the approximation, we further take the expectation over the Gaussian noise, yielding $r^* \approx \mathbb{E}_d\big[\nabla_x L(x + \xi d, y)\big]$. Note that choosing the variance of $d$ is equivalent to choosing the step size $\xi$ in (3). Finally, we construct adversarial examples by an iterative update via PGD:
$$x^{t+1} = \Pi_{B_\epsilon(x)}\Big(x^t + \alpha \cdot \mathrm{sign}\big(\mathbb{E}_d[\nabla_x L(x^t + \xi d, y)]\big)\Big),$$
where $\alpha$ is the step size. Intuitively, this method perturbs the example at each iteration and tries to move out of local maxima in the sample space, due to the introduction of random Gaussian noise.
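Putting the pieces together, a sketch of the full iterative attack; a small Monte Carlo average over Gaussian draws stands in for the expectation $\mathbb{E}_d$, and the toy loss, budgets, and step sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def so_attack(x, grad_fn, eps=0.1, alpha=0.03, xi=1e-2, n_noise=8, steps=10):
    x_adv = x.copy()
    for _ in range(steps):
        # Monte Carlo estimate of E_d[grad L(x_adv + xi * d)]
        g = np.mean([grad_fn(x_adv + xi * rng.normal(size=x.shape))
                     for _ in range(n_noise)], axis=0)
        # signed step followed by projection onto the eps l_inf ball
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
    return x_adv

# Toy loss that is flat exactly at x: L(z) = 0.5 (z - x)^T H (z - x).
H = np.diag([10.0, 1.0])
x = np.array([0.3, 0.7])
grad_fn = lambda z: H @ (z - x)

x_adv = so_attack(x, grad_fn)
print(np.max(np.abs(x_adv - x)) <= 0.1 + 1e-12)   # True: stays in the ball
```

Because the gradient vanishes exactly at $x$, a plain PGD step starting from $x$ would be stuck at $\mathrm{sign}(0) = 0$; the Gaussian perturbations are what let this update escape the flat region.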
We perform experiments on the MNIST data set to validate our claims on adversarial training.
The architecture of our model follows the one used in (Madry et al., 2017). Specifically, the model contains two convolutional layers with 32 and 64 filters, each followed by max-pooling, and a fully connected layer of size 1024. Image intensities are scaled to $[0, 1]$, and the sizes of attacks are rescaled accordingly. In all the experiments, we bound the $\ell_\infty$ norm of perturbations by $\epsilon_\infty$ and the $\ell_2$ norm by $\epsilon_2$.
We evaluate PGD and the proposed second-order (S-O) attacks in four settings: adversarial training with PGD adversarial examples (Madry et al., 2017), Tradeoff-inspired Adversarial Defense (TRADES) (Zhang et al., 2019), ensemble adversarial training (Tramèr et al., 2017), and adversarial training via domain adaptation (Song et al., 2018).
Specifically, we first construct adversarial examples with $\ell_\infty$ and $\ell_2$ constraints during training, respectively. Table 1 shows, as expected, that Madry's model and TRADES successfully defend against attacks with the same norm constraint used during training. However, while adversarial training stays robust against the PGD attack, the S-O attack can effectively reduce the accuracy of both models when a different norm is used. This suggests that standard adversarial training is not norm-agnostic.
It is natural to wonder whether the issue can be fixed by including both kinds of adversarial examples during training. To this end, we conduct additional experiments with mixed adversarial examples, that is, alternating between $\ell_\infty$- and $\ell_2$-bounded examples during adversarial training. Using the mixed strategy, the accuracy under both attacks is no longer reduced to almost zero, but the overall performance is still unsatisfying. We conclude that mixing adversarial examples barely helps to improve norm-agnostic robustness.
According to our analysis, the poor performance of adversarial training is due to the "degenerate global minimum"; we therefore expect ensemble adversarial training to fix the problem, as suggested in (Tramèr et al., 2017). The results in Table 1 suggest that ensemble adversarial training and domain adaptation partially fix the issue, although the accuracy under attack is still far from ideal.
In addition, we found two further phenomena that support our claims. First, the relatively good performance of ensemble adversarial training implies that the vulnerability to adversarial perturbations with different norms is indeed caused by the "degenerate global minimum" issue, similar to single-step adversarial training. Second, the performance of the PGD and S-O attacks becomes similar on the ensemble adversarial training model. This implies that the effectiveness of S-O attacks relative to PGD attacks is due to exploitation of the "degenerate global minimum" issue.
In Figure 1, we take a closer look at the behavior of the attack methods by plotting the average norm of the gradient of the loss function with respect to the adversarial examples during the construction process. Specifically, we compute $\frac{1}{|B|} \sum_{i \in B} \|\nabla_x L(\theta, x_i^t, y_i)\|_2$ for each iteration $t$, where $B$ is the index set of a batch.
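The monitored quantity is straightforward to compute; a sketch with two hypothetical loss surfaces around the same batch of points, one flat (mimicking an adversarially trained model) and one steep (a naturally trained model):

```python
import numpy as np

def avg_grad_norm(grad_fn, xs):
    """Average l_2 gradient norm over a batch, the quantity plotted in Fig. 1."""
    return float(np.mean([np.linalg.norm(grad_fn(x)) for x in xs]))

xs = [np.array([0.1, -0.2]), np.array([0.3, 0.4])]
steep = lambda x: 10.0 * x    # gradient of the steep loss 5 ||x||^2
flat = lambda x: 0.1 * x      # gradient of the flat loss 0.05 ||x||^2
print(avg_grad_norm(steep, xs) > avg_grad_norm(flat, xs))  # True
```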
We monitor this quantity in three settings: a naturally trained model attacked by PGD, TRADES attacked by PGD, and TRADES attacked by S-O. The difference between the blue and orange lines shows that the gradient norms of the adversarially trained model are much smaller than those of the naturally trained model under PGD attacks, validating our explanation in Section 4 that an adversarially trained model tends to make the loss function "flat" in the neighborhood of natural examples, which makes PGD attacks inefficient. The difference between the orange and green lines shows that the S-O attack is able to construct adversarial examples more efficiently by correctly finding the steepest direction, which explains why adversarially trained models are vulnerable to it.
Finally, one may argue that the perturbation size is so large that it violates the assumption that adversarial perturbations are visually negligible. We therefore perform the S-O attack with a smaller perturbation size, which still reduces the accuracy substantially. We also show some randomly selected adversarial examples that are misclassified by TRADES in Figure 2. One can observe that although noticeable, the perturbations are limited and do not change the semantic meaning of the images.
It is worth noting that we do not observe similar results on CIFAR-10. We believe that on CIFAR-10 it is difficult to reach even a "degenerate global minimum" for adversarial training, due to the high dimensionality of the input space. This explains why adversarial training is still far from perfectly robust even against PGD (Madry et al., 2017; Zhang et al., 2019).
In this paper, we show that multi-step adversarial training models suffer from the "degenerate global minimum" and thus are not norm-agnostic. Our proposed attack method is capable of constructing adversarial examples that reduce the accuracy of state-of-the-art adversarial training when different norms are used for training and attacking.
On the other hand, ensemble adversarial training can mitigate the issue and thus should be considered as a standard procedure for adversarial training, even though it only obtains moderate adversarial robustness.
In general, considering that state-of-the-art results in adversarial defense are often achieved by adversarial training, we believe it is important to check norm-agnostic robustness when designing adversarial defense models.
- Athalye et al. (2018) Athalye, A., Carlini, N., and Wagner, D. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420, 2018.
- Brendel et al. (2017) Brendel, W., Rauber, J., and Bethge, M. Decision-based adversarial attacks: Reliable attacks against black-box machine learning models. arXiv preprint arXiv:1712.04248, 2017.
- Brendel et al. (2018) Brendel, W., Rauber, J., Kurakin, A., Papernot, N., Veliqi, B., Salathé, M., Mohanty, S. P., and Bethge, M. Adversarial vision challenge. arXiv preprint arXiv:1808.01976, 2018.
- Brown et al. (2018) Brown, T. B., Carlini, N., Zhang, C., Olsson, C., Christiano, P., and Goodfellow, I. Unrestricted adversarial examples. arXiv preprint arXiv:1809.08352, 2018.
- Carlini & Wagner (2017) Carlini, N. and Wagner, D. Towards evaluating the robustness of neural networks. In Security and Privacy (SP), 2017 IEEE Symposium on, pp. 39–57. IEEE, 2017.
- Goodfellow et al. (2014) Goodfellow, I. J., Shlens, J., and Szegedy, C. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
- He et al. (2016a) He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016a.
- He et al. (2016b) He, K., Zhang, X., Ren, S., and Sun, J. Identity mappings in deep residual networks. In European conference on computer vision, pp. 630–645. Springer, 2016b.
- Huang et al. (2017) Huang, G., Liu, Z., van der Maaten, L., and Weinberger, K. Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708, 2017.
- Krizhevsky et al. (2012) Krizhevsky, A., Sutskever, I., and Hinton, G. E. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105, 2012.
- Kurakin et al. (2016) Kurakin, A., Goodfellow, I., and Bengio, S. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236, 2016.
- Madry et al. (2017) Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
- Miyato et al. (2017) Miyato, T., Maeda, S.-i., Koyama, M., and Ishii, S. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. arXiv preprint arXiv:1704.03976, 2017.
- Moosavi-Dezfooli et al. (2016) Moosavi-Dezfooli, S.-M., Fawzi, A., and Frossard, P. Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2574–2582, 2016.
- Papernot et al. (2016) Papernot, N., McDaniel, P., Wu, X., Jha, S., and Swami, A. Distillation as a defense to adversarial perturbations against deep neural networks. In Security and Privacy (SP), 2016 IEEE Symposium on, pp. 582–597. IEEE, 2016.
- Song et al. (2018) Song, C., He, K., Wang, L., and Hopcroft, J. E. Improving the generalization of adversarial training with domain adaptation. arXiv preprint arXiv:1810.00740, 2018.
- Szegedy et al. (2013) Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., and Fergus, R. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
- Tramèr et al. (2017) Tramèr, F., Kurakin, A., Papernot, N., Goodfellow, I., Boneh, D., and McDaniel, P. Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204, 2017.
- Zagoruyko & Komodakis (2016) Zagoruyko, S. and Komodakis, N. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
- Zhang et al. (2019) Zhang, H., Yu, Y., Jiao, J., Xing, E. P., Ghaoui, L. E., and Jordan, M. I. Theoretically principled trade-off between robustness and accuracy. arXiv preprint arXiv:1901.08573, 2019.
Appendix A Fast Approximation Method
The power-iteration method (Golub & van der Vorst, 2001) allows one to compute the dominant eigenvector $v_1$ of a matrix $H$. Let $d_0$ be a randomly sampled unit vector that is not perpendicular to $v_1$; the iterative calculation
$$d_{k+1} = \frac{H d_k}{\|H d_k\|_2}$$
leads to $d_k \to v_1$. Given that $H$ is the Hessian matrix of $L$, we further use the finite-difference method to reduce the computational complexity:
$$H d_k \approx \frac{\nabla_x L(x + \xi d_k, y) - \nabla_x L(x, y)}{\xi},$$
where $\xi$ is the step size. Since $\nabla_x L(x, y) \approx 0$ for an adversarially trained model, taking only one iteration gives an approximation that requires only the first-order derivative:
$$r^* \approx \frac{\nabla_x L(x + \xi d_0, y)}{\|\nabla_x L(x + \xi d_0, y)\|_2},$$
which gives equation (3).
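A numerical sketch of the full procedure on a quadratic loss, for which the finite difference is exact and power iteration converges to the dominant eigenvector; the matrix, step size, and iteration count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def dominant_direction(grad, x, xi=1e-4, iters=50):
    """Power iteration where each Hessian-vector product H d is replaced by
    the finite difference (grad L(x + xi d) - grad L(x)) / xi."""
    d = rng.normal(size=x.shape)
    d /= np.linalg.norm(d)
    g0 = grad(x)
    for _ in range(iters):
        hd = (grad(x + xi * d) - g0) / xi   # finite-difference H d
        d = hd / np.linalg.norm(hd)          # power-iteration normalization
    return d

# Quadratic loss L(z) = 0.5 z^T H z has gradient H z and Hessian exactly H.
H = np.diag([4.0, 1.0, 0.25])
grad = lambda z: H @ z

d = dominant_direction(grad, np.array([0.2, -0.1, 0.3]))
print(np.round(np.abs(d), 3))   # approx [1, 0, 0]: the dominant eigenvector
```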