Deep neural networks (DNNs) have achieved significant success when applied to many challenging machine learning tasks. For example, DNNs have obtained state-of-the-art accuracy on large-scale image classification Krizhevsky et al. (2012); LeCun et al. (1998). At the same time, vulnerability to adversarial examples, an undesired property of DNNs, has drawn attention in the deep-learning community Szegedy et al. (2013); Goodfellow et al. (2014). Generally speaking, adversarial examples are perturbed versions of the original data that successfully fool a classifier. For example, in the image domain, adversarial examples are images transformed from natural images with visually negligible changes, yet they lead to different classification results Goodfellow et al. (2014). The existence of adversarial examples has raised many concerns within the deep-learning community, especially in scenarios with a high risk of misclassification, such as autonomous driving.
To tackle adversarial examples, many works have been proposed to improve the robustness of DNNs Papernot et al. (2016); Meng & Chen (2017), referred to as defense models. However, most of these defense methods were later attacked successfully by new attack methods Carlini & Wagner (2017a; b). For example, Athalye et al. (2018) conducted a case study that successfully attacked seven defense methods submitted to ICLR 2018.
One model that has demonstrated good performance against strong attacks, and has thus far not been successfully attacked, is based on adversarial training Goodfellow et al. (2014); Madry et al. (2017). Adversarial training constructs a defense model by augmenting the training set with adversarial examples. Though successful for adversarial defense, the underlying mechanism is still unclear. In this paper, we explain the reason for the good performance, and show that Madry's defense model Madry et al. (2017), a model adversarially trained against $\ell_\infty$ attacks, is not robust against $\ell_2$ attacks even on the MNIST data set. In particular, we develop a new attack method based on an approximated second-order derivative that drives its accuracy below that of naturally trained baseline models. Considering that Sharma & Chen (2017) proposed $\ell_1$-based adversarial examples that break Madry's model, we believe the robustness of adversarially trained models may overfit to the choice of norm.
Our findings lead to a concern that most existing defense methods are heuristically driven, and thus any defense method that cannot provide a theoretically provable robustness guarantee is potentially vulnerable to future attacks. In fact, Fawzi et al. (2018) showed that if the data are generated from a generative model with latent representations, then no classifier is robust to adversarial perturbations when the latent space is sufficiently large and the generative model is sufficiently smooth. Recently, there have also been efforts devoted to defenses with provable robustness guarantees Kolter & Wong (2017); Raghunathan et al. (2018); Sinha et al. (2017). However, most of them either rely on strong assumptions, such as on the structure of the model or on the smoothness of the loss function, or are difficult to extend to large-scale data sets, the most common application scenario for DNNs.
Recently, Mathias et al. (2018) developed theoretical insights into certifiably robust prediction, by building a connection between differential privacy and model robustness. They show that adding properly chosen noise to the classifier leads to certifiably robust prediction. More importantly, their framework allows one to calculate an upper bound on the size of attacks a model is robust to.
Building on the idea in Mathias et al. (2018), as our second contribution, we conduct an analysis based on the Rényi divergence Van Erven & Harremos (2014), and derive a higher upper bound on the tolerable size of attacks than that of Mathias et al. (2018). In addition, we point out a connection between adversarial defense and robustness to random noise. Based on this, we introduce a more comprehensive framework that incorporates stability training to improve classification accuracy. Intuitively, our framework uses a model that is more robust to random noise, which yields higher classification accuracy when random noise is added for adversarial defense. On MNIST and CIFAR-10, our experiments demonstrate that the proposed defense yields stronger robustness to adversarial attacks than competing models.
2.1 Notation

We consider the task of image classification. Natural images are represented by $x \in \mathcal{X}$, where $\mathcal{X} \subseteq \mathbb{R}^{H \times W \times C}$ represents the image space, with $H$, $W$, and $C$ the height, width, and number of channels of an image, respectively. An image classifier over $k$ classes is considered as a function $c: \mathcal{X} \rightarrow \{1, \dots, k\}$. In this paper, we only consider classifiers constructed by DNNs. To better present our framework, we define a stochastic classifier, a function $f$ over $\mathcal{X}$ whose output is a multinomial distribution over $\{1, \dots, k\}$, i.e., $P(f(x) = i) = p_i$ with $\sum_i p_i = 1$. One can classify $x$ by picking $\arg\max_i p_i$. Note this distribution is different from the one generated by softmax.
2.2 Rényi Divergence
Our theoretical result depends on the Rényi divergence, defined as follows Van Erven & Harremos (2014):
Definition 1 (Rényi Divergence). For two probability distributions $P$ and $Q$, the Rényi divergence of order $\alpha > 1$ is

$$D_\alpha(P \,\|\, Q) = \frac{1}{\alpha - 1} \log \sum_i \frac{p_i^\alpha}{q_i^{\alpha - 1}}.$$

Note that when $\alpha \rightarrow 1$, the Rényi divergence converges to the Kullback-Leibler divergence, i.e., $\lim_{\alpha \rightarrow 1} D_\alpha(P \,\|\, Q) = \mathrm{KL}(P \,\|\, Q)$.
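As a numerical sanity check of this limiting behavior, the following sketch (the function name and example distributions are our own) evaluates the Rényi divergence of Definition 1 for discrete distributions and confirms that it approaches the KL divergence as $\alpha \rightarrow 1$:

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """Renyi divergence of order alpha between two discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    if alpha == 1.0:  # KL divergence as the limiting case
        return float(np.sum(p * np.log(p / q)))
    return float(np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0))

p = [0.7, 0.2, 0.1]
q = [0.5, 0.3, 0.2]
# As alpha -> 1, the Renyi divergence approaches the KL divergence.
print(abs(renyi_divergence(p, q, 1.0001) - renyi_divergence(p, q, 1.0)) < 1e-3)
```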
2.3 Adversarial Examples
Given a classifier $c$ and an image $x$, an adversarial example $x'$ satisfies $D(x, x') < \epsilon$ for some small $\epsilon > 0$ and $c(x) \neq c(x')$, where $D$ is some distance metric; i.e., $x'$ is close to $x$ but yields a different classification result. The distance is often described in terms of an $\ell_p$ metric, and in most of the literature the $\ell_2$ and $\ell_\infty$ metrics are considered. In this paper, we focus on the $\ell_2$ metric.
Adversarial examples are often constructed by iterative optimization methods. Previous work has proposed a number of adversarial attack methods, such as the Fast Gradient Sign Method (FGSM) Goodfellow et al. (2014), along with its multi-step variant FGSM$^k$ Kurakin et al. (2016), which is equivalent to exploring adversarial examples that increase the classification loss using projected gradient descent (PGD) Madry et al. (2017):

$$x^{t+1} = \Pi_{B_\epsilon(x)}\left( x^t + \alpha\, \mathrm{sign}\left( \nabla_x L(\theta, x^t, y) \right) \right),$$

where $\Pi_{B_\epsilon(x)}$ is the projection operation that ensures adversarial examples stay in the $\epsilon$-ball $B_\epsilon(x)$ around $x$. In Madry et al. (2017), it was also shown that a PGD attack is a universal adversary among all first-order attack methods: their analysis and experimental evidence suggest that any adversarial attack method that only incorporates gradients of the loss function w.r.t. the input cannot do significantly better than PGD.
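For concreteness, the PGD iteration can be sketched as below; `loss_grad`, the step sizes, and the $\ell_\infty$ projection via clipping are illustrative assumptions rather than any particular published implementation:

```python
import numpy as np

def pgd_attack(loss_grad, x, epsilon, step, n_iter, rng=None):
    """Sketch of l_inf PGD: ascend the sign of the loss gradient, then
    project back into the epsilon-ball around the natural example x."""
    rng = np.random.default_rng(rng)
    x_adv = x + rng.uniform(-epsilon, epsilon, size=x.shape)  # random start
    for _ in range(n_iter):
        x_adv = x_adv + step * np.sign(loss_grad(x_adv))      # FGSM-style step
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)      # projection
    return np.clip(x_adv, 0.0, 1.0)                           # valid pixel range
```

With a loss whose gradient is everywhere positive, the iterate saturates at the boundary of the $\epsilon$-ball, as expected of projected ascent.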
2.4 Adversarial Training
Adversarial training constructs adversarial examples and includes them in the training set to train a new, more robust classifier. This method is intuitive, and has achieved great success in terms of defense Goodfellow et al. (2014); Madry et al. (2017). The motivation behind adversarial training is that finding a robust model against adversarial examples is equivalent to solving the following saddle-point problem:

$$\min_\theta \; \mathbb{E}_{(x, y) \sim \mathcal{D}} \left[ \max_{\delta \in \mathcal{S}} L(\theta, x + \delta, y) \right].$$
The inner maximization is equivalent to constructing adversarial examples, while the outer minimization is the standard training procedure for loss minimization.
3 Second-Order Adversarial Attack
In this section, we propose an efficient second-order adversarial attack method. As motivation, note that most current attack methods construct adversarial examples based on the gradient of a loss function. However, a first-order derivative is not effective for attacks when the defense model is trained adversarially.
To see this, first note that adversarial training is equivalent to solving the optimization problem $\min_\theta \max_{\delta \in \mathcal{S}} L(\theta, x + \delta, y)$. The solution $\theta^*$ is a saddle point, i.e., it not only converges to a local minimum in the parameter space, but also to a local maximum in the sample space, at which the gradient $\nabla_x L$ ideally vanishes as training converges. In practice, an adversarially trained defense model often finds $\theta^*$ that makes the loss function flat in the neighborhood of a natural example $x$, which leads to inefficient exploration for adversarial examples by attack methods. This motivates utilizing the second-order derivative of the loss function to construct adversarial examples.
Specifically, assume the loss function $L$ is twice differentiable with respect to the input $x$. Using a Taylor expansion on the difference between the losses on the original and perturbed samples, and assuming the gradient vanishes, we have

$$L(x + \delta) - L(x) \approx \nabla_x L(x)^\top \delta + \frac{1}{2} \delta^\top H \delta \approx \frac{1}{2} \delta^\top H \delta, \tag{4}$$

where $\delta$ is the perturbation and $H$ is the Hessian matrix of the loss function. Our goal is to find a small perturbation $\delta$ that maximizes the difference (4). Our idea is based on the observation that the optimal perturbation direction should be in the same direction as the dominant eigenvector, $v_1$, of $H$, that is $\delta = c\, v_1$ for some constant $c$. However, computing the eigenvectors of the Hessian matrix requires $O(d^3)$ runtime, with $d$ the dimension of the data. To tackle this issue, we adopt the fast approximation method from Miyato et al. (2017), which is essentially a combination of the power-iteration method and the finite-difference method, to efficiently find the direction of the dominant eigenvector. Based on this method, the optimal direction, denoted $d$, is approximated (detailed derivations are provided in the Appendix) by

$$d \approx \frac{\nabla_x L(x + \xi u)}{\left\| \nabla_x L(x + \xi u) \right\|_2}, \tag{5}$$
where $u$ is a randomly sampled unit vector and $\xi$ is a manually chosen step size. In practice, $u$ is drawn from a centered Gaussian distribution and normalized such that its $\ell_2$ norm is $1$. This procedure is essentially a stochastic approximation to the optimal direction, where the randomness comes from $u$. To reduce the variance of the approximation, we further take the expectation over the Gaussian noise, yielding $d \propto \mathbb{E}_{\xi'}\left[ \nabla_x L(x + \xi') \right]$ with $\xi' \sim \mathcal{N}(0, \sigma^2 I)$. Note that choosing $\sigma$ is equivalent to choosing the step size $\xi$ in (5). Finally, we construct adversarial examples by an iterative update via PGD:

$$x^{t+1} = \Pi_{B_\epsilon(x)}\left( x^t + \alpha\, \mathbb{E}_{\xi'}\left[ \nabla_x L(x^t + \xi') \right] \right). \tag{6}$$
Intuitively, this method perturbs the example at each iteration and tries to move out of the local maximum in the sample space, due to the introduction of random Gaussian noise.
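A minimal sketch of this iterative update, assuming a caller-supplied gradient oracle, a normalized step, and an $\ell_2$ projection; the single Gaussian probe per iteration stands in for the expectation over noise (all names are illustrative):

```python
import numpy as np

def second_order_attack(loss_grad, x, epsilon, step, sigma, n_iter, rng=None):
    """Sketch of the second-order / EOT-style update: take gradient steps at
    Gaussian-perturbed points, then project into the l2 epsilon-ball around x."""
    rng = np.random.default_rng(rng)
    x_adv = x.copy()
    for _ in range(n_iter):
        xi = sigma * rng.standard_normal(x.shape)  # Gaussian probe
        g = loss_grad(x_adv + xi)                  # gradient at the noisy point
        g = g / (np.linalg.norm(g) + 1e-12)        # normalized step direction
        x_adv = x_adv + step * g
        delta = x_adv - x                          # project into the l2 ball
        norm = np.linalg.norm(delta)
        if norm > epsilon:
            x_adv = x + delta * (epsilon / norm)
    return x_adv
```

Even when the gradient at $x$ itself vanishes, the noisy probe yields a nonzero direction, which is the point of the method.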
Connection to the Expectation over Transformation (EOT) Attack
We can think of (6) as gradient ascent for maximizing the objective function $\mathbb{E}_{\xi'}[L(x + \xi')]$ of $x$. Note $\mathbb{E}_{\xi'}\left[ \nabla_x L(x + \xi') \right] = \nabla_x \mathbb{E}_{\xi'}\left[ L(x + \xi') \right]$. Consequently, (6) is equivalent to constructing adversarial examples with the EOT method proposed by Athalye & Sutskever (2017), when a defense model adds Gaussian noise to the original data.
In general, EOT tries to construct adversarial examples against a defense model with randomization by solving the optimization problem $\max_{x'} \mathbb{E}_{t \sim T}\left[ L(t(x')) \right]$, where $T$ is the randomization in the defense model. Intuitively, EOT aims to estimate the correct gradient by excluding the effect of randomization via expectation. If a defense model adds Gaussian noise to its inputs, the corresponding optimization problem for EOT becomes $\max_{x'} \mathbb{E}_{\xi'}\left[ L(x' + \xi') \right]$, which is the same as the approximate objective of our second-order attack method.
Note a significant difference between our method and EOT is that the random noise added in EOT depends on its corresponding defense model, whereas randomness in our method is independent of a defense model. Furthermore, such a connection suggests that our attack method is powerful for not only defenses based on adversarial training, but also for defenses based on randomization. In the experiments, we show our attack method dramatically reduces the accuracy of a defense model proposed by Madry et al. (2017).
4 Certifiable Robustness
The effectiveness of the proposed attack method, as we show in the experiments, suggests that defending against adversarial attacks is extremely difficult. In fact, the existence of robust models against arbitrary adversarial attacks is still questionable. Therefore, any defense model without a certifiable robustness guarantee may always be bypassed by stronger attacks.
Inspired by the PixelDP method Mathias et al. (2018), we propose a framework that enables certifiable robustness on any classifier. Intuitively, our approach adds random noise to pixels of adversarial examples before classification, to eliminate the effects of adversarial perturbations. The most important feature of this framework is that it allows calculation of an upper bound on the size of attacks it is robust to.
Our approach is summarized in Algorithm 1. In the following, we develop theory to prove the certifiable robustness of the proposed algorithm. Our goal is to show that if the classification of $x$ in Algorithm 1 is $i$, then for any example $x'$ such that $\|x - x'\|_2 \leq L$, the classification of $x'$ is also $i$.
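Algorithm 1 can be sketched as below, where `classifier`, the number of noise draws, and the majority-vote estimate of the output distribution are illustrative choices of ours:

```python
import numpy as np

def noisy_predict(classifier, x, sigma, n_samples=100, n_classes=10, rng=None):
    """Sketch of Algorithm 1: add i.i.d. Gaussian noise to the input, collect
    class frequencies over repeated draws, and return the majority class
    together with the estimated output distribution p."""
    rng = np.random.default_rng(rng)
    counts = np.zeros(n_classes)
    for _ in range(n_samples):
        counts[classifier(x + sigma * rng.standard_normal(x.shape))] += 1
    p = counts / n_samples
    return int(np.argmax(p)), p
```

The estimated probabilities $p$ are exactly the quantities the bound below is evaluated on.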
To prove our claim, first recall that a stochastic classifier $f$ over $x$ is a classifier whose output follows a multinomial distribution over $\{1, \dots, k\}$ with probabilities $p_i = P(f(x) = i)$. In this context, robustness to an adversarial example $x'$ generated from $x$ means $\arg\max_i p_i = \arg\max_i p'_i$, with $p_i = P(f(x) = i)$ and $p'_i = P(f(x') = i)$, where $P(\cdot)$ denotes the probability of a specific output value. In the remainder of this section, we show Algorithm 1 achieves such robustness based on the Rényi divergence, starting with the following lemma.
Lemma 1. Let $P = (p_1, \dots, p_k)$ and $Q = (q_1, \dots, q_k)$ be two multinomial distributions over the same index set $\{1, \dots, k\}$. If the indices of the largest probabilities do not match on $P$ and $Q$, that is $\arg\max_i p_i \neq \arg\max_i q_i$, then

$$D_\alpha(Q \,\|\, P) \geq -\log\left( 1 - p_{(1)} - p_{(2)} + 2\left( \tfrac{1}{2}\left( p_{(1)}^{1-\alpha} + p_{(2)}^{1-\alpha} \right) \right)^{\frac{1}{1-\alpha}} \right), \tag{7}$$

where $p_{(1)}$ and $p_{(2)}$ are the largest and second-largest probabilities in the $p_i$'s.
To simplify notation, we define the generalized mean $M_p(a, b) = \left( \frac{a^p + b^p}{2} \right)^{1/p}$. The RHS in condition (7) then becomes $-\log\left( 1 - p_{(1)} - p_{(2)} + 2M_{1-\alpha}\left(p_{(1)}, p_{(2)}\right) \right)$.
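The generalized mean is straightforward to compute; a small sketch (the function name is our own, and $p \neq 0$ is assumed):

```python
import numpy as np

def generalized_mean(xs, p):
    """Generalized (power) mean M_p(x_1,...,x_n) = ((1/n) sum x_i^p)^(1/p),
    for nonzero p and positive inputs."""
    xs = np.asarray(xs, float)
    return float(np.mean(xs**p) ** (1.0 / p))
```

It is monotone in $p$: $p = 1$ gives the arithmetic mean, $p = -1$ the harmonic mean, and $p \to -\infty$ the minimum, so $M_{1-\alpha}$ shrinks as $\alpha$ grows.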
Lemma 1 gives a lower bound on the Rényi divergence required to change the index of the maximum of $P$: for any distribution $Q$, if $D_\alpha(Q \,\|\, P) < -\log\left( 1 - p_{(1)} - p_{(2)} + 2M_{1-\alpha}\left(p_{(1)}, p_{(2)}\right) \right)$, the indices of the maxima of $P$ and $Q$ must be the same. Based on Lemma 1, we obtain our main theorem on certifiable robustness, validating our claim:
Theorem 2. Suppose we have $x \in \mathcal{X}$, and a potential adversarial example $x'$ such that $\|x - x'\|_2 \leq L$. Given a $k$-classifier $c$, let $f(x) = c(x + \xi)$ with $\xi \sim \mathcal{N}(0, \sigma^2 I)$, and write $p_i = P(f(x) = i)$ and $p'_i = P(f(x') = i)$. If the following condition is satisfied, with $p_{(1)}$ and $p_{(2)}$ being the first and second largest probabilities in the $p_i$'s:

$$\sup_{\alpha > 1} \; \frac{2\sigma^2}{\alpha} \left( -\log\left( 1 - p_{(1)} - p_{(2)} + 2M_{1-\alpha}\left(p_{(1)}, p_{(2)}\right) \right) \right) \geq L^2, \tag{8}$$

then $\arg\max_i p_i = \arg\max_i p'_i$, i.e., the classification of $x'$ is the same as that of $x$.
With Theorem 2, we can enable certifiable robustness for any classifier by adding i.i.d. Gaussian noise to the pixels of inputs during testing, as done in Algorithm 1. It provides an upper bound on the size of attacks that a classifier is robust to. Since the upper bound only depends on the output distribution $p_i$, one can evaluate the bound based only on natural examples. Note the evaluation requires estimating and computing confidence intervals for $p_{(1)}$ and $p_{(2)}$, but we omit the details as this is a standard statistical procedure. In the experiments, we use the end points of the confidence intervals for $p_{(1)}$ and $p_{(2)}$.
Based on the properties of the generalized mean, one can show that the upper bound is larger when the difference between $p_{(1)}$ and $p_{(2)}$ is larger. This is consistent with the intuition that a larger gap between $p_{(1)}$ and $p_{(2)}$ indicates more confident classification. In other words, more confident classification, in the sense of more probability concentrated on one class, is beneficial for robustness.
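The bound can be evaluated by a grid search over $\alpha$; the following sketch (the grid range and function name are our own choices) returns the largest certified $\ell_2$ radius implied by estimates of $p_{(1)}$, $p_{(2)}$, and $\sigma$:

```python
import numpy as np

def certified_radius(p1, p2, sigma, alphas=np.linspace(1.001, 50.0, 2000)):
    """Sketch of the certified l2 radius implied by the condition
    sup_{alpha>1} (2*sigma^2/alpha) * (-log(1 - p1 - p2 + 2*M)) >= L^2,
    where M = M_{1-alpha}(p1, p2) is the generalized mean (grid over alpha)."""
    best = 0.0
    for a in alphas:
        m = (0.5 * (p1**(1.0 - a) + p2**(1.0 - a))) ** (1.0 / (1.0 - a))
        arg = 1.0 - p1 - p2 + 2.0 * m
        if 0.0 < arg < 1.0:
            best = max(best, np.sqrt(2.0 * sigma**2 / a * (-np.log(arg))))
    return best
```

As expected, a confident prediction ($p_{(1)}$ near 1, small $p_{(2)}$) certifies a larger radius than a near-tie.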
It is worth mentioning that, as pointed out in Mathias et al. (2018), the noise need not be added directly to the inputs; it can instead be added to the first layer of a DNN. Given the Lipschitz constant of the first layer, one can still calculate an upper bound using our analysis. We omit the details here for simplicity.
Compared to the upper bound in PixelDP, our upper bound is strictly higher. We show the improvement using a simulation detailed in the Appendix. In the next section, we propose a simple strategy to improve the empirical performance of this framework.
5 Improved Certifiable Robustness
In Algorithm 1, for a classifier with a standard DNN, the added Gaussian noise is harmful to the classification accuracy on the original data. As discussed above, inaccurate prediction, in the sense that $p_{(1)}$ and $p_{(2)}$ are close, leads to weak robustness. Fortunately, one strength of Algorithm 1 is that it places no particular requirement on the classifier $c$, which yields the flexibility to modify $c$ to make it more robust to Gaussian noise.
Note robustness to Gaussian noise is much easier to achieve than robustness to carefully crafted adversarial examples. In PixelDP, the authors incorporate noise by directly adding the same noise during the training procedure. However, we note there have been notable efforts at developing DNNs robust to natural perturbations Xie et al. (2012); Zhang et al. (2017); yet these methods fail to defend models from adversarial attacks, as they are not designed for that task. Our framework allows us to adapt these methods to improve classification accuracy when Gaussian noise is present, thereby improving the robustness of our model. We emphasize that this establishes a connection between robustness to adversarial examples and robustness to natural perturbations Xie et al. (2012); Zhang et al. (2017), which brings a much wider scope of literature into the adversarial-defense community.
5.1 Stability Training
The idea of introducing perturbations during training to improve model robustness has been studied in many works. In Bachman et al. (2014), the authors regard perturbing models as constructing pseudo-ensembles, to improve semi-supervised learning. More recently, Zheng et al. (2016) used a similar training strategy, named stability training, to improve classification robustness on noisy images.
For any natural image $x$, stability training encourages its perturbed version $x'$ to yield a similar classification result under a classifier $f$, i.e., $D(f(x), f(x'))$ is small for some distance measure $D$. Specifically, given a loss function $L_0$ for the original classification task, stability training introduces a regularized objective $L = L_0 + \gamma L_{\mathrm{stability}}(x, x')$, where $\gamma$ controls the strength of the stability term. As we are interested in a classification task, we use cross-entropy as the distance between $f(x)$ and $f(x')$, yielding the stability loss $L_{\mathrm{stability}} = -\sum_i P(f(x) = i) \log P(f(x') = i)$, where the probabilities are generated after softmax.
In this paper, we add i.i.d. Gaussian noise to each pixel of to construct , as suggested in Zheng et al. (2016). Note that this is in the same spirit as adversarial training, but is only designed to improve the classification accuracy under a Gaussian perturbation.
6 Experiments

We conduct experiments to evaluate the performance of our proposed methods in terms of attack effectiveness and defense robustness. Our methods are tested on the MNIST and CIFAR-10 data sets. The architecture of our model follows the ones used in Madry et al. (2017). Specifically, for the MNIST data set, the model contains two convolutional layers with 32 and 64 filters, each followed by max-pooling, and a fully connected layer of size 1024. For the CIFAR-10 data set, we use a wide ResNet model Zagoruyko & Komodakis (2016); He et al. (2016), consisting of residual units with wide convolutional filters. The implementation details are provided in the Appendix. For both data sets, image intensities are scaled to $[0, 1]$, and the sizes of attacks are rescaled accordingly; for reference, a distortion of $\epsilon$ in the $[0, 1]$ scale corresponds to $255\epsilon$ in the $[0, 255]$ scale.
6.1 Theoretical Bound
We first evaluate the theoretical bounds on the size of attacks. With Algorithm 1, we are able to classify a natural image and calculate an upper bound on the size of attacks for this particular image. Thus, for a given attack size $L$, we know the classification result is robust if $L$ is smaller than this bound.
Further, if the classification of a natural example $x$ is correct and simultaneously robust to attacks of size $L$, we know any adversarial example $x'$ with $\|x - x'\|_2 \leq L$ will be classified correctly. Therefore, we can calculate the proportion of such examples in the test set to determine a lower bound on the accuracy under attacks of size $L$. We plot the lower bounds for various noise levels $\sigma$ for both MNIST and CIFAR-10 in Figure 1. To interpret the result, for a given $\sigma$ on MNIST, the corresponding curve gives the minimum accuracy Algorithm 1 achieves under attacks whose $\ell_2$-norm sizes are smaller than each value of $L$.
A clear trade-off between the tolerable attack size and the accuracy lower bound is present in Figure 1. This is anticipated: our bound (8) indicates that a higher standard deviation $\sigma$ results in a higher proportion of robust classifications, but in practice larger noise also leads to worse classification accuracy.
6.2 Empirical Results
We next perform classification and measure accuracy on real adversarial examples to evaluate the performance of our attack and defense methods. We first apply our attack method to the state-of-the-art defense model proposed in Madry et al. (2017), based on adversarial training. In all experiments, we focus on five settings, summarized in Table 1. The adversarially trained model is trained against an $\ell_\infty$ PGD attack. All the attacks are $\ell_2$ attacks.
| Number | Defense Model | Attack Method |
|--------|---------------|---------------|
| 1 | Naturally trained model | PGD |
| 2 | Adversarially trained model (Madry's) | PGD |
| 3 | Naturally trained model | Second-order (S-O) attack |
| 4 | Adversarially trained model (Madry's) | Second-order (S-O) attack |
| 5 | Stability-trained model with Gaussian noise (STN) | Second-order (S-O) attack |

Table 1: The five settings of defense models and attack methods.
In the first plot in Figure 2, we monitor the average norm of the gradient of the loss function during the construction of adversarial examples. Specifically, we compute $\frac{1}{|B|} \sum_{i \in B} \left\| \nabla_x L(x_i^t) \right\|_2$ at each iteration $t$, where $B$ is the index set of a batch. We monitor this quantity under settings 1, 2, and 4 in Table 1. The result shows the gradient norms of the adversarially trained model are much smaller than those of the naturally trained model, validating our explanation in Section 3 that an adversarially trained model tends to make the loss function flat in the neighborhood of natural examples. It also shows our attack method can find adversarial examples with large loss more efficiently, by incorporating second-order derivative information.
In the second plot, we show the classification accuracy on MNIST under settings 1, 2, and 4. The plot suggests the S-O attack dramatically reduces the accuracy of Madry's model; the resulting accuracy is even worse than that of the naturally trained baseline model.
In the third plot, we show the accuracy of different defense models under the S-O attack, under settings 3, 4, and 5. The plot suggests our model achieves better accuracy than both the baseline and Madry's model. Note that, due to the equivalence between the S-O and EOT attacks for Gaussian noise, the robustness of STN is not achieved by hiding the true gradients via randomization.
In Figure 3, we show the classification accuracy on CIFAR-10 under settings 3, 4, and 5. In addition, we include results for PixelDP from Mathias et al. (2018) to show how stability training helps improve classification accuracy. Note that our S-O attack does not significantly further reduce the accuracy of Madry's model on CIFAR-10; even for relatively large $\ell_2$ attack sizes, the reduction in accuracy is small. We argue this is because their adversarially trained model only achieves weak robustness on CIFAR-10, and thus there is little room to attack it further. As a comparison, our defense model obtains higher accuracy than both Madry's model and PixelDP.
Overall, stability training combined with Gaussian noise shows a promising level of robustness, and performs better than other models even under attacks that incorporate randomization.
Another strength of our method is that stability training only requires roughly twice the computational time of standard training, whereas adversarial training is extremely time-consuming due to the iterative construction of adversarial examples.
7 Discussion

$\ell_2$ Attacks and Adversarial Training
In Madry et al. (2017), the authors propose an adversarially trained model, and argue it is robust against $\ell_2$ attacks even though it is trained against $\ell_\infty$ attacks. Our proposed $\ell_2$ attack significantly reduces their classification accuracy.
In general, we find adversarial training is only effective against the type of attack used during training. For example, although our $\ell_2$ attack is effective on the model adversarially trained against $\ell_\infty$ attacks, it cannot defeat a model adversarially trained against our attack. However, that defense model is then vulnerable to $\ell_\infty$ attacks. This observation leads us to believe that adversarial training may overfit to the choice of norm.
On the Gap between Empirical and Theoretical Results
A noticeable gap exists between the theoretical bound shown in Figure 1 and the empirical accuracy. There are three possible explanations for this gap, each pointing to a direction for future work.

The most obvious one is that the proposed bound can still be improved with a better analysis. The second is that the empirical results may worsen under stronger attacks that have not yet been proposed, as has happened to many defense models. The third is the limitation of the $\ell_2$ distance. Although our framework is proposed in the context of adversarial attacks, where potential attacks are limited to be non-perceptible by humans, our theoretical analysis does not distinguish types of perturbations beyond their $\ell_2$ norms. In practice, one can perturb a few pixels with large values, such that the change is perceptible by humans and indeed leads to a change in classification, yet has only a small $\ell_2$ distance. The existence of such perturbations forces the upper bound to be small. Future work might explore similar guarantees for the stronger $\ell_\infty$ distance.
8 Conclusion

We propose a new attack method based on an approximated second-order derivative of the loss function. We show the attack effectively reduces the accuracy of adversarially trained defense models that demonstrated significant robustness in previous works. We also present an analysis for constructing defense models with certifiable robustness, along with a strategy based on stability training for improving the robustness of these defense models. Our defense model shows better accuracy than previous works under our proposed attack.
- Athalye & Sutskever (2017) Anish Athalye and Ilya Sutskever. Synthesizing robust adversarial examples. arXiv preprint arXiv:1707.07397, 2017.
- Athalye et al. (2018) Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420, 2018.
- Bachman et al. (2014) Philip Bachman, Ouais Alsharif, and Doina Precup. Learning with pseudo-ensembles. In Advances in Neural Information Processing Systems, pp. 3365–3373, 2014.
- Carlini & Wagner (2017a) Nicholas Carlini and David Wagner. MagNet and "Efficient Defenses Against Adversarial Attacks" are not robust to adversarial examples. arXiv preprint arXiv:1711.08478, 2017a.
- Carlini & Wagner (2017b) Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In Security and Privacy (SP), 2017 IEEE Symposium on, pp. 39–57. IEEE, 2017b.
- Fawzi et al. (2018) Alhussein Fawzi, Hamza Fawzi, and Omar Fawzi. Adversarial vulnerability for any classifier. arXiv preprint arXiv:1802.08686, 2018.
- Golub & Van der Vorst (2001) Gene H Golub and Henk A Van der Vorst. Eigenvalue computation in the 20th century. In Numerical analysis: Historical developments in the 20th century, pp. 209–239. Elsevier, 2001.
- Goodfellow et al. (2014) Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
- He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.
- Kolter & Wong (2017) J Zico Kolter and Eric Wong. Provable defenses against adversarial examples via the convex outer adversarial polytope. arXiv preprint arXiv:1711.00851, 2017.
- Krizhevsky et al. (2012) Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105, 2012.
- Kurakin et al. (2016) Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236, 2016.
- LeCun et al. (1998) Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
- Madry et al. (2017) Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
- Mathias et al. (2018) Mathias Lecuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, and Suman Jana. On the connection between differential privacy and adversarial robustness in machine learning, 2018.
- Meng & Chen (2017) Dongyu Meng and Hao Chen. Magnet: a two-pronged defense against adversarial examples. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 135–147. ACM, 2017.
- Miyato et al. (2017) Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. arXiv preprint arXiv:1704.03976, 2017.
- Papernot et al. (2016) Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. Distillation as a defense to adversarial perturbations against deep neural networks. In Security and Privacy (SP), 2016 IEEE Symposium on, pp. 582–597. IEEE, 2016.
- Raghunathan et al. (2018) Aditi Raghunathan, Jacob Steinhardt, and Percy Liang. Certified defenses against adversarial examples. arXiv preprint arXiv:1801.09344, 2018.
- Sharma & Chen (2017) Yash Sharma and Pin-Yu Chen. Breaking the Madry defense model with $\ell_1$-based adversarial examples. arXiv preprint arXiv:1710.10733, 2017.
- Sinha et al. (2017) Aman Sinha, Hongseok Namkoong, and John Duchi. Certifiable distributional robustness with principled adversarial training. arXiv preprint arXiv:1710.10571, 2017.
- Szegedy et al. (2013) Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
- Van Erven & Harremos (2014) Tim Van Erven and Peter Harremos. Rényi divergence and Kullback-Leibler divergence. IEEE Transactions on Information Theory, 60(7):3797–3820, 2014.
- Xie et al. (2012) Junyuan Xie, Linli Xu, and Enhong Chen. Image denoising and inpainting with deep neural networks. In Advances in neural information processing systems, pp. 341–349, 2012.
- Zagoruyko & Komodakis (2016) Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
- Zhang et al. (2017) Kai Zhang, Wangmeng Zuo, Yunjin Chen, Deyu Meng, and Lei Zhang. Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. IEEE Transactions on Image Processing, 26(7):3142–3155, 2017.
- Zheng et al. (2016) Stephan Zheng, Yang Song, Thomas Leung, and Ian Goodfellow. Improving the robustness of deep neural networks via stability training. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4480–4488, 2016.
9 Appendix

9.1 Fast Approximate Method Miyato et al. (2017)
The power-iteration method Golub & Van der Vorst (2001) allows one to compute the dominant eigenvector $v_1$ of a matrix $H$. Let $u$ be a randomly sampled unit vector that is not perpendicular to $v_1$; the iterative calculation

$$u \leftarrow \frac{H u}{\|H u\|_2}$$

leads to $u \rightarrow v_1$. Given that $H$ is the Hessian matrix of $L$, we further use the finite-difference method to reduce the computational complexity:

$$H u \approx \frac{\nabla_x L(x + \xi u) - \nabla_x L(x)}{\xi},$$

where $\xi$ is the step size. If we only take one iteration, and note that $\nabla_x L(x)$ approximately vanishes for an adversarially trained model, we obtain an approximation that requires only the first-order derivative:

$$d \approx \frac{\nabla_x L(x + \xi u)}{\left\| \nabla_x L(x + \xi u) \right\|_2},$$

which gives equation (5).
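As a numerical check of this derivation, the following sketch runs finite-difference power iteration on an explicit quadratic loss, whose Hessian is known, and recovers the dominant eigenvector (the matrix, step size, and iteration count are illustrative choices):

```python
import numpy as np

# Quadratic loss L(x) = 0.5 * x^T H x, so the gradient is H x and the
# dominant eigenvector of H = diag(5, 1, 0.5) is e_1.
H = np.diag([5.0, 1.0, 0.5])
grad = lambda x: H @ x
rng = np.random.default_rng(0)
u = rng.standard_normal(3)
u /= np.linalg.norm(u)                        # random unit start vector
xi = 1e-4                                     # finite-difference step
x0 = np.zeros(3)                              # gradient vanishes at x0
for _ in range(50):
    u = (grad(x0 + xi * u) - grad(x0)) / xi   # finite-difference estimate of H u
    u /= np.linalg.norm(u)
print(np.abs(u))  # approximately [1, 0, 0], the dominant eigenvector
```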
9.2 Proof of Lemma 1
Lemma 1. Let $P = (p_1, \dots, p_k)$ and $Q = (q_1, \dots, q_k)$ be two multinomial distributions over the same index set $\{1, \dots, k\}$. If the indices of the largest probabilities do not match on $P$ and $Q$, that is $\arg\max_i p_i \neq \arg\max_i q_i$, then

$$D_\alpha(Q \,\|\, P) \geq -\log\left( 1 - p_{(1)} - p_{(2)} + 2M_{1-\alpha}\left(p_{(1)}, p_{(2)}\right) \right),$$

where $p_{(1)}$ and $p_{(2)}$ are the largest and second-largest probabilities in the $p_i$'s.
Proof. Think of this problem as finding the $Q$ that minimizes $D_\alpha(Q \,\|\, P)$ subject to $\arg\max_i q_i \neq \arg\max_i p_i$, for fixed $P$. Without loss of generality, assume $p_1 \geq p_2 \geq \cdots \geq p_k$, so $\arg\max_i p_i = 1$.
It is equivalent to solving the following problem:

$$\min_Q \; \frac{1}{\alpha - 1} \log \sum_i q_i^\alpha p_i^{1-\alpha} \quad \text{s.t.} \quad \arg\max_i q_i \neq 1, \;\; \sum_i q_i = 1.$$
As the logarithm is a monotonically increasing function, we only need to focus on the quantity $\sum_i q_i^\alpha p_i^{1-\alpha}$ for fixed $\alpha$.
We first show that the $Q$ minimizing this quantity must have $q_1 = q_2$. Note here we allow a tie, because we can always let $q_1 = q_2 - \epsilon'$ for some small $\epsilon' > 0$ to satisfy $\arg\max_i q_i \neq 1$, while changing the Rényi divergence arbitrarily little by the continuity of $D_\alpha$.
If $q_j > q_2$ for some $j > 2$, we can define $Q'$ by permuting $q_2$ and $q_j$, that is $q'_2 = q_j$, $q'_j = q_2$, and $q'_i = q_i$ otherwise; then

$$\sum_i (q'_i)^\alpha p_i^{1-\alpha} - \sum_i q_i^\alpha p_i^{1-\alpha} = \left( q_j^\alpha - q_2^\alpha \right) \left( p_2^{1-\alpha} - p_j^{1-\alpha} \right) \leq 0,$$

which conflicts with the assumption that $Q$ minimizes the quantity. Thus $q_2 \geq q_i$ for $i > 2$. Since $q_1$ cannot be the largest, we have $q_1 \leq q_2$, and at the minimum this constraint is tight, giving $q_1 = q_2$.
Then we may assume $q_1 = q_2$, and the problem can be formulated as

$$\min_Q \; \sum_i q_i^\alpha p_i^{1-\alpha} \quad \text{s.t.} \quad q_1 = q_2, \;\; \sum_i q_i = 1,$$

which yields a set of KKT conditions. Using Lagrange multipliers, one can obtain the solution $q_1 = q_2 = \frac{M_{1-\alpha}(p_1, p_2)}{1 - p_1 - p_2 + 2M_{1-\alpha}(p_1, p_2)}$ and $q_i = \frac{p_i}{1 - p_1 - p_2 + 2M_{1-\alpha}(p_1, p_2)}$ for $i \geq 3$. Plugging in these quantities, the minimized Rényi divergence is

$$D_\alpha(Q \,\|\, P) = -\log\left( 1 - p_1 - p_2 + 2M_{1-\alpha}(p_1, p_2) \right).$$

Thus, we obtain the lower bound of $D_\alpha(Q \,\|\, P)$ subject to $\arg\max_i q_i \neq \arg\max_i p_i$.
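The closed-form minimizer above can be checked numerically; the sketch below (with example values of $\alpha$ and $P$ chosen by us) verifies that the claimed $Q^*$ attains the bound and that random feasible competitors do no better:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 2.0
p = np.array([0.5, 0.3, 0.2])  # sorted descending, so argmax P = 0

def d_alpha(q, p):
    """Renyi divergence D_alpha(Q || P) for discrete distributions."""
    return float(np.log(np.sum(q**alpha * p**(1.0 - alpha))) / (alpha - 1.0))

# Claimed minimizer: tie the top two entries at M_{1-alpha}(p1, p2), keep the
# rest proportional to p, and normalize by z = 1 - p1 - p2 + 2M.
m = (0.5 * (p[0]**(1 - alpha) + p[1]**(1 - alpha))) ** (1.0 / (1.0 - alpha))
z = 1.0 - p[0] - p[1] + 2.0 * m
q_star = np.array([m / z, m / z, p[2] / z])
bound = -np.log(z)

assert abs(d_alpha(q_star, p) - bound) < 1e-9   # attains -log(1 - p1 - p2 + 2M)
for _ in range(1000):                            # random feasible competitors
    q = rng.dirichlet(np.ones(3))
    if np.argmax(q) != 0:                        # argmax Q != argmax P
        assert d_alpha(q, p) >= bound - 1e-9
```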
9.3 Proof of Theorem 2
We first need a simple result from information theory:

Lemma 3. Given two real-valued vectors $x$ and $x'$, the Rényi divergence of $\mathcal{N}(x, \sigma^2 I)$ and $\mathcal{N}(x', \sigma^2 I)$ is

$$D_\alpha\left( \mathcal{N}(x, \sigma^2 I) \,\|\, \mathcal{N}(x', \sigma^2 I) \right) = \frac{\alpha \|x - x'\|_2^2}{2\sigma^2}.$$
Theorem 2. Suppose we have $x \in \mathcal{X}$, and a potential adversarial example $x'$ such that $\|x - x'\|_2 \leq L$. Given a $k$-classifier $c$, let $f(x) = c(x + \xi)$ with $\xi \sim \mathcal{N}(0, \sigma^2 I)$, and write $p_i = P(f(x) = i)$ and $p'_i = P(f(x') = i)$. If the following condition is satisfied, with $p_{(1)}$ and $p_{(2)}$ being the first and second largest probabilities in the $p_i$'s:

$$\sup_{\alpha > 1} \; \frac{2\sigma^2}{\alpha} \left( -\log\left( 1 - p_{(1)} - p_{(2)} + 2M_{1-\alpha}\left(p_{(1)}, p_{(2)}\right) \right) \right) \geq L^2,$$

then $\arg\max_i p_i = \arg\max_i p'_i$, i.e., the classification of $x'$ is the same as that of $x$.
Proof. From Lemma 3, we know that for $x$ and $x'$ such that $\|x - x'\|_2 \leq L$, with a $k$-class classification function $c$,

$$D_\alpha\left( c(x + \xi) \,\|\, c(x' + \xi) \right) \leq D_\alpha\left( \mathcal{N}(x, \sigma^2 I) \,\|\, \mathcal{N}(x', \sigma^2 I) \right) = \frac{\alpha \|x - x'\|_2^2}{2\sigma^2} \leq \frac{\alpha L^2}{2\sigma^2},$$

if $\xi \sim \mathcal{N}(0, \sigma^2 I)$ is Gaussian noise. The first inequality comes from the data-processing inequality: $D_\alpha(g(P) \,\|\, g(Q)) \leq D_\alpha(P \,\|\, Q)$ for any function $g$.
Therefore, if we further have

$$\frac{\alpha L^2}{2\sigma^2} \leq -\log\left( 1 - p_{(1)} - p_{(2)} + 2M_{1-\alpha}\left(p_{(1)}, p_{(2)}\right) \right)$$

for some $\alpha > 1$, then from Lemma 1 we know that the indices of the maxima of the $p_i$'s and the $p'_i$'s must be the same, which means the two predictions agree, implying robustness.
9.4 Comparison between Bounds
We use simulation to show our proposed bound is higher than the one from PixelDP Mathias et al. (2018).
In PixelDP, the upper bound on the size of attacks is indirectly defined: if $p_{(1)} \geq e^{2\epsilon} p_{(2)} + (1 + e^{\epsilon})\delta$ and the added noise has the distribution $\mathcal{N}(0, \sigma^2 I)$ with $\sigma = \sqrt{2 \ln(1.25/\delta)}\, L / \epsilon$, then the classifier is robust against attacks whose $\ell_2$ size is less than $L$.
As both bounds are determined by the models and data only through $p_{(1)}$ and $p_{(2)}$, it is sufficient to compare them via simulation for different values of $p_{(1)}$ and $p_{(2)}$, as long as $p_{(1)} \geq p_{(2)}$ and $p_{(1)} + p_{(2)} \leq 1$ are satisfied.
For fixed $p_{(1)}$ and $p_{(2)}$, $\epsilon$ and $\delta$ are two tuning parameters that affect the PixelDP bound. For a fair comparison, we use grid search to find the $\epsilon$ and $\delta$ that maximize their bound.
The simulation result shows our bound is strictly higher than the one from PixelDP.
9.5 Implementation Details
In this section, we specify the hyperparameters used in our experiments and other details. For all experiments, the baseline models are implemented using the code from <https://github.com/MadryLab/mnist_challenge> and <https://github.com/MadryLab/cifar10_challenge>.
Our defense model only requires the following modifications: 1) for stability training, we remove the adversarial-training part and include the stability-training regularizer; 2) at test time, we add i.i.d. Gaussian noise to the pixels before feeding the images into the model.
For MNIST, we apply stability training with Gaussian noise of a fixed standard deviation, and add Gaussian noise of a fixed standard deviation during testing; the same procedure is used for CIFAR-10, with its own noise levels. The weight $\gamma$ of the stability regularizer is fixed for both data sets.