Second-Order Adversarial Attack and Certifiable Robustness

09/10/2018 ∙ by Bai Li, et al. ∙ Duke University ∙ University at Buffalo

We propose a powerful second-order attack method that outperforms existing attack methods in reducing the accuracy of state-of-the-art defense models based on adversarial training. The effectiveness of our attack method motivates an investigation of the provable robustness of a defense model. To this end, we introduce a framework that allows one to obtain a certifiable lower bound on the prediction accuracy against adversarial examples. We conduct experiments to demonstrate the effectiveness of our attack method; at the same time, our defense models obtain higher accuracy than previous works under the proposed attack.


1 Introduction

Deep neural networks (DNNs) have achieved significant success when applied to many challenging machine learning tasks. For example, DNNs have obtained state-of-the-art accuracy on large-scale image classification Krizhevsky et al. (2012); LeCun et al. (1998). At the same time, vulnerability to adversarial examples, an undesired property of DNNs, has drawn attention in the deep-learning community Szegedy et al. (2013); Goodfellow et al. (2014). Generally speaking, adversarial examples are perturbed versions of the original data that successfully fool a classifier. For example, in the image domain, adversarial examples are images transformed from natural images with visually negligible changes, yet they lead to different classification results Goodfellow et al. (2014). The existence of adversarial examples has raised many concerns within the deep-learning community, especially in scenarios with a high risk of misclassification, such as autonomous driving.

To tackle adversarial examples, many works have proposed ways to improve the robustness of DNNs, referred to as defense models Papernot et al. (2016); Meng & Chen (2017). However, most of these defense methods have later been attacked successfully by new attack methods Carlini & Wagner (2017b, a). For example, Athalye et al. (2018) conducted a case study that successfully attacked seven defense methods submitted to ICLR 2018.

One model that has demonstrated good performance against strong attacks, and has thus far not been successfully attacked, is based on adversarial training Goodfellow et al. (2014); Madry et al. (2017). Adversarial training constructs a defense model by augmenting the training set with adversarial examples. Though successful as an adversarial defense, the underlying mechanism is still unclear. In this paper, we explain the reason for this good performance, and show that Madry's defense model Madry et al. (2017), a model adversarially trained with $\ell_\infty$ attacks, is not robust against an $\ell_2$ attack even on the MNIST data set. In particular, we develop a new attack method based on an approximated second-order derivative that drives its accuracy below that of naturally trained baseline models. Considering that Sharma & Chen (2017) proposed $\ell_1$-based adversarial examples that break Madry's model, we believe the robustness of adversarially trained models may overfit to the choice of norm.

Our findings lead to a concern that most existing defense methods are heuristically driven, and thus any defense method that cannot provide a theoretically provable robustness guarantee is potentially vulnerable to future attacks. In fact, Fawzi et al. (2018) showed that if the data are generated from a generative model with latent representations, then no classifier is robust to adversarial perturbations when the latent space is sufficiently large and the generative model is sufficiently smooth.

Several works on provable or certifiable adversarial defense methods have been proposed recently Raghunathan et al. (2018); Kolter & Wong (2017); Sinha et al. (2017). However, most of them either make strong assumptions, such as on the structure of the model or on the smoothness of the loss function, or are difficult to extend to large-scale data sets, the most common application scenario for DNNs.

Recently, Mathias et al. (2018) developed theoretical insight into certifiably robust prediction by building a connection between differential privacy and model robustness. It is shown that adding properly chosen noise to the classifier leads to certifiably robust predictions. More importantly, their framework allows one to calculate an upper bound on the size of attacks a model is robust to.

Building on the idea in Mathias et al. (2018), as our second contribution, we conduct an analysis based on the Rényi divergence Van Erven & Harremos (2014), and show a higher upper bound on the tolerable size of attacks compared with Mathias et al. (2018). In addition, we suggest there exists a connection between adversarial defense and robustness to random noise. Based on this, we introduce a more comprehensive framework that incorporates stability training to improve classification accuracy. Intuitively, our framework uses a model that is more robust to random noise, which yields higher classification accuracy when random noise is added for adversarial defense. On MNIST and CIFAR-10, our experiments demonstrate that the proposed defense yields stronger robustness to adversarial attacks compared to other models.

2 Preliminary

2.1 Notation

We consider the task of image classification. Natural images are represented as $x \in \mathcal{X}$, where $\mathcal{X} \subset \mathbb{R}^{h \times w \times c}$ is the image space, with $h$, $w$, and $c$ the height, width, and number of channels of an image, respectively. An image classifier over $k$ classes is considered as a function $f: \mathcal{X} \rightarrow \{1, \dots, k\}$. In this paper, we only consider classifiers constructed from DNNs. To better present our framework, we also define a stochastic classifier, a function $f$ over $\mathcal{X}$ whose output is a multinomial distribution over $\{1, \dots, k\}$, i.e., $\mathbb{P}(f(x) = i) = p_i$ for $i \in \{1, \dots, k\}$. One can classify $x$ by picking $\arg\max_i p_i$. Note this distribution is different from the one generated by the softmax layer.

2.2 Rényi Divergence

Our theoretical result depends on the Rényi divergence, defined as follows Van Erven & Harremos (2014):

Definition 1 (Rényi Divergence)

For two probability distributions $P$ and $Q$ over the same support, the Rényi divergence of order $\alpha > 1$ is

$$D_\alpha(P\|Q) = \frac{1}{\alpha - 1}\log \mathbb{E}_{x \sim Q}\left[\left(\frac{P(x)}{Q(x)}\right)^{\alpha}\right]. \qquad (1)$$

Note that when $\alpha \to 1$, it can be verified that the Rényi divergence converges to the Kullback-Leibler divergence, i.e., $\lim_{\alpha \to 1} D_\alpha(P\|Q) = D_{\mathrm{KL}}(P\|Q)$.
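
As a quick numerical illustration (ours, not from the paper), the following Python sketch evaluates the Rényi divergence of equation (1) for discrete distributions and checks that it approaches the KL divergence as $\alpha \to 1$; the function names and example distributions are arbitrary.

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """D_alpha(P || Q) = 1/(alpha - 1) * log sum_i p_i^alpha * q_i^(1 - alpha)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.log(np.sum(p ** alpha * q ** (1.0 - alpha))) / (alpha - 1.0)

def kl_divergence(p, q):
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.sum(p * np.log(p / q))

p = [0.7, 0.2, 0.1]
q = [0.5, 0.3, 0.2]
for alpha in (1.001, 1.5, 2.0, 5.0):
    print(f"alpha={alpha}: {renyi_divergence(p, q, alpha):.4f}")
print(f"KL divergence: {kl_divergence(p, q):.4f}")  # close to the alpha -> 1 value
```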

2.3 Adversarial Examples

Given a classifier $f$ and an image $x$, an adversarial example $x'$ satisfies $D(x, x') < \epsilon$ for some small $\epsilon > 0$ and $f(x) \neq f(x')$, where $D$ is some distance metric, i.e., $x'$ is close to $x$ but yields a different classification result. The distance is often described in terms of an $\ell_p$ metric, and in most of the literature the $\ell_\infty$ and $\ell_2$ metrics are considered. In this paper, we focus on the $\ell_2$ metric.

Adversarial examples are often constructed by iterative optimization methods. Previous work has proposed a number of adversarial attack methods, such as the Fast Gradient Sign Method (FGSM) Goodfellow et al. (2014), along with its multi-step variant FGSM$^k$ Kurakin et al. (2016), which is equivalent to exploring adversarial examples that increase the classification loss using projected gradient descent (PGD) Madry et al. (2017):

$$x^{t+1} = \Pi_{B_\epsilon(x)}\left(x^{t} + \alpha\,\mathrm{sign}\left(\nabla_x L(\theta, x^{t}, y)\right)\right), \qquad (2)$$

where $\Pi_{B_\epsilon(x)}$ is the projection operation that ensures adversarial examples stay in the $\epsilon$-ball $B_\epsilon(x)$ around $x$. In Madry et al. (2017), it has also been shown that a PGD attack is a universal adversary among all first-order attack methods. Their analysis and experimental evidence both suggest that any adversarial attack method that only incorporates gradients of the loss function w.r.t. the input cannot do significantly better than PGD.
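
For concreteness, here is a minimal PyTorch sketch of the PGD update (2), assuming an $\ell_\infty$ ball of radius `eps` and pixel values in $[0, 1]$; `model` is any differentiable classifier and the hyperparameter values are placeholders, not the settings used in the paper.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.3, alpha=0.01, steps=40):
    """Repeatedly ascend the loss via the gradient sign, then project back onto the eps-ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                     # gradient-sign ascent step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # project onto the L-inf ball
            x_adv = x_adv.clamp(0.0, 1.0)                           # stay in the valid pixel range
    return x_adv.detach()
```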

2.4 Adversarial Training

Adversarial training constructs adversarial examples and includes them into a training set to train a new and more robust classifier. This method is intuitive, and has gained great success in terms of defense Goodfellow et al. (2014); Madry et al. (2017). The motivation behind adversarial training is that finding a robust model against adversarial examples is equivalent to solving the following saddle-point problem:

$$\min_\theta \; \mathbb{E}_{(x,y)\sim \mathcal{D}}\left[\max_{\delta \in S} L(\theta, x+\delta, y)\right]. \qquad (3)$$

The inner maximization is equivalent to constructing adversarial examples, while the outer minimization is the standard training procedure for loss minimization.
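
Under our reading of (3), the training loop looks roughly as follows; this sketch reuses the hypothetical `pgd_attack` helper above for the inner maximization and is not the authors' exact implementation.

```python
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, eps=0.3):
    model.train()
    for x, y in loader:
        x_adv = pgd_attack(model, x, y, eps=eps)     # inner maximization, approximated by PGD
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)      # outer minimization on the adversarial batch
        loss.backward()
        optimizer.step()
```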

3 Second-Order Adversarial Attack

In this section, we propose an efficient second-order adversarial attack method. As motivation, note that most current attack methods construct adversarial examples based on the gradient of a loss function. However, first-order derivatives are not effective for attacking a defense model that is trained adversarially.

To see this, first note that adversarial training is equivalent to solving the saddle-point problem (3). The solution is a saddle point, i.e., it not only converges to a local minimum in the parameter space, but also to a local maximum in the sample space, at which the gradient of the loss with respect to the input ideally vanishes. In practice, an adversarially trained defense model often finds parameters $\theta$ that make the loss function flat in the neighborhood of a natural example $x$, which leads to inefficient exploration for adversarial examples in gradient-based attack methods. This motivates utilizing the second-order derivative of the loss function to construct adversarial examples.

Specifically, assume the loss function $L$ is twice differentiable with respect to $x$. Using a Taylor expansion on the difference between the losses on the original and perturbed samples, and assuming the gradient vanishes, we have

$$L(\theta, x+\delta, y) - L(\theta, x, y) \approx \frac{1}{2}\,\delta^{\top} H\,\delta, \qquad (4)$$

where $\delta$ is the perturbation and $H = \nabla_x^2 L(\theta, x, y)$ is the Hessian matrix of the loss function. Our goal is to find a small perturbation $\delta$ that maximizes the difference (4). Our idea is based on the observation that the optimal perturbation direction should be in the same direction as the first dominant eigenvector, $v_1$, of $H$, that is, $\delta = c\,v_1$ for some constant $c$. However, computing the eigenvectors of the Hessian matrix requires $O(d^3)$ runtime, with $d$ the dimension of the data. To tackle this issue, we adopt the fast approximation method from Miyato et al. (2017), which is essentially a combination of the power-iteration method and the finite-difference method, to efficiently find the direction of the eigenvector. Based on this method, the optimal direction, denoted $v$, is approximated (detailed derivations are provided in the Appendix) by

$$v \propto \nabla_x L(\theta, x+\xi\mu, y) - \nabla_x L(\theta, x, y), \qquad (5)$$

where $\mu$ is a randomly sampled unit vector and $\xi$ is a manually chosen step size. In practice, $\mu$ is drawn from a centered Gaussian distribution and normalized such that its $\ell_2$ norm is one.

This procedure is essentially a stochastic approximation to the optimal direction, where the randomness comes from $\mu$. To reduce the variance of the approximation, we further take the expectation over the Gaussian noise, yielding $\mathbb{E}_{\mu}\left[\nabla_x L(\theta, x+\mu, y)\right]$ with $\mu \sim \mathcal{N}(0, \sigma^2 I)$. Note that choosing $\sigma$ is equivalent to choosing the step size $\xi$ in (5). Finally, we construct adversarial examples by an iterative update via PGD:

$$x^{t+1} = \Pi_{B_\epsilon(x)}\left(x^{t} + \alpha\,\mathbb{E}_{\mu}\left[\nabla_x L(\theta, x^{t}+\mu, y)\right]\right). \qquad (6)$$

Intuitively, this method perturbs the example at each iteration and tries to move out of the local maximum in the sample space, due to the introduction of random Gaussian noise.
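
The sketch below is our reading of the update (6): at every PGD step the input gradient is averaged over several Gaussian draws before taking a step and projecting onto an $\ell_2$ ball. The sample count, step size, and normalization are illustrative choices rather than the paper's exact settings, and the batch is treated jointly for brevity.

```python
import torch
import torch.nn.functional as F

def second_order_attack(model, x, y, eps=2.0, alpha=0.25, steps=40, sigma=0.5, n_samples=8):
    """PGD on the Gaussian-smoothed loss: step along a Monte Carlo estimate of E_mu[grad L(x + mu)]."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad_sum = torch.zeros_like(x_adv)
        for _ in range(n_samples):
            noisy = x_adv + sigma * torch.randn_like(x_adv)          # mu ~ N(0, sigma^2 I)
            loss = F.cross_entropy(model(noisy), y)
            grad_sum = grad_sum + torch.autograd.grad(loss, x_adv)[0]
        grad = grad_sum / n_samples                                   # estimate of the expected gradient
        with torch.no_grad():
            x_adv = x_adv + alpha * grad / (grad.norm() + 1e-12)      # ascend along the expected gradient
            delta = x_adv - x
            delta = delta * torch.clamp(eps / (delta.norm() + 1e-12), max=1.0)  # project onto the L2 ball
            x_adv = (x + delta).clamp(0.0, 1.0)
    return x_adv.detach()
```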

Connection to Expectation of Transformation (EOT) Attack

We can think of (6) as projected gradient ascent for maximizing the objective function $\mathbb{E}_{\mu}\left[L(\theta, x+\mu, y)\right]$ of $x$. Note that $\mathbb{E}_{\mu}\left[\nabla_x L(\theta, x+\mu, y)\right] = \nabla_x \mathbb{E}_{\mu}\left[L(\theta, x+\mu, y)\right]$. Consequently, (6) is equivalent to constructing adversarial examples with the EOT method proposed by Athalye & Sutskever (2017) when a defense model adds Gaussian noise to the original data.

In general, EOT tries to construct adversarial samples against a defense model with randomization by solving the optimization problem $\max_{x'} \mathbb{E}_{t\sim T}\left[L(\theta, t(x'), y)\right]$, where $t \sim T$ is the randomized transformation in the defense model. Intuitively, EOT aims to estimate the correct gradient by averaging out the effect of the randomization via the expectation. If a defense model adds Gaussian noise to its inputs, the corresponding optimization problem for EOT becomes $\max_{x'} \mathbb{E}_{\mu}\left[L(\theta, x'+\mu, y)\right]$, which is the same as the approximate objective of our second-order attack method.

Note a significant difference between our method and EOT is that the random noise added in EOT depends on its corresponding defense model, whereas the randomness in our method is independent of the defense model. Furthermore, this connection suggests that our attack method is powerful not only against defenses based on adversarial training, but also against defenses based on randomization. In the experiments, we show our attack method dramatically reduces the accuracy of the defense model proposed by Madry et al. (2017).

4 Certifiable Robustness

The effectiveness of the proposed attack method, as we will show in the experiments, suggests that defending against adversarial attacks is extremely difficult. In fact, the existence of robust models against arbitrary adversarial attacks is still questionable. Therefore, any defense model without certifiable robustness may always be bypassed by stronger attacks.

Inspired by the PixelDP method Mathias et al. (2018), we propose a framework that enables certifiable robustness for any classifier. Intuitively, our approach adds random noise to the pixels of (possibly adversarial) inputs before classification, to eliminate the effects of adversarial perturbations. The most important feature of this framework is that it allows calculation of an upper bound on the size of attacks to which the classification is robust.

0:  An input image $x$; a standard deviation $\sigma$; a classifier $f$ over $\{1, \dots, k\}$; a number of iterations $n$ ($n = 1$ is sufficient if only the robust classification is desired).
1:  Set $c_i = 0$ for $i = 1, \dots, k$.
2:  for $j = 1, \dots, n$ do
3:     Add i.i.d. Gaussian noise $\mathcal{N}(0, \sigma^2)$ to each pixel of $x$ and apply the classifier $f$ to the noisy image. Let the output be $y_j$ and increment $c_{y_j}$.
4:  end for
5:  Estimate the distribution of the output as $p_i = c_i / n$ for $i = 1, \dots, k$.
6:  Calculate the upper bound
    $L = \sup_{\alpha>1}\left(\frac{2\sigma^2}{\alpha}\left(-\log\left(1 - p_{(1)} - p_{(2)} + 2\left(\tfrac{1}{2}\left(p_{(1)}^{1-\alpha} + p_{(2)}^{1-\alpha}\right)\right)^{\frac{1}{1-\alpha}}\right)\right)\right)^{1/2}$,
    where $p_{(1)}$ and $p_{(2)}$ are the first and the second largest values in the $p_i$'s.
7:  Return the classification result $\arg\max_i p_i$ and the size $L$ of attacks that it is robust to.
Algorithm 1 Certifiable Robust Classifier

Our approach is summarized in Algorithm 1. In the following, we develop theory to prove the certifiable robustness of the proposed algorithm. Our goal is to show that if the classification of $x$ in Algorithm 1 is $c$, then for any example $x'$ such that $\|x - x'\|_2 \le L$, the classification of $x'$ is also $c$.
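
A compact Python sketch of Algorithm 1 follows, using our reconstruction of the bound from Theorem 2 below; the grid over $\alpha$, the classifier interface, and the absence of confidence-interval corrections are simplifications on our part.

```python
import numpy as np

def generalized_mean(a, b, p):
    return ((a ** p + b ** p) / 2.0) ** (1.0 / p)

def certified_radius(p1, p2, sigma, alphas=np.linspace(1.01, 40.0, 400)):
    """sup_{alpha>1} sqrt( (2 sigma^2 / alpha) * (-log(1 - p1 - p2 + 2 M_{1-alpha}(p1, p2))) )."""
    p1, p2 = min(p1, 1.0 - 1e-12), max(p2, 1e-12)   # avoid degenerate powers
    best = 0.0
    for a in alphas:
        inner = 1.0 - p1 - p2 + 2.0 * generalized_mean(p1, p2, 1.0 - a)
        if 0.0 < inner < 1.0:                        # -log(inner) must be positive
            best = max(best, np.sqrt(2.0 * sigma ** 2 / a * (-np.log(inner))))
    return best

def robust_classify(classifier, x, sigma, n=100, num_classes=10):
    """classifier maps a (noisy) image array to a label in {0, ..., num_classes - 1}."""
    counts = np.zeros(num_classes)
    for _ in range(n):
        counts[classifier(x + np.random.normal(0.0, sigma, size=x.shape))] += 1
    p = counts / n
    p_sorted = np.sort(p)
    return int(np.argmax(p)), certified_radius(p_sorted[-1], p_sorted[-2], sigma)
```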

To prove our claim, first recall that a stochastic classifier $f$ over $x$ is a classifier whose output has a multinomial distribution over $\{1, \dots, k\}$ with probabilities $p_i$. In this context, robustness to an adversarial example $x'$ generated from $x$ means $\arg\max_i p_i = \arg\max_i p'_i$, with $p_i = \mathbb{P}(f(x+\delta) = i)$ and $p'_i = \mathbb{P}(f(x'+\delta) = i)$, where $\delta \sim \mathcal{N}(0, \sigma^2 I)$ is the added noise and $\mathbb{P}(\cdot)$ denotes the probability of a specific output value. In the remainder of this section, we show Algorithm 1 achieves such robustness based on the Rényi divergence, starting with the following lemma.

Lemma 1

Let $P = (p_1, \dots, p_k)$ and $Q = (q_1, \dots, q_k)$ be two multinomial distributions over the same index set. If the indexes of the largest probabilities do not match on $P$ and $Q$, that is $\arg\max_i p_i \neq \arg\max_i q_i$, then

$$D_\alpha(Q\|P) \ge -\log\left(1 - p_{(1)} - p_{(2)} + 2\left(\tfrac{1}{2}\left(p_{(1)}^{1-\alpha} + p_{(2)}^{1-\alpha}\right)\right)^{\frac{1}{1-\alpha}}\right), \qquad (7)$$

where $p_{(1)}$ and $p_{(2)}$ are the largest and the second largest probabilities in the $p_i$'s.

To simplify notation, we define $M_p(a, b) = \left(\frac{a^p + b^p}{2}\right)^{1/p}$ as the generalized mean. The RHS in condition (7) then becomes $-\log\left(1 - p_{(1)} - p_{(2)} + 2M_{1-\alpha}(p_{(1)}, p_{(2)})\right)$.

Lemma 1 provides a lower bound on the Rényi divergence required for changing the index of the maximum of $P$: for any distribution $Q$, if $D_\alpha(Q\|P) < -\log\left(1 - p_{(1)} - p_{(2)} + 2M_{1-\alpha}(p_{(1)}, p_{(2)})\right)$, the index of the maximum of $Q$ must be the same as that of $P$. Based on Lemma 1, we obtain our main theorem on certifiable robustness, validating our claim:

Theorem 2

Suppose we have $x$, and a potential adversarial example $x'$ such that $\|x - x'\|_2 \le L$. Given a $k$-class classifier $f$, let $p_i = \mathbb{P}(f(x+\delta) = i)$ and $p'_i = \mathbb{P}(f(x'+\delta) = i)$, where $\delta \sim \mathcal{N}(0, \sigma^2 I)$.

If the following condition is satisfied, with $p_{(1)}$ and $p_{(2)}$ being the first and second largest probabilities in the $p_i$'s:

$$\sup_{\alpha > 1}\;\frac{2\sigma^2}{\alpha}\left(-\log\left(1 - p_{(1)} - p_{(2)} + 2M_{1-\alpha}(p_{(1)}, p_{(2)})\right)\right) \;\ge\; L^2, \qquad (8)$$

then $\arg\max_i p_i = \arg\max_i p'_i$.

With Theorem 2, we can enable certifiable robustness for any classifier by adding i.i.d. Gaussian noise to the pixels of its inputs at test time, as done in Algorithm 1. The theorem provides an upper bound on the size of attacks to which the classifier is robust. Since the upper bound depends on the model and data only through the output distribution $p$, one can evaluate the bound using natural examples alone. Note that the evaluation requires an adjustment, namely computing confidence intervals for $p_{(1)}$ and $p_{(2)}$; we omit the details as this is a standard statistical procedure. In the experiments, we use the conservative endpoints of the confidence intervals for $p_{(1)}$ and $p_{(2)}$.
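
As one concrete (hypothetical) way to obtain such endpoints, the sketch below computes exact Clopper-Pearson binomial confidence limits with SciPy: a lower limit for $p_{(1)}$ and an upper limit for $p_{(2)}$, which can then be plugged into the bound. The counts in the example are made up for illustration.

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact two-sided (1 - alpha) confidence interval for a binomial proportion k/n."""
    lo = beta.ppf(alpha / 2.0, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1.0 - alpha / 2.0, k + 1, n - k) if k < n else 1.0
    return lo, hi

n = 1000                                  # number of noisy forward passes
p1_lower, _ = clopper_pearson(870, n)     # conservative (lower) endpoint for p_(1)
_, p2_upper = clopper_pearson(80, n)      # conservative (upper) endpoint for p_(2)
print(p1_lower, p2_upper)                 # use these endpoints when evaluating the bound
```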

Based on the properties of the generalized mean, one can show that the upper bound becomes larger as the difference between $p_{(1)}$ and $p_{(2)}$ grows. This is consistent with the intuition that a larger difference between $p_{(1)}$ and $p_{(2)}$ indicates a more confident classification. In other words, more confident classification, in the sense of more probability concentrated on one class, is beneficial for robustness.

It is worth mentioning that, as pointed out in Mathias et al. (2018), the noise need not be added directly to the inputs; it can also be added to the first layer of a DNN. Given the Lipschitz constant of the first layer, one can still calculate an upper bound using our analysis. We omit the details here for simplicity.

Compared to the upper bound in PixelDP, our upper bound is strictly higher. We show the improvement using a simulation detailed in the Appendix. In the next section, we propose a simple strategy to improve the empirical performance of this framework.

5 Improved Certifiable Robustness

In Algorithm 1, for a classifier $f$ based on a standard DNN, the added Gaussian noise is harmful to the classification accuracy on the original data. As discussed above, inaccurate prediction, in the sense that $p_{(1)}$ and $p_{(2)}$ are close, leads to weak robustness. Fortunately, one strength of Algorithm 1 is that it requires nothing particular of the classifier $f$, which yields the flexibility to modify $f$ to make it more robust against Gaussian noise.

Note that robustness to Gaussian noise is much easier to achieve than robustness to carefully crafted adversarial examples. In PixelDP, the authors incorporated noise by directly adding the same noise during the training procedure. However, we note that there have been notable efforts at developing DNNs robust to natural perturbations Xie et al. (2012); Zhang et al. (2017); yet these methods fail to defend models from adversarial attacks, as they are not particularly designed for that task. Our framework allows us to adapt these methods to improve the classification accuracy when Gaussian noise is present, thereby improving the robustness of our model. We emphasize that a connection between robustness to adversarial examples and robustness to natural perturbations Xie et al. (2012); Zhang et al. (2017) has thus been established, which introduces a much wider scope of literature into the adversarial-defense community.

5.1 Stability Training

The idea of introducing perturbations during training to improve model robustness has been studied in many works. In Bachman et al. (2014), the authors regard perturbing models as constructing pseudo-ensembles, to improve semi-supervised learning. More recently, Zheng et al. (2016) used a similar training strategy, named stability training, to improve classification robustness on noisy images.

For any natural image $x$, stability training encourages its perturbed version $\tilde{x}$ to yield a similar classification result under a classifier $f$, i.e., $D(f(x), f(\tilde{x}))$ is small for some distance measure $D$. Specifically, given a loss function $L_0$ for the original classification task, stability training introduces a regularization term $L_{\mathrm{stability}}(x, \tilde{x}) = D(f(x), f(\tilde{x}))$, yielding the total loss $L = L_0 + \gamma L_{\mathrm{stability}}$, where $\gamma$ controls the strength of the stability term. As we are interested in a classification task, we use cross-entropy as the distance between $f(x)$ and $f(\tilde{x})$, yielding the stability loss $L_{\mathrm{stability}}(x, \tilde{x}) = -\sum_i \mathbb{P}(y_i \mid x)\log \mathbb{P}(y_i \mid \tilde{x})$, where $\mathbb{P}(y_i \mid x)$ and $\mathbb{P}(y_i \mid \tilde{x})$ are the probabilities generated after the softmax.

In this paper, we add i.i.d. Gaussian noise to each pixel of $x$ to construct $\tilde{x}$, as suggested in Zheng et al. (2016). Note that this is in the same spirit as adversarial training, but is only designed to improve the classification accuracy under Gaussian perturbations.
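
A minimal PyTorch sketch of this training objective, under our reading of the loss above: the usual cross-entropy on the clean image plus $\gamma$ times the cross-entropy between the clean and noisy predictive distributions. The noise level and weight are placeholders.

```python
import torch
import torch.nn.functional as F

def stability_loss(model, x, y, sigma=0.5, gamma=1.0):
    """L = L0(x, y) + gamma * CrossEntropy( f(x), f(x + Gaussian noise) )."""
    logits_clean = model(x)
    logits_noisy = model(x + sigma * torch.randn_like(x))   # x_tilde = x + N(0, sigma^2 I)
    task_loss = F.cross_entropy(logits_clean, y)
    p_clean = F.softmax(logits_clean, dim=1)                # P(y_i | x)
    log_p_noisy = F.log_softmax(logits_noisy, dim=1)        # log P(y_i | x_tilde)
    stability = -(p_clean * log_p_noisy).sum(dim=1).mean()  # cross-entropy between the two
    return task_loss + gamma * stability
```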

6 Experiments

We conduct experiments to evaluate the performance of our proposed methods in terms of attack effectiveness and defense robustness. Our methods are tested on the MNIST and CIFAR-10 data sets. The architectures of our models follow the ones used in Madry et al. (2017). Specifically, for the MNIST data set, the model contains two convolutional layers with 32 and 64 filters, each followed by max-pooling, and a fully connected layer of size 1024. For the CIFAR-10 data set, we use a wide ResNet model Zagoruyko & Komodakis (2016); He et al. (2016), consisting of residual units with widened convolutional filters. The implementation details are provided in the Appendix. In both data sets, image intensities are scaled to $[0, 1]$, and the sizes of attacks are rescaled accordingly; a distortion of $\epsilon$ in the $[0, 1]$ scale corresponds to $255\epsilon$ in the original $[0, 255]$ scale.

6.1 Theoretical Bound

We first evaluate the theoretical bounds on the size of attacks. With Algorithm 1, we are able to classify a natural image and calculate an upper bound $L$ on the size of attacks for this particular image. Thus, for a given attack size $\epsilon$, we know the classification result is robust if $\epsilon \le L$.

Further, if the classification of a natural example is simultaneously correct and robust for $\epsilon$, we know any adversarial example $x'$ such that $\|x - x'\|_2 \le \epsilon$ will be classified correctly. Therefore, we can calculate the proportion of such examples in the test set to determine a lower bound on the accuracy under attacks of size $\epsilon$. We plot the lower bounds for various $\sigma$ for both MNIST and CIFAR-10 in Figure 1. To interpret the result, a point $(\epsilon, a)$ on an MNIST curve means that, with the corresponding $\sigma$, Algorithm 1 achieves at least accuracy $a$ under attacks whose $\ell_2$-norm sizes are smaller than $\epsilon$.
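
The curves in Figure 1 can be computed from per-example certificates roughly as follows; this sketch assumes the hypothetical `robust_classify` helper from Section 4 and an arbitrary grid of attack sizes.

```python
import numpy as np

def certified_accuracy_curve(classifier, images, labels, sigma, eps_grid):
    """Fraction of test examples that are classified correctly AND certified at radius >= eps."""
    preds_correct, radii = [], []
    for x, y in zip(images, labels):
        pred, radius = robust_classify(classifier, x, sigma)
        preds_correct.append(pred == y)
        radii.append(radius)
    preds_correct, radii = np.array(preds_correct), np.array(radii)
    return [float(np.mean(preds_correct & (radii >= eps))) for eps in eps_grid]
```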

Figure 1: Accuracy lower bounds for the MNIST data set (left) and the CIFAR-10 data set (right) against tolerable sizes of attacks, with various choices of $\sigma$ in Algorithm 1.

A clear trade-off between the tolerable attack size and the accuracy lower bound is present in Figure 1. This is anticipated: our bound (8) indicates that a higher standard deviation results in a higher proportion of robust classifications, but in practice larger noise also leads to worse classification accuracy.

6.2 Empirical Results

We next perform classification and measure the accuracy on real adversarial examples to evaluate the performance of our attack and defense methods. We first apply our attack method to the state-of-the-art defense model proposed in Madry et al. (2017) based on adversarial training. In all experiments, we focus on five settings, summarized in Table 1. The adversarially trained model is trained against the $\ell_\infty$ PGD attack. All the attacks are $\ell_2$ attacks.

Number  Defense Model                                       Attack Method
1       Naturally trained model                             PGD
2       Adversarially trained model (Madry's)               PGD
3       Naturally trained model                             Second-order (S-O) attack
4       Adversarially trained model (Madry's)               Second-order (S-O) attack
5       Stability-trained model with Gaussian noise (STN)   Second-order (S-O) attack
Table 1: Five settings of attack methods and defense models.

MNIST

In the first plot in Figure 2, we monitor the average $\ell_2$ norm of the gradient of the loss function during the construction of adversarial examples. Specifically, at each iteration we compute $\frac{1}{|B|}\sum_{i \in B}\|\nabla_x L(\theta, x_i, y_i)\|_2$, where $B$ is the index set of a batch. We monitor this quantity under settings 1, 2, and 4 in Table 1. The results show that the gradient norms of the adversarially trained model are much smaller than those of the naturally trained model, validating our explanation in Section 3 that an adversarially trained model tends to make the loss function flat in the neighborhood of natural examples. They also show that our attack method can find adversarial examples with large loss more efficiently, by incorporating second-order derivative information.
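
The monitored quantity can be computed as in the short sketch below (names are ours); using a summed loss makes the per-example input gradients exact.

```python
import torch
import torch.nn.functional as F

def avg_input_grad_norm(model, x, y):
    """(1/|B|) * sum_i || grad_x L(theta, x_i, y_i) ||_2 over a batch B."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y, reduction="sum")   # sum => per-example gradients
    grad, = torch.autograd.grad(loss, x)
    return grad.flatten(1).norm(dim=1).mean().item()
```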

In the second plot, we show the classification accuracy on MNIST under settings 1, 2, and 4. The plot suggests that the S-O attack is able to dramatically reduce the accuracy of Madry's model; the resulting accuracy is even worse than that of the naturally trained baseline model.

In the third plot, we show the accuracy of different defense models under the S-O attack, under settings 3, 4, and 5. The plot suggests our model achieves better accuracy than both the baseline and Madry's model. Note that, due to the equivalence between the S-O and EOT attacks for Gaussian noise, the robustness of STN is not achieved by hiding the true gradients via randomization.

Figure 2: MNIST data set. Left: Average norm of the gradients of the loss function over a batch at each iteration during the adversarial attack. Middle: Classification accuracy of Madry's model under the PGD and S-O attacks. Right: Accuracy of different models under the same S-O attack.

CIFAR-10

In Figure 3, we show the classification accuracy on CIFAR-10 under settings 3, 4, and 5. In addition, we include results for PixelDP from Mathias et al. (2018) to show how stability training helps improve classification accuracy. Note that our S-O attack does not significantly further reduce the accuracy of Madry's model on CIFAR-10; across the attack sizes we consider, the accuracy under the S-O attack is only slightly lower than under PGD. We argue this is because their adversarially trained model only achieves weak robustness on CIFAR-10, and thus it is difficult to attack it further. As a comparison, our defense model obtains higher accuracy than both Madry's model and PixelDP.

Figure 3: CIFAR-10 data set. We compare the accuracy on adversarial examples with various attack sizes under two attack norms (left and right).

Overall, stability training combined with Gaussian noise shows a promising level of robustness, and performs better than other models even under attacks that incorporate randomization.

Another strength of our method is that stability training only requires roughly twice the computational cost of standard training, whereas adversarial training is extremely time-consuming due to the iterative construction of adversarial examples.

7 Discussion

Attacks and Adversarial Training

In Madry et al. (2017), the authors propose an adversarially trained model and argue it is robust against $\ell_2$ attacks even though it is trained against $\ell_\infty$ attacks. Our proposed $\ell_2$ attack significantly reduces their classification accuracy.

In general, we find adversarial training is only effective against the type of attack used during training. For example, although our $\ell_2$ attack is effective on a model adversarially trained against $\ell_\infty$ attacks, it cannot defeat a model that is adversarially trained against our $\ell_2$ attack. However, this defense model is then vulnerable to $\ell_\infty$ attacks. This observation makes us believe that adversarial training may overfit to the choice of norm.

On the Gap between Empirical and Theoretical Results

A noticeable gap exists between the theoretical bound shown in Figure 1 and the empirical accuracy. There are three possible explanations for this gap, each pointing to a direction for future work.

The most obvious one is that the proposed upper bound can still be improved with a better analysis. The second explanation is that the empirical results may become worse under stronger attacks that have not yet been proposed, as has happened to many defense models. The third explanation is the limitation of the $\ell_2$ distance. Although our framework is proposed in the context of adversarial attacks, where potential attacks are limited to be non-perceptible by humans, our theoretical analysis does not distinguish the types of perturbations beyond their $\ell_2$ norms. In practice, one can perturb a few pixels by large values, such that the change is perceptible to humans and indeed leads to a change in classification, yet only yields a small $\ell_2$ distance. The existence of such perturbations forces the upper bound to be a small number. Future work might explore similar guarantees for the stronger $\ell_\infty$ distance.

8 Conclusion

We propose a new attack method based on an approximated second-order derivative of the loss function. We show the attack method can effectively reduce the accuracy of adversarially trained defense models that demonstrated significant robustness in previous works. We also propose an analysis for constructing defense models with certifiable robustness. A strategy based on stability training for improving the robustness of these defense models is also introduced. Our defense model shows better accuracy compared to previous works under our proposed attack.

References

9 Appendix

9.1 Fast Approximate Method Miyato et al. (2017)

The power-iteration method Golub & Van der Vorst (2001) allows one to compute the dominant eigenvector $v_1$ of a matrix $H$. Let $d_0$ be a randomly sampled unit vector that is not perpendicular to $v_1$; the iterative calculation

$$d_{k+1} = \frac{H d_k}{\|H d_k\|}$$

leads to $d_k \to v_1$. Given that $H$ is the Hessian matrix of $L$ with respect to $x$, we further use the finite-difference method to reduce the computational complexity:

$$H d \approx \frac{\nabla_x L(x + \xi d) - \nabla_x L(x)}{\xi},$$

where $\xi$ is the step size. If we only take one iteration, this gives an approximation that only requires the first-order derivative:

$$v_1 \propto \nabla_x L(x + \xi d_0) - \nabla_x L(x),$$

which gives equation (5).
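
A PyTorch sketch of this approximation (our illustration): a finite-difference power-iteration step that estimates the dominant eigenvector direction of the input Hessian using only first-order gradients; `xi` and the single-iteration default follow the text, the rest is ours.

```python
import torch
import torch.nn.functional as F

def input_grad(model, x, y):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    return torch.autograd.grad(loss, x)[0]

def approx_dominant_eigvec(model, x, y, xi=1e-2, iters=1):
    """Power iteration with H d approximated by (grad L(x + xi d) - grad L(x)) / xi."""
    d = torch.randn_like(x)
    d = d / d.norm()
    g0 = input_grad(model, x, y)
    for _ in range(iters):
        hd = (input_grad(model, x + xi * d, y) - g0) / xi   # finite-difference Hessian-vector product
        d = hd / (hd.norm() + 1e-12)                        # normalize, as in power iteration
    return d
```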

9.2 Proof of Lemma 1

Lemma 1 Let $P = (p_1, \dots, p_k)$ and $Q = (q_1, \dots, q_k)$ be two multinomial distributions over the same index set. If the indexes of the largest probabilities do not match on $P$ and $Q$, that is $\arg\max_i p_i \neq \arg\max_i q_i$, then

$$D_\alpha(Q\|P) \ge -\log\left(1 - p_{(1)} - p_{(2)} + 2M_{1-\alpha}(p_{(1)}, p_{(2)})\right), \qquad (9)$$

where $p_{(1)}$ and $p_{(2)}$ are the largest and the second largest probabilities in the $p_i$'s.

Proof  Think of this problem as finding the $Q$ that minimizes $D_\alpha(Q\|P)$ subject to $\arg\max_i q_i \neq \arg\max_i p_i$, for fixed $P$. Without loss of generality, assume $p_1 \ge p_2 \ge \dots \ge p_k$, so that $\arg\max_i p_i = 1$.

It is equivalent to solving the following problem:

$$\min_{Q}\; \sum_{i} q_i^{\alpha} p_i^{1-\alpha} \quad \text{subject to} \quad \sum_i q_i = 1,\;\; q_1 \le \max_{i \ge 2} q_i.$$

As the logarithm is a monotonically increasing function, we only focus on the quantity $\sum_i q_i^{\alpha} p_i^{1-\alpha}$ for fixed $\alpha$.

We first show that the $Q$ minimizing this quantity must satisfy $q_1 = \max_i q_i$. Note here we allow a tie, because we can always let $q_j \leftarrow q_j + \tau$ and $q_1 \leftarrow q_1 - \tau$ for some small $\tau > 0$ to satisfy $\arg\max_i q_i \neq 1$, while not changing the Rényi divergence too much by the continuity of the divergence.

If $q_j > q_1$ for some $j \neq 1$, we can define $Q'$ by permuting $q_j$ and $q_1$, that is $q'_1 = q_j$, $q'_j = q_1$, and $q'_i = q_i$ otherwise; then

$$\sum_i (q'_i)^{\alpha} p_i^{1-\alpha} - \sum_i q_i^{\alpha} p_i^{1-\alpha} = \left(q_j^{\alpha} - q_1^{\alpha}\right)\left(p_1^{1-\alpha} - p_j^{1-\alpha}\right) \le 0,$$

which conflicts with the assumption that $Q$ minimizes the quantity. Thus $q_j \le q_1$ for all $j$. Since $q_1$ cannot be the unique largest entry, we have $q_1 = \max_{i \ge 2} q_i$.

Then we are able to assume $q_1 = q_2$, and the problem can be formulated as

$$\min_{Q}\; \sum_{i} q_i^{\alpha} p_i^{1-\alpha} \quad \text{subject to} \quad \sum_i q_i = 1,\;\; q_1 = q_2,$$

which forms a set of KKT conditions. Using Lagrange multipliers, one can obtain the solution $q_1 = q_2 = \frac{M_{1-\alpha}(p_1, p_2)}{1 - p_1 - p_2 + 2M_{1-\alpha}(p_1, p_2)}$ and $q_i = \frac{p_i}{1 - p_1 - p_2 + 2M_{1-\alpha}(p_1, p_2)}$ for $i \ge 3$. Plugging in these quantities, the minimized Rényi divergence is

$$-\log\left(1 - p_1 - p_2 + 2M_{1-\alpha}(p_1, p_2)\right).$$

Thus, we obtain the lower bound of $D_\alpha(Q\|P)$ over all $Q$ with $\arg\max_i q_i \neq \arg\max_i p_i$.

 

9.3 Proof of Theorem 2

We first state a simple result from information theory:

Lemma 3

Given two real-valued vectors $x$ and $x'$, the Rényi divergence of $\mathcal{N}(x, \sigma^2 I)$ and $\mathcal{N}(x', \sigma^2 I)$ is

$$D_\alpha\left(\mathcal{N}(x, \sigma^2 I)\,\|\,\mathcal{N}(x', \sigma^2 I)\right) = \frac{\alpha\,\|x - x'\|_2^2}{2\sigma^2}. \qquad (10)$$
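
As a sanity check (ours), the one-dimensional case of (10) can be verified numerically by integrating the Rényi divergence between the two Gaussian densities directly; the parameter values are arbitrary.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def renyi_gaussian_numeric(x, x_prime, sigma, alpha):
    p = lambda t: norm.pdf(t, loc=x, scale=sigma)
    q = lambda t: norm.pdf(t, loc=x_prime, scale=sigma)
    integrand = lambda t: p(t) ** alpha * q(t) ** (1.0 - alpha)
    integral, _ = quad(integrand, -30.0, 30.0)   # finite range, wide enough for these parameters
    return np.log(integral) / (alpha - 1.0)

x, x_prime, sigma, alpha = 0.0, 0.8, 1.5, 3.0
closed_form = alpha * (x - x_prime) ** 2 / (2.0 * sigma ** 2)
print(closed_form, renyi_gaussian_numeric(x, x_prime, sigma, alpha))  # the two should agree closely
```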

Theorem 2 Suppose we have $x$, and a potential adversarial example $x'$ such that $\|x - x'\|_2 \le L$. Given a $k$-class classifier $f$, let $p_i = \mathbb{P}(f(x+\delta) = i)$ and $p'_i = \mathbb{P}(f(x'+\delta) = i)$, where $\delta \sim \mathcal{N}(0, \sigma^2 I)$.

If the following condition is satisfied, with $p_{(1)}$ and $p_{(2)}$ being the first and second largest probabilities in the $p_i$'s:

$$\sup_{\alpha > 1}\;\frac{2\sigma^2}{\alpha}\left(-\log\left(1 - p_{(1)} - p_{(2)} + 2M_{1-\alpha}(p_{(1)}, p_{(2)})\right)\right) \;\ge\; L^2, \qquad (11)$$

then $\arg\max_i p_i = \arg\max_i p'_i$.

Proof  From Lemma 3, we know that for $x$ and $x'$ such that $\|x - x'\|_2 \le L$, and for a $k$-class classification function $f$,

$$D_\alpha\left(f(x'+\delta)\,\|\,f(x+\delta)\right) \le D_\alpha\left(\mathcal{N}(x', \sigma^2 I)\,\|\,\mathcal{N}(x, \sigma^2 I)\right) = \frac{\alpha\,\|x - x'\|_2^2}{2\sigma^2} \le \frac{\alpha L^2}{2\sigma^2}, \qquad (12)$$

where $\delta \sim \mathcal{N}(0, \sigma^2 I)$ is the added Gaussian noise. The first inequality comes from the fact that $D_\alpha(g(P)\|g(Q)) \le D_\alpha(P\|Q)$ for any function $g$ (the data-processing inequality).

Therefore, if we further have, for some $\alpha > 1$,

$$\frac{\alpha L^2}{2\sigma^2} \le -\log\left(1 - p_{(1)} - p_{(2)} + 2M_{1-\alpha}(p_{(1)}, p_{(2)})\right), \qquad (13)$$

it implies

$$D_\alpha\left(f(x'+\delta)\,\|\,f(x+\delta)\right) \le -\log\left(1 - p_{(1)} - p_{(2)} + 2M_{1-\alpha}(p_{(1)}, p_{(2)})\right). \qquad (14)$$

Then from Lemma 1 we know that the indexes of the maxima of $\{p_i\}$ and $\{p'_i\}$ must be the same, which means $x$ and $x'$ receive the same prediction; this implies robustness.

9.4 Comparison between Bounds

We use simulation to show that our proposed bound is higher than the one from PixelDP Mathias et al. (2018).

In PixelDP, the upper bound on the size of attacks is defined indirectly: if $p_{(1)} \ge e^{2\epsilon} p_{(2)} + (1 + e^{\epsilon})\delta$ and the added Gaussian noise is calibrated so that the prediction is $(\epsilon, \delta)$-differentially private with respect to input changes of $\ell_2$ norm at most $L$, then the classifier is robust against attacks whose $\ell_2$ size is less than $L$.

As both bounds are determined by the models and data only through $p_{(1)}$ and $p_{(2)}$, it is sufficient to compare them via simulation over different values of $p_{(1)}$ and $p_{(2)}$, as long as $p_{(1)} \ge p_{(2)} \ge 0$ and $p_{(1)} + p_{(2)} \le 1$ are satisfied.

For fixed $p_{(1)}$ and $p_{(2)}$, $\epsilon$ and $\delta$ are two tuning parameters that affect the PixelDP bound. For a fair comparison, we use a grid search to find the $\epsilon$ and $\delta$ that maximize their bound.

Figure 4: The upper bounds under different $p_{(1)}$ and $p_{(2)}$. In both settings, we fix the variance of the Gaussian noise. Our bound (red) is strictly higher than the one from PixelDP (blue).

The simulation result shows our bound is strictly higher than the one from PixelDP.

9.5 Implementation Details

In this section, we specify the hyperparameters used in our experiments and other details. For all experiments, the baseline models are implemented using the code from <https://github.com/MadryLab/mnist_challenge> and <https://github.com/MadryLab/cifar10_challenge>.

Our defense model only requires the following modifications: 1) for stability training, we remove the adversarial-training component and include the stability-training regularizer; 2) at test time, we add i.i.d. Gaussian noise to the pixels before feeding the images into the model.

For MNIST, we use stability training with the STD of the Gaussian noise being . The STD of the Gaussian noise during testing is . For CIFAR-10, the STD of the Gaussian noise in stability training is . The STD of the Gaussian noise during testing is . The weight of the regularizer is for both data sets.