On Configurable Defense against Adversarial Example Attacks

12/06/2018 · by Bo Luo, et al. · The Chinese University of Hong Kong

Machine learning systems based on deep neural networks (DNNs) have gained mainstream adoption in many applications. Recently, however, DNNs have been shown to be vulnerable to adversarial example attacks with slight perturbations on the inputs. Existing defense mechanisms against such attacks try to improve the overall robustness of the system, but they do not differentiate between targeted attacks even though the corresponding impacts may vary significantly. To tackle this problem, we propose a novel configurable defense mechanism in this work, wherein we are able to flexibly tune the robustness of the system against different targeted attacks to satisfy application requirements. This is achieved by refining the DNN loss function with an attack sensitive matrix that represents the impacts of different targeted attacks. Experimental results on the CIFAR-10 and GTSRB data sets demonstrate the efficacy of the proposed solution.


1 Introduction

Deep neural networks (DNNs) have become the foundational technique for many safety- and security-critical artificial intelligence (AI) applications such as autonomous driving, medical imaging, and biometric authentication [25, 20, 8]. Needless to say, reliability and safety are the primary concerns for these sensitive applications, and it is imperative to mitigate any possible threats.

One of the main threats all DNNs face is adversarial examples (AEs), which are carefully crafted inputs that deceive the model into making a wrong decision. Due to the severity of this problem, there has been a large body of research on AE attacks [22, 5, 17, 2, 16] and defenses [10, 23, 18, 19] from both academia and industry.

However, as pointed out in [11], existing defense techniques can only defend against limited types of attacks under restricted settings. It is very difficult to have a universal solution that defends against all possible adversarial example attacks, especially considering that we are only able to explore a small part of the data domain during the deep learning training phase, while the remaining large unexplored space can be exploited by attackers. In practice, the impacts of different misclassifications caused by adversarial example attacks may vary significantly. Consider a medical image diagnostic system: missing a life-threatening disease is usually regarded by a patient as much more severe than a false positive diagnosis. Thus, it is imperative to take the impacts of different adversarial example attacks into consideration and develop configurable defense mechanisms that satisfy the unique requirements of different applications, which has not been explored in the literature.

In this paper, our idea is to refine the loss function of the DNN by adding a new term, weighted by an attack sensitive matrix, to perceive the costs of different targeted attacks. Then, by adjusting the attack sensitive matrix in the loss function, we are able to tune the defense strength against different targeted attacks, thereby effectively increasing the attack effort required for high-cost attacks. To the best of our knowledge, this is the first configurable defense mechanism against targeted AE attacks. The main contributions of this paper include:

  • We propose a novel configurable defense method for adversarial example attacks by refining the loss function of DNNs during training.

  • We present two common defense objectives that can be achieved by our configurable defense: one is to increase the weighted average robustness of the system, and the other is to increase its lower bound robustness.

  • We conduct the experiments on CIFAR-10 and GTSRB data sets and show that our solution can achieve significant improvement compared to the state-of-the-art defense methods under a range of attacks.

The remainder of this paper is organized as follows: First, we introduce some preliminary knowledge, the related work and motivation in Section 2. Next, we detail the proposed configurable defense mechanism by refining the loss function in Section 3. After that, in Section 4, two efficient algorithms are presented to achieve two common defense objectives. Lastly, we show the experimental results in Section 5 and conclusions in Section 6.

2 Preliminaries and Motivation

2.1 Adversarial Example Attack

There are two kinds of adversarial example attacks: targeted attacks [22, 17, 2, 3] and un-targeted attacks [16, 15, 7]. Targeted attacks try to make DNNs misclassify an input from its correct source label to a targeted malicious label, while the objective of un-targeted attacks is to fool DNNs into making mistakes, regardless of the resulting label. In this paper, we focus on configurable defense against targeted adversarial example attacks, which can be formulated as follows:

    a_{s→t}:   min_δ ‖δ‖    s.t.   f(x) = s,   f(x + δ) = t        (1)

Under the attack a_{s→t}, the DNN model f misclassifies the sample x from the correct source label s to the targeted malicious label t. The objective of the attacker is to find the minimum perturbation vector δ added to the input x so that x + δ is misclassified as t. For a classifier with K classes, there are K(K − 1) kinds of targeted adversarial example attacks, and each may incur a different cost for system users.
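To make the formulation concrete, the following is a minimal PyTorch sketch of a targeted iterative FGSM (IFGSM) attack, one of the attacks later used in Section 5. The perturbation budget eps, step size alpha, iteration count, and the assumption that inputs lie in [0, 1] are illustrative choices, not the exact attack configuration used in this paper.

```python
import torch
import torch.nn.functional as F

def targeted_ifgsm(model, x, target, eps=8/255, alpha=2/255, steps=10):
    """Craft targeted adversarial examples for attack a_{s->t}: push x toward the
    label `target` while keeping the L-infinity perturbation within eps.
    (Illustrative sketch; hyperparameters are assumptions.)"""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Loss w.r.t. the *target* label: decreasing it moves x_adv toward class t.
        loss = F.cross_entropy(model(x_adv), target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() - alpha * grad.sign()
        # Project back into the eps-ball around x and the assumed pixel range [0, 1].
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv
```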

The robustness r_{s→t} under the adversarial example attack a_{s→t} is defined as the correct classification rate of adversarial examples, which measures the resilience of the model under attack. The higher the correct classification rate, the more robust the DNN model. The model robustness under the attack a_{s→t} is formulated as:

    r_{s→t} = N_correct(a_{s→t}) / N_total(a_{s→t})        (2)

where the numerator N_correct(a_{s→t}) denotes the number of adversarial examples generated under the targeted attack a_{s→t} that can still be correctly classified as s, and the denominator N_total(a_{s→t}) represents the total number of adversarial examples crafted under the attack a_{s→t}.
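As a minimal sketch, r_{s→t} in Eq. (2) could be estimated as below, assuming a hypothetical helper craft_adversarial(model, x, s, t) that implements some targeted attack (e.g., the IFGSM sketch above) and an iterable samples_s of input batches whose true label is s; these names are ours, not the paper's.

```python
import torch

def robustness(model, samples_s, craft_adversarial, s, t):
    """Estimate r_{s->t} (Eq. 2): the fraction of adversarial examples crafted
    under the targeted attack a_{s->t} that are still classified as s."""
    correct, total = 0, 0
    for x in samples_s:                              # batches with true label s
        x_adv = craft_adversarial(model, x, s, t)    # hypothetical attack helper
        with torch.no_grad():
            pred = model(x_adv).argmax(dim=1)
        correct += (pred == s).sum().item()
        total += x.size(0)
    return correct / max(total, 1)
```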

2.2 Related Work

In the literature, there are two categories of defenses against adversarial example attacks: one builds detection systems to recognize adversarial examples during model usage [12, 24, 14], and the other trains more robust models that correctly classify adversarial examples [22, 18]. For detecting adversarial examples, a number of techniques have been explored, such as performing statistical tests [12] or training an additional detection model [14]. However, as adversarial examples are very close to legitimate samples, it has been shown that many detection methods can be bypassed easily by attackers [1]. For training a more robust model, defensive distillation [18] first builds a classification model and smoothes its softmax layer by dividing by a constant, then trains a robust model with the soft labels output by the first model. The input gradient regularization method [19, 6] trains a contractive network by minimizing the gradients of the output predictions with respect to the model inputs. Adversarial training [22, 13] augments the original training set with crafted adversarial examples. However, all previous defenses attempt to provide a universal solution that improves the model robustness against all kinds of adversarial example attacks, which is impossible [11].

Figure 1: Moving the decision boundary to mitigate some adversarial examples leads to the misclassification of others.

2.3 Motivation

As the minimum distance from the legitimate inputs of a class to the decision boundary dictates the amount of perturbation required to generate the corresponding adversarial examples, moving the decision boundary of a classifier to mitigate some threats inevitably leads to new ones. For example, the classifier in Figure 1(a) correctly classifies one adversarial example while misclassifying the other two; after moving the decision boundary as shown in Figure 1(b), the classifier correctly classifies those two adversarial examples but misclassifies the first one. Therefore, it is very difficult, if not impossible, to have a universal solution that defends against all possible adversarial example attacks.

In fact, the impacts of different adversarial example attacks vary significantly in a particular machine learning system. For example, for a traffic sign classifier used in self-driving cars, misclassifying a "yield" sign as a "stop" sign will not cause a problem, but the opposite misclassification may cause severe traffic accidents. Therefore, a preferred defense mechanism should protect the DNN model in such a manner that it is more difficult for attackers to perturb a legitimate "stop" sign into another road sign than to induce other possible misclassifications. In other words, it is imperative to have a configurable defense mechanism against adversarial example attacks that satisfies the unique requirements of particular machine learning applications.

Motivated by the above, in this paper we investigate configurable defense mechanisms for satisfying the unique requirements of different applications, as a universal solution is usually intractable. To the best of our knowledge, this is the first work on configurable defense against adversarial example attacks, as detailed in the following sections.

3 The Proposed Method

The idea of our configurable defense is to devise a configurable loss function that includes a matrix to perceive the costs of the misclassifications caused by different adversarial example attacks. In this section, we first introduce the limitation of the cross entropy loss used in training DNNs. Then, we propose our configurable loss functions to mitigate this limitation. Lastly, we present the overall workflow of our configurable defense, which performs adversarial training with the refined loss functions to obtain a robust model.

3.1 Limitation of Cross Entropy Loss

In classification problems based on DNNs, the most widely used loss is the cross entropy loss [4], which has shown great success in achieving high classification accuracy. Assuming there are K classes, the cross entropy loss for one training sample is:

    L_CE = − Σ_{i=1}^{K} y_i · log(p_i)        (3)

where y_i is the i-th element of the one-hot encoding of the true label: if the sample belongs to class i, y_i equals 1; otherwise it is zero. p_i is the probability of class i predicted by the classifier. As a result, the cross entropy loss for a sample simplifies to −log(p_c), where c is the index of the true label of this sample. Based on this fact, we observe that the cross entropy loss only cares about the prediction accuracy on the true class, regardless of the others. Thus, it cannot account for the costs of the misclassifications caused by different adversarial example attacks.
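This limitation can be seen numerically: two predictions that assign the same probability to the true class but distribute the remaining mass very differently incur exactly the same cross entropy loss. The small check below is purely illustrative.

```python
import torch

# Two predicted distributions for a sample whose true class is index 0.
# Both assign 0.6 to the true class but spread the remaining 0.4 differently.
p1 = torch.tensor([0.60, 0.38, 0.01, 0.01])
p2 = torch.tensor([0.60, 0.10, 0.10, 0.20])

# With a one-hot label, the cross entropy loss reduces to -log(p_c).
print((-torch.log(p1[0])).item())  # ~0.51
print((-torch.log(p2[0])).item())  # identical: the wrong-class distribution is ignored
```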

3.2 Attack Sensitive Loss

To solve this problem, we propose a refined loss, called the attack sensitive loss, which incorporates the prediction probabilities of all classes instead of only the prediction probability of the true class. We then introduce an attack sensitive matrix M into the loss, which perceives the costs of different targeted attacks and allows the defense strength against them to be configured. In this paper, we propose two formulations for the attack sensitive loss.

The first one is defined as follows:

    L_AS1 = Σ_{i=1}^{K} M_{c,i} · (1 − y_i) · p_i        (4)

where (1 − y_i) · p_i calculates the error magnitude for class i. As training samples are labeled in one-hot format, (1 − y_i) equals 1 whenever the sample does not belong to class i; if the classifier erroneously predicts the sample as class i with probability p_i, then the higher p_i, the larger the prediction error. M_{c,i} is a value in the attack sensitive matrix M and denotes the cost of the targeted attack a_{c→i} from the true class c. The larger M_{c,i}, the larger the loss caused by a_{c→i} during training, and the more robust the well-trained model becomes against this attack. In this way, our defense can configure the model robustness against different adversarial example attacks by adjusting the attack sensitive matrix M.

The second attack sensitive loss differs from the first only in how the error magnitude is calculated. It has the following form:

    L_AS2 = Σ_{i=1}^{K} M_{c,i} · (p_i − p_c)        (5)

Here, the error magnitude for class i is the gap between the predicted probability of class i and that of the true class c, denoted as (p_i − p_c). Apparently, the larger the probability gap, the larger the error made by the model. We then multiply the error magnitude by the corresponding attack sensitive value M_{c,i}. As before, the attack sensitive matrix M can be configured to adjust the costs assigned to different misclassifications.
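A minimal PyTorch sketch of the two attack sensitive losses in Eqs. (4) and (5) is given below, assuming probs are softmax outputs of shape (batch, K), labels are integer class indices, and M is the K×K attack sensitive matrix (a torch tensor) with zeros on the diagonal. The function and variable names are ours, chosen for illustration.

```python
import torch

def attack_sensitive_loss_v1(probs, labels, M):
    """Eq. (4): sum_i M[c, i] * (1 - y_i) * p_i -- penalize probability mass placed
    on wrong classes, weighted by the cost of the corresponding attack c -> i."""
    costs = M[labels]                                          # (batch, K): row c of M
    one_hot = torch.zeros_like(probs).scatter_(1, labels.unsqueeze(1), 1.0)
    return (costs * (1.0 - one_hot) * probs).sum(dim=1).mean()

def attack_sensitive_loss_v2(probs, labels, M):
    """Eq. (5): sum_i M[c, i] * (p_i - p_c) -- penalize the gap between each
    wrong-class probability and the true-class probability."""
    costs = M[labels]
    p_true = probs.gather(1, labels.unsqueeze(1))              # (batch, 1): p_c
    return (costs * (probs - p_true)).sum(dim=1).mean()
```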

3.2.1 Loss Function Combination

As the cross entropy loss achieves good accuracy in training DNNs and our attack sensitive loss can configure the model robustness against different adversarial example attacks, we combine the two loss functions to achieve a good tradeoff between classification accuracy and robustness. The combined loss function is formulated as follows:

    L = L_CE + λ · L_AS        (6)

where L_CE is the cross entropy loss, L_AS is the attack sensitive loss (either Eq. (4) or Eq. (5)), and λ is a parameter that balances the effects of the two losses.
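Continuing the sketch above, the combined loss of Eq. (6) could be assembled as follows, reusing attack_sensitive_loss_v2 (the variant that performs better in Section 5); logits are the raw model outputs, and the default lam value is an illustrative assumption, not a value from the paper.

```python
import torch.nn.functional as F

def combined_loss(logits, labels, M, lam=0.1):
    """Eq. (6): cross entropy + lam * attack sensitive loss (sketch)."""
    ce = F.cross_entropy(logits, labels)
    probs = F.softmax(logits, dim=1)
    return ce + lam * attack_sensitive_loss_v2(probs, labels, M)
```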

3.3 Workflow of Our Configurable Defense

Figure 2: The procedure of our configurable defense.

The idea of the proposed configurable defense is to introduce our loss functions into the adversarial training method, achieving different defense strengths to satisfy the unique requirements of different applications. The workflow of our configurable defense is shown in Figure 2. We first determine the attack sensitive matrix M based on the system objective. Then we refine the loss functions of the DNN by including M as a parameter that perceives the costs of different targeted attacks. Finally, we train with the refined loss on the augmented training set and obtain a robust model satisfying the application requirement.

The key challenge in our configurable defense is to assign the attack sensitive matrix M properly: values that are too large degrade the performance on legitimate samples, while values that are too small do not bring a satisfactory robustness improvement. To solve this challenge, we propose algorithms that optimize the attack sensitive matrix to achieve an appropriate robustness improvement under different defense objectives. The details are presented in the following section.

4 Defense Objectives

In this section, we introduce two common defense objectives that can be achieved by the proposed configurable mechanism: one is to increase the weighted average robustness of the system, which concentrates more on particular severe targeted attacks, while the other is to increase the lower bound robustness of the system, which addresses its security bottleneck.

4.1 Increase Weighted Average Robustness

Many machine learning systems have robustness preferences; in a disease diagnosis system, for example, sick cases should not be perturbed into healthy ones. As a result, the defense should concentrate more on robustness against high-cost targeted attacks instead of treating all attacks equally. Based on this analysis, we define the system robustness as the weighted average robustness:

    R = Σ_{s≠t} w_{s→t} · r_{s→t}        (7)

where r_{s→t} is the robustness under the targeted attack a_{s→t}, and w_{s→t} is the weight, i.e., the cost of a successful attack a_{s→t}, specified by the system users. The more important a_{s→t} is to the system robustness, the larger w_{s→t}.
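Eq. (7) is simply a weighted sum over the off-diagonal entries of a robustness matrix; a small sketch, assuming r and w are K×K NumPy arrays holding r_{s→t} and the user-specified weights w_{s→t}:

```python
import numpy as np

def weighted_average_robustness(r, w):
    """Eq. (7): R = sum over s != t of w[s, t] * r[s, t]."""
    mask = ~np.eye(r.shape[0], dtype=bool)   # ignore the diagonal (no s -> s attack)
    return float((w[mask] * r[mask]).sum())
```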

As discussed earlier, increasing the values in the attack sensitive matrix M increases the model robustness against the corresponding targeted attacks, but it inevitably influences the classification rate of legitimate samples. To ensure system usability, the accuracy on legitimate samples must be constrained, and we then increase the system robustness as much as possible under this constraint. Overall, we formulate the problem as follows:

    max_M  R = Σ_{s≠t} w_{s→t} · r_{s→t}    s.t.   Acc_legitimate ≥ θ        (8)

Our target is to find the attack sensitive matrix M for our loss functions that maximizes the system robustness while keeping the accuracy on legitimate samples above a given threshold θ.

Input: Training set D_train, validation set D_val, minimum required accuracy θ, weights w, step Δ.
Output: Attack sensitive matrix M.
1  Initialize the attack sensitive matrix M with all 1s except 0s on the diagonal;
2  Sort the attack positions (s, t) in descending order of w_{s→t} and store them in Q;
3  for each (s, t) in Q do
4      terminate ← false;
5      while not terminate do
6          Train the model on D_train with the refined loss;
7          Evaluate the accuracy acc on D_val;
8          if acc ≥ θ then
9              M_{s,t} ← M_{s,t} + Δ;
10         else
11             terminate ← true;
12             M_{s,t} ← M_{s,t} − Δ;
13         end if
14     end while
15 end for
16 return M;
Algorithm 1: Increase Weighted Average Robustness.

However, finding the optimal attack sensitive matrix M for this problem is not easy, as the values in the matrix are unconstrained and the search space is infinite. Besides, there are no signals, such as the gradients used when minimizing loss functions, to guide the update of the matrix. To solve this problem, we propose a simple yet efficient greedy algorithm to find an appropriate attack sensitive matrix. The intuition is that increasing the value M_{s,t} increases the robustness against the targeted attack a_{s→t} once the model is trained with our refined loss. As a result, we first increase the value of M_{s,t} corresponding to the most serious attack, i.e., the attack with the largest weight w_{s→t} defined by users. When M_{s,t} becomes too large and violates the constraint, we fix it and start to increase the value of M corresponding to the second most serious attack. This process continues until no value in M can be increased without violating the accuracy constraint.

The detailed process of finding an appropriate attack sensitive matrix is shown in Algorithm 1. We first initialize M (line 1). Then, in line 2, we sort the attack positions (s, t) in descending order of the attack seriousness w_{s→t} given by users and store them in Q. Next, the algorithm traverses the positions of the attack sensitive matrix in descending order of attack seriousness. For each position, we train the model with our new loss and evaluate the accuracy on legitimate samples (lines 6-7). If the constraint on the accuracy of legitimate samples is satisfied, the element of M at position (s, t) is increased by a small step Δ (lines 8-9). The process continues until the constraint is violated, at which point the termination flag is set and the last update of M_{s,t} is rolled back (lines 10-13).
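A Python sketch of the greedy search in Algorithm 1 is shown below. The helpers train_with_loss(M) (adversarial training with the refined loss) and accuracy(model, val_set) are hypothetical stand-ins for the paper's training and evaluation routines.

```python
import numpy as np

def increase_weighted_average_robustness(K, weights, theta, delta,
                                         train_with_loss, accuracy, val_set):
    """Greedy search for the attack sensitive matrix M (Algorithm 1, sketch).
    weights: dict {(s, t): w_st}; theta: minimum legitimate accuracy; delta: step."""
    M = np.ones((K, K)) - np.eye(K)          # all ones, zeros on the diagonal
    # Visit attacks from most to least serious, as ranked by the user weights.
    for (s, t) in sorted(weights, key=weights.get, reverse=True):
        while True:
            model = train_with_loss(M)       # retrain with the refined loss
            if accuracy(model, val_set) >= theta:
                M[s, t] += delta             # constraint satisfied: keep pushing
            else:
                M[s, t] -= delta             # roll back the last step and move on
                break
    return M
```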

4.2 Increase Lower Bound Robustness

Apart from increasing the weighted average robustness, there are cases where users care more about the lower bound robustness. The robustness under different targeted attacks is extremely imbalanced in conventionally trained models: some attacks are very easy to mount, while others are quite difficult to succeed [17]. The lower bound robustness is the bottleneck of system security; as a result, it is essential to improve it. We formulate the problem as follows:

    max_M  min_{s≠t} r_{s→t}    s.t.   Acc_legitimate ≥ θ        (9)

Our objective is to find the attack sensitive matrix M that maximizes the lower bound robustness while keeping the accuracy on legitimate samples above a given threshold θ.

Similarly, we propose an efficient algorithm to solve this problem. From the previous sections, we know that increasing M_{s,t} improves the robustness under the targeted attack a_{s→t}. Therefore, we can keep increasing the element of the attack sensitive matrix corresponding to the lower bound robustness until the constraint is violated. However, as the position of the minimum robustness in M may change frequently between iterations, convergence can be slowed by such oscillations. Thus, in each iteration, we increase a batch of elements of M corresponding to the lowest robustness values simultaneously.

Input: Training set D_train, validation set D_val, minimum required accuracy θ, batch size b, step Δ.
Output: Attack sensitive matrix M.
1  Initialize the attack sensitive matrix M with all 1s except 0s on the diagonal;
2  terminate ← false;
3  while not terminate do
4      Train the model on D_train with the refined loss;
5      Evaluate the accuracy acc on D_val;
6      if acc ≥ θ then
7          Add Δ to the b elements of M corresponding to the b lowest robustness values;
8      else
9          terminate ← true;
10         Subtract Δ from the b elements of M updated in the previous iteration;
11     end if
12 end while
13 return M;
Algorithm 2: Increase Lower Bound Robustness.

The whole process is listed in Algorithm 2, where we first initialize the attack sensitive matrix M as in Algorithm 1. In each iteration, we train the model with our refined loss and evaluate the accuracy on the validation set (lines 4-5). If the accuracy does not violate the constraint, we choose the b elements of M corresponding to the b lowest robustness values and increase each by a small step Δ (lines 6-7). If the accuracy violates the constraint, we set the termination flag and subtract Δ from the elements of M that were updated in the previous iteration (lines 9-10).
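A corresponding sketch of Algorithm 2 follows, again with hypothetical train_with_loss and accuracy helpers plus a robustness_matrix(model) helper that returns the K×K matrix of r_{s→t} values (for instance, measured with an attack such as the IFGSM sketch in Section 2).

```python
import numpy as np

def increase_lower_bound_robustness(K, theta, delta, batch, train_with_loss,
                                    accuracy, robustness_matrix, val_set):
    """Batched greedy search for M (Algorithm 2, sketch). Each iteration boosts
    the `batch` entries of M whose targeted attacks have the lowest robustness."""
    M = np.ones((K, K)) - np.eye(K)
    last_updated = []
    while True:
        model = train_with_loss(M)
        if accuracy(model, val_set) >= theta:
            r = robustness_matrix(model).astype(float)
            r[np.eye(K, dtype=bool)] = np.inf          # exclude the diagonal
            flat = np.argsort(r, axis=None)[:batch]    # indices of lowest robustness
            last_updated = [np.unravel_index(i, r.shape) for i in flat]
            for (s, t) in last_updated:
                M[s, t] += delta
        else:
            for (s, t) in last_updated:
                M[s, t] -= delta                       # undo the previous batch
            break
    return M
```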

Figure 3: The impacts of the attack sensitive value M_{s,t} on the model accuracy for adversarial examples under (a) C&W, (b) IFGSM, and (c) PGD attacks, and (d) on legitimate samples, with the balancing parameter λ fixed in our losses.

5 Experimental Results

In this section, we evaluate the effectiveness of our proposed configurable defense from three aspects: the configurability of our refined loss functions with respect to the model robustness; the performance of the two proposed algorithms for their respective defense objectives; and the performance of our configurable defense on a practical problem (a road sign recognition system).

5.1 Experimental Setup

Datasets: Our experiments are performed on the CIFAR-10 [9] and GTSRB (German Traffic Sign Recognition Benchmark) [21] data sets. CIFAR-10 contains 60,000 color images of 10 natural object classes, each of size 32×32×3. GTSRB contains 50,000 images of 43 kinds of road signs. The pixel intensities of all images are scaled to real values in a normalized range.

DNN Models: The model architectures for both data sets are deep convolutional neural networks (CNNs). They achieve 94.8% and 98.8% classification accuracy on CIFAR-10 and GTSRB, respectively, which is comparable to state-of-the-art results.

Baselines: In the experiments, we compare the performance of our configurable defense mechanism with three state-of-the-art defenses:

  • PGD-based Adversarial Training [13], which trains on the training set augmented with adversarial examples generated by the PGD attack;

  • Feature Squeezing [24], which improves model robustness by reducing the bit depth of the inputs;

  • Input Gradient Regularization [19], which trains the model to have smoother gradients of the predictions with respect to the inputs.

We use three state-of-the-art attacks, PGD [13], IFGSM [10] and C&W [2], to evaluate the performance of these defenses.

5.2 Configurability of Loss Functions

In this section, we evaluate the configurability of our refined loss functions with respect to the model robustness against a specific targeted attack by adjusting the corresponding attack sensitive value M_{s,t} in M. In the experiments, we randomly choose one value M_{s,t} to change, and train models with our two configurable losses while increasing M_{s,t} from 1 to 100. The model robustness is evaluated under the IFGSM, C&W, and PGD attacks, respectively. We also evaluate the prediction accuracy on legitimate samples without attacks when training with our new losses. The results for the CIFAR-10 data set are shown in Figure 3. To better evaluate the performance of our configurable loss functions, we train the models under two schemes: one trains without adversarial examples, and the other uses adversarial training (augmenting the training set with adversarial examples crafted by PGD).

Attack | PGD-based Adv. Train | Input Gradient Regularization | Feature Squeezing | Our Defense (Train with PGD Adv.) | Our Defense (Train with Ensemble Adv.)
IFGSM  | 63.4% | 67.2% | 63.7% | 81.1% | 90.8%
PGD    | 68.6% | 63.6% | 58.9% | 86.5% | 88.1%
C&W    | 55.4% | 60.1% | 52.1% | 76.4% | 84.3%
Table 1: The performance of our configurable defense compared with the state-of-the-art defenses for improving the weighted average robustness on CIFAR-10.
Attack | PGD-based Adv. Train | Input Gradient Regularization | Feature Squeezing | Our Defense (Train with PGD Adv.) | Our Defense (Train with Ensemble Adv.)
IFGSM  | 46.2% | 53.5% | 43.1% | 63.7% | 71.4%
PGD    | 44.6% | 50.3% | 42.8% | 65.1% | 66.2%
C&W    | 39.4% | 44.7% | 36.9% | 56.3% | 64.8%
Table 2: The performance of our configurable defense compared with the state-of-the-art defenses for improving the lower bound robustness on CIFAR-10.

Firstly, the dotted lines in Figures 3(a), (b), and (c) show that our refined loss functions can improve the model robustness even without adversarial training: the model robustness increases from about 5% to 18% as M_{s,t} increases from 1 to 100. However, the accuracy on legitimate samples degrades by about 1.5%, as shown by the dotted lines in Figure 3(d).

Secondly, the solid lines in Figures 3(a), (b), and (c) show that, with adversarial training, our losses effectively increase the model robustness against the targeted attack as the corresponding value M_{s,t} increases. However, when M_{s,t} becomes large, the accuracy on legitimate samples degrades by about 3.2%. This indicates that increasing model robustness inevitably degrades the accuracy on legitimate samples, which must be taken into account when pursuing our defense objectives.

Finally, we observe that our second loss function performs better than the first, as it yields a larger improvement in model robustness. We attribute this to the fact that the second loss uses the probability gap to measure the error magnitude; a model trained with this loss tends to learn features unique to each class in order to enlarge the probability gap, and thus becomes more robust.

5.3 Defense Objective Evaluation

In this section, we evaluate the effectiveness of the algorithms proposed for the two defense objectives: improving the weighted average robustness and improving the lower bound robustness. As our second refined loss performs better than the first, the following results are obtained with the second loss function. To compare the different defense mechanisms fairly, we constrain the degradation of legitimate sample accuracy to be less than 1%; that is, we set θ to 93.8% for CIFAR-10.

5.3.1 Increase Weighted Average Robustness

We evaluate the effectiveness of Algorithm 1 for improving the weighted average robustness, compared with the three baseline defense mechanisms. The weights w_{s→t} for the targeted attacks are set as follows: we randomly select 6 weights and set them to 0.4, 0.2, 0.08, 0.06, 0.04, and 0.02, respectively; the remaining weights are all set to the same value, determined by the constraint that all weights sum to 1. In practice, the weights would be set based on application requirements.

Table 1 shows the weighted average robustness achieved by the different defense methods under a range of attacks on CIFAR-10. Input Gradient Regularization achieves the best weighted average robustness among the three baseline defenses under the IFGSM and C&W attacks, at 67.2% and 60.1%, respectively. Under the PGD attack, PGD-based Adversarial Training achieves the best baseline performance of 68.6%. This is consistent with the previous finding that adversarial training is not effective against attacks whose adversarial examples are not included in the training set.

Our configurable defense improves on these results further. When training with adversarial examples generated by PGD, we obtain 81.1%, 86.5%, and 76.4% weighted average robustness under the IFGSM, PGD, and C&W attacks, respectively. When we use ensemble adversarial training (including adversarial examples crafted by IFGSM, PGD, and C&W), our configurable defense achieves 90.8% and 84.3% weighted average robustness under the IFGSM and C&W attacks, which is almost a 35% improvement over the best results among the three baselines. The reason for this significant improvement is that we judiciously protect the model against the severe attacks instead of treating all of them equally, as previous methods do.

Figure 4: (a) The original sample and the corresponding adversarial examples crafted under the C&W attack against models defended by (b) PGD adversarial training, (c) gradient regularization, (d) feature squeezing, and (e) our configurable defense.
Attack | PGD-based Adv. Train | Input Gradient Regularization | Feature Squeezing | Our Defense (Train with PGD Adv.) | Our Defense (Train with Ensemble Adv.)
IFGSM  | 58.3% | 61.2% | 56.1% | 80.7% | 87.8%
PGD    | 63.8% | 58.1% | 53.4% | 84.8% | 86.2%
C&W    | 53.1% | 56.3% | 51.6% | 76.8% | 82.6%
Table 3: Our configurable defense compared with the state-of-the-art defenses under a range of attacks on GTSRB.

5.3.2 Increase Lower Bound Robustness

We evaluate the effectiveness of Algorithm 2 for improving the lower bound robustness, compared with the three baseline defense mechanisms. Table 2 shows the lower bound robustness achieved by the different defense methods on CIFAR-10. The best performance among the three baselines is achieved by the Input Gradient Regularization method, with lower bound robustness of 53.5%, 50.3%, and 44.7% under the IFGSM, PGD, and C&W attacks, respectively. Compared with our configurable defense, however, this improvement is limited: our solution achieves roughly 22% and 30% improvements over the best baseline results when using PGD adversarial training and ensemble training, respectively.

To conclude, our solution achieves significant improvements over previous defense methods for different defense objectives, with only a small degradation in the accuracy of legitimate samples. When a universal defense solution is not available, our configurable defense mechanism is essential for protecting the model against the most severe attacks.

5.4 Case Study

To evaluate the effectiveness of our configurable defense on a practical problem, we implement a use case on a traffic road sign recognition system, conducting the experiments on the GTSRB data set. In a road sign recognition system, the most important security guarantee is that the "Stop" sign should not be classified as anything else. Thus, the defense objective in this system is to improve the model robustness against the adversarial example attacks that misclassify the "Stop" sign into other labels (the attacks a_{stop→t} for all t ≠ stop).

This problem can be solved using Algorithm 1 to improve the weighted average robustness of the model, with the weights of the attacks a_{stop→t} set to be the largest. In this experiment, we set the weight for the a_{stop→t} attacks to 0.8 in total, and the remaining weights are set to the same value so that all weights sum to 1. Under this setting, the adversarial example attacks that misclassify the "Stop" sign as a "Non-stop" sign incur the most serious impact.

Table 3 shows the weighted average robustness of the different defense methods under the attacks a_{stop→t}, with the maximum degradation of legitimate sample accuracy again limited to 1%. Among the three baselines, the best robustness under the IFGSM attack is achieved by Input Gradient Regularization at 61.2%. Our configurable defense improves on this result: we correctly classify 80.7% and 87.8% of the adversarial stop-sign examples under IFGSM when using PGD-based adversarial training and ensemble training, respectively. The results for the PGD and C&W attacks are similar: our configurable defense greatly improves the robustness over the best results achieved by the three baselines.

Figure 4 shows adversarial examples crafted under the C&W attack against the differently defended models. The first image is the original sample, the next three are adversarial examples crafted against models defended with the three baseline methods, and the final image is the adversarial example generated against the model defended by our configurable defense. The perturbation required against our method is the largest of the four, which corresponds to the conclusions in Table 3: our configurable defense improves system robustness with respect to application requirements by substantially increasing the required attack effort.

6 Conclusions

In this paper, we propose a configurable defense against adversarial example attacks that refines the loss functions used during training by adding a new term to perceive the costs of different targeted attacks. In this way, the model robustness can be configured by adjusting the attack sensitive matrix in our new losses. Moreover, we present two efficient algorithms for two different defense objectives: one increases the weighted average robustness, and the other increases the lower bound robustness. Experimental results on the CIFAR-10 and GTSRB data sets show that the proposed mechanism achieves these defense objectives significantly better than state-of-the-art techniques.

References

  • [1] N. Carlini and D. Wagner. Adversarial examples are not easily detected: Bypassing ten detection methods. AIsec, 2017.
  • [2] N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. Security and Privacy (S&P), 2017.
  • [3] P.-Y. Chen, Y. Sharma, H. Zhang, J. Yi, and C.-J. Hsieh. Ead: elastic-net attacks to deep neural networks via adversarial examples. AAAI, 2018.
  • [4] I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning. MIT Press, 2016.
  • [5] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. ICLR, 2015.
  • [6] S. Gu and L. Rigazio. Towards deep neural network architectures robust to adversarial examples. ICLR, 2015.
  • [7] W. He, B. Li, and D. Song. Decision boundary analysis of adversarial examples. ICLR.
  • [8] J. Kim and J. F. Canny. Interpretable learning for self-driving cars by visualizing causal attention. ICCV, 2017.
  • [9] A. Krizhevsky, V. Nair, and G. Hinton. The CIFAR-10 dataset. Online: http://www.cs.toronto.edu/kriz/cifar.html, 2014.
  • [10] A. Kurakin, I. Goodfellow, and S. Bengio. Adversarial machine learning at scale. ICLR, 2017.
  • [11] X. Ling, S. Ji, J. Zou, J. Wang, C. Wu, B. Li, and T. Wang. Deepsec: A uniform platform for security analysis of deep learning model. Security and Privacy (S&P), 2019.
  • [12] X. Ma, B. Li, Y. Wang, S. M. Erfani, S. Wijewickrema, M. E. Houle, G. Schoenebeck, D. Song, and J. Bailey. Characterizing adversarial subspaces using local intrinsic dimensionality. ICLR, 2018.
  • [13] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial attacks. ICLR, 2018.
  • [14] D. Meng and H. Chen. Magnet: a two-pronged defense against adversarial examples. CCS, 2017.
  • [15] S.-M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard. Universal adversarial perturbations. CVPR, 2017.
  • [16] S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard. Deepfool: a simple and accurate method to fool deep neural networks. CVPR, 2016.
  • [17] N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami. The limitations of deep learning in adversarial settings. Security and Privacy (S&P).
  • [18] N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. Swami. Distillation as a defense to adversarial perturbations against deep neural networks. Security and Privacy (S&P), 2016.
  • [19] A. S. Ross and F. Doshi-Velez. Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients. AAAI, 2018.
  • [20] D. Shen, G. Wu, and H.-I. Suk. Deep learning in medical image analysis. Annual review of biomedical engineering, 2017.
  • [21] J. Stallkamp, M. Schlipsing, J. Salmen, and C. Igel. Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition. Neural Networks, 2012.
  • [22] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. ICLR, 2014.
  • [23] F. Tramèr, A. Kurakin, N. Papernot, I. Goodfellow, D. Boneh, and P. McDaniel. Ensemble adversarial training: Attacks and defenses. ICLR, 2018.
  • [24] W. Xu, D. Evans, and Y. Qi. Feature squeezing: Detecting adversarial examples in deep neural networks. NDSS, 2018.
  • [25] Z. Yuan, Y. Lu, Z. Wang, and Y. Xue. Droid-sec: deep learning in android malware detection. SIGCOMM, 2014.