Training Ensembles to Detect Adversarial Examples

12/11/2017 ∙ by Alexander Bagnall, et al. ∙ Ohio University

We propose a new ensemble method for detecting and classifying adversarial examples generated by state-of-the-art attacks, including DeepFool and C&W. Our method works by training the members of an ensemble to have low classification error on random benign examples while simultaneously minimizing agreement on examples outside the training distribution. We evaluate on both MNIST and CIFAR-10, against oblivious, black-box, and white-box adversaries.


1 Introduction

As is now widely understood, neural networks can easily be fooled by small but cleverly chosen input perturbations, so-called adversarial examples szegedy2013intriguing. We propose a new ensemble learning method that enables detection – in oblivious, black-, and white-box settings – of adversarial examples generated by state-of-the-art attacks including DeepFool moosavi2016deepfool, the basic iterative method kurakin2016adversarial, and C&W carlini2017towards.

In our method, multiple instances of a base model are trained jointly to minimize a cross-entropy loss. An additional loss term encourages the models in the ensemble to have highly varying predictions on examples not from the source data distribution (contrast with he2017adversarial , which studies ensembles of distinct defenses). The ensemble labels an input as adversarial when there is little consensus on that input. Our method is more sensitive than an undefended model to non-adversarial (random) input perturbations but can detect 68.1% of the adversarial examples generated on CIFAR-10 by C&W carlini2017adversarial , an attack method that was recently shown to be quite effective against existing defenses.

The rest of the paper is structured as follows: in Section 2 we describe a simple, computationally inexpensive, and attack-agnostic ensemble learning method for detecting and classifying adversarial examples; in Section 3 we experimentally evaluate ensemble classifiers trained using our method against the adversarial examples generated by the Fast Gradient Sign (FGS) method goodfellow2014explaining , the Basic Iterative method kurakin2016adversarial , DeepFool moosavi2016deepfool , and C&W carlini2017towards , on both MNIST and CIFAR-10. The paper concludes with a summary of the approach and ideas for future work.

2 Ensemble Method for Classification and Adversarial Detection

We propose to train an ensemble of models that label clean examples accurately while also disagreeing on randomly perturbed examples. At test time, the ensemble will be used for both adversarial detection and classification: the label achieving the maximum agreement on a test example is output, unless the agreement is too low, in which case the example is labeled as adversarial.

Let $W_i$ be the matrix of parameters used by the softmax layer of ensemble member $i$ to compute the posterior probabilities corresponding to the $K$ classes, and let $W$ be the 3-dimensional tensor collecting the softmax parameters of the entire ensemble of $M$ members. Let $s_i(x)$ be the vector of softmax outputs computed by ensemble member $i$ on an input example $x$. If $z_i(x)$ is the representation computed by the layer preceding the softmax, then $s_i(x) = \mathrm{softmax}(W_i\, z_i(x))$. Furthermore, let $\tilde{x}$ be the randomly perturbed version of training example $x$, obtained by adding perturbation values that are sampled uniformly at random from $[-\epsilon, \epsilon]$, where $\epsilon$ is a hyperparameter that controls the $L_\infty$-norm of the perturbation vector. To achieve the dual objective of high accuracy on clean examples and disagreement on randomly perturbed examples, we define the cost function shown in Equation 1 below, with two components: $J_1$ is the standard cross-entropy error for clean example $x$ and its true label $y$, averaged over all ensemble members; $J_2$ is the mean agreement among ensemble members, where the agreement between two members is captured through the dot product of their softmax output vectors.

$$
J(x, \tilde{x}, y; W) \;=\; \underbrace{\frac{1}{M}\sum_{i=1}^{M} H\big(y,\, s_i(x)\big)}_{J_1} \;+\; \lambda\, \underbrace{\frac{2}{M(M-1)}\sum_{i=1}^{M}\sum_{j=i+1}^{M} s_i(\tilde{x})^{\top} s_j(\tilde{x})}_{J_2} \qquad (1)
$$

where $H(y, s_i(x))$ denotes the cross-entropy between the true label $y$ and the softmax output $s_i(x)$.

The hyperparameter $\lambda$ controls the trade-off between accuracy on clean examples and disagreement on perturbed examples. For brevity, we omit the weight decay regularization. During training, the cost function is minimized using minibatch stochastic gradient descent. The cross-entropy term is calculated as a sum over all clean examples in the minibatch, while the agreement term is calculated over the perturbed version of the minibatch.
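To make Equation 1 concrete, the following is a minimal NumPy sketch of the cost computed on one minibatch. The function and variable names (perturb, ensemble_cost, lam) are ours, the uniform-interval form of the training noise is an assumption, and the released TensorFlow code may organize this differently.

```python
import numpy as np

def perturb(x, eps, rng):
    """Random benign perturbation: uniform noise with L-infinity magnitude <= eps
    (assumed form of the training-time noise)."""
    return x + rng.uniform(-eps, eps, size=x.shape)

def ensemble_cost(probs_clean, probs_noisy, labels, lam):
    """Cost of Equation 1 for one minibatch.

    probs_clean : (M, B, K) softmax outputs of the M members on the clean batch
    probs_noisy : (M, B, K) softmax outputs on the perturbed batch
    labels      : (B,) integer class labels
    lam         : trade-off hyperparameter (lambda in Equation 1)
    """
    M, B, K = probs_clean.shape

    # J1: cross-entropy on clean examples, averaged over members and the batch.
    p_true = probs_clean[:, np.arange(B), labels]               # (M, B)
    j1 = -np.log(p_true + 1e-12).mean()

    # J2: mean pairwise agreement (softmax dot products) on the perturbed batch.
    dots = np.einsum('ibk,jbk->ijb', probs_noisy, probs_noisy)  # (M, M, B)
    i, j = np.triu_indices(M, k=1)                              # unique pairs i < j
    j2 = dots[i, j].mean()

    return j1 + lam * j2
```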

By minimizing both terms simultaneously, the ensemble members are encouraged to label clean examples accurately while disagreeing on randomly perturbed examples. This is similar in spirit to negative correlation learning liu1999ensemble, a classic ensemble technique in which a penalty term is used to minimize the correlation of incorrect predictions on clean data; here, in contrast, we pay no heed to whether the predictions are right or wrong. Neural networks are often robust to small random noise, which means that the members may also be penalized for agreeing on the correct label. This may seem counterintuitive, but the effect is that the models learn to react differently to perturbed data while still performing well on clean data, thus encouraging diversity at their decision boundaries.

Detection. At test time, the outputs of all ensemble members are combined using a rank voting mechanism. Each member ranks the labels by decreasing probability: the label it considers most likely receives the lowest rank, the second most likely the next rank, and so on. For each label, the ranks are summed across all members, and the smallest rank sum is used as the ensemble disagreement.

For an ensemble trained using our method, a large ensemble disagreement is indicative of an input that lies outside of the data distribution. Correspondingly, we implement a simple rank-based criterion that rejects a test example as adversarial or outlier if and only if the ensemble disagreement is above a rank threshold hyperparameter. We tune this threshold to be as low as possible while maintaining a low false positive rate on clean validation data.
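As an illustration, here is a small NumPy sketch of the rank-vote disagreement and the threshold test described above. It assumes ranks start at 0 for each member's most likely label; the names are ours rather than those of the released code.

```python
import numpy as np

def ensemble_disagreement(probs):
    """Rank-vote disagreement for a single input.

    probs : (M, K) softmax outputs of the M ensemble members.
    Each member ranks the labels by decreasing probability (rank 0 = most likely);
    the disagreement is the smallest per-label rank sum across all members.
    """
    order = np.argsort(-probs, axis=1)   # labels sorted by decreasing probability
    ranks = np.argsort(order, axis=1)    # rank of each label under each member
    return ranks.sum(axis=0).min()

def is_adversarial(probs, rank_threshold):
    """Reject the input as adversarial or outlier when disagreement exceeds the threshold."""
    return ensemble_disagreement(probs) > rank_threshold
```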

Classification. If an input example is accepted by the rank thresholding mechanism, one could simply use the label with the lowest rank sum as the overall prediction of the ensemble. However, we find in practice that the results are improved by using a distribution summation rokach2009taxonomy, which selects the classification label as $\hat{y} = \arg\max_{y} \sum_{i=1}^{M} P_i(y \mid x)$, where $P_i(y \mid x)$ is the probability assigned to label $y$ by model $i$. In general, the mechanism for combining outputs to produce the overall prediction need not be the same as that for detecting invalid inputs. This could be taken a step further, such that the entire ensemble is used only for the detection of invalid inputs, whereas inputs detected as valid are passed to a separate, perhaps more powerful classifier. We leave this idea for future investigation.
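A matching sketch of the distribution-summation rule for accepted inputs (again with our own naming):

```python
import numpy as np

def classify_accepted(probs):
    """Distribution summation: add the members' softmax vectors and take the argmax.

    probs : (M, K) softmax outputs for an input that passed the rank threshold.
    """
    return int(np.argmax(probs.sum(axis=0)))
```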

3 Experimental Results

This section answers the following questions:


  1. How well does the ensemble method of Section 2 defend against known attacks? We analyze both classification accuracy (how well the defense classifies adversarial examples) and detection rate (how often the defense detects adversarial examples). Detection rate is a measure of the effectiveness of the defense when used solely as a detector, assuming that the false-positive rate is low. Classification accuracy measures how well our ensembles perform when used solely to classify inputs. We also measure classification accuracy on accepted inputs (those not detected as adversarial), a measure of the effectiveness of detection followed by classification.

  2. To what degree is the defense susceptible to random noise? The primary metric here is the rate of false-positive detection on benign noise – how often the Section 2 detector classifies random noise as adversarial (as this rate increases, the detector becomes less useful in deployments). A sketch of how these metrics can be computed follows this list.
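The following is a small NumPy sketch, under our own naming, of how per-example outcomes can be turned into the metrics above; the false-positive rate of question 2 is simply the detection rate measured on benign inputs.

```python
import numpy as np

def evaluation_metrics(pred_labels, detected, true_labels):
    """Metrics used in Section 3 (our formulation of the textual definitions).

    pred_labels : (N,) ensemble predictions on the evaluated inputs
    detected    : (N,) boolean, True when an input is flagged as adversarial
    true_labels : (N,) ground-truth labels of the underlying clean inputs
    """
    correct = pred_labels == true_labels
    accuracy_all = correct.mean()              # classification accuracy on all inputs
    accepted = ~detected
    accuracy_accepted = (correct[accepted].mean()
                         if accepted.any() else float('nan'))
    # Detection rate on adversarial inputs; on benign (clean or noisy) inputs
    # this same quantity is the false-positive rate.
    detection_rate = detected.mean()
    return accuracy_all, accuracy_accepted, detection_rate
```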

Threat models. We measure the effectiveness of attacks in black- and white-box modes, as well as in oblivious mode (where the attacker is unaware of the detection defense) in the case of C&W. White-box attacks have access to the model, its parameters, and its training data. Black-box attacks have access to the model and its training data, but not its parameters. By giving the black-box attacker access to the training data, we expect a stronger attack than under the black-box model of carlini2017adversarial, in which the attacker and the model are trained on two distinct datasets of comparable size and quality. Performing additional experiments against weaker black-box attackers without access to the target model's training data is left for future work.

Attacks.

The fast gradient sign method (FGS) generates adversarial examples by taking a single linear step from the original image in the direction of the gradient of the objective function. The basic iterative method performs FGS repeatedly in small steps to generate more precise examples, subject to a box constraint on the perturbation magnitude. DeepFool attempts to find a minimal adversarial perturbation by linearizing the discriminant function around the current perturbed example and iterating until the current perturbation is sufficient to change the label. The C&W attack generates targeted adversarial examples using an objective that maximizes the margin between the target class logit and the logits of the other classes, while simultaneously minimizing the $L_2$ norm of the perturbation.
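For reference, a minimal NumPy sketch of the two gradient-sign attacks, written against a generic grad_fn(x) that stands in for the gradient of the attacked objective with respect to the input, and assuming inputs scaled to [0, 1]:

```python
import numpy as np

def fgs(x, grad_fn, eps):
    """Fast gradient sign: a single step of size eps along the sign of the loss gradient."""
    return np.clip(x + eps * np.sign(grad_fn(x)), 0.0, 1.0)

def basic_iterative(x, grad_fn, eps, step_size, n_iter):
    """Basic iterative method: repeated small gradient-sign steps, with the total
    perturbation kept inside an L-infinity ball of radius eps around the input."""
    x_adv = x.copy()
    for _ in range(n_iter):
        x_adv = x_adv + step_size * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)   # box constraint on the perturbation
        x_adv = np.clip(x_adv, 0.0, 1.0)           # keep pixels in the valid range
    return x_adv
```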

Results from targeting $J_1$. We first evaluate the defense against attacks that target only the cross-entropy term $J_1$ of the objective function. In the case of DeepFool and C&W, the discriminant function we attack is the sum of the unnormalized logit outputs of the ensemble members. As a by-product of maximizing the summed logits for a wrong label, the examples inferred by C&W are expected to also obtain high ensemble agreement on that label. The parameter controlling the maximum margin in C&W is chosen in each instance so as to yield comparable mean distortion across attacks.
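The summed-logit discriminant handed to DeepFool and C&W can be sketched as a one-liner (our naming):

```python
import numpy as np

def summed_logits(logits_members):
    """Collapse the ensemble into a single discriminant by summing the members'
    unnormalized logits; logits_members has shape (M, K)."""
    return logits_members.sum(axis=0)
```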

                        Clean                  Noise                  FGS
Attack                  Acc.       Det. Dist.  Acc.       Det. Dist.  Acc.       Det. Dist.
MNIST     White-box     98.5/98.8   0.9  0     97.3/99.7  30.4  1.6   16.1/0.0   99.8  2.8
          Black-box                                                   17.5/0.0   98.9  2.8
CIFAR-10  White-box     81.6/83.9   4.6  0     76.9/83.9  39.2  0.6   10.8/13.6  99.8  1.7
          Black-box                                                   17.5/39.5  99.6  1.7
Table 1: Accuracy, Detection rate, and mean Distortion for Clean, Noise, and Fast Gradient Sign.
                        Basic iter.            DeepFool               C&W
Attack                  Acc.       Det. Dist.  Acc.       Det. Dist.  Acc.       Det. Dist.
MNIST     White-box     13.4/0.0   99.7  2.4   76.0/75.0  45.0  0.9   31.2/95.7  97.7  2.5
          Black-box     22.5/0.1   98.9  2.4   96.7/97.6   8.2  0.8   21.3/33.0  76.0  2.4
          Oblivious                                                   81.6/82.4   3.4  2.2
CIFAR-10  White-box      8.0/2.0   48.8  1.0   37.5/38.6  42.6  0.03   7.2/6.3   68.1  2.3
          Black-box     16.9/6.8   49.2  1.1   57.5/68.2  36.2  0.04   9.5/6.9   63.6  2.4
          Oblivious                                                   48.4/50.0   6.4  1.3
Table 2: Accuracy, Detection rate, and mean Distortion for Basic iterative, DeepFool, and C&W.

We fine-tune the noise hyperparameter $\epsilon$ of the ensemble method on validation sets put aside for MNIST and CIFAR-10, selecting a separate value for each dataset. The selection criterion is based on the false positive rate and on detection performance against a white-box FGS attack. Both ensembles comprise the same number of members; the rank threshold is tuned separately for MNIST and for CIFAR-10. For MNIST, each ensemble member is a fully connected network with a single hidden layer of 128 neurons. For CIFAR-10, we use a standard convolutional network drawn from TensorFlow tf-cifar10.

Tables 1 and 2 show the performance of the ensembles on unperturbed (Clean), noisy (Noise), and adversarial inputs (FGS, Basic iterative, DeepFool, C&W), in terms of classification accuracy (Acc) on all/accepted adversarial examples, detection rate (Det), and mean distortion (Dist) of the adversarial examples. For the Noise column, noisy examples are generated by applying benign random noise to the clean data set, using the same algorithm that was used during training; the noise magnitude is set separately for MNIST and CIFAR-10, as are the perturbation bounds for FGS and Basic iterative. Higher noise magnitudes cause significant misclassification in the CIFAR-10 case, so such noise is not benign.

We see reasonable classification accuracy and false-positive rate on clean data for both ensembles. Sensitivity to random noise is increased, but classification accuracy on the accepted noisy examples is significantly higher than on the noisy examples overall. FGS and the basic iterative method are completely blocked, with the exception of the basic iterative attack on CIFAR-10, which is moderately successful. The most successful attack is DeepFool, which causes significant white-box classification error with only 42.6% detection and very small mean distortion. However, we find that the same attack reduces the classification accuracy of an undefended MNIST classifier to 1.32%, and of an undefended CIFAR-10 classifier to 12.02%. The oblivious C&W attack might also be considered successful, as it achieves similar classification error with a very low detection rate, but its mean distortion is much higher. There appears to be a trade-off between classification accuracy and detection rate; further analysis is required to determine the cause of this trend.

The C&W attack was recently shown carlini2017adversarial to be quite effective at generating adversarial examples, obtaining success rates close to 100% against ten detection methods. By increasing the margin parameter of the C&W attack, we were able to perform an oblivious attack against our ensemble method that reduced classification accuracy to 9.6% with only a 7.0% detection rate. However, the mean distortion was 3.3, which we consider to be very high. This result runs counter to our intuition about the detection method, so further analysis is required to fully understand its consequences.

Results from targeting $J_1$ and $J_2$. The FGS and Basic iterative attacks can be used to target $J_1$ and $J_2$ simultaneously, in order to both fool the ensemble and bypass the defense. Tables 3 and 4 show the results of white- and black-box attacks that target both terms of the ensemble objective function simultaneously. Maximizing the first term causes misclassification, while maximizing the second causes high agreement. The expectation in this case is that most members of the ensemble agree on an incorrect label, thereby successfully causing misclassification while avoiding detection. The scaling parameter on the agreement term can be tuned to trade off between classification accuracy and detection.
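A sketch of the attacker's combined objective, with alpha as our name for the scaling parameter on the agreement term; the attacker ascends the gradient of this quantity with respect to the input:

```python
import numpy as np

def combined_attack_objective(probs_members, true_label, alpha):
    """Attacker's objective when targeting both terms of Equation 1.

    probs_members : (M, K) ensemble softmax outputs on the candidate adversarial input.
    Raising the cross-entropy of the true label pushes toward misclassification,
    while raising the pairwise agreement pushes the members toward a common
    (incorrect) label so that the rank-vote detector is not triggered.
    """
    M, K = probs_members.shape
    cross_entropy = -np.log(probs_members[:, true_label] + 1e-12).mean()
    i, j = np.triu_indices(M, k=1)
    agreement = (probs_members @ probs_members.T)[i, j].mean()
    return cross_entropy + alpha * agreement
```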

We choose values of the scaling parameter that illustrate its effect on the outcome of the attacks. The terms $J_1$ and $J_2$ are weighted equally when the scaling parameter equals one; as it decreases, less weight is given to $J_2$. Again, we observe a trade-off between classification accuracy on adversarial examples and detection. FGS is largely ineffective, but the basic iterative method achieves greater success when the scaling parameter is chosen properly, resulting in 26.4% accuracy and only 27.1% detection in the white-box setting.

Attack                  Acc.       Det. Dist.  Acc.       Det. Dist.  Acc.       Det. Dist.
FGS          White-box  89.9/98.5  94.2  2.8   88.2/98.4  94.5  2.8   19.1/0.0   99.8  2.8
             Black-box  97.0/98.8  70.8  2.8   95.2/98.9  71.2  2.8   18.3/1.7   98.8  2.8
Basic iter.  White-box  96.9/97.5  56.6  1.9   45.0/91.3  89.6  2.0   15.2/3.9   99.5  2.4
             Black-box  96.9/98.5   6.9  1.4   93.2/98.7  12.5  1.4   24.2/12.4  98.6  2.4
Table 3: Accuracy, Detection rate, and mean Distortion for MNIST attacks on $J_1$ and $J_2$, for three settings of the scaling parameter.
Attack                  Acc.       Det. Dist.  Acc.       Det. Dist.  Acc.       Det. Dist.
FGS          White-box  35.9/80.6  99.4  1.7   20.6/43.5  99.5  1.7   11.7/10.8  99.6  1.7
             Black-box  33.2/74.5  99.1  1.7   20.1/46.4  99.4  1.7   18.8/37.5  99.5  1.7
Basic iter.  White-box  57.0/59.4   7.9  1.0   26.4/32.2  27.1  1.0    8.9/3.3   47.2  1.0
             Black-box  53.2/56.3   9.3  1.1   22.2/16.7  43.2  1.1   17.4/7.7   48.2  1.1
Table 4: Accuracy, Detection rate, and mean Distortion for CIFAR-10 attacks on $J_1$ and $J_2$, for three settings of the scaling parameter.

4 Conclusion and Future Work

We propose a new ensemble learning method for detecting adversarial examples that is both attack-agnostic and computationally inexpensive (the source code is available at https://github.com/bagnalla/ensemble_detect_adv). We evaluate its effectiveness against four known attacks (FGS, basic iterative, DeepFool, and C&W) in oblivious, black-box, and white-box settings. In future work, we plan to incorporate adversarial re-training into our ensemble method, and to experiment with separate models for detection and classification.

References