As is now widely understood, neural networks can easily be fooled by small but cleverly chosen input perturbations, so-called adversarial examples [szegedy2013intriguing]. We propose a new ensemble learning method that enables detection – in oblivious, black-, and white-box settings – of adversarial examples generated by state-of-the-art attacks including DeepFool [moosavi2016deepfool], the Basic iterative method [kurakin2016adversarial], and C&W [carlini2017towards].
In our method, multiple instances of a base model are trained jointly to minimize a cross-entropy loss. An additional loss term encourages the models in the ensemble to have highly varying predictions on examples not from the source data distribution (contrast with [he2017adversarial], which studies ensembles of distinct defenses). The ensemble labels an input as adversarial when there is little consensus on that input. Our method is more sensitive than an undefended model to non-adversarial (random) input perturbations but can detect 68.1% of the adversarial examples generated on CIFAR-10 by C&W [carlini2017adversarial], an attack method that was recently shown to be quite effective against existing defenses.
The rest of the paper is structured as follows: in Section 2 we describe a simple, computationally inexpensive, and attack-agnostic ensemble learning method for detecting and classifying adversarial examples; in Section 3 we experimentally evaluate ensemble classifiers trained using our method against the adversarial examples generated by the Fast Gradient Sign (FGS) method [goodfellow2014explaining], the Basic iterative method [kurakin2016adversarial], DeepFool [moosavi2016deepfool], and C&W [carlini2017towards], on both MNIST and CIFAR-10. The paper concludes with a summary of the approach and ideas for future work.
2 Ensemble Method for Classification and Adversarial Detection
We propose to train an ensemble of models that label clean examples accurately while also disagreeing on randomly perturbed examples. At test time, the ensemble will be used for both adversarial detection and classification: the label achieving the maximum agreement on a test example is output, unless the agreement is too low, in which case the example is labeled as adversarial.
Let $N$ be the number of models in the ensemble, let $c$ be the number of classes, and let $W$ be the 3-dimensional tensor of the softmax parameters for the entire ensemble. Let $s_i(x)$ be the vector of softmax outputs computed by ensemble member $i$ on an input example $x$. If $h_i(x)$ is the representation computed by the layer preceding the softmax, then $s_i(x) = \mathrm{softmax}(W_i h_i(x))$. Furthermore, let $\tilde{x}$ be the randomly perturbed version of training example $x$, obtained by adding perturbation values that are sampled uniformly at random from $[-\epsilon, \epsilon]$, where $\epsilon$ is a hyperparameter that controls the $\ell_\infty$-norm of the perturbation vector. To achieve the dual objective of high accuracy on clean examples and disagreement on randomly perturbed examples, we define the cost function shown in Equation 1 below, with two components: the first is the standard cross-entropy error $H(s_i(x), y)$ for clean example $x$ and its true label $y$, averaged over all ensemble members; the second is the mean agreement among ensemble members, where agreement between two members is captured through the dot product of their softmax output vectors:

$$J(W) = \frac{1}{N} \sum_{i=1}^{N} H\big(s_i(x), y\big) \;+\; \lambda \cdot \frac{2}{N(N-1)} \sum_{i < j} s_i(\tilde{x})^{\top} s_j(\tilde{x}) \qquad (1)$$

The hyperparameter $\lambda$ controls the trade-off between accuracy on clean examples and disagreement on perturbed examples. For brevity, we omit the weight decay regularization. During training, the cost function is minimized using minibatch stochastic gradient descent. The cross-entropy term is calculated as a sum over all clean examples in the minibatch, while the agreement term is calculated over the perturbed version of the minibatch.
By minimizing both terms simultaneously, the ensemble members are encouraged to label clean examples accurately while disagreeing on randomly perturbed examples. This is similar in spirit to a classic technique in ensemble learning known as negative correlation learning [liu1999ensemble], in which a penalty term is used to minimize correlation of incorrect predictions on clean data, except that here we pay no heed to whether the predictions are right or wrong. Neural networks are often robust to small random noise, which means that the members may also be penalized for agreeing on the correct label. This may seem counterintuitive, but the effect is that the models learn to react differently to perturbed data while still performing well on clean data, thus encouraging diversity at their decision boundaries.
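To make the two-term cost concrete, the following is a minimal NumPy sketch for a single training example; the function name, the 1e-12 numerical floor, and the default weight `lam` are illustrative assumptions rather than values from our experiments:

```python
import numpy as np

def ensemble_loss(probs_clean, probs_noisy, y, lam=1.0):
    """Sketch of the two-term cost: mean cross-entropy on a clean input
    plus lam times the mean pairwise agreement (dot products of softmax
    outputs) on the randomly perturbed input.

    probs_clean, probs_noisy: arrays of shape (N_members, n_classes),
    softmax outputs of each member for one clean / perturbed example.
    y: integer true label of the clean example.
    """
    N = probs_clean.shape[0]
    # Cross-entropy of the true label, averaged over ensemble members.
    xent = -np.mean(np.log(probs_clean[:, y] + 1e-12))
    # Pairwise dot products of the members' softmax outputs.
    dots = probs_noisy @ probs_noisy.T            # (N, N)
    pair_sum = (dots.sum() - np.trace(dots)) / 2  # sum over pairs i < j
    agree = pair_sum / (N * (N - 1) / 2)          # mean over pairs
    return xent + lam * agree
```

Minimizing this value rewards confident correct predictions on the clean example while pushing the members' softmax outputs on the perturbed example toward orthogonality.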
Detection. At test time, the outputs of all ensemble members are combined using a rank voting mechanism. Each member assigns rank $0$ to the label it considers the most likely, rank $1$ to the second most likely, and so on. For each label, the ranks are summed across all members, and the smallest rank sum is used as the ensemble disagreement.
For an ensemble trained using our method, a large ensemble disagreement is indicative of an input that lies outside of the data distribution. Correspondingly, we implement a simple rank-based criterion that rejects a test example as adversarial or outlier if and only if the ensemble disagreement is above a rank threshold hyperparameter $\tau$. We tune $\tau$ to be as low as possible while maintaining a low false positive rate on clean validation data.
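A minimal sketch of the rank-voting detector, assuming each member assigns rank 0 to its top label (the names `disagreement`, `is_adversarial`, and the threshold `tau` are placeholders):

```python
import numpy as np

def disagreement(probs):
    """probs: (N_members, n_classes) softmax outputs for one example.
    Each member ranks the labels (0 = most likely); ranks are summed
    per label and the smallest rank sum is the ensemble disagreement,
    so perfect consensus on one label yields 0."""
    # argsort of the descending argsort gives each label's rank per member
    ranks = np.argsort(np.argsort(-probs, axis=1), axis=1)
    return ranks.sum(axis=0).min()

def is_adversarial(probs, tau):
    """Reject the input when disagreement exceeds the rank threshold."""
    return disagreement(probs) > tau
```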
Classification. If an input data example is accepted by the rank thresholding mechanism, one could simply use the label with the lowest rank sum as the overall prediction of the ensemble. However, we find in practice that the results are improved by using a distribution summation [rokach2009taxonomy] that selects the classification label as $\hat{y} = \arg\max_{y} \sum_{i=1}^{N} p_i(y \mid x)$, where $p_i(y \mid x)$ is the probability of label $y$ in model $i$. In general, the mechanism for combining outputs to produce the overall prediction need not be the same as that for detecting invalid inputs. This could be taken a step further, such that the entire ensemble is used only for the detection of invalid inputs, whereas inputs detected as valid are passed to a separate, perhaps more powerful classifier. We leave this idea for future investigation.
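Distribution summation then reduces to a one-line reduction over member outputs (a sketch; the function name is ours):

```python
import numpy as np

def classify(probs):
    """Distribution summation: sum each label's softmax probability
    across ensemble members and return the label with the largest sum.
    probs: (N_members, n_classes) softmax outputs for one example."""
    return int(probs.sum(axis=0).argmax())
```

Note that this combiner can override a plurality of top-1 votes when a minority of members is much more confident, which is the behavior we found helpful in practice.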
3 Experimental Results
This section answers the following questions:
How well does the ensemble method of Section 2 defend against known attacks? We analyze both classification accuracy (how well the defense classifies adversarial examples) and detection rate (how often the defense detects adversarial examples). Detection rate is a measure of the effectiveness of the defense when used solely as a detector, assuming that the false-positive rate is low. Classification accuracy measures how well our ensembles perform when used solely to classify inputs. We also measure classification accuracy on accepted inputs (those not detected as adversarial), a measure of the effectiveness of detection followed by classification.
To what degree is the defense susceptible to random noise? The primary metric here is rate of false-positive detection on benign noise – how often the Section 2 detector classifies random noise as adversarial (as this rate increases, the detector becomes less useful in deployments).
Threat models. We measure the effectiveness of attacks in black- and white-box modes, as well as the oblivious mode in the case of C&W. White-box attacks have access to the model, its parameters, and its training data. Black-box attacks have access to the model and its training data, but not its parameters. By giving the black-box attacker access to the training data, we expect a stronger attack than the black-box model of [carlini2017adversarial], in which the attacker and model are trained on two distinct datasets of comparable size and quality. Performing additional experiments against weaker black-box attackers without access to the target model’s training data is left for future work.
The Fast Gradient Sign method (FGS) generates adversarial examples by taking a single linear step from the original image in the direction of the gradient of the objective function. The Basic iterative method iteratively performs FGS in small steps to generate more precise examples, subject to a box constraint on perturbation magnitude. DeepFool attempts to find a minimal adversarial perturbation by linearizing the discriminant function around the current perturbed example and iterating until the current perturbation is sufficient to change the label. The C&W attack generates targeted adversarial examples using an objective that maximizes the margin between the target class logit and the logits of the other classes, while simultaneously minimizing the perturbation’s $\ell_2$ norm.
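For concreteness, the two gradient-based attacks can be sketched as follows, assuming pixel values in [0, 1] and a caller-supplied gradient function; the names and step sizes are illustrative, not our experimental settings:

```python
import numpy as np

def fgs(x, grad_loss, eps):
    """Fast Gradient Sign: one step of size eps in the direction of the
    sign of the loss gradient, clipped to the valid pixel range."""
    return np.clip(x + eps * np.sign(grad_loss), 0.0, 1.0)

def basic_iterative(x, grad_fn, eps, alpha, steps):
    """Basic iterative method: repeated small FGS steps of size alpha,
    with the accumulated perturbation projected back into the
    eps-ball around x after each step (the box constraint)."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = np.clip(x_adv + alpha * np.sign(grad_fn(x_adv)), 0.0, 1.0)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```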
Results from targeting the cross-entropy term. We first evaluate the defense against attacks which target only the cross-entropy term of the objective function. In the case of DeepFool and C&W, as discriminant function we used the sum of the unnormalized logit outputs of the ensemble members. As a by-product of maximizing the sum of logits for a wrong label, the examples inferred by C&W are expected to also obtain high ensemble agreement on that label. The confidence parameter $\kappa$ controlling the margin in C&W is chosen in each instance to result in comparable mean distortion across attacks.
We fine-tune the noise hyperparameter $\epsilon$ of the ensemble method on validation sets put aside for both MNIST and CIFAR-10, yielding a different value of $\epsilon$ for each dataset. The selection criterion is based on false positive rate and detection performance against a white-box FGS attack. Both ensembles are comprised of the same number of members $N$, with the rank threshold $\tau$ tuned separately for MNIST and for the CIFAR-10 base model of [tf-cifar10].
Tables 1 and 2 show the performance of the ensembles on unperturbed (Clean), noisy (Noise), and adversarial inputs (FGS, Basic iterative, DeepFool, C&W), in terms of classification accuracy (Acc) on all/accepted adversarial examples, detection rate (Det), and the mean distortion (Dist) of the adversarial examples. For the Noise column, noisy examples are generated by applying benign random noise to the clean data set. For FGS and Basic iterative, separate values of $\epsilon$ were used for MNIST and CIFAR-10, respectively. Random noise examples were generated using the same algorithm that was used during training. Higher values of $\epsilon$ cause significant misclassification in the CIFAR-10 case, so such noise is not benign.
We see reasonable classification accuracy and false-positive rate on clean data for both ensembles. Sensitivity to random noise is increased, but classification accuracy on accepted noisy examples is significantly higher than on noisy examples overall. FGS and Basic iterative are completely blocked, with the exception of the Basic iterative attack on CIFAR-10, which is moderately successful. The most successful attack is DeepFool, causing significant white-box classification error with only 42.6% detection and very small mean distortion. However, we find that the same attack reduces the classification accuracy of an undefended MNIST classifier to 1.32%, and of an undefended CIFAR-10 classifier to 12.02%. The oblivious C&W attack might also be considered successful, as it achieves similar classification error with a very low detection rate, but the mean distortion is much higher. There appears to be a trade-off between classification accuracy and detection rate. Further analysis is required to determine the cause of this trend.
The C&W attack was recently shown [carlini2017adversarial] to be quite effective at generating adversarial examples, obtaining success rates close to 100% against ten detection methods. By setting the confidence parameter $\kappa$ to a high value in the C&W attack, we were able to perform an oblivious attack against our ensemble method which reduced classification accuracy to 9.6% with only a 7.0% detection rate. However, the mean distortion was 3.3, which we consider to be very high. This result runs counter to our intuition about the detection method, so further analysis is required to fully understand its consequences.
Results from targeting both terms. The FGS and Basic iterative attacks can be used to target the cross-entropy and agreement terms simultaneously, in order to both fool the ensemble and bypass the defense. Tables 3 and 4 show results of white- and black-box attacks that target both terms of the ensemble objective function simultaneously. Maximizing the first term causes misclassification, while maximizing the second causes high agreement. The expectation in this case is that most members of the ensemble agree on an incorrect label, thereby successfully causing misclassification while avoiding detection. The scaling parameter $\alpha$ can be tuned to trade off between classification accuracy and detection.
We choose values for $\alpha$ that illustrate its effect on the outcome of the attacks. The two terms are weighted equally when $\alpha = 1$; as $\alpha$ decreases, less weight is given to the agreement term. Again, we observe a trade-off between classification accuracy on adversarial examples and detection. FGS is largely ineffective, but the Basic iterative method achieves greater success when $\alpha$ is chosen properly, resulting in 26.4% accuracy and only 27.1% detection in the white-box setting.
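As a sketch, the scalar an attacker would maximize in this setting can be written as follows (the symbol `alpha` for the scaling parameter and the function name are our notation; a real attack would differentiate this value with respect to the input):

```python
import numpy as np

def attack_objective(probs, y, alpha):
    """Combined white-box objective: cross-entropy on the true label
    (maximized to cause misclassification) plus alpha times the mean
    pairwise agreement of the members' softmax outputs (maximized so
    the ensemble agrees on the wrong label and evades detection).
    probs: (N, n_classes) member softmax outputs for the candidate input."""
    N = probs.shape[0]
    xent = -np.mean(np.log(probs[:, y] + 1e-12))
    dots = probs @ probs.T
    agree = (dots.sum() - np.trace(dots)) / (N * (N - 1))  # mean over pairs
    return xent + alpha * agree
```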
4 Conclusion and Future Work
We propose a new ensemble learning method for detecting adversarial examples that is both attack-agnostic and computationally inexpensive (the source code is available at https://github.com/bagnalla/ensemble_detect_adv). We evaluate its effectiveness against four known attacks (FGS, Basic iterative, DeepFool, and C&W) in oblivious, black-box, and white-box settings. In future work, we plan to incorporate adversarial re-training into our ensemble method, and to experiment with separate models for detection and classification.
-  Nicholas Carlini and David Wagner. Adversarial examples are not easily detected: Bypassing ten detection methods. arXiv preprint arXiv:1705.07263, 2017.
-  Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In Security and Privacy (SP), 2017 IEEE Symposium on, pages 39–57. IEEE, 2017.
-  TensorFlow Tutorials: Convolutional Neural Networks. https://www.tensorflow.org/tutorials/deep_cnn#cifar-10_model/, 2017. [Online; accessed November 4, 2017].
-  Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
-  Warren He, James Wei, Xinyun Chen, Nicholas Carlini, and Dawn Song. Adversarial example defenses: Ensembles of weak defenses are not strong. arXiv preprint arXiv:1706.04701, 2017.
-  Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533, 2016.
-  Yong Liu and Xin Yao. Ensemble learning via negative correlation. Neural networks, 12(10):1399–1404, 1999.
-  Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. DeepFool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2574–2582, 2016.
-  Lior Rokach. Taxonomy for characterizing ensemble methods in classification tasks: A review and annotated bibliography. Computational Statistics & Data Analysis, 53(12):4046–4072, 2009.
-  Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.