The Odds are Odd: A Statistical Test for Detecting Adversarial Examples

02/13/2019 ∙ by Kevin Roth, et al.

We investigate conditions under which test statistics exist that can reliably detect examples that have been adversarially manipulated in a white-box attack. These statistics can be easily computed and calibrated by randomly corrupting inputs. They exploit certain anomalies that adversarial attacks introduce, in particular if they follow the paradigm of choosing perturbations optimally under p-norm constraints. Access to the log-odds is the only requirement to defend models. We justify our approach empirically, but also provide conditions under which detectability via the suggested test statistics is guaranteed to be effective. In our experiments, we show that it is even possible to correct test time predictions for adversarial attacks with high accuracy.


1 Introduction

Deep neural networks have been used with great success for perceptual tasks such as image classification (Simonyan & Zisserman, 2014; LeCun et al., 2015) or speech recognition (Hinton et al., 2012). While they are known to be robust to random noise, it has been shown that the accuracy of deep nets can dramatically deteriorate in the face of so-called adversarial examples (Biggio et al., 2013; Szegedy et al., 2013; Goodfellow et al., 2014), i.e. small perturbations of the input signal, often imperceptible to humans, that are sufficient to induce large changes in the model output.

A plethora of methods have been proposed to find adversarial examples (Szegedy et al., 2013; Goodfellow et al., 2014; Kurakin et al., 2016; Moosavi Dezfooli et al., 2016; Sabour et al., 2015). These often transfer across different architectures, enabling black-box attacks even for inaccessible models (Papernot et al., 2016; Kilcher & Hofmann, 2017; Tramèr et al., 2017). This apparent vulnerability is worrisome as deep nets start to proliferate in the real-world, including in safety-critical deployments.

The most direct and popular strategy of robustification is to use adversarial examples as data augmentation during training (Goodfellow et al., 2014; Kurakin et al., 2016; Madry et al., 2017). This improves robustness against specific attacks, yet does not address vulnerability to more cleverly designed counter-attacks (Athalye et al., 2018; Carlini & Wagner, 2017a). This raises the question of whether one can protect models with regard to a wider range of possible adversarial perturbations.

A different strategy of defense is to detect whether or not the input has been perturbed, by detecting characteristic regularities either in the adversarial perturbations themselves or in the network activations they induce (Grosse et al., 2017; Feinman et al., 2017; Xu et al., 2017; Metzen et al., 2017; Carlini & Wagner, 2017a). In this spirit, we propose a method that measures how feature representations and log-odds change under noise: If the input is adversarially perturbed, the noise-induced feature variation tends to have a characteristic direction, whereas it tends not to have any specific direction if the input is natural. We evaluate our method against strong iterative attacks and show that even an adversary aware of the defense cannot evade our detector.

In summary, we make the following contributions:

  • We propose a statistical test for the detection and classification of adversarial examples.

  • We establish a link between adversarial perturbations and inverse problems, providing valuable insights into the feature space kinematics of adversarial attacks.

  • We conduct extensive performance evaluations as well as a range of experiments to shed light on aspects of adversarial perturbations that make them detectable.

2 Related Work

Iterative adversarial attacks. Adversarial perturbations are small specifically crafted perturbations of the input, typically imperceptible to humans, that are sufficient to induce large changes in the model output. Let $F$ be a probabilistic classifier with logits $f_z$ and let $y^*(x) = \arg\max_z f_z(x)$. The goal of the adversary is to find an $\ell_p$-norm bounded perturbation $\Delta x$ with $\|\Delta x\|_p \le \epsilon$, where $\epsilon$ controls the attack strength, such that the perturbed sample $x + \Delta x$ gets misclassified by the classifier, i.e. $y^*(x + \Delta x) \neq y^*(x)$. Two of the most iconic iterative adversarial attacks are:

Projected Gradient Descent (Madry et al., 2017) aka Basic Iterative Method (Kurakin et al., 2016):

$$x^{0} = x$$
$$x^{k+1} = \Pi_{B_\epsilon(x)}\big(x^{k} - \alpha\,\mathrm{sign}(\nabla_x \mathcal{L}(x^{k}, y_t))\big)$$
$$x^{k+1} = \Pi_{B_\epsilon(x)}\big(x^{k} - \alpha\,\nabla_x \mathcal{L}(x^{k}, y_t)/\|\nabla_x \mathcal{L}(x^{k}, y_t)\|_2\big) \qquad (1)$$

where the second and third line refer to the $\ell_\infty$- and $\ell_2$-norm variants respectively, $\Pi_{B_\epsilon(x)}$ is the projection operator onto the set $B_\epsilon(x) = \{x' : \|x' - x\|_p \le \epsilon\}$, $\alpha$ is a small step-size, $y_t$ is the target label and $\mathcal{L}$ is a suitable loss function. For untargeted attacks $y_t = y^*(x)$ and the sign in front of $\alpha$ is flipped, so as to ascend the loss function.
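For illustration, here is a minimal PyTorch sketch of the untargeted $\ell_\infty$ PGD update in Equation 1; `model`, `x`, `y` and the hyperparameter values are placeholders rather than the exact settings used in the paper.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8/255, alpha=2/255, steps=20):
    """Untargeted L-inf PGD sketch: ascend the cross-entropy loss of the true label y,
    projecting back onto the eps-ball around x after every step."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()        # ascend the loss (untargeted)
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project onto B_eps(x)
            x_adv = x_adv.clamp(0.0, 1.0)              # stay inside the data domain
    return x_adv.detach()
```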

Carlini-Wagner attack (Carlini & Wagner, 2017b):

$$\min_{\Delta x}\ \|\Delta x\|_2^2 + c \cdot \xi(x + \Delta x) \qquad \text{s.t.}\ \ x + \Delta x \in \mathcal{D} \qquad (2)$$

where $\xi$ is an objective function, defined such that $\xi(x + \Delta x) \le 0$ if and only if $y^*(x + \Delta x) = y_t$, e.g. $\xi(x') = \max\big(\max_{z \neq y_t} f_z(x') - f_{y_t}(x'),\, 0\big)$ (see Section V.A in (Carlini & Wagner, 2017b) for a list of objective functions with this property) and $\mathcal{D}$ denotes the data domain, e.g. $[0,1]^d$. The constant $c$ trades off perturbation magnitude (proximity) with perturbation strength (attack success rate) and is chosen via binary search.
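A compact sketch of this $\ell_2$ objective is shown below, assuming a fixed trade-off constant `c` (no binary search) and a simple clamp for the box constraint; function and variable names are illustrative.

```python
import torch

def cw_l2(model, x, y_target, c=1.0, lr=0.01, steps=200):
    """Minimize ||dx||_2^2 + c * xi(x + dx), where xi(x') = max(max_{z != t} f_z(x') - f_t(x'), 0)
    is <= 0 exactly when the target class t wins. Simplified: fixed c, box constraint via clamping."""
    dx = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([dx], lr=lr)
    for _ in range(steps):
        x_adv = (x + dx).clamp(0.0, 1.0)
        logits = model(x_adv)
        target = logits.gather(1, y_target.view(-1, 1)).squeeze(1)        # f_t(x')
        others = logits.scatter(1, y_target.view(-1, 1), float('-inf'))   # mask out the target class
        xi = torch.clamp(others.max(dim=1).values - target, min=0.0)      # objective term
        loss = (dx.view(dx.size(0), -1).pow(2).sum(dim=1) + c * xi).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + dx).detach().clamp(0.0, 1.0)
```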

Detection. The approaches most related to our work are those that defend a machine learning model against adversarial attacks by detecting whether or not the input has been perturbed, either by detecting characteristic regularities in the adversarial perturbations themselves or in the network activations they induce (Grosse et al., 2017; Feinman et al., 2017; Xu et al., 2017; Metzen et al., 2017; Song et al., 2017; Li & Li, 2017; Lu et al., 2017; Carlini & Wagner, 2017a).

Notably, Grosse et al. (2017) argue that adversarial examples are not drawn from the same distribution as the natural data and can thus be detected using statistical tests. Metzen et al. (2017) propose to augment the deep classifier net with a binary “detector” subnetwork that gets input from intermediate feature representations and is trained to discriminate between natural and adversarial network activations. Feinman et al. (2017) suggest detecting adversarial examples either via kernel density estimates in the feature space of the last hidden layer or via dropout uncertainty estimates of the classifier’s predictions, which are meant to detect if inputs lie in low-confidence regions of the ambient space.

Xu et al. (2017) propose to detect adversarial examples by comparing the model’s predictions on a given input with the model’s predictions on a squeezed version of the input, such that if the difference between the two exceeds a certain threshold, the input is considered to be adversarial.

Adversarial Examples. It is still an open question whether adversarial examples exist because of intrinsic flaws of the model or learning objective or whether they are solely the consequence of non-zero generalization error and high-dimensional statistics (Gilmer et al., 2018; Schmidt et al., 2018; Fawzi et al., 2018), with adversarially robust generalization simply requiring more data than classical generalization (Schmidt et al., 2018).

We note that our method works regardless of the origin of adversarial examples. Even if adversarial examples are not the result of intrinsic flaws, they still induce characteristic regularities in the feature representations of a neural net, e.g. under noise, and can thus be detected.

3 Identifying and Correcting Manipulations

3.1 Perturbed Log-Odds

We work in a multiclass setting, where pairs of inputs and class labels $(x, y)$ are generated from a data distribution $P$. The input $x$ may be subjected to an adversarial perturbation $\Delta x$ such that $y^*(x + \Delta x) \neq y^*(x) = y$, forcing a misclassification. A well-known defense strategy against such manipulations is to voluntarily corrupt inputs by noise before processing them. The rationale is that by adding noise $\eta$, one may be able to recover the original class, provided the noise amplitude is sufficiently large. For this to succeed, one typically utilizes domain knowledge in order to construct meaningful families of random transformations, as has been demonstrated, for instance, in (Xie et al., 2017; Athalye & Sutskever, 2017). Unstructured (e.g. white) noise, on the other hand, typically does not yield practically viable tradeoffs between probability of recovery and overall accuracy loss.

We thus propose to look for more subtle statistics that can be uncovered by using noise as a probing instrument and not as a direct means of recovery. We will focus on probabilistic classifiers with a logit layer of scores, as this gives us access to continuous values. For concreteness we will explicitly parameterize logits via $f_z(x) = \langle w_z, \phi(x)\rangle$ with class-specific weight vectors $w_z$ on top of a feature map $\phi$ realized by a (trained) deep network. Note that typically $\dim(\phi) \gg K$, the number of classes. We also define pairwise log-odds between classes $y$ and $z$, given input $x$:

$$f_{y,z}(x) = f_z(x) - f_y(x) = \langle w_z - w_y, \phi(x)\rangle. \qquad (3)$$

We are interested in the noise-perturbed log-odds $f_{y,z}(x + \eta)$ with $\eta \sim \mathcal{N}$, where $y = y(x)$, if ground truth is available, e.g. during training, or $y = y^*(x)$, during testing.

Note that the log-odds may behave differently for different class pairs, as they reflect class confusion probabilities that are task-specific and that cannot be anticipated a priori. This can be addressed by performing a Z-score standardization across data points $x$ and perturbations $\eta$. For each fixed class pair $(y, z)$ define:

$$g_{y,z}(x, \eta) := \frac{\bar f_{y,z}(x, \eta) - \mu_{y,z}}{\sigma_{y,z}}, \qquad \bar f_{y,z}(x, \eta) := f_{y,z}(x + \eta) - f_{y,z}(x), \qquad (4)$$

where $\mu_{y,z} := \mathbb{E}\big[\bar f_{y,z}(x, \eta)\big]$ and $\sigma^2_{y,z} := \mathrm{Var}\big[\bar f_{y,z}(x, \eta)\big]$. In practice, all of the above expectations are computed by sample averages over training data and noise instantiations.
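The following sketch shows how the noise-perturbed log-odds and their Z-scores (Equations 3 and 4) can be estimated by Monte-Carlo sampling; it assumes Gaussian probing noise and access to the logits through `model`, both simplifications relative to the mixture of noise sources described in the Appendix.

```python
import torch

def log_odds(logits, y):
    """Pairwise log-odds f_{y,z}(x) = f_z(x) - f_y(x) for all candidate classes z."""
    return logits - logits.gather(1, y.view(-1, 1))               # (batch, K); column y is zero

def perturbed_log_odds(model, x, y, sigma=0.05, n_samples=64):
    """Monte-Carlo estimate of f_{y,z}(x + eta) - f_{y,z}(x), averaged over noise draws eta."""
    with torch.no_grad():
        base = log_odds(model(x), y)
        deltas = [log_odds(model(x + sigma * torch.randn_like(x)), y) - base
                  for _ in range(n_samples)]
    return torch.stack(deltas).mean(dim=0)                        # (batch, K)

def fit_standardization(deltas_train):
    """Estimate mu_{y,z} and sigma_{y,z} (Equation 4) from perturbed log-odds of training
    data points sharing the same predicted class y; one column per candidate class z."""
    return deltas_train.mean(dim=0), deltas_train.std(dim=0) + 1e-8

def z_scores(deltas, mu, sigma):
    return (deltas - mu) / sigma
```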

3.2 Log-Odds Robustness

The main idea pursued in this paper is that the robustness properties of the perturbed log-odds statistics are different, dependent on whether the input is naturally generated or whether it is obtained through an (unobserved) adversarial manipulation $x \mapsto x + \Delta x$.

Firstly, note that it is indeed very common to use (small-amplitude) noise during training as a way to robustify models or to use regularization techniques which improve model generalization. In our notation this means that for $y = y(x)$ and any $z \neq y$, it is a general design goal – prior to even considering adversarial examples – that with high probability $f_{y,z}(x + \eta) \approx f_{y,z}(x)$, i.e. that log-odds with regard to the true class remain stable under noise. We generally may expect $f_{y,z}(x)$ to be negative (favoring the correct class) and slightly increasing under noise, as the classifier may become less certain.

Secondly, we posit that for many existing deep learning architectures, common adversarial attacks find perturbations that are not robust, but that overfit to specifics of the input $x$. We elaborate on this conjecture below by providing empirical evidence and theoretical insights. For the time being, note that if this conjecture can be reasonably assumed, then this opens up ways to design statistical tests to identify adversarial examples and possibly even to infer the true class label, which is particularly useful for test time attacks.

Consider the case of a test time attack, where we suspect an unknown perturbation $\Delta x$ has been applied such that $y^* = y^*(x + \Delta x) \neq y$. If the perturbation is not robust w.r.t. the noise process, then adding noise will yield $f_{y^*,y}(x + \Delta x + \eta) \gg f_{y^*,y}(x + \Delta x)$, meaning that noise will partially undo the effect of the adversarial manipulation and directionally revert the log-odds towards the true class in a way that is statistically captured in the perturbed log-odds. Figure 1 (lower left corner) shows this reversion effect.

Figure 1: Change of logit scores when adding noise to an adversarially perturbed example. Light red dot: the adversarially perturbed example without noise. Other red dots: the perturbed example with added noise, with color coding of the noise amplitude (light small, dark large). Light blue dot: the corresponding natural example. The candidate class in the green box is selected by Equation 6 and the plot magnified in the lower left.

Let us provide more empirical evidence that such an effect can indeed be observed. Figure 2 shows an experiment performed on the CIFAR10 data set with 10 classes (cf. Section 5). The histograms of standardized log-odds show a good separation between clean data points and adversarially manipulated data points.


Figure 2: Weight-difference alignment histograms aggregated over all data points in the training set. Blue represents natural data, orange represents adversarially perturbed data. Columns correspond to predicted labels $y^*$, rows to candidate classes $z$.

3.3 Statistical Test

We propose to use the expected perturbed log-odds $\bar g_{y,z}(x) := \mathbb{E}_\eta\big[g_{y,z}(x, \eta)\big]$ as statistics to test whether $x$, classified as $y$, should be thought of as a manipulated example of (true) class $z \neq y$ or not. To that end, we define thresholds $\tau_{y,z}$, which guarantee a maximal false detection rate (of say 1%), yet maximize the true positive rate of identifying adversarial examples. We then flag an example as (possibly) manipulated if

$$\max_{z \neq y}\ \big\{\bar g_{y,z}(x) - \tau_{y,z}\big\} \ge 0, \qquad (5)$$

otherwise it is considered clean.
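A sketch of the test in Equation 5 is given below, assuming the expected Z-scores `g_bar` (one row per sample, one column per candidate class) and a threshold matrix `tau` have already been computed; the names are placeholders.

```python
import torch

def is_adversarial(g_bar, tau, y_pred):
    """Flag a sample classified as y as manipulated if max_{z != y} ( g_bar[y, z] - tau[y, z] ) >= 0.

    g_bar:  (batch, K) expected standardized perturbed log-odds
    tau:    (K, K) thresholds tau[y, z], calibrated on clean data for a target false positive rate
    y_pred: (batch,) predicted labels y
    """
    scores = g_bar - tau[y_pred]                                     # g_bar_{y,z} - tau_{y,z}
    scores = scores.scatter(1, y_pred.view(-1, 1), float('-inf'))    # exclude z = y from the max
    return scores.max(dim=1).values >= 0                             # boolean flag per sample
```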

3.4 Corrected Classification

For test time attacks, it may be relevant not only to detect manipulations, but also to correct them on the spot. The simplest approach is to define a new classifier via

$$\hat y(x) := \arg\max_{z}\ \big\{\bar g_{y,z}(x) - \tau_{y,z}\big\}, \qquad y = y^*(x). \qquad (6)$$

Here we have set $\bar g_{y,y}(x) := 0$ and $\tau_{y,y} := 0$, which sets the correct reference point consistent with Equation 5.
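A corresponding sketch of Equation 6, reusing the quantities from the detection sketch above; pinning the $z = y$ entry to zero means the original prediction is kept whenever no candidate class exceeds its threshold.

```python
import torch

def corrected_prediction(g_bar, tau, y_pred):
    """Equation 6: argmax_z ( g_bar[y, z] - tau[y, z] ) with g_bar[y, y] = tau[y, y] = 0."""
    scores = g_bar - tau[y_pred]                         # (batch, K)
    scores = scores.scatter(1, y_pred.view(-1, 1), 0.0)  # reference point: class y scores exactly 0
    return scores.argmax(dim=1)                          # equals y_pred unless some z beats its threshold
```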

A somewhat more sophisticated approach is to build a second level classifier on top of the perturbed log-odds statistics. We have performed experiments with training a logistic regression classifier for each class $y$ on top of the standardized log-odds scores $\bar g_{y,z}(x)$, $z \neq y$. We have found this to further improve classification accuracy, especially in cases where several Z-scores are comparably far above the threshold. See Section 7.1 in the Appendix for further details.

4 Feature Space Analysis

4.1 Optimal Feature Space Manipulation

The feature space view allows us to characterize the optimal direction of manipulation for an attack targeting some class $z \neq y$. Obviously the log-odds $f_{y,z}$ only depend on a single direction in feature space, namely $\Delta w_{z,y} := w_z - w_y$.

Proposition 1.

For constraint sets that are closed under orthogonal projections, the optimal attack in feature space takes the form $\Delta\phi^* = \lambda\, \Delta w_{z,y}$ for some $\lambda \ge 0$.

Proof.

Assume $\Delta\phi$ is optimal. We can decompose $\Delta\phi = \lambda\, \Delta w_{z,y} + \Delta\phi^{\perp}$, where $\langle \Delta\phi^{\perp}, \Delta w_{z,y}\rangle = 0$. $\lambda\, \Delta w_{z,y}$ achieves the same change in log-odds as $\Delta\phi$ and is also optimal. ∎

Proposition 2.

If $\Delta\phi = \lambda\, \Delta w_{z,y} + \Delta\phi^{\perp}$ s.t. $\langle \Delta\phi^{\perp}, \Delta w_{z,y}\rangle = 0$ and $\lambda \ge 0$, then the induced change in log-odds equals $\lambda\, \|\Delta w_{z,y}\|^2$.

Proof.

Follows directly from the $\Delta\phi$-linearity of the log-odds. ∎

Now, as we treat the deep neural network defining $\phi$ as a black-box device, it is difficult to state whether a (near-)optimal feature space attack can be carried out by manipulating the input via $\Delta x$. However, we will use some DNN phenomenology as a starting point for making reasonable assumptions that can advance our understanding.

4.2 Pre-Image Problems

The feature space view suggests to search for a pre-image of the optimal manipulation $\phi(x) + \lambda\, \Delta w_{z,y}$, or at least a manipulation $\Delta x$ such that $\|\phi(x + \Delta x) - \phi(x) - \lambda\, \Delta w_{z,y}\|$ is small. Such pre-image problems are well-studied in the field of robotics as inverse kinematics problems. A naïve approach would be to linearize $\phi$ at $x$ and use the Jacobian,

$$\phi(x + \Delta x) \approx \phi(x) + J_\phi(x)\, \Delta x, \qquad J_\phi(x) := \frac{\partial \phi}{\partial x}(x). \qquad (7)$$

Iterative improvements could then be obtained by inverting (or pseudo-inverting) $J_\phi$, but such schemes are known to be plagued by instabilities. A popular alternative is the so-called Jacobian transpose method (Buss, 2004; Wolovich & Elliott, 1984; Balestrino et al., 1984). This can be motivated by a simple observation:

Proposition 3.

Given an input $x$ as well as a target direction $\Delta\phi$ in feature space, define $\Delta x := J_\phi^{\top}(x)\, \Delta\phi$ and assume that $\Delta x \neq 0$. Then there exists an $\epsilon > 0$ (small enough) such that $x + \epsilon\, \Delta x$ is a better pre-image in that $\|\phi(x) + \Delta\phi - \phi(x + \epsilon\, \Delta x)\| < \|\Delta\phi\|$.

Proof.

Follows from a Taylor expansion of $\phi$. ∎

It turns out that by the chain rule, we get for any loss $\mathcal{L}$ defined in terms of features $\phi$,

$$\nabla_x \mathcal{L} = J_\phi^{\top}(x)\, \nabla_\phi \mathcal{L}. \qquad (8)$$

With the soft-max loss $\mathcal{L}(x, z) = -\log p(z \mid x)$ and in case of a classifier that is confident about the class $y$, i.e. $p(y \mid x) \approx 1$, one gets

$$\nabla_\phi \mathcal{L} \approx -(w_z - w_y) = -\Delta w_{z,y}, \qquad \text{hence} \qquad \nabla_x \mathcal{L} \approx -J_\phi^{\top}(x)\, \Delta w_{z,y}. \qquad (9)$$

This shows that a gradient-based iterative attack is closely related to solving the pre-image problem for finding an optimal feature perturbation via the Jacobian transpose method.
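This relation is easy to verify numerically with automatic differentiation: backpropagating a loss defined on the features yields exactly the Jacobian-transpose product of Equation 8. The toy feature map and dimensions below are purely illustrative.

```python
import torch

torch.manual_seed(0)
phi = torch.nn.Sequential(                      # toy feature map phi: R^4 -> R^3
    torch.nn.Linear(4, 8), torch.nn.Tanh(), torch.nn.Linear(8, 3))

x = torch.randn(1, 4, requires_grad=True)
delta_w = torch.randn(3)                        # stand-in for Delta w_{z,y}

# Loss whose feature-space gradient is -Delta w; backprop gives J_phi(x)^T (-Delta w).
loss = -(phi(x) * delta_w).sum()
grad_x, = torch.autograd.grad(loss, x)

# Explicit Jacobian-transpose product for comparison.
J = torch.autograd.functional.jacobian(lambda inp: phi(inp).squeeze(0), x).squeeze()  # (3, 4)
print(torch.allclose(grad_x.squeeze(), -(J.t() @ delta_w), atol=1e-5))                # True
```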

4.3 Approximate Rays and Adversarial Cones

If an adversary could directly control the feature space representation, optimal attack vectors can always be found along the ray $\phi(x) + t\, \Delta w_{z,y}$, $t \ge 0$. As the adversary has to work in input space, this may only be possible in approximation: optimal manipulations may not lie on an exact ray and may not be perfectly co-linear with $\Delta w_{z,y}$. However, experimentally, we have found that an optimal perturbation $\Delta x$ typically defines a ray in input space, $x + t\, \Delta x$ ($t \ge 0$), yielding a feature-space trajectory $\phi(x + t\, \Delta x)$ for which the rate of change along $\Delta w_{z,y}$ is nearly constant over a relevant range of $t$ (see Figures 3 & 9). As tangents are given by

$$\frac{d}{dt}\, \phi(x + t\, \Delta x) = J_\phi(x + t\, \Delta x)\, \Delta x, \qquad (10)$$

this means that $\langle \Delta w_{z,y},\, J_\phi(x + t\, \Delta x)\, \Delta x\rangle \approx \text{const}$. Although the trajectory may fluctuate along feature space directions orthogonal to $\Delta w_{z,y}$, making it not a perfect ray, the key characteristic is that there is steady progress in changing the relevant log-odds. While it is obvious that the existence of such rays plays into the hands of an adversary, it remains an open theoretical question to elucidate the properties of the model architecture causing such vulnerabilities.

As adversarial directions are expected to be susceptible to angular variations (otherwise they would be simple to find, pointing at a general lack of model robustness), we conjecture that geometrically optimal adversarial manipulations are embedded in a cone-like structure, which we call an adversarial cone. Experimental evidence for the existence of such cones is visualized in Figure 5. It is a virtue of the commutativity of applying the adversarial perturbation and the random noise that our statistical test can reliably detect such adversarial cones.

5 Experimental Results


Dataset    Model          Test set accuracy (clean / PGD)
CIFAR10    CNN7           93.8% / 3.91%
           WResNet        96.2% / 2.60%
           CNN4           73.5% / 14.5%
ImageNet   Inception V3   76.5% / 7.2%
           ResNet 101     77.0% / 7.2%
           ResNet 18      69.2% / 6.5%
           VGG11(+BN)     70.1% / 5.7%
           VGG16(+BN)     73.3% / 6.1%

Table 1: Baseline test set accuracies on clean and PGD-perturbed examples for all models we considered.

5.1 Datasets, Architectures & Training Methods

In this section, we provide experimental support for our theoretical propositions and we benchmark our detection and correction methods on various architectures of deep neural networks trained on the CIFAR10 and ImageNet datasets. For CIFAR10, we compare a WideResNet implementation from Madry et al. (2017), a 7-layer CNN with batch normalization and a vanilla 4-layer CNN; details can be found in the Appendix. In the following, if nothing else is specified, we use the 7-layer CNN as a default platform, since it has good test set accuracy at relatively low computational requirements. For ImageNet, we use a selection of models from the torchvision package (Marcel & Rodriguez, 2010), including Inception V3, ResNet101 and VGG16.

As a default attack strategy we use an $\ell_\infty$-norm constrained PGD white-box attack. The attack budget $\epsilon$ was chosen per dataset to be the smallest value such that most examples are successfully attacked (see “Determining attack strengths” in the Appendix). We experimented with a number of different PGD iterations and found that the corrected classification accuracy is nearly constant across the entire range from 10 up to 1000 attack iterations. The result of this experiment can be found in Figure 10 in the Appendix. For the remainder of this paper, we thus fixed the number of iterations to be 20. Table 1 shows test set accuracies for all models on both clean and adversarial samples.

We note that the detection algorithm (based on Equation 5) is completely attack agnostic, while the logistic classifier based correction algorithm is trained on adversarially perturbed training samples, see Section 7.1 in the Appendix for further details. The second-level logistic classifier is the only stage where we explicitly include an adversarial attack model. While this could in principle lead to overfitting to the particular attack considered, we empirically show that the correction algorithm performs well under attacks not seen during training, see Section 5.6, as well as specifically designed counter-attacks, see Section 5.7.

Figure 3: (Left) Norm of the induced feature space perturbation along adversarial and random directions. (Right) Weight-difference alignment. For the adversarial direction, the alignment with the weight-difference between the true and adversarial class is shown. For the random direction, the largest alignment with any weight-difference vector is shown.

5.2 Detectability of Adversarial Examples

Before we evaluate our method, we present empirical support for the claims made in Sections 3 and 4.

Induced feature space perturbations. We compute (i) the norm of the induced feature space perturbation along adversarial and random directions (the expected norm of the noise is set to be approximately equal to the expected norm of the adversarial perturbation). We also compute (ii) the alignment between the induced feature space perturbation and certain weight-difference vectors. For the adversarial direction, we compute the alignment with the weight-difference vector between the true and adversarial class. For the random direction, the largest alignment with any weight-difference vector is computed.

The results are reported in Figure 3. The plot on the left shows that iterative adversarial attacks induce feature space perturbations that are significantly larger than the feature space perturbations induced by random noise. Similarly, the plot on the right shows that the alignment of the attack-induced feature space perturbation is significantly larger than the alignment of the noise-induced feature space perturbation. Combined, this indicates that adversarial examples lie in particular directions in input space in which small perturbations cause atypically large feature space perturbations along the weight-difference direction $\Delta w_{z,y} = w_z - w_y$.

Distance to decision boundary. Next, we investigate whether adversarial examples are closer or farther from the decision boundary compared to their unperturbed counterpart. The purpose is to test whether adversarial examples could be detectable for the trivial reason that they are lying closer to the decision boundary than natural examples.

To this end, we measure (i) the logit cross-over when linearly interpolating between an adversarially perturbed example and its natural counterpart, i.e. we measure the interpolation coefficient $t^*$ s.t. $f_{y,z}(x + t^*\,\Delta x) = 0$, where $y$ denotes the natural and $z$ the adversarial class. We also measure (ii) the average $\ell_2$-norm of the DeepFool perturbation required to cross the nearest decision boundary, for a given interpolant $x + t\,\Delta x$ (the DeepFool attack tries to find the shortest path to the nearest decision boundary; we additionally augment DeepFool by a binary search to hit the decision boundary precisely). With the second experiment we want to measure whether the adversarial example is closer to any decision boundary, not necessarily the one between the natural and adversarial example in part (i).

Figure 4: Average distance to the decision boundary when interpolating from natural examples to adversarial examples. The horizontal axis shows the relative offset of the interpolant to the logit cross-over point located at the origin. For each interpolant, the distance to the nearest decision boundary is computed. The plot shows that natural examples are slightly closer to the decision boundary than adversarial examples.

Our results confirm that adversarial examples are not closer to the decision boundary than their natural counterparts. The mean logit cross-over lies at an intermediate point between the natural and the adversarial example, and, as shown in Figure 4, the mean $\ell_2$-distance to the nearest decision boundary is slightly larger for adversarial examples than for natural examples. Hence, adversarial examples are even slightly farther from the decision boundary.

We can thus rule out the possibility that adversarial examples can be detected because of a discrepancy in distance to the decision boundary.

Proximity to nearest neighbor. We measure the ratio of the ‘distance between the adversarial and the corresponding unperturbed example’ to the ‘distance between the adversarial example and the nearest other neighbor (in either training or test set)’, i.e. we compute $\|\Delta x\|_2 \,/\, \min_{x' \neq x} \|x + \Delta x - x'\|_2$ over a number of samples in the test set, for various $\ell_2$- and $\ell_\infty$-bounded PGD attacks.

We consistently find that the ratio is sharply peaked around a value much smaller than one, both for $\ell_\infty$-PGD attacks and for the corresponding $\ell_2$-PGD attacks. Further values can be found in Table 7 in the Appendix. We note that similar findings have been reported before (Tramèr et al., 2017).

Hence, “perceptually similar” adversarial samples are much closer to the unperturbed sample than to any other neighbor in the training or test set.

We would therefore naturally expect that, when convolved with random noise, adversarial examples tend to be shifted towards the unperturbed sample rather than towards any other neighbor. Although adding noise is generally not sufficient to cross the decision boundary, e.g. to restore the original class, the noise-induced feature variation is more likely to point towards the original class than towards any other neighboring class.

Figure 5: Adversarial cone. The plot shows an averaged 2D projection of the classifier’s softmax prediction for the natural class in an ambient space hyperplane spanned by the adversarial perturbation (on the vertical axis) and randomly sampled orthogonal vectors (on the horizontal axis). The natural sample is located one-third from the top, the adversarial sample one-third from the bottom on the vertical axis through the middle of the plot. See Section 5.2 for a mathematical description.

Adversarial Cones. To visualize the ambient space neighborhood around natural and adversarially perturbed samples, we plot the averaged 2D projection of the classifier’s prediction for the natural class in a hyperplane spanned by the adversarial perturbation and randomly sampled orthogonal vectors, i.e. we plot $\mathbb{E}_v\big[\sigma_y(x + t\,\Delta x + s\,v)\big]$ with $s$ along the horizontal and $t$ along the vertical axis, where $\sigma_y$ denotes the softmax probability of the natural class $y$ and $\mathbb{E}_v$ denotes the expectation over random vectors $v$ orthogonal to $\Delta x$ and of approximately equal norm.
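A sketch of this projection is given below: it averages the natural-class softmax probability over a grid spanned by the adversarial direction (vertical axis) and random orthogonal directions of comparable norm (horizontal axis). The grid ranges, the number of random directions and the function names are illustrative choices, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def cone_projection(model, x, x_adv, y_nat, n_dirs=32, grid=21, scale=2.0):
    """Average softmax probability of the natural class y_nat on a 2D plane spanned by the
    adversarial perturbation (rows / vertical axis) and random orthogonal vectors (columns)."""
    d = (x_adv - x).flatten()
    ts = torch.linspace(-scale, scale, grid)              # steps along the adversarial direction
    ss = torch.linspace(-scale, scale, grid)              # steps along the random directions
    heat = torch.zeros(grid, grid)
    with torch.no_grad():
        for _ in range(n_dirs):
            v = torch.randn_like(d)
            v = v - (v @ d) / (d @ d) * d                 # make v orthogonal to the adversarial direction
            v = v / v.norm() * d.norm()                   # approximately equal norm
            for i, t in enumerate(ts):
                for j, s in enumerate(ss):
                    pt = (x.flatten() + t * d + s * v).view_as(x).unsqueeze(0)
                    heat[i, j] += F.softmax(model(pt), dim=1)[0, y_nat] / n_dirs
    return heat                                           # e.g. visualize with matplotlib's imshow
```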

Interestingly, the plot reveals that adversarial examples live in a conic neighborhood, i.e. the adversarial sample is statistically speaking “surrounded” by the natural class, as can be seen from the gray rays confining the adversarial cone. This confirms our proximity results and illustrates why the noise-induced feature variation tends to have a direction that is indicative of the natural class when the sample is adversarially perturbed. See also Figure 9 in the Appendix.

Suboptimality & robustness to random noise. By virtue of the commutativity of applying the adversarial perturbation $\Delta x$ and the random noise $\eta$, i.e. $(x + \Delta x) + \eta = (x + \eta) + \Delta x$, the view that the adversarial perturbation is not robust to random noise is dual to the view that $\Delta x$ is a suboptimal perturbation for the noise-corrupted natural sample $x + \eta$. To investigate this, we compute (i) the softmax predictions when adding noise to the adversarial example and (ii) the noise-induced weight-difference alignment for natural and adversarial examples.

The results are reported in Figure 6. The plot on the left shows that the probability of the natural class increases significantly faster than the probability of the highest other class when adding noise with a small to intermediate magnitude to the adversarial example. Note, however, that the probability of the natural class never climbs to be the highest probability of all classes, which is why simple addition of noise to an adversarial example does not recover the natural class in general. The plot on the right shows that the noise-induced weight-difference alignment is significantly larger for the adversarial example than for the natural example. This illustrates that the noise manages to partially undo the effect of the adversarial manipulation and directionally revert the features towards the true class. Combined, these results provide a direct justification to detect adversarial examples via the test statistic presented in Section 3.3.

Figure 6: (Left) Classifier predictions when adding random noise to the adversarial example. (Right) Noise-induced weight-difference alignment for the adversarial example and the natural example, respectively. For the adversarial example, the alignment with the weight-difference vector between the true and adversarial class is shown. For the natural example, the largest alignment with any weight-difference vector is shown.

5.3 Detection rates and classification accuracies

In the remainder of this section we present the results of various performance evaluations. The reported detection rates measure how often our method classifies a sample as being adversarial (corresponding to the False Positive Rate if the sample is clean and to the True Positive Rate if it was perturbed). We also report accuracies for the predictions made by the logistic classifier based correction method.

Tables 2 and 3 report the detection rates of our statistical test and accuracies of the corrected predictions. Our method manages to detect nearly all adversarial samples, seemingly getting better as models become more complex, all the while the false positive rate stays around 1%. Further, after correcting, we manage to reclassify almost all of the detected adversarial samples to their respective source class successfully, resulting in test set accuracies on adversarial samples within a few percentage points of the respective test set accuracies on clean samples. (Due to computational constraints, we focus on the CIFAR10 models in the remainder of this paper.) Also note that due to the low false positive rate, the drop in performance on clean samples is negligible.


Dataset    Model          Detection rate (clean / PGD)
CIFAR10    CNN7           0.8% / 95.0%
           WResNet        0.2% / 99.1%
           CNN4           1.4% / 93.8%
ImageNet   Inception V3   1.9% / 99.6%
           ResNet 101     0.8% / 99.8%
           ResNet 18      0.6% / 99.8%
           VGG11(+BN)     0.5% / 99.9%
           VGG16(+BN)     0.3% / 99.9%

Table 2: Detection rates of our statistical test.

Dataset    Model          Accuracy (clean / PGD)
CIFAR10    CNN7           93.6% / 89.5%
           WResNet        96.0% / 92.7%
           CNN4           71.0% / 67.6%

Table 3: Accuracies of the corrected classifier.

5.4 Effective strength of adversarial perturbations.

We measure how the detection and reclassification accuracy of our method depends on the attack strength. To this end, we define the effective Bernoulli-$q$ strength of $\epsilon$-bounded adversarial perturbations as the attack success rate when each entry of the perturbation is individually accepted with probability $q$ and set to zero with probability $1 - q$. For $q = 1$ we obtain the usual adversarial misclassification rate of the classifier. We naturally expect weaker attacks to be less effective but also harder to detect than stronger attacks. A sketch of this thinning procedure is given below.
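The following minimal sketch applies Bernoulli-$q$ thinning to a precomputed adversarial batch; the names are placeholders.

```python
import torch

def bernoulli_q_success_rate(model, x, x_adv, y_true, q):
    """Accept each entry of the perturbation independently with probability q (zero it otherwise)
    and report the misclassification rate of the thinned attack; q = 1 recovers the usual rate."""
    delta = x_adv - x
    with torch.no_grad():
        mask = (torch.rand_like(delta) < q).float()       # Bernoulli(q) acceptance per entry
        preds = model(x + mask * delta).argmax(dim=1)
    return (preds != y_true).float().mean().item()
```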

The results are reported in Figure 7. We can see that the uncorrected accuracy of the classifier decreases monotonically as the attack strength increases, both in terms of the attack budget $\epsilon$ as well as in terms of the fraction of accepted perturbation entries. Meanwhile, the detection rate of our method increases at such a rate that the corrected classifier manages to compensate for the decay in uncorrected accuracy, across the entire range considered.

Figure 7: Detection and reclassification accuracy as a function of attack strength. The uncorrected classifier accuracy decreases as the attack strength increases, both in terms of the attack budget as well as in terms of the fraction of accepted perturbation entries. Meanwhile, the detection rate of our method increases at such a rate that the corrected classifier manages to compensate for the decay in uncorrected accuracy.

5.5 Comparing to Adversarial Training

For comparison, we also report test set and white-box attack accuracies for adversarially trained models. Madry et al. (2017)’s WResNet was available as an adversarially pretrained variant, while the other models were adversarially trained as outlined in Appendix 7.1. The results for the best performing classifier are shown in Table 4. We can see that adversarial training does not compare favorably to our method, as the accuracy on adversarial samples is significantly lower while the drop in performance on clean samples is considerably larger.


Adversarially trained model    Accuracy (clean / PGD)
CNN7                           82.2% / 44.4%
WResNet                        87.3% / 55.2%
CNN4                           68.2% / 40.4%

Table 4: Test set accuracies on clean and PGD-perturbed examples for adversarially trained models.

5.6 Defending against unseen attacks

Next, we evaluate our method on adversarial examples created by attacks that are different from the $\ell_\infty$-constrained PGD attack used to train the second-level logistic classifier. The rationale is that the log-odds statistics of the unseen attacks could be different from the ones used to train the logistic classifier. We thus want to test whether it is possible to evade correct reclassification by switching to a different attack. As alternative attacks we use an $\ell_2$-constrained PGD attack as well as the Carlini-Wagner attack.

The baseline accuracies of the undefended CNN7 on adversarial examples from the $\ell_2$-PGD attack and from the Carlini-Wagner attack are again low. Table 5 shows detection rates and corrected accuracies after our method is applied. As can be seen, there is only a slight decrease in performance, i.e. our method remains capable of detecting and correcting most adversarial examples of the previously unseen attacks.


Attack     Detection rate (clean / attack)    Accuracy (clean / attack)
ℓ2-PGD     1.0% / 96.1%                       93.3% / 92.9%
CW         4.8% / 91.6%                       89.7% / 77.9%

Table 5: Detection rates and reclassification accuracies on adversarial samples from attacks that have not been used to train the second-level logistic classifier.

5.7 Defending against defense-aware attacks

Finally, we evaluate our method in a setting where the attacker is fully aware of the defense, in order to see if the defended network is susceptible to cleverly designed counter-attacks. Since our defense is built on random sampling from noise sources that are under our control, the attacker will want to craft perturbations that perform well in expectation under this noise. The optimality of this strategy in the face of randomization-based defenses was established in Carlini & Wagner (2017a). Specifically, we compute the adversarial attack in expectation over a noise neighborhood around the input, i.e. the attack maximizes $\mathbb{E}_\eta\big[\mathcal{L}(x + \Delta x + \eta, y)\big]$, with the same noise source as used for detection.
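One way to realize such a defense-aware attack is an expectation-over-noise variant of PGD that approximates the expected loss with a few noise draws per step, sketched below; the Gaussian noise model and all hyperparameter values are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def pgd_linf_over_noise(model, x, y, eps=8/255, alpha=2/255, steps=20, sigma=0.05, n_noise=8):
    """Untargeted L-inf PGD on the expected loss E_eta[ L(x_adv + eta, y) ], with the expectation
    approximated by averaging the loss over n_noise draws of the defender's noise."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = sum(F.cross_entropy(model(x_adv + sigma * torch.randn_like(x_adv)), y)
                   for _ in range(n_noise)) / n_noise
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0.0, 1.0)  # project & box-constrain
    return x_adv.detach()
```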

The undefended accuracies under this attack for the models under consideration are again low. Table 6 shows the corresponding detection rates and accuracies after defending with our method. Compared to Section 5.6, the drop in performance is larger, as we would expect for a defense-aware counter-attack; however, both the detection rates and the accuracies remain remarkably high compared to the undefended network.


Model      Detection rate (clean / attack)    Accuracy (clean / attack)
CNN7       2.8% / 75.5%                       91.2% / 56.6%
WResNet    4.5% / 71.4%                       91.7% / 56.0%
CNN4       4.1% / 81.3%                       69.0% / 56.5%

Table 6: Detection rates and reclassification accuracies on clean and adversarial samples from the defense-aware attacker.

6 Conclusion

We have shown that adversarial examples exist in cone-shaped regions in very specific directions from their corresponding natural samples. Based on this, we design a statistical test of a given sample's log-odds robustness to noise that can infer with high accuracy whether the sample is natural or adversarial and recover its original class label, if necessary. Further research into the properties of network architectures is necessary to explain the underlying cause of this phenomenon. It remains an open question which current model families follow this paradigm and whether criteria exist which can certify that a given model is immunizable via this method.

Acknowledgements

We would like to thank Sebastian Nowozin, Aurelien Lucchi, Gary Becigneul, Jonas Kohler and the dalab team for insightful discussions and helpful comments.

References

7 Appendix

7.1 Experiments.

Further details regarding the implementation:

Details on the models used. All models on ImageNet are taken as pretrained versions from the torchvision python package (https://github.com/pytorch/vision). For CIFAR10, both CNN7 (https://github.com/aaron-xichen/pytorch-playground) and WResNet (https://github.com/MadryLab/cifar10_challenge) are available on GitHub as pretrained versions. The CNN4 model is a standard deep convolutional network with layers of 32, 32, 64 and 64 channels, each layer using small convolutional filters and being followed by a ReLU nonlinearity and max-pooling. At the end is a single fully connected softmax classifier.

Training procedures.

We used pretrained versions of all models except CNN4, which we trained for 50 epochs with RMSProp and a learning rate of 0.0001. For adversarial training, we trained for 50, 100 and 150 epochs using mixed batches of clean and corresponding adversarial (PGD) samples, matching the respective training schedule and optimizer settings of the clean models, and then chose the best performing classifier. The exception to this is the WResNet model, for which an adversarially trained version is already provided.

Setting the thresholds. The thresholds $\tau_{y,z}$ are set such that our statistical test achieves the highest possible detection rate (aka True Positive Rate) at a prespecified False Positive Rate of less than 1% (a somewhat higher rate for Sections 5.6 and 5.7), computed on a hold-out set of natural and adversarially perturbed samples. A simple calibration sketch is given below.
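One simple way to implement this calibration is a per-class-pair quantile rule on clean hold-out scores, sketched below; the paper's exact tuning procedure may differ, and `g_bar_clean`, `y_pred_clean` are assumed to be precomputed.

```python
import torch

def calibrate_thresholds(g_bar_clean, y_pred_clean, num_classes, fpr=0.01):
    """Set tau[y, z] to the (1 - fpr)-quantile of the clean expected Z-scores for class pair (y, z),
    so that each individual statistic exceeds its threshold on clean data with probability <= fpr.
    Note: the max over z in Equation 5 can push the overall false positive rate somewhat higher."""
    tau = torch.full((num_classes, num_classes), float('inf'))
    for y in range(num_classes):
        rows = g_bar_clean[y_pred_clean == y]          # clean hold-out samples predicted as class y
        if rows.shape[0] == 0:
            continue
        for z in range(num_classes):
            if z != y:
                tau[y, z] = torch.quantile(rows[:, z], 1.0 - fpr)
    return tau
```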

Determining attack strengths. For the adversarial attacks we consider, we can choose multiple parameters that influence the strength of the attack. Usually, as attack strength increases, at some point there is a sharp increase in the fraction of samples in the dataset on which the attack is successful. We choose our attack strength to be the lowest value past this increase, i.e. the lowest value such that the attack is still able to successfully attack most of the data points. Note that weaker attacks generate adversarial samples that are closer to the original samples, which makes them harder to detect than excessively strong attacks.

Noise sources. Adding noise provides a non-atomic view, probing the classifier’s output in an entire neighborhood around the input. In practice we sample noise from a mixture of different sources: Uniform, Bernoulli and Gaussian noise with different magnitudes, where the magnitudes are sampled from a log-scale. For each noise source and magnitude, we draw 256 samples as the basis for noisy versions of the incoming data points, though we have not observed a large drop in performance when using only the single best combination of noise source and magnitude and fewer samples, which speeds up the wall time needed to classify a single sample by an order of magnitude. For detection, we test the sample in question against the distribution of each noise source, and then take a majority vote as to whether the sample should be classified as adversarial.
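A sketch of this mixture of noise sources and the majority vote, with illustrative magnitudes and a user-supplied single-source detector `detect_fn` standing in for the statistical test above:

```python
import torch

NOISE_SOURCES = {                                   # illustrative noise families
    "gaussian":  lambda x, m: m * torch.randn_like(x),
    "uniform":   lambda x, m: m * (2 * torch.rand_like(x) - 1),
    "bernoulli": lambda x, m: m * (2 * torch.bernoulli(0.5 * torch.ones_like(x)) - 1),
}
MAGNITUDES = [0.01, 0.03, 0.1]                      # the paper samples magnitudes from a log-scale

def majority_vote_detect(detect_fn, x, n_samples=256):
    """Run the single-source detector once per (noise source, magnitude) combination and flag
    the input as adversarial if the majority of the individual decisions say so."""
    votes = []
    for name, sample in NOISE_SOURCES.items():
        for m in MAGNITUDES:
            noise = torch.stack([sample(x, m) for _ in range(n_samples)])
            votes.append(bool(detect_fn(x, noise)))  # detect_fn compares x against this noise batch
    return sum(votes) > len(votes) / 2
```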

Plots.

All plots containing shaded areas show measurements repeated over the dataset: the line indicates the mean measurement and the shaded area represents one standard deviation around the mean.

Wall time performance. Since for each incoming sample at test time we have to forward propagate a batch of noisy versions through the model, the time it takes to classify a sample in a robust manner using our method scales linearly with the number of noise samples, compared to the same model undefended. The rest of our method has negligible overhead. At training time, we essentially have to do the same thing for the training dataset, which, depending on its size and the number of desired noise sources, can take a while. But for a given model and dataset, this has to be computed only once and the computed statistics can then be stored.

7.2 Logistic classifier for reclassification.

Instead of selecting the class according to Eq. (6), we found that training a simple logistic classifier that gets as input all the Z-scores $\bar g_{y,z}(x)$ for $z \neq y$ can further improve classification accuracy, especially in cases where several Z-scores are comparably far above the threshold. Specifically, for each class label $y$, we train a separate logistic regression classifier such that, if a sample of predicted class $y$ is detected as adversarial, we obtain the corrected class label from the prediction of that classifier. These classifiers are trained on the same training data that is used to collect the statistics for detection. Two points are worth noting. First, as the classifiers are trained using adversarial samples from a particular adversarial attack model, they might not be valid for adversarial samples from other attack models. However, we observe experimentally that our classifiers (trained using PGD) generalize well to other attacks. Second, building a classifier in order to protect a classifier might seem tautological, because this metaclassifier could now become the target of an adversarial attack itself. However, this does not apply in our case, as the inputs to our classifier are (i) low-dimensional (there are just $K - 1$ weight-difference alignments for any given sample), (ii) a product of sampled noise and therefore random variables, and (iii) the classifier itself is shallow. All of these make it much harder to specifically attack this classifier. Further, in Section 5.7 we show that our method still performs reasonably well even if the adversary is aware of the defense.
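A sketch of this per-class second-level classifier using scikit-learn; the feature layout (one row of expected Z-scores per detected sample) and the function names are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_correction_classifiers(g_bar, y_pred, y_true, num_classes):
    """For each predicted class y, fit a logistic regression that maps the vector of expected
    Z-scores of a detected-adversarial training sample to its true (source) class label."""
    classifiers = {}
    for y in range(num_classes):
        idx = (y_pred == y)
        if idx.sum() < 2 or len(np.unique(y_true[idx])) < 2:
            continue                                   # need at least two target classes to fit
        clf = LogisticRegression(max_iter=1000)
        clf.fit(g_bar[idx], y_true[idx])
        classifiers[y] = clf
    return classifiers

def correct_label(classifiers, g_bar_sample, y_pred_sample):
    """Reclassify a detected sample; fall back to the original prediction if no classifier exists."""
    clf = classifiers.get(int(y_pred_sample))
    if clf is None:
        return int(y_pred_sample)
    return int(clf.predict(np.asarray(g_bar_sample).reshape(1, -1))[0])
```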

7.3 Additional results mentioned in the main text.

Figure 8: Weight-difference alignments. Different plots correspond to different candidate classes $z$. The light red dot shows the adversarially perturbed example without noise. The other red dots show the adversarially perturbed example with added noise. Color shades reflect noise magnitude: light small, dark large magnitude. The light blue dot indicates the corresponding natural example without noise. The candidate class in the upper-left corner is selected. See Figure 1 for an explanation.
Figure 9: Classifier predictions along the ray from natural to adversarial example and beyond. For the untargeted attack shown here, the probability of the source class stays low, even at the distance to the adversarial example.
Figure 10: Detection rate and corrected accuracy on clean and PGD samples vs. the number of PGD attack iterations.

Table 7: Proximity to nearest neighbor for PGD attacks. The table shows the ratio of the ‘distance between the adversarial and the corresponding unperturbed example’ to the ‘distance between the adversarial example and the nearest other neighbor (in either training or test set)’, i.e. $\|\Delta x\|_2 \,/\, \min_{x' \neq x} \|x + \Delta x - x'\|_2$.

7.4 ROC Curves.

Figure 11 shows how our method performs against a PGD attack under different settings of the thresholds $\tau_{y,z}$.

Figure 11: ROC-curves for CNN7, WResNet and CNN4. For each model, the test set accuracy on PGD samples is plotted against the accuracy on clean samples, and the true positive detection rate against the false positive detection rate, for a range of choices of the thresholds $\tau_{y,z}$.