Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples

02/01/2018, by Anish Athalye, et al.

We identify obfuscated gradients as a phenomenon that leads to a false sense of security in defenses against adversarial examples. While defenses that cause obfuscated gradients appear to defeat optimization-based attacks, we find defenses relying on this effect can be circumvented. For each of the three types of obfuscated gradients we discover, we describe indicators of defenses exhibiting this effect and develop attack techniques to overcome it. In a case study, examining all defenses accepted to ICLR 2018, we find obfuscated gradients are a common occurrence, with 7 of 8 defenses relying on obfuscated gradients. Using our new attack techniques, we successfully circumvent all 7 of them.


1 Introduction

In response to the susceptibility of neural networks to adversarial examples (Szegedy et al., 2013; Biggio et al., 2013), there has been significant interest recently in constructing defenses to increase the robustness of neural networks. While progress has been made in understanding and defending against adversarial examples in the white-box setting, where the adversary has complete access to the network, a complete solution has not yet been found.

As benchmarking against iterative optimization-based attacks (e.g., (Kurakin et al., 2016a; Madry et al., 2018; Carlini & Wagner, 2017c)) has become standard practice in evaluating defenses, new defenses have arisen that appear to be robust against these powerful optimization-based attacks.

We identify one common reason why many defenses provide apparent robustness against iterative optimization attacks: obfuscated gradients, a term we define as a special case of gradient masking (Papernot et al., 2017). Without a useful gradient, that is, when following the gradient does not successfully optimize the loss, iterative optimization-based methods cannot succeed. We identify three types of obfuscated gradients:

  • Shattered gradients are nonexistent or incorrect gradients caused either intentionally through non-differentiable operations or unintentionally through numerical instability.

  • Stochastic gradients are gradients that depend on test-time randomness unavailable to the attacker.

  • Vanishing/exploding gradients arise in very deep computation or recurrent computation with long-term dependencies, resulting in an unusable gradient.

We propose new techniques to overcome obfuscated gradients caused by these three phenomena. We address gradient shattering with a new attack technique we call Backward Pass Differentiable Approximation, where we approximate derivatives by computing the forward pass normally and computing the backward pass through a differentiable approximation of the function. We compute gradients of randomized defenses by applying Expectation Over Transformation (Athalye et al., 2017). We solve vanishing/exploding gradients through reparameterization, optimizing over a space where gradients do not explode or vanish.

To investigate the prevalence of obfuscated gradients and understand the applicability of these attack techniques, we use the ICLR 2018 non-certified defenses that argue white-box robustness as a case study. We find that obfuscated gradients are a common occurrence, with 7 of 8 defenses relying on this phenomenon. Applying the new attack techniques we develop, we overcome obfuscated gradients and circumvent 6 of them completely, and 1 partially. Along with this, we offer an analysis of the evaluations performed in the papers.

Additionally, we hope to provide researchers with a common baseline of knowledge, description of attack techniques, and common evaluation pitfalls, so that future defenses can avoid falling vulnerable to these same attacks.

To promote reproducible research, we release our re-implementation of each of these defenses, along with implementations of our attacks for each. (Anonymized for submission; see supplementary material.)

2 Preliminaries

2.1 Notation

We consider a neural network f(·) used for classification, where f(x)_i represents the probability that image x corresponds to label i. We classify images, represented as x ∈ [0, 1]^{w·h·3} for a 3-color image of width w and height h. We use f^i(x) to refer to layer i of the neural network, and f^{i..j}(x) for the composition of layers i through j. We denote the classification of the network as c(x) = arg max_i f(x)_i, where the true label of image x is written c*(x).

2.2 Adversarial Examples

Given an image x and classifier c(·), an adversarial example x′ (Szegedy et al., 2013) satisfies two properties: D(x, x′) is small for some distance metric D, and c(x′) ≠ c*(x). That is, for images, x and x′ appear visually similar but x′ is classified incorrectly.

For this paper we use the ℓ∞ and ℓ2 distortion metrics to measure similarity. Two images which have a small distortion under either of these metrics will appear visually identical. We report ℓ∞ distance in the normalized [0, 1] space, so that a given distortion corresponds to the same fraction of the per-pixel dynamic range on every dataset, and ℓ2 distance as the total root-mean-square distortion normalized by the total number of pixels.
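
To make these metrics concrete, a minimal sketch of how one might compute them is below; the exact ℓ2 normalization is our assumption for illustration, not necessarily the one used in the original evaluation.

```python
import numpy as np

def linf_distance(x, x_adv):
    """Maximum absolute per-pixel change, with pixels in [0, 1]."""
    return np.max(np.abs(x - x_adv))

def normalized_l2_distance(x, x_adv):
    """Root-mean-square per-pixel distortion (one way to normalize l2 by image size)."""
    return np.sqrt(np.mean((x - x_adv) ** 2))

# Example: a random image and a small perturbation of it.
x = np.random.rand(32, 32, 3)
x_adv = np.clip(x + np.random.uniform(-8 / 255, 8 / 255, x.shape), 0.0, 1.0)
print(linf_distance(x, x_adv), normalized_l2_distance(x, x_adv))
```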

2.3 Datasets & Models

We evaluate these defenses on the same datasets on which they claim robustness. If a defense argues security on MNIST and any other dataset, we only circumvent the defense on the larger dataset. On MNIST and CIFAR-10, we evaluate defenses over the entire test set and generate untargeted adversarial examples. On ImageNet, we evaluate over 1000 randomly selected images in the test set and construct targeted adversarial examples with randomly selected target classes. Generating targeted adversarial examples is a strictly harder problem that we believe is also a more meaningful metric, especially for this dataset.

We use standard models for each dataset: for MNIST, a standard 5-layer convolutional neural network; for CIFAR-10, a wide ResNet (Zagoruyko & Komodakis, 2016; He et al., 2016); and for ImageNet, the InceptionV3 network (Szegedy et al., 2016). Each model is trained to the test accuracy typical of its architecture.

2.4 Threat Models

Prior work considers adversarial examples in a number of threat models that can be broadly classified into two categories: white-box and black-box. In this paper, we consider defenses designed for the white-box setting, where the adversary has full access to the neural network classifier (architecture and weights) and defense, but not test-time randomness (only the distribution).

2.5 Attack Methods

We construct adversarial examples with iterative optimization-based methods. At a high level, for a given instance x, these attacks search for a nearby x′ that either minimizes D(x, x′) subject to x′ being misclassified, or maximizes the classification loss on x′ subject to a bound on D(x, x′). To generate ℓ∞-bounded adversarial examples we use Projected Gradient Descent (PGD) confined to a specified ℓ∞ ball; for ℓ2, we use the Lagrangian relaxation of Carlini & Wagner (2017c). The specific choice of optimizer is far less important than the choice to use iterative optimization-based methods (Madry et al., 2018).
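
For concreteness, a minimal ℓ∞ PGD sketch in PyTorch follows; the step size, iteration count, and random start are illustrative assumptions rather than the exact configuration used in the paper.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8 / 255, alpha=2 / 255, steps=40):
    """Untargeted l-infinity PGD: maximize classification loss within an eps-ball."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)  # random start
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()           # ascent step
        x_adv = x.detach() + (x_adv - x).clamp(-eps, eps)      # project onto the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)                          # stay in valid image range
    return x_adv.detach()
```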

3 Obfuscated Gradients

A defense is said to cause gradient masking if it “does not have useful gradients” for generating adversarial examples (Papernot et al., 2017), and gradient masking is known to be an incomplete defense to adversarial examples (Papernot et al., 2017; Tramèr et al., 2018). Despite this, we observe that 7 of the ICLR 2018 defenses we study rely on this effect.

In contrast to existing defenses that cause gradient masking by learning to break gradient descent (e.g., by learning to make the gradients point in the wrong direction (Tramèr et al., 2018)), we use the term obfuscated gradients for the case where a defense is designed in such a way that the constructed defense necessarily causes gradient masking. We discover three ways in which defenses obfuscate gradients; we briefly define and discuss each of them.

Shattered Gradients are caused when a defense is non-differentiable, introduces numeric instability, or otherwise causes a gradient to be nonexistent or incorrect. Defenses that cause gradient shattering often do so unintentionally, by introducing operations that are differentiable but where following the gradient does not maximize classification loss.

Stochastic Gradients are caused by randomized defenses, where either the network itself is randomized or the input is randomized before being fed to the classifier, and so the gradients become randomized. This causes methods using a single sample of the randomness to incorrectly estimate the true gradient.

Exploding & Vanishing Gradients are often caused by defenses that consist of multiple iterations of neural network evaluation, feeding the output of one computation as the input of the next. This type of computation, when unrolled, can be viewed as an extremely deep neural network evaluation, which can cause vanishing/exploding gradients.

3.1 Identifying Obfuscated & Masked Gradients

As we observe, some defenses intentionally break gradient descent by design and cause obfuscated gradients. However, other defenses break gradient descent unintentionally, even though it was not the intention of the designer. We discuss below characteristic behaviors of defenses for which this has occurred. While these behaviors may not perfectly characterize all cases of masked gradients, we find that every defense we are aware of that masks gradients exhibits at least one of these behaviors.

One-step attacks perform better than iterative attacks.

Iterative optimization-based attacks applied in a white-box setting are strictly stronger than single-step attacks and should give strictly superior performance. If single-step methods give performance superior to iterative methods, it is likely that the iterative attack is becoming stuck in its optimization search at a local minimum.

Black-box attacks are better than white-box attacks.

The black-box threat model is a strict subset of the white-box threat model, so attacks in the white-box setting should perform at least as well; however, if a defense is obfuscating gradients, then black-box attacks, which do not use the gradient, will often perform better than white-box attacks (Papernot et al., 2017).

Unbounded attacks do not reach 100% success.

With unbounded distortion, any classifier should eventually have 0% robustness to adversarial examples (as long as the classifier is not a constant function). If unbounded attacks do not reach 100% success, this indicates the defense is defeating the attack in a subtle manner and may not actually be increasing robustness.

Random sampling finds adversarial examples.

If brute-force random search (e.g., randomly sampling a large number of points) within some ε-ball successfully finds adversarial examples when gradient-based methods do not, the defense is likely to be masking gradients; a sketch of such a check is given below.
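
A rough sketch of this brute-force check might look like the following; the sample count, batch size, and ε are illustrative assumptions.

```python
import torch

def random_search_finds_adversarial(model, x, y, eps=8 / 255, num_samples=10000, batch=100):
    """Return True if uniform random sampling inside the l-infinity eps-ball
    around a single image x (shape (C, H, W), integer label y) finds a point
    that the model misclassifies."""
    with torch.no_grad():
        for _ in range(num_samples // batch):
            noise = torch.empty(batch, *x.shape).uniform_(-eps, eps)
            candidates = (x.unsqueeze(0) + noise).clamp(0.0, 1.0)
            preds = model(candidates).argmax(dim=1)
            if (preds != y).any():
                return True
    return False
```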

4 Attack Techniques

Generating adversarial examples through optimization-based methods requires useful gradients obtained through backpropagation (Rumelhart et al., 1986). Many defenses therefore, either intentionally or unintentionally, cause gradient descent to fail because of obfuscated gradients arising from gradient shattering, stochastic gradients, or vanishing/exploding gradients. We discuss a number of techniques that we develop to overcome obfuscated gradients.

4.1 Backward Pass Differentiable Approximation

Shattered gradients, caused either unintentionally, e.g., by numerical instability, or intentionally, e.g., by using non-differentiable operations, result in nonexistent or incorrect gradients. To attack defenses where gradients are not readily available, we introduce a technique we call Backward Pass Differentiable Approximation (BPDA). (The BPDA approach can be used on an arbitrary network, even if it is already differentiable, to obtain a more useful gradient.)

4.1.1 Special Case: Straight-Through Estimator

As a special case, we first discuss what amounts to the straight-through estimator (Bengio et al., 2013) applied to constructing adversarial examples.

Many non-differentiable defenses can be expressed as follows: given a pre-trained classifier f(·), construct a preprocessor g(·) and let the secured classifier be f̂(x) = f(g(x)), where the preprocessor satisfies g(x) ≈ x (e.g., such a g may perform image denoising to remove the adversarial perturbation, as in Guo et al. (2018)). If g is smooth and differentiable, then computing gradients through the combined network f̂ is often sufficient to circumvent the defense (Carlini & Wagner, 2017b). However, recent work has constructed preprocessors which are neither smooth nor differentiable, and therefore cannot be backpropagated through to generate adversarial examples with a white-box attack.

We introduce a new attack that we call Backward Pass Differentiable Approximation. Because g is constructed with the property that g(x) ≈ x, we can approximate its derivative as the derivative of the identity function: ∇_x g(x) ≈ ∇_x x = 1. Therefore, we can approximate the derivative of f(g(x)) at the point x̂ as

∇_x f(g(x)) |_{x = x̂} ≈ ∇_x f(x) |_{x = g(x̂)}.

This allows us to compute gradients and therefore mount a white-box attack. Conceptually, this attack is simple: we perform forward propagation through the neural network as usual, but on the backward pass, we replace g(·) with the identity function. In practice, the implementation can be expressed in an even simpler way: we approximate ∇_x f(g(x)) by evaluating ∇_x f(x) at the point g(x̂). This gives us an approximation of the true gradient which, while not perfect, is sufficiently useful that, when averaged over many iterations of gradient descent, it still generates an adversarial example.
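
A minimal PyTorch sketch of this special case is shown below. The `non_diff_preprocessor` here is a hypothetical stand-in (simple quantization) for whatever non-differentiable g(·) a defense applies, not any specific published defense.

```python
import torch

class IdentityBackward(torch.autograd.Function):
    """Run a non-differentiable preprocessor g on the forward pass,
    but treat it as the identity on the backward pass (straight-through)."""

    @staticmethod
    def forward(ctx, x, preprocessor):
        return preprocessor(x.detach())

    @staticmethod
    def backward(ctx, grad_output):
        # d g(x) / dx is approximated by the identity: pass the gradient through.
        return grad_output, None

def non_diff_preprocessor(x):
    # Hypothetical shattering defense: 3-bit quantization of pixel values.
    return torch.round(x * 7.0) / 7.0

def secured_logits(model, x):
    return model(IdentityBackward.apply(x, non_diff_preprocessor))
```

Gradients of `secured_logits` with respect to the input can then be fed to an iterative attack such as the PGD sketch above.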

4.1.2 Generalized Attack: BPDA

While the above attack is effective for the simple class of networks expressible as f(g(x)) when g(x) ≈ x, it is not fully general. We now generalize the approach.

Let f(·) = f^{1..j}(·) be a neural network, and let f^i(·) be a non-differentiable layer. To approximate ∇_x f(x), we first find a differentiable approximation g(x) such that g(x) ≈ f^i(x). Then, we can approximate ∇_x f(x) by performing the forward pass through f(·) (and in particular, computing a forward pass through f^i(x)), but on the backward pass, replacing f^i(x) with g(x). Note that we perform this replacement only on the backward pass.

As long as the two functions are similar, we find that the slightly inaccurate gradients still prove useful in constructing an adversarial example.

We have found that applying BPDA is often necessary: replacing f^i(·) with g(·) on both the forward and backward passes is either completely ineffective (e.g., with Song et al. (2018)) or many times less effective (e.g., with Buckman et al. (2018)).

4.2 Attacking Randomized Classifiers

Stochastic gradients arise when using randomized transformations to the input before feeding it to the classifier or when using a stochastic classifier. When using optimization-based attacks on defenses that employ these techniques, it is necessary to estimate the gradient of the stochastic function.

Expectation over Transformation.

For defenses that employ randomized transformations to the input, we apply Expectation over Transformation (EOT) (Athalye et al., 2017) to correctly compute the gradient over the expected transformation to the input.

When attacking a classifier f(·) that first randomly transforms its input according to a transformation t sampled from a distribution of transformations T, EOT optimizes the expectation over the transformation, E_{t∼T} f(t(x)). The optimization problem can be solved by gradient descent, noting that ∇ E_{t∼T} f(t(x)) = E_{t∼T} ∇ f(t(x)), differentiating through the classifier and transformation, and approximating the expectation with samples at each gradient descent step.
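
A sketch of this Monte Carlo gradient estimator in PyTorch appears below; the random resizing transform is a placeholder assumption standing in for whatever distribution T a given defense samples from.

```python
import torch
import torch.nn.functional as F

def random_transform(x):
    # Placeholder for a sample t ~ T: here, a random downscale and resize back.
    size = int(torch.randint(24, 33, (1,)).item())
    resized = F.interpolate(x, size=(size, size), mode="bilinear", align_corners=False)
    return F.interpolate(resized, size=x.shape[-2:], mode="bilinear", align_corners=False)

def eot_gradient(model, x, y, num_samples=30):
    """Estimate the gradient of E_{t~T}[loss(f(t(x)), y)] with Monte Carlo samples."""
    x = x.detach().requires_grad_(True)
    total = 0.0
    for _ in range(num_samples):
        total = total + F.cross_entropy(model(random_transform(x)), y)
    (total / num_samples).backward()
    return x.grad.detach()
```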

4.3 Reparameterization

We solve vanishing/exploding gradients by reparameterization. Assume we are given a classifier f(g(x)) where g(·) performs some optimization loop to transform the input x to a new input x̂. Oftentimes, this optimization loop means that differentiating through g(·), while possible, yields exploding or vanishing gradients.

To resolve this, we make a change of variable x = h(z) for some function h(·) such that g(h(z)) = h(z) for all z, but where h is differentiable. For example, if g(·) projects samples to some manifold in a specific manner, we might construct h(z) to return points exclusively on the manifold. This allows us to compute gradients through f(h(z)) and thereby circumvent the defense.
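
The sketch below illustrates the change of variable for a hypothetical generator-based projection, in the spirit of the GAN-based defenses discussed later; the generator, latent dimension, and optimization constants are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def reparameterized_attack(model, generator, x, y, z_dim=128, steps=500, lr=0.05, c=1.0):
    """Attack f(g(x)) where g projects onto a generator's range, by optimizing over
    the latent z directly: x' = generator(z) always lies on the manifold, so the
    projection g acts as the identity and its unstable gradients never appear."""
    z = torch.zeros(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        x_prime = generator(z)
        # Stay close to the original image while pushing toward misclassification.
        loss = F.mse_loss(x_prime, x) - c * F.cross_entropy(model(x_prime), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(z).detach()
```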

5 Case Study: ICLR 2018 Defenses

As a case study for evaluating the prevalence of obfuscated gradients, we study the ICLR 2018 non-certified defenses that argue robustness in a white-box threat model. We find that seven of these eight defenses rely on this phenomenon to argue security, and we demonstrate that our techniques can completely circumvent six of those that rely on obfuscated gradients (and partially circumvent the seventh). We omit two defenses with provable security claims (Raghunathan et al., 2018; Sinha et al., 2018) and one that only argues black-box security (Tramèr et al., 2018). We include one paper, Ma et al. (2018), that was not proposed as a defense per se, but suggests a method to detect adversarial examples.

There is an asymmetry in attacking defenses versus constructing robust defenses: to show a defense can be bypassed, it is only necessary to demonstrate one way to do so; in contrast, a defender must show no attack succeeds.

Defense Dataset Distance Accuracy
Buckman et al. (2018) CIFAR ()
Ma et al. (2018) CIFAR ()
Guo et al. (2018) ImageNet ()
Dhillon et al. (2018) CIFAR ()
Xie et al. (2018) ImageNet ()
Song et al. (2018) CIFAR ()
Samangouei et al. (2018) MNIST ()
Madry et al. (2018) CIFAR ()
Table 1: Summary of Results: Seven of eight defense techniques accepted to ICLR 2018 cause obfuscated gradients and are vulnerable to our attacks. (Some of these defenses also propose combining adversarial training; we report here the defense alone; see §5 for full numbers.)

Table 1 summarizes our results. Of the 8 accepted papers, 7 rely on obfuscated gradients. Two of these defenses argue robustness on ImageNet, a much harder task than CIFAR-10; and one argues robustness on MNIST, a much easier task than CIFAR-10. As such, comparing defenses across datasets is difficult.

5.1 Non-obfuscated Gradients

5.1.1 Adversarial Training

Defense Details.

Originally proposed by Szegedy et al. (2013), adversarial training is a conceptually simple process: train on adversarial examples until the model learns to classify them correctly. Given training data X and loss function ℓ(·), standard training chooses network weights θ as

θ* = arg min_θ E_{(x, y) ∈ X} ℓ(x; y; F_θ).

Adversarial training instead chooses an ε-ball around each example and solves the min-max formulation

θ* = arg min_θ E_{(x, y) ∈ X} [ max_{δ ∈ [−ε, ε]^N} ℓ(x + δ; y; F_θ) ].

To approximately solve this formulation, Madry et al. (2018) solve the inner maximization problem by generating adversarial examples on the training data using projected gradient descent.
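
A compact sketch of this training loop, reusing the `pgd_linf` routine from the example in Section 2.5, is shown below; the inner step count, step size, and optimizer are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, eps=8 / 255):
    """One epoch of PGD adversarial training: approximately solve the inner
    maximization with PGD, then take a gradient step on the adversarial loss."""
    model.train()
    for x, y in loader:
        x_adv = pgd_linf(model, x, y, eps=eps, alpha=2 / 255, steps=7)  # inner max
        loss = F.cross_entropy(model(x_adv), y)                         # outer min
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```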

Discussion.

The evaluation the authors perform for this defense tests for all of the characteristic behaviors of obfuscated gradients that we list. Additionally, we believe this approach does not cause obfuscated gradients: our experiments with iterative optimization-based attacks do succeed with some probability (but do not invalidate the claims made in the paper). However, we note that (1) adversarial retraining has been shown to be difficult at ImageNet scale (Kurakin et al., 2016b), and (2) training exclusively on adversarial examples provides only limited robustness to adversarial examples under other distortion metrics.

5.2 Gradient Shattering

5.2.1 Thermometer Encoding

Defense Details.

In contrast to prior work (Szegedy et al., 2013) which viewed adversarial examples as “blind spots” in neural networks, Goodfellow et al. (2014b) argue that the reason adversarial examples exist is that neural networks behave in a largely linear manner. The purpose of thermometer encoding is to break this linearity.

Given an image x, for each pixel color x_{i,j,c}, the L-level thermometer encoding τ(x_{i,j,c}) is an L-dimensional vector whose l-th entry is

τ(x_{i,j,c})_l = 1 if x_{i,j,c} > l/L, and 0 otherwise (for l = 1, …, L).

For example, for a 10-level thermometer encoding, a pixel value of 0.66 is encoded as 1111110000. Training networks using thermometer encoding is identical to normal training.
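
A small sketch of this encoding, following the definition above, might look like:

```python
import torch

def thermometer_encode(x, levels=10):
    """Thermometer-encode pixel values in [0, 1] into `levels` binary channels.
    x: tensor of shape (..., H, W); returns shape (..., levels, H, W)."""
    thresholds = torch.arange(1, levels + 1, dtype=x.dtype) / levels   # 1/L, ..., L/L
    # Output channel l is 1 where the pixel value exceeds threshold l/L.
    return (x.unsqueeze(-3) > thresholds.view(-1, 1, 1)).to(x.dtype)

# Example: a pixel value of 0.66 with 10 levels -> 1111110000
print(thermometer_encode(torch.tensor([[[0.66]]])).flatten())
```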

Due to the discrete nature of thermometer encoded values, it is not possible to directly perform gradient descent on a thermometer encoded neural network. The authors therefore construct Logit-Space Projected Gradient Ascent (LS-PGA) as an attack over the discrete thermometer encoded inputs. Using this attack, the authors perform the adversarial training of Madry et al. (2018) on thermometer encoded networks.

On CIFAR-10, thermometer encoding on its own was found to give only partial robustness under small ℓ∞ distortion; performing adversarial training with LS-PGA increased the claimed robustness substantially.

Discussion.

While the intention behind this defense is to break the local linearity of neural networks, we find that this defense in fact causes gradient shattering. This can be observed through the authors' black-box attack evaluation: adversarial examples generated on a standard adversarially trained model transfer to a thermometer encoded model and reduce its accuracy well below its claimed robustness to the white-box iterative attack.

Evaluation.

We use the BPDA approach from Section 4.1.2, where we let f^i(x) = τ(x). Observe that if we define

τ̂(x_{i,j,c})_l = min(max(x_{i,j,c} − l/L, 0), 1),

then τ(x_{i,j,c})_l can be recovered from τ̂(x_{i,j,c})_l by thresholding, so we can let g(x) = τ̂(x) and replace the backward pass with the backward pass of g(·).
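
A sketch of how this differentiable relaxation can be wired into the backward pass, reusing `thermometer_encode` from the earlier sketch, is shown below; the exact relaxation is our reconstruction and should be treated as an assumption.

```python
import torch

def soft_thermometer(x, levels=10):
    """Differentiable relaxation of thermometer encoding: channel l is
    clip(x - l/L, 0, 1), which thresholds to the hard encoding."""
    thresholds = torch.arange(1, levels + 1, dtype=x.dtype) / levels
    return torch.clamp(x.unsqueeze(-3) - thresholds.view(-1, 1, 1), 0.0, 1.0)

class ThermometerBPDA(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, levels):
        ctx.save_for_backward(x)
        ctx.levels = levels
        return thermometer_encode(x, levels)          # hard encoding on the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        x, = ctx.saved_tensors
        with torch.enable_grad():
            x = x.detach().requires_grad_(True)
            approx = soft_thermometer(x, ctx.levels)  # differentiable surrogate
            grad_x, = torch.autograd.grad(approx, x, grad_output)
        return grad_x, None
```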

LS-PGA only partially reduces the accuracy of a thermometer-encoded model trained without adversarial training. In contrast, our attack reduces model accuracy dramatically further, even at a lower distortion bound, showing no measurable improvement over standard models trained without thermometer encoding.

When we attack a thermometer-encoded adversarially trained model (that is, a thermometer encoded model trained using the approach of Madry et al. (2018)), we are able to reproduce the claimed accuracy against LS-PGA. However, our attack reduces model accuracy substantially below that claim, significantly weaker than the original Madry et al. (2018) model that does not use thermometer encoding. Because this model is trained against the (comparatively weak) LS-PGA attack, it is unable to adapt to the stronger attack we present above.

Figure 1: Model accuracy versus ℓ∞ distortion. Adversarial training alone increases robustness substantially; thermometer encoding by itself provides limited value, and when coupled with adversarial training performs worse than adversarial training alone.

5.2.2 Input Transformations

Defense Details.

Guo et al. (2018) propose five input transformations to counter adversarial examples.

As baselines, the authors evaluate image cropping and rescaling, bit-depth reduction, and JPEG compression. They then suggest two new transformations:

  • Randomly drop pixels, and restore them by performing total variance minimization.

  • Image quilting: Reconstruct images by replacing all patches with patches from “clean” images, using minimum graph cuts in overlapping boundary regions to remove edge artifacts.

The authors explore different combinations of input transformations along with different underlying ImageNet classifiers, including adversarially trained models. They find that input transformations provide protection even with a vanilla classifier, providing varying degrees of robustness for varying transformations and perturbation budgets.

Discussion.

The authors find that a ResNet-50 classifier provides a varying degree of accuracy for each of the five proposed input transformations under the strongest attack they consider, with the strongest defenses retaining substantial top-1 accuracy. We observe similar results when evaluating an InceptionV3 classifier.

The authors do not succeed in white-box attacks, crediting lack of access to test-time randomness as “particularly crucial in developing strong defenses” (Guo et al., 2018). (This defense may be stronger in a threat model where the adversary does not have complete information about the exact quilting process used; personal communication with the authors.)

Evaluation.

It is possible to bypass each defense independently. We circumvent image cropping and rescaling with a direct application of Expectation Over Transformation (Athalye et al., 2017). To circumvent bit-depth reduction, JPEG compression, total variance minimization, and image quilting, we use BPDA to approximate the backward pass with the identity function. With our attack, model accuracy drops to 0% for the strongest defense under the smallest root-mean-square perturbation budget considered in Guo et al. (2018).

5.2.3 Local Intrinsic Dimensionality (LID)

LID is a general-purpose metric that measures the distance from an input to its neighbors. Ma et al. (2018) propose using LID to characterize properties of adversarial examples. The authors emphasize that this classifier is not intended as a defense against adversarial examples (personal communication with the authors); however, the authors argue that it is a robust method for detecting adversarial examples that is not easy to evade.

Analysis Overview.

Instead of actively attempting to attack the detection method, we find that LID is not able to detect high confidence adversarial examples (Carlini & Wagner, 2017a) generated oblivious to the details of the defense. A full discussion of this defense is given in Supplement Section A.

5.3 Stochastic Gradients

5.3.1 Stochastic Activation Pruning (SAP)

Defense Details.

SAP (Dhillon et al., 2018) introduces randomness into the evaluation of a neural network to defend against adversarial examples. SAP randomly drops some neurons of each layer to 0, with probability proportional to their absolute value. That is, SAP essentially applies dropout at each layer where, instead of dropping with uniform probability, nodes are dropped with a weighted distribution. Values which are retained are scaled up (as is done in dropout) to retain accuracy. Applying SAP decreases clean classification accuracy slightly, with a higher drop probability decreasing accuracy but increasing robustness. We study various levels of drop probability and find they lead to similar robustness numbers.

Discussion.

The authors only evaluate SAP by taking a single step in the gradient direction (Dhillon et al., 2018). While taking a single step in the direction of the gradient is effective on non-randomized neural networks, when randomization is used, computing the gradient with respect to one sample of the randomness is ineffective.

Evaluation.

To resolve this difficulty, we estimate the gradients by computing the expectation over instantiations of randomness. At each iteration of gradient descent, instead of taking a step in the direction of ∇_x f(x), we move in the direction of Σ_{i=1}^{k} ∇_x f(x), where each invocation of f is randomized with SAP. We have found that choosing k on the order of 10 provides useful gradients. We additionally had to resolve a numerical instability when computing gradients: with this defense, computing a backward pass caused exploding gradients due to division by numbers very close to 0. We resolve this by clipping gradients or through numerically stable techniques.
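
A sketch of this estimator, which is essentially EOT applied to the network's internal randomness with gradient clipping added for numerical stability, is below; k and the clipping threshold are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def sap_gradient(stochastic_model, x, y, k=10, clip=1e3):
    """Average gradients over k independent draws of the network's randomness,
    clipping to guard against numerical blow-ups from divisions by values near 0."""
    x = x.detach().requires_grad_(True)
    total = 0.0
    for _ in range(k):
        total = total + F.cross_entropy(stochastic_model(x), y)  # fresh randomness each call
    total.backward()
    return torch.clamp(x.grad, -clip, clip) / k
```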

With these approaches, we are able to reduce SAP model accuracy to near zero at the distortion bounds considered. Even if we consider an attack successful only when an example is classified incorrectly in every one of several randomized evaluations (and consider it correctly classified if it is ever classified as the correct label), model accuracy remains very low.

5.3.2 Mitigating through Randomization

Defense Details.

Xie et al. (2018) propose to defend against adversarial examples by adding a randomization layer before the input to the classifier. For a classifier that takes a 299×299 input, the defense first randomly rescales the image to an r×r image with r ∈ [299, 331), and then randomly zero-pads the image so that the result is 331×331. The output is then fed to the classifier.
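
A sketch of such a randomization layer, following the description above (our reconstruction, not the authors' code), is:

```python
import torch
import torch.nn.functional as F

def randomization_layer(x, out_size=331, min_size=299):
    """Randomly rescale a (N, C, 299, 299) batch to r x r with r in [min_size, out_size),
    then zero-pad at a random offset up to out_size x out_size."""
    r = int(torch.randint(min_size, out_size, (1,)).item())
    x = F.interpolate(x, size=(r, r), mode="nearest")
    pad = out_size - r
    left = int(torch.randint(0, pad + 1, (1,)).item())
    top = int(torch.randint(0, pad + 1, (1,)).item())
    return F.pad(x, (left, pad - left, top, pad - top))  # (left, right, top, bottom)
```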

Discussion.

The authors consider three attack scenarios: vanilla attack (an attack on the original classifier), single-pattern attack (an attack assuming some fixed randomization pattern), and ensemble-pattern attack (an attack over a small ensemble of fixed randomization patterns). The authors' strongest attack substantially reduces InceptionV3 top-1 accuracy (measured over images that were originally classified correctly).

The authors dismiss a stronger attack over larger choices of randomness, stating that it would be “computationally impossible” (emphasis ours) and that such an attack “may not even converge” (Xie et al., 2018).

Evaluation.

We find the authors' ensemble attack overfits to the ensemble of fixed randomization patterns. We bypass this defense by applying EOT (Athalye et al., 2017), optimizing over the (in this case, discrete) distribution of transformations and minimizing the expected loss E_{t∼T} ℓ(f(t(x)); y).

We approximate the gradient of this expectation by sampling transformations and differentiating through them. Using this attack, even if we consider the attack successful only when an example is classified incorrectly in every one of several randomized evaluations, we can reduce the accuracy of the classifier to 0% under a small maximum ℓ∞ perturbation.

5.4 Vanishing & Exploding Gradients

5.4.1 PixelDefend

Defense Details.

Song et al. (2018) propose using a PixelCNN generative model to project a potential adversarial example back onto the data manifold before feeding it into a classifier. The authors argue that adversarial examples mainly lie in the low-probability region of the training distribution. PixelDefend “purifies” adversarially perturbed images by projecting them back onto the data manifold through the use of a PixelCNN generative model, and then it feeds the resulting image through an unmodified classifier. PixelDefend uses a greedy decoding procedure to approximate finding the highest probability example within an -ball of the input image.

Discussion.

The authors evaluate PixelDefend on CIFAR-10 over various classifiers, perturbation budgets, and attacks. Under a small maximum ℓ∞ perturbation on CIFAR-10, PixelDefend claims substantial robust accuracy (with a vanilla ResNet classifier). The authors dismiss the possibility of end-to-end attacks on PixelDefend due to the difficulty of differentiating through an unrolled version of PixelDefend, citing vanishing gradients and computation cost.

Evaluation.

We sidestep the problem of computing gradients through an unrolled version of PixelDefend by approximating gradients with BPDA, approximating the backward pass with the identity function, and we successfully mount an end-to-end attack using this technique. (In place of a PixelCNN, due to the availability of a pre-trained model, we use a PixelCNN++ (Salimans et al., 2017) and discretize the mixture of logistics to produce a 256-way softmax over possible pixel values.) With this attack, we can reduce the accuracy of a naturally trained classifier to well below its claimed robust accuracy under the same small maximum ℓ∞ perturbation. Combining Madry et al. (2018) with PixelDefend provides no additional robustness over just using the adversarially trained classifier.

5.4.2 Defense-GAN

Defense-GAN (Samangouei et al., 2018) uses a Generative Adversarial Network (Goodfellow et al., 2014a) to project samples onto the manifold of the generator before classifying them. That is, the intuition behind this defense is nearly identical to PixelDefend, using a GAN instead of a PixelCNN. We therefore summarize results here and present the full details in Supplement Section B.

Analysis Overview.

Defense-GAN was not shown to be effective on CIFAR-10, so we evaluate it on MNIST. We find that adversarial examples exist on the manifold defined by the generator. However, while this attack would defeat a perfect projector mapping an input x to its nearest point on the manifold, the actual imperfect projection approach taken by Defense-GAN does not identify these points. We therefore construct a second attack using BPDA to evade Defense-GAN, although with only partial success under reasonable perturbation bounds.

6 Discussion

Having demonstrated attacks on these seven defenses, we now take a step back and discuss the method of evaluating a defense against adversarial examples.

The papers we study use a variety of approaches in evaluating robustness of the proposed defenses. Synthesizing these, we list what we believe to be the most important points to keep in mind while building and evaluating defenses. Much of what we describe below has been given in prior work (Carlini & Wagner, 2017a; Madry et al., 2018); we repeat these points here and offer our own perspective for completeness.

6.1 Defining a (realistic) threat model

A threat model specifies the conditions under which the defense is aiming to be secure: a precise threat model allows for a precise understanding of the setting under which the defense is meant to work. Prior work has used words including white-box, grey-box, black-box, no-box to describe slightly different threat models, often overloading the same word.

Instead of attempting to, yet again, redefine the vocabulary, we enumerate the various aspects of a defense that might be revealed to the adversary or held secret by the defender:

  • Model architecture and Model weights.

  • Training algorithm and Training data.

  • For defenses that involve randomness, whether the adversary knows the exact sequence of random values that will be chosen, or only the distribution.

  • If the adversary is assumed not to know the model architecture and weights, whether query access is allowed; and if so, whether the model output is the logits, the probability vector, or only the predicted label (i.e., the arg max).

While there are some aspects of a defense that might be held secret, threat models should not contain unrealistic constraints. We believe any compelling threat model should at the very least grant knowledge of the model architecture, training algorithm, and allow query access.

We do not believe it is meaningful to restrict the computational power of an adversary. If two defenses are equally robust but generating adversarial examples on one takes one second and another takes ten seconds, the robustness has not increased.

6.2 Making specific, testable claims

Specific, testable claims in a clear threat model precisely convey the claimed robustness of a defense. For example, a defense might claim a given accuracy on adversarial examples of ℓ∞ distortion at most some fixed ε, or might claim that the mean distortion required to generate adversarial examples increases by a factor of two from a baseline model to a secured model (in which case, the baseline should also be clearly defined).

A defense can never achieve complete robustness against unbounded attacks: with unlimited distortion any image can be converted into any other, yielding “success”.

A defense being specified completely, with all hyperparameters given, is a prerequisite for claims to be testable. Releasing source code and a pre-trained model along with the paper describing a specific threat model and robustness claims is perhaps the most useful method of making testable claims. Four of the defenses we study made complete source code available (Madry et al., 2018; Ma et al., 2018; Guo et al., 2018; Xie et al., 2018).

6.3 Evaluating against adaptive attacks

A secure defense is robust not only against existing attacks but also against all possible attacks within the specified threat model. While certified defenses succeed in reasoning about all possible attacks, doing so for other defenses is often challenging. In such cases, actively evaluating a defense with new defense-aware attacks crafted specifically to circumvent it helps justify claims of security.

An adaptive attack is one that is constructed after a defense has been completely specified, where the adversary takes advantage of knowledge of the defense and is only restricted by the threat model. If a defense can be circumvented by an adaptive attack, giving a way to prevent that specific attack (e.g. by tweaking a hyperparameter) does not imply robustness. If a defense is modified after an evaluation, an adaptive attack is one that considers knowledge of the new defense. In this way, concluding an evaluation with a final adaptive attack can be seen as analogous to evaluating a model on the test data.

7 Conclusion

Constructing defenses to adversarial examples requires defending against not only existing attacks but also future attacks that may be developed. In this paper, we identify obfuscated gradients, a phenomenon exhibited by certain defenses that makes standard gradient-based methods fail to generate adversarial examples. We develop three attack techniques to bypass three different types of obfuscated gradients. To evaluate the applicability of our techniques, we use the ICLR 2018 defenses as a case study, circumventing seven of eight accepted defenses.

More generally, we hope that future work will be able to avoid relying on obfuscated gradients for perceived robustness and use our evaluation approach to detect when this occurs. Defending against adversarial examples is an important area of research, and we believe performing a thorough evaluation is a critical step that cannot be overlooked.

Acknowledgements

We are grateful to Aleksander Madry, Andrew Ilyas, and Aditi Raghunathan for helpful comments on an early draft of this paper. We thank Bo Li, Xingjun Ma, Laurens van der Maaten, Aurko Roy, Yang Song, and Cihang Xie for useful discussion and insights on their defenses.

This work was partially supported by the National Science Foundation through award CNS-1514457, Qualcomm, and the Hewlett Foundation through the Center for Long-Term Cybersecurity.

References

Appendix A Local Intrinsic Dimensionality

Defense Details.

The Local Intrinsic Dimensionality (Amsaleg et al., 2015) “assesses the space-filling capability of the region surrounding a reference example, based on the distance distribution of the example to its neighbors” (Ma et al., 2018). The authors present evidence that the LID is significantly larger for adversarial examples generated by existing attacks than for normal images, and they construct a classifier that can distinguish these adversarial images from normal images. Again, the authors indicate that LID is not intended as a defense and should only be used to explore properties of adversarial examples. However, it would be natural to wonder whether it would be effective as a defense, so we study its robustness; our results confirm that it is not adequate as a defense. The method used to compute the LID relies on finding the k nearest neighbors, a non-differentiable operation, rendering gradient-descent-based methods ineffective.

Let B be a mini-batch of clean examples. Let r_i(x) denote the distance (under a given metric) between sample x and its i-th nearest neighbor in B (under that metric). Then LID can be approximated by

LID(x) ≈ − ( (1/k) Σ_{i=1}^{k} log( r_i(x) / r_k(x) ) )^{-1},

where k is a defense hyperparameter that controls the number of nearest neighbors to consider. The authors measure distance between the activations of each layer of the network and compute a vector of LID values for each sample, one entry per activation layer. Finally, they compute these vectors over the training data and over adversarial examples generated on the training data, and train a logistic regression classifier to detect adversarial examples. We are grateful to the authors for releasing their complete source code.
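
A sketch of this estimator for a single activation layer follows; the batch-based nearest-neighbor computation mirrors the description above, with k as an assumed hyperparameter.

```python
import torch

def lid_estimate(activations, k=20):
    """Maximum-likelihood LID estimate for each sample in a mini-batch.
    activations: (N, D) tensor of per-sample features at one layer."""
    dists = torch.cdist(activations, activations)      # pairwise l2 distances
    # Exclude the zero self-distance by taking neighbors 1..k.
    knn, _ = dists.topk(k + 1, dim=1, largest=False)
    r = knn[:, 1:]                                      # (N, k) nearest-neighbor radii
    r_k = r[:, -1:].clamp_min(1e-12)
    return -1.0 / torch.log(r.clamp_min(1e-12) / r_k).mean(dim=1)
```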

Discussion.

While LID is not a defense itself, the authors assess the ability of LID to detect different types of attacks.

Through solving the modified formulation

min_{x′} ||x′ − x||₂² + c · ( ℓ(x′) + LID_loss(x′) ),

the authors attempt to determine if the LID metric is a good metric for detecting adversarial examples. Here, LID_loss(·) is a function that can be minimized to reduce the LID score. However, the authors report that this modified attack still does not reach 100% success. Because Carlini and Wagner's attack is unbounded, any time the attack does not reach 100% success indicates that the attack became stuck in a local minimum. When this happens, it is often possible to slightly modify the loss function and restore attack success (Carlini & Wagner, 2017b).

In this case, we observe that the reason this type of adaptive attack fails is that gradient descent does not succeed in optimizing the LID loss, even though the LID computation is differentiable. Computing the LID term involves finding the k nearest neighbors of the current input. Following the gradient of the distance to the current k nearest neighbors is not representative of the true direction to travel in for the optimal set of k nearest neighbors. As a consequence, we find that adversarial examples generated with gradient methods while penalizing for a high LID either (a) are not adversarial, or (b) are still detected as adversarial, despite penalizing the LID loss.

Evaluation.

We now evaluate what would happen if a defense directly applied LID to detect adversarial examples. Instead of performing gradient descent over a term that is difficult to differentiate through, we have found that generating high confidence adversarial examples (Carlini & Wagner, 2017a), completely oblivious to the detector, is sufficient to fool it. We obtain from the authors their detector trained on the Carlini and Wagner attack, and we train our own on the Fast Gradient Sign attack, both of which were found to be effective at detecting adversarial examples generated by other methods. By generating high-confidence adversarial examples while minimizing distortion, we are able to reduce model accuracy to near zero within a small distortion bound, while LID reports these adversarial examples to be benign at nearly the same rate at which it flags unmodified test data as benign.

This evaluation demonstrates that the LID metric can be circumvented, and future work should carefully evaluate whether a detector relying on LID is robust to adversarial examples explicitly targeting such a detector. This work also raises the question of whether a large LID is a fundamental characteristic of all adversarial examples, or whether it is a by-product of certain attacks.

Appendix B Defense-GAN

Defense Details.

The defender first trains a Generative Adversarial Network with a generator G(·) that maps samples from a latent space (typically z ∼ N(0, I)) to images that look like training data. Defense-GAN takes a trained classifier f(·), and to classify an input x, instead of returning f(x), returns f(G(z*)) where z* = arg min_z ||G(z) − x||₂². To perform this projection to the manifold, the authors take many steps of gradient descent on z starting from different random initializations.
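
A sketch of this projection step might look like the following; the restart count, step count, and learning rate are illustrative assumptions.

```python
import torch

def project_to_generator(generator, x, z_dim=128, restarts=10, steps=200, lr=0.05):
    """Approximate arg min_z ||G(z) - x||^2 with gradient descent from several
    random initializations, and return the best reconstruction G(z*)."""
    best_rec, best_err = None, float("inf")
    for _ in range(restarts):
        z = torch.randn(1, z_dim, requires_grad=True)
        opt = torch.optim.SGD([z], lr=lr)
        for _ in range(steps):
            err = ((generator(z) - x) ** 2).sum()
            opt.zero_grad()
            err.backward()
            opt.step()
        with torch.no_grad():
            err = ((generator(z) - x) ** 2).sum().item()
            if err < best_err:
                best_err, best_rec = err, generator(z).detach()
    return best_rec
```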

Defense-GAN was not shown to be effective on CIFAR-10. We therefore evaluate it on MNIST (where it was argued to be secure).

Discussion.

In Samangouei et al. (2018), the authors construct a white-box attack by unrolling the gradient descent used during classification. Despite an unbounded perturbation size, Carlini and Wagner's attack reaches only a partial misclassification rate on the most vulnerable model and an even lower rate on the strongest. This leads us to believe that unrolling gradient descent breaks gradients.

Evaluation.

We find that adversarial examples do exist on the data manifold defined by the generator G(·). However, Defense-GAN does not perform a perfect projection onto the range of the generator, and therefore often does not recover these adversarial examples that lie exactly on the manifold.

We therefore present two evaluations. In the first, we assume that Defense-GAN were able to perfectly project onto the data manifold, and we give a construction for generating adversarial examples. In the second, we take the actual implementation of Defense-GAN as it is, and perform BPDA to generate adversarial examples with partial success under reasonable bounds.


Evaluation A. Performing the manifold projection is nontrivial as an inner optimization step when generating adversarial examples. To sidestep this difficulty, we show that adversarial examples exist directly in the range of the generator. That is, we construct an adversarial example x′ = G(z′) so that D(x, x′) is small and c(x′) ≠ c*(x).

To do this, we solve the re-parameterized formulation over the latent space directly, minimizing a weighted combination of the reconstruction distance ||G(z′) − x||₂ and the classification loss of f(G(z′)), as in Section 4.3. We initialize z′ to arg min_z ||G(z) − x||₂ (also found via gradient descent). We train a WGAN using the code the authors provide (Gulrajani et al., 2017), and an MNIST CNN trained to standard accuracy.

Figure 2: Images on the MNIST test set. Row 1: Clean images. Row 2: Adversarial examples on an unsecured classifier. Row 3: Adversarial examples on Defense-GAN.

We run 50k iterations of gradient descent to generate each adversarial example; this takes under one minute per instance. The unsecured classifier requires only a small mean distortion to fool; when we mount our attack on Defense-GAN, the required mean distortion increases only modestly (see Figure 2 for examples of the resulting adversarial examples). The reason our attack succeeds without suffering from vanishing or exploding gradients is that our gradient computation only needs to differentiate through the generator G(·) once.

Concurrent to our work, Ilyas et al. (2017) also develop a nearly identical approach to Defense-GAN; they also find it is vulnerable to the attack we outline above, but increase the robustness further with adversarial training. We do not evaluate this extended approach.


Evaluation B. The above attack does not directly succeed against Defense-GAN as implemented: while the adversarial examples lie directly in the range of the generator, the imperfect projection process will actually move them off of it.

To mount an attack on the approximate projection process, we use the BPDA attack regularized for ℓ2 distortion. Our attack approach is identical to the one used against PixelDefend, except that the manifold projection with a PixelCNN is replaced by the manifold projection by gradient descent on the GAN. Under these settings, we succeed at substantially reducing model accuracy within a reasonable maximum normalized distortion for successful attacks.