Adversarial examples are images with tiny, imperceptible perturbations that fool a classifier into predicting the wrong labels with high confidence. Write x for an input to some classifier, a natural example with label l. A variety of constructions [9, 14, 20, 25] can generate an adversarial example x + e that the classifier labels l′ ≠ l. This is interesting, because e is so small that we would expect x + e to be labelled l.
Adversarial examples are a persistent problem for classification neural networks, and for many other classification schemes. Adversarial examples are easy to construct [30, 22, 3], and there are even universal adversarial perturbations. Adversarial examples matter for practical reasons, because one can construct physical adversarial examples, suggesting that neural networks in their current state are unusable for some image classification applications (imagine a small physical modification that could reliably get a stop sign classified as a go-faster sign [25, 16]). They matter for conceptual reasons too, because an explanation of why adversarial examples are easy to construct could cast some light on the inner life of neural networks. The absence of theory makes it hard to defend against adversarial examples (for example, distillation was proposed as a defense, but was later shown not to work).
Adversarial example constructions (e.g., line search along the gradient; LBFGS on an appropriate cost; DeepFool) all rely on the gradient of the network, but it is known that the gradient of another, similar network is sufficient, so concealing the gradient does not work as a defense for current networks. An important puzzle is that networks that generalize very well remain susceptible to adversarial examples. Another important puzzle is that examples that are adversarial for one network tend to be adversarial for another as well [30, 15, 27]. Some network architectures appear to be robust to adversarial examples, though these claims need more empirical verification. At least some adversarial attacks appear to apply to many distinct networks.
We denote the probability distribution of examples by P. At least in the case of vision, P has support on some complicated subset of the input space, which is known as the “manifold” of “real images”. Nguyen et al. show how to construct examples that appear to be noise, but are confidently classified as objects. This construction yields examples that lie outside the support of P, so the classifier’s labeling is unreliable because it has not seen such examples. However, most adversarial examples “look like” images to humans (such as figure 5 of the work cited), so they are likely to lie within the support of P.
One way to build a network that is robust to adversarial examples is to train it with enhanced training data (adding adversarial samples); this approach faces difficulties, because the high dimension of images and of network features means an unreasonable quantity of training data is required. Alternatively, we can build a network that detects and rejects adversarial samples. Metzen et al. show that, by attaching a detection subnetwork that observes the state of the original classification network, one can tell whether it has been presented with an adversarial example. However, because the gradients of their detection subnetwork are quite well behaved, the joint system is easily attacked (a Type II attack) in both their experiments and ours. Both their experiments and ours also show that their detection subnetwork is easily fooled by adversarial samples produced by attack methods not used to train the detector.
Our method focuses on codes produced by quantizing individual ReLUs in particular layers of the classification network (“patterns of activation”), and proceeds from the hypothesis:
Adversarial attacks work by producing different patterns of activation in late stage ReLUs to those produced by natural examples.
These patterns lie outside the family for which the softmax layer would be reliable. This hypothesis suggests that: (a) the presence of an adversarial example can be detected (as in Metzen et al.); (b) such detectors can be made very difficult to defeat (unlike Metzen et al.; section 5); (c) such detectors should generalize well across different adversarial attacks (unlike Metzen et al.); (d) transfer attacks work because an example that generates unfamiliar patterns in one network tends to generate unfamiliar patterns in other networks too; (e) transfer attacks can be defended against as well (section 5).
Contributions: Section 2 describes our SafetyNet architecture, which consists of the original classifier network and a detector that rejects adversarial examples. A type I attack on SafetyNet consists of a standard adversarial example crafted to be (a) similar to a natural image; (b) misclassified by the original network. A type II attack consists of an example that is crafted to be (a) similar to a natural image; (b) misclassified; and (c) not rejected by SafetyNet. We show that SafetyNet is robust to both types of attack and generalizes well. Concealing the gradients is highly effective for SafetyNet: it produces a black box that is strongly resistant to the best attacks we have been able to construct. This is in sharp contrast to all other known methods [25, 17].
In section 5, we demonstrate SceneProof, a robust and reasonably effective proof that an image is an image of a real scene (a “real” image; contrast a “fake” image, which is not an image of a real scene). We identify images of real scenes by checking a match between the image and a depth map, which is hard to manipulate. We show that SceneProof is (a) accurate and (b) strongly resistant to attacks that try to get manipulated scenes identified as authentic scenes.
In section 6, we propose a model that explains why our approach works and demonstrates that SafetyNet is difficult to attack in principle.
2 SafetyNet: Spotting Adversarial Examples
SafetyNet consists of the original classifier, and an adversary detector which looks at the internal state of the later layers in the original classifier, as in Figure 1. If the adversary detector declares that an example is adversarial, then the sample is rejected.
2.1 Detecting Adversarial Examples
The adversary detector needs to be hard to attack. We force an attacker to solve a hard discrete optimization problem. For a layer of ReLUs at a high level in the classification network, we quantize each ReLU at some set of thresholds to generate a discrete code (a binarized code in the case of one threshold). Our hypothesis suggests that different code patterns appear for natural examples and adversarial examples. We use an adversary detector that compares a code produced at test time with a collection of examples, meaning that an attacker must make the network produce a code that is acceptable to the detector (which is hard; section 5). The adversary detector in SafetyNet uses an RBF-SVM on binary or quaternary codes (activation patterns) to find adversarial examples.
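As a concrete sketch (in NumPy, with illustrative threshold values rather than the ones used in our experiments), quantizing a layer's ReLU outputs into binary or quaternary codes might look like:

```python
import numpy as np

def quantize_activations(acts, thresholds=(0.0,)):
    """Quantize each ReLU activation against a set of thresholds.

    One threshold gives a binary code; three thresholds give a
    quaternary code. Each unit's code is simply the number of
    thresholds its activation exceeds. Threshold values here are
    illustrative.
    """
    acts = np.asarray(acts, dtype=float)
    code = np.zeros(acts.shape, dtype=int)
    for t in thresholds:
        code += (acts > t).astype(int)
    return code

# Binary code: 1 where the ReLU fired, 0 where it did not.
binary = quantize_activations([0.0, 0.7, 2.4, 0.1], thresholds=(0.0,))
# Quaternary code with three (hypothetical) thresholds.
quaternary = quantize_activations([0.0, 0.7, 2.4, 0.1],
                                  thresholds=(0.0, 0.5, 1.5))
```

The code is discrete, so small changes to an activation either leave the code unchanged or flip a whole symbol; there is no useful gradient through this map.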
We denote a code by c. The RBF-SVM classifies c by

\[ f(c) = \mathrm{sign}\Big( \sum_i \alpha_i y_i \exp\big(-\|c - c_i\|^2 / 2\sigma^2\big) + b \Big), \]

where the c_i are support-vector codes drawn from the training set.
In this objective function, when the bandwidth σ is small, the detector produces essentially no gradient unless the attacking code is very close to a positive example c_i. Our quantization process makes the detector more robust and the gradients even harder to get. Experiments show that this form of gradient obfuscation is quite robust: confusing the detector is very difficult without access to the RBF-SVM, and still difficult even when access is possible. Experiments in section 5 and theory in section 6 confirm that the optimization problem is hard.
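A minimal sketch of the detector's decision function, with made-up support vectors and coefficients (in practice these come from training the RBF-SVM on codes of natural and adversarial examples), shows why a small σ starves an attacker of gradient:

```python
import numpy as np

def rbf_svm_decision(code, support_codes, alphas, labels, b, sigma):
    """RBF-SVM decision value on a quantized code:
    f(c) = sum_i alpha_i * y_i * exp(-||c - c_i||^2 / (2 sigma^2)) + b.
    All parameter values used below are illustrative.
    """
    diffs = support_codes - code
    k = np.exp(-np.sum(diffs ** 2, axis=1) / (2.0 * sigma ** 2))
    return float(np.dot(alphas * labels, k) + b)

# Two support codes: one natural (+1), one adversarial (-1).
support = np.array([[1, 1, 0, 1], [0, 0, 1, 1]], dtype=float)
alphas = np.array([1.0, 1.0])
labels = np.array([1.0, -1.0])

# With small sigma, a code far from every support vector yields a
# decision value (and gradient) that is essentially zero.
far_code = np.array([1, 0, 1, 0], dtype=float)
score = rbf_svm_decision(far_code, support, alphas, labels, b=0.0, sigma=0.1)
```

Because the kernel decays as exp(-d²/2σ²), an attacker searching from a distant code sees a numerically flat objective.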
2.2 Attacking Methods
We use the following standard and strong attacks, with various choices of hyper-parameters, to test the robustness of the systems. Each attack searches for a nearby example which changes the class and does not create visual artifacts. We use these methods to produce both type I attacks (fool the classifier) and type II attacks (fool the classifier and sneak past the detector).
Fast Sign method: Goodfellow et al. described this simple method. The applied perturbation is the direction in image space which yields the highest increase of the linearized cost under the L∞ norm. A hyper-parameter ε governs the distance between the adversarial and the original image.
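The method can be sketched on a toy linear logistic classifier, where the input gradient is available in closed form (the model, weights and ε below are illustrative, not those of a real network):

```python
import numpy as np

def fast_sign_attack(x, w, b, y, eps):
    """Fast sign method on a toy linear logistic classifier.

    Perturbs x by eps * sign(grad_x loss); eps bounds the L-infinity
    distance to the original image. The linear model stands in for a
    network whose input gradient we can compute.
    """
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # P(class 1 | x)
    grad = (p - y) * w                              # d(cross-entropy)/dx
    return x + eps * np.sign(grad)

w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, 0.4])
x_adv = fast_sign_attack(x, w, b=0.0, y=1.0, eps=0.1)
```

Every pixel moves by exactly ε in the direction that most increases the loss, so the perturbation is imperceptible when ε is small yet moves the score of the true class down.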
Iterative methods: Kurakin et al. introduced an iterative version of the fast sign method, applying it several times with a smaller step size and clipping all pixels after each iteration to ensure that results stay in the neighborhood of the original image. We apply two versions of this method, one where the neighborhood is defined in the L∞ norm and another where it is defined in the L2 norm.
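A sketch of the L∞ variant on the same kind of toy linear model (step size and iteration count are illustrative):

```python
import numpy as np

def iterative_fast_sign(x, w, b, y, eps, step, n_iter):
    """Iterative fast-sign: several small steps, clipping after each
    iteration so the result stays inside an L-infinity ball of radius
    eps around the original image (an L2 projection would give the
    other variant)."""
    x_adv = x.copy()
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(np.dot(w, x_adv) + b)))
        x_adv = x_adv + step * np.sign((p - y) * w)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay near the original
    return x_adv

w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, 0.4])
x_adv = iterative_fast_sign(x, w, 0.0, 1.0, eps=0.1, step=0.04, n_iter=5)
```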
DeepFool method: Moosavi-Dezfooli et al. introduced the DeepFool adversary, which is able to choose which class an example is switched to. DeepFool iteratively perturbs an image x_i, linearizes the classifier around x_i and finds the closest class boundary. The minimal step under the L2 distance needed to traverse this class boundary is determined, and the resulting point is used as x_{i+1}. The algorithm stops once x_i changes the class under the actual classifier. We use a powerful version of DeepFool.
Transfer method: Papernot et al. described a way to attack a black-box network: generate adversarial samples using another, accessible network trained for the same task, then use these samples to attack the black-box network. This strategy has proved notably reliable.
2.3 Type I Attacks Are Detected
Accuracy: Our SafetyNet can detect adversarial samples with high accuracy on CIFAR-10 and ImageNet-1000. For classification networks, we used a 32-layer ResNet for CIFAR-10 and a VGG19 network for ImageNet-1000. Figure 2 shows the detection accuracy of our binarized RBF-SVM detector on the x5 layer of the ResNet trained for CIFAR-10 and on the fc7 layer of the VGG19 trained for ImageNet-1000. Adversarial samples are generated by the Iterative-L2, Iterative-Linf, DeepFool-L2 and FastSign methods. Figure 2 also compares our RBF-SVM detection results with the detector subnetwork results of Metzen et al. The ROC for our detector for CIFAR-10 and ImageNet-1000 appears in Figure 3.
Our results show: when our detector is tested on the same adversary it is trained on, its performance is similar to the detector subnetwork, even though our detector works on quantized activation patterns while the detector subnetwork works on the original continuous activations. DeepFool is a strong attack. Increasing the number of categories in the problem makes it easier for DeepFool to produce an undetected adversarial example, likely because it becomes easier to exploit local classification errors without producing strange ReLU activations. If DeepFool is required to produce a label outside the top-5 for the original example, the attack is much weaker.
Generalization across attacks: Generally, a detector cannot know at training time what attacks will occur at test time. We test generalization across attacks by training a detector on one class of attack, then testing with other classes of attack. Figure 2 shows that our RBF-SVM generalizes across attacks more reliably than a detector subnetwork. We believe this is because the representation presented to the RBF-SVM has been aggressively summarized (by quantization), so that the classifier is not distracted by subtle but irrelevant features. Note that this kind of generalization is not guaranteed simply by using a neural network; for example, Table 7 shows that networks trained on normal-quality JPEG images are confounded by low-quality JPEG test images.
| Method | Non Attack: FT | Non Attack: TF | Non Attack: TT reject | Type I: FT | Type I: TF | Type I: TT reject | Type II: FT | Type II: TF | Type II: TT reject |
|---|---|---|---|---|---|---|---|---|---|
| Non Attack Data | 9.7% | 0% | 9.4% | N/A | N/A | N/A | N/A | N/A | N/A |
| Unfamiliar Data Average | 17.3% | 0% | 0% | N/A | N/A | N/A | N/A | N/A | N/A |
| Gradient Descent Attack | N/A | N/A | N/A | 9.9% | 5.0% | 6.1% | 16.3% | 3.7% | 6.2% |
| Transfer Attack Average | N/A | N/A | N/A | 4.6% | 9.4% | 33.6% | 7.9% | 9.8% | 26.6% |
3 Rejecting by Classification Confidence
Our experiments demonstrate a trade-off between classification confidence and ease of detection for adversarial examples. Adversarial examples with high confidence in the wrong label tend to have more abnormal activation patterns, and so are easier for detectors to spot, while adversarial examples with low classification confidence in the wrong label are harder to detect. For example, attacks like DeepFool add just enough perturbation to change the classification label, so these adversarial examples are sometimes hard to detect; however, they cannot assign high classification confidence to the wrong label. If they perform more iterations and increase confidence in the wrong class, our detector can detect them much more easily.
Experiments also show that Type II attacks on our quantized SVM detector together with the classifier produce adversarial examples with low confidence. These experiments mean that we can use classification confidence as a detection criterion, which increases the detector’s detection ability and decreases its susceptibility to Type II attacks.
The classification confidence in our experiments is measured by the ratio of the example’s second-highest classification confidence to its highest classification confidence. For example, if an image has a 60% probability of being a dog and a 15% probability of being a cat, our classification confidence ratio is 0.25. We reject examples with a classification confidence ratio greater than a threshold, which indicates the classifier is unsure about the classification.
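This ratio is straightforward to compute from the softmax output; a sketch (the rejection threshold below is illustrative, not the one used in our experiments):

```python
import numpy as np

def confidence_ratio(probs):
    """Ratio of second-highest to highest class probability.
    Values near 1 mean the classifier is unsure; near 0, confident."""
    top2 = np.sort(np.asarray(probs))[-2:]
    return top2[0] / top2[1]

# The dog/cat example from the text: 0.15 / 0.60 = 0.25.
r = confidence_ratio([0.60, 0.15, 0.10, 0.15])

def reject(probs, threshold=0.5):
    # Reject when the ratio exceeds a threshold (value illustrative).
    return confidence_ratio(probs) > threshold
```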
The classification confidence rejection results for non-attack images and various Type II adversarial examples are included in Table 2 for CIFAR-10 and Table 3 for ImageNet-1000. Both tables show that rejecting by classification confidence rejects few non-attack images while greatly increasing the rejection of Type II adversarial examples. The benefits of rejecting by classification confidence are also demonstrated in the Type II attacks section.
(Tables 2 and 3 report these statistics for non-attack images and for L0, L2, Fast, DeepFool, and top-5 DeepFool Type II attacks on CIFAR-10 and ImageNet-1000 respectively; the table bodies are omitted here.)
4 Type II Attacks Fail
A type II attack involves a search for an adversarial example that will be (a) mislabelled and (b) not detected. We perform gradient descent based Type II attacks on CIFAR-10 and ImageNet-1000 with the SVM detector, and compare to the detection subnetwork of Metzen et al. Because the gradients of the detection subnetwork are better formed, it should be easier to attack with Type II gradient descent attacks.
In our experiments on CIFAR-10 and ImageNet-1000, we use different gradient descent based Type II attacks (L0, L2, Fast, DeepFool and top-5 DeepFool) to attack the detector and classifier at the same time. In the main paper, gradient descent based Type II attacks on the SceneProof dataset use the L2 LBFGS method.
A summary of Type II attacks on CIFAR-10 can be found in Table 4. The numbers reported are the percentages of adversarial examples that are both misclassified and undetected (lower is better). Without classification confidence rejection, the quantized SVM detector and the detection subnetwork perform similarly under the L0, L2 and Fast Type II attacks, and the quantized SVM detector performs significantly better under DeepFool Type II attacks. With classification confidence rejection, the quantized SVM detector is very hard to attack and performs better than the detection subnetwork on almost all attack methods, at the cost of a small increase in false rejection of non-attack images. Detailed percentages of Type II attacks on CIFAR-10 can be found in Table 10.
A summary of Type II attacks on ImageNet-1000 can be found in Table 5. The table arrangement is the same as Table 4, and DeepFool5 is the top-5 DeepFool attack. The quantized SVM detector consistently performs better than the detection subnetwork across attack methods, both with and without classification confidence rejection. It is very difficult to perform Type II attacks on the quantized SVM detector with rejection, and confidence rejection only modestly increases false rejection of non-attack images. Detailed percentages of Type II attacks on ImageNet-1000 can be found in Table 11.
| Method | L0 (II) | L2 (II) | Fast (II) | DeepFool (II) |
|---|---|---|---|---|
| m-SVM Det - R | 9.86 | 7.32 | 3.41 | 8.32 |
| Subnet Det - R | 19.69 | 11.57 | 1.19 | 35.39 |
| Method | L0 (II) | L2 (II) | Fast (II) | DeepFool (II) | DeepFool5 (II) |
|---|---|---|---|---|---|
| m-SVM Det - R | 23.19 | 15.05 | 8.26 | 2.32 | 15.52 |
| Subnet Det - R | 52.56 | 26.66 | 12.16 | 4.49 | 21.99 |
5 Application: SceneProof
SceneProof is a model application of our SafetyNet, because it would not work with a network that is subject to adversarial examples. We would like Alice to be able to prove to Bob that her photo is real without the intervention of a team of experts, and we would like Bob to have high confidence in the proof. This proof needs to operate automatically and at large scale (i.e. anyone could produce a proof while taking a picture).
The current best methods to identify fake images require careful analysis of vanishing points, illumination angles, and shadows (reviews in [8, 7]). Such analyses are difficult to conduct automatically or at large scale. RGB image editing is easy, with very powerful tools available. We construct a proof by capturing an RGBD image (easily accessible with consumer depth sensors), which changes the security picture because it is quite hard to edit a depth map convincingly, and those edits need to be consistent with the image. The proof of realness is achieved by a classifier that checks both image and depth and determines whether they are consistent. Such a system works if (a) the classifier is acceptably accurate (i.e. it can determine whether the pair is real or not); (b) it can detect a variety of adversarial manipulations of depth, image or both (i.e. type I attacks fail); and (c) type II attacks generally fail. We achieve this by using the SafetyNet architecture.
We are mainly concerned with attacks that get “fake” images labeled “real”. Natural attacks on our system are: produce a depth map for an RGB image using some regression method to obtain an RGBD image (regression); manipulate an RGBD image by inserting new objects; take an RGBD image labeled “fake” and manipulate it to be labeled “real” (type I adversarial); take an RGBD image labeled “fake” and manipulate it to be labeled “real” in a way that fools SafetyNet’s adversary detector (type II adversarial). There is a wide range of available regression/adversarial attacks, and our system needs to be robust to the various methods that might be used to prepare them.
Real test data is easily obtained: we use the raw Kinect captures of LivingRoom and Bedroom scenes from the NYU v2 dataset. Fake data, however, requires care. To evaluate generalization over different attacks, we omit some “regression” methods from the training data and use them only in test. “Regression” methods used in both train and test are: random swaps of depth and image planes; single-image predicted depth; rectangular cropped-region insertion; and randomly shifted or scaled misaligned depth and image. “Regression” methods used only in test are: all-zero depth values; nearest-neighbor down-sampled then up-sampled images and depths; low-quality JPEG-compressed images and depths; and the Middlebury stereo RGBD dataset and Sintel RGBD dataset (which should be classified “fake” because they are renderings). Refer to Figure 4 for the dataset and attacks.
| Test example type | Classifier Acc | B | A | B A | AB | AB, T |
|---|---|---|---|---|---|---|
| Natural RGBD, False | 91.8% | 15.2% | 17.1% | 14.3% | 18.8% | 19.6% |
| Natural RGBD, True | 97.7% | 10.1% | 11.6% | 9.2% | 12.7% | 10.8% |
| Adversarial RGBD, False | 33.1% | 89.1% | 88.6% | 87.3% | 90.4% | 88.9% |
| Adversarial RGBD, True | 15.3% | 81.3% | 81.0% | 79.1% | 83.3% | 83.7% |
| Test example type | Classifier Acc | B | A | B A | AB | AB, T |
|---|---|---|---|---|---|---|
| zero D channel | 76.5% | 6.5% | 25.6% | 6.1% | 26.0% | 82.0% |
| low quality JPEG | 36.4% | 80.1% | 79.2% | 77.2% | 82.2% | 81.8% |
| Sintel RGBD | 27.6% | 45.3% | 51.7% | 39.7% | 57.2% | 61.4% |
| Middlebury RGBD | 24.0% | 39.7% | 40.3% | 33.4% | 46.6% | 47.8% |
Type I attacks on SafetyNet fail: Type I attacks on SceneProof using a familiar adversary (i.e. one used to train the detector) fail. We report results for two detectors A (applied to fc7 of VGG19) and B (applied to fc6 of VGG19) in Table 6. Type I attacks on SceneProof using an unfamiliar adversary (i.e. one not used to train the detector) generally fail. We report results for two detectors A (applied to fc7 of VGG19) and B (applied to fc6 of VGG19) in Table 7.
A type II attack must both fool the classifier and sneak past the detector. We distinguish between two conditions. In the non-blackbox case, the internals of the SafetyNet system are accessible to the attacker. Alternatively, the network may be a black box, with internal states and gradients concealed. In this case, attackers must probe with inputs and gather outputs, or build another approximate network as in the transfer strategy of Papernot et al.
Type II attacks on accessible SafetyNet fail: A type II attack involves a search for an adversarial example that will be (a) mislabelled and (b) not detected. This search is made difficult by the quantization procedure and by the narrow basis functions in the RBF-SVM, so we smooth both the quantization operation and the RBF-SVM kernel. Smoothing is essential to make the search tractable, but can significantly misapproximate SafetyNet (which is what makes attacks hard). Our smoothing attack uses a sigmoid function, with a temperature parameter, to simulate the quantization process. We also help the search by increasing the RBF bandwidth parameter to form smoother gradients. Even after smoothing the objective function, attacks tend to fail, likely because it is hard to trade off ease of search against quality of approximation. Table 8 includes Type I and Type II, blackbox and non-blackbox attack results on the SceneProof dataset. Our SafetyNet is the most robust architecture across these attacks.
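The tension between approximation quality and gradient availability can be seen in a sketch of the sigmoid relaxation (the temperature value is illustrative):

```python
import numpy as np

def hard_binarize(acts, t=0.0):
    """The actual quantization SafetyNet uses: a step, no gradient."""
    return (np.asarray(acts) > t).astype(float)

def soft_binarize(acts, t=0.0, temperature=10.0):
    """Sigmoid relaxation of the quantization step, so an attacker can
    obtain gradients. A large temperature approximates the hard step
    well but yields near-zero gradients away from the threshold; a
    small temperature gives usable gradients but misapproximates
    SafetyNet. The temperature here is illustrative."""
    return 1.0 / (1.0 + np.exp(-temperature * (np.asarray(acts) - t)))

acts = np.array([-0.5, 0.05, 2.0])
hard = hard_binarize(acts)
soft = soft_binarize(acts)
```

Whichever temperature the attacker picks, either the search is starved of gradient or the surrogate objective diverges from the real detector near the threshold.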
Type II attacks on black box SafetyNet fail: Assume the state of SafetyNet is concealed. We follow [24, 19] in building attacks on various alternative networks, then transferring those networks’ adversarial samples. These attacks fail against our SafetyNet; refer to Table 8. In contrast, the detector subnetwork of Metzen et al. is generally susceptible to type II attacks in both blackbox and non-blackbox settings. This is because of our quantization process and the detection subnetwork’s classification-boundary problem.
| Method | Ori FT | Ori TF | Subnet Det FT | Subnet Det TF | Subnet Det TT reject | Det A FT | Det A TF | Det A TT reject | Det ABC FT | Det ABC TF | Det ABC TT reject |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Non Attack Data | 16.3% | 0.6% | 8.4% | 0% | 10.2% | 9.7% | 0% | 9.4% | 8.4% | 0% | 9.9% |
| Gradient Descent (I) | 32.8% | 55.3% | 13.4% | 9.5% | 6.0% | 9.9% | 5.0% | 6.1% | 8.4% | 0.3% | 6.3% |
| VGG FastSign TF (I) | 30.6% | 2.8% | 14.9% | 2.2% | 54.1% | 7.5% | 2.5% | 44.1% | 6.6% | 1.9% | 47.2% |
| ResNet GradDesc TF (I) | 28.9% | 36.7% | 15.3% | 22.4% | 33.2% | 3.6% | 13.4% | 29.1% | 2.7% | 11.9% | 30.3% |
| ResNet FastSign TF (I) | 22.2% | 29.1% | 7.6% | 15.1% | 29.8% | 2.8% | 12.2% | 27.5% | 2.2% | 11.6% | 27.8% |
| Type I Average | 28.6% | 30.9% | 12.8% | 12.3% | 30.8% | 6.0% | 8.3% | 26.7% | 5.0% | 6.4% | 27.9% |
| Gradient Descent (II) | 32.8% | 55.3% | 26.3% | 21.9% | 11.9% | 16.3% | 3.7% | 6.2% | 13.2% | 2.6% | 9.6% |
| VGG Finetune TF (II) | 20% | 3.1% | 17.1% | 0% | 43.5% | 17.2% | 0% | 45.6% | 17.2% | 0% | 48.4% |
| VGG Subnet Det TF (II) | 16.3% | 0.6% | 13.7% | 0% | 15.6% | 10.3% | 0% | 12.5% | 9.1% | 0% | 13.1% |
| ResNet Finetune TF (II) | 15.6% | 40.3% | 8.5% | 31.3% | 29.3% | 1.3% | 27.2% | 20.6% | 0.3% | 25% | 21.0% |
| ResNet Subnet Det TF (II) | 23.8% | 29.7% | 17.6% | 19.3% | 29.8% | 2.8% | 12.2% | 27.5% | 2.2% | 11.6% | 27.5% |
| Type II Average | 21.7% | 25.8% | 16.6% | 14.5% | 26.0% | 9.6% | 8.6% | 22.5% | 8.4% | 7.84% | 23.9% |
6 Theory: Bars and P-domains
We construct one possible explanation for adversarial examples that explains (a) the phenomenology and (b) why SafetyNet works. In this explanation, we assume the network uses ReLUs and weight decay, because they are representative, make the explanation easier, and are likely to extend to other conditions with some modification. We have a network with L layers of ReLUs, and study \(\phi_l(x)\), the values at the output of the l’th layer of ReLUs. This is a piecewise linear function of x. Such functions break up the input space into cells, at whose boundaries the piecewise linear function changes (i.e. \(\phi_l\) is only \(C^0\)). Now assume that for some l there exist p-domains D (unions of cells) in the input space such that: (a) there are no or few examples in the p-domain; (b) the measure of D under P is small; (c) \(\phi_l\) is large inside D and small outside D. We will always use the term “p-domain” to refer to domains with these properties. We think that the total measure of all p-domains under P is small.
By construction, ReLU networks can represent such p-domains. We construct a p-domain using a basis function with small support. Write \(r(u) = \max(u, 0)\) for a ReLU applied to u. We have the basic bar function

\[ b(u; \delta) = r(u + \delta) - 2\,r(u) + r(u - \delta), \]

where \(b\) has support when \(|u| < \delta\) and has peak value \(\delta\). For an index set \(I\) with cardinality \(k\) and vectors \(a\), \(\delta\), we write the bar function as

\[ B(x; I, a, \delta) = r\Big( \sum_{i \in I} b(x_i - a_i; \delta_i)/\delta_i - (k - 1) \Big), \]

where \(B\) has support contained in the box \(|x_i - a_i| < \delta_i\) for \(i \in I\). Figure 5 illustrates these functions. It is clear that a CNN can encode bars and weighted sums of bars, and that, in principle, units in later layers could compute bar functions. Appropriate choices of \(I\), \(a\) and \(\delta\) choose the location and support of the bar, and so can produce bars which have low measure under P. Now the functions presented to the softmax layer are a linear combination of such units. This means that, with suitable choices of weights and parameters, a bar can appear at this level and create a p-domain.
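One consistent reading of the basic bar construction (the exact parameterization in the original figure may differ) is a triangular bump assembled from three ReLUs; a quick numerical check:

```python
import numpy as np

def relu(u):
    return np.maximum(u, 0.0)

def bar(u, delta):
    """Triangular bump built from three ReLUs: zero for |u| >= delta,
    peak value delta at u = 0. One plausible form of the basic bar
    function; the paper's exact parameterization may differ."""
    return relu(u + delta) - 2.0 * relu(u) + relu(u - delta)

u = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
vals = bar(u, delta=1.0)
```

The function is identically zero outside a small interval, which is exactly the small-support behaviour the p-domain construction needs.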
We expect such p-domains to have several important properties. Adversarial fertility: p-domains can be used to make adversarial examples by choosing a point in a p-domain close to a natural example. Because there are no or few examples in the p-domain, the loss may not cause the classifier to control the maximum value attained by \(\phi_l\) in this p-domain, and the large range of values inside the p-domain can be used to change the values in layers upstream of the softmax by moving the example around the p-domain. Generalization-neutral: the requirement that p-domains have small measure under P means that both train and test examples are highly unlikely to lie in p-domains, so a system with p-domains could generalize well without being immune to adversarial examples. Some subset of p-domains is likely findable by LBFGS: consider the gradient of \(\phi_l\) with respect to x in two cells separated by a boundary where some ReLU changes state; weight decay encourages a relatively small change in gradient over such boundaries. If cells neighboring a p-domain have no or few examples in them, we can expect the gradient change within each cell to be small too, so a second-order approximation of \(\phi_l\) could be reliable. We also expect cells to be small, so searching for and entering a p-domain is possible but requires crossing multiple cell boundaries, which means many changes in ReLU activation. This argument suggests p-domains present odd patterns of ReLU activation, particularly in p-domains where some of the \(\phi_l\) are large in the absence of examples.
Why p-domains could exist: as Zhang et al. point out, the number of training examples available to a typical modern network is small compared to the capacity of deep networks. For example, excellent training error is obtainable for randomly chosen image labels. We expect that \(\phi_l\) will have a number of cells that is exponential in the dimension of x, ensuring that the vast majority of cells lack any example. However, the weight decay term is not sufficient to ensure that \(\phi_l\) is zero in these cells. Overshoot by stochastic gradient descent, caused by poor scaling in the loss, is the likely reason that \(\phi_l\) has support in these cells. Szegedy et al. demonstrate that, in practice, ReLU layers can have large norm as linear operators, despite weight decay (see their sec. 4.3), so large values in p-domains are plausible. This large norm is likely to be the result of overshoot. Recall that the value of \(\phi_l\) is determined by the product of numerous weights, so in some locations in the input space the value of \(\phi_l\) could be large, as a result of multiple layer norms interacting poorly.
An alternative to attacking by search using smoothed RBF gradients is as follows. One might pass an example through the main classifier, determine what code it had, then seek an adversarial example that produces that code (and so must fool the RBF-SVM). We sketch a proof that this optimization problem is extremely difficult. Choose some threshold t. We use \(\varphi_t\) for the function that binarizes its argument at t. Assume we have at least one unit that encodes a weighted sum of bar functions. We wish to create an adversarial example that (a) meets the criteria for being adversarial and (b) ensures that \(\varphi_t\) takes a prescribed value (either one or zero). The feasible set for this constraint can be disconnected (a sum of the bump functions of Figure 5 (right) exceeds the threshold on separated regions), and so need not be convex, implying that the optimization problem is intractable. As a simple example, for a sum of two bars \(b_1, b_2\) with disjoint supports and peak value \(\delta\), the constraint set \(\{u : \varphi_t(b_1(u) + b_2(u)) = 1\}\) is the union of two disjoint intervals, and hence disconnected, for any threshold \(0 < t < \delta\).
We have described a method to produce a classifier that identifies and rejects adversarial examples. Our SafetyNet is able to reject adversarial examples that come from attack methods not seen in training data. We have shown that it is hard to produce an example that (a) is mislabeled and (b) is not detected as adversarial by SafetyNet. We have sketched one possible reason that SafetyNet works and is hard to attack. Our work opens many interesting problems, and provides insight into the mechanisms by which neural networks work.
SaferNet: There might be a better architecture than our SafetyNet, one whose objective function is even harder to optimize. The ideal case would be an architecture that forces the attacker to solve a hard discrete optimization problem which does not naturally admit smoothing.
Neural network pruning: Our work suggests that networks behave poorly in input space regions where no data has been seen. We speculate that this behavior could be discouraged by a post-training pruning process, which removes neurons, paths or activation patterns not touched by training data.
Explicit management of overshoot during training: We have explained adversarial examples using p-domains, which we argue result from poor damping of weights during training. We speculate that constructing adversarial examples during training, by identifying locations where this damping problem occurs and exploiting structural insights into network behavior, could control the adversarial example problem (rather than just using adversarial examples as training data).
This work is supported in part by ONR MURI Award N00014-16-1-2007, in part by NSF under Grant No. NSF IIS-1421521, and in part by a Google MURA award.
9 Supporting Materials
9.1 SceneProof Dataset
Our SceneProof dataset is processed from NYU Depth v2 raw captures, the Sintel synthetic RGBD dataset and the Middlebury stereo dataset. The dataset is split into part I and part II. Part I contains NYU natural image & depth pairs, along with manipulated unnatural scenes (swapped depth, inserted regions, predicted depth, scaled & shifted depth); refer to Figure 6. It is used to train our classifier and serves as test data part I. Part II contains unnatural scenes manipulated by other methods (setting the depth channel to zero, down-sampling and then up-sampling both RGBD channels, aggressively compressing the JPG RGBD images), plus image & depth pairs from the synthetic and stereo datasets; refer to Figure 7. Part II is used as test data part II, to test the generalization ability of our SceneProof network and to check our detectors’ reactions to unseen unnatural inputs. A good detector should tend to reject unfamiliar data types that do not appear in training data, because it is hard for the classifier to classify unseen data types correctly. In real application scenarios, a human-computer hybrid system, in which the computer flags suspicious cases and a human makes final decisions, may be needed. Table 9 gives the dataset constitution, and we plan to release the dataset for academic use.
9.2 Type II Attacks on Cifar-10 and ImageNet-1000
The detailed percentages for Type II attacks on Cifar-10 are given in Table 10, and those for Type II attacks on ImageNet-1000 in Table 11.
|                 | Training | Testing I | Testing II |
|-----------------|----------|-----------|------------|
| low quality JPG | N/A      | N/A       | 1449       |
| Detector        | L0 (II)       | L2 (II)       | Fast (II)     | DeepFool (II) |
|-----------------|---------------|---------------|---------------|---------------|
| m-SVM Det - R   | 31.79 / 28.74 | 42.23 / 28.17 | 30.87 / 44.32 | 0.41 / 3.34   |
| Subnet Det - R  | 16.25 / 22.30 | 28.02 / 32.56 | 7.53 / 67.10  | 1.15 / 2.61   |
| Detector       | L0 (II)      | L2 (II)       | Fast (II)     | DeepFool (II) | DeepFool5 (II) |
|----------------|--------------|---------------|---------------|---------------|----------------|
| m-SVM Det - R  | 0.00 / 0.00  | 2.43 / 2.27   | 53.06 / 9.19  | 0.00 / 0.00   | 0.00 / 0.00    |
| m-SVM Det - R  | 16.03 / 5.77 | 30.86 / 22.97 | 19.63 / 15.61 | 0.00 / 0.00   | 0.00 / 0.00    |
-  D. J. Butler, J. Wulff, G. B. Stanley, and M. J. Black. A naturalistic open source movie for optical flow evaluation. In A. Fitzgibbon et al., editors, European Conf. on Computer Vision (ECCV), Part IV, LNCS 7577, pages 611–625. Springer-Verlag, Oct. 2012.
-  N. Carlini and D. Wagner. Defensive distillation is not robust to adversarial examples.
-  N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. arXiv preprint arXiv:1608.04644, 2016.
-  J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 248–255. IEEE, 2009.
-  D. Eigen, C. Puhrsch, and R. Fergus. Depth map prediction from a single image using a multi-scale deep network. In Advances in neural information processing systems, pages 2366–2374, 2014.
-  H. Farid. Exposing photo manipulation with inconsistent reflections. ACM Trans. Graph., 31(1):4, 2012.
-  H. Farid. Photo forensics. MIT Press, 2016.
-  H. Farid. How to detect faked photos. American Scientist, 2017.
-  I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
-  E. Kee, J. F. O’Brien, and H. Farid. Exposing photo manipulation with inconsistent shadows. ACM Transactions on Graphics (ToG), 32(3):28, 2013.
-  A. Krizhevsky. Learning multiple layers of features from tiny images. 2009.
-  D. Krotov and J. J. Hopfield. Dense associative memory is robust to adversarial inputs. arXiv preprint arXiv:1701.00939, 2017.
-  A. Kurakin, I. Goodfellow, and S. Bengio. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533, 2016.
-  Y. Liu, X. Chen, C. Liu, and D. Song. Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770, 2016.
-  J. Lu, H. Sibai, E. Fabry, and D. Forsyth. No need to worry about adversarial examples in object detection in autonomous vehicles. arXiv preprint arXiv:1707.03501, 2017.
-  J. H. Metzen, T. Genewein, V. Fischer, and B. Bischoff. On detecting adversarial perturbations. arXiv preprint arXiv:1702.04267, 2017.
-  T. Miyato, S.-i. Maeda, M. Koyama, K. Nakae, and S. Ishii. Distributional smoothing with virtual adversarial training. arXiv preprint arXiv:1507.00677, 2015.
-  S.-M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard. Universal adversarial perturbations. arXiv preprint arXiv:1610.08401, 2016.
-  S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard. Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2574–2582, 2016.
-  N. Silberman, D. Hoiem, P. Kohli, and R. Fergus. Indoor segmentation and support inference from RGBD images. In ECCV, 2012.
-  A. Nguyen, J. Yosinski, and J. Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 427–436, 2015.
-  N. Papernot, P. McDaniel, and I. Goodfellow. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277, 2016.
-  N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami. Practical black-box attacks against deep learning systems using adversarial examples. arXiv preprint arXiv:1602.02697, 2016.
-  N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. Swami. Distillation as a defense to adversarial perturbations against deep neural networks. In Security and Privacy (SP), 2016 IEEE Symposium on, pages 582–597. IEEE, 2016.
-  D. Scharstein, H. Hirschmüller, Y. Kitajima, G. Krathwohl, N. Nešić, X. Wang, and P. Westling. High-resolution stereo datasets with subpixel-accurate ground truth. In German Conference on Pattern Recognition, pages 31–42. Springer, 2014.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
-  C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
-  C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals. Understanding deep learning requires rethinking generalization. In ICLR, 2017.