Deep neural networks (DNNs), while enjoying tremendous success in recent years, suffer from serious vulnerabilities to adversarial attacks (Szegedy et al., 2014). For example, in computer vision applications, an attacker can add visually imperceptible perturbations to an image and mislead a DNN model into making arbitrary predictions. When the attacker has complete knowledge of a DNN model, these perturbations can be computed using the gradient information of the model, which guides the adversary in discovering vulnerable regions of the input space that most drastically affect the model output (Goodfellow et al., 2014; Papernot et al., 2016a). But even in a black-box scenario, where the attacker does not know the exact network architecture, one can use a substitute model to craft adversarial perturbations that are transferable to the target model (Papernot et al., 2017). More troubling still, it is possible to print out physical 2D or 3D objects that fool recognition systems in realistic settings (Sharif et al., 2016; Athalye and Sutskever, 2017).
The threat of adversarial attacks casts a shadow over deploying DNNs in security- and safety-critical applications like self-driving cars. To better understand and fix the vulnerabilities, there is a growing body of research on defending against various attacks and making DNN models more robust (Papernot et al., 2016c; Bhagoji et al., 2017; Metzen et al., 2017). However, progress on defense research has so far lagged behind the attack side. Moreover, research on defense rarely focuses on practicality and scalability, both essential for real-world deployment. For example, total variation denoising and image quilting are image preprocessing techniques that can mitigate adversarial perturbations to some extent (Guo et al., 2018), but they incur significant computational overhead, calling into question how feasibly they can be used in practical applications, which often require fast, real-time defense (Evtimov et al., 2017; Eykholt et al., 2017).
1.1. Our Contributions and Impact
1. Compression as Fast, Practical, Effective Defense. We contribute the major idea that compression — a central concept that underpins numerous successful data mining techniques — can offer powerful, scalable, and practical protection for deep learning models against adversarial image perturbations in real time. Motivated by our observation that many attack strategies aim to perturb images in ways that are visually imperceptible to the naked eye, we show that systematic adaptation of the widely available JPEG compression technique can effectively compress away such pixel “noise”, especially since JPEG is specifically designed to reduce image details that are imperceptible to humans. (Section 3.1)
2. Shield: Multifaceted Defense Framework. Building on our foundational compression idea, we contribute the novel Shield defense framework that combines randomization, vaccination and ensembling into a fortified multi-pronged protection:
We exploit JPEG’s flexibility in supporting varying compression levels to develop strong ensemble models that span a spectrum of compression levels;
We show that a model can be “vaccinated” by training on compressed images, increasing its robustness towards compression transformation for both adversarial and benign images;
Shield employs stochastic quantization that compresses different regions of an image using randomly sampled compression levels, making it harder for the adversary to estimate the transformation performed.
Shield does not require any change to the model architecture, and can recover a significant amount of the model accuracy lost to adversarial instances, with little effect on the accuracy for benign instances. Shield stands for Secure Heterogeneous Image Ensemble with Local Denoising. To the best of our knowledge, such a multifaceted defense approach has not been attempted before. (Sections 3.2 & 3.3)
3. Extensive Evaluation Against Major Attacks. We perform extensive experiments using the full ImageNet benchmark dataset with 50K images, demonstrating that our approach is fast, effective and scalable. Our approaches eliminate up to 94% of black-box attacks and 98% of gray-box attacks delivered by some of the most recent, strongest attacks, such as Carlini-Wagner’s L2 (Carlini and Wagner, 2017) and DeepFool (Moosavi-Dezfooli et al., 2016). (Section 4)
4. Impact to Intel and Beyond.
This work is making multiple positive impacts on Intel's research and product development plans. Introduced with the Sandy Bridge CPU microarchitecture, Intel's Quick Sync Video (QSV) technology dedicates a hardware core to high-speed video processing; it performs JPEG compression up to 24x faster than TensorFlow implementations, paving the way for real-time defense in safety-critical applications, such as autonomous vehicles. This research has sparked insightful discussion among research and development teams at Intel on the priority of secure deep learning, which necessitates tight integration of practical defense strategies, software platforms, and hardware accelerators. We believe our work will accelerate the industry's emphasis on this important topic. To ensure reproducibility of our results, we have open-sourced our code on GitHub (https://github.com/poloclub/jpeg-defense). (Section 5)
2. Background: Adversarial Attacks
Our work focuses on defending against adversarial attacks on deep learning models. This section provides background information for readers new to the adversarial attack literature.
Given a trained classifier $f$ and a benign instance $x$, the objective of an untargeted adversarial attack is to compute a perturbed instance $x'$ such that $f(x') \neq f(x)$ and $d(x, x') \leq \rho$ for some distance function $d(\cdot, \cdot)$ and perturbation budget $\rho > 0$. Popular choices of $d$ are the Euclidean distance $\|x - x'\|_2$ and the Chebyshev distance $\|x - x'\|_\infty$. A targeted attack is similar, but is required to induce a classification for a specific target class $t$, i.e., $f(x') = t$. In both cases, depending on whether the attacker has full knowledge of $f$ or not, the attack is further categorized as a white-box or black-box attack. The latter is harder for the attacker, since less information is known about the model, but has been shown to be feasible in practice by relying on the transferability of adversarial examples from a substitute model to the target model when both are DNNs trained using gradient backpropagation (Szegedy et al., 2014; Papernot et al., 2017).
The seminal work by Szegedy et al. (Szegedy et al., 2014) proposed the first effective adversarial attack on DNN image classifiers by solving a box-constrained L-BFGS optimization problem, and showed that the computed perturbations to the images were indistinguishable to the human eye — a rather troublesome property for people trying to identify adversarial images. This discovery has generated tremendous interest, and many new attack algorithms have been invented (Goodfellow et al., 2014; Moosavi-Dezfooli et al., 2016; Moosavi-Dezfooli et al., 2017; Papernot et al., 2016a) and applied to other domains such as malware detection (Grosse et al., 2016; Hu and Tan, 2017) and reinforcement learning (Lin et al., 2017; Huang et al., 2017). Below, we describe the major, well-studied attacks in the literature, against which we will evaluate our approach.
Carlini-Wagner's $L_2$ attack (CW-L2) (Carlini and Wagner, 2017) is an optimization-based attack that adds a relaxation term to the perturbation minimization problem based on a differentiable surrogate of the model. The attack poses the optimization as minimizing

$$\min_{x'} \; \|x - x'\|_2^2 + \lambda \cdot \max\big(-\kappa,\; Z(x')_k - \max\{Z(x')_{k'} : k' \neq k\}\big),$$

where $\kappa$ controls the confidence with which an image is misclassified by the DNN, $k$ is the true class, and $Z(\cdot)$ is the output from the logit layer (the last layer before the softmax function is applied for prediction) of the classifier $f$.
DeepFool (DF) (Moosavi-Dezfooli et al., 2016) constructs an adversarial instance under an $L_2$ constraint by assuming the decision boundary to be hyperplanar. The authors leverage this simplification to compute a minimal adversarial perturbation that results in a sample close to the original instance yet orthogonally cutting across the nearest decision boundary. In this respect, DF is an untargeted attack. Since the underlying assumption that the decision boundary is completely linear in higher dimensions oversimplifies the actual case, DF iterates until a true adversarial instance is found. The resulting perturbations are harder for humans to detect than perturbations introduced by other attacks.
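To make the hyperplane intuition concrete, the sketch below (our own illustration on a hypothetical linear binary classifier, not the authors' implementation) computes the closed-form minimal $L_2$ perturbation that pushes an input across the decision boundary $w \cdot x + b = 0$; DeepFool applies this step repeatedly to a local linearization of the DNN.

```python
import numpy as np

def deepfool_linear_step(x, w, b, overshoot=1e-4):
    """Minimal L2 perturbation moving x across the hyperplane w.x + b = 0.
    For a linear classifier a single step suffices; DeepFool repeats this
    on a local linearization of a DNN until the predicted label flips."""
    f = np.dot(w, x) + b
    r = -f / np.dot(w, w) * w           # closed-form minimal perturbation
    return x + (1 + overshoot) * r      # slight overshoot to cross the boundary

w = np.array([2.0, -1.0])
b = 0.5
x = np.array([1.0, 1.0])                # f(x) = 1.5 > 0
x_adv = deepfool_linear_step(x, w, b)
print(np.sign(np.dot(w, x) + b), np.sign(np.dot(w, x_adv) + b))  # signs differ
```

The perturbation norm equals the point's distance to the hyperplane, $|f(x)| / \|w\|_2$, which is why DF perturbations tend to be so small.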
Iterative Fast Gradient Sign Method (I-FGSM) (Kurakin et al., 2016) is the iterative version of the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2014), a fast algorithm that computes perturbations subject to an $L_\infty$ constraint. FGSM simply takes the sign of the gradient of the loss function $J$ with respect to the input $x$:

$$x' = x + \epsilon \cdot \mathrm{sign}\big(\nabla_x J(\theta, x, y)\big),$$

where $\theta$ is the set of parameters of the model and $y$ is the true label of the instance. The parameter $\epsilon$ controls the magnitude of the per-pixel perturbation. I-FGSM applies FGSM repeatedly, clipping the values appropriately at each step:

$$x'_{i+1} = \mathrm{clip}_{x, \epsilon}\big(x'_i + \alpha \cdot \mathrm{sign}(\nabla_x J(\theta, x'_i, y))\big), \quad x'_0 = x.$$
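As a minimal sketch of these two update rules, the following NumPy code attacks a toy logistic-regression "model" (our own hypothetical example, chosen because its loss gradient is available in closed form; the paper attacks DNNs through their backpropagated gradients).

```python
import numpy as np

def fgsm(x, grad, eps):
    """One FGSM step: x' = x + eps * sign(grad of loss w.r.t. x),
    clipped to remain a valid input in [0, 1]."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

def i_fgsm(x, grad_fn, eps, alpha, steps):
    """Iterative FGSM: small steps of size alpha, clipped after every
    step to the L-infinity ball of radius eps around the original x."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)   # stay within the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)           # stay a valid input
    return x_adv

# Toy "model": logistic regression with true label y = 1;
# gradient of the cross-entropy loss w.r.t. x is (sigma(w.x) - y) * w.
w, y = np.array([1.0, -2.0, 0.5]), 1.0
grad_fn = lambda x: (1.0 / (1.0 + np.exp(-np.dot(w, x))) - y) * w
x = np.array([0.5, 0.5, 0.5])
x_fgsm = fgsm(x, grad_fn(x), eps=0.1)
x_adv = i_fgsm(x, grad_fn, eps=0.1, alpha=0.04, steps=5)
```

Both outputs stay within the $\epsilon$-ball of the original input while pushing the model's score for the true class down.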
3. Proposed Method: Compression as Defense
In this section, we present our compression-based approach for combating adversarial attacks. In Section 3.1, we begin by describing the technical reasons why compression can remove perturbation. As compression modifies the distribution of the input space by introducing artifacts, in Section 3.2 we propose to “vaccinate” the model by training it with compressed images, which increases its robustness to the compression transformation for both adversarial and benign images. Finally, in Section 3.3, we present our multifaceted Shield defense framework, which combines random quantization, vaccination, and ensembling into a fortified multi-pronged defense that, to the best of our knowledge, has not been attempted before.
3.1. Preprocessing Images using Compression
Our main idea for rectifying the prediction of a trained model $f$, with respect to a perturbed input $x'$, is to apply a preprocessing operation $g$ that brings $g(x')$ back closer to the original benign instance $x$, implicitly aiming to make $f(g(x')) = f(x)$. Constructing such a $g$ is application dependent. For the image classification problem, we show that JPEG compression is a powerful preprocessing defense technique. JPEG compression mainly consists of the following steps:
Convert the given image from the RGB color space to the YCbCr color space, which separates luminance from chrominance.
Perform spatial subsampling of the chrominance channels, since the human eye is less sensitive to these changes and relies more on the luminance information.
Transform $8 \times 8$ blocks of the channels to a frequency-domain representation using the Discrete Cosine Transform (DCT).
Perform quantization of the blocks in the frequency domain representation according to a quantization table which corresponds to a user-defined quality factor for the image.
The last step is where the JPEG algorithm achieves the majority of compression, at the expense of image quality. This step suppresses higher frequencies more, since these coefficients contribute less to the human perception of the image. As adversarial attacks do not optimize for maintaining the spectral signature of the image, they tend to introduce more high-frequency components, which can be removed at this step. This step also renders the preprocessing stage non-differentiable, which makes it non-trivial for an adversary to optimize against, allowing the adversary only to estimate the transformation (Shin and Song, 2017). We show in our evaluation (Section 4.2) that JPEG compression effectively removes adversarial perturbation across a wide range of compression levels.
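A minimal sketch of this preprocessing defense uses Pillow to round-trip an image through JPEG in memory (the quality value and the toy gradient-plus-noise image below are our own illustrative choices):

```python
import io
import numpy as np
from PIL import Image

def jpeg_preprocess(img, quality):
    """Round-trip an H x W x 3 uint8 image through JPEG at the given
    quality -- the preprocessing g(.) applied before classification."""
    buf = io.BytesIO()
    Image.fromarray(img).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.asarray(Image.open(buf).convert("RGB"))

# A smooth gradient image plus high-frequency "adversarial" noise
rng = np.random.default_rng(0)
row = np.linspace(0, 255, 64).astype(np.uint8)
clean = np.stack([np.tile(row, (64, 1))] * 3, axis=-1)
noise = rng.integers(-16, 17, clean.shape)
noisy = np.clip(clean.astype(int) + noise, 0, 255).astype(np.uint8)
defended = jpeg_preprocess(noisy, quality=75)
```

On this toy image, the compressed result lands closer to the clean image than the noisy input does, mirroring how JPEG's quantization discards the high-frequency perturbation while preserving the low-frequency content.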
3.2. Vaccinating Models with Compressed Images
As DNNs are typically trained on high-quality images (with little or no compression), they are often invariant to the artifacts introduced by JPEG preprocessing at high-quality settings. This is especially useful in an adversarial setting, as our pilot study has shown that applying even mild compression removes the perturbations introduced by some attacks (Das et al., 2017). However, applying too much compression could reduce the model accuracy on benign images.
We propose to “vaccinate” the model by training it with compressed images, especially those at lower JPEG qualities, which increases the model’s robustness towards compression transformation for both adversarial and benign images. With vaccination, we can apply more aggressive compression to remove more adversarial perturbation. In our evaluation (Section 4.3), we show the significant advantage that our vaccination strategy provides, which offers a lift of more than 7 absolute percentage points in model accuracy for high-perturbation attacks.
3.3. Shield: Multifaceted Defense Framework
To leverage the effectiveness of JPEG compression as a preprocessing technique along with the benefit of vaccinating with JPEG images, we propose a stochastic variant of the JPEG algorithm that introduces randomization to the quantization step, making it harder for the adversaries to estimate the preprocessing transformation.
Figure 2 illustrates our proposed strategy, where we vary the quantization table for each 8×8 block in the frequency domain to correspond to a random quality factor from a provided set of qualities, so that the compression level does not remain uniform across the image. This is equivalent to breaking up the image into disjoint 8×8 blocks, compressing each block with a random quality factor, and putting the blocks together to re-create the final image. We call this method Stochastic Local Quantization (SLQ). As the adversary is free to craft images with varying amounts of perturbation, our defense should offer protection across a wide spectrum. Thus, we selected the set of qualities {20, 40, 60, 80} as our randomization candidates, uniformly spanning the range of JPEG qualities from 1 (most compressed) to 100 (least compressed).
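A simplified sketch of SLQ is shown below. For illustration, it compresses each block as a separate in-memory JPEG image, whereas the actual method varies the quantization table per block within a single compression pass; the block size (8×8) and quality set ({20, 40, 60, 80}) follow the description above.

```python
import io
import numpy as np
from PIL import Image

def jpeg(img, quality):
    """JPEG round-trip of a uint8 image array at the given quality."""
    buf = io.BytesIO()
    Image.fromarray(np.ascontiguousarray(img)).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.asarray(Image.open(buf).convert("RGB"))

def slq(img, qualities=(20, 40, 60, 80), block=8, seed=None):
    """Stochastic Local Quantization sketch: compress each block of the
    image at a quality drawn uniformly from `qualities`, then reassemble."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    h, w = img.shape[:2]
    for i in range(0, h, block):
        for j in range(0, w, block):
            q = int(rng.choice(qualities))
            out[i:i+block, j:j+block] = jpeg(img[i:i+block, j:j+block], q)
    return out

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)
mixed = slq(img, seed=0)
```

Because the per-block qualities are re-sampled on every call, an adversary cannot rely on a fixed, known transformation when crafting perturbations.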
Compared to taking a simple average over differently compressed JPEG images, our stochastic approach maintains the original semantics of the image in the blocks compressed at higher qualities, while performing more localized denoising in the blocks compressed at lower qualities. With a simple average, perturbations that survive at the higher qualities may dominate the other components of the average, leaving the image adversarial. Introducing localized stochasticity reduces this risk.
In our evaluation (Section 4.3), we will show that by using the spectrum of JPEG compression levels with our stochastic approach, our model can simultaneously attain a high accuracy on benign images, while being more robust to adversarial perturbations — a strong benefit that using a single JPEG quality cannot provide. Our method is further fortified by using an ensemble of vaccinated models individually trained on the set of qualities picked for randomization. We show in Section 4.3 how our method can achieve high model accuracies, comparable to those of much larger ensembles, but is significantly faster.
4. Evaluation
In this section, we show that our approach is scalable, effective, and practical in removing adversarial image perturbations. For our experiments, we consider gray-box scenarios, in which the adversary knows the model but not the defense (Section 4.2), and black-box scenarios, in which the adversary does not have access to the deployed model (Sections 4.3 and 4.4).
4.1. Experiment Setup
We performed experiments on the full validation set of the ImageNet benchmark image classification dataset (Krizhevsky et al., 2012), which consists of 1,000 classes, totaling 50,000 images. We show the performance of each defense on the ResNet-v2 50 model obtained from the TF-Slim module in TensorFlow. We construct the attacks using the popular CleverHans package (https://github.com/tensorflow/cleverhans), which contains implementations from the authors of the attacks.
For Carlini-Wagner-L2 (CW-L2), we set its confidence parameter $\kappa = 0$, a common value used in studies (Guo et al., 2018), as larger values (higher confidence) incur prohibitively high computation cost.
DeepFool (DF) is a non-parametric attack that optimizes the amount of perturbation required to misclassify an image.
For FGSM and I-FGSM, we vary $\epsilon$ from 0 to 8 in steps of 2.
We compare JPEG compression and Shield with two popular denoising techniques that have potential in defending against adversarial attacks (Xu et al., 2018; Guo et al., 2018). Median filter (MF) collapses a small window of pixels into a single value, and may discard some adversarial pixels in the process. Total variation denoising (TVD) aims to reduce the total variation in an image, and may undo the artificial noise injected by the attacks. We vary the parameters of each method to evaluate how their values affect defense performance.
For JPEG compression, we vary the compression level from quality 100 (least compressed) to 20 (greatly compressed), in decrements of 10.
For median filter (MF), we use window sizes of 3 (smallest possible) and 5. We tested larger window sizes (e.g., 7), which led to extremely poor model accuracies, thus were ruled out as parameter candidates.
For total variation denoising (TVD), we vary its weight parameter from 10 through 40, in increments of 10. Reducing the weight of TVD further (e.g., 0.3) produces blurry images that lead to poor model accuracy.
4.2. Defending Gray-Box Attacks with Image Preprocessing
In this section, we investigate the setting where an adversary gains access to all parameters and weights of a model that is trained on benign images, but is unaware of the defense strategy. This constitutes a gray-box attack on the overall classification pipeline.
We show the results of applying JPEG compression at various qualities on images attacked with Carlini-Wagner-L2 (CW-L2) and DeepFool (DF) in Figure 3, and on images attacked with I-FGSM and FGSM in Figure 4.
Combating Carlini-Wagner-L2 (CW-L2) & DeepFool (DF). Although CW-L2 and DF, both considered strong attacks, are highly effective at lowering model accuracies, Figure 3 shows that even applying mild JPEG compression (i.e., using higher JPEG qualities) can recover much of the lost accuracy. Since both methods optimize for a low perturbation to fool the model, the noise introduced by these attacks is imperceptible to the human eye and lies in the high-frequency spectrum, which is destroyed in the quantization step of the JPEG algorithm. Shield performs well, and comparably, for both attacks. We do not arbitrarily scale the perturbation magnitude of either attack as in (Guo et al., 2018), as doing so would violate the attacks' optimization criteria.
Combating I-FGSM & FGSM. As shown in Figure 4, JPEG compression also achieves success in countering the I-FGSM and FGSM attacks, which introduce higher magnitudes of perturbation.
As the amount of perturbation increases, the accuracies of models without any protection (gray dotted curves in Figure 4) rapidly fall beneath 19%. JPEG recovers significant portions of the lost accuracy (purple curves); its effectiveness also gradually, and expectedly, declines as perturbation becomes severe. Applying more compression generally recovers more accuracy (e.g., dark purple curve, for JPEG quality 20), but at the cost of losing some accuracy on benign images. Shield (orange curve) offers a desirable trade-off, achieving good performance under severe perturbation while retaining accuracies comparable to those of the original models. Applying less compression (light purple curves) performs well on benign images but is not as effective when perturbation increases.
Table 1 (excerpt). Model accuracy (%) with no attack and under each attack:

| Defense | No Attack | CW-L2 | DF | I-FGSM | FGSM |
|---|---|---|---|---|---|
| Shield [20, 40, 60, 80] | 72.11 | 71.85 | 71.88 | 65.63 | 59.29 |
Effectiveness and Runtime Comparison against Median Filter (MF) and Total Variation Denoising (TVD). We compare JPEG compression and Shield with MF and TVD, two popular denoising techniques, because they too have potential in defending against adversarial attacks (Xu et al., 2018; Guo et al., 2018). Like JPEG, both MF and TVD are parameterized. Table 1 summarizes the performance of all the image preprocessing techniques under consideration. While all techniques are able to recover accuracies from CW-L2 and DF, both strongly optimized attacks with lower perturbation strength, the best-performing settings are from JPEG (bold font in Table 1). When faced with the large amounts of perturbation generated by the I-FGSM and FGSM attacks, Shield benefits from the combination of Stochastic Local Quantization, vaccination, and ensembling, outperforming all other techniques.
As developing practical defense is our primary goal, effectiveness, while important, is only one part of our desirable solution. Another critical requirement is that our solution be fast and scalable. Thus, we also compare the runtimes of the image processing techniques. Our comparison focuses on the most computationally intensive parts of each technique, ignoring irrelevant overheads (e.g., disk I/O) common to all techniques. All runtimes are averaged over 3 runs, using the full 50k ImageNet validation images, on a dedicated desktop computer equipped with an Intel i7-4770K quad-core CPU clocked at 3.50GHz, 4x8GB RAM, a 1TB Samsung 840 EVO-Series SSD, and two 3TB WD 7200RPM hard disks, running Ubuntu 14.04.5 LTS and Python 2.7. We used the fastest, most popular Python implementations of the image processing techniques: JPEG and MF from Pillow 5.0, and TVD from scikit-image.
As shown in Figure 5, JPEG is the fastest, spending no more than 107 seconds to compress 50k images (at JPEG quality 80). It is at least 22x faster than TVD, and 14x faster than median filter. We tested the speed of the TensorFlow implementation of Shield, which also compresses all images at high speed, taking only 150s.
4.3. Black-Box Attack with Vaccination and Ensembling
We now turn our attention to the setting where an adversary has knowledge of the model being used but does not have access to the model parameters or weights. More concretely, we vaccinate the ResNet-v2 50 model by retraining it on the ImageNet training set, preprocessing the images with JPEG compression during training. This setup constitutes a black-box attack, as the attacker only has access to the original model, not the vaccinated model being used.
We denote the original ResNet-v2 50 model, which the adversary has access to, as $M$. By retraining $M$ on images compressed at a particular JPEG quality $q$, we transform $M$ into $M_q$; e.g., for JPEG-20 vaccination, we retrain $M$ on JPEG-compressed images at quality 20 and obtain $M_{20}$. When retraining the ResNet-v2 50 models, we used stochastic gradient descent (SGD) with a fixed initial learning rate, decayed by 94% over iterations. We conducted the retraining on a GPU cluster with 12 NVIDIA Tesla K80 GPUs. In this manner, we obtain 8 models, $M_{20}$ through $M_{90}$, covering JPEG qualities 20 through 90 in increments of 10, to span a wide spectrum of JPEG qualities. Figure 6 shows the results of model vaccination against FGSM attacks, whose parameter $\epsilon$ ranges from 0 (no perturbation) to 8 (severe perturbation), in steps of 2. The plots show that retraining the model recovers even more model accuracy than using JPEG preprocessing alone (compare the unvaccinated gray dotted curve with the vaccinated orange and purple curves in Figure 6). We found that a given model $M_q$ performed best when tested on JPEG-compressed images of the same quality $q$, which was expected.
We test these models in an ensemble with two different voting schemes. In the first scheme, each of the 8 models casts a vote on the input preprocessed at every one of the 8 JPEG qualities, for a total cost of 64 votes, from which we derive the majority vote. In the second scheme, each model votes only on the input compressed at quality $q$, the JPEG quality it was trained on, incurring a cost of 8 votes.
Table 2 compares the accuracies (against FGSM) and computation costs of these two schemes with those of Shield, which also utilizes an ensemble ($M_{20}$, $M_{40}$, $M_{60}$, $M_{80}$) with a total of 4 votes. Shield achieves performance very similar to that of the vaccinated ensembles, at half the cost of the 8-vote scheme. Hence, Shield offers a favorable trade-off in terms of scalability with minimal effect on accuracy.
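Both voting schemes reduce to a simple majority vote over per-model class predictions, which can be sketched as follows (the vote matrix below is a hypothetical example, not data from our experiments):

```python
import numpy as np

def majority_vote(predictions):
    """predictions: (n_votes, n_samples) array of predicted class labels.
    Returns the per-sample majority label (ties broken by smallest label)."""
    n_votes, n_samples = predictions.shape
    out = np.empty(n_samples, dtype=predictions.dtype)
    for s in range(n_samples):
        labels, counts = np.unique(predictions[:, s], return_counts=True)
        out[s] = labels[np.argmax(counts)]   # most frequent label wins
    return out

# Hypothetical votes from four vaccinated models on three images
votes = np.array([
    [3, 7, 1],
    [3, 2, 1],
    [5, 7, 1],
    [3, 7, 2],
])
print(majority_vote(votes))   # [3 7 1]
```

The 64-vote scheme stacks 8 models x 8 qualities of predictions into this matrix; the 8-vote scheme stacks one row per model, and Shield stacks four.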
4.4. Transferability in Black-Box Setting
In this setup, we evaluate the transferability of attacked images generated using ResNet-v2 50 when they are fed to ResNet-v2 101 and Inception-v4. The attacked images are preprocessed using JPEG compression and Stochastic Local Quantization. In Table 3, we show that JPEG compression as a defense does not significantly reduce model accuracies on low-perturbation attacks like DF and CW-L2. For higher-perturbation attacks, the accuracy of Inception-v4 drops by at most 10%.
Table 3 (excerpt). Benign accuracies of the target models: Inception-v4 (80.2%) and ResNet-v2 101 (77.0%).
4.5. NIPS 2017 Competition Results
In addition to the experiment results shown above, we also participated in the NIPS 2017 competition on Defense Against Adversarial Attack using a version of our approach that did not include stochastic local quantization and vaccination to defend against attacks “in the wild.” With only an ensemble of three JPEG compression qualities (90, 80, 70), our entry received a silver badge in the competition, ranking 16th out of more than 100 submissions.
5. Significance and Impact
This work has been making multiple positive impacts on Intel’s research and product development plans. In this section, we describe such impacts in detail, and also describe how they may more broadly influence deep learning and cybersecurity. We then discuss our work’s scope, limitations, and additional practical considerations.
5.1. Software and Hardware Integration Milestones
As seen in Section 4, JPEG compression is much faster than other popular preprocessing techniques; even commodity implementations from Pillow are fast. However, in order to be deployed into a real defense pipeline, we need to evaluate its computational efficiency with tighter software and hardware integration. Fortunately, JPEG compression is a widely used and mature technique that can be easily deployed on various platforms, and due to its widespread usage, we can use off-the-shelf optimized software and hardware for such testing. One promising milestone we reached utilized Intel's Quick Sync Video (QSV) technology: a hardware core dedicated to, and optimized for, video encoding and decoding. It was introduced with the Sandy Bridge CPU microarchitecture and exists in various current Intel platforms. In our experiments, JPEG compression by Intel QSV is up to 24 times faster than the Pillow and TensorFlow implementations when evaluated on the same ImageNet validation set of 50,000 images. This computational efficiency is desirable for applications that need real-time defense, such as autonomous vehicles. In the future, we plan to explore the feasibility of our approach on more hardware platforms, such as the Intel Movidius Compute Stick (https://developer.movidius.com), a low-power USB-based deep learning inference kit.
5.2. New Computational Paradigm: Secure Deep Learning
This research has sparked insightful discussion with teams of Intel QSV, Intel Deep Learning SDK, and Intel Movidius Compute Stick. This work not only educates industry regarding concepts and defenses of adversarial machine learning, but also provides opportunities to advance deep learning software and hardware development to incorporate adversarial machine learning defenses. For example, almost all defenses incur certain levels of computational overhead. This may be due to image preprocessing techniques (Guo et al., 2018; Luo et al., 2015), the use of multiple models in ensembles (Strauss et al., 2017), the introduction of adversarial perturbation detectors (Metzen et al., 2017; Xu et al., 2018), or the increase in training time for adversarial training (Goodfellow et al., 2014). However, while hardware and system improvements for fast deep learning training and inference remain an active area of research, secure machine learning workloads still receive relatively little attention, suggesting room for improvement. We believe this will accelerate a positive shift of thinking in the industry in the near future, from addressing problems like “How do we build deep learning accelerators?” to problems such as “How do we build deep learning accelerators that are not only fast but also secure?”. Understanding such hardware implications is important for microprocessor manufacturers, equipment vendors, and companies offering cloud computing services.
5.3. Scope and Limitations
In this work, we focus on systematically studying the benefit of compression on its own. As myriad newer and stronger attack strategies are continuously discovered, the limitations of existing single defenses are revealed. Our approach is not a panacea for all possible (future) attacks, and we do not expect or intend for it to be used in isolation from other techniques. Rather, our methods should be used together with other defense techniques, to potentially develop an even stronger defense. Multi-layered protection is a proven, long-standing defense strategy that has been pervasive in security research and practice (Tamersoy et al., 2014; Chen et al., 2017). Fortunately, since our methods are preprocessing techniques, they are easy to integrate into many defense pipelines.
6. Related Work
Due to intriguing theoretical properties and practical importance, there has been a surge in the number of papers in the past few years attempting to find countermeasures against adversarial attacks. These include detecting adversarial examples before performing classification (Metzen et al., 2017; Feinman et al., 2017), modifying the network architecture and the underlying primitives used (Gu and Rigazio, 2014; Krotov and Hopfield, 2017; Ranjan et al., 2017), modifying the training process (Goodfellow et al., 2014; Papernot et al., 2016c), and using preprocessing techniques to remove adversarial perturbations (Dziugaite et al., 2016; Bhagoji et al., 2017; Luo et al., 2015; Guo et al., 2018). The preprocessing approach is most relevant to our work. Below, we describe two methods in this category — the median filter and total variation denoising — which we compared against in Section 4. We then discuss some recent attacks that claim to break preprocessing defenses.
6.1. Image Preprocessing as Defense
Median Filter. This method uses a sliding window over the image and replaces each pixel with the median value of its neighboring pixels, spatially smoothing the image. The size of the sliding window controls the smoothness; for example, a larger window size produces blurrier images. This technique has been used in multiple prior defense works (Guo et al., 2018; Xu et al., 2018).
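A minimal NumPy sketch of the median filter (for illustration only; our experiments use Pillow's implementation):

```python
import numpy as np

def median_filter(img, size=3):
    """Replace each pixel with the median of its size x size neighborhood
    (edge-padded), spatially smoothing the image."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i+size, j:j+size])
    return out

img = np.full((5, 5), 10, dtype=np.uint8)
img[2, 2] = 200                      # an isolated "adversarial" spike
print(median_filter(img)[2, 2])      # 10 -- the spike is removed
```

Because the median is insensitive to a single outlier in the window, isolated adversarial pixels are simply discarded rather than averaged in.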
Total Variation Denoising. This method is based on the principle that images with higher levels of (adversarial) noise tend to have larger total variations: the sum of the absolute differences between adjacent pixel values. Denoising is performed by reducing the total variation while keeping the denoised image close to the original one. A weighting parameter trades off the level of total variation against the distance from the original image. Compared with the median filter, this method is more effective at removing adversarial noise while preserving image details (Guo et al., 2018).
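The total variation itself is straightforward to compute. The sketch below (our own illustration, using the anisotropic form) shows that a noisy image has a higher TV than its smooth counterpart; TVD drives this quantity down while penalizing distance from the original image.

```python
import numpy as np

def total_variation(img):
    """Anisotropic total variation: sum of absolute differences between
    vertically and horizontally adjacent pixels."""
    img = img.astype(float)
    return (np.abs(np.diff(img, axis=0)).sum()
            + np.abs(np.diff(img, axis=1)).sum())

rng = np.random.default_rng(0)
smooth = np.tile(np.linspace(0, 255, 16), (16, 1))   # gentle gradient
noisy = smooth + rng.normal(0, 8, smooth.shape)       # add pixel noise
print(total_variation(noisy) > total_variation(smooth))  # True
```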
6.2. Attacks against Preprocessing Techniques
One of the reasons why adding preprocessing steps increases attack difficulty is that many preprocessing operations are non-differentiable, thus restricting the feasibility of gradient-based attacks. In JPEG compression, the quantization in the frequency domain is a non-differentiable operation.
Shin and Song (Shin and Song, 2017) propose a method that approximates the quantization in JPEG with a differentiable function. They also optimize the perturbation over multiple compression qualities to ensure an adversarial image is robust at test time. However, the paper reports only preliminary results on 1,000 images. It is also unclear whether their attack is effective against our more advanced Shield method, which introduces more randomization to combat adversarial noise.
Backward Pass Differentiable Approximation (Athalye et al., 2018) is another potential approach for bypassing non-differentiable preprocessing techniques. To attack JPEG preprocessing, it performs forward propagation through the combined JPEG compression and DNN pipeline, but replaces the compression operation with the identity function during the backward pass. The intuition is that the compressed image should look similar to the original, so the operation is well approximated by the identity function.
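The sketch below illustrates the BPDA idea on a toy problem: a linear "model" preceded by a non-differentiable quantizer stands in for the JPEG-plus-DNN pipeline, and the backward pass treats the quantizer as the identity. All names and values here are illustrative:

```python
import numpy as np

def quantize(x, q=16.0):
    """Stand-in for a non-differentiable preprocessing step."""
    return np.round(x / q) * q

# Toy differentiable "model": a fixed linear score w . x, whose gradient
# with respect to its input is simply w.
w = np.array([0.5, -1.0, 2.0])
def score(z):
    return float(w @ z)

x = np.array([30.0, 70.0, 120.0])

# BPDA: run the real (non-differentiable) preprocessing on the forward
# pass, but treat it as the identity on the backward pass.  The gradient
# of score(quantize(x)) w.r.t. x is then approximated by the model's
# gradient evaluated at quantize(x) -- for this linear model, just w.
grad_x = w
step = 16.0  # must exceed the quantization bin to move the forward output
x_adv = x + step * np.sign(grad_x)   # one FGSM-style ascent step
print(score(quantize(x)), score(quantize(x_adv)))  # 208.0 232.0
```

Note that the ascent step must be large enough to cross a quantization boundary before the forward output changes at all, which is one intuition for why coarser compression raises the cost of such attacks.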
7. Conclusion
In this paper, we highlighted the urgent need for practical defenses for deep learning models that can be readily deployed. We drew inspiration from JPEG image compression, a well-known and ubiquitous image processing technique, and placed it at the core of our new deep learning model defense framework: Shield. Since many attack strategies aim to perturb image pixels in ways that are visually imperceptible, the Shield defense framework utilizes JPEG compression to effectively “compress away” such pixel manipulation. Shield immunizes DNN models against being confused by compression artifacts by “vaccinating” them: re-training with compressed images, where different compression levels are applied to generate multiple vaccinated models that are ultimately used together in an ensemble defense. Furthermore, Shield adds an additional layer of protection by employing randomization at test time: it compresses different regions of an image using random compression levels, making it harder for an adversary to estimate the transformation performed. This novel combination of vaccination, ensembling, and randomization makes Shield a fortified, multi-pronged protection, while remaining fast and effective without requiring knowledge of the model. We conducted extensive, large-scale experiments on the ImageNet dataset and showed that our approaches eliminate up to 94% of black-box attacks and 98% of gray-box attacks delivered by the strongest recent attacks. To ensure reproducibility of our results, we have open-sourced our code on GitHub.
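The test-time randomization described above can be sketched as follows; per-block uniform quantization with a randomly drawn step size is used here as a stand-in for JPEG compression at randomly chosen quality levels (the block size and step sizes are illustrative, not Shield's actual settings):

```python
import numpy as np

def stochastic_local_quantize(img, block=8, steps=(4.0, 8.0, 16.0, 32.0),
                              seed=None):
    """Shield-style randomized preprocessing sketch: quantize each
    block x block region with a step size drawn at random, so the exact
    transformation applied to any region is unpredictable.  Per-block
    uniform quantization stands in for JPEG at random qualities."""
    rng = np.random.default_rng(seed)
    out = img.astype(float).copy()
    h, w = img.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            q = rng.choice(steps)  # random "compression level" per block
            out[i:i + block, j:j + block] = np.round(
                out[i:i + block, j:j + block] / q) * q
    return out

img = np.arange(256, dtype=float).reshape(16, 16)
out = stochastic_local_quantize(img, seed=0)
print(out.shape)  # (16, 16)
```

Because the per-block choice is re-drawn at every invocation, an adversary cannot know which transformation a given region will undergo, which is the intuition behind this layer of the defense.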
Acknowledgments
This material is based in part upon work supported by the National Science Foundation under Grant Numbers IIS-1563816, CNS-1704701, and TWC-1526254. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. This research is also supported in part by gifts from Intel, Google, Symantec, Yahoo! Labs, eBay, Amazon, and LogicBlox.
References
- Athalye et al. (2018) Anish Athalye, Nicholas Carlini, and David Wagner. 2018. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. arXiv preprint arXiv:1802.00420 (2018).
- Athalye and Sutskever (2017) Anish Athalye and Ilya Sutskever. 2017. Synthesizing robust adversarial examples. arXiv preprint arXiv:1707.07397 (2017).
- Bhagoji et al. (2017) Arjun Nitin Bhagoji, Daniel Cullina, and Prateek Mittal. 2017. Dimensionality Reduction as a Defense against Evasion Attacks on Machine Learning Classifiers. arXiv preprint arXiv:1704.02654 (2017).
- Carlini and Wagner (2017) Nicholas Carlini and David Wagner. 2017. Towards evaluating the robustness of neural networks. In Security and Privacy (SP), 2017 IEEE Symposium on. IEEE, 39–57.
- Chen et al. (2017) Shang-Tse Chen, Yufei Han, Duen Horng Chau, Christopher Gates, Michael Hart, and Kevin A Roundy. 2017. Predicting Cyber Threats with Virtual Security Products. In Proceedings of the 33rd Annual Computer Security Applications Conference. ACM, 189–199.
- Chuan Guo (2018) Chuan Guo, Mayank Rana, Moustapha Cisse, and Laurens van der Maaten. 2018. Countering Adversarial Images using Input Transformations. International Conference on Learning Representations (2018). https://openreview.net/forum?id=SyJ7ClWCb
- Das et al. (2017) Nilaksh Das, Madhuri Shanbhogue, Shang-Tse Chen, Fred Hohman, Li Chen, Michael E Kounavis, and Duen Horng Chau. 2017. Keeping the bad guys out: Protecting and vaccinating deep learning with jpeg compression. arXiv preprint arXiv:1705.02900 (2017).
- Dziugaite et al. (2016) Gintare Karolina Dziugaite, Zoubin Ghahramani, and Daniel M Roy. 2016. A study of the effect of JPG compression on adversarial images. arXiv preprint arXiv:1608.00853 (2016).
- Evtimov et al. (2017) Ivan Evtimov, Kevin Eykholt, Earlence Fernandes, Tadayoshi Kohno, Bo Li, Atul Prakash, Amir Rahmati, and Dawn Song. 2017. Robust physical-world attacks on machine learning models. arXiv preprint arXiv:1707.08945 (2017).
- Eykholt et al. (2017) Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Dawn Song, Tadayoshi Kohno, Amir Rahmati, Atul Prakash, and Florian Tramer. 2017. Note on Attacking Object Detectors with Adversarial Stickers. arXiv preprint arXiv:1712.08062 (2017).
- Feinman et al. (2017) Reuben Feinman, Ryan R Curtin, Saurabh Shintre, and Andrew B Gardner. 2017. Detecting Adversarial Samples from Artifacts. arXiv preprint arXiv:1703.00410 (2017).
- Goodfellow et al. (2014) Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. In ICLR.
- Grosse et al. (2016) Kathrin Grosse, Nicolas Papernot, Praveen Manoharan, Michael Backes, and Patrick McDaniel. 2016. Adversarial perturbations against deep neural networks for malware classification. arXiv preprint arXiv:1606.04435 (2016).
- Gu and Rigazio (2014) Shixiang Gu and Luca Rigazio. 2014. Towards deep neural network architectures robust to adversarial examples. arXiv preprint arXiv:1412.5068 (2014).
- Hu and Tan (2017) Weiwei Hu and Ying Tan. 2017. Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN. arXiv preprint arXiv:1702.05983 (2017).
- Huang et al. (2017) Sandy Huang, Nicolas Papernot, Ian Goodfellow, Yan Duan, and Pieter Abbeel. 2017. Adversarial attacks on neural network policies. arXiv preprint arXiv:1702.02284 (2017).
- Krizhevsky et al. (2012) Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems. 1097–1105.
- Krotov and Hopfield (2017) Dmitry Krotov and John J Hopfield. 2017. Dense Associative Memory is Robust to Adversarial Inputs. arXiv preprint arXiv:1701.00939 (2017).
- Kurakin et al. (2016) Alexey Kurakin, Ian Goodfellow, and Samy Bengio. 2016. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533 (2016).
- Lin et al. (2017) Yen-Chen Lin, Zhang-Wei Hong, Yuan-Hong Liao, Meng-Li Shih, Ming-Yu Liu, and Min Sun. 2017. Tactics of Adversarial Attack on Deep Reinforcement Learning Agents. arXiv preprint arXiv:1703.06748 (2017).
- Luo et al. (2015) Yan Luo, Xavier Boix, Gemma Roig, Tomaso Poggio, and Qi Zhao. 2015. Foveation-based mechanisms alleviate adversarial examples. arXiv preprint arXiv:1511.06292 (2015).
- Metzen et al. (2017) Jan Hendrik Metzen, Tim Genewein, Volker Fischer, and Bastian Bischoff. 2017. On detecting adversarial perturbations. In ICLR.
- Moosavi-Dezfooli et al. (2017) Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. 2017. Universal adversarial perturbations. In CVPR.
- Moosavi-Dezfooli et al. (2016) Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. 2016. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks. In CVPR.
- Papernot et al. (2017) Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. 2017. Practical Black-Box Attacks Against Machine Learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security (ASIA CCS ’17). 506–519.
- Papernot et al. (2016c) Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. 2016c. Distillation as a defense to adversarial perturbations against deep neural networks. In IEEE Symposium on Security and Privacy. 582–597.
- Papernot et al. (2016a) Nicolas Papernot, Patrick D. McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, and Ananthram Swami. 2016a. The Limitations of Deep Learning in Adversarial Settings. In IEEE European Symposium on Security and Privacy, EuroS&P 2016, Saarbrücken, Germany, March 21-24, 2016. 372–387.
- Papernot et al. (2016b) Nicolas Papernot, Patrick D. McDaniel, Ananthram Swami, and Richard E. Harang. 2016b. Crafting adversarial input sequences for recurrent neural networks. In 2016 IEEE Military Communications Conference, MILCOM. 49–54.
- Ranjan et al. (2017) Rajeev Ranjan, Swami Sankaranarayanan, Carlos D Castillo, and Rama Chellappa. 2017. Improving Network Robustness against Adversarial Attacks with Compact Convolution. arXiv preprint arXiv:1712.00699 (2017).
- Sharif et al. (2016) Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and Michael K Reiter. 2016. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. ACM, 1528–1540.
- Shin and Song (2017) Richard Shin and Dawn Song. 2017. JPEG-resistant Adversarial Images. NIPS 2017 Workshop on Machine Learning and Computer Security (2017).
- Strauss et al. (2017) Thilo Strauss, Markus Hanselmann, Andrej Junginger, and Holger Ulmer. 2017. Ensemble methods as a defense to adversarial perturbations against deep neural networks. arXiv preprint arXiv:1709.03423 (2017).
- Szegedy et al. (2014) Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In ICLR.
- Tamersoy et al. (2014) Acar Tamersoy, Kevin Roundy, and Duen Horng Chau. 2014. Guilt by association: large scale malware detection by mining file-relation graphs. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 1524–1533.
- Xu et al. (2018) Weilin Xu, David Evans, and Yanjun Qi. 2018. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks. In Proceedings of the 2018 Network and Distributed Systems Security Symposium (NDSS).