With the increased adoption of machine and deep learning models in critical systems, adversarial attacks against these models have become a growing concern. Adversarial examples have been demonstrated in a widening range of applications, not only in classification tasks but also in malware detection and in speech and audio recognition. In the physical world, models used in facial recognition systems and in autonomous vehicles' interpretation of traffic signs, road patterns, and pedestrians are also susceptible to these distorted inputs. Adversarial attacks such as the fast gradient sign method (FGSM), iterative FGSM (I-FGSM), Carlini-Wagner's L2 (CWL2), and DeepFool take a broad range of approaches toward a similar goal: drifting a targeted model away from its original intent through a series of malicious inputs. Attacks can be categorized as targeted or non-targeted, depending on whether they aim to mislead a model toward a specific desired outcome or simply cause it to misinterpret the input. Additionally, attacks may be white-box or black-box, depending on whether the adversary has prior knowledge of the underlying model, training information, or system.
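The core of FGSM is a single signed-gradient step, x_adv = x + eps * sign(grad_x loss). As a hedged illustration only, the sketch below applies that step to a toy linear classifier with a closed-form input gradient, not to the convolutional models attacked in this work; the names `fgsm_perturb` and `logistic_input_grad` are our own.

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """FGSM: step the input in the sign of the loss gradient to increase the loss."""
    return x + eps * np.sign(grad)

def logistic_input_grad(w, b, x):
    """Gradient of the logistic loss log(1 + exp(-(w.x + b))) for label +1, w.r.t. x."""
    z = w @ x + b
    return -w / (1.0 + np.exp(z))  # equals -sigmoid(-z) * w

w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.4, -0.1, 0.2])       # clean input: w.x = 0.7, classified positive
g = logistic_input_grad(w, b, x)     # gradient of the loss w.r.t. the input
x_adv = fgsm_perturb(x, g, eps=0.8)  # adversarial input

print(np.sign(w @ x + b), np.sign(w @ x_adv + b))  # the predicted sign flips
```

The same update applies unchanged to deep networks; only the gradient computation (backpropagation through the model) differs.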
In this work, we focus on black-box non-targeted attacks on images, in an attempt to provide a defense for general machine learning models against a range of attacks. We survey prevalent defense measures, identifying both approaches that remove distortions globally and more recent localized approaches. Ultimately, we propose a new defense strategy, SAD, which uses regions of interest to strategically reduce adversarial distortions. Figure 1 motivates the effectiveness of our proposed approach, comparing the saliency maps of an adversarial example image after application of bit-depth reduction, JPEG compression, SHIELD, and our proposed SAD model. Some defense techniques, including bit-depth reduction and JPEG compression, clean inputs globally, while others, such as SHIELD, reduce distortions with a more localized approach. Both types of approach have demonstrated performance against adversarial attacks on image classification models; however, it is difficult to ensure preservation of the data's integrity.
While ROI selection can be problematic on a perturbed image, some visual saliency models have proven effective despite adversarial attack. In Figure 2, an image from the ECSSD dataset is shown with a generated saliency map, before and after an FGSM attack. Note the reduced identification of the salient region in the upper leaves of the image; the overall content, however, remains correctly identified.
In this work, we propose a novel defense technique based on visual saliency. The proposed approach identifies a region of interest (ROI) and leverages a saliency map to apply targeted cleaning techniques. To demonstrate our proposed method, we evaluate the performance of state-of-the-art saliency models on established saliency datasets under the following conditions: original data, data attacked by FGSM and DeepFool, and data subsequently cleaned by four methods. We discuss the impact of the choice of saliency estimation approach on the effectiveness of our defense, and recommend augmentations for future improvement.
2 Related Work
We divide cleaning techniques into two major categories: globalized techniques and localized techniques. Globalized techniques, including bit-depth reduction and JPEG compression, have proven successful in reducing the effectiveness of adversarial attacks through relatively simple means. Bit-depth reduction limits the colors in an image, which reduces distortions and therefore the effectiveness of adversarial attacks. However, while bit-depth reduction removes general perturbations, it can also damage core features used to identify salient information. JPEG compression can likewise reduce the effectiveness of malicious inputs: compressing the image smooths out malicious perturbations, but at the same time introduces unwanted artifacts, which can have unexpected consequences for saliency generation. While each technique approaches the problem differently, they all reduce the overall number of features within the data. These globalized techniques, however, are predictable and thus can be easily circumvented. Related to globalized approaches, distillation has been demonstrated by Papernot et al. as a viable means of defending against adversarial examples in deep neural networks. MagNet takes a cryptography-inspired approach to defending against adversarial examples: built for gray-box attacks, its defense is randomly selected from a set of precomputed methods at runtime. Beyond globalized approaches, there are more localized approaches, such as image quilting, watermarking, and SHIELD. The inherent randomness in these techniques makes them difficult for adversarial attacks to circumvent.
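As a concrete sketch of the simplest globalized technique, bit-depth reduction amounts to quantizing each 8-bit channel to fewer levels. The helper below is illustrative only (a common feature-squeezing-style formulation), not the exact implementation evaluated later in this work.

```python
import numpy as np

def bit_depth_reduce(img, bits=3):
    """Quantize an 8-bit image to `bits` bits per channel (a globalized defense)."""
    levels = 2 ** bits
    # Bucket 0..255 into `levels` bins, then rescale back to the 0..255 range.
    quantized = np.floor(img.astype(np.float64) / 256.0 * levels)
    return (quantized * (255.0 / (levels - 1))).astype(np.uint8)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
out = bit_depth_reduce(img, bits=3)
print(len(np.unique(out)))  # at most 2**3 = 8 distinct intensity values remain
```

Small adversarial perturbations that fall within one quantization bucket are erased entirely, which is exactly why the defense also risks erasing fine legitimate detail.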
We present a unique method that reduces the effectiveness of adversarial attacks while preserving original content, and demonstrate its viability by examining the saliency of the images prior to attacks, under attacks, and after defenses.
3 Saliency-based Adversarial Defense (SAD)
In response to the need for a targeted defense measure against diverse adversarial inputs, we propose a Saliency-based Adversarial Defense (SAD) approach outlined in Figure 3. Our model estimates relevant regions of interest (ROI) in an input and strategically applies countermeasures against adversarial perturbations.
3.1 Model Description
In order to select the most relevant ROI, our proposed model leverages a visual saliency estimation model. First, a saliency map is generated for the input image; our implementation uses PiCANet trained on the DUTS-TR dataset. Once a map is generated, JPEG compression is applied at differing qualities based on the saliency predictions.
In addition to the images to be processed, a list of compression levels must be passed as a parameter to our model. This list is denoted Q = (q_1, ..., q_n), where q_k denotes the k-th compression level. Much like SHIELD, each processed image is segmented into 8×8 windows, where w_ij denotes the window at the i-th row and j-th column of the image. The saliency map, taken as a grey-scale image, is identically segmented into 8×8 windows. Each window in the saliency map is assigned a scalar value from 0 to 255, based on the average saliency prediction over all pixels within the window. These scalar values, denoted s_ij, are then divided by a threshold t = 256/n and used as an index into Q. The compression level for w_ij can therefore be expressed as

q(w_ij) = Q[ floor(s_ij / t) ],

so that more salient windows are assigned higher, less destructive compression qualities.
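The per-window lookup described above can be sketched as follows, assuming a grey-scale saliency map whose dimensions are multiples of the window size; the helper name `sad_quality_map` is our own, and the per-window JPEG encoding step itself is omitted.

```python
import numpy as np

def sad_quality_map(saliency, Q, window=8):
    """Assign a compression quality from Q to each `window`x`window` block,
    based on the block's mean saliency (0-255, higher = more salient)."""
    Q = np.asarray(Q)
    t = 256.0 / len(Q)  # threshold dividing the saliency range into len(Q) bins
    rows, cols = saliency.shape[0] // window, saliency.shape[1] // window
    qualities = np.zeros((rows, cols), dtype=Q.dtype)
    for i in range(rows):
        for j in range(cols):
            block = saliency[i*window:(i+1)*window, j*window:(j+1)*window]
            idx = int(block.mean() // t)        # average saliency -> index into Q
            qualities[i, j] = Q[min(idx, len(Q) - 1)]
    return qualities

# One fully salient 8x8 window gets the highest quality; the background the lowest.
sal = np.zeros((16, 16), dtype=np.uint8)
sal[:8, :8] = 255
print(sad_quality_map(sal, Q=[50, 70, 90]))  # [[90 50] [50 50]]
```

Each window would then be JPEG-compressed at its assigned quality, preserving salient regions while aggressively cleaning the background.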
The goal of this approach is to reduce the effectiveness of an adversarial attack while minimizing damage done to the ROI.
This technique is similar in nature to SHIELD, with the primary difference being the replacement of a randomized compression algorithm with a saliency-based one. Figure 4 provides an example input, saliency prediction, and output, demonstrating that the background is compressed significantly more than the salient parts of the image. In this example, the salient regions of the output are compressed at quality 90 while non-salient regions are compressed at quality 20.
4 Experiments & Results
In this section, we first review the extensive setup of datasets, adversarial image generations, adversarial defenses, and evaluation metrics used in this work. We then specify the series of experiments performed, and demonstrate the performance of our proposed approach in the final section.
Two popular saliency datasets, ECSSD and SALICON, were chosen for the experimental setup, due to their prominence in recent saliency research and their degree of difficulty. The Extended Complex Scene Saliency Dataset (ECSSD) comprises complex scenes, presenting textures and structures common to real-world images. ECSSD contains 1,000 intricate images and their respective ground-truth saliency maps, each created as an average of the labels of five human participants. Saliency in Context (SALICON) is a similarly complex dataset, chosen in this work to provide a broader range of scenes and a larger number of samples. SALICON contains a training set of 10,000 images with their respective ground-truth saliency maps, as well as a validation set of 5,000 images with corresponding ground truths, and was designed for the purpose of evaluating saliency models on natural scene images. In addition to ground-truth saliency maps, this dataset provides fixation maps for analysis. For our experiments, we selected only the training set of SALICON to evaluate the saliency models, for a total of 10,000 images on this dataset.
Two adversarial attacks were chosen in this work for their prominence as well as their diversity in approach. We leveraged the FGSM and DeepFool attacks to evaluate the efficacy of our proposed countermeasure. The Fast Gradient Sign Method (FGSM) was chosen as a more traditional attack that has proven effective in creating input images that significantly mislead popular convolutional frameworks; its inclusion allows us to test the effectiveness of saliency models and cleaning algorithms against a well-known and common adversarial attack. DeepFool was chosen as it is considered a state-of-the-art attack against image-based classification models, with a more robust attack surface. As each of these approaches requires an objective function to attack, we chose a common VGG16 backbone pretrained on the ImageNet dataset.
To defend against the adversarial examples, we selected three countermeasures for comparison with our proposed SAD approach: bit-depth reduction, JPEG compression, and SHIELD. We chose these as a balance of global and localized defense techniques, for a robust comparison with our proposed approach. Both bit-depth reduction and JPEG compression are established countermeasures against adversarial attacks, effective in reducing the number of perturbations present within the images. For the purposes of our experiments, we used a 3-bit depth reduction and a JPEG compression quality of 80. The recent Secure Heterogeneous Image Ensemble with Localized Denoising (SHIELD) uses randomized compression levels to reduce the number of perturbations present within the images.
To evaluate the effectiveness of our defense, we run all images (original, adversarial, and "cleaned") through state-of-the-art saliency models and evaluate the performance of each model. As these models have demonstrated top performance on these popular saliency datasets, we can establish how they are affected by adversarial inputs. In this work, we selected three diverse models to generate saliency maps for the images: BASNet, CPD, and SalGAN. The Boundary-Aware Salient Object Detection model (BASNet) uniquely leverages edges and bounding boxes to help establish a saliency map for an image. The Cascaded Partial Decoder (CPD) model incorporates a holistic attention mechanism into the traditional encoder-decoder framework. The Saliency GAN (SalGAN) model is a generative adversarial network approach, providing discriminator and generator models in adversarial training. It is important to note that SalGAN was mainly designed to generate saliency maps based on eye fixations rather than basic saliency maps.
Finally, we leverage saliency metrics as an evaluation of the effectiveness of our proposed defense. The MIT Saliency Benchmark provides established metrics for saliency estimation models, on both binary saliency maps and fixation maps. For the purposes of this work, applied only to ground-truth maps, we selected Earth Mover's Distance (EMD), Pearson's Correlation Coefficient (CC), Normalized Scanpath Saliency (NSS), KL-Divergence (KLD), and the similarity score (SIM).
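For illustration, three of these metrics (CC, SIM, and KLD) admit short closed-form implementations. The sketch below follows common formulations of these metrics and is not the benchmark's reference code; NSS and EMD are omitted, as they require fixation points and an optimal-transport solver, respectively.

```python
import numpy as np

def cc(a, b):
    """Pearson's Correlation Coefficient between two saliency maps."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

def sim(a, b):
    """Similarity: sum of elementwise minima of the maps, each normalized to sum 1."""
    a, b = a / a.sum(), b / b.sum()
    return np.minimum(a, b).sum()

def kld(pred, gt, eps=1e-12):
    """KL divergence of the predicted map from the ground truth (as distributions)."""
    pred, gt = pred / pred.sum(), gt / gt.sum()
    return np.sum(gt * np.log(gt / (pred + eps) + eps))

rng = np.random.default_rng(1)
gt = rng.random((8, 8)) + 0.1
pred = gt.copy()
# Identical maps score perfectly: CC = 1.0, SIM = 1.0, KLD ~ 0.0.
print(round(cc(gt, pred), 3), round(sim(gt, pred), 3), round(kld(pred, gt), 3))
```

Higher CC and SIM indicate better agreement with the ground truth, while lower EMD and KLD do; this is why the tables below improve in different directions across columns.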
After establishing a baseline, we performed separate FGSM and DeepFool attacks on the images of each dataset. For uniform comparison, all attacks leveraged a common VGG16 backbone. The same saliency models were used to generate saliency maps for each set of attacked images.
Finally, all attacked images were cleaned using a series of adversarial defenses. We started by performing bit-depth reduction on the attacked images, reducing them to a 3-bit color representation. Next, we performed JPEG compression on the attacked images at a quality level of 80. We then applied the state-of-the-art defense SHIELD to the attacked images. SHIELD builds on JPEG compression, but instead of applying a uniform compression level to the whole image, it compresses the image in patches, randomly determining the quality reduction of each patch. The images produced by each cleaning technique were fed into all of the saliency models, and the resulting saliency maps were evaluated with our metrics to show how each cleaning technique affects saliency map generation.
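The random patch-wise quality assignment at the heart of SHIELD can be sketched as below. The function name and the candidate quality set are illustrative assumptions on our part, and the full SHIELD defense additionally includes model "vaccination" and ensembling, which this sketch omits.

```python
import numpy as np

def shield_quality_map(shape, qualities=(20, 40, 60, 80), window=8, seed=None):
    """SHIELD-style sketch: draw a random JPEG quality for each window, so an
    attacker cannot anticipate the exact compression that will be applied."""
    rng = np.random.default_rng(seed)
    rows, cols = shape[0] // window, shape[1] // window
    return rng.choice(qualities, size=(rows, cols))

qmap = shield_quality_map((64, 64), seed=0)
print(qmap.shape)  # (8, 8): one independently drawn quality per 8x8 window
print(sorted(set(qmap.ravel().tolist())))
```

SAD replaces exactly this random draw with the saliency-indexed lookup of Section 3.1, keeping the per-window structure but making the quality choice content-aware.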
Finally, using the same experimental guidelines, we performed SAD on the attacked datasets. Testing was performed with two lists of compression qualities, (20, 50, 70, 70, 80, 90) and (50, 70, 90). The cleaned SAD images were then run through the same metrics in order to make a direct comparison between our technique and other modern cleaning techniques.
Figure 5 provides a collection of images picked from ECSSD. The first two rows of the figure contain the original images and their respective ground truths. The third row contains the saliency maps generated by BASNet from FGSM adversarial examples; FGSM is shown to cause minor distortions to these maps. The following rows contain the resulting saliency maps after bit-depth reduction, JPEG compression, SHIELD, and SAD, respectively. These rows highlight the effects that each cleaning technique has on saliency map generation.
Tables 2 and 3 show the metric results of running BASNet and CPD, respectively, on the entire ECSSD dataset. In each of these cases we conclude that SAD performs significantly better against global attacks, such as FGSM, than against localized attacks, such as DeepFool. This is because global attacks place more distortions in the non-salient regions, so more distortions are removed overall. For localized attacks on ECSSD, SAD performed worse than standard JPEG compression, but the difference in performance is comparatively small. Figures 6 and 7 present min-max normalized graphs to visualize these results.
Table 4 shows the metric results of running CPD on the SALICON dataset. In this case, because CPD does not perform well on this fixation-based dataset, the overall results do not vary much between the original, attacked, and cleaned examples.
In general, DeepFool has little to no effect on saliency prediction, as illustrated by Figure 8. In this figure we see only slight differences between the original, attacked, and cleaned saliency predictions. This result is further supported by the DeepFool metrics across all tables.
Table 1. Data: SALICON, Model: BASNet (values rounded to four decimal places)

| Attack + Defense | EMD | CC | NSS | KLD | SIM |
|---|---|---|---|---|---|
| FGSM + Bit-depth Reduction | 75.0499 | 0.3050 | 0.2969 | 13.3194 | 0.3179 |
| FGSM + JPEG80 Compression | 79.7355 | 0.4080 | 0.3639 | 11.4083 | 0.3880 |
| FGSM + SHIELD | 79.4497 | 0.4092 | 0.3638 | 11.4385 | 0.3876 |
| FGSM + SAD (20 50 70 70 80 90) | 79.2551 | 0.4074 | 0.3621 | 11.5170 | 0.3857 |
| FGSM + SAD (50 70 90) | 79.8962 | 0.4123 | 0.3667 | 11.3066 | 0.3907 |
| DeepFool + Bit-depth Reduction | 78.7289 | 0.3303 | 0.3210 | 12.4377 | 0.3443 |
| DeepFool + JPEG80 Compression | 83.4492 | 0.4217 | 0.3755 | 10.7212 | 0.4053 |
| DeepFool + SHIELD | 83.2413 | 0.4230 | 0.3760 | 10.7179 | 0.4055 |
| DeepFool + SAD (20 50 70 70 80 90) | 83.1068 | 0.4207 | 0.3743 | 10.7888 | 0.4032 |
| DeepFool + SAD (50 70 90) | 83.5462 | 0.4242 | 0.3771 | 10.6482 | 0.4069 |
Table 2. Data: ECSSD, Model: BASNet (values rounded to four decimal places)

| Attack + Defense | EMD | CC | NSS | KLD | SIM |
|---|---|---|---|---|---|
| FGSM + Bit-depth Reduction | 41.2145 | 0.5988 | 1.2974 | 8.7449 | 0.5463 |
| FGSM + JPEG80 Compression | 45.4404 | 0.8404 | 1.8235 | 3.2679 | 0.8006 |
| FGSM + SHIELD | 45.2672 | 0.8304 | 1.8054 | 3.5604 | 0.7887 |
| FGSM + SAD (20 50 70 70 80 90) | 45.3585 | 0.8507 | 1.8514 | 3.1705 | 0.8132 |
| FGSM + SAD (50 70 90) | 45.9314 | 0.8615 | 1.8741 | 2.8253 | 0.8249 |
| DeepFool + Bit-depth Reduction | 43.4625 | 0.6594 | 1.4302 | 7.1567 | 0.6113 |
| DeepFool + JPEG80 Compression | 47.9334 | 0.9081 | 1.9722 | 1.6158 | 0.8803 |
| DeepFool + SHIELD | 47.7140 | 0.9000 | 1.9532 | 1.7814 | 0.8703 |
| DeepFool + SAD (20 50 70 70 80 90) | 47.3348 | 0.9017 | 1.9606 | 1.8585 | 0.8717 |
| DeepFool + SAD (50 70 90) | 47.7736 | 0.9041 | 1.9615 | 1.6855 | 0.8762 |
Table 3. Data: ECSSD, Model: CPD (values rounded to four decimal places)

| Attack + Defense | EMD | CC | NSS | KLD | SIM |
|---|---|---|---|---|---|
| FGSM + Bit-depth Reduction | 37.9378 | 0.5366 | 1.1333 | 9.1397 | 0.4920 |
| FGSM + JPEG80 Compression | 45.0415 | 0.8209 | 1.7868 | 3.1431 | 0.7796 |
| FGSM + SHIELD | 44.6365 | 0.8076 | 1.7566 | 3.5211 | 0.7644 |
| FGSM + SAD (20 50 70 70 80 90) | 44.8245 | 0.8168 | 1.7850 | 3.5045 | 0.7739 |
| FGSM + SAD (50 70 90) | 45.2944 | 0.8306 | 1.8130 | 3.0791 | 0.7902 |
| DeepFool + Bit-depth Reduction | 40.7708 | 0.6279 | 1.3420 | 6.7023 | 0.5821 |
| DeepFool + JPEG80 Compression | 47.9649 | 0.8998 | 1.9567 | 1.3692 | 0.8691 |
| DeepFool + SHIELD | 47.4959 | 0.8850 | 1.9266 | 1.6763 | 0.8518 |
| DeepFool + SAD (20 50 70 70 80 90) | 47.2173 | 0.8872 | 1.9295 | 1.7488 | 0.8549 |
| DeepFool + SAD (50 70 90) | 47.6571 | 0.8921 | 1.9395 | 1.5460 | 0.8613 |
Table 4. Data: SALICON, Model: CPD (values rounded to four decimal places)

| Attack + Defense | EMD | CC | NSS | KLD | SIM |
|---|---|---|---|---|---|
| FGSM + Bit-depth Reduction | 71.8142 | 0.3704 | 0.3448 | 11.4789 | 0.3569 |
| FGSM + JPEG80 Compression | 76.7263 | 0.4393 | 0.3782 | 10.9540 | 0.3932 |
| FGSM + SHIELD | 76.1751 | 0.4380 | 0.3766 | 11.0537 | 0.3907 |
| FGSM + SAD (20 50 70 70 80 90) | 76.0144 | 0.4356 | 0.3747 | 11.1926 | 0.3878 |
| FGSM + SAD (50 70 90) | 76.4241 | 0.4387 | 0.3776 | 11.0232 | 0.3916 |
| DeepFool + Bit-depth Reduction | 76.4549 | 0.4167 | 0.3890 | 10.1579 | 0.3980 |
| DeepFool + JPEG80 Compression | 83.7111 | 0.4637 | 0.4038 | 9.5360 | 0.4293 |
| DeepFool + SHIELD | 82.8053 | 0.4644 | 0.4027 | 9.6780 | 0.4267 |
| DeepFool + SAD (20 50 70 70 80 90) | 82.6507 | 0.4604 | 0.3989 | 9.8310 | 0.4228 |
| DeepFool + SAD (50 70 90) | 83.2962 | 0.4635 | 0.4022 | 9.6223 | 0.4274 |
5 Conclusion and Future Work
With adversarial attacks increasing in popularity and constantly evolving, new defenses are continually being counteracted by new methods of attack. In this work, we presented a new defense against adversarial images based on visual saliency estimation. In comparison with existing localized and global approaches, our method is a strategically applied defense. Our targeted approach demonstrates better reduction of adversarial distortions while preserving the salient content of the original data, and our proposed SAD model outperforms existing countermeasures on a range of standard saliency metrics.
While SAD has proven effective, there are still many areas to explore. In future work, we will optimize the saliency thresholds as well as the back-end saliency model to further improve SAD's results. We will also analyze the effectiveness of our model against a growing number of state-of-the-art defenses on additional saliency datasets, and in terms of image classification across similar phases: before attack, under attack, and after cleaning.
- (2018) Synthesizing robust adversarial examples. In International Conference on Machine Learning, pp. 284–293.
- MIT saliency benchmark. http://saliency.mit.edu/
- (2017) Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57.
- (2018) SHIELD: fast, practical defense and vaccination for deep learning using JPEG compression. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 196–204.
- (2009) ImageNet: a large-scale hierarchical image database. In CVPR 2009.
- (2001) Image quilting for texture synthesis and transfer. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, pp. 341–346.
- (2019) On the salience of adversarial examples. In 14th International Symposium on Visual Computing (ISVC).
- (2015) Explaining and harnessing adversarial examples. In International Conference on Learning Representations.
- (2017) Adversarial examples for malware detection. In European Symposium on Research in Computer Security, pp. 62–79.
- (2015) SALICON: saliency in context.
- (2016) Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533.
- (2016) Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236.
- (2018) PiCANet: learning pixel-wise contextual attention for saliency detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3089–3098.
- (2017) MagNet: a two-pronged defense against adversarial examples. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 135–147.
- (2016) DeepFool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2574–2582.
- SalGAN: visual saliency prediction with adversarial networks. CVPR Scene Understanding Workshop (SUNw).
- (2016) Distillation as a defense to adversarial perturbations against deep neural networks. In 2016 IEEE Symposium on Security and Privacy (SP), pp. 582–597.
- (2019) BASNet: boundary-aware salient object detection. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- Imperceptible, robust, and targeted adversarial examples for automatic speech recognition. In International Conference on Machine Learning, pp. 5231–5240.
- (2016) Accessorize to a crime: real and stealthy attacks on state-of-the-art face recognition. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 1528–1540.
- (2016) Hierarchical image saliency detection on extended CSSD. IEEE Transactions on Pattern Analysis and Machine Intelligence 38(4), pp. 717–729.
- (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
- (2017) Learning to detect salient objects with image-level supervision. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- (2019) Cascaded partial decoder for fast and accurate salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3907–3916.