SAD: Saliency-based Defenses Against Adversarial Examples

03/10/2020 · Richard Tran, et al.

With the rise in popularity of machine and deep learning models, there is an increased focus on their vulnerability to malicious inputs. These adversarial examples drift model predictions away from the original intent of the network and are a growing concern in practical security. In order to combat these attacks, neural networks can leverage traditional image processing approaches or state-of-the-art defensive models to reduce perturbations in the data. Defensive approaches that take a global approach to noise reduction are effective against adversarial attacks; however, their lossy nature often distorts important data within the image. In this work, we propose a visual saliency based approach to cleaning data affected by an adversarial attack. Our model leverages the salient regions of an adversarial image in order to provide a targeted countermeasure while comparatively reducing loss within the cleaned images. We measure the accuracy of our model by evaluating the effectiveness of state-of-the-art saliency methods prior to attack, under attack, and after application of cleaning methods. We demonstrate the effectiveness of our proposed approach in comparison with related defenses and against established adversarial attack methods, across two saliency datasets. Our targeted approach shows significant improvements in a range of standard statistical and distance saliency metrics, in comparison with both traditional and state-of-the-art approaches.




1 Introduction

With increased adoption of machine and deep learning models into critical systems, adversarial attacks against these models have become a proportionally growing concern. Adversarial examples have been demonstrated in a growing range of applications, not only in classification tasks, but also in malware detection [9] and recognition of speech and audio [19]. In the physical world, models used in facial recognition systems [20] and autonomous vehicle interpretations of traffic signs, road patterns, and pedestrians [12] are also susceptible to these distorted inputs. Adversarial attacks such as the fast gradient sign method (FGSM) [8], iterative FGSM (I-FGSM) [11], the Carlini-Wagner L2 attack (CWL2) [3], and DeepFool [15] take a broad range of approaches toward a similar goal: drifting a targeted model away from its original intent through a series of malicious inputs [1]. Attacks can be categorized as targeted or non-targeted, describing their intention of misleading a model toward a specific desired outcome or of generally causing it to misinterpret the input. Additionally, attacks may be white- or black-box, depending on whether the adversary has prior knowledge of the underlying model, training information, or system.

Figure 1: Comparison of CPD[24] on cleaning measures of ECSSD[21] data attacked with FGSM[8]. Top row: Original image, ground truth saliency map; Bottom row from left (defenses): bit-depth reduction, JPEG compression, SHIELD[4], SAD.

In this work, we focus on black-box, non-targeted attacks on images, in an attempt to provide a defense for general machine learning models against a range of attacks. We survey prevalent defense measures, identifying both approaches that take a global approach to removing distortions and recent localized approaches. Ultimately, we propose a new defense strategy, SAD, which uses regions of interest to strategically reduce adversarial distortions. Figure 1 motivates the effectiveness of our proposed approach, comparing the saliency maps of an adversarial example image after application of bit-depth reduction, JPEG compression, SHIELD [4], and our proposed SAD model. Some defense techniques, including bit-depth reduction and JPEG compression, clean inputs globally, while others, such as SHIELD [4], reduce distortions more locally. Both types of approach have demonstrated performance against adversarial attacks on image classification models; however, it is difficult to ensure preservation of data integrity.

While ROI selection can be difficult on a perturbed image, some visual saliency models have proven effective despite adversarial attack [7]. In Figure 2, an image from the ECSSD [21] dataset is shown with a generated saliency map, prior to and after an FGSM [8] attack. Note the reduced identification of the salient region in the upper leaves of the image; the overall content nonetheless remains correctly identified.

In this work, we propose a novel defense technique based on visual saliency. The proposed approach identifies a region of interest (ROI) and leverages a saliency map to apply targeted cleaning techniques. In demonstration of our proposed method, we evaluate the performance of state-of-the-art saliency models on established saliency data under the following conditions: original data, attacked by FGSM and DeepFool, and finally cleaned by four methods. We discuss the impact of the choice of saliency estimation approaches in the effectiveness of our defense solution, and recommend augmentations for future improvement.

2 Related Work

We divide cleaning techniques into two major categories: globalized techniques and localized techniques. Globalized techniques, including bit-depth reduction and JPEG compression, have proven successful in reducing the effectiveness of adversarial attacks through relatively simple means. Bit-depth reduction limits the color depth of an image, which reduces perturbations and therefore the effectiveness of adversarial attacks; however, while it removes general perturbations, it can also damage core features used to identify salient information. JPEG compression can likewise reduce the effectiveness of malicious inputs by compressing the image. This smooths out malicious perturbations, but at the same time introduces unwanted artifacts, which can have unexpected consequences in saliency generation. While each technique approaches the problem in its own way, all reduce the overall number of features within the data. These globalized techniques are also predictable and can thus be easily circumvented. Related to globalized approaches, distillation has been demonstrated by Papernot et al. as a viable means of defending deep neural networks against adversarial examples [17]. MagNet [14] takes a cryptography-inspired approach to defending against adversarial examples: built for gray-box attacks, the defense is randomly selected from a set of precomputed methods at runtime. Beyond globalized approaches, there are more localized techniques, such as image quilting [6], watermarking, and SHIELD [4]. The inherent randomness in these techniques makes them difficult for adversarial attacks to circumvent.
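As a concrete illustration of the globalized techniques above, bit-depth reduction can be sketched in a few lines of NumPy. This is a minimal sketch for illustration, not the implementation used in this work; the function name and interface are our own.

```python
import numpy as np

def bit_depth_reduce(img, bits=3):
    """Quantize an 8-bit image to `bits` bits per channel.

    img: uint8 array of any shape. Mapping every pixel onto 2**bits
    levels discards small perturbations along with fine color detail.
    """
    levels = 2 ** bits
    # Bin [0, 255] into `levels` buckets, then stretch back to [0, 255].
    quantized = np.floor(img.astype(np.float64) / 256.0 * levels)
    return (quantized * (255.0 / (levels - 1))).astype(np.uint8)

# A full 8-bit ramp collapses to exactly 2**3 = 8 distinct values.
ramp = np.arange(256, dtype=np.uint8)
assert len(np.unique(bit_depth_reduce(ramp, bits=3))) == 8
```

The same loss of fine detail that removes perturbations is what damages salient features, which motivates the targeted approach proposed in this work.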

We present a unique method that reduces the effectiveness of adversarial attacks while preserving original content, and demonstrate its viability by examining the saliency of the images prior to attacks, under attacks, and after defenses.

3 Saliency-based Adversarial Defense (SAD)

In response to the need for a targeted defense measure against diverse adversarial inputs, we propose a Saliency-based Adversarial Defense (SAD) approach outlined in Figure 3. Our model estimates relevant regions of interest (ROI) in an input and strategically applies countermeasures against adversarial perturbations.

Figure 2: Consequences of adversarial attack on ECSSD data [21] predicted using PiCANet [13]. Top row: original image, Bottom row: FGSM attack [8]

3.1 Model Description

In order to select the most relevant ROI, our proposed model leverages a model for visual saliency estimation. First, a saliency map is generated for the input image. Our implementation uses PiCANet [13] trained on the DUTS-TR dataset [23]. Once a map is generated, JPEG compression is applied at differing qualities based on the saliency predictions.

Figure 3: Overview of SAD

In addition to the images to be processed, a list of compression levels must be passed as a parameter to our model. This list is denoted L = (l_1, …, l_n), where l_k denotes the kth compression level. Much like SHIELD [4], each image processed is segmented into 8×8 windows; w_{i,j} denotes the window at the ith row and jth column of the image. The saliency map, taken as a grey-scale image, is identically segmented into 8×8 windows. Each window in the saliency map is assigned a scalar value, from 0 to 255, based on the average saliency prediction of all pixels within the window. These scalar values, denoted s_{i,j}, are then divided by a threshold t = 256/n and used as an index into L. The compression level for w_{i,j} can be expressed as:

q(w_{i,j}) = l_{⌊s_{i,j} / t⌋ + 1}
The goal of this approach is to reduce the effectiveness of an adversarial attack while minimizing damage done to the ROI.
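The per-window quality selection described above can be sketched as follows. This is a minimal NumPy sketch under the paper's stated scheme (8×8 windows, mean saliency per window, threshold t = 256/n); the function and variable names are our own, and the exact threshold handling in the authors' implementation may differ.

```python
import numpy as np

def sad_quality_map(saliency, levels):
    """Assign a JPEG quality to each 8x8 window from its mean saliency.

    saliency: uint8 grayscale saliency map (H and W multiples of 8).
    levels: ascending list of JPEG qualities, e.g. [50, 70, 90].
    """
    h, w = saliency.shape
    n = len(levels)
    t = 256 / n  # threshold dividing the [0, 255] saliency range into n bins
    qualities = np.empty((h // 8, w // 8), dtype=int)
    for i in range(h // 8):
        for j in range(w // 8):
            window = saliency[8 * i:8 * (i + 1), 8 * j:8 * (j + 1)]
            s = window.mean()  # average saliency of the window, s_{i,j}
            # Floor-divide by the threshold to index into the quality list.
            qualities[i, j] = levels[min(int(s // t), n - 1)]
    return qualities

sal = np.zeros((16, 16), dtype=np.uint8)
sal[:8, :8] = 255  # one highly salient window
q = sad_quality_map(sal, [50, 70, 90])
assert q[0, 0] == 90 and q[1, 1] == 50  # salient window kept at high quality
```

Each window would then be JPEG-compressed at its assigned quality, so non-salient regions absorb the heaviest compression.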

Figure 4: Example input and output of SAD. Top down: Original image, Saliency prediction, Output.

This technique is similar in nature to SHIELD [4], with the primary difference being the replacement of a randomized compression algorithm with a saliency-based one. Figure 4 provides an example input, saliency prediction, and output, demonstrating that the background is compressed significantly more than the salient parts of the image. In this example, salient regions are compressed at JPEG quality 90 while non-salient regions are compressed at quality 20.

4 Experiments & Results

In this section, we first review the extensive setup of datasets, adversarial image generations, adversarial defenses, and evaluation metrics used in this work. We then specify the series of experiments performed, and demonstrate the performance of our proposed approach in the final section.

4.1 Setup

Two popular saliency datasets, ECSSD [21] and SALICON [10], were chosen for the experimental setup, due to their prominence in recent saliency research and their degree of difficulty. The Extended Complex Scene Saliency Dataset (ECSSD) [21] is comprised of complex scenes, presenting textures and structures common to real-world images. ECSSD contains 1,000 intricate images and respective ground-truth saliency maps, created as an average of the labeling of five human participants. Saliency in Context (SALICON) [10] is a similarly complex dataset, chosen in this work to provide a broader range of scenes and a larger number of samples. SALICON was designed for the purpose of evaluating current saliency models on natural scene images, and contains a training set of 10,000 images with their respective ground-truth saliency maps as well as a validation set of 5,000 images with their corresponding ground truths. In addition to ground-truth saliency maps, this dataset provides fixation maps for analysis. For our experiments, we selected only the training set of SALICON to evaluate the saliency models, for a total of 10,000 images on this dataset.

Two adversarial attacks were chosen in this work, for their prominence as well as their diversity in approach. We leveraged the FGSM [8] and DeepFool [15] attacks to evaluate the efficacy of our proposed countermeasure. The Fast Gradient Sign Method (FGSM) [8] was chosen as a more traditional attack which has proven effective in creating input images that significantly mislead popular convolutional frameworks; its inclusion allows us to test the effectiveness of saliency models and cleaning algorithms against a well-known and common adversarial attack. DeepFool [15] was chosen as it is considered a state-of-the-art attack against image-based classification models, with a more robust attack surface. As each of these approaches requires an objective function to consider in its attack, we chose a common VGG16 [22] backbone and pretrained this model on the ImageNet [5] dataset.
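FGSM's core update is a single gradient-sign step: x_adv = x + ε · sign(∂Loss/∂x). The sketch below demonstrates this update on a toy logistic model rather than the VGG16 backbone used in the experiments; the model, values, and function name are purely illustrative.

```python
import numpy as np

def fgsm_toy(x, y, w, eps=0.1):
    """FGSM step on a toy logistic model p = sigmoid(w . x).

    Perturbs x by eps * sign(dLoss/dx) to increase the cross-entropy loss.
    """
    p = 1.0 / (1.0 + np.exp(-np.dot(w, x)))
    # For cross-entropy loss with label y, the input gradient is (p - y) * w.
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, -0.1, 0.4])
x_adv = fgsm_toy(x, y=1.0, w=w, eps=0.1)
# Every coordinate moves by at most eps (an L-infinity-bounded perturbation).
assert np.all(np.abs(x_adv - x) <= 0.1 + 1e-9)
```

The same bounded, sign-only structure is what makes FGSM perturbations broad and roughly uniform across the image, in contrast to DeepFool's minimal, localized perturbations.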

In defense against the adversarial examples, we selected three countermeasures for comparison with our proposed SAD approach: bit-depth reduction, JPEG compression, and SHIELD [4]. We chose these as a balance of global and localized defense techniques, for a robust comparison with our proposed approach. Both bit-depth reduction and JPEG compression are established countermeasures against adversarial attacks, effective in reducing the number of perturbations present within the images. For the purposes of our experiments, we used a 3-bit depth reduction and a compression quality of 80 for JPEG compression. The recent Secure Heterogeneous Image Ensemble with Localized Denoising (SHIELD) [4] uses randomized compression levels to reduce the number of perturbations present within the images.
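SHIELD's localized randomization can be sketched as drawing an independent compression quality per 8×8 window. This is an illustrative sketch only; the quality set {20, 40, 60, 80} is an assumption here, and SHIELD's actual implementation [4] includes further ensemble and vaccination components not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def shield_quality_map(h, w, qualities=(20, 40, 60, 80)):
    """SHIELD-style stochastic defense: draw an independent random JPEG
    quality for every 8x8 window, making the transform unpredictable
    to an adversary."""
    return rng.choice(qualities, size=(h // 8, w // 8))

qmap = shield_quality_map(32, 32)
assert qmap.shape == (4, 4)
assert set(np.unique(qmap)).issubset({20, 40, 60, 80})
```

Comparing this with the SAD sketch earlier, the two defenses share the per-window structure; SAD simply replaces the random draw with a saliency-driven choice.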

In order to evaluate the effectiveness of our defense, we put all images (original, adversarial, and cleaned) through state-of-the-art saliency models and evaluate the performance of each model. As these models have demonstrated top performance on these popular saliency datasets, we can establish how they are affected by the adversarial inputs. In this work, we selected three diverse models to generate saliency maps for the images: BASNet [18], CPD [24], and SalGAN [16]. The Boundary-Aware Salient Object Detection model (BASNet) [18] uniquely leverages edges and bounding boxes to help establish a saliency map for an image. The Cascaded Partial Decoder (CPD) [24] model incorporates a holistic attention mechanism into the traditional encoder-decoder framework. The Saliency GAN (SalGAN) [16] model is a generative adversarial network approach, providing discriminator and generator models in adversarial training. It is important to note that SalGAN [16] was mainly designed to generate saliency maps based on eye fixations rather than binary saliency maps.

Finally, we leverage saliency metrics in this work as an evaluation of the effectiveness of our proposed defense. The MIT Saliency Benchmark [2] provides established metrics for saliency estimation models, on both binary saliency maps and fixation maps. For the purposes of this work, in application to ground-truth maps only, we selected Earth Mover's Distance (EMD), Pearson's Correlation Coefficient (CC), Normalized Scanpath Saliency (NSS), KL-divergence (KLD), and the similarity score (SIM).
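Three of these metrics (CC, SIM, KLD) have compact definitions that can be sketched directly in NumPy; EMD and NSS are omitted here as they need, respectively, an optimal-transport solver and a binary fixation map. The function names are our own, and the benchmark's reference implementations may differ in normalization details.

```python
import numpy as np

def _norm(m):
    """Normalize a saliency map so it sums to 1 (treat it as a distribution)."""
    m = m.astype(np.float64)
    return m / m.sum()

def cc(a, b):
    """Pearson correlation coefficient between two saliency maps."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

def sim(a, b):
    """Similarity score: summed per-pixel minimum of the normalized maps."""
    return np.minimum(_norm(a), _norm(b)).sum()

def kld(p, q, eps=1e-12):
    """KL divergence of prediction q from ground truth p (eps avoids log 0)."""
    p, q = _norm(p), _norm(q)
    return np.sum(p * np.log(eps + p / (q + eps)))

gt = np.random.default_rng(1).random((8, 8))
# A perfect prediction scores CC = 1, SIM = 1, KLD ~ 0.
assert abs(cc(gt, gt) - 1.0) < 1e-9
assert abs(sim(gt, gt) - 1.0) < 1e-9
assert kld(gt, gt) < 1e-6
```

For CC, NSS, and SIM, higher is better; for EMD and KLD, lower is better, which is the reading needed for the tables below.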

4.2 Experiments

To establish a baseline, we generated the saliency maps for the ECSSD and SALICON datasets using the BASNet[18], CPD[24], and SalGAN[16] saliency models.

After establishing a baseline, we performed separate FGSM and DeepFool attacks on the images of each dataset. For uniform comparison, all attacks leveraged a common VGG-16 backbone. The same saliency models were used to generate saliency maps of each set of attacked images.

Finally, all attacked images were cleaned using a series of adversarial defenses. We started by performing a bit-depth reduction on the attacked images, reducing them to a 3-bit color representation. Next, we performed JPEG compression on the attacked images at quality 80. We then applied the state-of-the-art SHIELD defense to the attacked images. SHIELD builds on JPEG compression but, instead of applying a uniform compression level, applies compression in patches, randomly determining the quality reduction for each. We fed the images produced by each cleaning technique into all of the saliency models to obtain their respective saliency maps, and then computed all of our metrics on these maps to show how the cleaning techniques affected saliency map generation.

Finally, using the same experimental guidelines, we performed SAD on the attacked datasets. Testing was performed with two lists of compression qualities: (20, 50, 70, 70, 80, 90) and (50, 70, 90). The SAD-cleaned images were then run through the same metrics in order to make a direct comparison between our technique and other modern cleaning techniques.

4.3 Results

Figure 5 provides a collection of images picked from ECSSD [21]. The first two rows of the figure contain the original images and their respective ground truths. The third row contains the saliency maps generated by BASNet [18] from FGSM [8] adversarial examples; FGSM is shown to cause minor distortions to these maps. The following rows contain the resulting saliency maps after bit-depth reduction, JPEG compression, SHIELD [4], and SAD, respectively. These rows highlight the effects that each cleaning technique has on saliency map generation.

Table 1 shows the results of running the BASNet [18] visual saliency model on the SALICON [10] dataset.

Tables 2 and 3 show metric results of running BASNet [18] and CPD [24], respectively, on the entire ECSSD [21] dataset. In each of these cases, we conclude that SAD performs significantly better against global attacks, such as FGSM [8], than against localized attacks, such as DeepFool [15]. This is because global attacks leave more distortions in the non-salient regions, so more distortions are removed overall. For localized attacks on ECSSD, while SAD performed worse than standard JPEG compression, the difference in performance is comparatively small. Figures 6 and 7 are min-max normalized graphs presented to visualize these results.
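The min-max normalization used for Figures 6 and 7 rescales each metric series to [0, 1], so that metrics with very different ranges (e.g. EMD around 48 vs. SIM below 1) can be plotted on a common axis. A minimal sketch, with illustrative values:

```python
import numpy as np

def min_max_normalize(values):
    """Rescale a 1-D metric series to [0, 1] for cross-metric plotting."""
    v = np.asarray(values, dtype=np.float64)
    lo, hi = v.min(), v.max()
    return (v - lo) / (hi - lo)

# Illustrative KLD values across defenses, normalized for plotting.
scores = [1.506, 3.208, 8.745, 3.268]
norm = min_max_normalize(scores)
assert norm.min() == 0.0 and norm.max() == 1.0
```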

Table 4 shows metric results of running CPD [24] on the SALICON [10] dataset. In this case, because CPD [24] does not perform well on this fixation-based dataset, the overall results do not vary much between the original, attacked, and cleaned examples.

In general, DeepFool [15] has little to no effect on saliency prediction, as illustrated by Figure 8. In this figure we see only slight differences between the original, attacked, and cleaned saliency predictions. This result is further supported by the DeepFool [15] metrics across all tables.

Figure 5: Comparison of BASNet[18] on cleaning measures of ECSSD[21] data attacked with FGSM. From top: Original image, ground truth, Attacked, BitDepth, JPEG, SHIELD[4], SAD
Method EMD CC NSS KLD SIM
Original 83.60872647 0.4218207896 0.3761202097 10.67353916 0.4060104787
FGSM 79.91393103 0.4087593555 0.3648214042 11.35947323 0.3891682327
DeepFool 83.60636636 0.4222988188 0.3763460219 10.67110252 0.40616256
FGSM + Bit-depth Reduction 75.04985629 0.3049599528 0.2969438136 13.31941891 0.3179412484
FGSM + JPEG80 Compression 79.73553778 0.4079829454 0.3639315665 11.40826893 0.3880238533
FGSM + SHIELD 79.44973525 0.4092005491 0.3637762368 11.43848801 0.3876400888
FGSM + SAD (20 50 70 70 80 90) 79.25510666 0.4074067175 0.3620625138 11.51703453 0.3857473135
FGSM + SAD (50 70 90) 79.89621414 0.4122531116 0.3667055368 11.30664825 0.3907471597
DeepFool + Bit-depth Reduction 78.72886619 0.3303083181 0.3210006058 12.43772602 0.3442973197
DeepFool + JPEG80 Compression 83.44919726 0.421741128 0.3755041957 10.72119999 0.4052546024
DeepFool + SHIELD 83.24127967 0.4230029285 0.3759891391 10.71790504 0.4055115879
DeepFool + SAD (20 50 70 70 80 90) 83.10682231 0.4206542075 0.3742873669 10.78884315 0.403165251
DeepFool + SAD (50 70 90) 83.54616027 0.4241522551 0.3771335781 10.64822292 0.4069490135
Table 1: Evaluation of the BASNet[18] visual saliency model on the SALICON[10] dataset.
Method EMD CC NSS KLD SIM
Original 48.07041578 0.9120191336 1.979211807 1.506018996 0.8843896985
FGSM 45.51845167 0.8434635997 1.829114914 3.207652092 0.8040903211
DeepFool 47.9678527 0.908826232 1.972466826 1.60269177 0.8807195425
FGSM + Bit-depth Reduction 41.21447617 0.5987574458 1.297375202 8.744916916 0.5463407636
FGSM + JPEG80 Compression 45.44040079 0.8403670192 1.823511362 3.267945766 0.8005516529
FGSM + SHIELD 45.26718383 0.8304385543 1.805399299 3.560398102 0.7886587977
FGSM + SAD (20 50 70 70 80 90) 45.35846763 0.8506878614 1.85140121 3.170518398 0.8131732941
FGSM + SAD (50 70 90) 45.93143812 0.8615031838 1.87408042 2.825253487 0.8248550296
DeepFool + Bit-depth Reduction 43.46246487 0.6593744755 1.430215001 7.15671587 0.6113178134
DeepFool + JPEG80 Compression 47.933426 0.9080747962 1.97224772 1.61575985 0.8803170323
DeepFool + SHIELD 47.71402123 0.8999755979 1.953188896 1.781389356 0.870287478
DeepFool + SAD (20 50 70 70 80 90) 47.33478719 0.9016960859 1.960625887 1.858531952 0.8717075586
DeepFool + SAD (50 70 90) 47.77355919 0.9041004777 1.961497784 1.685516238 0.8761977553
Table 2: Evaluation of the BASNet[18] visual saliency model on the ECSSD[21] dataset.
Figure 6: ECSSD [21] attacked with FGSM [8] generated by BASNet [18] (min-max normalized)
Method EMD CC NSS KLD SIM
Original 48.28455886 0.9043654799 1.964785576 1.253266454 0.874347806
FGSM 45.36623833 0.8253191113 1.793251872 3.001737833 0.7846102118
DeepFool 48.16762985 0.9003679752 1.957540512 1.323252082 0.8697779179
FGSM + Bit-depth Reduction 37.93778128 0.5365927815 1.133271813 9.139689445 0.4919550121
FGSM + JPEG80 Compression 45.04148904 0.8208998442 1.786798239 3.14307785 0.7796351314
FGSM + SHIELD 44.63645556 0.8076060414 1.756557226 3.521080494 0.7643808722
FGSM + SAD (20 50 70 70 80 90) 44.82445048 0.816835165 1.78500545 3.504522562 0.7739418745
FGSM + SAD (50 70 90) 45.2943999 0.8305669427 1.812999964 3.0791049 0.7902354002
DeepFool + Bit-depth Reduction 40.7708216 0.62786448 1.342035532 6.702296734 0.5820772648
DeepFool + JPEG80 Compression 47.96485411 0.8997527361 1.956662774 1.369203806 0.869066
DeepFool + SHIELD 47.49587288 0.8849500418 1.92660892 1.676331162 0.8517687917
DeepFool + SAD (20 50 70 70 80 90) 47.21730354 0.8871904016 1.92954421 1.74879241 0.8548846841
DeepFool + SAD (50 70 90) 47.65713406 0.892139554 1.939523816 1.546016574 0.8613493443
Table 3: Evaluation of the CPD[24] visual saliency model on the ECSSD[21] dataset.
Figure 7: ECSSD [21] attacked with FGSM [8] generated by CPD [24] (min-max normalized)
Method EMD CC NSS KLD SIM
Original 84.03098486 0.464445889 0.4046932757 9.461788177 0.4308495224
FGSM 77.17695558 0.4409204125 0.3802604973 10.84904957 0.3955992162
DeepFool 83.99931418 0.4644609392 0.4047108293 9.465325356 0.4308281243
FGSM + Bit-depth Reduction 71.81423414 0.3703564703 0.344774574 11.47888279 0.3569065928
FGSM + JPEG80 Compression 76.72632067 0.43930161 0.378226012 10.95398521 0.3932281733
FGSM + SHIELD 76.17508536 0.4379900992 0.3765870929 11.05368805 0.3907161355
FGSM + SAD (20 50 70 70 80 90) 76.01439047 0.4356130958 0.3746709526 11.19257164 0.3878324628
FGSM + SAD (50 70 90) 76.42406584 0.438691169 0.3776216805 11.02315617 0.3915502429
DeepFool + Bit-depth Reduction 76.45489553 0.416684866 0.3889657855 10.1579113 0.3980270326
DeepFool + JPEG80 Compression 83.7110553 0.4636833072 0.4037819207 9.535971642 0.4293003976
DeepFool + SHIELD 82.80526771 0.4643773437 0.4027466178 9.67798233 0.4266972542
DeepFool + SAD (20 50 70 70 80 90) 82.65074847 0.4604454637 0.3989365101 9.831030846 0.4227913916
DeepFool + SAD (50 70 90) 83.29619378 0.4635473192 0.4021643996 9.622299194 0.4274015129
Table 4: Evaluation of the CPD[24] visual saliency model on the SALICON[10] dataset.
Figure 8: Comparison of SalGAN[16] on cleaning measures of SALICON[10] data attacked with DeepFool[15]. From top: Original image, ground truth, Attacked, BitDepth, JPEG, SHIELD[4], SAD

5 Conclusion and Future Work

With adversarial attacks increasing in popularity and constantly evolving, new defenses are continuously being counteracted by new methods of attack. In this work, we presented a new method for defense against adversarial images which is based upon visual saliency estimation. In comparison with existing localized and global approaches, our method is a strategically applied defense. Our targeted approach demonstrates better reduction of adversarial distortions while preserving salient content of the original data. Our proposed SAD model outperforms existing countermeasures in a range of standard saliency metrics.

While SAD has proven effective, there are still many areas to explore. In future work, we will look to optimize saliency thresholds as well as the back-end saliency model, to further improve the results of SAD. Further analysis of the effectiveness of our model will compare it against a growing number of state-of-the-art defenses on additional saliency datasets, and can be extended to the classification of images across similar phases: before attack, under attack, and after cleaning.


  • [1] A. Athalye, L. Engstrom, A. Ilyas, and K. Kwok (2018) Synthesizing robust adversarial examples. In International Conference on Machine Learning, pp. 284–293.
  • [2] Z. Bylinskii, T. Judd, F. Durand, A. Oliva, and A. Torralba. MIT saliency benchmark.
  • [3] N. Carlini and D. Wagner (2017) Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57.
  • [4] N. Das, M. Shanbhogue, S. Chen, F. Hohman, S. Li, L. Chen, M. E. Kounavis, and D. H. Chau (2018) SHIELD: fast, practical defense and vaccination for deep learning using JPEG compression. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 196–204.
  • [5] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei (2009) ImageNet: a large-scale hierarchical image database. In CVPR 2009.
  • [6] A. A. Efros and W. T. Freeman (2001) Image quilting for texture synthesis and transfer. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, pp. 341–346.
  • [7] A. Fernandez (2019) On the salience of adversarial examples. In 14th International Symposium on Visual Computing (ISVC).
  • [8] I. Goodfellow, J. Shlens, and C. Szegedy (2015) Explaining and harnessing adversarial examples. In International Conference on Learning Representations.
  • [9] K. Grosse, N. Papernot, P. Manoharan, M. Backes, and P. McDaniel (2017) Adversarial examples for malware detection. In European Symposium on Research in Computer Security, pp. 62–79.
  • [10] M. Jiang, S. Huang, J. Duan, and Q. Zhao (2015) SALICON: saliency in context. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [11] A. Kurakin, I. Goodfellow, and S. Bengio (2016) Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533.
  • [12] A. Kurakin, I. Goodfellow, and S. Bengio (2016) Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236.
  • [13] N. Liu, J. Han, and M. Yang (2018) PiCANet: learning pixel-wise contextual attention for saliency detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3089–3098.
  • [14] D. Meng and H. Chen (2017) MagNet: a two-pronged defense against adversarial examples. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 135–147.
  • [15] S. Moosavi-Dezfooli, A. Fawzi, and P. Frossard (2016) DeepFool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2574–2582.
  • [16] J. Pan, E. Sayrol, X. G. Nieto, C. C. Ferrer, J. Torres, K. McGuinness, and N. E. O'Connor (2017) SalGAN: visual saliency prediction with adversarial networks. In CVPR Scene Understanding Workshop (SUNw).
  • [17] N. Papernot, P. D. McDaniel, X. Wu, S. Jha, and A. Swami (2016) Distillation as a defense to adversarial perturbations against deep neural networks. In 2016 IEEE Symposium on Security and Privacy (SP), pp. 582–597.
  • [18] X. Qin, Z. Zhang, C. Huang, C. Gao, M. Dehghan, and M. Jagersand (2019) BASNet: boundary-aware salient object detection. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [19] Y. Qin, N. Carlini, G. Cottrell, I. Goodfellow, and C. Raffel (2019) Imperceptible, robust, and targeted adversarial examples for automatic speech recognition. In International Conference on Machine Learning, pp. 5231–5240.
  • [20] M. Sharif, S. Bhagavatula, L. Bauer, and M. K. Reiter (2016) Accessorize to a crime: real and stealthy attacks on state-of-the-art face recognition. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 1528–1540.
  • [21] J. Shi, Q. Yan, L. Xu, and J. Jia (2016) Hierarchical image saliency detection on extended CSSD. IEEE Transactions on Pattern Analysis and Machine Intelligence 38 (4), pp. 717–729.
  • [22] K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
  • [23] L. Wang, H. Lu, Y. Wang, M. Feng, D. Wang, B. Yin, and X. Ruan (2017) Learning to detect salient objects with image-level supervision. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [24] Z. Wu, L. Su, and Q. Huang (2019) Cascaded partial decoder for fast and accurate salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3907–3916.