An Adaptive View of Adversarial Robustness from Test-time Smoothing Defense

11/26/2019 ∙ by Chao Tang, et al. ∙ Georgia Institute of Technology

The safety and robustness of learning-based decision-making systems are under threat from adversarial examples, as imperceptible perturbations can mislead neural networks to completely different outputs. In this paper, we present an adaptive view of the issue by evaluating various test-time smoothing defenses against white-box untargeted adversarial examples. Through controlled experiments with a pretrained ResNet-152 on ImageNet, we first illustrate the non-monotonic relation between adversarial attacks and smoothing defenses. Then, at the dataset level, we observe large variance among samples and show that it is easy to inflate accuracy (even to 100%) by selecting large (size ~10^4) subsets on which a designated method outperforms others by a large margin. Finally, at the sample level, as different adversarial examples require different degrees of defense, the potential advantages of iterative methods are discussed. We hope this paper reveals useful behaviors of test-time defenses, which could help improve the evaluation process for adversarial robustness in the future.


1 Introduction

Adversarial examples have brought uncertainties and threatened the robustness and safety of learning-based decision-making systems, especially for autonomous driving [17]; Bojarski et al. (2016); [25] and robotic systems Kumra and Kanan (2017); Mahler et al. (2017); Lenz et al. (2015), in which perception serves as an important input to the entire system. As an intriguing property first found in Szegedy et al. (2013), carefully computed perturbations on inputs can lead to misclassification with high confidence, even though the perturbations are imperceptible to humans. Over the past few years, researchers have developed a number of adversarial attacks Goodfellow et al. (2014); Carlini and Wagner (2017); Papernot et al. (2016) on classification systems, which reveal the underlying instability of decision making with deep networks. In later studies, noisy patterns were observed in the feature maps of adversarial examples Xie et al. (2018). Motivated by this noisy pattern, we explore the possibility of defense with various smoothing techniques at test time. Instead of proving the superiority of a particular method, the focus of this paper is to reveal the adaptive characteristic of adversarial robustness. Meanwhile, the test-time smoothing methods discussed in this paper can still complement other defense schemes such as adversarial training Goodfellow et al. (2014).

The contributions of this paper are:

  • We implement a test-time defense pipeline which can smooth both the original inputs and intermediate outputs from any specified layer(s). This pipeline can be generalized to study the influence of intermediate layers in various tasks.

  • We present the non-monotonic relation between adversarial attacks and smoothing defenses: For a fixed attack, the successful defense rate first increases then decreases as smoothing becomes stronger. For a fixed defense, the classification accuracy on adversarial examples first drops then rebounds as the number of attack iterations increases.

  • We conduct the first investigation of the performance of defenses on each test sample. The variance among samples is so large that it becomes easy to select large (i.e., at the scale of 10^4 samples) subsets of the ImageNet Russakovsky et al. (2015) validation set on which a designated method outperforms others, or even to inflate accuracy.

  • We demonstrate that different adversarial examples require different degrees of defense at test time and illustrate the potential benefits of iterative defenses.

The rest of the paper is organized as follows. Section 2 reviews related studies. Section 3 explains the methodology of test-time defense with various smoothing techniques. Sections 4 and 5 follow with experiments and discussion, respectively.

All relevant materials that accompany this paper, including both the code and pre-computed data, are publicly available on GitHub (https://github.com/mkt1412/testtime-smoothing-defense) and Dropbox (https://www.dropbox.com/sh/ayiyiboba5nyv2k/AAAYZffyD0CeY_1aOmrkrg8Ba?dl=0).

2 Related work

Since the discovery of adversarial examples, researchers have been working on methods of attacking or securing neural networks. Attacks can be roughly categorized along multiple criteria, such as white-box or black-box, targeted or untargeted, and one-shot or iterative Gomes (2018). On the defenders' side, two major strategies Mustafa et al. (2019) have been adopted in current practice: (1) reinforcing the classifier at training time so that it accounts for adversarial examples, and (2) detecting and converting adversarial examples back to normal at test time. A comprehensive survey on adversarial examples is available in Huang et al. (2018). It is worth stating that fooling machine-learning algorithms with adversarial examples is much easier than designing models that cannot be fooled Goodfellow and Papernot (2017); one can even fool a neural network by modifying very few pixels Su et al. (2019). For efficiency purposes, most existing attacks utilize gradient information of the network and "optimally" perturb the image to maximize the loss function. The noisy pattern might be a side effect of such gradient-based operations.

In this paper, we limit our scope to white-box untargeted attacks, which are the most common type in the literature. In contrast to the feature denoising in Xie et al. (2018), all our smoothing defenses are performed at test time. We assume that the neural-network classifier has already been shipped and deployed, or that retraining with adversarial examples is not feasible. In addition, we believe the test-time defenses studied in this paper can serve as a useful complementary post-processing procedure even when adversarial training is affordable. Contrary to existing work, which often compares methods at the dataset level with static configurations (e.g., a few sets of fixed parameters), we thoroughly investigate the behavior of test-time defenses at multiple levels and with varying strength. Smoothing methods are selected for illustration because their strength can be naturally measured by the number of iterations or the radius of the kernel.

3 Methodology

In this section, we elaborate on the general test-time defense scheme and all relevant smoothing techniques that we experiment with.

3.1 Test-time defense scheme

Let f denote a pretrained classifier that maps an image x to its label y = f(x). An adversarial attack A then maps a legitimate image x to an adversarial example x' = A(x) under certain constraints (e.g., on the distance between x and x') such that f(x') ≠ y. To defend against adversarial attacks at test time, an ideal solution would be applying the inverse mapping A⁻¹. In reality, however, we have to find a defense D, an alternative approximation of A⁻¹, with the hope that f(D(x')) = y can be satisfied. In addition, a defense is more desirable for deployment if it brings less distortion to legitimate images, keeping f(D(x)) = y. To achieve that, we may also introduce a detector (as a part of D) that distinguishes adversarial examples from legitimate ones at the first stage of the defense. Once an input is considered legitimate, no further defense is required.
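As a minimal sketch of this scheme, the defense can be written as a wrapper applied before the fixed classifier; the callables below (classifier, smoother, detector) are hypothetical stand-ins, not part of any specific library:

```python
# Minimal sketch of the test-time defense scheme described above.
# `classifier`, `smoother`, and `detector` are hypothetical callables.

def defend_and_classify(x, classifier, smoother, detector=None):
    """Apply the test-time defense D before the fixed classifier f.

    classifier: maps an image to a label, y = f(x)
    smoother:   approximates the inverse of the attack, D ~ A^-1
    detector:   optional first stage that flags adversarial inputs
    """
    if detector is not None and not detector(x):
        # Input looks legitimate: skip the defense to avoid distortion.
        return classifier(x)
    return classifier(smoother(x))


# Toy usage with stand-in functions on a scalar "image".
label = defend_and_classify(
    x=0.7,
    classifier=lambda x: int(x > 0.5),
    smoother=lambda x: round(x, 1),
    detector=lambda x: True,  # flag every input as adversarial
)
```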

In this work, we apply smoothing techniques as the alternative approximation D of the inverse mapping A⁻¹ of the attack. Theoretically, the smoothing defense only works when the outputs x' = A(x) of an adversarial attack are "noisy," which implies that, from the perspective of the defense D, the attack behaves approximately like added noise.

3.2 Smoothing techniques

The smoothing techniques involved can be categorized into three groups: common, edge-preserving, and advanced. The common group includes the mean, median, and Gaussian filters, which are the most commonly used in image processing. Edge-preserving smoothing algorithms include anisotropic diffusion and the bilateral filter. More advanced smoothing techniques include non-local means and modified curvature motion. We concisely explain these algorithms in the following paragraphs.

Mean, median, and Gaussian filters: These filters are widely applied in image processing. Despite their simple forms, they do not necessarily perform the worst at defending against adversarial examples.
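For reference, all three filters are available off the shelf, e.g. in SciPy; the sketch below uses illustrative parameter values, not the tuned ones from our experiments:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = rng.normal(loc=0.5, scale=0.2, size=(32, 32))  # noisy stand-in image

mean_smoothed = ndimage.uniform_filter(img, size=3)       # mean filter
median_smoothed = ndimage.median_filter(img, size=3)      # median filter
gauss_smoothed = ndimage.gaussian_filter(img, sigma=1.0)  # Gaussian filter

# Smoothing should reduce pixel-wise variation on i.i.d. noise.
print(img.std(), mean_smoothed.std(), gauss_smoothed.std())
```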

Anisotropic diffusion Perona and Malik (1990): The Perona-Malik anisotropic diffusion aims at reducing image noise without removing important edges by assigning lower diffusion coefficients to edge pixels (which have larger gradient norms). During iterations, the image I is updated according to

∂I/∂t = div(c(‖∇I‖) ∇I) = c(‖∇I‖) ΔI + ∇c · ∇I,

in which div denotes the divergence operator, Δ denotes the Laplacian, and ∇ denotes the gradient. The diffusion coefficient is defined either as c(‖∇I‖) = exp(−(‖∇I‖/K)²) or as c(‖∇I‖) = 1 / (1 + (‖∇I‖/K)²).
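A compact NumPy sketch of Perona-Malik diffusion follows, using the exponential edge-stopping function and 4-neighbour finite differences (periodic borders via np.roll, for brevity; parameter values are illustrative):

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=0.1, gamma=0.2):
    """Perona-Malik diffusion on a 2-D array (a simplified sketch).

    kappa controls edge sensitivity, gamma is the step size
    (stable for gamma <= 0.25 with this 4-neighbour stencil).
    """
    img = img.astype(float).copy()

    def c(g):
        # Edge-stopping function: small where the gradient is strong.
        return np.exp(-(g / kappa) ** 2)

    for _ in range(n_iter):
        # Differences towards the four neighbours (periodic via np.roll).
        north = np.roll(img, 1, axis=0) - img
        south = np.roll(img, -1, axis=0) - img
        east = np.roll(img, -1, axis=1) - img
        west = np.roll(img, 1, axis=1) - img
        img += gamma * (c(north) * north + c(south) * south
                        + c(east) * east + c(west) * west)
    return img

rng = np.random.default_rng(1)
noisy = np.full((16, 16), 0.5) + rng.normal(scale=0.05, size=(16, 16))
smoothed = anisotropic_diffusion(noisy, n_iter=5)
```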

Bilateral filter Tomasi and Manduchi (1998): A bilateral filter is a non-linear edge-preserving filter that computes the filtered value of a pixel p as a weighted average over its neighborhood S. The filter can be expressed as Paris et al. (2007)

BF[I]_p = (1/W_p) Σ_{q∈S} G_s(‖p − q‖) G_r(|I_p − I_q|) I_q,

in which W_p = Σ_{q∈S} G_s(‖p − q‖) G_r(|I_p − I_q|) is the normalization term. G_s and G_r are weight functions (e.g., Gaussian) for space and range, respectively. Edges are preserved because pixels that fall on the other side of an edge receive lower range weights.
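A brute-force NumPy sketch of the space and range weighting described above (Gaussian weights for both; parameter values are illustrative):

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter for a 2-D array (illustrative sketch)."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    num = np.zeros_like(img, dtype=float)
    den = np.zeros_like(img, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy: radius + dy + h,
                          radius + dx: radius + dx + w]
            # Spatial weight depends on the offset; range weight depends
            # on the intensity difference, so cross-edge pixels count little.
            w_s = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
            w_r = np.exp(-((shifted - img) ** 2) / (2 * sigma_r ** 2))
            num += w_s * w_r * shifted
            den += w_s * w_r
    return num / den

# A sharp step edge survives the smoothing.
edge = np.zeros((8, 8))
edge[:, 4:] = 1.0
filtered = bilateral_filter(edge)
```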

Non-local means Buades et al. (2005): The non-local means algorithm takes a more general form in which the output value of a pixel is computed as a weighted average of all pixels in the image, with the weight of each pixel measured as a decreasing function of a weighted Euclidean distance between patches. For a discrete image v, the filtered value at pixel i is computed as

NL[v](i) = Σ_j w(i, j) v(j),

in which the weights are

w(i, j) = (1/Z(i)) exp(−‖v(N_i) − v(N_j)‖²_{2,a} / h²).

Here N_i denotes a square neighborhood of fixed size centered at pixel i, a is the standard deviation of the Gaussian kernel weighting the patch distance, and h controls the decay of the weights. The normalizing constant is computed as Z(i) = Σ_j exp(−‖v(N_i) − v(N_j)‖²_{2,a} / h²).
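A tiny brute-force NumPy sketch of the idea follows, dropping the Gaussian patch weighting for brevity and using a plain mean patch distance (real implementations also restrict the search window for speed):

```python
import numpy as np

def nl_means(img, patch=1, h=0.1):
    """Tiny brute-force non-local means for a small 2-D array
    (illustrative only: O(N^2) in the number of pixels)."""
    H, W = img.shape
    pad = np.pad(img, patch, mode="edge")
    # Extract a (2*patch+1)^2 patch around every pixel.
    patches = np.array([
        pad[y:y + 2 * patch + 1, x:x + 2 * patch + 1].ravel()
        for y in range(H) for x in range(W)
    ])
    # Pairwise mean squared patch distances and exponential weights.
    d2 = ((patches[:, None, :] - patches[None, :, :]) ** 2).mean(axis=2)
    w = np.exp(-d2 / (h ** 2))
    w /= w.sum(axis=1, keepdims=True)  # normalizing constant Z(i)
    return (w @ img.ravel()).reshape(H, W)

rng = np.random.default_rng(2)
noisy = np.full((8, 8), 0.5) + rng.normal(scale=0.05, size=(8, 8))
denoised = nl_means(noisy)
```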

Modified curvature motion Yezzi (1998): As most smoothing techniques are originally designed for gray-scale images, generalizing them to multi-channel color images and feature maps might be less natural, and there may exist multiple ways to do so. Instead of splitting a color image into separate channels, we can treat it as a surface (x, y, kI(x, y)). Following the geometric property that smoother surfaces have smaller areas (or volumes), we can then iteratively smooth it with a general curvature motion method; for a single channel, the update takes the form

∂I/∂t = [(1 + k²I_y²) I_xx − 2k² I_x I_y I_xy + (1 + k²I_x²) I_yy] / (1 + k²I_x² + k²I_y²),

where k is a scaling factor. As k becomes larger, the algorithm transitions from isotropic diffusion to a more edge-preserving diffusion. Such a formulation can be easily and naturally extended to color images and feature maps with more channels along the channel axis.
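A single-channel discretization of graph-surface curvature motion can be sketched as follows (central finite differences, periodic borders via np.roll; a simplified illustration, not necessarily the paper's exact multi-channel scheme):

```python
import numpy as np

def curvature_motion_step(I, k=1.0, dt=0.1):
    """One explicit step of graph-surface curvature motion (sketch)."""
    # First derivatives (central differences).
    Ix = (np.roll(I, -1, 1) - np.roll(I, 1, 1)) / 2.0
    Iy = (np.roll(I, -1, 0) - np.roll(I, 1, 0)) / 2.0
    # Second derivatives.
    Ixx = np.roll(I, -1, 1) - 2 * I + np.roll(I, 1, 1)
    Iyy = np.roll(I, -1, 0) - 2 * I + np.roll(I, 1, 0)
    Ixy = (np.roll(np.roll(I, -1, 0), -1, 1)
           - np.roll(np.roll(I, -1, 0), 1, 1)
           - np.roll(np.roll(I, 1, 0), -1, 1)
           + np.roll(np.roll(I, 1, 0), 1, 1)) / 4.0
    # For small k this reduces to isotropic (heat-equation) smoothing;
    # larger k makes the flow more edge-preserving.
    num = ((1 + (k * Iy) ** 2) * Ixx
           - 2 * k * k * Ix * Iy * Ixy
           + (1 + (k * Ix) ** 2) * Iyy)
    den = 1 + (k * Ix) ** 2 + (k * Iy) ** 2
    return I + dt * num / den

rng = np.random.default_rng(3)
noisy = np.full((16, 16), 0.5) + rng.normal(scale=0.05, size=(16, 16))
out = noisy
for _ in range(5):
    out = curvature_motion_step(out)
```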

4 Experiments

In this section, we present our experimental results of test-time smoothing against adversarial examples. To prepare the test set, we first select all images (39,156 in total) that are correctly classified by the pretrained ResNet-152 He et al. (2016). Then we generate and store adversarial examples using attacks provided by Foolbox Rauber et al. (2017) and ART Nicolae et al. (2018). The white-box untargeted attacks include Projected Gradient Descent (PGD) Madry et al. (2017), DeepFool Moosavi-Dezfooli et al. (2016), Saliency Map Papernot et al. (2016), Newton Fool Jang et al. (2017), and salt-and-pepper noise. For strong attacks (e.g., PGD) that cannot be mitigated by quantization, we store the adversarial examples in JPEG format; for other attacks that require adversarial examples at floating-point accuracy, we store their results in pkl files. To save computation time for future work, all generated data will be available for download.

4.1 Performance of various smoothing techniques on defending fixed adversarial attacks

We start with a set of controlled experiments that show the performance of various smoothing techniques at defending against a fixed attack. Two sets of PGD attacks, one with a small maximum perturbation (imperceptible to humans) and one with a larger maximum perturbation (similar in scale to Xie et al. (2018)), are chosen as baselines because (1) the PGD attack is one of the strongest attacks and (2) adversarial examples from the PGD attack cannot be defended by simple quantization. Following the default settings, 20 iterations of PGD are performed. After obtaining the perturbed images, we tweak the parameters of each smoothing method to find the values that lead to the highest classification accuracy over all perturbed images. If a method contains multiple parameters, we tweak them one after another, fixing each at the optimal value obtained from the previous exploration.
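The one-parameter-at-a-time tuning described above amounts to a greedy coordinate search, roughly as follows (evaluate_accuracy is a hypothetical stand-in for running the defense over all perturbed images and measuring accuracy):

```python
# Greedy, one-at-a-time parameter tuning as described above.
# `evaluate_accuracy` is a hypothetical objective function.

def greedy_tune(param_grids, evaluate_accuracy, defaults):
    """Tune parameters one after another, fixing each at its best value."""
    best = dict(defaults)
    for name, candidates in param_grids.items():
        scores = {v: evaluate_accuracy({**best, name: v}) for v in candidates}
        best[name] = max(scores, key=scores.get)
    return best

# Toy separable objective with its optimum at n_iter=10, kappa=0.2.
toy = lambda p: -abs(p["n_iter"] - 10) - abs(p["kappa"] - 0.2)
best = greedy_tune(
    {"n_iter": [5, 10, 20], "kappa": [0.1, 0.2, 0.4]},
    toy,
    defaults={"n_iter": 5, "kappa": 0.1},
)
```

Note that this greedy procedure is only guaranteed to find the joint optimum when the parameters interact weakly, which is the simplifying assumption made in our experiments.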

Figure 1 shows the classification accuracy on the ImageNet validation set as the strength of the smoothing defense varies. The strength is measured by the most sensitive parameter: the number of iterations for iterative methods such as anisotropic diffusion and modified curvature motion, the size of the kernel for the mean filter, and the diameter of the neighborhood for the bilateral filter.

(a) PGD (, 20 iterations)
(b) PGD (, 20 iterations)
Figure 1: Change of classification accuracy on ImageNet validation set (vertical) along with the strength of smoothing defense (horizontal). The strength of smoothing method is measured by the most sensitive parameter: number of iterations for anisotropic diffusion and modified curvature motion, size of the kernel for mean filter, and radius of the neighborhood for bilateral filter.

We only present results from four selected methods because the rest of them lead to much lower (i.e., 20-30% less) classification accuracy. Henceforth, we will focus on these four methods in subsequent sections.

(a) fixed attack , varying defense
(b) varying attack , fixed defense
Figure 2: Simplified illustration on the “detour” effect between adversarial attack and test-time defense

The curves in Figure 1 share a similar concave shape, which might suggest a geometric relation between the adversarial attack and the test-time defense. As illustrated in Figure 2(a), the test-time defense should not travel too far along the "detour." In the following subsection, we further study the non-monotonic effect from the attacker's perspective.

4.2 Performance of a fixed defense under attacks with varying number of iterations

We continue our controlled experiments by setting the parameters of each smoothing defense to the optimal values obtained in Section 4.1 and varying the strength (i.e., number of iterations) of the PGD attacks. Figure 3 presents the classification accuracy on the ImageNet validation set as the number of attack iterations increases from 1 to 100. Surprisingly, the accuracy first drops but then rebounds as the number of attack iterations keeps increasing. Such behavior might seem contradictory to previous work, as more iterations are commonly believed to produce stronger attacks, especially against defenses that involve adversarial training. For test-time defenses, however, the convex curves may reflect an actual non-monotonic relation between attacks and defenses, as illustrated in Figure 2(b). In contrast, when no defense is performed, the classification accuracy keeps dropping and stabilizes at a low level.

(a) PGD ()
(b) PGD ()
Figure 3: Change of classification accuracy on the ImageNet validation set (vertical) with the number of iterations in the PGD attack (horizontal). The bump in each curve corresponds to a switch from the full ImageNet validation set to a subset of 5,000 images to reduce computation time.

4.3 Variance of performance among categories

During the experiments, we noticed that the variance of classification accuracy across categories was quite large. For illustration, we take the PGD attack and anisotropic diffusion as an example. Figure 4(a) shows the sorted accuracies of the 1,000 ImageNet categories. The lowest categorical accuracy stays below 20%, whereas the highest reaches almost 100%. Similar distributions of categorical accuracy are observed for other attack-defense pairs.

(a) PGD,
(b) PGD,
Figure 4: Distribution of categorical accuracy on adversarial examples, in increasing order. (a): Results on adversarial examples generated by PGD, with anisotropic diffusion as the defense. (b): Same as (a), but for the second set of PGD adversarial examples.

Such an observation leads us to a question: is it possible to select a relatively large subset of test samples on which a designated method works best? The task turns out to be easy. For each smoothing technique, we sort the test samples that it classifies correctly by their prediction confidence. Then we can select a relatively large "optimal" subset (more than 20,000 samples) with high prediction confidence.
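The selection procedure can be sketched with NumPy, with hypothetical arrays standing in for the per-sample predictions of a given defense:

```python
import numpy as np

def optimal_subset(confidence, correct, size):
    """Indices of the `size` correctly-classified samples with the
    highest prediction confidence for a given defense."""
    idx = np.flatnonzero(correct)              # correctly classified samples
    order = np.argsort(confidence[idx])[::-1]  # most confident first
    return idx[order[:size]]

# Toy example: 6 samples, 4 classified correctly by the defense.
conf = np.array([0.9, 0.2, 0.8, 0.95, 0.5, 0.7])
ok = np.array([True, False, True, True, False, True])
subset = optimal_subset(conf, ok, size=3)  # -> indices [3, 0, 2]
```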

"Optimal" subset Defense Anisotropic diffusion Bilateral Mean
Anisotropic diffusion 100.0% 88.51% 93.79%
Bilateral 92.79% 100% 93.81%
Mean 93.62% 89.31% 100%
Table 1: Classification accuracy on “optimal” subsets consisting of adversarial examples generated from PGD attack (). Accuracies can be inflated to 100% on a dataset with samples.

The performance on these optimal subsets is shown in Table 1. For example, the subset in the first row is selected based on anisotropic diffusion; anisotropic diffusion therefore achieves 100% accuracy, while the bilateral filter achieves less than 90%. The optimal subset for each smoothing technique in this work will be released.

4.4 Variance of required defense for test samples

Both the non-monotonic relations in Section 4.1 and the large variance in Section 4.3 suggest an adaptive version of the test-time smoothing defense, which favors iterative methods. Specifically, an optimal iteration number or termination criterion is required for each adversarial example. To demonstrate the potential advantage of the adaptive approach, we compute the minimum number of smoothing iterations required to defend each adversarial example. Figure 5 shows the histograms of the minimum iterations required with anisotropic diffusion under the two sets of PGD attacks. The upper bound of the minimum iteration number is set to 30; in other words, if an adversarial sample remains misclassified throughout 30 smoothing iterations, we consider it undefendable. We then compute an upper-bound accuracy for the defense by taking into account the results from all iterations. Compared with the result from a fixed iteration number over the whole dataset, our simulation of the adaptive method enhances the accuracy from 72.2% to 83.6% on adversarial examples generated by PGD with the smaller perturbation and from 55.5% to 70.1% with the larger perturbation. Designing and implementing the adaptive algorithm is left for future work.
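Such an adaptive defense can be simulated by stopping at the first smoothing iteration that restores the correct label, as in this sketch (all names hypothetical; the oracle access to the true label is exactly what makes this an upper bound rather than a deployable defense):

```python
def adaptive_defense(x, classifier, smooth_step, true_label, max_iter=30):
    """Apply one smoothing iteration at a time until the prediction is
    restored, up to `max_iter`. Returns (defended input, iterations used),
    or (None, None) if the sample is considered undefendable."""
    for i in range(1, max_iter + 1):
        x = smooth_step(x)
        if classifier(x) == true_label:
            return x, i
    return None, None

# Toy example: the "image" is an integer, the attack pushed it above the
# decision threshold, and each smoothing step pulls it back by 1.
cls = lambda x: int(x > 5)
out, iters = adaptive_defense(9, cls, lambda x: x - 1, true_label=0)
```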

(a) PGD () minimum iteration
(b) PGD () minimum iteration
Figure 5: Minimum number of iterations required for defending adversarial examples

5 Discussion

In this work, we present the non-monotonic relation between adversarial attacks and test-time defenses. The huge variance of classification accuracy on adversarial samples may cast doubt on previous work with unpublished or relatively small datasets, as selecting a large-scale dataset that inflates accuracy for a particular defense is feasible. For this reason, we do not argue for the superiority of a single defense. As stated in previous sections, the effectiveness of test-time smoothing defenses relies on the strong assumption that adversarial images are noisier than clean samples. Such an assumption, however, may not hold for more advanced attacks in the future. Based on our experimental results, test-time smoothing is helpful in defending against existing white-box untargeted attacks. There are a few failed attempts worth reporting. Edge-preserving smoothing techniques do not necessarily outperform common filters, because they may also preserve structural details in strong noise. In addition, multi-channel smoothing techniques are not superior to channel-by-channel smoothing filters. Unlike feature denoising during adversarial training, feature smoothing at test time becomes less useful once the smoothing in the image domain is sufficient.

We hope this paper will stimulate more detailed analysis on adversarial examples at a finer scale. Future work includes investigating and comparing characteristics of defendable and undefendable adversarial examples, as well as constructing robust and adaptive termination criteria for each test sample. In addition, a more comprehensive metric is needed for the evaluation of defenses.

Acknowledgement

This work was funded in part by Army Research Office grant number ARO W911NF-18-1-0281 and Google Cloud Platform Research Credit award.

References

  • M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, et al. (2016) End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316. Cited by: §1.
  • A. Buades, B. Coll, and J. Morel (2005) A non-local algorithm for image denoising. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), Vol. 2, pp. 60–65. Cited by: §3.2.
  • N. Carlini and D. Wagner (2017) Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57. Cited by: §1.
  • J. Gomes (2018) Medium. External Links: Link Cited by: §2.
  • A. Goodfellow and B. Papernot (2017) Is attacking machine learning easier than defending it?. cleverhans-blog 15. External Links: Link Cited by: §2.
  • I. J. Goodfellow, J. Shlens, and C. Szegedy (2014) Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572. Cited by: §1.
  • K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §4.
  • X. Huang, D. Kroening, M. Kwiatkowska, W. Ruan, Y. Sun, E. Thamo, M. Wu, and X. Yi (2018) Safety and trustworthiness of deep neural networks: a survey. arXiv preprint arXiv:1812.08342. Cited by: §2.
  • U. Jang, X. Wu, and S. Jha (2017) Objective metrics and gradient descent algorithms for adversarial examples in machine learning. In Proceedings of the 33rd Annual Computer Security Applications Conference, pp. 262–277. Cited by: §4.
  • S. Kumra and C. Kanan (2017) Robotic grasp detection using deep convolutional neural networks. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 769–776. Cited by: §1.
  • I. Lenz, H. Lee, and A. Saxena (2015) Deep learning for detecting robotic grasps. The International Journal of Robotics Research 34 (4-5), pp. 705–724. Cited by: §1.
  • A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu (2017) Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083. Cited by: §4.
  • J. Mahler, J. Liang, S. Niyaz, M. Laskey, R. Doan, X. Liu, J. A. Ojea, and K. Goldberg (2017) Dex-net 2.0: deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics. arXiv preprint arXiv:1703.09312. Cited by: §1.
  • S. Moosavi-Dezfooli, A. Fawzi, and P. Frossard (2016) Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2574–2582. Cited by: §4.
  • A. Mustafa, S. H. Khan, M. Hayat, J. Shen, and L. Shao (2019) Image super-resolution as a defense against adversarial attacks. arXiv preprint arXiv:1901.01677. Cited by: §2.
  • M. Nicolae, M. Sinn, M. N. Tran, A. Rawat, M. Wistuba, V. Zantedeschi, N. Baracaldo, B. Chen, H. Ludwig, I. Molloy, and B. Edwards (2018) Adversarial robustness toolbox v0.8.0. CoRR 1807.01069. External Links: Link Cited by: §4.
  • [17] NVIDIA self-driving vehicles development platform. Note: http://www.nvidia.com/object/drive-px.html. Accessed: 2019-09-17. Cited by: §1.
  • N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami (2016) The limitations of deep learning in adversarial settings. In 2016 IEEE European Symposium on Security and Privacy (EuroS&P), pp. 372–387. Cited by: §1, §4.
  • S. Paris, P. Kornprobst, J. Tumblin, and F. Durand (2007) A gentle introduction to bilateral filtering and its applications. In ACM SIGGRAPH 2007 courses, pp. 1. Cited by: §3.2.
  • P. Perona and J. Malik (1990) Scale-space and edge detection using anisotropic diffusion. IEEE Transactions on pattern analysis and machine intelligence 12 (7), pp. 629–639. Cited by: §3.2.
  • J. Rauber, W. Brendel, and M. Bethge (2017) Foolbox: a python toolbox to benchmark the robustness of machine learning models. arXiv preprint arXiv:1707.04131. External Links: Link, 1707.04131 Cited by: §4.
  • O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei (2015) ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV) 115 (3), pp. 211–252. External Links: Document Cited by: 3rd item.
  • J. Su, D. V. Vargas, and K. Sakurai (2019) One pixel attack for fooling deep neural networks. IEEE Transactions on Evolutionary Computation. Cited by: §2.
  • C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus (2013) Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199. Cited by: §1.
  • [25] Tesla Autopilot. Note: https://www.tesla.com/autopilot. Accessed: 2019-09-17. Cited by: §1.
  • C. Tomasi and R. Manduchi (1998) Bilateral filtering for gray and color images.. In Iccv, Vol. 98, pp. 2. Cited by: §3.2.
  • C. Xie, Y. Wu, L. van der Maaten, A. Yuille, and K. He (2018) Feature denoising for improving adversarial robustness. arXiv preprint arXiv:1812.03411. Cited by: §1, §2, §4.1.
  • A. Yezzi (1998) Modified curvature motion for image smoothing and enhancement.. IEEE transactions on image processing: a publication of the IEEE Signal Processing Society 7 (3), pp. 345–352. Cited by: §3.2.