Image Super-Resolution as a Defense Against Adversarial Attacks
Convolutional Neural Networks have achieved significant success across multiple computer vision tasks. However, they are vulnerable to carefully crafted, human-imperceptible adversarial noise patterns, which constrains their deployment in critical security-sensitive systems. This paper proposes a computationally efficient image enhancement approach that provides a strong defense mechanism to effectively mitigate the effect of such adversarial perturbations. We show that deep image restoration networks learn mapping functions that can bring off-the-manifold adversarial samples onto the natural image manifold, thus restoring classifier beliefs towards correct classes. A distinguishing feature of our approach is that, in addition to providing robustness against attacks, it simultaneously enhances image quality and retains the model's performance on clean images. Furthermore, the proposed method neither modifies the classifier nor requires a separate mechanism to detect adversarial images. The effectiveness of the scheme has been demonstrated through extensive experiments, where it has proven a strong defense in both white-box and black-box attack settings. The proposed scheme is simple and has the following advantages: (1) it does not require any model training or parameter optimization, (2) it complements other existing defense mechanisms, (3) it is agnostic to the attacked model and attack type, and (4) it provides superior performance across all popular attack algorithms. Our code is publicly available at https://github.com/aamir-mustafa/super-resolution-adversarial-defense.
The success of Convolutional Neural Networks (CNNs) over the past several years has led to their extensive deployment in a wide range of computer vision tasks, including image classification [1, 2, 3], object detection [4, 5], semantic segmentation [6, 7] and visual question answering. Beyond these, CNNs now play a pivotal role in many critical real-world systems, including self-driving cars and models for disease diagnosis, which necessitates their robustness in such situations. Recent works [11, 12, 13], however, have shown that CNNs can easily be fooled by distorting natural images with small, well-crafted, human-imperceptible additive perturbations. These distorted images, known as adversarial examples, have further been shown to be transferable across different architectures, e.g., an adversarial example generated for an Inception v-3 model is able to fool other CNN architectures [11, 14].
Owing to the critical nature of security-sensitive CNN applications, significant research has been carried out to devise defense mechanisms against these vulnerabilities [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25]. We can broadly categorize these defenses along two directions. The first is model-specific mechanisms, which aim to regularize a specific model's parameters through adversarial training or parameter smoothing [18, 17, 15, 26, 24]. Such methods often require differentiable transformations that are computationally demanding. Moreover, these transformations are vulnerable to further attacks, as adversaries can circumvent them by exploiting the differentiable modules. The second category of defenses is model-agnostic. These mitigate the effect of adversarial perturbations in the input image domain by applying various transformations. Examples of such techniques include JPEG compression [27, 28], foveation-based methods, which crop the image background, random pixel deflection, and random image padding & re-sizing. Compared with differentiable model-specific methods, most model-agnostic approaches are computationally faster and operate directly in the input domain, making them more favorable. However, most of these approaches lose critical image content when removing adversarial noise, which results in poor classification performance on non-attacked images.
This paper proposes a model-agnostic defense mechanism against a wide range of recently proposed adversarial attacks [12, 13, 30, 31, 32, 33] and does not suffer from information loss. Our proposed defense is based upon image super-resolution (SR), which selectively adds high frequency components to an image and removes noisy perturbations added by the adversary. We hypothesize that the learned SR models are generic enough to remap off-the-manifold samples on to the natural image manifold (see Fig. 1). The effect of added noise is further suppressed by wavelet domain filtering and inherently minimized through a global pooling operation on the higher resolution version of the image. The proposed image super-resolution and wavelet filtering based defense results in a joint non-differentiable module, which can efficiently recover the original class labels for adversarially perturbed images.
The main contributions of our work are:
Through extensive empirical evaluations, we show image super-resolution to be an effective defense strategy against a wide range of recently proposed state-of-the-art attacks in the literature [12, 13, 30, 31, 32, 33]. Using Class Activation Map visualizations, we demonstrate that super-resolution can successfully divert the attention of the classifier from random noisy patches to more distinctive regions of the attacked images (see Fig. 5 and 6).
Super-resolving an adversarial image projects it back to the natural image manifold learned by deep image classification networks.
Unlike existing image transformation based techniques, which introduce artifacts in the process of overcoming adversarial noise, the proposed scheme retains critical image content, and thus minimally impacts the classifier’s performance on clean, non-attacked images.
The proposed defense mechanism tackles adversarial attacks with no knowledge of the target model’s architecture or parameters. This can easily complement other existing model-specific defense methods.
Our approach is closely related to defenses that first estimate the manifold of clean data to detect adversarial examples and then apply a mapping function to reduce adversarial noise. Since these methods use generator blocks to re-create images, their studied cases are restricted to small datasets (CIFAR-10, MNIST) with low-resolution images. In contrast, our approach does not require any prior detection scheme and works for all types of natural images with a much more generic mapping function.
Below, we first formally define the underlying problem (Sec. 2.1), followed by a brief description of existing adversarial attacks (Sec. 2.2) and defenses (Sec. 2.3). We then present our proposed defense mechanism (Sec. 3). The effectiveness of our proposed defense is then demonstrated through extensive experiments against state-of-the-art adversarial attacks [12, 13, 30, 31, 32, 33] and comparison with other recently proposed model-agnostic defenses [35, 16, 36, 23] (see Section 4).
Here we introduce popular adversarial attacks and defenses proposed in the literature, which form the basis of our evaluations and are necessary for understanding our proposed defense mechanism. We only focus on adversarial examples in the domain of image classification, though the same can be crafted for various other computer vision tasks as well.
Let $x_c$ denote a clean image sample and $y_c$ its corresponding ground-truth label, where the subscript emphasizes that the image is clean. Untargeted attacks aim to mis-classify a correctly classified example to any incorrect category. For these attacks, given an image classifier $\mathcal{C}(\cdot)$, an additive perturbation $\rho$ is computed under the constraint that the generated adversarial example $x_{adv} = x_c + \rho$ looks visually similar to the clean image, i.e., $d(x_c, x_{adv}) \le \epsilon$ for some dissimilarity function $d(\cdot,\cdot)$, and the corresponding labels are unequal, i.e., $\mathcal{C}(x_{adv}) \neq y_c$. Targeted attacks change the correct label to a specified incorrect label, i.e., they seek $x_{adv}$ such that $\mathcal{C}(x_{adv}) = y_t$, where $y_t$ is a specific class label with $y_t \neq y_c$. An attack is considered successful for an image sample if it can find the corresponding adversarial example under the given set of constraints. In practice, $d(\cdot,\cdot)$ is an $\ell_p$ norm between a clean image and its corresponding adversarial example, where $p \in \{1, 2, \infty\}$.
(a) Fast Gradient Sign Method (FGSM): This is one of the first attack methods, introduced by Goodfellow et al. Given a loss function $J(x_c, y_c; \theta)$, where $\theta$ denotes the network parameters, the goal is to maximize the loss as:

$$\max_{x_{adv}} \ J(x_{adv}, y_c; \theta), \quad \text{s.t.} \ \|x_{adv} - x_c\|_\infty \le \epsilon.$$

FGSM is a single-step attack which finds the adversarial perturbation by taking one step along the sign of the gradient of the loss function w.r.t. the image ($\nabla_x J$):

$$x_{adv} = x_c + \epsilon \cdot \mathrm{sign}\big(\nabla_x J(x_c, y_c; \theta)\big).$$

Here $\epsilon$ is the step size, which essentially restricts the $\ell_\infty$ norm of the perturbation.
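To make the single-step update concrete, the following minimal numpy sketch applies FGSM to a toy logistic-regression "classifier" standing in for a CNN (the model, weights, and values here are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """Single-step FGSM on a toy logistic-regression model.

    Loss: binary cross-entropy J = -[y log p + (1-y) log(1-p)],
    with p = sigmoid(w.x + b).  Its gradient w.r.t. the input is
    (p - y) * w, and the attack takes one eps-sized step along the
    sign of that gradient, maximizing the loss.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w              # dJ/dx for binary cross-entropy
    return x + eps * np.sign(grad_x)

# Toy example: a point firmly on the positive side of the boundary.
w = np.array([2.0, -1.0])
b = 0.0
x_clean = np.array([1.0, 0.5])        # w.x = 1.5 > 0 -> class 1
x_adv = fgsm_attack(x_clean, y=1.0, w=w, b=b, eps=0.9)

# The perturbation is bounded by eps in the L-infinity norm ...
assert np.max(np.abs(x_adv - x_clean)) <= 0.9 + 1e-9
# ... yet it flips the decision: w.x_adv = 0.2 - 1.4 = -1.2 < 0.
assert (w @ x_clean + b) > 0 and (w @ x_adv + b) < 0
```

The same one-line update applies to deep networks, with the gradient obtained by back-propagation instead of the closed form above.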
(b) Iterative Fast Gradient Sign Method (I-FGSM) is an iterative variant of FGSM, introduced by Kurakin et al. I-FGSM performs the update as follows:

$$x_{adv}^{t+1} = \mathrm{clip}_\epsilon\Big(x_{adv}^{t} + \alpha \cdot \mathrm{sign}\big(\nabla_x J(x_{adv}^{t}, y_c; \theta)\big)\Big),$$

where $x_{adv}^{0} = x_c$, $\alpha = \epsilon / T$, and after $T$ iterations, $x_{adv} = x_{adv}^{T}$.
(c) Momentum Iterative Fast Gradient Sign Method (MI-FGSM), proposed by Dong et al., is similar to I-FGSM with the introduction of an additional momentum term, which stabilizes the direction of the gradient and helps in escaping local maxima. MI-FGSM is defined as follows:

$$g^{t+1} = \mu \cdot g^{t} + \frac{\nabla_x J(x_{adv}^{t}, y_c; \theta)}{\big\|\nabla_x J(x_{adv}^{t}, y_c; \theta)\big\|_1}, \qquad x_{adv}^{t+1} = \mathrm{clip}_\epsilon\Big(x_{adv}^{t} + \alpha \cdot \mathrm{sign}\big(g^{t+1}\big)\Big),$$

where $\mu$ is the decay factor, $g^{0} = 0$, $x_{adv}^{0} = x_c$, and $x_{adv} = x_{adv}^{T}$ after $T$ iterations.
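The iterative update with momentum can be sketched on the same toy logistic model used above (again an illustrative stand-in for a CNN, not the paper's experimental setup; setting the decay factor to zero recovers plain I-FGSM):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mi_fgsm(x, y, w, b, eps, steps=10, mu=1.0):
    """MI-FGSM on a toy logistic-regression model.

    Each step accumulates the L1-normalized input gradient into a
    momentum buffer g, then moves alpha = eps/steps along sign(g).
    The iterate is clipped back into the eps-ball around x.
    With mu = 0 this reduces to plain I-FGSM.
    """
    alpha = eps / steps
    g = np.zeros_like(x)
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ x_adv + b)
        grad = (p - y) * w                            # dJ/dx
        g = mu * g + grad / (np.sum(np.abs(grad)) + 1e-12)
        x_adv = x_adv + alpha * np.sign(g)
        x_adv = np.clip(x_adv, x - eps, x + eps)      # stay in the eps-ball
    return x_adv

w = np.array([2.0, -1.0]); b = 0.0
x_clean = np.array([1.0, 0.5])
x_adv = mi_fgsm(x_clean, y=1.0, w=w, b=b, eps=0.9)
assert np.max(np.abs(x_adv - x_clean)) <= 0.9 + 1e-9
assert (w @ x_adv + b) < 0 < (w @ x_clean + b)        # label flipped
```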
(d) DeepFool was proposed by Moosavi-Dezfooli et al. and aims to minimize the $\ell_2$ norm between a given image and its adversarial counterpart. The attack assumes that a given image resides in a specific class region surrounded by the decision boundaries of the classifier. The algorithm iteratively projects the image across the linearized decision boundaries, which form a polyhedron, until the image crosses a boundary and is mis-classified.
(e) Carlini and Wagner (C&W) is a strong iterative attack that minimizes over an auxiliary variable $w$ as follows:

$$\min_{w} \ \|\rho\|_2^2 + c \cdot f(x_c + \rho),$$

where $\rho = \tfrac{1}{2}\big(\tanh(w) + 1\big) - x_c$ is the perturbation and $f(\cdot)$ is defined as

$$f(x) = \max\Big(Z(x)_{y_c} - \max\{Z(x)_k : k \neq y_c\},\ -\kappa\Big),$$

where $Z(x)_k$ are the logit values corresponding to a class $k$ and $\kappa$ is the margin parameter. The C&W attack works for various $\ell_p$ norms.
(f) DIFGSM and MDIFGSM: The aforementioned attacks can be grouped into single-step and iterative attacks. Iterative attacks have a higher success rate under white-box conditions, but they tend to overfit and generalize poorly to black-box settings. On the contrary, single-step attacks generate perturbed images with fairly improved transferability but a lower success rate in white-box conditions. The recently proposed Diverse-Input-Iterative-FGSM (DIFGSM) and Momentum-Diverse-Input-Iterative-FGSM (MDIFGSM) methods claim to fill this gap and improve the transferability of iterative attacks. DIFGSM performs random image re-sizing and padding as an image transformation $T(\cdot)$, thus creating an augmented set of images, which are then attacked using I-FGSM as:

$$x_{adv}^{t+1} = \mathrm{clip}_\epsilon\Big(x_{adv}^{t} + \alpha \cdot \mathrm{sign}\big(\nabla_x J(T(x_{adv}^{t}; p), y_c; \theta)\big)\Big).$$

Here $p$ is the ratio of transformed images to the total number of images in the augmented dataset. MDIFGSM is a variant which incorporates the momentum term in DIFGSM to stabilize the direction of the gradients. The overall update for MDIFGSM is similar to MI-FGSM, with the gradient accumulation step being replaced by:

$$g^{t+1} = \mu \cdot g^{t} + \frac{\nabla_x J(T(x_{adv}^{t}; p), y_c; \theta)}{\big\|\nabla_x J(T(x_{adv}^{t}; p), y_c; \theta)\big\|_1}.$$
Tramèr et al. proposed ensemble adversarial training, which regularizes the network by softening the decision boundaries, thereby encompassing nearby adversarial images. Defensive distillation improves model robustness in essentially the same fashion, by retraining a given model using soft labels acquired through a distillation mechanism. Kurakin et al. augmented a training batch of clean images with corresponding adversarial images to improve robustness. Moosavi-Dezfooli et al., however, showed that adversarial examples can also be generated for an already adversarially trained model.
Recently, some defense methods have been proposed in the input image transformation domain. Data compression (JPEG image compression) as a defense was studied by [27, 28]. JPEG compression deploys the discrete cosine transform to suppress human-imperceptible, high-frequency noisy components. Guo et al., however, noted that JPEG compression alone is far from being an effective defense; they proposed image transformations using quilting and Total Variance Minimization (TVM). Feature squeezing limits the adversarial space by reducing image precision, either through bit-depth reduction or smoothing filters. A foveation-based method was proposed by Luo et al., which shows robustness against weak attacks like L-BFGS and FGSM. Another closely related work to ours is that of Prakash et al., which deflects attention by carefully corrupting less critical image pixels. This introduces new artifacts which reduce the image quality and can result in mis-classification; to handle such artifacts, BayesShrink denoising in the wavelet domain is used. It has been shown that denoising in the wavelet domain yields superior performance to other techniques such as bilateral filtering, anisotropic diffusion, TVM and Wiener-Hunt de-convolution. Another closely related work is that of Xie et al., which performs image transformations by randomly re-sizing and padding an image before passing it through a CNN classifier. Xie et al. also showed that adding adversarial patterns to a clean image results in noisy activation maps, and proposed a defense mechanism that performs feature denoising using non-local means, which requires retraining the model end-to-end with adversarial data augmentation. One of the main shortcomings of the aforementioned defense techniques (JPEG compression, pixel deflection and foveation-based methods) is that the transformations degrade the image quality, which results in the loss of significant information from images.
Existing defense mechanisms against adversarial attacks aim to reduce the effects of added perturbations so as to recover the correct image class. Defenses are being developed along two main directions: (i) modifying the image classifier $\mathcal{C}(\cdot)$ to $\mathcal{C}'(\cdot)$ such that it recovers the true label for an adversarial example, i.e., $\mathcal{C}'(x_{adv}) = y_c$; and (ii) transforming the input image such that $\mathcal{C}(t(x_{adv})) = y_c$, where $t(\cdot)$ is an image transformation function. Ideally, $t(\cdot)$ should be a model-agnostic, complex and non-differentiable function, making it harder for the adversary to circumvent the transformed model by back-propagating the classifier error through it.
Our proposed approach, detailed below, falls under the second category of defense mechanisms. We propose to use image restoration techniques to purify perturbed images. The proposed approach has two components, which together form a non-differentiable pipeline that is difficult to bypass. As an initial step, we apply wavelet denoising to suppress any noise patterns. The central component of our approach is the super-resolution operation, which enhances the pixel resolution while simultaneously removing adversarial patterns. Our experiments show that image super-resolution alone is sufficient to reinstate classifier beliefs towards the correct categories; the wavelet denoising step, however, provides added robustness, since it is a non-differentiable denoising operation.
Our goal is to defend a classification model against the perturbed images generated by an adversary. Our approach is motivated by the manifold assumption, which postulates that natural images lie on low-dimensional manifolds. This explains why low-dimensional deep feature representations can accurately capture the structure of real datasets. Perturbed images are known to lie off the low-dimensional manifold of natural images, which is approximated by deep networks. Gong et al. showed that a simple binary classifier can successfully separate off-the-manifold adversarial images from clean ones, and thereby concluded that adversarial and clean data are not twins, despite appearing visually identical. Fig. 2 shows a low-dimensional manifold of natural images. Data points from a real-world dataset (say, ImageNet) are sampled from a distribution of natural images and can be considered to lie on the manifold; such images are referred to as in-domain. Corrupting these in-domain images by adding adversarial noise takes them off the manifold. A model that learns to yield on-the-manifold images from off-the-manifold inputs can go a long way in detecting and defending against adversarial attacks. We propose to use image super-resolution as a mapping function to remap off-the-manifold adversarial samples onto the natural image manifold and validate our proposal through experimentation (see Sec. 4.1). In this manner, robustness against adversarial perturbations is achieved by enhancing the visual quality of images. This approach provides remarkable benefits over other defense mechanisms that truncate critical information to achieve robustness.
Super-resolution Network: A required characteristic of a defense mechanism is the ability to destroy the fraudulent perturbations added by an adversary. Since these perturbations are generally high-frequency details, we use a super-resolution network that explicitly uses residual learning to focus on such details. These details are added to the low-resolution inputs in each residual block to eventually generate a high-quality, super-resolved image. The network considered in this work is the Enhanced Deep Super-Resolution (EDSR) network, which uses a hierarchy of such residual blocks. While our proposed approach achieves competitive performance with other super-resolution and up-sampling techniques, we demonstrate the added efficacy of the residual-learning-based EDSR model through extensive experiments.
Effect on Spectral Distribution: The underlying assumption of our method is that deep super-resolution networks learn a mapping function generic enough to map a perturbed image onto the manifold of its corresponding class of images. This mapping function, learned with deep CNNs, essentially models the distribution of real, non-perturbed image data. We validate this assumption by analyzing the frequency-domain spectra of the clean, adversarial and recovered images in Fig. 3. It can be observed that the adversarial image contains high-frequency patterns, and that the super-resolution operation injects further high-frequency patterns into the recovered image. This achieves two major benefits: first, the newly added high-frequency patterns smooth the frequency response of the image (column 5, Fig. 3) and, second, the super-resolution destroys the adversarial patterns that seek to fool the model.
Effect of adversarial perturbations on feature maps: Adversarial attacks add small perturbations to images, which are often imperceptible to the human eye or generally perceived as small noise in the pixel space. However, this adversarial noise is amplified in the feature maps of a convolutional network, leading to substantial noise. Fig. 4 shows the feature maps for three clean images, their adversarial counterparts and the defended images, taken from a ResNet-50 residual block after the activation layer. The features for a clean image sample are activated only at semantically significant regions of the image, whereas those for its adversarial counterpart are also focused on semantically irrelevant regions. Xie et al. performed feature denoising using non-local means to improve the robustness of convolutional networks; their model is trained end-to-end on adversarially perturbed images. Our defense technique recovers the feature maps (Cols 2 and 4, Fig. 4) without requiring any model retraining or adversarial image data augmentation.
Advantages of proposed method: Our proposed method offers a number of advantages. (a) The proposed approach is agnostic to the attack algorithm and the attacked model. (b) Unlike many recently proposed techniques, which degrade critical image information as part of their defense, our proposed method improves image quality while simultaneously providing a strong defense. (c) The proposed method does not require any learning and only uses a fixed set of parameters to purify input images. (d) It does not hamper the classifier’s performance on clean images. (e) Due to its modular nature, the proposed approach can be used as a pre-processing step in existing deep networks. Furthermore, our purification approach is equally applicable to other computer vision tasks beyond classification such as segmentation and object detection.
Since all adversarial attacks add noise to an image in the form of well-crafted perturbations, an efficient image denoising technique can go a long way in mitigating the effect of these perturbations, if not remove them altogether. Image denoising in the spatial or frequency domain causes a loss of textural details, which is detrimental to our goal of achieving clean-image-like performance on denoised images. Denoising in the wavelet domain has gained popularity in recent works; it yields better results than various other techniques, including bilateral filtering, anisotropic diffusion, Total Variance Minimization (TVM) and Wiener-Hunt de-convolution. The main principle behind wavelet shrinkage is that the Discrete Wavelet Transform (DWT) of a real-world signal is sparse. This can be exploited to our advantage, since the ImageNet dataset contains images that capture real-world scenes and objects. Consider an adversarial example $x_{adv} = x_c + \rho$; the wavelet transform of $x_{adv}$ is a linear combination of the wavelet transforms of the clean image and the noise. Unlike image smoothing, which removes the higher-frequency components of an image, the DWT of a real-world image has large coefficients corresponding to significant image features, and noise can be removed by applying a threshold to the smaller coefficients.
The thresholding parameter $\tau$ determines how efficiently we shrink the wavelet coefficients and remove adversarial noise from an image. In practice, two types of thresholding methods are used: a) hard thresholding and b) soft thresholding. Hard thresholding is a non-linear technique, where each coefficient $w$ is individually compared to the threshold value $\tau$, as follows:

$$\hat{w} = \begin{cases} w, & |w| > \tau \\ 0, & |w| \le \tau. \end{cases}$$

Reducing the small noisy coefficients to zero and then carrying out an inverse wavelet transform produces an image which retains critical information and suppresses the noise. Unlike hard thresholding, where the coefficients larger than $\tau$ are fully retained, soft thresholding modifies the coefficients as follows:

$$\hat{w} = \mathrm{sign}(w) \cdot \max\big(|w| - \tau,\ 0\big).$$
In our method, we use soft thresholding, as it reduces the abrupt sharp changes that otherwise occur with hard thresholding. Hard thresholding also over-smooths an image, which reduces the classification accuracy on clean, non-adversarial images.
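The two shrinkage rules above can be written in a few lines of numpy; the example coefficients are illustrative:

```python
import numpy as np

def hard_threshold(w, tau):
    """Keep coefficients with |w| > tau unchanged; zero out the rest."""
    return np.where(np.abs(w) > tau, w, 0.0)

def soft_threshold(w, tau):
    """Zero out small coefficients and shrink the survivors by tau."""
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

coeffs = np.array([-3.0, -0.4, 0.1, 0.9, 2.5])
tau = 0.5

# Hard thresholding zeroes the small coefficients and keeps large ones intact.
assert np.allclose(hard_threshold(coeffs, tau), [-3.0, 0.0, 0.0, 0.9, 2.5])
# Soft thresholding additionally shrinks the survivors by tau,
# avoiding the abrupt jump at |w| = tau.
assert np.allclose(soft_threshold(coeffs, tau), [-2.5, 0.0, 0.0, 0.4, 2.0])
```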
Choosing an optimal threshold value is the underlying challenge in wavelet denoising. A very large threshold discards larger wavelet coefficients as well, resulting in an over-smoothed image. In contrast, a small threshold allows even the noisy wavelets to pass, thus failing to produce a denoised image after reconstruction. Universal thresholding is employed in VisuShrink to determine the threshold for an image with $N$ pixels as $\tau = \sigma_n \sqrt{2 \ln N}$, where $\sigma_n$ is an estimate of the noise level. BayesShrink is an efficient method for wavelet shrinkage which employs a different threshold for each wavelet sub-band by assuming Gaussian noise. Suppose $y = w + n$ is the wavelet transform of an adversarial image, where $w$ and $n$ are the wavelet transforms of the clean image and the noise, respectively. Since $w$ and $n$ are mutually independent, the variances $\sigma_y^2$, $\sigma_w^2$ and $\sigma_n^2$ of $y$, $w$ and $n$, respectively, follow $\sigma_y^2 = \sigma_w^2 + \sigma_n^2$. A wavelet sub-band variance for an adversarial image is estimated as:

$$\hat{\sigma}_y^2 = \frac{1}{M} \sum_{m=1}^{M} y_m^2,$$

where $y_m$ are the sub-band wavelet coefficients and $M$ is the total number of wavelet coefficients in the sub-band. The threshold value for BayesShrink soft thresholding is then given as:

$$\tau = \frac{\sigma_n^2}{\hat{\sigma}_w}, \quad \text{where} \ \hat{\sigma}_w = \sqrt{\max\big(\hat{\sigma}_y^2 - \sigma_n^2,\ 0\big)}.$$
In our experiments, we explored both VisuShrink and BayesShrink soft thresholding and found the latter to perform better and provide visually superior denoising.
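Both threshold rules are straightforward to compute; the sketch below assumes the noise level $\sigma_n$ is known (in practice it is estimated, e.g. from the median absolute deviation of the finest detail sub-band), and the synthetic sub-band is only for illustration:

```python
import numpy as np

def visu_shrink_threshold(sigma_n, n_pixels):
    """VisuShrink universal threshold: tau = sigma_n * sqrt(2 ln N)."""
    return sigma_n * np.sqrt(2.0 * np.log(n_pixels))

def bayes_shrink_threshold(subband, sigma_n):
    """BayesShrink threshold for one wavelet sub-band.

    The sub-band variance sigma_y^2 is estimated as the mean of the
    squared coefficients; since sigma_y^2 = sigma_w^2 + sigma_n^2,
    the clean-signal deviation is sigma_w = sqrt(max(sigma_y^2 -
    sigma_n^2, 0)) and the threshold is tau = sigma_n^2 / sigma_w.
    """
    sigma_y2 = np.mean(subband ** 2)
    sigma_w = np.sqrt(max(sigma_y2 - sigma_n ** 2, 0.0))
    if sigma_w == 0.0:                # sub-band is pure noise: remove it all
        return float(np.max(np.abs(subband)))
    return sigma_n ** 2 / sigma_w

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 2.0, size=4096)   # sigma_w ~ 2
noise = rng.normal(0.0, 0.5, size=4096)   # sigma_n = 0.5
tau = bayes_shrink_threshold(clean + noise, sigma_n=0.5)
# Expected tau ~ sigma_n^2 / sigma_w = 0.25 / 2 = 0.125
assert 0.05 < tau < 0.25
```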
An algorithmic description of our end-to-end defense scheme is provided in Algorithm 1. We first smooth the effect of adversarial noise using soft wavelet denoising. We then employ super-resolution as a mapping function to enhance the visual quality of images. Super-resolving an image maps adversarial examples, which lie off the manifold in low-resolution space, back onto the natural image manifold in high-resolution space. The recovered image is then passed through the same pre-trained models on which the adversarial examples were generated. Our model-agnostic image transformation technique thus minimizes the effect of adversarial perturbations in the image domain while causing only minimal depreciation in classification accuracy on clean, non-adversarial images.
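The two-stage pipeline can be sketched end-to-end. In this simplified stand-in, a one-level Haar transform with a fixed soft threshold replaces BayesShrink denoising, and nearest-neighbor upsampling is a placeholder for the EDSR network; both substitutions are assumptions for illustration, not the paper's actual components:

```python
import numpy as np

def wavelet_denoise(img, tau=0.1):
    """One-level 2D Haar transform + soft thresholding of the three
    detail bands; the approximation band is kept intact."""
    p00, p01 = img[0::2, 0::2], img[0::2, 1::2]
    p10, p11 = img[1::2, 0::2], img[1::2, 1::2]
    a  = (p00 + p01 + p10 + p11) / 4.0   # approximation
    dh = (p00 - p01 + p10 - p11) / 4.0   # horizontal detail
    dv = (p00 + p01 - p10 - p11) / 4.0   # vertical detail
    dd = (p00 - p01 - p10 + p11) / 4.0   # diagonal detail
    st = lambda w: np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)
    dh, dv, dd = st(dh), st(dv), st(dd)
    out = np.empty_like(img)             # inverse Haar transform
    out[0::2, 0::2] = a + dh + dv + dd
    out[0::2, 1::2] = a - dh + dv - dd
    out[1::2, 0::2] = a + dh - dv - dd
    out[1::2, 1::2] = a - dh - dv + dd
    return out

def super_resolve(img, scale=2):
    """Placeholder for EDSR: nearest-neighbor upsampling by `scale`."""
    return np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)

def defend(img_adv):
    """Algorithm 1 sketch: denoise, then super-resolve."""
    return super_resolve(wavelet_denoise(img_adv))

x_adv = np.random.default_rng(1).random((8, 8))
x_def = defend(x_adv)
assert x_def.shape == (16, 16)   # defended image lives in HR space
```

The defended image is then fed to the unmodified pre-trained classifier.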
Table I (excerpt, Inception ResNet v-2 rows): destruction rates (%) for clean images and the attack settings of Table I; the 'No Defense' row gives classification accuracy on the attacked images.

| Defense | Rates (%) across Table I settings |
|---|---|
| No Defense | 100, 59.4, 55.0, 53.6, 21.6, 0.1, 0.3, 0.5, 1.5, 0.6 |
| JPEG Compression (Das et al.) | 95.5, 67.0, 55.3, 53.7, 81.3, 83.9, 83.1, 72.8, 1.6, 1.1 |
| Random resizing + zero padding (Xie et al.) | 98.7, 70.7, 59.1, 55.8, 87.5, 89.7, 88.0, 88.3, 7.5, 5.3 |
| Quilting + Total Variance Minimization (Guo et al.) | 95.6, 74.6, 67.3, 59.0, 86.5, 86.2, 85.3, 84.8, 4.5, 1.2 |
| Pixel Deflection (Prakash et al.) | 92.1, 78.2, 75.7, 71.6, 91.3, 88.9, 89.7, 89.8, 57.9, 24.6 |
| Our work: Wavelet Denoising + Image Super-Resolution | 98.2, 95.3, 87.4, 82.3, 95.8, 96.0, 95.6, 95.0, 69.8, 35.6 |
Models and Datasets:
We evaluate our proposed defense and compare it with existing methods for three different classifiers: Inception v-3, ResNet-50 and Inception ResNet v-2. For these models, we obtain ImageNet pre-trained weights from TensorFlow's GitHub repository (https://github.com/tensorflow/models/tree/master/research/slim), and do not perform any re-training or fine-tuning. The evaluations are done on a subset of 5000 images from the ILSVRC validation set. The images are selected such that the respective model achieves a top-1 accuracy of 100% on the clean, non-attacked images; evaluating defense mechanisms on already mis-classified images is not meaningful, since an attack on a mis-classified image is considered successful by definition. We also perform experiments on the NIPS 2017 Competition on Adversarial Attacks and Defenses DEV dataset, collected by the Google Brain organizers, which consists of ImageNet-like images.
Attacks: We generate attacked images using different techniques, including the Fast Gradient Sign Method (FGSM), iterative FGSM (I-FGSM), Momentum Iterative FGSM (MI-FGSM), DeepFool, Carlini and Wagner (C&W), Diverse Input Iterative FGSM (DIFGSM) and Momentum Diverse Input Iterative FGSM (MDIFGSM). We use publicly available implementations of these methods: Cleverhans (https://github.com/tensorflow/cleverhans), Foolbox (https://github.com/bethgelab/foolbox) and the code provided by [30, 33] (https://github.com/dongyp13/Non-Targeted-Adversarial-Attacks, https://github.com/cihangxie/DI-2-FGSM). For FGSM, we generate attacked images at several perturbation strengths $\epsilon$; for iterative attacks, the maximum perturbation size is restricted to a fixed $\epsilon$ budget. All attacks are carried out in white-box settings, since adversarial attacks are less transferable for larger datasets like ImageNet.
Defenses: We compare our proposed defense with a number of recently introduced state-of-the-art schemes in the literature. These include JPEG Compression, Random Resizing and Padding, image quilting + total variance minimization, and Pixel Deflection (PD). We use publicly available implementations of these methods (https://github.com/poloclub/jpeg-defense, https://github.com/cihangxie/NIPS2017_adv_challenge_defense, https://github.com/facebookresearch/adversarial_image_defenses, https://github.com/iamaaditya/pixel-deflection). All experiments are run on the same set of images and against the same attacks for a fair comparison.
For our experiments, we explore two broad categories of Single Image Super-Resolution (SISR) techniques: i) interpolation-based methods and ii) Deep Learning (DL) based methods. Interpolation-based methods like Nearest Neighbor (NN), bi-linear and bi-cubic upsampling are computationally efficient, but not particularly robust against stronger attacks (DIFGSM and MDIFGSM). Recently proposed DL-based methods have shown superior performance in terms of Peak Signal to Noise Ratio (PSNR), Structural Similarity Index (SSIM) and mean squared error (MSE). Here, we consider three DL-based SISR techniques: i) Super-Resolution using a ResNet model (SR-ResNet), ii) Enhanced Deep Residual Network for SISR (EDSR) and iii) Super-Resolution using Generative Adversarial Networks (SR-GAN). Our experiments show that EDSR consistently performs better. EDSR builds on a residual learning scheme that specifically focuses on high-frequency patterns in the images. Compared to the original ResNet, EDSR demonstrates substantial improvements by removing Batch Normalization layers (from each residual block) and ReLU activations (outside the residual blocks).
In this paper, we propose that clean and adversarial examples lie on different manifolds, and that super-resolving an image to a higher-dimensional space remaps the adversarial sample back to the natural image manifold. To validate this assumption, we fine-tune an ImageNet pre-trained Inception v-3 model as a binary classifier, using 10,000 pairs of clean and adversarial examples (generated with all the aforementioned attack techniques). We re-train the top two blocks while freezing the rest, with a learning rate reduced by a factor of 10. The global average pooling layer of the model is followed by a batch normalization layer, a drop-out layer and two dense layers (1024 and 1 nodes, respectively). Our model efficiently leverages the subtle difference between clean images and their adversarial counterparts and separates the two with very high accuracy (99.6%). To further validate our assumption about super-resolution, we test our defended images using this binary classifier. The classifier labels around 91% of the super-resolved images as clean, confirming that a vast majority of restored samples lie on the natural image manifold.
In Fig. 1, we plot the features extracted from the last layer of the binary classifier to visualize our manifold-assumption validation. We reduce the dimensionality of the features to 3 for visualization (retaining 90% of the variance) using Principal Component Analysis.
| Attack | No Defense | SR-ResNet | SR-GAN | EDSR |
Table I reports the destruction rates of various defense mechanisms on 5000 ILSVRC validation set images. The destruction rate is defined as the ratio of successfully defended images; a destruction rate of 100% implies that all images are correctly classified after applying the defense mechanism. It should be noted that we define destruction rates in terms of top-1 classification accuracy, which makes defending against attacks more challenging, since we have to recover the exact class label. 'No Defense' in Table I shows the model performance on the generated adversarial images; a lower value under 'No Defense' indicates a stronger attack. The results show that iterative attacks are better at fooling the model than single-step attacks. The iterative attacks, however, are less transferable and easier to defend against. Similarly, targeted attacks are easier to defend against than their non-targeted counterparts, as they tend to overfit the attacked model. Considering them weak attacks, we therefore only report the performance of our defense scheme against the more generic non-targeted attacks.
For the iterative attacks (C&W and DeepFool), both Random Resizing + Padding and PD achieve similar performance, successfully recovering about 90% of the images. In comparison, our proposed super-resolution based defense recovers about 96% of the images. For the single-step attack categories, Random Resizing + Padding fails to defend, as has also been noted in prior work; to overcome this limitation, an ensemble model with adversarial augmentation is used for defense. Compared with the JPEG compression based defense, our proposed method achieves a substantial performance gain of 31.1% for FGSM. In the single-step attack category (e.g., FGSM-10), our defense model outperforms Random Resizing + Padding and PD by large margins of 26.7% and 21.0%, respectively. For the recently proposed strong attack (MDIFGSM), all other defense techniques (JPEG compression, Random Resizing + Padding, Quilting + TVM and PD) largely fail, recovering only 1.3%, 5.8%, 1.7% and 21.9% of the images, respectively. In comparison, the proposed image super-resolution based defense successfully recovers a far larger share of the images.
We show a further performance comparison of our proposed defense with other methods on the NIPS-DEV dataset in Table II. Here, we only report results on Inception v-3, following the standard evaluation protocol per the competition’s guidelines. Inception v-3 is a stronger classifier, and we expect the results to generalize to other classifiers. Our experimental results in Table II show the superior performance of the proposed method.
Super-resolution Methods: Image super-resolution recovers off-the-manifold adversarial images by remapping them from the low-resolution space to the high-resolution space. This should hold for different super-resolution techniques in the literature. In Table III, we evaluate the effectiveness of three image super-resolution techniques: SR-ResNet, SR-GAN and EDSR. Specifically, attacked images are super-resolved without applying any wavelet denoising. Experiments are performed on the Inception v-3 classifier. The results in Table III show comparable performance across the evaluated super-resolution methods, demonstrating the effectiveness of super-resolution in recovering images.
Table V (column layout): for each of the three models (Inception v-3, ResNet-50 and Inception ResNet v-2), the columns report, per attack, the top-1 accuracy under No Defense, WD (wavelet denoising), SR (super-resolution) and WD+SR.
Besides state-of-the-art image super-resolution methods, we also document results on enhancing image resolution using interpolation-based techniques. For this, we perform experiments by resizing the images with Nearest Neighbor, Bi-linear and Bi-cubic interpolation. In Table IV, we report the results achieved by three different strategies: upsample, upsample + downsample, and downsample + upsample. The results show that, although the performance of simple interpolation based methods is inferior to the more sophisticated state-of-the-art super-resolution techniques in Table III, simple interpolation based image resizing is surprisingly effective and achieves some degree of defense against adversarial attacks.
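The three resizing strategies can be sketched in plain NumPy with nearest-neighbor interpolation (the paper also evaluates bilinear and bicubic variants, e.g. via Pillow or OpenCV; the scale factor 2 here is an assumption for illustration):

```python
# Sketch of the three interpolation-based strategies in Table IV,
# using nearest-neighbor resampling on an (H, W, C) image array.
import numpy as np

def upsample_nn(img: np.ndarray, s: int = 2) -> np.ndarray:
    """Nearest-neighbor upsampling by factor s (repeat each pixel)."""
    return img.repeat(s, axis=0).repeat(s, axis=1)

def downsample_nn(img: np.ndarray, s: int = 2) -> np.ndarray:
    """Nearest-neighbor downsampling (keep every s-th pixel)."""
    return img[::s, ::s]

img = np.random.rand(64, 64, 3)
up      = upsample_nn(img)                      # strategy 1: upsample
up_down = downsample_nn(upsample_nn(img))       # strategy 2: upsample + downsample
down_up = upsample_nn(downsample_nn(img))       # strategy 3: downsample + upsample
print(up.shape, up_down.shape, down_up.shape)
# → (128, 128, 3) (64, 64, 3) (64, 64, 3)
```

Note that with nearest-neighbor resampling, upsample + downsample returns the original image exactly; it is the interpolating variants (bilinear, bicubic) that resynthesize pixel values and thereby perturb the adversarial noise.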
Effect of Wavelet Denoising: Our proposed defense first applies wavelet denoising, which aims to minimize the effect of adversarial perturbations, followed by image super-resolution, which selectively introduces high-frequency components into an image (as seen in Fig. 3) and recovers off-the-manifold attacked images. Here we investigate the individual contribution of these two modules to defending against adversarial attacks. We perform extensive experiments on three classifiers: Inception v-3, ResNet-50 and Inception ResNet v-2. Table V shows the top-1 accuracy of each model under different adversarial attacks. The results show that, while wavelet denoising helps suppress the added adversarial noise, the major performance boost comes from image super-resolution; the best performance is achieved when wavelet denoising is followed by super-resolution. These empirical evaluations demonstrate that image super-resolution combined with wavelet denoising is a robust, model-agnostic defense against both iterative and non-iterative attacks.
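The denoising step can be illustrated with a minimal BayesShrink-style sketch (not the paper's code): soft-threshold the detail coefficients of a wavelet transform, with the threshold set from estimated noise and signal variances. Real pipelines would use a 2-D multi-level transform (e.g. PyWavelets or scikit-image's `denoise_wavelet` with `method='BayesShrink'`); here a single-level Haar transform of a 1-D signal keeps the example self-contained.

```python
# Minimal BayesShrink-style soft thresholding on a 1-D Haar transform.
import numpy as np

def haar_decompose(x):
    even, odd = x[0::2], x[1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)  # approx, detail

def haar_reconstruct(a, d):
    even, odd = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
    out = np.empty(a.size * 2)
    out[0::2], out[1::2] = even, odd
    return out

def bayes_shrink_denoise(x):
    a, d = haar_decompose(x)
    sigma = np.median(np.abs(d)) / 0.6745            # robust noise-std estimate
    sigma_x = np.sqrt(max(np.mean(d**2) - sigma**2, 1e-12))
    t = sigma**2 / sigma_x                           # BayesShrink threshold
    d = np.sign(d) * np.maximum(np.abs(d) - t, 0.0)  # soft-threshold details
    return haar_reconstruct(a, d)

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))
noisy = clean + 0.2 * rng.standard_normal(256)
den = bayes_shrink_denoise(noisy)
print(np.mean((den - clean)**2) < np.mean((noisy - clean)**2))
```

Because the smooth signal concentrates in the approximation band while the noise spreads evenly, thresholding the detail band lowers the reconstruction error relative to the noisy input.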
Hyper-parameters Selection: Unlike many existing defense schemes, which require computationally expensive model re-training and parameter optimization [16, 17, 18, 24], our proposed defense is training-free and does not require tuning a large set of hyper-parameters. It has only two: the super-resolution scale and the BayesShrink threshold coefficient. We perform a linear search over the scaling factor for one single-step (FGSM-2) and one iterative (C&W) attack on images randomly selected from the ILSVRC validation set; these experiments are performed on the Inception v-3 model. Table VI shows the classifier performance across different super-resolution scaling factors, and we select the scale with clearly superior performance. Higher scaling factors introduce significant high-frequency components into the image, which degrade performance. For the BayesShrink coefficient, we follow the choice made in prior work.
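The linear search over the scaling factor amounts to a one-dimensional grid search. A hypothetical sketch, where `evaluate_defense` stands in for running the full defense + classifier on a held-out adversarial set, and the accuracy numbers are toy values (not the paper's Table VI results) chosen only to mimic the reported trend of a peak at a moderate scale:

```python
# Hypothetical linear search over the super-resolution scale.
def linear_search(scales, evaluate_defense):
    """Return the scale with the highest top-1 accuracy, plus all scores."""
    results = {s: evaluate_defense(s) for s in scales}
    best = max(results, key=results.get)
    return best, results

# Toy accuracies: performance peaks at a moderate scale and degrades as
# larger scales inject too much high-frequency content.
toy_accuracy = {1: 0.52, 2: 0.91, 3: 0.84, 4: 0.77}
best, results = linear_search([1, 2, 3, 4], toy_accuracy.get)
print(best)  # → 2
```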
CAMs Visualization: Class Activation Mapping (CAM) is a weakly supervised localization technique that helps interpret the predictions of a CNN model by visualizing the discriminative regions in an image. CAMs are generated by replacing the last fully connected layer with a global average pooling (GAP) layer; a class-weighted average of the GAP outputs yields a heat map that localizes the discriminative regions responsible for the predicted class label. Figs. 5 and 6 show the CAMs for the top-1 prediction of the Inception v-3 model on clean, attacked and recovered image samples. It can be observed that mapping an adversarial image to a higher resolution destroys most of the noisy patterns, recovering CAMs similar to those of the clean images. Row 5 of Figs. 5 and 6 shows the perturbations added to the clean image samples. Super-resolving an image selectively adds high-frequency components that eventually help recover the model's attention towards the discriminative regions corresponding to the correct class labels (see Row 6, Figs. 5 and 6).
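The CAM computation described above reduces to a weighted sum of the final convolutional feature maps, using the classifier weights of the layer that follows GAP. A minimal sketch with illustrative shapes (512 channels, 7x7 maps, 1000 classes):

```python
# Minimal Class Activation Map: weight the final conv feature maps by
# the post-GAP classifier weights for the chosen class, then ReLU and
# normalize for display as a heat map.
import numpy as np

def class_activation_map(features: np.ndarray, fc_weights: np.ndarray,
                         class_idx: int) -> np.ndarray:
    """features: (C, H, W) final conv feature maps.
    fc_weights: (num_classes, C) weights of the layer after GAP."""
    cam = np.tensordot(fc_weights[class_idx], features, axes=1)  # (H, W)
    cam = np.maximum(cam, 0.0)        # keep positive class evidence
    if cam.max() > 0:
        cam /= cam.max()              # normalize to [0, 1] for visualization
    return cam

feats = np.random.rand(512, 7, 7)     # stand-in for conv features
w = np.random.rand(1000, 512)         # stand-in for classifier weights
cam = class_activation_map(feats, w, class_idx=3)
print(cam.shape)  # → (7, 7)
```

Upsampling this low-resolution heat map to the input size and overlaying it on the image gives the visualizations shown in Figs. 5 and 6.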
Adversarial perturbations can seriously compromise the security of deep learning based models. This can have wide repercussions, since the recent success of deep learning has led to these models being deployed in a broad range of important applications, from health-care to surveillance. Thus, designing robust defense mechanisms that can counter adversarial attacks without degrading performance on unperturbed images is essential. In this paper, we presented an image restoration scheme based on super-resolution that maps off-the-manifold adversarial samples back onto the natural image manifold. We showed that the primary reason super-resolution networks can negate the effect of adversarial noise is that they add high-frequency information to the input image. Our proposed defense pipeline is agnostic to the underlying model and attack type, does not require any learning, and operates equally well for black-box and white-box attacks. We demonstrated the effectiveness of the proposed defense against state-of-the-art defense schemes, which it outperformed by a considerable margin.
Computer vision and pattern recognition, 2016.
X. Zhu and A. B. Goldberg, “Introduction to semi-supervised learning,” Synthesis Lectures on Artificial Intelligence and Machine Learning, vol. 3, no. 1, pp. 1–130, 2009.