
Ensemble Generative Cleaning with Feedback Loops for Defending Adversarial Attacks

Effective defense of deep neural networks against adversarial attacks remains a challenging problem, especially under powerful white-box attacks. In this paper, we develop a new method called ensemble generative cleaning with feedback loops (EGC-FL) for effective defense of deep neural networks. The proposed EGC-FL method is based on two central ideas. First, we introduce a transformed deadzone layer into the defense network, which consists of an orthonormal transform and a deadzone-based activation function, to destroy the sophisticated noise pattern of adversarial attacks. Second, by constructing a generative cleaning network with a feedback loop, we are able to generate an ensemble of diverse estimations of the original clean image. We then learn a network to fuse this set of diverse estimations together to restore the original image. Our extensive experimental results demonstrate that our approach improves the state of the art by large margins in both white-box and black-box attacks. It significantly improves the classification accuracy for white-box PGD attacks upon the second best method by more than 29% on the SVHN dataset and more than 39% on the challenging CIFAR-10 dataset.


1 Introduction

Researchers have recognized that deep neural networks are sensitive to adversarial attacks [32]. Very small changes to the input image can fool a state-of-the-art classifier with very high probability. Attackers often generate noise patterns by exploiting the specific architecture of the target deep neural network, so that a small perturbation at the input layer accumulates along the inference layers, eventually exceeds the decision threshold at the output layer, and results in a false decision. On the other hand, we know that a well-trained deep neural network is robust to random noise [1], such as Gaussian noise. Therefore, the key issue in network defense is to destroy the sophisticated pattern or accumulative process of the attack noise while preserving the original image content and the network's classification performance.

During the past few years, a number of methods have been proposed to construct adversarial samples that attack deep neural networks, including the fast gradient sign (FGS) method [9], the Jacobian-based saliency map attack (J-BSMA) [25], and the projected gradient descent (PGD) attack [18, 20]. Different classifiers can be fooled by the same adversarial attack method [32]. The fragility of deep neural networks and the availability of these powerful attack methods create an urgent need for effective defense methods. Meanwhile, a number of defense methods have also been developed, including adversarial training [18, 32], defensive distillation [26, 4, 27], MagNet [21], and feature squeezing [13, 41]. It has been recognized that these methods suffer from significant performance degradation under strong attacks, especially white-box attacks with large magnitude and many iterations [29].

Figure 1: Illustration of the proposed ensemble generative cleaning with feedback loops for defending adversarial attacks.

In this work, we explore a new approach, called ensemble generative cleaning with feedback loops (EGC-FL), to defend deep neural networks against powerful adversarial attacks. Our approach is motivated by the following observations: (1) adversarial attacks have sophisticated noise patterns that should be disturbed or destroyed during the defense process; and (2) the attack noise, especially from powerful white-box attacks such as the PGD and BPDA attacks [2], is often generated by an iterative process. To clean it, we also need an iterative process with multiple rounds of cleaning to achieve an effective defense.

Motivated by these observations, our proposed EGC-FL approach first introduces a transformed deadzone (TDZ) layer into the defense network, which consists of an orthonormal transform and a deadzone-based activation function, to destroy the sophisticated noise pattern of adversarial attacks. Second, it introduces a new network structure with feedback loops, as illustrated in Figure 1, into the generative cleaning network. This feedback loop network allows us to remove the residual attack noise and recover the original image content in an iterative fashion. Specifically, over multiple feedback iterations, the EGC-FL network generates an ensemble of cleaned estimations of the original image. Accordingly, we also learn an accumulative image fusion network which is able to fuse each new estimation with the existing result in an iterative fashion. According to our experiments, this feedback and iterative process converges very fast, often within 2 to 4 iterations. Our extensive experimental results on benchmark datasets demonstrate that our EGC-FL approach improves the state of the art by large margins in both white-box and black-box attacks. It significantly improves the classification accuracy for white-box PGD attacks upon the second best method by more than 29% on the SVHN dataset and more than 39% on the challenging CIFAR-10 dataset.

The major contributions of this work can be summarized as follows. (1) We introduce a transformed deadzone layer into the defense network to effectively destroy the noise pattern of adversarial attacks. (2) We develop a new network structure with feedback loops to remove adversarial attack noise and recover the original image content in an iterative manner. (3) We successfully learn an accumulative image fusion network which is able to fuse the incoming sequence of cleaned estimations and recover the original image in an iterative manner. (4) Our new method significantly improves upon the performance of state-of-the-art methods in the literature under a wide variety of attacks.

The rest of this paper is organized as follows. Section 2 reviews related work. The proposed EGC-FL method is presented in Section 3. Experimental results, performance comparisons with existing methods, and ablation studies are provided in Section 4. Section 5 concludes the paper.

2 Related work

In this section, we review related work on adversarial attack and network defense methods, which are two tightly coupled research topics. The goal of attack algorithm design is to defeat all existing network defense methods, while the goal of defense algorithms is to protect deep neural networks against all existing adversarial attack methods.

(A) Attack methods. Attack methods can be divided into two threat models: white-box attacks and black-box attacks. The white-box attacker has full access to the classifier's network parameters, network architecture, and weights. The black-box attacker has no knowledge of or access to the target network. For white-box attacks, a simple and fast approach called the Fast Gradient Sign (FGS) method was developed by Goodfellow et al. [9], which uses error back-propagation to directly modify the original image. Kurakin et al. [18] apply FGS iteratively and propose the Basic Iterative Method (BIM). Carlini et al. [4] designed an optimization-based attack method, called the Carlini-Wagner (CW) attack, which is able to fool the target network with the smallest perturbation. Xiao et al. [37] trained a generative adversarial network (GAN) [10] to generate perturbations. Kannan et al. [17] found that Projected Gradient Descent (PGD) is the strongest among all attack methods; it can be viewed as a multi-step variant of FGS [20]. Athalye et al. [2] introduced a method, called Backward Pass Differentiable Approximation (BPDA), to attack networks where gradients are not available. It iteratively computes the adversarial gradient on the defense results and is able to successfully attack all existing state-of-the-art defense methods. For black-box attacks, the attacker has no knowledge about the target classifier. Papernot et al. [24] introduced the first approach for black-box attacks using a substitute model. Dong et al. [8] proposed a momentum-based iterative algorithm to improve the transferability of adversarial examples. Xie et al. [40] boosted the transferability of adversarial examples by creating diverse input patterns.
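For concreteness, a minimal PyTorch sketch of such an iterative attack (FGS steps projected back onto an epsilon-ball, i.e., PGD) is shown below; the model handle and the eps, step, and iters values are illustrative placeholders rather than the settings used in this paper, and toolboxes such as AdverTorch provide reference implementations.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, step=2/255, iters=10):
    # Minimal L-infinity PGD sketch: repeated FGS steps, each projected
    # back into the eps-ball around the clean input x.
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step * grad.sign()            # FGS step
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project to the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)                          # keep a valid pixel range
    return x_adv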

Figure 2: Framework of the proposed ensemble generative cleaning network for defending adversarial attacks.

(B) Defense methods. Several approaches have recently been proposed for defending against both white-box and black-box attacks. Adversarial training trains the target model using adversarial examples [32, 9]. Madry et al. [20] suggested that training with adversarial examples generated by PGD improves robustness. Meng and Chen [21] proposed MagNet, which detects perturbations and then reshapes the input according to the difference between clean and adversarial examples. Several defense methods based on GANs have also been developed. Samangouei et al. [29] projected adversarial examples onto a trained generative adversarial network (GAN) to approximate the input with a generated clean image. More recently, some defense methods have been developed based on input transformations. Guo et al. [11] proposed several input transformations to defend against adversarial examples, including image cropping and re-scaling, bit-depth reduction, and JPEG compression. Xie et al. [38] proposed to defend against adversarial attacks by adding a randomization layer, which randomly re-scales the image and then randomly zero-pads it. Jia et al. [15] proposed an image compression framework, called ComDefend, to defend against adversarial examples. Xie et al. [39] introduced a feature denoising method for defending against PGD white-box attacks.

3 The Proposed Method

In this section, we present our method of ensemble generative cleaning with feedback loops for defending adversarial attacks.

3.1 Overview

As illustrated in Figure 1, our proposed method of ensemble generative cleaning with feedback loops (EGC-FL) for defending adversarial attacks is based on two main ideas: (1) we introduce a transformed deadzone layer into the cleaning network to destroy the sophisticated noise patterns of adversarial attacks. (2) We introduce a generative cleaning network with a feedback loop to generate a sequence of diverse estimations of the original image, which will be fused in an accumulative fashion to restore the original image.

Figure 2 shows a more detailed framework of the proposed EGC-FL method. The attacked image $x^*$ is first pre-processed by a convolutional layer and then passed to the transformed deadzone (TDZ) layer, which aims to destroy the sophisticated noise patterns of the adversarial attacks. To remove the residual attack noise in the TDZ output $\bar{x}$ and recover the original image content $x$, the generative cleaning network $G$ generates a series of estimations of the original image using a feedback loop. The feedback network consists of three converter networks, $C_1$, $C_2$, and $C_3$, which are fully convolutional layers. These converter networks normalize the output features from different networks before they are concatenated or fused together. At the $k$-th feedback loop, let $\hat{x}_k$ be the output of the generative cleaning network $G$. We concatenate the output $\hat{x}_k$ and the TDZ output $\bar{x}$ after they are normalized by the converter networks $C_1$ and $C_2$, respectively. The concatenated feature map is then normalized by the converter $C_3$ before being fed back into the generative cleaning network $G$ to produce the next output $\hat{x}_{k+1}$. This feedback loop is summarized by the following formula:

$$\hat{x}_{k+1} = G\!\left(\, C_3\!\left[\, C_1(\hat{x}_k) \oplus C_2(\bar{x}) \,\right] \,\right) \qquad (1)$$

where $\oplus$ represents the cascade (concatenation) operation. This ensemble generative cleaning network with feedback will generate a series of cleaned versions $\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_K$, representing a diverse set of estimations of the original image $x$. To recover the original image $x$, we introduce an accumulative image fusion network $F$, which operates as follows:

$$\tilde{x}_{k+1} = F\!\left(\hat{x}_{k+1},\, \tilde{x}_k\right) \qquad (2)$$

Specifically, the inputs to the fusion network $F$ are two images: $\hat{x}_{k+1}$, which is the current output from the generative cleaning network $G$, and $\tilde{x}_k$, which is the current fused image produced by $F$. The generative cleaning network is separated from the accumulative fusion network so that the generative network can generate multiple estimations of the original image, and the fusion network can then fuse them together. In other words, the output of $F$ is fed back to itself as the input for the next round of fusion. All networks, including the pre-processing convolution, the generative cleaning network, the converter networks, and the accumulative fusion network, are learned from our training data, as explained in more detail in the following sections.
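To make the data flow concrete, the following PyTorch-style sketch traces one inference pass through the pipeline described above; the module names (conv_pre, tdz, G, C1, C2, C3, F_fuse), the initialization of the first estimate, and the default of K = 3 loops are assumptions made for illustration, not the exact architecture.

import torch

def egc_fl_defense(x_attacked, conv_pre, tdz, G, C1, C2, C3, F_fuse, K=3):
    # conv_pre: pre-processing convolution; tdz: transformed deadzone layer;
    # G: generative cleaning network; C1, C2, C3: converter networks;
    # F_fuse: accumulative image fusion network; K: number of feedback loops.
    x_bar = tdz(conv_pre(x_attacked))   # first-round removal of the attack noise
    x_hat, z = x_bar, x_bar             # initial estimate and fused image (assumed)
    for _ in range(K):
        # Eq. (1): normalize, concatenate, and feed back into the cleaning network
        x_hat = G(C3(torch.cat([C1(x_hat), C2(x_bar)], dim=1)))
        # Eq. (2): accumulative fusion of the new estimate with the current result
        z = F_fuse(x_hat, z)
    return z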

Figure 3: Activation function for the TDZ layer.

3.2 Transformed Deadzone Layer

The goal of the transformed deadzone layer in our defense network is to destroy the noise pattern and perform the first round of removal of the adversarial attack noise. Let $x$ be the original image and $x(m,n)$ be its pixel at location $(m,n)$. The attacked image is given by $x^* = x + \mathbf{e}$, where $\mathbf{e}$ is the adversarial attack noise with magnitude $\epsilon$; its value $\mathbf{e}(m,n)$ at pixel location $(m,n)$ is a random variable with maximum magnitude $\epsilon$, i.e., $|\mathbf{e}(m,n)| \le \epsilon$. In the spatial domain, it is very challenging to separate the attack noise from the original image content, since the attacked image and the original image are perceptually very similar to each other.

To address this issue, we propose to first transform the image using a de-correlating, energy-compacting orthonormal transform $T$. One choice of this transform is the blockwise discrete cosine transform (DCT) [34]. After this transform, the energy of the original image is aggregated onto a small fraction of transform coefficients, with the remaining coefficients being very close to zero. We then pass the transformed image through the deadzone activation function $\Phi_\tau(\cdot)$ shown in Figure 3. Here, $\Phi_\tau(u) = 0$ if $|u| \le \tau$; otherwise, $\Phi_\tau(u) = u$. Since the transform is linear, the transformed image after the deadzone activation is given by

$$\bar{x} = \Phi_\tau\!\left[\, T\, x^* \,\right] \qquad (3)$$
$$\;\;\; = \Phi_\tau\!\left[\, T\,(x + \mathbf{e}) \,\right] \qquad (4)$$
$$\;\;\; = \Phi_\tau\!\left[\, T\,x + T\,\mathbf{e} \,\right]. \qquad (5)$$

Statistically, the attack noise $\mathbf{e}$ is white noise. After the transform, $T\,\mathbf{e}$ remains white noise. Notice that a vast majority of the coefficients in the transformed image $T\,x$ are very small. In this case, the deadzone activation function largely removes the transformed attack noise $T\,\mathbf{e}$. Meanwhile, the major image content, or energy, has been aggregated onto a small number of large-valued coefficients, which remain unchanged by the deadzone function. In this way, the energy-compacting transform helps protect the original image content from being damaged by the deadzone activation function during the removal of the attack noise. Certainly, it still causes some damage to the original image content, since the small transform coefficients are forced to zero. Figure 4 shows the energy of the attack noise before and after the TDZ layer, namely $\|\mathbf{e}\|^2$ and $\|\Phi_\tau[T\,x^*] - T\,x\|^2$, for 860 test images organized in 215 batches. We can see that the energy of the attack noise has been significantly reduced. Certainly, some parts of the original image content, especially high-frequency details, are also removed; these need to be recovered by the subsequent generative cleaning network.
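As a concrete illustration, the following NumPy/SciPy sketch applies a blockwise orthonormal DCT, the deadzone activation, and the inverse transform to a single-channel image; the threshold tau = 0.1 and the 8x8 block size are illustrative assumptions, not the settings used in the paper.

import numpy as np
from scipy.fftpack import dct, idct

def transformed_deadzone(image, tau=0.1, block=8):
    # Blockwise orthonormal 2-D DCT -> deadzone activation -> inverse DCT.
    out = np.zeros_like(image)
    h, w = image.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = image[i:i + block, j:j + block]
            # energy-compacting orthonormal transform T
            coeff = dct(dct(patch, axis=0, norm='ortho'), axis=1, norm='ortho')
            # deadzone: zero out small coefficients, which carry most of the attack noise
            coeff[np.abs(coeff) <= tau] = 0.0
            # return to the spatial domain
            out[i:i + block, j:j + block] = idct(idct(coeff, axis=0, norm='ortho'),
                                                 axis=1, norm='ortho')
    return out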

Figure 4: The energy of attack noise before and after the transformed TDZ for 860 test images from 215 batches.
Defense Methods Clean FGS PGD BIM C&W
No Defense 94.38% 31.89% 0.00% 0.00% 0.99%
Label Smoothing [36] 92.00% 54.00% (NA) 8.00% 2.00%
Feature Squeezing [41] 84.00% 20.00% (NA) 0.00% 78.00%
PixelDefend [30] 85.00% 70.00% (NA) 70.00% 80.00%
Adv. Network [35] 91.08% 72.81% 44.28% (NA) (NA)
Parametric Noise Injection (PNI) [14] 85.17% 56.51% 49.07% (NA) (NA)
Sparse Transformation Layer (STL) [31] 90.11% 87.15% (NA) 88.03% 89.04%
Our Method 91.65% 88.51% 88.61% 88.75% 90.03%
Gain - +1.36% +39.54% +0.72% +0.99%
Table 1: Performance of our method (classification accuracy after defense) against white-box attacks on the CIFAR-10 dataset ( = ). Entries marked (NA) indicate that the corresponding method did not report results for that attack.

3.3 Learning the Ensemble Generative Cleaning Network

In our defense method design, the generative cleaning network $G$, the feedback loop, and the accumulative fusion network $F$ are jointly trained. The goal of our method is three-fold: (1) the generative cleaning network needs to make sure that the original image content is largely recovered; (2) the feedback loop needs to successfully remove the residual attack noise; and (3) the accumulative fusion network $F$ needs to iteratively recover the original image content. To achieve these three goals, we formulate the following generative loss function for training the networks:

$$\mathcal{L}_G = \mathcal{L}_{per} + \lambda \cdot \mathcal{L}_{adv} + \lambda \cdot \mathcal{L}_{ce}, \qquad (6)$$

where $\mathcal{L}_{per}$ is the perceptual loss, $\mathcal{L}_{adv}$ is the adversarial loss, and $\mathcal{L}_{ce}$ is the cross-entropy loss; $\lambda$ is a weighting parameter, which we set to 1/3 in our experiments. To define the perceptual loss, the $\ell_2$-norm between the recovered image and the original image is used [16]. In this work, we observe that a small adversarial perturbation often leads to very substantial noise in the feature maps of the network [39]. Motivated by this, we use a pre-trained VGG-19 network, denoted by $\Psi(\cdot)$, to generate visual features for the recovered image $\tilde{x}$ and the original image $x$, and use their feature difference as the perceptual loss $\mathcal{L}_{per}$. Specifically,

$$\mathcal{L}_{per} = \left\| \Psi(\tilde{x}) - \Psi(x) \right\|_2. \qquad (7)$$
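A minimal PyTorch sketch of such a VGG-based perceptual loss is given below; truncating the torchvision VGG-19 at an intermediate ReLU layer (its first 16 modules here) is an illustrative choice and not necessarily the layer used in the paper, and the weights API assumes torchvision 0.13 or later.

import torch
import torch.nn as nn
from torchvision.models import vgg19, VGG19_Weights

class PerceptualLoss(nn.Module):
    # L2 distance between fixed VGG-19 feature maps of the recovered and clean images.
    def __init__(self):
        super().__init__()
        self.features = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features[:16].eval()
        for p in self.features.parameters():
            p.requires_grad = False   # the feature extractor stays frozen

    def forward(self, x_recovered, x_clean):
        return torch.norm(self.features(x_recovered) - self.features(x_clean), p=2)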

The adversarial loss $\mathcal{L}_{adv}$ aims to train the generative cleaning network and the feedback loop so that the recovered images will be correctly classified by the target network. It is formulated as

$$\mathcal{L}_{adv} = -\,\mathbb{E}\!\left[\log \mathcal{C}_y(\tilde{x})\right], \qquad (8)$$

where $\mathcal{C}_y(\tilde{x})$ denotes the probability that the target classifier $\mathcal{C}$ assigns to the true label $y$ of the recovered image $\tilde{x}$.

We train our accumulative fusion network $F$, along with the generative cleaning network $G$, to optimize the following loss function:

$$\mathcal{L}_F = \mathcal{L}_{per} + \lambda \cdot \mathcal{L}_{ce}. \qquad (9)$$

Here, $\mathcal{L}_{ce}$ represents the cross-entropy between the output generated by the generative network and the target label for clean images. With the above loss functions, our ensemble generative cleaning network learns to iteratively recover attacked images.

The accumulative fusion network $F$ acts as a multi-image restoration network for reconstructing the original image. Cascaded with the generative cleaning network $G$, it guides the training of $G$ and the feedback loop network through back-propagation of gradients from its own network, aiming to minimize the above loss functions. In our design, during the adversarial learning process, the target classifier is called to determine whether the recovered image is clean or not, as illustrated in Figure 2. The output of $F$ is fed back to itself as the input to enhance the next round of fusion.
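To show how these pieces might fit together in a single optimization step, here is a schematic PyTorch training step; the way attacked and clean batches are combined, the frozen target classifier, and the concrete forms of the adversarial and cross-entropy terms are assumptions layered on the description above, not the authors' exact training procedure.

import torch
import torch.nn.functional as F

def training_step(x_clean, x_attacked, y, defense, classifier, perceptual, optimizer, lam=1/3):
    # defense: the full EGC-FL network (pre-conv, TDZ, G, converters, fusion F);
    # classifier: the target classifier (kept frozen); perceptual: VGG-based feature loss.
    optimizer.zero_grad()
    x_rec = defense(x_attacked)                             # recover the attacked image
    x_rec_clean = defense(x_clean)                          # defended output for a clean input
    loss_per = perceptual(x_rec, x_clean)                   # feature-space distance, Eq. (7)
    loss_adv = F.cross_entropy(classifier(x_rec), y)        # recovered image should classify correctly
    loss_ce = F.cross_entropy(classifier(x_rec_clean), y)   # clean inputs must stay correctly classified
    loss = loss_per + lam * loss_adv + lam * loss_ce        # weighted combination as in Eq. (6)
    loss.backward()                                         # gradients flow back through the defense
    optimizer.step()
    return loss.item()

Because the feedback loops are unrolled inside the defense network, these gradients also reach the converter networks and the accumulative fusion network, which matches the joint training described above.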

4 Experimental Results

In this section, we implement and evaluate our EGC-FL defense method and compare its performance with state-of-the-art defense methods under a wide variety of attacks, with both white-box and black-box attack modes.

4.1 Experimental Settings

Our experiments are implemented on the PyTorch platform [28]. The attacks are implemented with the AdverTorch toolbox [7] in both white-box and black-box modes, including the BPDA attack [2]. We choose the CIFAR-10 and SVHN (Street View House Numbers) datasets for performance evaluation and comparison, since most recent papers report results on these two datasets. The CIFAR-10 dataset consists of 60,000 32×32 images in 10 classes. The SVHN dataset [23] has about 200K images of street numbers. For each of these two datasets, a classifier is independently trained on its training set, and the test set is used for evaluation.

Defense Methods Accuracy
Thermometer Encodings (TE) [3]
Stochastic Activation Pruning (SAP) [6] 0.00%
Local Intrinsic Dimensionality (LID) [19] 5.00%
PixelDefend [30]
Cascade Adv. Training (ε = 0.015) [22]
PGD Adv. Training [20]
Sparse Transformation Layer (STL) [31]
Our Method
Gain +38.77%
Table 2: BPDA attack results on CIFAR-10 dataset. Results with are achieved with additional adversarial training.
Defense Methods No Attack FGS PGD
No Defense 94.38% 63.21% 38.71%
Adv. PGD [33] 83.50% 57.73% 55.72%
Adv. Network [35] 91.32% 77.23% 74.04%
Our Method 91.65% 79.09% 82.78%
Gain - +1.86% +8.74%
Table 3: Performance of our method against black-box attacks on CIFAR-10 ( = ).

4.2 Results on the CIFAR-10 Dataset

We compare the performance of our defense method with state-of-the-art methods developed in the literature under five different white-box attacks: (1) the FGS attack [9], (2) the PGD attack [20], (3) the BIM attack [18], (4) the C&W attack [5], and (5) the BPDA attack [2]. Following [17] and [35], the white-box attackers generate adversarial perturbations bounded by the magnitude ε. The attacker's step size and number of attack iterations are fixed as the baseline setting.

We generate the perturbed images for training using PGD attacks and test against all attack methods. During training, we set the number of feedback iterations to 3; the perturbed images are used as the input, passing through our EGC-FL network for 3 iterations. During testing, the number of feedback iterations is flexible. In our white-box attack experiments, we unfold the feedback loops so that the attacker has full access to the end-to-end defense network, including the number of iterations.

(1) Defending against white-box attacks. Table 1 shows the image classification accuracy of 6 defense methods: (1) Label Smoothing [36], (2) Feature Squeezing [41], (3) PixelDefend [30], (4) Adversarial Network [35], (5) the PNI (Parametric Noise Injection) method [14], and (6) the STL (Sparse Transformation Layer) method [31]. The second column shows the classification accuracy when the input images are all clean. We can see that some methods, such as PixelDefend [30], Feature Squeezing [41], and PNI [14], degrade the classification accuracy on clean images. This implies that their defenses cause significant damage to the original images, or that they cannot accurately tell whether the input image is clean or attacked. Since our method has a strong reconstruction capacity, the ensemble of reconstructed images still preserves the useful information. The remaining four columns list the final image classification accuracy with different defense methods. For all four attacks, our method significantly outperforms existing methods. For example, for the powerful PGD attack, our method outperforms the Adv. Network and PNI methods by more than 39%.

(2) Defending against the BPDA attack. The Backward Pass Differentiable Approximation (BPDA) attack [2] is very challenging to defend against, since it iteratively strengthens the adversarial examples using gradient approximation tailored to the defense mechanism. BPDA also targets defenses in which the gradient does not optimize the loss; this is the case for our method, since the transformed deadzone layer is non-differentiable. Table 2 summarizes the defense results of our algorithm in comparison with seven other methods: (1) Thermometer Encodings (TE) [3], (2) Stochastic Activation Pruning (SAP) [6], (3) Local Intrinsic Dimensionality (LID) [19], (4) PixelDefend [30], (5) Cascade Adversarial Training [22], (6) PGD Adversarial Training [20], and (7) Sparse Transformation Layer (STL) [31]. We choose these methods for comparison because the original BPDA paper [2] reported results for them. We can see that our EGC-FL network is much more robust than the other defense methods on the CIFAR-10 dataset, outperforming the second best by more than 38%.

Defense Methods No Attack FGS PGD
No Defense 96.21% 50.36% 0.15%
M-PGD [20] 96.21% (NA) 44.40%
ALP [17] 96.20% (NA) 46.90%
Adv. PGD [33] 87.45% 55.94% 42.96%
Adv. Network [35] 96.21% 91.51% 37.97%
Our Method 94.00% 94.10% 76.67%
Gain - +2.59% +29.77%
Table 4: Performance of our method against white-box attacks on SVHN ( = ).

(3) Defending against black-box attacks. We generate the black-box adversarial examples using FGS and PGD attacks with a substitute model [24]. The substitute model is trained in the same way as the target classifier, using a ResNet-34 [12] structure. Table 3 shows the performance of our defense mechanism under black-box attacks on the CIFAR-10 dataset; the adversarial examples are constructed under the substitute model. We observe that the target classifier is much less sensitive to adversarial examples generated by FGS and PGD black-box attacks than to the white-box ones. But the powerful PGD attack is still able to decrease the overall classification accuracy to a rather low level, 38.71%. We compare our method with the Adversarial PGD [33] and Adversarial Network [35] methods. We include these two because they are the only ones that provide performance results on CIFAR-10 with black-box attacks. From Table 3, we can see that our method improves the accuracy by 8.74% over the state-of-the-art Adversarial Network method for the PGD attack.

4.3 Results on the SVHN Dataset

We evaluate our EGC-FL method on the SVHN dataset in comparison with four state-of-the-art defense methods: (1) M-PGD (mixed-minibatch PGD) [20], (2) ALP (Adversarial Logit Pairing) [17], (3) Adversarial PGD [33], and (4) Adversarial Network [35]. For the SVHN dataset, as in the existing methods [17, 35], we use a ResNet-18 [12] as the target classifier; its average classification accuracy on clean images is 96.21%. We use the same PGD attack parameters as in [17], including the total perturbation magnitude, the per-step perturbation magnitude, and the number of iterative steps.

(1) Defending against white-box attacks. Table 4 summarizes the experimental results and performance comparisons with those four existing defense methods. We can see that on this dataset the PGD attack is able to decrease the overall classification accuracy to an extremely low level, 0.15%. Our algorithm outperforms existing methods by a very large margin. For example, for the PGD attack, our algorithm outperforms the second best algorithm, ALP [17], by more than 29%. With the FGS attack, the iterative cleaning process produces image versions with more diversity than the clean image without attack noise, which helps reconstruct the original image.

Defense Methods No Attack FGS PGD
No Defense 96.21% 69.91% 67.66%
M-PGD [20] 96.21% (NA) 55.40%
ALP [17] 96.20% (NA) 56.20%
Adv. PGD [33] 87.45% 87.41% 83.23%
Adv. Network [35] 96.21% 91.48% 81.68%
Our Method 94.00% 94.03% 88.60%
Gain - +2.55% +5.37%
Table 5: Performance of our method against black-box attacks on SVHN ( = ).
Figure 5: The perturbed-data accuracy of ResNet-18 under adversarial attack on the CIFAR-10 dataset, (Top) versus the number of attack iterations and (Bottom) versus the perturbation magnitude.

(2) Defending against black-box attacks. We also perform experiments defending against black-box attacks on the SVHN dataset. Table 5 summarizes our experimental results with the powerful PGD attack and provides the comparison with those four methods. We can see that our approach outperforms the other methods by 2.55% for the FGS attack and by 5.37% for the PGD attack. From the above results, we can see that our proposed method is particularly effective for defense against strong attacks, for example, PGD attacks with many iteration steps and a large noise magnitude.

4.4 Ablation Studies and Algorithm Analysis

In this section, we provide in-depth ablation study results of our algorithm to further understand its capability.

(1) Defense against large-iteration and large-epsilon attacks. Figure 5 (Top) shows the performance results under large-iteration PGD and BPDA attacks. We can see that the large-iteration PGD attack significantly degrades the accuracy of the Vanilla Adversary Training (VAT) method [20] and the PNI (Parametric Noise Injection) method [14], as well as our method; but our method significantly outperforms the other two. In both cases, the perturbed-data accuracy saturates, without further drops, once the number of attack iterations becomes large. In Figure 5 (Top), we also include the performance of our method under large-iteration BPDA attacks, using the same baseline setting for the perturbation range. This result is not reported by other methods, so we could not include them for comparison. We can see that the BPDA attack is much more powerful, but our algorithm can still survive large-iteration BPDA attacks and largely maintain its defense performance.

Figure 5 (Bottom) shows comparison results against attacks with large perturbation magnitudes. We can see that our method significantly outperforms the VAT and PNI defense methods even when the magnitude of the adversarial noise is increased to large values under the PGD attack. We also include the performance of our method under large-epsilon BPDA attacks. We can see that our method is robust under very powerful attacks of large magnitude.

(2) Analyzing the impact of feedback loops. We notice that the feedback loop network plays an important role in the defense. In our method, the key parameter controlling the image quality is the number of feedback loops. We gradually increase this number and examine the classification accuracy of the fused image. Table 6 shows the performance (classification accuracy after defense) of our method on the CIFAR-10 dataset under various attacks, where Gen denotes the number of feedback loops. We can see that 3 or 4 feedback loops yield the best performance. One feedback loop does not provide an effective defense, since the EGC-FL network is not yet able to fully destroy the attack noise pattern and restore the useful information. Once the key features of the original image have been reconstructed, the classification accuracy becomes stable and maintains the highest performance, although the image quality may improve further with accumulative fusion. In Figure 6, we show sample images from CIFAR-10 when our method is applied. The first column is the clean image without attack, and the second column is the attacked image. The third to last columns are the reconstructed images of four generations produced by our EGC-FL method. We can see that our algorithm is able to remove the attack noise and largely recover the original image content.

Attack Method Gen1 Gen2 Gen3 Gen4
FGS 57.64% 78.04% 78.15% 78.31%
PGD 78.46% 85.36% 86.25% 86.55%
BPDA 19.40% 79.12% 79.28% 79.79%
Table 6: Performance of our method with feedback loops under adversarial attacks on CIFAR-10 dataset.
Figure 6: Adversarial images and their fused image produced by our method.
Defense Methods FGS PGD BPDA
Our Method (Full Alg.) 88.51% 88.04% 85.77%
 - Without Transform 79.32% 79.35% 79.62%
 - Without Feedback 77.12% 78.46% 19.37%
Table 7: Performance analysis of algorithm components.

(3) In-depth analysis of major algorithm components. In the following ablation studies, we perform an in-depth analysis of the major components of our EGC-FL algorithm: the transform, the deadzone, and the EGC network with feedback loops. In Table 7, the first row shows the classification accuracy of images after defense with our proposed EGC-FL method (full algorithm) on the CIFAR-10 dataset under FGS, PGD, and BPDA attacks. The second row shows the results without the transform; the accuracy drops by about 7-9%. The transform module is important because it protects the original content from being damaged by the deadzone activation function, by aggregating the energy of the original image into a small number of large transform coefficients. The third row shows the results without the feedback loop; the accuracy drops by 10-11% under the FGS and PGD attacks. For the powerful BPDA attack, the drop is very dramatic, about 66%. With multiple feedback loops for progressive attack noise removal and original image reconstruction, the defense performance improves significantly, especially under powerful BPDA attacks.

(4) Visualizing the defense process. Network defense is essentially a denoising process on the feature maps. To further understand how the proposed EGC-FL method works, we visualize the feature maps of the original, attacked, and EGC-FL-cleaned images. We use the feature map from the activation layer that is third from the last layer in the network. Figure 7 shows two examples. In the first example, the first row shows the original image (classified as terrapin), its gradient-weighted class activation heatmap, and the heatmap overlaid on the original image. The heatmap shows which parts of the image the classification network is paying attention to. The second row shows the attacked image (classified as cobra), its heatmap, and the heatmap overlaid on the attacked image. We can see that the feature map is very noisy and the heatmap is distorted. The third row shows the EGC-cleaned image. We can see that both the feature map and the heatmap have been largely restored.

Figure 7: Each pair of examples shows feature maps corresponding to clean images (top), their adversarially perturbed images (middle), and their reconstructed images (bottom).

5 Conclusion

We have developed a new method for defending deep neural networks against adversarial attacks based on the EGC-FL network. This network is able to recover the original image while cleaning up the residual attack noise. We introduced a transformed deadzone layer into the defense network, which consists of an orthonormal transform and a deadzone-based activation function, to destroy the sophisticated noise pattern of adversarial attacks. By constructing a generative cleaning network with a feedback loop, we are able to generate an ensemble of diverse estimations of the original clean image. We then learned a network to fuse this set of diverse estimations together to restore the original image. Our extensive experimental results demonstrate that our approach outperforms state-of-the-art methods by large margins in both white-box and black-box attacks. Our ablation studies demonstrate that the major components of our method, the transformed deadzone layer and the ensemble generative cleaning network with feedback loops, are both critical and contribute significantly to the overall performance.

Acknowledgement

This work was supported in part by National Science Foundation under grants 1647213 and 1646065. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

References

  • [1] M. Arjovsky, S. Chintala, and L. Bottou (2017) Wasserstein GAN. arXiv preprint arXiv:1701.07875.
  • [2] A. Athalye, N. Carlini, and D. Wagner (2018) Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples. In Proceedings of the 35th International Conference on Machine Learning, pp. 274-283.
  • [3] J. Buckman, A. Roy, C. Raffel, and I. Goodfellow (2018) Thermometer encoding: one hot way to resist adversarial examples. In International Conference on Learning Representations.
  • [4] N. Carlini and D. Wagner (2016) Defensive distillation is not robust to adversarial examples. arXiv preprint arXiv:1607.04311.
  • [5] N. Carlini and D. Wagner (2017) Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP).
  • [6] G. S. Dhillon, K. Azizzadenesheli, J. D. Bernstein, J. Kossaifi, A. Khanna, Z. C. Lipton, and A. Anandkumar (2018) Stochastic activation pruning for robust adversarial defense. In International Conference on Learning Representations.
  • [7] G. W. Ding, L. Wang, and X. Jin (2019) AdverTorch v0.1: an adversarial robustness toolbox based on PyTorch. arXiv preprint arXiv:1902.07623.
  • [8] Y. Dong, F. Liao, T. Pang, H. Su, J. Zhu, X. Hu, and J. Li (2018) Boosting adversarial attacks with momentum. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
  • [9] I. J. Goodfellow, J. Shlens, and C. Szegedy (2014) Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
  • [10] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680.
  • [11] C. Guo, M. Rana, M. Cisse, and L. van der Maaten (2018) Countering adversarial images using input transformations. In International Conference on Learning Representations.
  • [12] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [13] W. He, J. Wei, X. Chen, N. Carlini, and D. Song (2017) Adversarial example defense: ensembles of weak defenses are not strong. In 11th USENIX Workshop on Offensive Technologies (WOOT 17).
  • [14] Z. He, A. S. Rakin, and D. Fan (2019) Parametric noise injection: trainable randomness to improve deep neural network robustness against adversarial attack. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 588-597.
  • [15] X. Jia, X. Wei, X. Cao, and H. Foroosh (2019) ComDefend: an efficient image compression model to defend adversarial examples. CoRR abs/1811.12673.
  • [16] J. Johnson, A. Alahi, and L. Fei-Fei (2016) Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, pp. 694-711.
  • [17] H. Kannan, A. Kurakin, and I. Goodfellow (2018) Adversarial logit pairing. arXiv preprint arXiv:1803.06373.
  • [18] A. Kurakin, I. Goodfellow, and S. Bengio (2016) Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236.
  • [19] X. Ma, B. Li, Y. Wang, S. M. Erfani, S. Wijewickrema, G. Schoenebeck, D. Song, M. E. Houle, and J. Bailey (2018) Characterizing adversarial subspaces using local intrinsic dimensionality. arXiv preprint arXiv:1801.02613.
  • [20] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu (2018) Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations.
  • [21] D. Meng and H. Chen (2017) MagNet: a two-pronged defense against adversarial examples. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 135-147.
  • [22] T. Na, J. H. Ko, and S. Mukhopadhyay (2018) Cascade adversarial machine learning regularized with a unified embedding. In International Conference on Learning Representations.
  • [23] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Ng (2011) Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning.
  • [24] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami (2017) Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pp. 506-519.
  • [25] N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami (2016) The limitations of deep learning in adversarial settings. pp. 372-387.
  • [26] N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. Swami (2016) Distillation as a defense to adversarial perturbations against deep neural networks. In 2016 IEEE Symposium on Security and Privacy (SP), pp. 582-597.
  • [27] N. Papernot and P. McDaniel (2016) On the effectiveness of defensive distillation. arXiv preprint arXiv:1607.05113.
  • [28] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer (2017) Automatic differentiation in PyTorch.
  • [29] P. Samangouei, M. Kabkab, and R. Chellappa (2018) Defense-GAN: protecting classifiers against adversarial attacks using generative models. arXiv preprint arXiv:1805.06605.
  • [30] Y. Song, T. Kim, S. Nowozin, S. Ermon, and N. Kushman (2018) PixelDefend: leveraging generative models to understand and defend against adversarial examples. In International Conference on Learning Representations.
  • [31] B. Sun, N. Tsai, F. Liu, R. Yu, and H. Su (2019) Adversarial defense by stratified convolutional sparse coding. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [32] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus (2013) Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.
  • [33] F. Tramèr, A. Kurakin, N. Papernot, I. Goodfellow, D. Boneh, and P. McDaniel (2018) Ensemble adversarial training: attacks and defenses. In International Conference on Learning Representations.
  • [34] G. K. Wallace (1992) The JPEG still picture compression standard. IEEE Transactions on Consumer Electronics 38(1), pp. xviii-xxxiv.
  • [35] H. Wang and C. Yu (2019) A direct approach to robust deep learning using adversarial networks. In International Conference on Learning Representations.
  • [36] D. Warde-Farley (2016) Adversarial perturbations of deep neural networks. In Perturbations, Optimization, and Statistics, p. 311.
  • [37] C. Xiao, B. Li, J. Zhu, W. He, M. Liu, and D. Song (2018) Generating adversarial examples with adversarial networks. arXiv preprint arXiv:1801.02610.
  • [38] C. Xie, J. Wang, Z. Zhang, Z. Ren, and A. Yuille (2018) Mitigating adversarial effects through randomization. In International Conference on Learning Representations.
  • [39] C. Xie, Y. Wu, L. van der Maaten, A. Yuille, and K. He (2018) Feature denoising for improving adversarial robustness. arXiv preprint arXiv:1812.03411.
  • [40] C. Xie, Z. Zhang, Y. Zhou, S. Bai, J. Wang, Z. Ren, and A. Yuille (2018) Improving transferability of adversarial examples with input diversity. arXiv preprint arXiv:1803.06978.
  • [41] W. Xu, D. Evans, and Y. Qi (2017) Feature squeezing: detecting adversarial examples in deep neural networks. arXiv preprint arXiv:1704.01155.