As the applications of deep learning expand, DNNs are more likely to be used in security-sensitive systems, such as medical imaging, autonomous driving, and surveillance systems. The reliability and robustness of deep learning for these applications is essential and cannot be ignored. A DNN, like many other systems, can be attacked purposefully using carefully crafted methods. For example, results have shown that DNNs are vulnerable to a variety of adversarial attacks [4, 7, 9, 18, 24].
Adversarial attacks are designed to maliciously add small perturbations to the original input in order to fool the neural network into making incorrect predictions. For typical adversarial attacks, a key element is that the adversary knows in advance which classifier will be used and can design the perturbation using knowledge of the classifier. Of course, there is a trade-off between the size of the perturbation and its effectiveness. In many cases, the adversary can greatly increase the classification error rate by adding a perturbation that is almost imperceptible [3, 17].
In an abstract sense, one can model the adversarial classification problem as a two-player zero-sum game. In this game, the attacking player wins if the classifier makes an error and the defending player wins if the classifier is correct. A single point is awarded to the winner and no points are awarded to the loser. It is worth noting that the attacker has an advantage because their move depends on the true image while the defender’s move can only depend on the perturbed image. This is a game with imperfect information because the defender does not see the original image. In games of this type, it is well-known that both players may benefit from using randomized strategies. This is because, if one player fixes their strategy, then the other can always optimize against it. Although current adversarial models typically consider less complicated scenarios, this framework motivated us to choose a VAE due to its use of randomness. The asymmetry between the attack and defense in current adversarial models is also apparent in papers that design defense mechanisms for varied attacks [12, 27].
Adversarial training methods, which retrain the original neural network with additional adversarial examples, can learn to defend against specific attacks on which they are trained. But, as mentioned earlier, attackers still have an advantage because the cost of altering the attack is much lower than retraining with new adversarial examples. Therefore, researchers continue to search for a universal defense that performs well against a wide range of attacks. One example of an efficient low-cost defense is JPEG compression.
We propose a novel defense for adversarial attacks on image classification networks that uses a variational auto-encoder (VAE) to reconstruct the input image before classification by the targeted neural network model. This defense method does not modify the deployed network, nor does it depend on the particular attack chosen. Thus, it is universal.
Our defense strategy incorporates randomness via the random sampling process in the VAE. For large images, a patch-wise defense is used to reduce the training cost of the VAE.
The proposed method is flexible and has multiple tunable hyper-parameters. Even without retraining the VAE, the defense can be altered by modifying the reconstruction process. Experiments show that this defense is capable of matching the performance of other defenses based on JPEG compression. Due to its flexibility, however, it has more potential for improvement and integration with other methods.
2 Related Work
2.1 Adversarial Attacks
Consider a classification problem for images where there are $K$ classes labeled by $\{1, \dots, K\}$. For this problem, a neural network with parameters $\theta$ maps an input image $x \in \mathbb{R}^n$ to a vector $f_\theta(x) \in \mathbb{R}^K$, and the predicted class is the index of the largest value in $f_\theta(x)$. The parameters $\theta$ are chosen in a way that reduces the training loss
$$ \frac{1}{|\mathcal{T}|} \sum_{(x, c) \in \mathcal{T}} L\big(f_\theta(x), e_c\big), $$
where $\mathcal{T}$ is the set of training pairs, $e_c$ denotes the standard basis vector with a one in the $c$-th position, and $L(y', y)$ is the loss function associated with the neural network outputting $y'$ when the true one-hot class vector is $y$. Common choices are squared-error $L(y', y) = \|y' - y\|_2^2$ and cross-entropy $L(y', y) = -\sum_k y_k \log y'_k$.
For a particular input $x$ with true class $c$, an adversarial attack on this network adds a perturbation $\delta$ to create $\tilde{x} = x + \delta$ such that $\|\delta\|$ is small and the predicted class $\hat{c} \neq c$, where $\|\cdot\|$ is some norm on $\mathbb{R}^n$. A targeted attack chooses the value of $\hat{c}$ in advance while a non-targeted attack is free to choose any $\hat{c} \neq c$.
For an input $x$ with class label $c$, many attack methods are based on computing the gradient $\nabla_x L(f_\theta(x), e_c)$ to find perturbation directions that increase the loss function and, thus, cause the network to misclassify $\tilde{x}$. In this paper, we mainly focus on the following attack methods:
(Table 1: CNN classifier structures. The NIPS classifier is the pre-trained 1001-class Inception V3 network.)
Fast Gradient Sign Method (FGSM):
FGSM is a gradient-based single-step attack method. It is quite fast and it generates adversarial images by adding or subtracting a fixed amount from each pixel in the image. For each pixel, the sign of the perturbation is determined by the sign of the gradient of the loss function with respect to the image. For an image $x$, the adversarial image would be
$$ \tilde{x} = \mathrm{clip}\Big( x + \epsilon \, \mathrm{sign}\big( \nabla_x L(f_\theta(x), e_c) \big) \Big), $$
where $\mathrm{clip}(\cdot)$ clips a vector to minimum/maximum pixel values and $\mathrm{sign}(\cdot)$ computes the element-wise sign of the corresponding gradient.
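As a sketch, the FGSM update is only a few lines of NumPy. Here `loss_grad` stands in for the gradient of the classifier loss with respect to the image, which a real attack would obtain by backpropagation; the function name and the [0, 1] pixel range are illustrative assumptions, not part of the original method description.

```python
import numpy as np

def fgsm(x, loss_grad, eps, x_min=0.0, x_max=1.0):
    """Single-step FGSM: move each pixel by +/- eps along the loss-gradient sign.

    x         : input image as a float array
    loss_grad : gradient of the classifier loss w.r.t. x (same shape as x)
    eps       : perturbation size
    """
    x_adv = x + eps * np.sign(loss_grad)
    # Keep the adversarial image inside the valid pixel range.
    return np.clip(x_adv, x_min, x_max)
```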
Iterative FGSM (I-FGSM):
I-FGSM is based on repeating the FGSM attack $T$ times with $\tilde{x}_0 = x$ and step size $\epsilon / T$,
$$ \tilde{x}_{t+1} = \mathrm{clip}\Big( \tilde{x}_t + \tfrac{\epsilon}{T} \, \mathrm{sign}\big( \nabla_x L(f_\theta(\tilde{x}_t), e_c) \big) \Big). $$
During each iteration, $\tilde{x}_{t+1}$ is created by attacking $\tilde{x}_t$ with the FGSM method, with the perturbation size reduced by a factor of $T$. The adversarial input is the output of the final iteration.
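The iteration above can be sketched by wrapping the single-step update in a loop. Here `grad_fn` is a placeholder callable returning the loss gradient at its argument (an assumption for illustration; in practice it would call the network's backward pass):

```python
import numpy as np

def i_fgsm(x, grad_fn, eps, steps=10, x_min=0.0, x_max=1.0):
    """Iterative FGSM: apply the FGSM step `steps` times with step size eps/steps.

    grad_fn : callable mapping an image to the loss gradient w.r.t. that image
    """
    alpha = eps / steps
    x_adv = x.copy()
    for _ in range(steps):
        # One FGSM step with the reduced step size, re-clipped each iteration.
        x_adv = np.clip(x_adv + alpha * np.sign(grad_fn(x_adv)), x_min, x_max)
    return x_adv
```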
There are two primary methods of defense against adversarial attacks. The first type modifies the structure of the classifier and/or changes the training procedure [10, 19]. The second type does not change the classifier but instead focuses on modifying the input vector to mitigate attacks [6, 17, 20, 22]. The goal of the second type is to detect and/or remove adversarial perturbations before passing the data to the classifier. In this case, one needs to design a transformation $T$ that maps images to images and minimizes the effect of adversarial perturbations.
For an adversarial image $\tilde{x}$ with true class label $c$, the attack implies that $\hat{c}(\tilde{x}) \neq c$, where $\hat{c}(\cdot)$ denotes the class label estimated by the classifier. When the defense is activated, the system instead passes $T(\tilde{x})$ to the classifier and this results in the class vector $f_\theta(T(\tilde{x}))$ and the estimated class label $\hat{c}(T(\tilde{x}))$. If the defense is successful, then $\hat{c}(T(\tilde{x})) = c$ and the image is classified correctly.
As mentioned by Shaham et al., one good choice for $T$ is a basis transformation followed by scaling and/or quantization, which includes defenses such as low-pass filtering and JPEG compression. The success of these methods is attributed to the fact that they alter the image significantly and tend to reduce adversarial perturbations.
JPEG compression is a lossy image compression method that first applies the two-dimensional discrete cosine transform to image patches. Then, it quantizes the resulting coefficients (using more bits for lower frequencies) and uses lossless compression to compress the quantized values. We note that lossless compression plays no role in the adversarial defense and can be ignored in this application. There is a quality parameter that controls the amount of information loss during the compression. In our experiments, different quality parameters provide different defense performance depending on the attack strength.
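The lossy core of JPEG, the only part relevant to the defense, can be sketched as a blockwise DCT followed by coefficient quantization. The sketch below uses a single uniform quantization step `q` instead of JPEG's frequency-dependent quantization tables, so it illustrates the mechanism rather than the actual standard:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] /= np.sqrt(2.0)  # DC row scaling for orthonormality
    return c

def jpeg_like(block, q=10.0):
    """Transform an 8x8 block, quantize its DCT coefficients, and invert."""
    c = dct_matrix(block.shape[0])
    coeffs = c @ block @ c.T            # 2-D DCT
    coeffs = q * np.round(coeffs / q)   # uniform quantization (the lossy step)
    return c.T @ coeffs @ c             # inverse 2-D DCT
```

Larger `q` discards more coefficient detail, playing the role of a lower JPEG quality setting.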
(Table 2: Variational auto-encoder encoder structures for MNIST and CIFAR-10; the decoder structure for each model is symmetric, using transpose convolution and upsampling. Table 3: NIPS 2017 VAE encoder structures.)
3 Variational Auto-encoder (VAE)
A variational auto-encoder [14, 21] is a neural-network model that maps a high-dimensional feature vector to a lower-dimensional latent vector and then incorporates randomness before mapping it back to the original feature space. It can be seen as a standard auto-encoder, with an encoder and a decoder, but with a random sampling operation separating the two.
In our case, we use a VAE where the encoder determines the mean and variance of a Gaussian random vector that is then mapped back to an image by the decoder. In this case, the encoder is a convolutional neural network parametrized by $\phi$, denoted as $q_\phi$, which maps the input $x$ to the random latent vector $z$. The deterministic part of the encoder acts like a compressor by mapping the key features of the input to a lower-dimensional representation. The decoder is a de-convolutional neural network parametrized by $\psi$, denoted as $p_\psi$, which takes a vector $z$ as input and reconstructs an image $\hat{x}$ as the output. Auto-encoders are trained by optimizing a single loss function that measures a distance between input image $x$ and output image $\hat{x}$.
The key difference between a VAE and an ordinary auto-encoder is that, instead of the encoder neural network mapping directly to the latent space, it outputs the mean and variance, $\mu(x)$ and $\sigma^2(x)$, of the sampled latent vector $z$. Then,
$z$ is drawn from a Gaussian distribution with mean $\mu(x)$ and diagonal covariance matrix $\mathrm{diag}(\sigma^2(x))$. Finally, the sampled vector is passed into the decoder and used to produce the output image $\hat{x}$. Instead of trying to approximate the output image exactly, a VAE defines a conditional distribution that approximates the underlying continuous distribution of images. Thus, the VAE provides a generative model for images similar to the input image.
In order to learn the parameters of a VAE, the optimized loss function typically consists of two parts: the reconstruction loss and the KL-divergence loss. The first term penalizes the error between the reconstructed image and the input image while the second term penalizes mean/variance pairs for the latent variable that are far from a standard Gaussian vector. The loss function for the $i$-th image in the training set is given by
$$ L_i(\phi, \psi) = -\mathbb{E}_{z \sim q_\phi(z \mid x_i)}\big[ \log p_\psi(x_i \mid z) \big] + D_{\mathrm{KL}}\big( q_\phi(z \mid x_i) \,\|\, p(z) \big), $$
where $p(z)$ is a standard Gaussian distribution on $\mathbb{R}^d$, $q_\phi(z \mid x_i)$ is a Gaussian distribution whose mean and variance are given by the encoder neural network, and $p_\psi(x \mid z)$ is a distribution associated with the reconstruction error term. For mean-squared error, one first maps $z$ to $\hat{x}$ using the de-convolutional neural network with parameters $\psi$ and then defines
$$ \log p_\psi(x \mid z) = -\tfrac{1}{2} \|x - \hat{x}\|_2^2 + \mathrm{const}, $$
where $\|\cdot\|_2$ is the Euclidean norm. Alternatively, one can use binary cross-entropy loss between $x$ and $\hat{x}$ to define $\log p_\psi(x \mid z)$.
For the Gaussian case, a "reparameterization trick" allows backpropagation to calculate the gradients during training. The idea is to represent the random sampling process in a differentiable form, $z = \mu(x) + \sigma(x) \odot \epsilon$ with $\epsilon \sim \mathcal{N}(0, I)$, where the mean $\mu(x)$ is added to standard Gaussian noise multiplied by the standard deviation $\sigma(x)$.
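The trick is easy to state in code: draw the noise from a fixed standard Gaussian and form the sample as a deterministic function of $\mu$, $\sigma$, and the noise, so gradients flow through $\mu$ and $\sigma$. A NumPy sketch ($\mu$ and $\sigma$ here are placeholder arrays; in the VAE they come from the encoder):

```python
import numpy as np

def reparameterize(mu, sigma, rng=None):
    """Sample z = mu + sigma * eps with eps ~ N(0, I).

    Gradients w.r.t. mu and sigma flow through this expression because the
    only random quantity, eps, does not depend on the parameters.
    """
    rng = rng if rng is not None else np.random.default_rng()
    eps = rng.standard_normal(np.shape(mu))
    return mu + sigma * eps
```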
Disentanglement. When we obtain the latent vector compressed by the VAE, we want every component of the latent vector to represent a different feature of the original image. Disentangled variational auto-encoders are designed to approach this goal [2, 11]. Disentanglement is applied by modifying the optimization loss function and adding the hyperparameter $\beta$ to get
$$ L_i(\phi, \psi) = -\mathbb{E}_{z \sim q_\phi(z \mid x_i)}\big[ \log p_\psi(x_i \mid z) \big] + \beta \, D_{\mathrm{KL}}\big( q_\phi(z \mid x_i) \,\|\, p(z) \big). $$
In our experiments, we adjusted the hyperparameter $\beta$ to reduce the weight of the KL-divergence loss. This pushed the VAE towards higher-quality reconstruction with less randomness.
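For a diagonal-Gaussian encoder, the KL term has the closed form $\tfrac{1}{2}\sum_k (\mu_k^2 + \sigma_k^2 - 1 - \log \sigma_k^2)$, so the $\beta$-weighted loss can be sketched directly; the function below uses squared-error reconstruction and is an illustrative sketch, not the authors' training code:

```python
import numpy as np

def beta_vae_loss(x, x_hat, mu, sigma, beta=1.0):
    """Squared-error reconstruction plus beta-weighted KL to N(0, I).

    Uses the closed form KL(N(mu, diag(sigma^2)) || N(0, I))
      = 0.5 * sum(mu^2 + sigma^2 - 1 - log sigma^2).
    """
    recon = 0.5 * np.sum((x - x_hat) ** 2)
    kl = 0.5 * np.sum(mu ** 2 + sigma ** 2 - 1.0 - np.log(sigma ** 2))
    return recon + beta * kl
```

Setting `beta` below 1 down-weights the KL term, trading latent regularity for reconstruction quality, which matches the adjustment described above.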
4 Experimental Setup
In this section, we describe the datasets, CNN models, variational auto-encoder models, attacks and evaluation metrics used for our experiments.
Datasets. We use three different datasets: MNIST, CIFAR-10, and the NIPS 2017 Defense Against Adversarial Attacks development dataset (Kaggle competition dataset: https://www.kaggle.com/c/nips-2017-defense-against-adversarial-attack/data). MNIST is a dataset of handwritten digits where each image is a $28 \times 28$ gray-scale image labeled with one of ten classes from 0 to 9. There are 60000 training images and 10000 testing images. CIFAR-10 is a ten-class image dataset where each image is a $32 \times 32$ RGB image. There are 50000 training images and 10000 testing images. The NIPS 2017 Defense Against Adversarial Attacks development dataset is a 1001-class image dataset where each image is a $299 \times 299$ RGB image obtained from ImageNet. We have access to the 1000 images provided by the competition in the development set.
CNN Classifiers. For MNIST and CIFAR-10, we set up our own CNN classifiers and train them for classification. For the NIPS dataset, we use the pre-trained 1001-class Inception V3 model provided by the NIPS Kaggle competition. The structure of each classifier is described in Table 1. Before any adversarial attack, the classification accuracies on the MNIST test set, CIFAR-10 test set, and NIPS 2017 dataset are 0.9904, 0.8663, and 0.945, respectively. We use these as baselines when evaluating our defense.
Attacks. All our attacks are white-box attacks. Thus, we assume that the attacker has access to the parameters of the deployed classifier but is not aware of the defense strategies. In order to make our defense universal, we only train our VAEs to reconstruct original images instead of learning to reconstruct original images from attacked images (adversarial training). We use FGSM and I-FGSM as our attack methods. For MNIST, we sweep the perturbation size $\epsilon$ for FGSM and I-FGSM; the number of iterations for I-FGSM is 10. For CIFAR-10, we likewise sweep $\epsilon$ for FGSM and I-FGSM, with 10 iterations for I-FGSM. For NIPS, we sweep $\epsilon$ for FGSM, following the same settings as Shaham et al.
Defense. We evaluate our defense alongside JPEG compression, which is the overall best basis-transformation defense proposed by Shaham et al. Following their results on tuning the JPEG quality parameter, we consider a JPEG quality of 23 and also evaluate the performance of other JPEG qualities.
Evaluation. We evaluate the performance of the attacks and defense on top-1 prediction accuracy versus the average relative L2 difference between original and attacked images. For $N$ images, the average relative L2 difference is defined to be
$$ \frac{1}{N} \sum_{i=1}^{N} \frac{\|\tilde{x}_i - x_i\|_2}{\|x_i\|_2}. $$
Normally a larger L2 difference corresponds to larger perturbations, which cause more errors in classification. However, for a more efficient attack, a relatively small L2 difference can cause a significant decrease in classification accuracy. For example, in Tables 6 and 7, a normalized relative L2 difference of 0.01 results in accuracies of 0.35 and 0.15 for the FGSM and I-FGSM attacks, respectively. I-FGSM is a more precisely crafted attack, adding small perturbations that confuse the classifier more efficiently.
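The metric defined above can be computed directly; a minimal sketch, assuming the images are stored as float arrays of matching shape:

```python
import numpy as np

def avg_relative_l2(originals, attacked):
    """Average of ||x_adv - x||_2 / ||x||_2 over a set of image pairs."""
    ratios = [np.linalg.norm((xa - x).ravel()) / np.linalg.norm(x.ravel())
              for x, xa in zip(originals, attacked)]
    return float(np.mean(ratios))
```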
(Table: classification accuracy versus average relative L2 difference for no defense, JPEG quality 10 and 23, and patch-wise VAE defenses with strides 16, 32, and 64, plus the ensemble.)
5 Variational Auto-encoder Training
Models. We trained multiple variational auto-encoder models from scratch. The encoder structures for MNIST and CIFAR-10 are in Table 2. The decoder structures are mirror images of the encoders. For MNIST and CIFAR-10, we train the VAE on the training set images and evaluate the defense on the testing set.
For the NIPS 2017 dataset, the original images are $299 \times 299$. Instead of training an auto-encoder on the original images, we decided to train our VAE on patches randomly extracted from the 1000 original images provided. In this way, the model is easier to train and a patch-wise reconstruction is much more flexible. We use three different input patch sizes.
Training. We use the ADAM optimizer to perform the end-to-end training process independently of the attack. For the NIPS 2017 dataset, we also made some modifications to the model structure. In particular, a tunable clipping value was added to the noise used for sampling the latent space. This restricts the random Gaussian noise to lie within the clipping range. We set the default clipping range to [-5, 5] and used it for training. For each training process, we trained for enough epochs that the first term in the loss function changed very little.
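The clipped sampling described above amounts to a one-line change to the reparameterization step: clip the standard Gaussian draw before scaling. The default range follows the text; the function name is ours:

```python
import numpy as np

def sample_latent_clipped(mu, sigma, clip=5.0, rng=None):
    """Reparameterized sample with the Gaussian noise clipped to [-clip, clip]."""
    rng = rng if rng is not None else np.random.default_rng()
    eps = np.clip(rng.standard_normal(np.shape(mu)), -clip, clip)
    return mu + sigma * eps
```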
MNIST. For MNIST, our defense is much more effective against FGSM than JPEG compression. The results are consistent as we tune the FGSM and I-FGSM parameter $\epsilon$ to enhance the attack and cause different amounts of perturbation. In Table 4, for example, our VAE defense maintains an accuracy of 0.845 while the JPEG (quality 23) defense only restores the network's accuracy from 0.408 to 0.505. The results for the I-FGSM attack are shown in Figure 2.
(Table: classification accuracy versus average relative L2 difference for no defense, JPEG quality 10 and 23, and patch-wise VAE defenses with strides 16, 32, and 64, plus the ensemble.)
CIFAR-10. For CIFAR-10, our defense also consistently outperforms JPEG compression. For both small and large perturbations, our VAE defense maintains higher accuracy than JPEG with quality parameter 23. However, if we set the quality parameter comparatively low (10), JPEG compression restores far fewer images for correct classification at small adversarial perturbations due to the information loss. But, as the adversarial perturbation gets larger, JPEG surpasses the performance of the VAE because the significant information loss also removes most of the adversarial perturbations.
NIPS 2017. For the NIPS 2017 dataset, we trained three different VAE models with different input patch sizes. The training data are patches randomly extracted from the 1000 images. Since the original images are $299 \times 299$, the training process would be very time consuming if we directly trained a variational auto-encoder on the full images. In order to decrease training time and add flexibility, we decided to apply our image reconstruction on patches. Our approach is to extract patches from the original image, reconstruct each patch with our trained VAE, stack the reconstructed patches together, and average the overlapping parts.
Here, we introduce another hyper-parameter for our VAE defense: the stride of the reconstruction process, where we stack overlapping pixels and take the average. A smaller stride results in more overlapping area and creates higher-quality images, but it also tends to preserve the adversarial perturbations. Adding the stride as a hyperparameter further improves the defense's flexibility. Figure 4 shows an example of image reconstruction with patches and a stride of 1.
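The stride-based stitching can be sketched as follows: slide a window over the image, reconstruct each patch, accumulate the outputs, and divide by the per-pixel overlap count. Here `reconstruct` is a placeholder for any patch-level function (e.g. a trained VAE); for simplicity the sketch handles a single-channel 2-D image and ignores border remainders:

```python
import numpy as np

def patchwise_reconstruct(img, reconstruct, patch=64, stride=32):
    """Reconstruct a 2-D image patch by patch, averaging overlapping outputs."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    count = np.zeros_like(img, dtype=float)
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            out[i:i + patch, j:j + patch] += reconstruct(img[i:i + patch, j:j + patch])
            count[i:i + patch, j:j + patch] += 1.0
    # Avoid division by zero for pixels no patch covered (border remainders).
    return out / np.maximum(count, 1.0)
```

A smaller `stride` increases the overlap count per pixel, which is the averaging effect described above.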
Reconstruction of images from patches typically suffers from sharp edges between stitched patches. If we only stack overlapping patches and average the pixel values, we still suffer from significant artifacts near the corners and edges of the stacked patches. Worse, these edge effects behave like additive noise for the classifier. To mitigate this problem, we also apply a smoothing filter to every image we reconstruct from patches. While the edge effects are significantly reduced, the smoothing filter also blurs the image. To compensate for this blurriness, we adjust the hyper-parameter $\beta$ and train the model longer to achieve more precise reconstruction. The end result was that the smoothing filter increased the classification accuracy on VAE-reconstructed images by roughly 10%, for both benign and attacked images.
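A simple mean filter illustrates the kind of smoothing described above (the paper does not specify the filter used, so the $k \times k$ mean filter with edge padding below is an assumption for illustration):

```python
import numpy as np

def smooth(img, k=3):
    """k x k mean filter with edge-replicated padding, to soften patch seams."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    # Sum the k*k shifted copies, then divide: equivalent to a box blur.
    for di in range(k):
        for dj in range(k):
            out += padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / (k * k)
```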
Table 6 shows the performance of our VAE defense on the NIPS 2017 dataset under the FGSM attack. We apply VAE models of different input sizes with different reconstruction strides. With a 0.167 average relative L2 difference between original and adversarial images, our best single model restores the classification accuracy to 0.563. In this case, JPEG compression of quality 10 reaches an accuracy of 0.548. However, for small perturbations (0.01 L2 difference), JPEG compression of quality 23 significantly outperforms any other model, restoring the accuracy to 0.86. But the performance of quality-23 JPEG drops rapidly as the L2 difference increases; eventually, it ends up with an accuracy of 0.0401, the lowest of all.
Through these experiments, we observe that a better reconstruction of the original image results in a better defense against small perturbations but fails to remove adversarial noise as the perturbations get larger. Two of our patch VAE models show similar results, and their defense against small perturbations is better than that of the third. But this advantage does not persist as the attack becomes severe.
To overcome the disadvantages of each patch size, we also considered an ensemble-averaged output of the four VAE models in the table. To compute this, the pixels of the four independent VAE reconstructions are averaged. The last column shows the performance of the ensemble-averaged output. Its performance approaches that of JPEG quality 10 on small perturbations while maintaining better performance for larger perturbations.
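The ensemble step itself is a plain pixel-wise mean over the independent reconstructions:

```python
import numpy as np

def ensemble_average(reconstructions):
    """Pixel-wise mean of a list of equally-sized reconstructed images."""
    return np.mean(np.stack(reconstructions, axis=0), axis=0)
```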
The I-FGSM attack is more efficient than the FGSM attack: it creates small perturbations (0.07 average relative L2 difference) that significantly decrease the classification accuracy to 0.02. As shown in Table 7, a single patch-wise VAE model is capable of restoring the classification accuracy from 0.025 to 0.739. From the results, we see that the ensemble-averaged model provides a better defense against small perturbations than any single VAE model but sacrifices performance when facing larger perturbations.
In this paper, we explored the performance of variational auto-encoders (VAEs) designed to defend against adversarial attacks and proposed a patch-wise reconstruction method for large-resolution images. The proposed method proved robust against FGSM and I-FGSM adversarial attacks.
In future work, we plan to improve these strategies by modifying the training to more directly reward the removal of adversarial attacks. During our experiments, we observed that our method is less effective for small perturbations to large images. Thus, we are currently considering modified VAE structures to overcome this weakness.
-  P. Baldi. Autoencoders, unsupervised learning, and deep architectures. In Proceedings of ICML Workshop on Unsupervised and Transfer Learning, pages 37–49, 2012.
-  C. P. Burgess, I. Higgins, A. Pal, L. Matthey, N. Watters, G. Desjardins, and A. Lerchner. Understanding disentangling in β-VAE. arXiv preprint arXiv:1804.03599, 2018.
-  N. Carlini and D. Wagner. Adversarial examples are not easily detected: Bypassing ten detection methods. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pages 3–14. ACM, 2017.
-  N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pages 39–57. IEEE, 2017.
-  N. Das, M. Shanbhogue, S.-T. Chen, F. Hohman, L. Chen, M. E. Kounavis, and D. H. Chau. Keeping the bad guys out: Protecting and vaccinating deep learning with jpeg compression. arXiv preprint arXiv:1705.02900, 2017.
-  N. Das, M. Shanbhogue, S.-T. Chen, F. Hohman, S. Li, L. Chen, M. E. Kounavis, and D. H. Chau. Shield: Fast, Practical Defense and Vaccination for Deep Learning using JPEG Compression. arXiv preprint arXiv:1802.06816, 2018.
-  A. Fawzi, O. Fawzi, and P. Frossard. Analysis of classifiers’ robustness to adversarial perturbations. Machine Learning, 107(3):481–508, 2018.
-  I. Goodfellow, Y. Bengio, A. Courville, and Y. Bengio. Deep learning, volume 1. MIT press Cambridge, 2016.
-  I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
-  S. Gu and L. Rigazio. Towards deep neural network architectures robust to adversarial examples. arXiv preprint arXiv:1412.5068, 2014.
-  I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, and A. Lerchner. β-VAE: Learning basic visual concepts with a constrained variational framework. 2016.
-  X. Huang, M. Kwiatkowska, S. Wang, and M. Wu. Safety verification of deep neural networks. In International Conference on Computer Aided Verification, pages 3–29. Springer, 2017.
-  D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
-  D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
-  S. Kullback. Information theory and statistics. Courier Corporation, 1997.
-  A. Kurakin, I. Goodfellow, and S. Bengio. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236, 2016.
-  B. Liang, H. Li, M. Su, X. Li, W. Shi, and X. Wang. Detecting Adversarial Image Examples in Deep Neural Networks with Adaptive Noise Reduction. IEEE Transactions on Dependable and Secure Computing, 2018.
-  A. Nguyen, J. Yosinski, and J. Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 427–436, 2015.
-  N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. Swami. Distillation as a defense to adversarial perturbations against deep neural networks. In 2016 IEEE Symposium on Security and Privacy (SP), pages 582–597. IEEE, 2016.
-  A. Prakash, N. Moran, S. Garber, A. DiLillo, and J. Storer. Deflecting adversarial attacks with pixel deflection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8571–8580, 2018.
-  D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
-  U. Shaham, J. Garritano, Y. Yamada, E. Weinberger, A. Cloninger, X. Cheng, K. Stanton, and Y. Kluger. Defending against Adversarial Images using Basis Functions Transformations. arXiv preprint arXiv:1803.10840, 2018.
-  C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2818–2826, 2016.
-  C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
-  L. Theis, W. Shi, A. Cunningham, and F. Huszár. Lossy image compression with compressive autoencoders. arXiv preprint arXiv:1703.00395, 2017.
-  G. K. Wallace. The JPEG still picture compression standard. IEEE transactions on consumer electronics, 38(1):xviii–xxxiv, 1992.
-  V. Zantedeschi, M.-I. Nicolae, and A. Rawat. Efficient defenses against adversarial attacks. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pages 39–49. ACM, 2017.
-  Y. Zhou, M. Kantarcioglu, and B. Xi. Breaking Transferability of Adversarial Samples with Randomness. arXiv preprint arXiv:1805.04613, 2018.