Over the past decade, research on neuromorphic-inspired computing models and on the application of Deep Neural Networks (DNNs) for estimating hard-to-compute functions or learning hard-to-program tasks has grown significantly, and the accuracy of these models has considerably improved. Early research focused on improving model accuracy (Krizhevsky and et al., 2012; He and et al., 2016); as models matured, researchers explored other dimensions, such as the energy efficiency of the models (Neshatpour and et al., 2018, 2019; Sayadi and et al., 2017) and of the underlying hardware (Mirzaeian et al., 2020b, a; Chen and et al., 2018). The wide adoption of these capable solutions then raised concerns over their security.
Among the many security aspects of learning solutions is their vulnerability to adversarial attacks (Szegedy et al., 2013; Goodfellow et al., 2014). Researchers have illustrated that imperceptible yet targeted adversarial changes to the input (i.e., image, audio, or video input) of a neural network can dramatically degrade its performance (Kurakin et al., 2016; Papernot et al., 2016b, a).
The vulnerability of DNNs to adversarial attacks has raised serious concerns about using these models in critical applications, in which an adversary can slightly perturb the input to fool the model (Elsayed et al., 2018; Yang and et al., 2018). In this paper, we focus on adversarial attacks against image classification models, where an adversary manipulates an input image to force the DNN to misclassify it.
Non-robust features are features that are strongly associated with a certain class yet have small variation across classes (Garg et al., 2018; Behnia and et al., 2020). Ilyas et al. (Ilyas et al., 2019) showed that the high sensitivity of the underlying model to the non-robust features present in the input dataset is a major reason for the model's vulnerability to adversarial examples. An adversary therefore crafts a perturbation that accentuates the non-robust features to mount a successful adversarial attack.
From this discussion, one means of building robust classifiers is to identify robust features and train a model using only those features (which have large inter-class variation), making it harder for an adversary to mislead the classifier (Ilyas et al., 2019). Motivated by this, we propose a simple yet effective method for improving the resilience of DNNs: we introduce auxiliary model(s) trained in the spirit of knowledge distillation, while forcing diversity across the features formed in their latent spaces.
We argue that adversarial attacks transfer across models because the models learn similar latent spaces for non-robust features. This results either from 1) using the same training set or 2) using knowledge distillation while focusing solely on classification accuracy. In other words, by sharing the dataset, or the knowledge of a network trained on it, models come to share potential vulnerabilities; an attack that works on one model is therefore very likely to work on the other(s). This conclusion is also supported by the observations of Ilyas et al. (Ilyas et al., 2019). Based on this argument, we propose to augment the knowledge-distillation task with an additional, explicit requirement: the features learned by the student (i.e., auxiliary) model(s) should be distinct and independent from those of the teacher (i.e., main) model. To this end, as illustrated in Fig. 1, we introduce the concept of Latent Space Separation (LSS), forcing the auxiliary model to learn features with little or no correlation to the teacher's. Hence, an adversarial attack on the main model will have minimal impact on the latent representation of features learned by the auxiliary model(s).
2. Prior Work
Prior research on adversarial learning has produced different explanations for why learning models can be easily fooled by adversarial input perturbations. Early research blamed the non-linearity of neural networks for their vulnerability (Goodfellow et al., 2014; Biggio and et al., 2013). This perception was later challenged by Goodfellow et al. (Goodfellow et al., 2014), who developed the Fast Gradient Sign Method (FGSM) and explained how neural network linearity can be exploited to rapidly build adversarial examples.
Building robust learning models that can resist adversarial examples has been a topic of interest for many researchers. Some of the most notable prior art on this topic includes 1) adversarial training (Shaham et al., 2018; Amberkar et al., 2018), 2) knowledge distillation (KD) (Hinton et al., 2015), and 3) denoising and refinement of adversarial examples (Meng and Chen, 2017).
Adversarial Training: This is the process of incrementally training a model with known adversarial examples to improve its resilience. The problem with this approach is that the model's resilience improves only against similarly generated adversarial examples (Wang et al., 2019; Tramèr and Boneh, 2019).
Knowledge Distillation (KD): In this method, a compact (student) model learns to follow the behavior of one or more teacher models. It was originally introduced as a means of building compact models (students) from more accurate yet larger models (teachers). Later, it was also used to diminish the sensitivity of the student model's output to input perturbations (Papernot et al., 2015). However, the work in (Carlini and Wagner, 2016) illustrated that if the attacker has access to the student model, with minor changes the student model can be as vulnerable as the teacher. Specifically, knowledge distillation can be categorized as a gradient-masking defense (Athalye et al., 2018), in which the magnitude of the underlying model's gradients is reduced to minimize the effect of changes in the model's input on its output. Although gradient-masking defenses can be effective against white-box attacks, they are not resistant to black-box evasion attacks (Carlini and Wagner, 2016). Our proposed solution is motivated by KD; however, we do not force the auxiliary (i.e., student) network(s) to follow the output layers of the main network (i.e., teacher). In contrast to KD, the auxiliary network has to learn a different latent space while being trained for the same task and on the same dataset.
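For reference, the conventional KD objective that our method departs from can be sketched as follows (a minimal PyTorch sketch; the temperature T, weight alpha, and function names are illustrative assumptions, not the settings of any cited work):

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Soft-target term: match the teacher's softened output distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so the gradient magnitude is independent of T
    # Hard-label term: ordinary cross-entropy on the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```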
Refining the Input Image: Adversarial defenses that rely on refining the input samples try to denoise the input image using some form of autoencoder (variational, denoising, etc.) (Chen and Sirkeci-Mergen, 2018). In this approach, the image is first encoded by a deep network into a latent code (a lossy, compressed, yet reconstructable representation of the input image), then reconstructed by a decoder, and the decoded image is fed to the classifier (Chen and Sirkeci-Mergen, 2018). This approach suffers from two main weaknesses: (1) the decoder's reconstruction error can significantly reduce the classifier's accuracy, and this error grows as the number of input classes increases; (2) the encoder network is itself vulnerable to adversarial attacks, meaning new adversarial examples can be crafted against the combined model that includes the encoder network.
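To make the refinement pipeline concrete, the following is a minimal sketch of such an encode-decode-classify defense (layer sizes are illustrative assumptions for single-channel 28x28 inputs, not the exact architecture of the cited works):

```python
import torch
import torch.nn as nn

class DenoisingRefiner(nn.Module):
    """Minimal encoder/decoder used as an input-refinement defense: the
    input is compressed to a latent code and reconstructed before being
    classified, in the spirit of (Meng and Chen, 2017)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),  # 28 -> 14
            nn.Conv2d(16, 8, 3, stride=2, padding=1), nn.ReLU(),  # 14 -> 7
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(8, 16, 2, stride=2), nn.ReLU(),    # 7 -> 14
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(), # 14 -> 28
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def refined_predict(refiner, classifier, x):
    # Reconstruct the (possibly perturbed) input, then classify it.
    return classifier(refiner(x))
```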
3. Proposed Method
Our objective is to formulate a knowledge distillation process in which one or more auxiliary (student) models are trained to closely follow the prediction of a main (teacher) model $M$, while being forced to learn substantially different latent spaces. For example, in Fig. 2, assume three auxiliary models $A_1$, $A_2$, $A_3$ have been trained alongside the main model $M$ to maximize the diversity between their latent-space representations. Our desired outcome is that an adversarial perturbation that moves the latent representation of an input sample $x$ out of its corresponding class boundary in $M$ has a negligible or small impact on the movement of the corresponding latent representation of $x$ within the class boundaries of the auxiliary models $A_1$, $A_2$, $A_3$. Hence, an adversarial input that fools model $M$ becomes less effective or ineffective on the auxiliary models. This objective is reached through the way each model's loss function is defined. The details of the main ($M$) and auxiliary ($A$) networks, the learning procedure, and the objective function are explained next:
Main Model ($M$): In this paper, we evaluate our proposed method on two datasets, MNIST (LeCun et al., 1998) and CIFAR10 (Krizhevsky and et al., 2009). Depending on the underlying dataset, the structure of the main (teacher) model $M$ is selected as shown in Table 1. We employ the cross-entropy loss $\mathcal{L}_{CE}$, see Eq. 1, as the objective function for training the model $M$:

$$\mathcal{L}_{CE} = -\sum_{(x, y) \in D} \log M(x)_y \qquad (1)$$

In this equation, $x$ and $y$ are a training sample and its label, respectively, and $M(x)_y$ is the probability the model assigns to the true class.
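A minimal PyTorch sketch of this training loop, assuming a standard optimizer choice (Adam); the actual hyperparameters are those listed in Table 1:

```python
import torch
import torch.nn.functional as F

def train_main_model(model, loader, epochs=10, lr=1e-3, device="cpu"):
    """Train the main (teacher) model M with plain cross-entropy (Eq. 1)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.to(device).train()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
    return model
```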
Auxiliary Model ($A$): Each auxiliary model is a structural replica of the main model $M$. However, model $A$ is trained using a modified KD training process. Let the outputs of layer $l$ of models $M$ and $A$ (i.e., their latent spaces at layer $l$) be denoted by $M_l(x)$ and $A_l(x)$, respectively. Our training objective is to force model $A$ to learn a very different latent space from $M$'s, while both perform the same classification task on the same dataset. To achieve this, the term $S_l(A, M)$, which captures the similarity of layer $l$ of model $A$ to layer $l$ of model $M$, is defined as follows:

$$S_l(A, M) = \sum_{x \in D} \langle A_l(x), M_l(x) \rangle \qquad (2)$$

In this objective function, $D$ is the dataset, and $\langle \cdot, \cdot \rangle$ is the inner product. This similarity measure is then factored into the loss function for training model $A$, which increases the dissimilarity of layer $l$ of model $A$ with respect to $M$:

$$\mathcal{L}_{A} = (1 - \lambda)\,\mathcal{L}_{CE} + \lambda\, S_l(A, M) \qquad (3)$$

In this equation, $\lambda$ is a regularization parameter that controls the contribution of each term.
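The following PyTorch sketch shows one way to implement the similarity term of Eq. 2 and the auxiliary loss of Eq. 3 (the activation normalization and the function names are our assumptions; the paper's exact implementation may differ):

```python
import torch.nn.functional as F

def latent_similarity(a_latent, m_latent):
    # S_l(A, M) over a batch: mean inner product of layer-l activations.
    # Normalizing the flattened activations is our assumption; it keeps
    # the term's scale comparable to the cross-entropy term.
    a = F.normalize(a_latent.flatten(1), dim=1)
    m = F.normalize(m_latent.flatten(1), dim=1)
    return (a * m).sum(dim=1).mean()

def auxiliary_loss(a_logits, a_latent, m_latent, labels, lam=0.5):
    # Eq. 3: keep classification accuracy via cross-entropy while
    # penalizing similarity to the frozen main model's latent space.
    ce = F.cross_entropy(a_logits, labels)
    sim = latent_similarity(a_latent, m_latent.detach())
    return (1.0 - lam) * ce + lam * sim
```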
Let's assume the adversarial perturbation $\delta$, when added to the input $x$, forces the model $M$ to misclassify, or more precisely $M(x + \delta) \neq M(x) = y$. For this misclassification (evasion) to happen, in a layer $l$ (close to the output) the added noise has pushed some of the class-identifying features outside the related class boundary learned by model $M$. However, the class boundaries learned by $M$ and $A$ are quite different. Therefore, as illustrated in Fig. 2, although the noise $\delta$ can move a feature out of its learned class boundary in model $M$, it has very limited power to displace the features learned by model $A$ outside their class boundary in layer $l$. In other words, although the similarity term $S_l(A, M)$ between the main and auxiliary models has a low value, the similarity between the auxiliary model's latent representations before and after adding the perturbation remains high; consequently, the auxiliary model has low sensitivity to the perturbation $\delta$, meaning $A(x + \delta) \approx A(x)$.
3.1. Black- and White-Box Defense
The auxiliary models can be used to defend against both white- and black-box attacks; the description and explanation of each are given next:
Black-box Defense: In a black-box attack, the attacker has access to the main model $M$: she can apply her desired inputs to the model and monitor its predictions to design an attack and add the adversarial perturbation $\delta$ to the input $x$. Since the attacker has no access to the auxiliary model $A$, and since $A$ has a very different feature space, $A$ remains resistant to black-box attacks, and using a single auxiliary model is sufficient.
White-box Defense: In a white-box attack, the attacker knows everything about the models $M$ and $A$, including parameters and weights, full details of each model's architecture, and the dataset used for training. For this reason, a single auxiliary model is not enough, as that model could itself be used for designing the attack. However, we can make the attack significantly more difficult (and also improve the classification confidence) by training and using multiple robust auxiliary models, each of which learns different features compared with all the other auxiliary models. To resist the white-box attack, we then build a majority-voting system from the robust auxiliary models.
Let's assume we want to train $N$ auxiliary models $A_1, \dots, A_N$, each having a diverse latent space. To learn these networks, first, based on Eq. 3, $A_1$ is trained to be diverse from the main model $M$. Then $A_2$ is trained to be diverse from both $M$ and $A_1$. This process continues one model at a time, until the $N$-th model is trained such that its latent space is diverse from those of all previous models. Accordingly, for learning the $i$-th auxiliary model, the loss function is defined as:

$$\mathcal{L}_{A_i} = (1 - \lambda)\,\mathcal{L}_{CE} + \lambda \Big( S_l(A_i, M) + \sum_{j=1}^{i-1} S_l(A_i, A_j) \Big) \qquad (4)$$
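Extending the previous sketch to the $i$-th auxiliary model, the diversity penalty can be accumulated against the main model and all previously trained, frozen auxiliaries (a sketch reusing `latent_similarity` from above; averaging over predecessors is our assumption, used to keep the term's scale independent of $i$):

```python
import torch.nn.functional as F

def auxiliary_loss_i(ai_logits, ai_latent, frozen_latents, labels, lam=0.5):
    # Eq. 4 (as reconstructed): penalize similarity to the main model and
    # to every previously trained auxiliary. `frozen_latents` holds the
    # detached layer-l activations of M, A_1, ..., A_{i-1} for this batch.
    ce = F.cross_entropy(ai_logits, labels)
    sim = sum(latent_similarity(ai_latent, z.detach()) for z in frozen_latents)
    sim = sim / len(frozen_latents)  # averaging is our assumption
    return (1.0 - lam) * ce + lam * sim
```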
Finally, to increase the confidence of the prediction, instead of simple majority voting over the top-1 candidates, we consider a boosted defense in which the voting system considers the top-$k$ candidates of each model whenever majority voting on the top-1 predictions fails (i.e., there is no majority among the top-1 predictions). This gives us two benefits: 1) if a network misclassifies due to adversarial perturbation, there is still a high chance that the network assigns a high (but not the highest) probability to the correct class; 2) if a model is confused between the correct class and a closely related but incorrect class and assigns its top-1 confidence to the incorrect class, it can still help identify the correct class in the voting system.
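A minimal sketch of this boosted voting (the fallback width `k` and the tie-breaking rule are our assumptions):

```python
import torch

def boosted_vote(logit_list, k=3):
    """Majority vote over ensemble members, falling back to the top-k
    candidates of each model when no top-1 majority exists."""
    top1 = [l.argmax(dim=1) for l in logit_list]  # per-model top-1 labels
    votes = torch.stack(top1, dim=0)              # shape: [models, batch]
    out = torch.empty(votes.shape[1], dtype=torch.long)
    for b in range(votes.shape[1]):
        labels, counts = votes[:, b].unique(return_counts=True)
        if counts.max() > len(logit_list) // 2:   # clear top-1 majority
            out[b] = labels[counts.argmax()]
        else:                                     # boosted top-k fallback
            topk = torch.cat([l[b].topk(k).indices for l in logit_list])
            tl, tc = topk.unique(return_counts=True)
            out[b] = tl[tc.argmax()]
    return out
```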
4. Experimental Results
In this section, we evaluate the performance of our proposed defense against various white- and black-box attacks. All models are trained using the PyTorch framework (Paszke et al., 2019), and all attack scenarios are implemented using Foolbox (Rauber et al., 2017), a toolbox for crafting various adversarial examples. The details of the hyperparameters and system configuration are shown in Table 1.
4.1. Latent Space Separation
To quantify the diversity of the latent-space representations of an ensemble trained on a dataset $D$, we first define the Latent Space Separation (LSS) measure between the latent spaces of two models $A$ and $B$ as:

$$LSS(A, B) = \mathrm{margin}_{\mathrm{SVM}}\big(\mathcal{L}_A(D), \mathcal{L}_B(D)\big) \qquad (5)$$

in which $\mathcal{L}_A(D)$ and $\mathcal{L}_B(D)$ are the latent-space representations of the dataset $D$ for models $A$ and $B$, respectively, and $\mathrm{margin}_{\mathrm{SVM}}$ is the margin of a Support Vector Machine (Cortes and Vapnik, 1995) trained to linearly separate the two latent spaces obtained on dataset $D$. More precisely, the LSS between two latent spaces is obtained in four steps: 1) training both models $A$ and $B$ on a dataset, e.g., MNIST; 2) generating the latent space of each model on the evaluation set, i.e., $\mathcal{L}_A(D)$ and $\mathcal{L}_B(D)$; 3) turning the two latent representations into a two-class classification problem tackled by an SVM classifier; 4) using the SVM margin as the LSS distance between the two latent representations of the dataset. This process is shown in Fig. 4. Note that the SVM classifier should be set to hard-margin mode, meaning no support vector is allowed to pass the margins. When the SVM classification fails, the latent spaces are not linearly separable, i.e., there is either an overlap between the latent spaces or the decision boundary cannot be modeled linearly. In both cases, the larger the marginal distance between the latent spaces, the higher the diversity of the formed latent spaces.
Using Eq. 5, we define a more general formula for the LSS of an ensemble comprised of $N$ models, see Eq. 6: the total LSS of an ensemble is obtained by averaging the LSS of each model's latent space against the aggregation of all the other models'.

$$LSS_{total} = \frac{1}{N} \sum_{i=1}^{N} LSS\big(A_i, \{A_j\}_{j \neq i}\big) \qquad (6)$$

For instance, for an ensemble comprised of three models $A_1$, $A_2$, and $A_3$, the total LSS is calculated as $LSS_{total} = \frac{1}{3}\big(LSS(A_1, \{A_2, A_3\}) + LSS(A_2, \{A_1, A_3\}) + LSS(A_3, \{A_1, A_2\})\big)$. Since LSS measures the marginal distance of an SVM in a two-class classification task, $LSS(A_1, \{A_2, A_3\})$ indicates that an SVM classification has been performed between the latent space of model $A_1$ and the aggregation of the latent spaces of the other two models, $A_2$ and $A_3$.
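The following sketch computes Eq. 5 with scikit-learn's linear SVM (a very large C approximates the hard-margin mode described above) and averages the per-model scores per Eq. 6; function names are our own:

```python
import numpy as np
from sklearn.svm import SVC

def pairwise_lss(za, zb):
    """LSS between two latent-space representations (Eq. 5): fit a
    (near) hard-margin linear SVM to separate them and return the
    margin width. Assumes the two point clouds are linearly separable."""
    X = np.vstack([za, zb])
    y = np.concatenate([np.zeros(len(za)), np.ones(len(zb))])
    svm = SVC(kernel="linear", C=1e6).fit(X, y)
    return 2.0 / np.linalg.norm(svm.coef_)  # margin width = 2 / ||w||

def ensemble_lss(latents):
    """Total LSS of an ensemble (Eq. 6): average each model's LSS against
    the aggregation of all other models' latent spaces."""
    scores = []
    for i, zi in enumerate(latents):
        rest = np.vstack([z for j, z in enumerate(latents) if j != i])
        scores.append(pairwise_lss(zi, rest))
    return float(np.mean(scores))
```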
To investigate the effectiveness of LSS as a metric for measuring the diversity between different latent spaces, we considered three different scenarios for training an ensemble of 3 models with the same structure: I) Random Initialization (RI), where the 3 models are trained independently from random initial values, see Fig. 3-right; II) Knowledge Distillation (KD), where the 3 models are trained collaboratively as shown in Fig. 3-middle; III) Diversity Knowledge Distillation (DKD), where the 3 models are trained in a collaborative manner different from KD, see Fig. 3-left. Note that the KD and RI methods deal with the softmax probabilities, shown with red boxes, whereas DKD uses a mix of the softmax outputs and the latent spaces, shown in black. Alongside each of the designs DKD, KD, and RI, a boosted version is implemented, denoted by DKD*, KD*, and RI*, respectively. Both KD and DKD are trained one model at a time: each model considers the previously trained models during its training phase, while those models are frozen, i.e., their parameters (weights) are not updated while the current model is being trained.
Fig. 5-top shows the $LSS_{total}$ for an ensemble of three models (for the MNIST and CIFAR10 datasets), with the structures described in Table 1. Fig. 5-A and C show that for the MNIST dataset, increasing the value of $\lambda$ causes 1) a rapid increase in the $LSS_{total}$ and 2) a slight drop in classification accuracy. Considering Eq. 4, increasing $\lambda$ puts less emphasis on the cross-entropy term, which explains the slight drop in accuracy and the increase in the LSS of the latent spaces of the ensemble models. A similar pattern holds for the CIFAR10 dataset, Fig. 5-B and D, in which $LSS_{total}$ increases to its maximum at $\lambda = 0.5$ while the classification accuracy slightly increases. Among all values of $\lambda$ that yield acceptable accuracy, the value that leads to the highest $LSS_{total}$ is selected. In other words, to obtain diverse latent spaces, the LSS between them should be maximized while the accuracy is kept within an acceptable range. For example, in Fig. 5-B, when $\lambda$ is 0.5, the LSS is at its maximum level while the accuracy is still acceptable. Accordingly, the following attacks were performed with $\lambda$ set to 0.5 and 0.9 for the CIFAR10 and MNIST datasets, respectively.
An immediate observation in Fig. 5 is that the $LSS_{total}$ between the latent spaces obtained by the DKD approach is noticeably larger than that of RI, and the $LSS_{total}$ of RI is slightly larger than that of KD. This observation aligns with our expectations: DKD is designed to increase the diversity between the latent spaces, while KD, in essence, increases the similarity between models, because the student model(s) imitate the behavior of the teacher(s).
4.2. Black-Box Attacks
To launch a black-box attack, the adversary trains a reference model on the available dataset. Then, relying on the transferability of adversarial examples, the adversary crafts adversarial examples on the reference model and applies them to the models under attack. We assume the adversary uses LeNet and VGG16 as the reference models for MNIST and CIFAR10, respectively, and that the models under attack form an ensemble of three models with the structure shown in Table 1. To investigate the performance of the proposed method (DKD) against black-box attacks, we also considered the two other methods, RI and KD, for training an ensemble of three models.
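A sketch of this transfer setup using the Foolbox 3.x API (the epsilon value and the use of FGSM are illustrative; the full set of evaluated attacks is listed in Table 3):

```python
import foolbox as fb
import torch

def transfer_attack(reference_model, target_models, images, labels, eps=0.1):
    """Black-box transfer sketch: craft FGSM examples on a surrogate
    (reference) model with Foolbox, then feed them to the defended
    models. Assumes image tensors are scaled to [0, 1]."""
    fmodel = fb.PyTorchModel(reference_model.eval(), bounds=(0, 1))
    attack = fb.attacks.FGSM()
    _, adv, _ = attack(fmodel, images, labels, epsilons=eps)
    # Evaluate the transferred examples on each defended model.
    with torch.no_grad():
        return [m(adv).argmax(dim=1) for m in target_models]
```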
We use majority voting between the ensemble models; however, in some cases each model yields a prediction different from all the others. We refer to these cases as failed majorities. For each of the applied attacks and the two datasets (MNIST and CIFAR10), we counted the number of failed majorities. Table 2 shows the difference between the regular and boosted versions of each benchmark with regard to the number of failed majority votes. From this table, we observe that: 1) the number of majority-voting failures for DKD is always higher than for the other two regular methods. This confirms that our objective function successfully trains diverse models, because majority voting fails whenever the models cannot agree on a label; thus, in the presence of an adversarial example, the disagreement between models trained with DKD is higher than for the others. 2) The number of majority failures drops when going from a regular to a boosted model. The effect of this drop on accuracy is shown by the Accuracy Improvement percentage (A.I.).
Table 3 shows the results of applying several state-of-the-art attacks on DKD, KD, and RI for the MNIST and CIFAR10 datasets. Investigating these results reveals two trends: 1) the boosted version of each method performs better than its regular version; 2) in almost all attack scenarios, DKD* outperforms the other defense scenarios.
4.3. White-Box Attacks
We considered two different white-box scenarios: 1) the standalone attack, in which the adversary applies the adversarial perturbation to one model at a time; 2) the aggregated attack, in which the adversary applies the adversarial perturbation to both models simultaneously. These two scenarios are explained through a toy example on two models $A$ and $B$ in Fig. 6. For the standalone attack, the adversary finds a direction $\delta_B$ as an adversarial perturbation on model $B$. In the second step, the adversary applies the same perturbation to the input samples of model $A$, hoping that the adversarial perturbation transfers to model $A$. In the forward pass of model $A$, the effect of the perturbation can be simplified as the projection of $\delta_B$ onto the latent space (plane) of model $A$. Based on the angle between the latent spaces of models $A$ and $B$, as shown in Fig. 6, three scenarios are possible, each representing a different hypothetical angle between plane $A$ and plane $B$. When the two latent spaces $A$ and $B$ are orthogonal (case I), a successful adversarial perturbation on one model does not transfer to the other model (it causes no perturbation), and vice versa. Hence, a disagreement between these two models can be an indicator of an adversarial example. In the two other cases (II, III), the smaller the angle between the latent spaces, the more likely the adversarial perturbation transfers across models. Note that the projection of the vector $\delta_B$ onto plane $A$ shows the direction of the adversarial perturbation for model $A$; if this projection is large enough to move the data point out of its class boundary, the input is misclassified.
For the aggregated attack, the adversary obtains the adversarial perturbations of both latent spaces $A$ and $B$ separately and independently, and then selects the direction of their sum as the adversarial direction; by scaling the magnitude of the vector along that direction, the adversary obtains an adversarial perturbation that can be applied to both models. In this scenario, let $\delta_A$ and $\delta_B$ be the perturbation vectors in planes $A$ and $B$, respectively, each leading to an imperceptible adversarial example, and let $\theta$ be the angle between the two vectors. Based on the value of $\theta$ between $\delta_A$ and $\delta_B$, three different outcomes are possible: 1) Fig. 6 Aggregated-II, $\theta > 90°$: the projection of $\delta_A + \delta_B$ on each vector $\delta_A$ and $\delta_B$ is smaller than the vector itself; in this case, $\delta_A + \delta_B$ cannot yield a successful attack on either model. 2) Fig. 6 Aggregated-I, $\theta < 90°$: the projection of $\delta_A + \delta_B$ on each of the individual vectors $\delta_A$ and $\delta_B$ is larger than the vector itself, which means $\delta_A + \delta_B$ can yield a successful attack; however, the resulting adversarial perturbation in this scenario is large. 3) Fig. 6 Aggregated-III, $\theta = 90°$: the projection of $\delta_A + \delta_B$ on $\delta_A$ and on $\delta_B$ equals the respective vector, which means $\delta_A + \delta_B$ can successfully move the data point out of its class boundary in both planes.
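The three cases follow from the identity $\mathrm{proj}_{\delta_A}(\delta_A + \delta_B) = \|\delta_A\| + \|\delta_B\|\cos\theta$. A small numerical check, purely for illustration, with unit-length perturbation vectors:

```python
import numpy as np

def projection_gain(delta_a, delta_b):
    # Length of the projection of (delta_a + delta_b) onto delta_a,
    # relative to |delta_a|: > 1 when theta < 90 degrees, = 1 at 90,
    # and < 1 when theta > 90 degrees.
    s = delta_a + delta_b
    proj = np.dot(s, delta_a) / np.linalg.norm(delta_a)
    return proj / np.linalg.norm(delta_a)

a = np.array([1.0, 0.0])
for angle in (60, 90, 120):  # degrees between delta_a and delta_b
    t = np.radians(angle)
    b = np.array([np.cos(t), np.sin(t)])
    print(angle, round(projection_gain(a, b), 3))  # 1.5, 1.0, 0.5
```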
Table 4 captures the results of various adversarial attacks on our proposed solution. For the FGSM attack, results for perturbation strengths of 0.1 and 0.3 are reported. Iterative attacks are executed for 200 iterations. DKD$_s$ denotes the evaluation of DKD* when the adversary uses the standalone attack, and DKD$_a$ the aggregated attack. As indicated in this table, our proposed solutions outperform prior-art defenses, illustrating the effectiveness of learning diverse features using our proposed solution.
5. Conclusion
To build robust models that can resist adversarial attacks, we proposed a method for increasing the diversity between the latent-space representations of the models used within an ensemble. We also introduced Latent Space Separation (the distance between the latent-space representations of the models in the ensemble) as a metric for measuring the robustness of the ensemble to adversarial examples. The evaluation of our proposed solution against white- and black-box attacks indicates that our ensemble model is resistant to adversarial examples and outperforms prior-art solutions.
Acknowledgement
This research was supported by the National Science Foundation (NSF Award# 1718538).
References
- Amberkar et al. (2018) Sairaj Amberkar et al. 2018. Efficient utilization of adversarial training towards robust machine learners and its analysis. In Proceedings of the International Conf. on Computer-Aided Design. ACM, 78.
- Athalye et al. (2018) A. Athalye et al. 2018. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. In Proceedings of the 35th International Conf. on Machine Learning (Proceedings of Machine Learning Research), Vol. 80. PMLR, Stockholm, Sweden, 274–283.
- Behnia and et al. (2020) F. Behnia and et al. 2020. Code-Bridged Classifier (CBC): A Low or Negative Overhead Defense for Making a CNN Classifier Robust Against Adversarial Attacks. In International Symposium on Quality Electronic Design (ISQED) 2020.
- Biggio and et al. (2013) B. Biggio and et al. 2013. Evasion attacks against machine learning at test time. In Joint European Conf. on machine learning and knowledge discovery in databases. Springer, 387–402.
- Carlini and Wagner (2016) Nicholas Carlini and David Wagner. 2016. Defensive distillation is not robust to adversarial examples. arXiv preprint arXiv:1607.04311 (2016).
- Carlini and Wagner (2017) Nicholas Carlini and David Wagner. 2017. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 39–57.
- Chen and Sirkeci-Mergen (2018) I-Ting Chen and Birsen Sirkeci-Mergen. 2018. A Comparative Study of Autoencoders against Adversarial Attacks. 132–136.
- Chen and et al. (2018) Y. Chen and et al. 2018. Eyeriss v2: A Flexible and High-Performance Accelerator for Emerging Deep Neural Networks. CoRR abs/1807.07928 (2018). arXiv:1807.07928 http://arxiv.org/abs/1807.07928
- Cortes and Vapnik (1995) C. Cortes and V. Vapnik. 1995. Support Vector Networks. Machine Learning 20 (1995), 273–297.
- Elsayed et al. (2018) Gamaleldin F. Elsayed, Shreya Shankar, et al. 2018. Adversarial Examples that Fool both Computer Vision and Time-Limited Humans. In NeurIPS.
- Garg et al. (2018) Shivam Garg et al. 2018. A Spectral View of Adversarially Robust Features. In NeurIPS.
- Goodfellow et al. (2014) Ian J. Goodfellow et al. 2014. Explaining and Harnessing Adversarial Examples. In International Conf. on Learning Representations, Vol. abs/1412.6572.
- He and et al. (2016) K. He and et al. 2016. Deep residual learning for image recognition. In proc. of the IEEE conf. on computer vision and pattern recognition. 770–778.
- Hinton et al. (2015) Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the Knowledge in a Neural Network. ArXiv abs/1503.02531 (2015).
- Ilyas et al. (2019) Andrew Ilyas et al. 2019. Adversarial Examples Are Not Bugs, They Are Features. ArXiv abs/1905.02175 (2019).
- Krizhevsky and et al. (2009) A. Krizhevsky and et al. 2009. Learning multiple layers of features from tiny images. Technical Report. Citeseer.
- Krizhevsky and et al. (2012) A. Krizhevsky and et al. 2012. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems. 1097–1105.
- Kullback and Leibler (1951) S. Kullback and R. A. Leibler. 1951. On Information and Sufficiency, In The Annals of Mathematical Statistics. Ann. Math. Statist. 22, 79–86. https://doi.org/10.1214/aoms/1177729694
- Kurakin et al. (2016) Alexey Kurakin et al. 2016. Adversarial Machine Learning at Scale. ArXiv abs/1611.01236 (2016).
- LeCun et al. (1998) Yann LeCun, Léon Bottou, et al. 1998. Gradient-based learning applied to document recognition. Proc. IEEE 86, 11 (1998), 2278–2324.
- Mannor et al. (2005) Shie Mannor et al. 2005. The cross entropy method for classification. In ICML.
- Meng and Chen (2017) Dongyu Meng and Hao Chen. 2017. Magnet: a two-pronged defense against adversarial examples. In Proceedings of the 2017 ACM SIGSAC Conf. on Computer and Communications Security. ACM, 135–147.
- Mirzaeian et al. (2020a) Ali Mirzaeian et al. 2020a. Nesta: Hamming weight compression-based neural proc. engine. In Proceedings of the 25th Asia and South Pacific Design Automation Conference.
- Mirzaeian et al. (2020b) A. Mirzaeian et al. 2020b. TCD-NPE: A Re-configurable and Efficient Neural Processing Engine, Powered by Novel Temporal-Carry-deferring MACs. In 2020 International Conference on ReConFigurable Computing and FPGAs (ReConFig).
- Moosavi-Dezfooli et al. (2017) Seyed-Mohsen Moosavi-Dezfooli et al. 2017. Universal adversarial perturbations. In Proceedings of the IEEE Conf. on computer vision and pattern recognition. 1765–1773.
- Neshatpour and et al. (2018) K. Neshatpour and et al. 2018. ICNN: An iterative implementation of convolutional neural networks to enable energy and computational complexity aware dynamic approximation. In 2018 Design, Automation & Test in Europe Conf. & Exhibition (DATE). IEEE, 551–556.
- Neshatpour and et al. (2019) K. Neshatpour and et al. 2019. Exploiting Energy-Accuracy Trade-off through Contextual Awareness in Multi-Stage Convolutional Neural Networks. In 20th International Symposium on Quality Electronic Design (ISQED). IEEE, 265–270.
- Papernot et al. (2015) Nicolas Papernot et al. 2015. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks. 2016 IEEE Symposium on Security and Privacy (SP) (2015), 582–597.
- Papernot et al. (2016a) Nicolas Papernot et al. 2016a. Practical Black-Box Attacks against Machine Learning. In ASIA CCS ’17.
- Papernot et al. (2016b) Nicolas Papernot, Patrick McDaniel, et al. 2016b. The limitations of deep learning in adversarial settings. In 2016 IEEE European Symposium on Security and Privacy (EuroS&P). IEEE, 372–387.
- Paszke et al. (2019) Adam Paszke, Sam Gross, et al. 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Advances in Neural Information Processing Systems 32, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (Eds.). Curran Associates, Inc., 8024–8035.
- Rauber et al. (2017) Jonas Rauber et al. 2017. Foolbox: A python toolbox to benchmark the robustness of machine learning models. arXiv preprint arXiv:1707.04131 (2017).
- Samangouei et al. (2018) Pouya Samangouei et al. 2018. Defense-gan: Protecting classifiers against adversarial attacks using generative models. arXiv preprint arXiv:1805.06605 (2018).
- Sayadi and et al. (2017) H. Sayadi and et al. 2017. Machine Learning-Based Approaches for Energy-Efficiency Prediction and Scheduling in Composite Cores Architectures. In 2017 IEEE International Conf. on Computer Design (ICCD). 129–136.
- Shaham et al. (2018) Uri Shaham et al. 2018. Understanding adversarial training: Increasing local stability of supervised models through robust optimization. Neurocomputing 307 (2018).
- Song et al. (2017) Yang Song, Taesup Kim, Nowozin, et al. 2017. Pixeldefend: Leveraging generative models to understand and defend against adversarial examples. arXiv preprint arXiv:1710.10766 (2017).
- Szegedy et al. (2013) Christian Szegedy et al. 2013. Intriguing properties of neural networks, In International Conf. on Learning Representations. CoRR abs/1312.6199.
- Tramèr and Boneh (2019) Florian Tramèr and Dan Boneh. 2019. Adversarial Training and Robustness for Multiple Perturbations. ArXiv abs/1904.13000 (2019).
- Wang et al. (2019) Yisen Wang et al. 2019. On the Convergence and Robustness of Adversarial Training. In ICML.
- Yang and et al. (2018) Z. Yang and et al. 2018. Characterizing Audio Adversarial Examples Using Temporal Dependency. ArXiv abs/1809.10875 (2018).