The wide adoption of machine learning and deep learning algorithms in many critical applications introduces strong incentives for motivated adversaries to manipulate the results and models generated by these algorithms. For instance, attackers can deliberately influence the training dataset to manipulate the results of a predictive model (in poisoning attacks (Perdisci et al., 2006; Newsome et al., 2006; Nelson et al., 2008; Rubinstein et al., 2009; Biggio et al., 2012; Newell et al., 2014; Xiao et al., 2015; Koh and Liang, 2017)), or cause misclassification of new data in the testing phase (in evasion attacks (Biggio et al., 2013; Szegedy et al., 2014; Goodfellow et al., 2015; Šrndic and Laskov, 2014; Xu et al., 2016; Dang et al., 2017; Papernot et al., 2017; Carlini and Wagner, 2017b)).
Creating poisoning and evasion attack data points is not a trivial task, particularly when many online services avoid disclosing information about the machine learning algorithm in use. As a result, attackers are forced to craft their attacks against a surrogate model instead of the real model used by the service, hoping that the attack will be effective on the real model. The transferability property of an attack is satisfied when an attack developed for a particular machine learning model (i.e., a surrogate model) is also effective against another model (i.e., the target model). Studying attack transferability has gained interest in recent years due to the deployment of cyber-attack detection services based on machine learning.
In this paper we focus on understanding what makes attacks transferable. In particular, we focus on evasion and poisoning attacks that are crafted with gradient-based optimization techniques, a popular mechanism used to create attack data points. The first gradient-based attacks against machine learning were demonstrated by Biggio et al. in (Biggio et al., 2013) for test-time evasion attacks, and in (Biggio et al., 2012) for training-time poisoning attacks. Then, Szegedy et al. (Szegedy et al., 2014), while aiming to interpret the decisions of deep neural networks, independently discovered the phenomenon of adversarial examples against deep neural networks, i.e., that deep nets are also vulnerable to small changes in the input data crafted with a gradient-based attack algorithm (see, e.g., (Biggio and Roli, 2018) for a historical perspective on the evolution of attacks against machine learning). Attack data points are commonly referred to as adversarial examples in the case of test-time evasion attacks, and as poisoning points in the case of training-time poisoning attacks, although it is not uncommon to use adversarial examples as a general term for both types of attack.
While the transferability of evasion attacks has been widely investigated (Biggio et al., 2013; Szegedy et al., 2014; Goodfellow et al., 2015; Papernot et al., 2016b; Papernot et al., 2017; Moosavi-Dezfooli et al., 2017; Dong et al., 2018; Liu et al., 2017; Tramèr et al., 2017; Wu et al., 2018), the transferability of poisoning attacks is still largely unexplored, the work in (Muñoz-González et al., 2017) being a notable exception. In spite of these efforts, little is understood about which factors make some attacks more transferable than others, both for evasion and poisoning attacks.
In this work, we present the first comprehensive evaluation of the transferability of both poisoning and evasion attacks. We consider a wide range of classifiers, including deep neural networks, support vector machines with both linear and RBF kernels, logistic regression, ridge regression, k-nearest neighbors, and random forests. Our evaluation relies on a formal definition of transferability for evasion and poisoning attacks and on an approximate relaxation of this definition, giving empirical metrics connected to (the size of) input gradients. Our experimental analysis provides new insights into the mechanics of attack transferability. Specifically, we observe that:
transferability depends strongly on regularization hyperparameters (and the size of input gradients), both for evasion and poisoning attacks; a direct implication of this observation is that it explains why some attacks transfer successfully while others do not; and
imperceptible poisoning and evasion samples occur when classifiers have large input gradients, which in turn arise from high-dimensional problems and/or low regularization / smoothness of the decision function. This clearly hinders transferability across models with different levels of regularization and smoothness of the decision function.
We discuss background on threat modeling against machine learning and how to craft evasion and poisoning attacks in Sect. 2. We then formally define transferability for both evasion and poisoning attacks, and show its approximate connection with the input gradients used to craft the corresponding attack samples (Sect. 3). Experiments are reported in Sect. 4, highlighting connections among regularization hyperparameters, the size of input gradients, and transferability of attacks, on two case studies involving the classification of handwritten digits and Android malware. We discuss related work in Sect. 5. While our analysis is restricted to two-class classification problems, we do believe that our conclusions can be easily generalized to multiclass settings, as discussed in the concluding remarks of this work (Sect. 6).
2. Threat Model and Attacks
In this paper, we consider a range of adversarial models against machine learning systems. Attackers are defined by: (i) their goal or objective in attacking the system; (ii) their knowledge of the system; and (iii) their capabilities in influencing the system through manipulation of the input data. Before we detail each of these, we introduce our notation, and point out that the threat model and attacks considered in this work are suited to two-class classification algorithms. We nevertheless refer the reader to (Biggio and Roli, 2018; Muñoz-González et al., 2017; Melis et al., 2017) for the corresponding extensions to multiclass settings.
Notation. In the following, we denote the sample and label spaces with $\mathcal{X}$ and $\mathcal{Y}$, respectively, and the training data with $\mathcal{D} = (x_i, y_i)_{i=1}^{n}$, where $n$ is the training set size. We use $L(\mathcal{D}, w)$ to denote the loss incurred by the classifier $f$ (parameterized by $w$) on $\mathcal{D}$. Typically, this is computed by averaging a loss function $\ell(y, x, w)$ computed on each data point, i.e., $L(\mathcal{D}, w) = \frac{1}{n} \sum_{i=1}^{n} \ell(y_i, x_i, w)$. We assume that the classification function $f$ is learned by minimizing an objective function $\mathcal{L}(\mathcal{D}, w)$ on the training data. Typically, this is an estimate of the generalization error, obtained by summing the empirical loss $L$ on the training data and a regularization term.
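As a concrete instance of this notation, the following NumPy sketch implements $\ell$, $L(\mathcal{D}, w)$ and the training objective $\mathcal{L}(\mathcal{D}, w)$; the logistic loss and the L2 regularizer are illustrative choices, not the only ones covered by the formulation:

```python
import numpy as np

def sample_loss(y, x, w):
    # ell(y, x, w): logistic loss on a single point (illustrative choice).
    return np.log1p(np.exp(-y * np.dot(w, x)))

def empirical_loss(X, y, w):
    # L(D, w): average of the per-sample losses over the training set.
    return np.mean([sample_loss(yi, xi, w) for xi, yi in zip(X, y)])

def objective(X, y, w, lam=0.1):
    # Training objective: empirical loss plus an L2 regularization term.
    return empirical_loss(X, y, w) + lam * np.dot(w, w)
```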
2.1. Attacker’s Goal
We define the attacker's goal based on the desired security violation. In particular, the attacker may aim to cause: an integrity violation, to evade detection without compromising normal system operation; an availability violation, to compromise the normal system functionalities available to legitimate users; or a privacy violation, to obtain private information about the system, its users or data by reverse-engineering the learning algorithm (Barreno et al., 2006; Barreno et al., 2010; Huang et al., 2011; Biggio et al., 2014c; Biggio and Roli, 2018; Muñoz-González et al., 2017).¹

¹Even though we do not consider privacy attacks in this work, we refer the reader to some examples of privacy attacks based on iteratively querying the target system. They include model-extraction attacks, aimed at stealing machine-learning models provided as a service; and model-inversion and hill-climbing attacks, aimed at stealing sensitive information like the face and fingerprint templates of users of biometric authentication systems (Biggio et al., 2014a; Fredrikson et al., 2015; Tramèr et al., 2016; Adler, 2005; Galbally et al., 2010; Martinez-Diaz et al., 2011).
2.2. Attacker’s Knowledge
We characterize the attacker's knowledge as a tuple $\theta$ in an abstract knowledge space consisting of four main dimensions, respectively representing knowledge of: (i) the training data $\mathcal{D}$; (ii) the feature set $\mathcal{X}$; (iii) the learning algorithm $f$, along with the objective function $\mathcal{L}$ minimized during training; and (iv) the parameters $w$ learned after training the model. This categorization enables the definition of many different kinds of attacks, ranging from white-box attacks with full knowledge of the target classifier to black-box attacks in which the attacker knows almost nothing about the target system. While we refer the reader to (Biggio and Roli, 2018) for a more detailed categorization of such attacks, including the definition of gray-box attack scenarios, in this paper we consider a simplified setting only involving white-box and black-box (transfer) attacks, as detailed below.
Perfect-Knowledge (PK) White-Box Attacks. We assume here that the attacker has full knowledge of the target classifier, i.e., $\theta_{\rm PK} = (\mathcal{D}, \mathcal{X}, f, w)$. This setting allows one to perform a worst-case evaluation of the security of machine-learning algorithms, providing empirical upper bounds on the performance degradation that may be incurred by the system under attack.
Limited-Knowledge (LK) Black-Box Attacks. We assume here that the feature representation $\mathcal{X}$ is known,² while the training data $\mathcal{D}$ and the type of classifier $f$ are not known to the attacker. We nevertheless assume that the attacker can collect a surrogate dataset $\hat{\mathcal{D}}$, ideally sampled from the same underlying data distribution as $\mathcal{D}$, and train a surrogate model $\hat{f}$ on such data to approximate the target function $f$ (potentially using feedback from $f$ to relabel $\hat{\mathcal{D}}$). Then, the attacker can craft the attacks against $\hat{f}$, and check whether they successfully transfer to the target classifier $f$. Denoting limited knowledge of a given component with the hat symbol, such black-box attacks can be written as $\theta_{\rm LK} = (\hat{\mathcal{D}}, \mathcal{X}, \hat{f}, \hat{w})$. They have been widely used to evaluate the transferability of attacks across learning algorithms, as first shown in (Biggio et al., 2013) and then in (Papernot et al., 2016b; Papernot et al., 2017). It is finally worth remarking that surrogate models have long been used in the field of black-box mathematical optimization to find optima of functions which are not differentiable or analytically tractable. In these cases, gradient information from a (differentiable) surrogate function that resembles the target can be used to speed up the optimization process.

²With the term feature representation, we do not mean the internal representations built by learning algorithms like kernel methods and deep networks, but rather the set of input features. For images, this means that we consider pixels as the input features, consistently with other recent work on black-box attacks against machine learning (Papernot et al., 2016b; Papernot et al., 2017).
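The black-box transfer setting described above can be sketched end-to-end with scikit-learn. The models, synthetic dataset and perturbation budget below are illustrative assumptions, not the experimental setup evaluated later in this paper:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Disjoint data for the target (D) and the attacker's surrogate (D_hat).
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tgt, y_tgt = X[:200], y[:200]        # trains the (unknown) target f
X_sur, y_sur = X[200:400], y[200:400]  # surrogate dataset D_hat
X_test, y_test = X[400:], y[400:]

target = SVC(kernel="rbf", gamma="scale").fit(X_tgt, y_tgt)
surrogate = LogisticRegression(max_iter=1000).fit(X_sur, y_sur)

# Craft the attack against the surrogate only: for a linear surrogate the
# input gradient of the decision function is its weight vector, so an
# eps-sized step against the true class is a one-shot transfer attack.
eps = 2.0
w = surrogate.coef_.ravel()
delta = eps * w / np.linalg.norm(w)
X_adv = X_test - (2 * y_test - 1)[:, None] * delta  # move against true label

acc_clean = target.score(X_test, y_test)
acc_adv = target.score(X_adv, y_test)  # accuracy of the target under transfer
```

Whether `acc_adv` drops below `acc_clean` is exactly the transferability question this paper studies; no query to the target is issued while crafting `X_adv`.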
2.3. Attacker’s Capability
This attack characteristic defines how the attacker can influence the system, and how data can be manipulated based on application-specific constraints. If the attacker can manipulate both training and test data, the attack is said to be causative. It is instead referred to as exploratory if the attacker can only manipulate test data. These scenarios are more commonly known, respectively, as poisoning (Biggio et al., 2012; Xiao et al., 2015; Mei and Zhu, 2015; Muñoz-González et al., 2017; Jagielski et al., 2018) and evasion attacks (Biggio et al., 2014c; Biggio et al., 2013; Biggio et al., 2014b; Szegedy et al., 2014; Goodfellow et al., 2015; Carlini and Wagner, 2017b).
Another aspect of the attacker's capability depends on the presence of application-specific constraints on data manipulation. For instance, to evade malware detection, malicious code has to be modified without compromising its intrusive functionality. This may be done against systems leveraging static code analysis by injecting instructions or code that will never be executed (Šrndic and Laskov, 2014; Biggio et al., 2013; Demontis et al., in press; Grosse et al., 2017). These constraints can generally be accounted for in the definition of the optimal attack strategy by assuming that the initial attack sample $x$ can only be modified according to a space of possible modifications $\Omega(x)$. In some cases, this space can also be mapped in terms of constraints on the feature values of the attack samples, e.g., by imposing that feature values corresponding to occurrences of some instructions in static malware detectors can only be incremented (Šrndic and Laskov, 2014; Biggio et al., 2013; Demontis et al., in press).
2.4. Gradient-based Attacks
Given the attacker's knowledge $\theta$ and an attack sample $x'$ along with its label $y$, the attacker's goal can be defined in terms of an objective function $\mathcal{A}(x', \theta)$ (e.g., a loss function) which measures how effective the attack sample is. The optimal attack strategy can thus be given as:

$$\max_{x' \in \Omega(x)} \; \mathcal{A}(x', \theta) \quad (1)$$
Note that, for the sake of clarity, we consider here the optimization of a single attack sample, but this formulation can easily account for multiple attack points, e.g., by iteratively optimizing one attack point at a time as in (Biggio and Roli, 2018; Xiao et al., 2015).
We show in Sects. 2.5–2.6 how this general formulation encompasses both evasion and poisoning attacks against supervised learning algorithms, even though it has also been used to attack clustering (Biggio et al., 2013, 2014, 2014) and feature selection algorithms (Xiao et al., 2015; Zhang et al., 2016).
Attack Algorithm. Before detailing the specific instances of the optimization problem given in Eq. (1) for evasion and poisoning attacks, we describe a standard gradient-ascent algorithm that can be used to solve the aforementioned problem in both cases. It is given as Algorithm 1. It iteratively updates the attack sample along the gradient of the objective function, ensuring that the resulting point remains within the feasible domain through a projection operator $\Pi$. The gradient step size $\eta$ is determined in each update step with a simple line-search method based on bisection, to reduce the number of iterations required to reach a local or global optimum (the latter, e.g., when the objective function is concave and the feasible domain is convex).
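The procedure can be sketched as follows in NumPy. The bisection shown is a deliberately simple variant of the line search described above, and the toy concave objective at the end is only a sanity check, not an attack objective from this paper:

```python
import numpy as np

def bisect_step(obj, x, g, eta_max=1.0, iters=10):
    # Crude bisection line search: hone in on a step size along g that
    # improves the objective, starting from the interval [0, eta_max].
    lo, hi = 0.0, eta_max
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if obj(x + mid * g) > obj(x + lo * g):
            lo = mid   # the midpoint improves the objective: move up
        else:
            hi = mid   # otherwise shrink the interval from above
    return lo

def projected_gradient_ascent(obj, grad, project, x0, n_iter=50):
    # Sketch of Algorithm 1: ascend obj along its gradient, projecting each
    # iterate back onto the feasible domain via the operator Pi (`project`).
    x = x0.copy()
    for _ in range(n_iter):
        g = grad(x)
        eta = bisect_step(obj, x, g)
        x = project(x + eta * g)
    return x

# Sanity check: maximize a concave quadratic inside the box [0, 1]^2.
t = np.array([2.0, -1.0])
obj = lambda x: -np.sum((x - t) ** 2)
grad = lambda x: -2.0 * (x - t)
project = lambda x: np.clip(x, 0.0, 1.0)
x_opt = projected_gradient_ascent(obj, grad, project, np.array([0.5, 0.5]))
# x_opt converges to the box-constrained maximizer [1, 0]
```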
Notably, gradient-based algorithms for the generation of evasion and poisoning attacks against machine learning were first proposed by Biggio et al. (Biggio et al., 2012; Biggio et al., 2013), and rediscovered independently by Szegedy et al. (Szegedy et al., 2014) and follow-up work (Goodfellow et al., 2015; Carlini and Wagner, 2017b), though only in the context of test-time evasion, under the name of adversarial examples.
2.5. Evasion Attacks
In evasion attacks, the attacker manipulates test samples to have them misclassified, i.e., to evade detection by a learning algorithm. Normally, this attack aims to favor intrusions without compromising normal system operation, and it is thus categorized as an integrity security violation. For white-box evasion, the optimization problem given in Eq. (1) can be rewritten as:

$$\max_{x'} \; \mathcal{A}(x', \theta) = \ell(y, x', w) \quad (2)$$
$$\text{s.t.} \quad \|x' - x\|_p \le \varepsilon \quad (3)$$
$$x_{\rm lb} \preceq x' \preceq x_{\rm ub} \quad (4)$$
where $\|\cdot\|_p$ is the $\ell_p$ norm, and we assume that the classifier parameters $w$ are known. For the black-box case, it suffices to use the parameters $\hat{w}$ of the surrogate classifier. The loss function considered in this work is simply $\ell(y, x', w) = -y f(x')$, as in (Biggio et al., 2013). We refer the reader to (Biggio and Roli, 2018; Melis et al., 2017) for the extension of evasion attacks to the multiclass setting (where the attacker may additionally specify which class the attack sample should be assigned to, among the available ones, by properly defining the objective function).
The manipulation constraints are given in terms of: (i) a distance constraint $\|x' - x\|_p \le \varepsilon$, which sets a bound on the maximum input perturbation between $x$ (i.e., the input sample) and the corresponding modified adversarial example $x'$; and (ii) a box constraint $x_{\rm lb} \preceq x' \preceq x_{\rm ub}$ (where $u \preceq v$ means that each element of $u$ has to be not greater than the corresponding element in $v$), which bounds the values of the attack sample $x'$.
For images, the former constraint is used to implement either dense or sparse evasion attacks (Demontis et al., 2016; Russu et al., 2016; Melis et al., 2017). Normally, the $\ell_2$ and the $\ell_\infty$ distances between pixel values are used to cause an indistinguishable image blurring effect (by slightly manipulating all pixels). Conversely, the $\ell_1$ distance corresponds to a sparse attack in which only few pixels are significantly manipulated, yielding a salt-and-pepper noise effect on the image (Demontis et al., 2016; Russu et al., 2016). In the image domain, the box constraint can be used to bound each pixel value between 0 and 255, or to ensure manipulation of only a specific region of the image. For example, if some pixels should not be manipulated, one can set the corresponding values of $x_{\rm lb}$ and $x_{\rm ub}$ equal to those of $x$. This is of interest to create real-world adversarial examples, as it avoids the manipulation of background pixels which do not belong to the object of interest (Melis et al., 2017; Sharif et al., 2016). Similar constraints have also been applied for evading learning-based malware detectors (Biggio et al., 2013; Demontis et al., 2016; Russu et al., 2016; Šrndic and Laskov, 2014; Demontis et al., in press).
Maximum-confidence vs. minimum-distance evasion. The formulation of evasion attacks given in Eqs. (2)–(4), as in (Biggio et al., 2013), aims to produce adversarial examples that are misclassified with maximum confidence by the classifier, within the given space of feasible modifications. This is substantially different from crafting minimum-distance adversarial examples, as formulated in (Szegedy et al., 2014) and in follow-up work (e.g., (Papernot et al., 2016b)). This difference is conceptually depicted in Fig. 1. In particular, in terms of transferability, it is now widely acknowledged that higher-confidence attacks have better chances of successfully transferring to the target classifier (and even of bypassing countermeasures based on gradient masking) (Carlini and Wagner, 2017b; Athalye et al., 2018; Dong et al., 2018). For this reason, in this work we consider evasion attacks that aim to craft adversarial examples misclassified with maximum confidence.
Initialization and Smoothing. Two other factors are known to improve the transferability of evasion attacks, as well as their effectiveness in the white-box setting. The first consists of running the attack from different initialization points, to mitigate the problem of getting stuck in poor local optima (i.e., points misclassified with lower confidence) (Biggio et al., 2013; Zhang et al., 2016; Dong et al., 2018). The second is known in the mathematical-optimization literature as smoothing, and consists of averaging gradients near the point of interest to reduce the impact of noise (Dong et al., 2018; Wu et al., 2018). This may be very helpful when the objective function changes very quickly around the point of interest, and gradients at specific locations are thus unreliable and noisy, hindering the optimization process.
In this work we do not consider smoothing (although it may easily be accounted for in our algorithm), but we do consider additional initialization points when running our evasion attacks, to improve their effectiveness against nonlinear algorithms. In addition to starting the gradient ascent from the initial point $x$, we also consider starting it from the projection onto the feasible domain of a randomly-chosen point of the opposite class. This helps find better local optima, through the identification of more promising paths towards evasion, as also discussed in (Biggio et al., 2013; Zhang et al., 2016; Dong et al., 2018; Wu et al., 2018).
2.6. Poisoning Attacks
Poisoning attacks consist of manipulating training data (mainly by injecting adversarial points into the training set) to either favor intrusions without affecting normal system operation, or to purposely compromise normal system operation to cause a denial of service. The former are referred to as poisoning integrity attacks, while the latter are known as poisoning availability attacks (Biggio and Roli, 2018; Xiao et al., 2015). In this work we focus on the latter, as their transferability properties have not yet been widely investigated (Biggio and Roli, 2018; Muñoz-González et al., 2017), in contrast to those exhibited by recent backdoor and trojaning attacks (Chen et al., 2017; Gu et al., 2017), which belong to the category of poisoning integrity attacks (Biggio and Roli, 2018). Nevertheless, (i) crafting transferable poisoning availability attacks is much more challenging than crafting transferable poisoning integrity attacks, as the latter have a much more modest goal; and (ii) the following formulation can also be used to craft poisoning integrity attacks, as we will detail in the remainder of this section.
As for the evasion case, we formulate poisoning in a white-box setting, given that the extension to black-box attacks is immediate through the use of surrogate learners. Poisoning is formulated as a bilevel optimization problem in which the outer optimization maximizes the attacker's objective (typically, a loss function computed on untainted data), while the inner optimization amounts to learning the classifier on the poisoned training data (Biggio et al., 2012; Xiao et al., 2015; Mei and Zhu, 2015). This can be made explicit by rewriting Eq. (1) as:

$$\max_{x_c} \; \mathcal{A}(x_c, \theta) = L(\mathcal{D}_{\rm val}, \hat{w}) \quad (5)$$
$$\text{s.t.} \quad \hat{w} \in \arg\min_{w} \; \mathcal{L}(\mathcal{D}_{\rm tr} \cup \{(x_c, y_c)\}, w) \quad (6)$$

where $\mathcal{D}_{\rm tr}$ and $\mathcal{D}_{\rm val}$ are two data sets available to the attacker. The former, along with the poisoning point $(x_c, y_c)$, is used to train the learner on poisoned data, while the latter is used to evaluate its performance on untainted data, through the loss function $L(\mathcal{D}_{\rm val}, \hat{w})$. Notably, the objective function $\mathcal{A}$ implicitly depends on $x_c$ through the parameters $\hat{w}$ of the poisoned classifier.
In poisoning availability attacks, the untainted validation set contains a set of points representative of the test data, and the attacker aims to have as many of them as possible misclassified. In the integrity case, this set may just contain a few well-crafted intrusive samples that the attacker aims to have misclassified at test time. Accordingly, while both attacks share the same formulation, poisoning integrity attacks are much easier to craft.
Although the given formulation considers a single attack point, multiple-point poisoning attacks can be staged by solving the aforementioned problem iteratively, optimizing one attack point at a time (Xiao et al., 2015; Muñoz-González et al., 2017). While poisoning attacks do not typically have restrictions on the manipulation of the poisoning points, the attacker’s capability is limited by assuming that the attacker can inject only a small fraction of poisoning points into the training set.
Poisoning points can be optimized via gradient-ascent procedures, such as that given in Algorithm 1. Provided that the attacker's objective $\mathcal{A}$ is differentiable w.r.t. $w$ and $x_c$, the required gradient can be computed using the chain rule (Biggio et al., 2012; Xiao et al., 2015; Muñoz-González et al., 2017; Biggio and Roli, 2018; Mei and Zhu, 2015):

$$\nabla_{x_c} \mathcal{A} = \nabla_{x_c} L + \frac{\partial w}{\partial x_c}^{\top} \nabla_{w} L \quad (7)$$

The term $\frac{\partial w}{\partial x_c}$ captures the implicit dependency of the parameters $w$ on the poisoning point $x_c$. Under some regularity conditions, this derivative can be computed by replacing the inner optimization problem with its stationarity (Karush-Kuhn-Tucker, KKT) conditions, i.e., with its implicit equation $\nabla_{w} \mathcal{L} = 0$. By differentiating this expression w.r.t. the poisoning point $x_c$, one yields:

$$\nabla_{x_c} \nabla_{w} \mathcal{L} + \frac{\partial w}{\partial x_c}^{\top} \nabla^{2}_{w} \mathcal{L} = 0 \quad (8)$$

Solving for $\frac{\partial w}{\partial x_c}$, we obtain $\frac{\partial w}{\partial x_c}^{\top} = -\nabla_{x_c} \nabla_{w} \mathcal{L} \, (\nabla^{2}_{w} \mathcal{L})^{-1}$, which can be substituted in Eq. (7) to obtain the required gradient:

$$\nabla_{x_c} \mathcal{A} = \nabla_{x_c} L - \nabla_{x_c} \nabla_{w} \mathcal{L} \, (\nabla^{2}_{w} \mathcal{L})^{-1} \nabla_{w} L \quad (9)$$
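This KKT-based implicit gradient can be verified numerically whenever the inner problem has a closed-form solution. The sketch below uses ridge regression as an illustrative (and admittedly simpler) learner than the classifiers considered in this paper: it computes the implicit gradient of the validation loss w.r.t. the poisoning point and checks it against finite differences:

```python
import numpy as np

rng = np.random.default_rng(0)
d, lam = 3, 0.5
Xtr, ytr = rng.normal(size=(20, d)), rng.normal(size=20)
Xval, yval = rng.normal(size=(10, d)), rng.normal(size=10)
xc, yc = rng.normal(size=d), 1.0   # poisoning point and its label

def train(xc):
    # Inner problem: ridge regression on D_tr plus the poisoning point,
    # minimizing ||Xw - y||^2 + (xc.w - yc)^2 + lam ||w||^2 (closed form).
    H = Xtr.T @ Xtr + np.outer(xc, xc) + lam * np.eye(d)
    return np.linalg.solve(H, Xtr.T @ ytr + yc * xc)

def val_loss(w):
    # Outer objective: squared loss on the untainted validation set.
    r = Xval @ w - yval
    return r @ r

w = train(xc)
# Hessian of the inner objective w.r.t. w, and cross-derivative w.r.t. xc.
H = 2 * (Xtr.T @ Xtr + np.outer(xc, xc) + lam * np.eye(d))
M = 2 * ((xc @ w - yc) * np.eye(d) + np.outer(xc, w))
g_val = 2 * Xval.T @ (Xval @ w - yval)      # gradient of outer loss in w
grad_implicit = -M.T @ np.linalg.solve(H, g_val)

# Finite-difference check of the same gradient.
eps = 1e-5
grad_fd = np.array([
    (val_loss(train(xc + eps * e)) - val_loss(train(xc - eps * e))) / (2 * eps)
    for e in np.eye(d)
])
# grad_implicit and grad_fd agree to finite-difference accuracy
```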
While we refer the reader to (Muñoz-González et al., 2017) for a more detailed derivation of the aforementioned gradient, we report here its compact expression for SVM poisoning, with $\mathcal{L}$ corresponding to the dual SVM learning problem, and $L$ to the hinge loss (in the outer optimization):

We use $c$, $s$ and $v$ here to respectively index the attack point, the support vectors, and the validation points with positive hinge loss (corresponding to a non-null derivative of the hinge loss). The coefficient $\alpha_c$ is the dual variable assigned to the poisoning point by the learning algorithm, and the kernel matrices in Eq. (10) contain the kernel values computed between the corresponding indexed sets of points. We refer the reader to (Biggio et al., 2012) for further details on the derivation of poisoning attacks against SVMs. Poisoning attacks targeting other classifiers can be derived similarly (Xiao et al., 2015; Muñoz-González et al., 2017; Koh and Liang, 2017).
3. Transferability, Input Gradients and Regularization
We discuss here the main contribution of this work, which highlights an intriguing connection among transferability of both evasion and poisoning attacks, input gradients and regularization.
We start by formally defining the transferability of an attack point as the loss attained by the target classifier (parameterized by $w$) under the influence of the given attack point $x + \hat{\delta}$:

$$T(x) = \ell(y, x + \hat{\delta}, w) \quad (11)$$

where, for each given point $x$, the adversarial perturbation $\hat{\delta}$ is crafted against the surrogate classifier (parameterized by $\hat{w}$):

$$\hat{\delta} \in \arg\max_{\|\delta\|_p \le \varepsilon} \; \ell(y, x + \delta, \hat{w}) \quad (12)$$

and the $\ell_p$ norm of the perturbation is upper bounded by $\varepsilon$. This is consistent with the $\ell_p$-norm constraint used to craft evasion attacks. Although poisoning attacks do not necessarily require this specific constraint, they are included in this formulation if we consider $x$ as the initial poisoning point (with its label flipped) prior to running the gradient-ascent attack algorithm.
The given definition of transferability, suited to both evasion and poisoning attacks, can be simplified through a linear approximation of the loss function, which reasonably holds for sufficiently small input perturbations:

$$\ell(y, x + \hat{\delta}, w) \approx \ell(y, x, w) + \hat{\delta}^{\top} \nabla_{x} \ell(y, x, w) \quad (13)$$

Rewriting Eq. (12) using the same linear approximation, one yields the maximization of an inner product over an $\varepsilon$-sized ball:

$$\hat{\delta} \in \arg\max_{\|\delta\|_p \le \varepsilon} \; \delta^{\top} \nabla_{x} \ell(y, x, \hat{w}) \quad (14)$$

whose optimal value is $\varepsilon \, \|\nabla_{x} \ell(y, x, \hat{w})\|_q$, where $\|\cdot\|_q$ is the dual norm of $\|\cdot\|_p$. It is not difficult to see that the above problem is maximized, for $p = 2$, by $\hat{\delta} = \varepsilon \, \nabla_{x} \ell(y, x, \hat{w}) / \|\nabla_{x} \ell(y, x, \hat{w})\|_2$; for $p = \infty$, by $\hat{\delta} = \varepsilon \, \mathrm{sign}(\nabla_{x} \ell(y, x, \hat{w}))$; and for $p = 1$, by setting the values of $\hat{\delta}$ that correspond to the maximum absolute values of the gradient to $\varepsilon$ times their sign, and to zero otherwise. Substituting the optimal value of $\hat{\delta}$ into Eq. (13), we can compute the increase of the loss function under a transfer attack in closed form. For example, for $p = 2$, it is given as:

$$\ell(y, x + \hat{\delta}, w) - \ell(y, x, w) \approx \varepsilon \, \frac{\nabla_{x} \ell(y, x, w)^{\top} \nabla_{x} \ell(y, x, \hat{w})}{\|\nabla_{x} \ell(y, x, \hat{w})\|_2} \; \le \; \varepsilon \, \|\nabla_{x} \ell(y, x, w)\|_2 \quad (15)$$

where the upper bound is obtained when the surrogate classifier equals the target (white-box attacks), and similar results hold for $p = 1$ and $p = \infty$ (using the corresponding dual norm of the target gradient in the right-hand side).
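The closed-form maximizers just described can be transcribed directly. The helper below is a minimal NumPy sketch (for $p = 1$, ties among maximal-gradient entries are broken by taking a single entry, an illustrative simplification):

```python
import numpy as np

def optimal_perturbation(g, eps, p):
    """Maximizer of delta @ g over the eps-ball of the l_p norm."""
    if p == 2:
        return eps * g / np.linalg.norm(g)
    if p == np.inf:
        return eps * np.sign(g)
    if p == 1:
        # Put the whole eps budget on the largest-magnitude gradient entry.
        delta = np.zeros_like(g)
        k = np.argmax(np.abs(g))
        delta[k] = eps * np.sign(g[k])
        return delta
    raise ValueError(p)
```

In each case the attained inner product equals $\varepsilon$ times the dual norm of the gradient, as stated above.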
Intriguing Connections. The above finding reveals three interesting connections among transferability of attacks, regularization and size of input gradients, detailed below.
(1) Transferability depends on the size of the gradient of the target classifier, regardless of the surrogate: the larger this gradient is, the larger the attack impact may be. Note that this is a general result related to the adversarial vulnerability of classifiers, not only to transferability. Adversarial vulnerability has already been shown to depend on the size of the input gradients (Simon-Gabriel et al., 2018), although this has only been discussed for evasion attacks and in relation to the increase of the dimensionality of the input space. Here, we confirm this result also for poisoning attacks and, more interestingly, we highlight it in the context of transferability.
(2) The size of input gradients also depends on the level of regularization. Classifiers which are highly regularized tend to have smaller input gradients (i.e., they learn smoother functions that are more robust to attacks), and vice-versa. Notably, this holds for both evasion and poisoning attacks (e.g., the poisoning gradient in Eq. 10 is proportional to the dual variable $\alpha_c$, which is larger when the SVM is weakly regularized). This result also has another interesting consequence: if a classifier has large input gradients (e.g., due to the high dimensionality of the input space and a low level of regularization), it suffices to apply only tiny, imperceptible perturbations for an attack to succeed. As we will see in the experimental section, this explains why adversarial examples against deep neural networks often only need to be slightly perturbed to mislead detection, while modifications become more evident when attacking strongly-regularized classifiers in low dimensions.
(3) If we compare the increase in the loss function in the black-box case against that corresponding to white-box attacks (i.e., the left-hand side and the right-hand side of the closed-form expression derived above), we find that the relative increase in loss is given by the cosine of the angle between the gradient of the surrogate and that of the target classifier. This is a very interesting finding which explains why this metric, empirically used in previous work (Liu et al., 2017), is sound and can well characterize the transferability of attacks. It is worth remarking that, in the case of noisy gradients (i.e., non-sufficiently-smooth functions), one may use smoothing and gradient averaging near the point of interest to overcome potential issues with this measure (Wu et al., 2018; Dong et al., 2018).
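Under the linear approximation, the black-box/white-box ratio reduces exactly to this cosine. A few lines of NumPy make the identity concrete (the gradient vectors below are arbitrary illustrative values):

```python
import numpy as np

def cosine_alignment(gs, gt):
    # Cosine of the angle between surrogate and target input gradients.
    return gs @ gt / (np.linalg.norm(gs) * np.linalg.norm(gt))

gs = np.array([1.0, 2.0, -1.0])   # gradient of the surrogate's loss
gt = np.array([0.5, 1.5, -0.5])   # gradient of the target's loss

eps = 0.1
delta = eps * gs / np.linalg.norm(gs)   # optimal l2 attack on the surrogate
black_box = delta @ gt                  # linearized loss increase on target
white_box = eps * np.linalg.norm(gt)    # upper bound (surrogate == target)
ratio = black_box / white_box
# ratio coincides with cosine_alignment(gs, gt)
```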
4. Experimental Analysis
In this section, we evaluate the transferability of evasion and poisoning attacks in the white-box and black-box attack settings. We also validate whether the proposed transferability measure works well in predicting transferability between pairs of classifiers.
4.1. Transferability of Evasion Attacks
We start by reporting our experiments on evasion attacks. We consider here two distinct case studies, involving handwritten digit recognition and Android malware detection as described below.
The MNIST89 data includes the MNIST handwritten digits from classes 8 and 9. Each digit image consists of 784 pixels ranging from 0 to 255, normalized in [0, 1] by dividing the pixel values by 255. We run independent repetitions to average the results on different training-test splits. In each repetition, we run white-box and black-box attacks, using disjoint sets of samples to train the target and the surrogate classifiers (without even relabeling the surrogate data with labels predicted by the target classifier; i.e., we do not perform any query on the target). We modified the test digits of both classes using an $\ell_2$-constrained attack in this case, for increasing values of the maximum admissible distortion $\varepsilon$.
We consider the following classifiers from scikit-learn (Pedregosa et al., 2011): (i) SVMs with the linear kernel (SVM); (ii) SVMs with the RBF kernel (SVM RBF); (iii) logistic classifiers (LOGISTIC); (iv) ridge regressors (RIDGE); and (v) fully-connected neural networks with two hidden layers and the hyperbolic tangent as activation function (NN). We additionally consider as a target classifier a Random Forest (RF) consisting of an ensemble of base decision trees. These configurations are chosen to evaluate the robustness of classifiers that exhibit similar test accuracies but different levels of regularization.³

³Recall that the level of regularization increases as the regularization coefficient increases, and as $C$ (in SVMs and logistic classifiers) decreases.
The results for white-box evasion attacks are reported in Fig. 2. We report the complete security evaluation curves, showing the mean test error (over the repetitions) against an increasing maximum admissible distortion $\varepsilon$. The mean test error values are further averaged over all the considered values of $\varepsilon$. This value is referred to as err in the legend, and we will use it as a synthetic measure to compactly denote the success of the attack. In other words, the higher err is, the higher the classification error (or evasion rate) is.
The results clearly show that strongly-regularized classifiers are less vulnerable against evasion attacks. The underlying reason is that classifier vulnerability depends on the size of the input gradients, which is in turn reduced when classification functions are smoother and more regularized. This can be seen in Fig. 3 by comparing the value of err for both white- and black-box attacks with the size of the input gradients of target classifiers. Note how this behavior is consistent within each family of classifiers, as the regularization hyperparameter changes.
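This link between regularization and input-gradient size is easy to reproduce for linear models, where the input gradient of the decision function is the weight vector itself. The scikit-learn sketch below is illustrative only (synthetic data; the $C$ values are assumptions, not the grid used in our experiments):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# For a linear model the input gradient of the decision function is the
# weight vector w, so ||w||_2 directly measures sensitivity to input changes.
grad_norms = {}
for C in (0.01, 1.0, 100.0):   # smaller C = stronger regularization
    clf = LogisticRegression(C=C, max_iter=1000).fit(X, y)
    grad_norms[C] = np.linalg.norm(clf.coef_)
# grad_norms grows as regularization weakens (C increases)
```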
Interestingly, nonlinear classifiers tend to be in general less vulnerable than linear ones in this case. Moreover, note that strongly-regularized linear and nonlinear classifiers provide better surrogate models on average. The reason is that they learn smoother and stabler functions that are capable of better approximating the target function (even when the latter is weakly regularized, and thus more prone to overfit a specific training set). Comparing the results of black-box evasion attack transferability in Fig. 3 with the gradient alignment between surrogate and target classifiers reported in Fig. 4, it is clear that the latter measure provides a good indication of which classifier can be a better surrogate for a given target classifier. It is worth remarking that this measure is extremely fast to evaluate, as it does not require simulating any attack. Nevertheless, it is only a relative measure of attack transferability, as the final impact of the attack depends on how strongly the target classifier is regularized, i.e., on the size of the input gradients of the target classifier.
We finally report some of the manipulated digits in Fig. 5. These images highlight that imperceptible modifications suffice to evade weakly-regularized classifiers, while larger modifications are required to evade strongly-regularized ones. The reason is that weakly-regularized classifiers exhibit large input gradients, so very small input changes cause large variations in the output function, whereas strongly-regularized classifiers are smoother and thus much less sensitive to small input changes.
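This sensitivity argument can be made explicit with a first-order approximation: if the loss is locally linear, the smallest L2 perturbation that increases it by a fixed amount delta is delta divided by the input-gradient norm, obtained by moving along the gradient direction. A toy numerical check (with hypothetical gradient values) follows:

```python
import numpy as np

def first_order_distortion(grad, delta_loss):
    """Smallest L2 perturbation that increases a locally-linear loss by
    delta_loss: delta_loss / ||grad||_2, moving along the gradient."""
    return delta_loss / np.linalg.norm(np.asarray(grad, dtype=float))

# Weakly-regularized model: large input gradient, tiny distortion suffices.
weak = first_order_distortion([6.0, 8.0], delta_loss=1.0)    # ||grad|| = 10
# Strongly-regularized model: small input gradient, larger distortion needed.
strong = first_order_distortion([0.6, 0.8], delta_loss=1.0)  # ||grad|| = 1
print(round(weak, 6), round(strong, 6))  # 0.1 1.0
```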
The Drebin data (Arp et al., 2014) consists of around 120,000 legitimate and around 5,000 malicious Android applications, labeled using the VirusTotal service. A sample is labeled as malicious (positive) if it is classified as such by at least five out of ten anti-virus scanners, and as legitimate (negative) otherwise. The structure and source code of each application are encoded as a sparse feature vector of around one million binary features denoting the presence or absence of permissions, suspicious URLs and other relevant information that can be extracted by statically analyzing Android applications. We adopt the same experimental setting as in the previous case for learning the surrogate and target classifiers and for evaluating them on held-out test samples.
We perform feature selection, retaining the features that maximize information gain, i.e., the mutual information between each feature and the class label, estimated on the training data. While this feature selection process does not significantly affect classification performance (the detection rate is only slightly reduced, on average, at the same false alarm rate), it drastically reduces the computational complexity of the corresponding classification algorithms.
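For binary features such as Drebin's, the information-gain criterion coincides with the mutual information between each feature and the class label. A self-contained estimator for a single binary feature (an illustrative helper, not our actual implementation) can be sketched as follows:

```python
import numpy as np

def information_gain(x, y):
    """Information gain I(X;Y) = H(Y) - H(Y|X) for a binary feature x
    and binary label y, estimated from empirical counts."""
    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    x, y = np.asarray(x), np.asarray(y)
    h_y = entropy(np.bincount(y) / len(y))
    h_y_given_x = 0.0
    for v in (0, 1):
        mask = x == v
        if mask.any():
            cond = np.bincount(y[mask], minlength=2) / mask.sum()
            h_y_given_x += mask.mean() * entropy(cond)
    return h_y - h_y_given_x

y = np.array([0, 0, 1, 1])
perfect = np.array([0, 0, 1, 1])   # feature identical to the label
useless = np.array([0, 1, 0, 1])   # feature independent of the label
print(information_gain(perfect, y))  # 1.0
print(information_gain(useless, y))  # 0.0
```

Ranking all features by this score and keeping the top ones yields the reduced feature set.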
In each repetition, we run white-box and black-box evasion attacks on distinct malware samples (selected from the test data) against an increasing number of modified features per malware sample, enforced by constraining the number of features that can be changed. As in previous work, we further restrict the attacker to only inject features into each malware sample, to avoid compromising its intrusive functionality (Demontis et al., in press; Biggio et al., 2013).
To evaluate the impact of this evasion attack, we measure the evasion rate (i.e., the fraction of malware samples misclassified as legitimate) at a fixed false alarm rate (i.e., when only a small, fixed fraction of the legitimate samples is misclassified as malware). As in the previous experiment, we report the complete security evaluation curve for the white-box attack, whereas we report only the value of err for the black-box case. The results, reported in Figs. 6–8, confirm the main findings of the previous experiments, which can be summarized as follows:
strongly-regularized (linear and nonlinear) classifiers are less vulnerable to evasion;
they often provide better surrogate functions than their weakly-regularized counterparts; and
the gradient alignment between surrogate and target classifiers provides a reliable metric to identify good surrogate models (depending on the specific target classifier).
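The evasion rate at a fixed false alarm rate used above can be computed by first choosing the decision threshold on the legitimate (negative) scores, and then scoring the manipulated malware; the sketch below uses hypothetical classifier scores and assumes that higher scores mean "more malicious":

```python
import numpy as np

def threshold_at_fpr(legit_scores, target_fpr):
    """Pick the decision threshold so that the given fraction of legitimate
    samples scores above it (i.e., is misclassified as malware)."""
    return np.quantile(np.asarray(legit_scores, dtype=float), 1.0 - target_fpr)

def evasion_rate(malware_scores, threshold):
    """Fraction of (manipulated) malware samples scored below the threshold,
    i.e., misclassified as legitimate."""
    return float(np.mean(np.asarray(malware_scores) < threshold))

legit = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
tau = threshold_at_fpr(legit, target_fpr=0.1)
attacked_malware = np.array([0.5, 0.95, 0.8, 0.99])
print(evasion_rate(attacked_malware, tau))  # 0.5
```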
4.2. Transferability of Poisoning Attacks
For poisoning attacks, we report experiments on the MNIST89 dataset.
We consider the following surrogate classifiers: linear and RBF SVMs (with different regularization parameters). As target classifiers, in addition to the aforementioned SVMs, we consider the logistic classifier (LOGISTIC), k-nearest neighbors with l1 (kNN-l1) and l2 (kNN-l2) distances, a random forest (RF) of base decision trees, and the Convolutional Neural Network (CNN) used on MNIST data by Carlini and Wagner (Carlini and Wagner, 2017a). We use separate sets of training samples, validation samples (used to compute the attack), and further held-out samples to evaluate the test error. The test error is computed against an increasing fraction of poisoning points injected into the training set, up to a maximum of 125 poisoning points. The reported results are averaged over independent, randomly-drawn data splits.
The results for white-box poisoning are reported in Fig. 9. Similarly to the evasion case, weakly-regularized classifiers are more vulnerable to poisoning attacks as their input (poisoning) gradients are larger (see Fig. 10).
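In a much simplified form, a white-box poisoning attack of this kind can be sketched as gradient ascent on the attacker's validation loss with respect to a single poisoning point. The toy example below uses a 2-D Gaussian problem as a stand-in for MNIST89 and a numerical gradient in place of the implicit-differentiation computation used in practice:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

rng = np.random.default_rng(0)
# Toy 2-D binary problem standing in for MNIST89.
X_tr = np.vstack([rng.normal(-1, 0.5, (40, 2)), rng.normal(1, 0.5, (40, 2))])
y_tr = np.array([0] * 40 + [1] * 40)
X_val = np.vstack([rng.normal(-1, 0.5, (40, 2)), rng.normal(1, 0.5, (40, 2))])
y_val = np.array([0] * 40 + [1] * 40)

def val_loss(x_p, y_p):
    """Attacker's objective: validation log-loss after retraining on the
    training set augmented with one poisoning point (x_p, y_p)."""
    clf = LogisticRegression().fit(np.vstack([X_tr, x_p]), np.append(y_tr, y_p))
    return log_loss(y_val, clf.predict_proba(X_val))

# One poisoning point with label 0, optimized by numerical gradient ascent
# (retraining the victim classifier at every loss evaluation).
x_p, y_p, step, h = np.array([1.0, 1.0]), 0, 0.25, 0.05
for _ in range(15):
    grad = np.array([
        (val_loss(x_p + h * e, y_p) - val_loss(x_p - h * e, y_p)) / (2 * h)
        for e in np.eye(2)
    ])
    if np.linalg.norm(grad) == 0:
        break  # flat region: no ascent direction
    x_p = x_p + step * grad / np.linalg.norm(grad)
```

The actual attack differentiates the validation loss through the learning algorithm's optimality conditions rather than retraining per coordinate, but the objective being climbed is the same.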
The results for black-box poisoning are reported in Fig. 10. It is worth remarking that here the best surrogate classifiers are those matching the regularization level of the target. This is due to the inherent geometry of the optimization landscape: the local optima of the poisoning attack lie in very different regions of the feature space when the regularization hyperparameter of the target classifier changes significantly. This behavior is nonetheless captured by our transferability measure, i.e., the average cosine angle (gradient alignment) between the surrogate and target classifiers, reported in Fig. 11.
Finally, a visual inspection of the adversarially-crafted poisoning digits, shown in Fig. 12, reveals that the poisoning points crafted against weakly-regularized classifiers are only minimally perturbed, while those computed against strongly-regularized classifiers exhibit larger, visible perturbations. This is because the norm of the input gradients of weakly-regularized classifiers is much larger; even small perturbations are thus heavily amplified, and suffice to significantly increase the classification error.
5. Related Work
Transferability. Transferability of evasion attacks has been studied in previous work (Biggio et al., 2013; Szegedy et al., 2014; Goodfellow et al., 2015; Papernot et al., 2016b; Papernot et al., 2017; Moosavi-Dezfooli et al., 2017; Dong et al., 2018; Liu et al., 2017; Tramèr et al., 2017; Wu et al., 2018). In particular, Liu et al. (Liu et al., 2017) and Tramèr et al. (Tramèr et al., 2017) have introduced some transferability measures, including gradient alignment, based on reasonable heuristic motivations. In this work, we have shown that the implicit assumption underlying this metric is the linearity of the loss function with respect to the input sample. Furthermore, starting from a clear, formal definition of transferability, we have demonstrated that gradient alignment is not the only factor influencing the success of an attack: transferability also depends on the size of the input gradient of the target classifier. This relevant factor was not highlighted in (Liu et al., 2017; Tramèr et al., 2017), due to the lack of a proper formalization of the notion of attack transferability. In the context of poisoning, there is very little work on transferability, one exception being a preliminary investigation for neural networks (Muñoz-González et al., 2017), which indicates that poisoning examples transfer among very simple network architectures (logistic regression, MLP, and Adaline). Another exception is the work in (Suciu et al., 2018), which, however, neither analyzes transferability between different learning models nor provides a mathematical formulation of transferability.
Input gradient regularization. Prior work has investigated the role of input gradients and Jacobians. Some of these works train models to decrease the magnitude of input gradients, either to defend against evasion attacks (Lyu et al., 2015; Ross and Doshi-Velez, 2018) or to improve classification accuracy (Sokolić et al., 2017; Varga et al., 2017). In (Simon-Gabriel et al., 2018), the magnitude of the input gradient is identified as a cause of vulnerability to evasion attacks. All these works identify a smaller input gradient magnitude as a desirable property of a model facing evasion attacks. Nevertheless, to the best of our knowledge, ours is the first work to show a similar property in the context of poisoning attacks.
6. Concluding Remarks
In this paper we have conducted an analysis and experimental evaluation of the transferability of evasion and poisoning attacks on several machine-learning models and settings, showing that the two kinds of attack exhibit similar transferability properties. We have defined evasion and poisoning attack transferability under the same formalization, and highlighted its connections with input gradients via a linear approximation. This has in turn revealed that not only the alignment of the input gradients between surrogate and target classifiers plays a role, but also that the transferability of an attack significantly depends on the size of the input gradients of the target classifier. Our experiments have highlighted for the first time novel factors that significantly affect attack transferability. First, we have shown that in both white- and black-box settings, weakly-regularized target classifiers have larger input gradients and are thus much more vulnerable. Second, attacks against surrogate classifiers whose gradients are more aligned with those of the target do transfer better, but the overall impact on the target depends on the gradient size (regularization level) of the target classifier. Finally, we have shown that attacks crafted against weakly-regularized classifiers (with large input gradients) require much smaller modifications to succeed than those crafted against strongly-regularized classifiers, as shown by the images of the manipulated digits we have reported for both evasion and poisoning attacks. Despite some preliminary studies on the transferability of evasion attacks, to the best of our knowledge, the connections among input gradients, regularization and attack transferability have never been investigated in such detail. Most importantly, our work is the first to investigate such connections for poisoning attacks, for which transferability has been largely unstudied so far.
Although our analysis is limited to binary classifiers, we firmly believe that our results also hold for multiclass classification problems. We will evaluate this aspect in more detail in future work.
- Adler (2005) Andy Adler. 2005. Vulnerabilities in Biometric Encryption Systems. In 5th International Conference on Audio- and Video-Based Biometric Person Authentication (AVBPA) (LNCS), Takeo Kanade, Anil K. Jain, and Nalini K. Ratha (Eds.), Vol. 3546. Springer, Hilton Rye Town, NY, USA, 1100–1109.
- Arp et al. (2014) D. Arp, M. Spreitzenbarth, M. Hübner, H. Gascon, and K. Rieck. 2014. Drebin: Efficient and explainable detection of android malware in your pocket. In Proc. 21st Annual Network & Distributed System Security Symposium (NDSS). The Internet Society.
- Athalye et al. (2018) A. Athalye, N. Carlini, and D. Wagner. 2018. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. ArXiv e-prints (2018).
- Barreno et al. (2010) Marco Barreno, Blaine Nelson, Anthony Joseph, and J. Tygar. 2010. The security of machine learning. Machine Learning 81 (2010), 121–148. Issue 2.
- Barreno et al. (2006) Marco Barreno, Blaine Nelson, Russell Sears, Anthony D. Joseph, and J. D. Tygar. 2006. Can machine learning be secure?. In Proc. ACM Symp. Information, Computer and Comm. Sec. (ASIACCS ’06). ACM, New York, NY, USA, 16–25.
- Biggio et al. (2014) Battista Biggio, Samuel Rota Bulò, Ignazio Pillai, Michele Mura, Eyasu Zemene Mequanint, Marcello Pelillo, and Fabio Roli. 2014. Poisoning complete-linkage hierarchical clustering. In Joint IAPR Int'l Workshop on Structural, Syntactic, and Statistical Pattern Recognition (Lecture Notes in Computer Science), P. Franti, G. Brown, M. Loog, F. Escolano, and M. Pelillo (Eds.), Vol. 8621. Springer Berlin Heidelberg, Joensuu, Finland, 42–52.
- Biggio et al. (2013) B. Biggio, I. Corona, D. Maiorca, B. Nelson, N. Šrndić, P. Laskov, G. Giacinto, and F. Roli. 2013. Evasion attacks against machine learning at test time. In Machine Learning and Knowledge Discovery in Databases (ECML PKDD), Part III (LNCS), Hendrik Blockeel, Kristian Kersting, Siegfried Nijssen, and Filip Železný (Eds.), Vol. 8190. Springer Berlin Heidelberg, 387–402.
- Biggio et al. (2014a) Battista Biggio, Igino Corona, Blaine Nelson, BenjaminI.P. Rubinstein, Davide Maiorca, Giorgio Fumera, Giorgio Giacinto, and Fabio Roli. 2014a. Security Evaluation of Support Vector Machines in Adversarial Environments. In Support Vector Machines Applications, Yunqian Ma and Guodong Guo (Eds.). Springer International Publishing, Cham, 105–153. https://doi.org/10.1007/978-3-319-02300-7_4
- Biggio et al. (2014b) Battista Biggio, Giorgio Fumera, and Fabio Roli. 2014b. Pattern Recognition Systems under Attack: Design Issues and Research Challenges. Int’l J. Patt. Recogn. Artif. Intell. 28, 7 (2014), 1460002.
- Biggio et al. (2014c) Battista Biggio, Giorgio Fumera, and Fabio Roli. 2014c. Security Evaluation of Pattern Classifiers Under Attack. IEEE Transactions on Knowledge and Data Engineering 26, 4 (April 2014), 984–996.
- Biggio et al. (2012) Battista Biggio, Blaine Nelson, and Pavel Laskov. 2012. Poisoning attacks against support vector machines. In 29th Int'l Conf. on Machine Learning (ICML), John Langford and Joelle Pineau (Eds.), 1807–1814.
- Biggio et al. (2013) Battista Biggio, Ignazio Pillai, Samuel Rota Bulò, Davide Ariu, Marcello Pelillo, and Fabio Roli. 2013. Is Data Clustering in Adversarial Settings Secure?. In Proceedings of the 2013 ACM Workshop on Artificial Intelligence and Security (AISec '13). ACM, New York, NY, USA, 87–98.
- Biggio et al. (2014) Battista Biggio, Konrad Rieck, Davide Ariu, Christian Wressnegger, Igino Corona, Giorgio Giacinto, and Fabio Roli. 2014. Poisoning Behavioral Malware Clustering. In 2014 Workshop on Artificial Intelligence and Security (AISec '14). ACM, New York, NY, USA, 27–36.
- Biggio and Roli (2018) B. Biggio and F. Roli. 2018. Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning. ArXiv e-prints (2018).
- Carlini and Wagner (2017a) Nicholas Carlini and David A. Wagner. 2017a. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods. In 10th ACM Workshop on Artificial Intelligence and Security (AISec ’17), Bhavani M. Thuraisingham, Battista Biggio, David Mandell Freeman, Brad Miller, and Arunesh Sinha (Eds.). ACM, New York, NY, USA, 3–14.
- Carlini and Wagner (2017b) Nicholas Carlini and David A. Wagner. 2017b. Towards Evaluating the Robustness of Neural Networks. In IEEE Symposium on Security and Privacy. IEEE Computer Society, 39–57.
- Chen et al. (2017) X. Chen, C. Liu, B. Li, K. Lu, and D. Song. 2017. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning. ArXiv e-prints abs/1712.05526 (2017).
- Dang et al. (2017) Hung Dang, Yue Huang, and Ee-Chien Chang. 2017. Evading Classifiers by Morphing in the Dark. In Proceedings of the 24th ACM SIGSAC Conference on Computer and Communications Security (CCS).
- Demontis et al. (in press) Ambra Demontis, Marco Melis, Battista Biggio, Davide Maiorca, Daniel Arp, Konrad Rieck, Igino Corona, Giorgio Giacinto, and Fabio Roli. In press. Yes, Machine Learning Can Be More Secure! A Case Study on Android Malware Detection. IEEE Trans. Dependable and Secure Computing (In press).
- Demontis et al. (2016) Ambra Demontis, Paolo Russu, Battista Biggio, Giorgio Fumera, and Fabio Roli. 2016. On Security and Sparsity of Linear Classifiers for Adversarial Settings. In Joint IAPR Int’l Workshop on Structural, Syntactic, and Statistical Pattern Recognition (LNCS), Antonio Robles-Kelly, Marco Loog, Battista Biggio, Francisco Escolano, and Richard Wilson (Eds.), Vol. 10029. Springer International Publishing, Cham, 322–332.
- Dong et al. (2018) Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Xiaolin Hu, and Jun Zhu. 2018. Discovering Adversarial Examples with Momentum. In CVPR.
- Fredrikson et al. (2015) Matt Fredrikson, Somesh Jha, and Thomas Ristenpart. 2015. Model Inversion Attacks That Exploit Confidence Information and Basic Countermeasures. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security (CCS ’15). ACM, New York, NY, USA, 1322–1333.
- Galbally et al. (2010) Javier Galbally, Chris McCool, Julian Fierrez, Sebastien Marcel, and Javier Ortega-Garcia. 2010. On the vulnerability of face verification systems to hill-climbing attacks. Pattern Recogn. 43, 3 (2010), 1027–1038. https://doi.org/10.1016/j.patcog.2009.08.022
- Goodfellow et al. (2015) Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and Harnessing Adversarial Examples. In International Conference on Learning Representations.
- Grosse et al. (2017) Kathrin Grosse, Nicolas Papernot, Praveen Manoharan, Michael Backes, and Patrick D. McDaniel. 2017. Adversarial Examples for Malware Detection. In ESORICS (2) (LNCS), Vol. 10493. Springer, 62–79.
- Gu et al. (2017) Tianyu Gu, Brendan Dolan-Gavitt, and Siddharth Garg. 2017. BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain. In NIPS Workshop on Machine Learning and Computer Security, Vol. abs/1708.06733.
- Huang et al. (2011) L. Huang, A. D. Joseph, B. Nelson, B. Rubinstein, and J. D. Tygar. 2011. Adversarial Machine Learning. In 4th ACM Workshop on Artificial Intelligence and Security (AISec 2011). Chicago, IL, USA, 43–57.
- Jagielski et al. (2018) M. Jagielski, A. Oprea, B. Biggio, C. Liu, C. Nita-Rotaru, and B. Li. 2018. Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning. In IEEE Symposium on Security and Privacy (SP ’18). IEEE CS, 931–947. https://doi.org/10.1109/SP.2018.00057
- Kantchelian et al. (2016) Alex Kantchelian, J. D. Tygar, and Anthony D. Joseph. 2016. Evasion and Hardening of Tree Ensemble Classifiers. In 33rd ICML (JMLR Workshop and Conference Proceedings), Vol. 48. JMLR.org, 2387–2396.
- Koh and Liang (2017) P. W. Koh and P. Liang. 2017. Understanding Black-box Predictions via Influence Functions. In International Conference on Machine Learning (ICML).
- Liu et al. (2017) Yanpei Liu, Xinyun Chen, Chang Liu, and Dawn Song. 2017. Delving into Transferable Adversarial Examples and Black-box Attacks. In ICLR.
- Lyu et al. (2015) Chunchuan Lyu, Kaizhu Huang, and Hai-Ning Liang. 2015. A Unified Gradient Regularization Family for Adversarial Examples. In 2015 IEEE International Conference on Data Mining (ICDM), Vol. 00. IEEE Computer Society, Los Alamitos, CA, USA, 301–309.
- Martinez-Diaz et al. (2011) Marcos Martinez-Diaz, Julian Fierrez, Javier Galbally, and Javier Ortega-Garcia. 2011. An evaluation of indirect attacks and countermeasures in fingerprint verification systems. Pattern Recognition Letters 32, 12 (2011), 1643 – 1651. https://doi.org/10.1016/j.patrec.2011.04.005
- Mei and Zhu (2015) Shike Mei and Xiaojin Zhu. 2015. Using Machine Teaching to Identify Optimal Training-Set Attacks on Machine Learners. In 29th AAAI Conf. Artificial Intelligence (AAAI ’15).
- Melis et al. (2017) Marco Melis, Ambra Demontis, Battista Biggio, Gavin Brown, Giorgio Fumera, and Fabio Roli. 2017. Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid. In ICCVW Vision in Practice on Autonomous Robots (ViPAR). IEEE, 751–759.
- Moosavi-Dezfooli et al. (2017) Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. 2017. Universal adversarial perturbations. In CVPR.
- Muñoz-González et al. (2017) Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, and Fabio Roli. 2017. Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization. In 10th ACM Workshop on Artificial Intelligence and Security (AISec ’17), Bhavani M. Thuraisingham, Battista Biggio, David Mandell Freeman, Brad Miller, and Arunesh Sinha (Eds.). ACM, New York, NY, USA, 27–38.
- Nelson et al. (2008) Blaine Nelson, Marco Barreno, Fuching Jack Chi, Anthony D. Joseph, Benjamin I. P. Rubinstein, Udam Saini, Charles Sutton, J. D. Tygar, and Kai Xia. 2008. Exploiting machine learning to subvert your spam filter. In LEET’08: Proceedings of the 1st Usenix Workshop on Large-Scale Exploits and Emergent Threats. USENIX Association, Berkeley, CA, USA, 1–9.
- Newell et al. (2014) Andrew Newell, Rahul Potharaju, Luojie Xiang, and Cristina Nita-Rotaru. 2014. On the Practicality of Integrity Attacks on Document-Level Sentiment Analysis. In Proc. Workshop on Artificial Intelligence and Security (AISec).
- Newsome et al. (2006) James Newsome, Brad Karp, and Dawn Song. 2006. Paragraph: Thwarting signature learning by training maliciously. In Recent advances in intrusion detection. Springer, 81–105.
- Papernot et al. (2016a) Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow. 2016a. Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples. arXiv:1605.07277. (2016).
- Papernot et al. (2017) Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. 2017. Practical Black-Box Attacks Against Machine Learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security (ASIA CCS ’17). ACM, New York, NY, USA, 506–519.
- Papernot et al. (2016b) Nicolas Papernot, Patrick D. McDaniel, and Ian J. Goodfellow. 2016b. Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples. ArXiv e-prints abs/1605.07277 (2016).
- Pedregosa et al. (2011) F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research 12 (2011), 2825–2830.
- Perdisci et al. (2006) R. Perdisci, D. Dagon, Wenke Lee, P. Fogla, and M. Sharif. 2006. Misleading worm signature generators using deliberate noise injection. In Proc. IEEE Security and Privacy Symposium (S&P).
- Ross and Doshi-Velez (2018) Andrew Slavin Ross and Finale Doshi-Velez. 2018. Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing Their Input Gradients. In AAAI. AAAI Press.
- Rubinstein et al. (2009) Benjamin I.P. Rubinstein, Blaine Nelson, Ling Huang, Anthony D. Joseph, Shing-hon Lau, Satish Rao, Nina Taft, and J. D. Tygar. 2009. ANTIDOTE: understanding and defending against poisoning of anomaly detectors. In Proceedings of the 9th ACM SIGCOMM Internet Measurement Conference (IMC ’09). ACM, New York, NY, USA, 1–14.
- Russu et al. (2016) Paolo Russu, Ambra Demontis, Battista Biggio, Giorgio Fumera, and Fabio Roli. 2016. Secure Kernel Machines against Evasion Attacks. In 9th ACM Workshop on Artificial Intelligence and Security (AISec ’16). ACM, New York, NY, USA, 59–69.
- Sharif et al. (2016) Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and Michael K. Reiter. 2016. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. ACM, 1528–1540.
- Simon-Gabriel et al. (2018) C. J. Simon-Gabriel, Y. Ollivier, B. Schölkopf, L. Bottou, and D. Lopez-Paz. 2018. Adversarial Vulnerability of Neural Networks Increases with Input Dimension. ArXiv e-prints (2018).
- Sokolić et al. (2017) J. Sokolić, R. Giryes, G. Sapiro, and M. R. D. Rodrigues. 2017. Robust Large Margin Deep Neural Networks. IEEE Transactions on Signal Processing 65, 16 (Aug 2017), 4265–4280.
- Suciu et al. (2018) Octavian Suciu, Radu Marginean, Yigitcan Kaya, Hal Daume III, and Tudor Dumitras. 2018. When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks. In 27th USENIX Security Symposium (USENIX Security 18). USENIX Association, Baltimore, MD, 1299–1316. https://www.usenix.org/conference/usenixsecurity18/presentation/suciu
- Szegedy et al. (2014) Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In International Conference on Learning Representations. http://arxiv.org/abs/1312.6199
- Tramèr et al. (2017) F. Tramèr, N. Papernot, I. Goodfellow, D. Boneh, and P. McDaniel. 2017. The Space of Transferable Adversarial Examples. ArXiv e-prints (2017).
- Tramèr et al. (2016) Florian Tramèr, Fan Zhang, Ari Juels, Michael K. Reiter, and Thomas Ristenpart. 2016. Stealing Machine Learning Models via Prediction APIs. In 25th USENIX Security Symposium (USENIX Security 16). USENIX Association, Austin, TX, 601–618.
- Varga et al. (2017) D. Varga, A. Csiszárik, and Z. Zombori. 2017. Gradient Regularization Improves Accuracy of Discriminative Models. ArXiv e-prints ArXiv:1712.09936 (2017).
- Šrndic and Laskov (2014) Nedim Šrndic and Pavel Laskov. 2014. Practical Evasion of a Learning-Based Classifier: A Case Study. In Proc. 2014 IEEE Symp. Security and Privacy (SP ’14). IEEE CS, Washington, DC, USA, 197–211.
- Wu et al. (2018) Lei Wu, Zhanxing Zhu, Cheng Tai, and Weinan E. 2018. Enhancing the Transferability of Adversarial Examples with Noise Reduced Gradient. ArXiv e-prints (2018).
- Xiao et al. (2015) Huang Xiao, Battista Biggio, Gavin Brown, Giorgio Fumera, Claudia Eckert, and Fabio Roli. 2015. Is Feature Selection Secure against Training Data Poisoning?. In JMLR W&CP - Proc. 32nd Int’l Conf. Mach. Learning (ICML), Francis Bach and David Blei (Eds.), Vol. 37. 1689–1698.
- Xu et al. (2016) Weilin Xu, Yanjun Qi, and David Evans. 2016. Automatically Evading Classifiers A Case Study on PDF Malware Classifiers.. In Proceedings of the Network and Distributed System Security Symposium (NDSS). Internet Society.
- Zhang et al. (2016) F. Zhang, P.P.K. Chan, B. Biggio, D.S. Yeung, and F. Roli. 2016. Adversarial Feature Selection Against Evasion Attacks. IEEE Transactions on Cybernetics 46, 3 (2016), 766–777.