Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples

05/24/2016 · Nicolas Papernot, et al. · Penn State University, OpenAI

Many machine learning models are vulnerable to adversarial examples: inputs that are specially crafted to cause a machine learning model to produce an incorrect output. Adversarial examples that affect one model often affect another model, even if the two models have different architectures or were trained on different training sets, so long as both models were trained to perform the same task. An attacker may therefore train their own substitute model, craft adversarial examples against the substitute, and transfer them to a victim model, with very little information about the victim. Recent work has further developed a technique that uses the victim model as an oracle to label a synthetic training set for the substitute, so the attacker need not even collect a training set to mount the attack. We extend these recent techniques using reservoir sampling to greatly enhance the efficiency of the training procedure for the substitute model. We introduce new transferability attacks between previously unexplored (substitute, victim) pairs of machine learning model classes, most notably SVMs and decision trees. We demonstrate our attacks on two commercial machine learning classification systems from Amazon (96.19% misclassification rate) and Google (88.94%), using only label queries to the victim model, thereby showing that existing machine learning approaches are in general vulnerable to systematic black-box attacks regardless of their structure.


1 Introduction

Many classes of machine learning algorithms have been shown to be vulnerable to adversarial samples [22, 12, 19]: adversaries subtly alter legitimate inputs (a process called input perturbation) to induce the trained model to produce erroneous outputs. Adversarial samples can be used, for example, to subvert fraud detection, bypass content filters or malware detection, or to mislead autonomous navigation systems [20]. These attacks on input integrity exploit imperfections and approximations made by learning algorithms during training to control machine learning models' outputs (see Figure 1).

Figure 1: An adversarial sample (bottom row) is produced by slightly altering a legitimate sample (top row) in a way that forces the model to make a wrong prediction whereas a human would still correctly classify the sample [19].

Adversarial sample transferability¹ is the property that some adversarial samples produced to mislead a specific model can mislead other models, even if their architectures greatly differ [22, 12, 20]. A practical impact of this property is that it enables oracle-based black-box attacks. In one such attack, Papernot et al. trained a local deep neural network (DNN) using crafted inputs and output labels generated by the target "victim" DNN [19]. Thereafter, the local network was used to generate adversarial samples that were highly effective on the original victim DNN. The key here was that the adversary had very limited information: they knew nothing about the architecture or parameters, only that the victim was a DNN, and had only oracle access allowing them to obtain outputs for chosen inputs.

¹Note that this is distinct from knowledge transfer, which refers to techniques designed to transfer the generalization knowledge learned by a model during training, and encoded in its parameters, to another model [13].

In this paper, we develop and validate a generalized algorithm for black-box attacks that exploit adversarial sample transferability on broad classes of machine learning. In investigating these attacks, we explore transferability within and between different classes of machine learning classifier algorithms: deep neural networks (DNNs), logistic regression (LR), support vector machines (SVM), decision trees (DT), nearest neighbors (kNN), and ensembles (Ens.). In doing so, we demonstrate that black-box attacks are generally applicable to machine learning and can effectively target classifiers not built using deep neural networks. The generalization is two-fold: we show that (1) the substitute model can be trained with techniques other than deep learning, and (2) transferability-based black-box attacks are not restricted to deep learning targets and are in fact successful against targeted models of many machine learning types. Our contributions are summarized as follows:

  • We introduce adversarial sample crafting techniques for support vector machines as well as decision trees, which are non-differentiable machine learning models.

  • We study adversarial sample transferability across the machine learning space and find that samples largely transfer well across models trained with the same machine learning technique, and across models trained with different techniques or ensembles taking collective decisions. For example, a support vector machine and a decision tree each misclassify a large share of adversarial samples crafted for a logistic regression model. Previous work on adversarial example transferability has primarily studied the case where at least one of the models involved in the transfer is a neural network [22, 12, 24], while we aim to more generally characterize the transferability between a diverse set of models chosen to capture most of the space of popular machine learning algorithms.

  • We generalize the learning of substitute models from deep learning to logistic regression and support vector machines. Furthermore, we show that it is possible to learn substitutes matching labels produced by many machine learning models (DNN, LR, SVM, kNN) at high rates. We improve the accuracy and computational cost of a previously proposed substitute learning technique by introducing a new hyper-parameter and the use of reservoir sampling.

  • We conduct black-box attacks against classifiers hosted by Amazon and Google. We show that despite our lack of knowledge of the classifier internals, we can force them to respectively misclassify 96.19% and 88.94% of their inputs using a logistic regression substitute model trained by making only a limited number of label queries to the target.

2 Approach Overview

In this section, we describe our approach, which is structured around the evaluation of two hypotheses relevant to the design of black-box attacks against machine learning classifiers.

Let us precisely define adversarial sample transferability. Consider an adversary interested in producing an adversarial sample x* misclassified in any class different from the class assigned by model f to legitimate input x. This can be done by solving the following optimization problem [22]²:

²Finding a closed-form solution to this problem is not always possible, as some machine learning models preclude the optimization problem from being linear or convex. Nevertheless, several approaches have been proposed to find approximate solutions to Equation 1. They yield adversarial samples that effectively mislead non-linear and non-convex models like neural networks [22, 12, 19]. In addition, we introduce new techniques to craft adversarial samples against support vector machines and decision trees in Section 6.

x* = x + δ_x  where  δ_x = arg min { ||z|| : f(x + z) ≠ f(x) }    (1)
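To build intuition for the optimization problem above, consider a binary linear classifier f(x) = sign(w·x + b): the minimal L2 perturbation projects x onto the decision hyperplane, slightly overshot so the label flips. A minimal numpy sketch (the weights and input below are illustrative, not from our experiments):

```python
import numpy as np

def minimal_l2_perturbation(w, b, x, overshoot=1e-6):
    """Smallest L2 perturbation pushing x across the hyperplane w.x + b = 0."""
    margin = np.dot(w, x) + b
    # Distance to the hyperplane along the unit normal, slightly overshot
    # so that the classifier's sign actually flips.
    return -(1 + overshoot) * margin * w / np.dot(w, w)

w = np.array([2.0, -1.0])
b = 0.5
x = np.array([1.0, 1.0])          # classified positive: 2 - 1 + 0.5 = 1.5 > 0
delta = minimal_l2_perturbation(w, b, x)
assert np.sign(np.dot(w, x) + b) != np.sign(np.dot(w, x + delta))
```

For non-linear, non-convex models such as DNNs, no closed form exists and approximate methods (see footnote 2) are used instead.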

Samples x* solving Equation 1 are specifically computed to mislead model f. However, as stated previously, such adversarial samples are in practice also frequently misclassified by models g different from f. To facilitate our discussion, we formalize this adversarial sample transferability notion as:

Ω_X(f, g) = |{ x ∈ X : g(x*) ≠ g(x) }| / |X|    (2)

where set X is representative of the expected input distribution for the task solved by models f and g, and x* is the adversarial sample crafted on f from x. We partition adversarial sample transferability into two variants characterizing the pair of models (f, g). The first, intra-technique transferability, is defined across models trained with the same machine learning technique but different parameter initializations or datasets (e.g., f and g are both neural networks or both decision trees). The second, cross-technique transferability, considers models trained using two different techniques (e.g., f is a neural network and g a decision tree).
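The transferability rate of Equation 2 can be estimated directly from model predictions on clean and adversarial inputs. A minimal sketch (the label arrays are toy values, purely illustrative):

```python
import numpy as np

def transferability(g_on_clean, g_on_adv):
    """Estimate Omega_X(f, g): the fraction of adversarial samples (crafted
    on model f) that change model g's prediction relative to the clean input."""
    g_on_clean = np.asarray(g_on_clean)
    g_on_adv = np.asarray(g_on_adv)
    return float(np.mean(g_on_clean != g_on_adv))

# Toy example: g's prediction flips on 3 of 5 adversarial samples.
clean = [0, 1, 2, 3, 4]
adv   = [0, 9, 9, 3, 9]
rate = transferability(clean, adv)
```

Here `rate` is 0.6: three of the five adversarial samples transferred to g.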

Hypothesis 1: Both intra-technique and cross-technique adversarial sample transferabilities are consistently strong phenomena across the space of machine learning techniques.

In this first hypothesis, we explore how well both variants of transferability hold across classes of machine learning algorithms. The motivation behind this investigation is that adversarial sample transferability constitutes a threat vector against machine learning classifiers in adversarial settings. To identify the most vulnerable classes of models, we need to generate an accurate comparison of the attack surface of each class in constrained experimental settings.

To validate this hypothesis, we perform a large-scale study in Section 3. Each of the study's two folds investigates one of the adversarial sample transferability variants: intra-technique and cross-technique. For completeness, we consider a collection of models representatively spanning the machine learning space, as summarized in Table 1. Models are trained on MNIST data [16] to solve the hand-written digit recognition task. In the first fold of the study, we measure intra-technique adversarial sample transferability rates for each machine learning technique, across models trained on different subsets of the data. In the second fold of the study, we measure cross-technique adversarial sample transferability rates across models corresponding to all possible pairs of machine learning techniques.

ML Technique | Differentiable Model | Linear Model | Lazy Prediction
DNN          | Yes                  | No           | No
LR           | Yes                  | Log-linear   | No
SVM          | No                   | No           | No
DT           | No                   | No           | No
kNN          | No                   | No           | Yes
Ens.         | No                   | No           | No
Table 1: Machine learning techniques studied in Section 3

Hypothesis 2: Black-box attacks are possible in practical settings against any unknown machine learning classifier.

Our motivation is to demonstrate that deployment of machine learning in settings where there are incentives for adversaries to have models misbehave must take into account the practical threat vector of adversarial samples. Indeed, if black-box attacks are realistic in practical settings, machine learning algorithm inputs must be validated as being part of the expected distribution of inputs. As is the case for SQL injections, the existence of adversarial samples calls for input validation in production systems using machine learning.

The verification of this second hypothesis is two-fold as well. In Section 4, we show how to transfer the generalization knowledge of any machine learning classifier into a substitute model by querying the classifier for labels on carefully selected inputs. In Section 5, we perform black-box attacks against commercial machine learning classifiers hosted by Amazon and Google. As we validate the hypothesis throughout Sections 4 and 5, we operate under the specific threat model of an oracle, described in [20], which characterizes realistic adversarial settings. Instead of having full knowledge of the model's architecture and its parameters, as was the case for the first hypothesis validation in Section 3, we now assume the adversary's only capability is to observe the label predicted by the model on inputs of its choice.

3 Transferability of Adversarial Samples in Machine Learning

In this section, our working hypothesis is that intra-technique and cross-technique adversarial sample transferability are strong phenomena across the machine learning space. Thus, we empirically study these two phenomena across a range of machine learning techniques: deep neural networks (DNNs), logistic regression (LR), support vector machines (SVM), decision trees (DT), nearest neighbors (kNN), and ensembles (Ens.). All models are found vulnerable to intra-technique adversarial sample transferability (misclassification of samples by different models trained using the same machine learning technique); the phenomenon is stronger for differentiable models like DNNs and LR than for non-differentiable models like SVMs, DTs, and kNNs. We then observe that DNNs and kNNs boast resilience to cross-technique transferability (misclassification of adversarial samples by models trained with distinct machine learning techniques). We find that all other models, including LR, SVMs, DTs, and an ensemble of models collectively making predictions, are considerably more vulnerable to cross-technique transferability.

3.1 Experimental Setup

We describe here the dataset and machine learning models used in this section to study both types of transferability.

Dataset - We use the seminal MNIST dataset of handwritten digits [16]. This dataset has been well-studied in both the machine learning and security communities. We chose it because its dimensionality is suitable to the range of machine learning techniques included in our study, which all perform at least reasonably well on this dataset. The task associated with the dataset is classification of images into one of 10 classes, corresponding to each possible digit from 0 to 9. The dataset includes 50,000 training samples, 10,000 validation samples, and 10,000 test samples. Each 28x28 gray-scale image is encoded as a vector of pixel intensities whose real values range from 0 (black) to 1 (white).

Machine learning models - We selected five machine learning techniques: DNNs, LR, SVMs, DTs, and kNNs. All of these machine learning techniques, as well as the algorithms used to craft adversarial samples, are presented in Section 6 of this paper. As outlined in Table 1, DNNs were chosen for their state-of-the-art performance, LR for its simplicity, SVMs for their potential robustness stemming from the margin constraints when choosing decision boundaries at training, DTs for their non-differentiability, and kNNs for being lazy-classification models³. To train the DNN, LR, and kNN models, we use Theano [3] and Lasagne [2]. The DNN is made up of a hierarchy of two pairs of convolutional layers, two rectified linear layers, and a softmax layer of 10 units. It is trained for 10 epochs, with the learning, momentum, and dropout rates decayed after 5 epochs. The LR is performed using a softmax regression on the inputs. It is trained for 15 epochs, with the learning and momentum rates similarly decayed. The linear SVM and the DT are trained with scikit-learn.

³No model is learned during training. Predictions are made by finding the points closest to the sample in the training data, and extrapolating its class from the classes of these points.

3.2 Intra-technique Transferability

We show that differentiable models like DNNs and LR are more vulnerable to intra-technique transferability than non-differentiable models like SVMs, DTs, and kNNs. We measure intra-technique transferability between two models i and j, both learned using the same machine learning technique, as the proportion of adversarial samples produced to be misclassified by model i that are also misclassified by model j.

To train different models using the same machine learning technique, we split the training set into five disjoint subsets A, B, C, D, E of equal size, in order of increasing indices. For each of the five machine learning techniques (DNN, LR, SVM, DT, kNN), we thus learn five different models, referred to as A, B, C, D, E. Model accuracies, i.e., the proportion of labels correctly predicted by the model on the testing data, are reported in Figure 2(a). For each of the 25 models, we apply the suitable adversarial sample algorithm described in Section 6 and craft samples from the test set, which was unused during training. For adversarial sample algorithms with parameters, we fine-tune them to achieve quasi-complete misclassification of the adversarial samples by the model on which they were crafted. Upon empirically exploring the input variation parameter space, we fix its value separately for the fast gradient sign method and for the SVM crafting algorithm.
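The fast gradient sign method mentioned above perturbs every feature by the input variation parameter in the direction of the sign of the loss gradient. A minimal sketch, assuming the gradient of the loss with respect to the input has already been computed and that features live in [0, 1] as for MNIST intensities (the arrays below are illustrative):

```python
import numpy as np

def fast_gradient_sign(x, grad_loss_wrt_x, eps):
    """Fast gradient sign method: shift each feature by eps in the direction
    that increases the model's loss, then clip back to the valid input range."""
    return np.clip(x + eps * np.sign(grad_loss_wrt_x), 0.0, 1.0)

x = np.array([0.2, 0.8, 0.5])
grad = np.array([0.3, -1.2, 0.0])     # illustrative loss gradient
x_adv = fast_gradient_sign(x, grad, eps=0.25)
```

Note that a zero gradient component leaves the corresponding feature unchanged, since sign(0) = 0.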

Figure 2: Intra-technique transferability for the 5 ML techniques. Panel (a) reports the accuracy rates of the 25 models used, computed on the MNIST test set. Panels (b)-(f) correspond to the DNN, LR, SVM, DT, and kNN models respectively; cell (i, j) reports the intra-technique transferability between models i and j, i.e., the percentage of adversarial samples produced using model i that are misclassified by model j.

Panels (b)-(f) of Figure 2 report intra-technique transferability rates for each of the five machine learning techniques. Rates on the diagonals indicate the proportion of adversarial samples misclassified precisely by the same model on which they were crafted. Off-diagonal rates indicate the proportion of adversarial samples misclassified by a model different from the one on which they were crafted. We first observe that all models are vulnerable to intra-technique transferability in a non-negligible manner. LR models are most vulnerable, as adversarial samples transfer across models at high rates. DNN models display similarly important transferability. On the SVM, DT, and kNN matrices, the diagonals stand out more, indicating that these techniques are to some extent more robust to the phenomenon. In the case of SVMs, this could be explained by the explicit constraint during training on the choice of hyperplane decision boundaries that maximize the margins (i.e., support vectors). The robustness of both DTs and kNNs could simply stem from their non-differentiability.

3.3 Cross-technique Transferability

We define cross-technique transferability between models f and g, trained using different machine learning techniques, as the proportion of adversarial samples produced to be misclassified by model f that are also misclassified by model g. This is a more complex phenomenon than intra-technique transferability, because it involves models learned using possibly very different techniques like DNNs and DTs. Yet, cross-technique transferability is, surprisingly, a strong phenomenon to which techniques like LR, SVM, DT, and ensembles are vulnerable, making it easy for adversaries to craft adversarial samples misclassified by models trained using diverse machine learning techniques.

We study the cross-technique transferability phenomenon across models trained using the five machine learning techniques already used in Section 3.2 and described in Sections 3.1 and 6. To these, we add a 6th model: an ensemble. The ensemble is implemented using a collection of 5 experts, which are the 5 previously described models: the DNN, LR, SVM, DT, and kNN. Each expert makes a prediction and the ensemble outputs the most frequent class (or the class with the lowest index in case of a tie):

f_ens(x) = arg max_j Σ_k 1[f_k(x) = j],  k ∈ {DNN, LR, SVM, DT, kNN}    (3)

where 1[f_k(x) = j] indicates whether classifier f_k assigned class j to input x. Note that in this section, we only train one model per machine learning technique, on the full MNIST training set, unlike in Section 3.2.
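The ensemble's voting rule can be sketched in a few lines; np.argmax returns the first (lowest) maximal index, which implements the lowest-index tie-break described above:

```python
import numpy as np

def ensemble_predict(expert_labels):
    """Majority vote over expert predictions; ties (including total
    disagreement) resolve to the lowest class index."""
    counts = np.bincount(expert_labels)      # votes per class index
    return int(np.argmax(counts))            # argmax picks the first max

# Three experts vote for class 3, two for class 7.
majority = ensemble_predict(np.array([3, 3, 7, 7, 3]))
# All five experts disagree: the lowest class index wins.
tie_break = ensemble_predict(np.array([4, 1, 2, 0, 3]))
```

Here `majority` is 3 and `tie_break` is 0.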

In this experiment, we are interested in transferability across machine learning techniques. As such, to ensure our results are comparable, we fine-tune the parameterizable crafting algorithms to produce adversarial samples with similar perturbation magnitudes. To compare magnitudes across perturbation styles, we use the L1 norm: the sum of the absolute values of each perturbation component. Perturbations added to craft adversarial samples using the DNN, LR, kNN, and SVM are tuned to a similar average L1 norm, using appropriate input variation parameters with the fast gradient sign method on the DNN, LR, and kNN, and with the crafting method introduced in Section 6 on the SVM. Unfortunately, the attack on DTs cannot be parameterized to match the L1 norm of the DNN, LR, kNN, and SVM attacks; the perturbations it selects have a much lower average L1 norm.

We build a cross-technique transferability matrix where each cell (i, j) holds the percentage of adversarial samples produced for classifier i that are misclassified by classifier j. In other words, rows indicate the machine learning technique used to train the model against which adversarial samples were crafted. The row that would correspond to the ensemble is not included because there is no crafting algorithm designed to produce adversarial samples specifically for an ensemble, although we address this limitation in Section 4 using insight gained in this experiment. Columns indicate the underlying technique of the classifier making predictions on adversarial samples. This matrix, plotted in Figure 3, shows that cross-technique transferability is a strong but heterogeneous phenomenon. The most vulnerable model is the decision tree (DT), with high misclassification rates across crafting techniques, while the most resilient is the deep neural network (DNN). Interestingly, the ensemble is not resilient to cross-technique transferability of adversarial samples, with high rates for samples crafted using the LR model. This is most likely due to the vulnerability of each underlying expert to adversarial samples.

Figure 3: Cross-technique transferability matrix: cell (i, j) is the percentage of adversarial samples crafted to mislead a classifier learned using machine learning technique i that are misclassified by a classifier trained with technique j.

We showed that all machine learning techniques we studied are vulnerable to two types of adversarial sample transferability. This most surprisingly results in adversarial samples being misclassified across multiple models learned with different machine learning techniques. This cross-technique transferability greatly reduces the minimum knowledge that adversaries must possess of a machine learning classifier in order to force it to misclassify inputs that they crafted. We leverage this observation, along with findings from Section 4, to justify design choices in the attack described in Section 5.

4 Learning Classifier Substitutes by Knowledge Transfer

In the previous section, we identified machine learning techniques (e.g., DNNs and LR) yielding models adequate for crafting samples misclassified across models trained with different techniques, i.e., adversarial samples with strong cross-technique transferability. Thus, in order to craft adversarial samples misclassified by a classifier whose underlying model is unknown, adversaries can instead use a substitute model, as long as it solves the same classification problem and its parameters are known. Therefore, efficiently learning substitutes is key to designing black-box attacks in which adversaries target remote classifiers whose model, parameters, and training data are unknown to them. This is precisely the attack scenario evaluated against commercial machine learning platforms in Section 5; in this section we focus on the prerequisite learning of substitutes for machine learning classifiers.

We enhance an algorithm introduced in [20] to learn a substitute model for a given classifier simply by querying it for labels on carefully chosen inputs. More precisely, we introduce two refinements to the algorithm: one improves its accuracy and the other reduces its computational complexity. We generalize the learning of substitutes to oracles using a range of machine learning techniques: DNNs, LR, SVMs, DTs, and kNNs. Furthermore, we show that both DNNs and LR can be used as substitute models for all machine learning techniques studied, with the exception of decision trees.

4.1 Dataset Augmentation for Substitutes

The targeted classifier is designated as an oracle because adversaries have the minimal capability of querying it for predictions on inputs of their choice. The oracle returns the label (not the probabilities) assigned to the sample. No other knowledge of the classifier (e.g., model type, parameters, training data) is available. To circumvent this, we build on a technique introduced in [20], which leverages dataset augmentation to train the substitute model.

Jacobian-based dataset augmentation - We use this augmentation technique introduced in [20] to learn DNN and LR substitutes for oracles. First, one collects an initial substitute training set of limited size (representative of the task solved by the oracle) and labels it by querying the oracle. Using this labeled data, we train a first substitute model, which is likely to perform poorly as a source of adversarial samples due to the small number of samples used for training. To select additional training points, we use the following augmentation rule:

S_{ρ+1} = { x + λ · sgn(J_f[Õ(x)]) : x ∈ S_ρ } ∪ S_ρ    (4)

where S_ρ and S_{ρ+1} are the previous and new training sets, λ a parameter fine-tuning the augmentation step size, J_f the Jacobian matrix of substitute f, and Õ(x) the oracle's label for sample x. We train a new instance of the substitute with the augmented training set S_{ρ+1}, which we can label simply by querying oracle Õ. By alternately augmenting the training set and training a new instance of the substitute model over multiple iterations ρ, Papernot et al. showed that substitute DNNs can approximate another DNN [20].
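One augmentation step of Equation 4 can be sketched as follows. Here `jacobian_fn` stands in for the substitute's Jacobian computation; in the toy example it is replaced by a fixed, purely illustrative matrix:

```python
import numpy as np

def jacobian_augment(S_prev, oracle_labels, jacobian_fn, lam):
    """One Jacobian-based augmentation step: for each training input, add a
    new point shifted by lam in the direction of the sign of the substitute's
    Jacobian row for the oracle-assigned label, keeping the old points."""
    new_points = []
    for x, label in zip(S_prev, oracle_labels):
        J = jacobian_fn(x)                 # shape: (num_classes, num_features)
        new_points.append(x + lam * np.sign(J[label]))
    return np.concatenate([S_prev, np.array(new_points)], axis=0)

# Toy example: two zero inputs, a fixed (hypothetical) 2-class Jacobian.
S0 = np.zeros((2, 3))
J_fixed = np.array([[1.0, -1.0, 0.0], [-2.0, 2.0, 0.0]])
S1 = jacobian_augment(S0, [0, 1], lambda x: J_fixed, lam=0.1)
```

Each step doubles the training set, which is exactly the exponential query growth that reservoir sampling (introduced below in Section 4.1) mitigates.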

Periodic Step Size - When introducing the technique, Papernot et al. used a fixed step size parameter λ throughout the substitute learning iterations ρ. In this section, we show that by having a step size periodically alternating between positive and negative values, one can improve the quality of the oracle approximation made by the substitute, which we measure in terms of the number of labels matched with the original classifier oracle. More precisely, we introduce an iteration period τ after which the step size is multiplied by −1. Thus, the step size λ_ρ is defined as:

λ_ρ = λ · (−1)^⌊ρ/τ⌋    (5)

where τ is set to be the number of epochs after which the Jacobian-based dataset augmentation does not lead to any substantial improvement in the substitute. A grid search can also be performed to find an optimal value for the period τ. We also experimented with a decreasing step size amplitude λ, but did not find that it yielded substantial improvements.
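A minimal sketch of the periodic step size, under the reading of Equation 5 in which the sign flips every τ iterations (the parameter values are illustrative):

```python
def periodic_step_size(lam, rho, tau):
    """Step size for augmentation iteration rho: the sign of the base
    amplitude lam flips every tau iterations."""
    return lam * (-1) ** (rho // tau)

# With tau = 3, iterations 0-2 step positively, 3-5 negatively, and so on.
steps = [periodic_step_size(0.1, rho, 3) for rho in range(7)]
```

Here `steps` is [0.1, 0.1, 0.1, -0.1, -0.1, -0.1, 0.1].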

Reservoir Sampling - We also introduce the use of reservoir sampling [23] as a means to reduce the number of queries made to the oracle. This is useful when learning substitutes in realistic environments where the number of label queries an adversary can make without exceeding a quota or being detected by a defender is constrained. Reservoir sampling is a class of algorithms that randomly select κ samples from a list; the total number of samples in the list can be both very large and unknown. In our case, we use reservoir sampling to select a limited number of new inputs when performing a Jacobian-based dataset augmentation. This prevents the exponential growth in the number of queries made to the oracle at each augmentation iteration. At iterations ρ > σ (the first σ iterations are performed normally), when considering the previous set S_{ρ−1} of substitute training inputs, we select κ inputs from S_{ρ−1} to be augmented in S_ρ. These inputs are selected using reservoir sampling, as described in Algorithm 1. This technique ensures that each input in S_{ρ−1} has an equal probability of being augmented in S_ρ. The number of queries made to the oracle is thus reduced from exponential in ρ for the vanilla Jacobian-based augmentation to exponential in σ plus linear in (ρ − σ) for the augmentation with reservoir sampling. Our experiments show that the reduced number of training points in the reservoir sampling variant does not significantly degrade the quality of the substitute.

1: Input: set of samples S_{ρ−1}, reservoir size κ
2: Output: N, a uniform random subset of κ samples of S_{ρ−1} to augment
3: Initialize N as an array of κ items
4:
5: for i ∈ 0 .. κ − 1 do
6:     N[i] ← S_{ρ−1}[i]
7: end for
8: for i ∈ κ .. |S_{ρ−1}| − 1 do
9:     r ← random integer between 0 and i
10:     if r < κ then
11:         N[r] ← S_{ρ−1}[i]
12:     end if
13: end for
14: return N
Algorithm 1 Jacobian-based augmentation with Reservoir Sampling: sets are treated as arrays for ease of notation.
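The selection step of Algorithm 1 is standard reservoir sampling, which can be sketched as follows (a generic reservoir sampler; the variable names are illustrative):

```python
import random

def reservoir_sample(stream, kappa, rng=random):
    """Uniformly select kappa items from a stream of unknown length:
    keep the first kappa items, then replace a random reservoir slot
    with probability kappa / (i + 1) for each subsequent item i."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < kappa:
            reservoir.append(item)
        else:
            r = rng.randint(0, i)        # inclusive bounds
            if r < kappa:
                reservoir[r] = item
    return reservoir

selected = reservoir_sample(range(100), 10)
```

Every item in the stream ends up in the reservoir with equal probability κ/n, which is what guarantees each input in S_{ρ−1} an equal chance of being augmented.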

4.2 Deep Neural Network Substitutes

In [20], the oracle classifier approximated was always a DNN. However, the authors concluded with preliminary results suggesting applicability to a nearest neighbors classifier. We here show that in fact the technique is generalizable and applicable to many machine learning techniques by evaluating its performance on 5 types of ML classifiers: a DNN, LR, SVM, DT, and kNN. This spectrum is representative of machine learning (cf. Section 3.1). Our experiments suggest that one can accurately transfer the knowledge from many machine learning classifiers to a DNN and obtain a DNN mimicking the decision boundaries of the original classifier.

Using the Jacobian-based augmentation technique, we train 5 different substitute DNNs to match the labels produced by 5 different oracles, one for each of the ML techniques mentioned. These classifiers serving as oracles are all trained on the same MNIST training set, using the models described previously in Section 3.1. To approximate them, we use a small set of samples from the MNIST test set (unseen during oracle training) as the initial substitute training set, and follow three variants of the procedure detailed in Section 4.1: (1) vanilla Jacobian-based augmentation, (2) with a periodic step size, and (3) with both a periodic step size and reservoir sampling. The substitute architecture is identical to the DNN architecture from Section 3.1. We run each experiment for 10 augmentation iterations.

Figure 4(a) plots, at each augmentation iteration ρ, the share of samples on which the substitute DNNs agree with the predictions made by the classifier oracle they are approximating. This proportion is estimated by comparing the labels assigned to the MNIST test set by the substitutes and oracles before each iteration of the Jacobian-based dataset augmentation. The substitutes used in this figure were all trained with both a periodic step size and reservoir sampling, as described previously. Generally speaking, all substitutes are able to successfully approximate the corresponding oracle: after the augmentation iterations, the labels assigned match for a large majority of the MNIST test set, except in the case of the DT oracle, which is matched on noticeably fewer samples. This difference could be explained by the non-differentiability of decision trees. On the contrary, substitute DNNs are able to approximate the nearest neighbors oracle even though it uses lazy classification: no model is learned at training time and predictions are made by finding the closest training sample(s).

The first three rows of Table 2 quantify the impact of the two refinements introduced above on the proportion of test set labels produced by the oracle that were matched by DNN substitutes. The first refinement, the periodic step size, allows substitutes to approximate their target oracle more accurately. For instance, after the final augmentation iteration, the substitute DNN trained with a periodic step size for the DNN oracle matches 89.28% of the labels, whereas the vanilla substitute DNN only matched 78.01%. Similarly, the substitute DNN trained with a periodic step size for the SVM oracle matches 83.79% of the labels, whereas the vanilla substitute only matched 79.68%. The second refinement, reservoir sampling, allows us to train substitutes for more augmentation iterations without making too many queries to the oracle. The reduced number of queries has a cost in substitute quality compared to the periodic step size substitutes, but the result is still superior to the vanilla substitutes. For instance, when approximating a DNN oracle, the vanilla substitute matched 78.01% of labels, the periodic step size one 89.28%, and the periodic step size with reservoir sampling one 82.90%.

Figure 4: Label predictions matched between the DNN and LR substitutes and their target classifier oracles on test data. Panel (a): DNN substitutes; panel (b): LR substitutes.
Substitute \ Oracle DNN LR SVM DT kNN
DNN 78.01 82.17 79.68 62.75 81.83
DNN+PSS 89.28 89.16 83.79 61.10 85.67
DNN+PSS+RS 82.90 83.33 77.22 48.62 82.46
LR 64.93 72.00 71.56 38.44 70.74
LR+PSS 69.20 84.01 82.19 34.14 71.02
LR+PSS+RS 67.85 78.94 79.20 41.93 70.92
Table 2: Impact of our refinements, Periodic Step Size (PSS) and Reservoir Sampling (RS), on the percentage of label predictions matched between the substitutes and their target classifiers on test data after substitute iterations.

4.3 Logistic Regression Substitutes

Having generalized substitute learning with a demonstration of the capacity of DNNs to approximate any machine learning model, we now consider replacing the substitute itself by another machine learning technique. Experiments in Section 3.3 led us to conclude that cross-technique transferability is not specific to adversarial samples crafted on DNNs, but instead applies to many learning techniques. Looking at Figure 3 again, a natural candidate is logistic regression, as it displays cross-technique transferability rates larger than those of DNNs, except when targeting DNNs themselves.

The Jacobian-based dataset augmentation’s implementation for DNNs is easily adapted to multi-class logistic regression. Indeed, multi-class logistic regression is analogous to the softmax layer frequently used by deep neural networks to produce class probability vectors. We can compute each component of the Jacobian of a multi-class LR model in closed form:

\frac{\partial F_j}{\partial x_i}(x) = F_j(x)\left(w_{j,i} - \sum_{l=1}^{N} F_l(x)\, w_{l,i}\right) \qquad (6)

where notations are the ones used in Equation 9.
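As an illustration, this Jacobian can be computed directly; the sketch below assumes the model takes the softmax form F(x) = softmax(Wx) of Equation 9, with bias terms omitted for brevity:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax of a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def lr_jacobian(W, x):
    """Jacobian dF_j/dx_i of a multi-class logistic regression F(x) = softmax(Wx).

    Uses the standard softmax derivative:
        dF_j/dx_i = F_j(x) * (W[j, i] - sum_l F_l(x) * W[l, i]).
    Returns an array of shape (classes, features).
    """
    F = softmax(W @ x)
    return F[:, None] * (W - F @ W)
```

A quick finite-difference check confirms the closed form agrees with numerical differentiation of the model.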

Hence, we repeat the experiment from Section 4.2 but we now train multi-class logistic regression substitute models (instead of the DNN substitutes) to match the labels produced by the classifier oracles. Everything else is unchanged in the experimental setup. As illustrated in Figure 4(b), changing the substitute's model type generally degrades the approximation quality: the proportion of labels matched is reduced. The performance of LR substitutes is nevertheless competitive with that of DNN substitutes for LR and SVM oracles. Here again, the substitutes perform poorly on the decision tree oracle, with match rates of roughly 34% to 42%.

The last three rows of Table 2 quantify the impact of the two refinements introduced above on the proportion of test set labels produced by the oracle that were matched by LR substitutes. The first refinement, the periodic step size, allows LR substitutes to approximate their target oracle more accurately, as was also the case for DNN substitutes. For instance, the LR substitute trained with a periodic step size for the LR oracle matches 84.01% of the labels whereas the vanilla LR substitute only matched 72.00%. Similarly, the LR substitute trained with a periodic step size for the SVM oracle matches 82.19% of the labels whereas the vanilla substitute only matched 71.56%. The second refinement, reservoir sampling, allows us to reduce the number of queries with a limited impact on the substitute quality: fewer labels are matched than with the periodic step size substitutes, but more than with the vanilla substitutes. For instance, when approximating an SVM oracle, the vanilla substitute matched 71.56% of the labels, the periodic step size one 82.19%, and the periodic step size with reservoir sampling one 79.20%.

The benefit of vanilla LR substitutes compared to DNN substitutes is that they reach their asymptotic match rate faster, after only a few augmentation iterations and correspondingly fewer oracle queries. Furthermore, LR models are much lighter in terms of computational cost. These two factors could justify the use of LR (instead of DNN) substitutes in some contexts. The reservoir sampling technique performs well, especially on LR and SVM oracles.

4.4 Support Vector Machines Substitutes

Having observed that deep learning and logistic regression were both relevant when approximating classifier oracles, we now turn to SVMs for substitute learning. This is motivated by the strong cross-technique transferability of adversarial samples crafted using an SVM observed in Section 3, making SVMs good candidates for substitutes in a black-box attack.

SVM-based dataset augmentation - To train SVMs to approximate oracles in a manner analogous to the Jacobian-based dataset augmentation, we introduce a new augmentation technique. We replace the heuristic in Equation 4 by the following, which is adapted to the specificities of SVMs:

x + \lambda \, \frac{w_k}{\|w_k\|} \qquad (7)

where w_k is the weight vector indicating the hyperplane direction of the binary subclassifier k used to implement the multi-class SVM with the one-vs-the-rest scheme, as detailed in Equation 12. This heuristic selects new points in the direction orthogonal to the hyperplane acting as the decision boundary for the binary SVM subclassifier corresponding to the input’s label. This is precisely the direction used in Equation 13 to find adversarial samples, but the step parameter is here generally set to lower values so as to find samples near the decision boundary instead of on the other side of it.
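A minimal sketch of this SVM-specific augmentation heuristic, assuming the one-vs-the-rest hyperplane normals are stacked as rows of a weight matrix W; the sign of the step (along +w rather than -w) is our assumption for illustration, since the heuristic only specifies the orthogonal direction:

```python
import numpy as np

def svm_augment(X, labels, W, lam=0.1):
    """Synthetic points for SVM substitute training: step each sample by lam
    along the unit normal of the one-vs-rest hyperplane matching its label.

    X:      (n, features) batch of current substitute training points
    labels: (n,) oracle-assigned class index for each point
    W:      (classes, features) hyperplane normals of the binary subclassifiers
    """
    W_rows = W[labels]                                          # one normal per sample
    units = W_rows / np.linalg.norm(W_rows, axis=1, keepdims=True)
    return X + lam * units                                      # small step toward the boundary
```

Each new point thus lies at distance lam from its parent, in the direction most informative about the subclassifier's decision boundary.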

Experimental Validation - We repeat the experiments from Sections 4.2 and 4.3, but we now train 18 different SVM models to match the labels produced by the classifier oracles, instead of training DNN or LR substitutes. Unfortunately, our results suggest that SVMs are unable to perform knowledge transfer from oracles that are not SVMs themselves using the dataset augmentation technique introduced in Equation 7, even with the refinements introduced previously: the periodic step size and reservoir sampling. Indeed, the SVM substitute matches a large fraction of the SVM oracle's labels, but only small fractions of the DNN and LR oracle labels. These numbers are not improved by the use of a periodic step size and/or reservoir sampling. This could be due to the specificities of SVM training and the decision boundaries SVMs learn. Future work should investigate alternative augmentation techniques to confirm our findings.

In this section, we evaluated the capacity of DNN, LR, and SVM substitutes to approximate a classifier oracle by querying it for labels on inputs selected using a heuristic relying on the substitute’s Jacobian. We observed that predictions made by DNN and LR substitutes matched the targeted oracles more accurately than SVM substitute predictions. We emphasize that all experiments only required knowledge of 100 samples from the MNIST test set. In other words, learning substitutes does not require knowledge of the targeted classifier’s type, parameters, or training data, and can thus be performed under realistic adversarial threat models.

5 Black-Box Attacks of Remote Machine Learning Classifiers

Intra-technique and cross-technique transferability of adversarial samples, together with the learning of substitutes for classifier oracles, enable a range of attacks targeting remote machine learning based systems whose internals are unknown to adversaries. To illustrate the feasibility of black-box attacks on such remote systems, we target in an experiment two machine learning classifiers respectively trained and hosted by Amazon and Google. We find it is possible to craft samples misclassified by these commercial oracles at respective rates of 96.19% and 88.94% after making only 800 queries to learn substitute models approximating them.

5.1 The Oracle Attack Method

This section’s adversarial threat model is identical to the one used when learning substitutes in Section 4: adversaries have oracle access to the remote classifier. Its type, parameters, and training set are all unknown to the adversary. The attack method leverages Sections 3 and 4 of this paper, and is a generalization of the approach introduced in [20].

The adversary first locally trains a substitute model to approximate the remotely hosted classifier, using queries to the oracle as described in Section 4. We consider the use of deep learning and logistic regression to learn substitutes for classifiers. We apply the two refinements introduced in this paper: a periodic step size and reservoir sampling. Since substitute models are locally trained, the adversary has full knowledge of their model parameters. Thus, the adversarial sample crafting algorithm introduced in Section 6 that corresponds to the machine learning technique used to learn the substitute is employed to craft adversarial samples misclassified by the substitute model. The adversary then leverages either intra-technique or cross-technique transferability of adversarial samples—depending on the techniques with which the substitute and oracle were learned: the inputs misleading the locally trained substitute model are very likely to also deceive the targeted remotely hosted oracle.
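The overall loop can be sketched as follows; all helper names (oracle, train_substitute, jacobian) are illustrative placeholders for the adversary's own components, not an API defined in this paper:

```python
import numpy as np

def blackbox_attack(oracle, X0, train_substitute, jacobian, iters=6, lam=0.1):
    """Substitute-training loop of the black-box attack (names illustrative).

    oracle:           maps a batch of inputs to labels (the remote classifier)
    X0:               small initial substitute training set
    train_substitute: fits a local model on (X, y) and returns it
    jacobian:         returns the substitute's per-class gradient rows at a point
    """
    X = X0.copy()
    for _ in range(iters):
        y = oracle(X)                                  # query the victim for labels
        model = train_substitute(X, y)
        # Jacobian-based augmentation: step each point along the sign of the
        # Jacobian row corresponding to its oracle-assigned label
        X_new = np.array([x + lam * np.sign(jacobian(model, x)[yi])
                          for x, yi in zip(X, y)])
        X = np.vstack([X, X_new])
    return train_substitute(X, oracle(X))              # final substitute model
```

Adversarial samples are then crafted against the returned substitute with the technique-appropriate algorithm from Section 6 and submitted to the remote oracle.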

Previous work conducted such an attack using a substitute and targeted classifier both trained using deep learning, demonstrating that the attack was realistic using the MetaMind API, which provides Deep Learning as a Service [20]. We generalize these results by performing the attack on Machine Learning as a Service platforms that employ techniques unknown to us: Amazon Web Services and Google Cloud Prediction. Both platforms automate the process of learning classifiers using a labeled dataset uploaded by the user. Unlike MetaMind, neither of these platforms claims to exclusively use deep learning to build classifiers. When analyzing our results, we found that Amazon uses logistic regression (cf. below), but to the best of our knowledge Google has never disclosed the technique it uses to train classifiers, ensuring that our experiment is properly blind.

5.2 Amazon Web Services Oracle

Amazon offers a machine learning service, Amazon Machine Learning (https://aws.amazon.com/machine-learning), as part of their Amazon Web Services platform. We used this service to train and host an ML classifier oracle. First, we uploaded a CSV-encoded version of the MNIST training set to an S3 bucket on Amazon Web Services, with pixel values truncated to a fixed number of decimal places. We then started the ML model training process on the Machine Learning service: we loaded the CSV training data from our S3 bucket, selected the multi-class model type, provided the target column in the CSV file, and kept the default configuration settings. Note that Amazon offers limited customization options: the settings allow one to customize the recipe (data transformations), specify a maximum model size and number of training epochs, disable training data shuffling, and change the regularization type between L1 and L2 or simply disable regularization. The training process takes a few minutes and outputs a classifier model, which we evaluated on the MNIST test set. We have no way to improve its performance beyond the limited customization options, as the intent of the service is to automate model training. Finally, we activated real-time predictions to be able to query the model for labels from our local machine.

We then use the Python API provided with the Amazon Machine Learning service to submit prediction queries to our trained oracle model and retrieve the output label. Although confidence values are available for predictions, we only consider the label, to ensure our threat model for adversarial capabilities remains realistic. We incorporate this oracle in our experimental setup and train two substitute models to approximate the labels produced by this oracle, a DNN and an LR, as SVM substitutes were dismissed by the conclusions of Section 4. We train two variants of the DNN and LR substitutes. The first variant is trained with the vanilla dataset augmentation and the second variant with the enhanced dataset augmentation introduced in this paper, which uses both a periodic step size and reservoir sampling. Learning is initialized with a substitute training set of 100 samples from the MNIST test set. For all substitutes, we measure the attack success as the proportion of the adversarial samples, produced using the fast gradient sign method (cf. Section 6) on the MNIST test set, that are misclassified by the Amazon oracle.

Substitute type DNN LR
(800 queries) 87.44% 96.19%
(6,400 queries) 96.78% 96.43%
(PSS + RS) (2,000 queries) 95.68% 95.83%
Table 3: Misclassification rates of the Amazon oracle on adversarial samples produced with DNN and LR substitutes after dataset augmentation iterations. Substitutes are trained without and with the refinements from Section 4: periodic step size (PSS) and reservoir sampling (RS).

Misclassification rates of the Amazon Machine Learning oracle on adversarial samples crafted using both the DNN and LR substitutes after dataset augmentation iterations are reported in Table 3. Results are given for models learned without and with the two refinements—periodic step size (PSS) and reservoir sampling (RS)—introduced in Section 4. With a misclassification rate of 96.19% for adversarial samples crafted using an LR substitute trained with 800 queries to the oracle, the model trained by Amazon is easily misled. To understand why, we carefully read the online documentation and eventually found one page indicating that the type of model trained by the Amazon Machine Learning service is an “industry-standard” multinomial logistic regression (http://docs.aws.amazon.com/machine-learning/latest/dg/types-of-ml-models.html). As seen in Section 3, LR is extremely vulnerable to intra-technique transferability and, to a lesser extent, vulnerable to cross-technique transferability. In fact, as pointed out by Goodfellow et al. [12], shallow models like logistic regression are unable to cope with adversarial samples and learn a classifier resistant to them. This explains why (1) the attack is very successful and (2) the LR substitute performs better than the DNN substitute.

Additionally, Table 3 shows how the use of a periodic step size (PSS) together with reservoir sampling (RS) allows us to reduce the number of queries made to the Amazon oracle while learning a DNN substitute producing adversarial samples with higher transferability to the targeted classifier. Indeed, we reduce the number of queries by a factor of more than 3, from 6,400 to 2,000, while only degrading the misclassification rate from 96.78% to 95.68%, which is still larger than the rate of 87.44% achieved after 800 queries by the substitute learned without PSS and RS. For the LR substitutes, we do not see any positive impact from the use of PSS and RS, which is most likely due to the fast convergence of LR substitute learning observed in Section 4.

5.3 Google Cloud Prediction Oracle

To test whether this weak robustness is limited to the Amazon Web Services platform, we now target the Google Cloud Prediction API service (https://cloud.google.com/prediction/). The procedure to train a classifier on Google's platform is similar to Amazon's. We first upload to Google's Cloud Storage service a CSV-encoded file of the MNIST training data identical to the one used to train the oracle on Amazon Machine Learning. We then activate the Prediction API on Google's Cloud Platform and train a model using the API method prediction.trainedmodels.insert. The only properties we are able to specify are the expected multi-class nature of our classifier model and the column in the CSV indicating target labels. We then evaluate the resulting model using the API method prediction.trainedmodels.predict and an uploaded CSV file of the MNIST test set. The API reports the trained model's accuracy on this test set.

We now use the Google Cloud Python API to connect our experimental setup to the Prediction API, thus allowing our algorithms to make queries to the Google classifier oracle. As we did for Amazon, we train two substitute models (DNN and LR) using an initial substitute training set of 100 samples from the MNIST test set. For each substitute type, we train two model variants: the first one without periodic step size (PSS) or reservoir sampling (RS), the second one with both PSS and RS. Table 4 reports the rate of adversarial samples produced by each of the four resulting substitutes and misclassified by the Google Prediction API oracle.

Substitute type DNN LR
(800 queries) 84.50% 88.94%
(6,400 queries) 97.17% 92.05%
(PSS + RS) (2,000 queries) 91.57% 97.72%
Table 4: Misclassification rates of the Google oracle on adversarial samples produced with DNN and LR substitutes after augmentation iterations. Substitutes are trained without and with the refinements from Section 4: periodic step size (PSS) and reservoir sampling (RS).

The model trained using Google’s machine learning service is a little more robust to adversarial samples than the one trained using Amazon’s service, but is still vulnerable to a large proportion of samples: 88.94% of adversarial samples produced with an LR substitute trained with 800 queries to the oracle are misclassified. This confirms, on a second platform, the feasibility of the black-box attacks demonstrated above against the classifier hosted by Amazon. Furthermore, if we use PSS and RS, the misclassification rate is 91.57% for the DNN substitute and 97.72% for the LR substitute, which again demonstrates that combining PSS and RS increases misclassification compared to the original method for the LR substitute, and reduces the number of queries by a factor of 3.2 (from 6,400 to 2,000) compared to the original method for the DNN substitute.

A brief discussion of defenses - In an effort to evaluate possible defenses against such attacks, we now add these adversarial samples to the MNIST training dataset and train a new instance of the classifier oracle with the same procedure. The new oracle achieves an accuracy on the MNIST test set comparable to the original. Adversarial samples crafted by training a new DNN substitute, even without PSS and RS, are still misclassified at high rates after the same numbers of augmentation iterations. This defense is thus not effective to protect the oracle from adversaries manipulating inputs. This is most likely due to the fact that the Google Prediction API uses shallow techniques to train its machine learning models, but we have no means to verify this. One could also try to deploy other defense mechanisms like defensive distillation [21]. Unfortunately, as we do not have any control over the training procedure used by Google Cloud, we cannot do so. To the best of our knowledge, Google has not disclosed the machine learning technique used to train models served by the Google Cloud Prediction API service. As such, we cannot make further recommendations on how to better secure models trained using this service.

6 Adversarial Sample Crafting

This section describes machine learning techniques used in this paper, along with methods used to craft adversarial samples against classifiers learned using these techniques. Building on previous work [22, 12, 19] describing how adversaries can efficiently select perturbations leading deep neural networks to misclassify their inputs, we introduce new crafting algorithms for adversaries targeting Support Vector Machines (SVMs) and Decision Trees (DTs).

6.1 Deep Neural Networks

Deep Neural Networks (DNNs) learn hierarchical representations of high dimensional inputs used to solve ML tasks [11], including classification. Each representation is modeled by a layer of neurons—elementary parameterized computing units—behaving like a multi-dimensional function. The input of each layer is the output of the previous layer multiplied by a set of weights, which are part of the layer’s parameters. Thus, a DNN can be viewed as a composition of parameterized functions whose parameters are learned during training. For instance, in the case of classification, the network is given a large collection of known input-label pairs and adjusts its parameters to reduce the label prediction error on these inputs. At test time, the model extrapolates from its training data to make predictions on unseen inputs.

To craft adversarial samples misclassified by DNNs, an adversary with knowledge of the model and its parameters can use the fast gradient sign method introduced in [12] or the Jacobian-based iterative approach proposed in [19]. We only provide here a brief description of the fast gradient sign method, which is the one we use in this work. To find an adversarial sample approximately solving the optimization problem stated in Equation 1, Goodfellow et al. [12] proposed to compute the following perturbation:

\delta_x = \varepsilon \, \mathrm{sign}\left(\nabla_x c(F, x, y)\right) \qquad (8)

where F is the targeted DNN, c its associated cost, and y the correct label of input x. In other words, perturbations are evaluated as the sign of the model’s cost function gradient with respect to inputs. An adversarial sample x* = x + δ_x is successfully crafted when it satisfies F(x*) ≠ y while its perturbation remains indistinguishable to humans. The input variation parameter ε sets the perturbation magnitude: higher input variations yield samples more likely to be misclassified by the DNN model but introduce more perturbation, which can be easier to detect.
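A minimal sketch of the fast gradient sign method, using a softmax-regression model so the gradient is available in closed form (for a DNN the gradient would instead come from backpropagation); the cross-entropy cost and its input gradient W^T(F(x) - onehot(y)) are standard:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(W, x, y, eps):
    """Fast gradient sign perturbation for F(x) = softmax(Wx).

    With cross-entropy cost c = -log F_y(x), the input gradient is
    grad_x c = W^T (F(x) - onehot(y)); the sample is moved eps along its sign.
    """
    F = softmax(W @ x)
    F[y] -= 1.0                     # F(x) - onehot(y)
    grad = W.T @ F
    return x + eps * np.sign(grad)
```

Because the perturbation is a sign vector scaled by eps, its max norm is exactly the input variation parameter.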

6.2 Multi-class Logistic Regression

Multi-class logistic regression is the generalization of logistic regression to classification problems with N > 2 classes [18]. Logistic regression seeks to find the hypothesis best matching the data among the class of hypotheses that are a composition of a sigmoid function over linear functions. A multi-class logistic regression model F can be written as:

F(x) = \left[ \frac{e^{w_j \cdot x}}{\sum_{l=1}^{N} e^{w_l \cdot x}} \right]_{j \in 1..N} \qquad (9)

where w = (w_1, \dots, w_N) is the set of parameters learned during training, e.g., by gradient descent or Newton’s method.

Adversaries can also craft adversarial samples misclassified by multi-class logistic regression models using the fast gradient sign method [12]. In the case of logistic regression, the method finds the most damaging perturbation (according to the max norm) by evaluating Equation 8 exactly, unlike the case of deep neural networks, where it only finds an approximation.

6.3 Nearest Neighbors

The k nearest neighbors (kNN) algorithm is a lazy-learning non-parametric classifier [18]: it does not require a training phase. Predictions are made on unseen inputs by considering the k points in the training set that are closest according to some distance. The estimated class of the input is the one most frequently observed among these points. When k is set to 1, as is the case in this paper, the classifier is:

F(x) = Y_{\arg\min_{z \in X} \|x - z\|^2} \qquad (10)

which outputs one row of Y, the matrix of indicator vectors encoding labels for the training data X.

Although the kNN algorithm is non-parametric, it is still vulnerable to adversarial samples, as pointed out in [20, 24]. In this paper, we used the fast gradient sign method to craft adversarial samples misclassified by nearest neighbors. To be able to differentiate the model, we use a smoothed variant of the nearest neighbor classifier, which replaces the argmin operation in Equation 10 by a soft-min, as follows:

F(x) = \left[ \frac{\sum_{z \in X} e^{-\|x - z\|^2} \, Y_{z,j}}{\sum_{z \in X} e^{-\|x - z\|^2}} \right]_{j \in 1..N} \qquad (11)
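A sketch of such a smoothed nearest-neighbor classifier; the particular soft-min weighting used here (a softmax over negative squared distances with a temperature c) is our choice for illustration, and recovers plain 1-NN as c grows:

```python
import numpy as np

def softmin_knn(X_train, Y_onehot, x, c=1.0):
    """Smoothed 1-NN: replace the argmin over training points by a soft-min.

    F(x) = sum_z softmax(-c * ||x - z||^2)_z * Y_z; as c -> infinity the weight
    concentrates on the nearest training point, recovering the hard classifier.
    """
    d2 = np.sum((X_train - x) ** 2, axis=1)     # squared distances to training points
    w = np.exp(-c * (d2 - d2.min()))            # shift by min for numerical stability
    w = w / w.sum()                             # soft-min weights summing to 1
    return w @ Y_onehot                         # convex combination of label vectors
```

Unlike the hard argmin, this output is differentiable in x, so gradient-based crafting methods such as the fast gradient sign method can be applied.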

6.4 Multi-class Support Vector Machines

One possible implementation of a multi-class linear Support Vector Machine classifier is the one-vs-the-rest scheme. For each class k of the machine learning task, a binary Support Vector Machine classifier is trained with samples of class k labeled as positive and samples from all other classes labeled as negative [8]. To classify a sample, each binary linear SVM classifier makes a prediction and the overall multi-class classifier outputs the class assigned the strongest confidence. Each of these underlying binary linear SVMs classifies unseen samples using the following:

x \mapsto \mathrm{sign}(w_k \cdot x + b_k) \qquad (12)
Figure 5: SVM Adversarial Samples: to move a sample x away from its legitimate class in a binary SVM classifier with weight vector w, we perturb it by ε along the direction orthogonal to the decision hyperplane, i.e., along w.

We now introduce an algorithm to find adversarial samples misclassified by a multi-class linear SVM. To the best of our knowledge, this method is more computationally efficient than previous work [4]: it does not require any optimization. To craft adversarial samples, we perturb a given input in a direction orthogonal to the decision boundary hyperplane. More precisely, we perturb legitimate samples correctly classified by the model in the direction orthogonal to the weight vector corresponding to the binary SVM subclassifier that assigned the correct class output by the multi-class model. The intuition, illustrated in Figure 5 with a binary SVM classifier, can be formalized as follows: for a sample x belonging to class k, an adversarial sample misclassified by the multi-class SVM model can be computed by evaluating:

x^* = x - \varepsilon \, \frac{w_k}{\|w_k\|_F} \qquad (13)

where \|\cdot\|_F is the Frobenius norm, w_k the weight vector of binary SVM k, and ε the input variation parameter. The input variation parameter controls the amount of distortion introduced, as is the case in the fast gradient sign method.
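A sketch of this crafting rule for a one-vs-rest linear SVM whose subclassifier normals and biases are stacked as a weight matrix W and bias vector b (both known to the adversary, since the substitute is trained locally); the predicted class stands in for the correct class k:

```python
import numpy as np

def svm_adversarial(W, b, x, eps):
    """Adversarial sample against a one-vs-rest linear SVM (Equation 13 sketch).

    Steps eps against the unit normal of the subclassifier for the class the
    multiclass model assigns to x, pushing x across that hyperplane.
    """
    k = int(np.argmax(W @ x + b))                    # class output by the multiclass SVM
    return x - eps * W[k] / np.linalg.norm(W[k])     # move orthogonally to its hyperplane
```

No optimization loop is needed: a single closed-form step of magnitude eps suffices, which is what makes this crafting rule cheap.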

6.5 Decision Trees

Decision trees are defined by recursively partitioning the input domain [18]. Partitioning is performed by selecting a feature and a corresponding condition threshold that best minimize some cost function over the training data. Each intermediate node is an if-else statement with a threshold condition on one of the sample’s features. A sample is classified by traversing the decision tree from its root to one of its leaves according to the conditions specified in intermediate tree nodes. The leaf reached indicates the class assigned.

Adversaries can also craft adversarial inputs misclassified by decision trees. To the best of our knowledge, this is the first adversarial sample crafting algorithm proposed for decision trees. The intuition exploits the underlying tree structure of the classifier model. To find an adversarial sample, given a sample and a tree, we simply search for leaves with different classes in the neighborhood of the leaf corresponding to the decision tree’s original prediction for the sample. We then find the path from the original leaf to the adversarial leaf and modify the sample according to the conditions on this path, so as to force the decision tree to misclassify the sample into the adversarial class specified by the newly identified leaf.

Figure 6: Decision Tree Adversarial Samples: leaves indicate output classes (here the problem has 3 output classes) whereas intermediate nodes with letters indicate binary conditions (if the condition holds, traverse one child, else the other). To misclassify the sample from the class denoted by the green leaf, the adversary modifies it such that the conditions on the path to the red leaf evaluate so as to classify the sample in the class denoted by the red leaf.

This intuition, depicted in Figure 6, is formalized by Algorithm 2. The algorithm takes a decision tree, a sample, and the sample's legitimate class, and outputs an adversarial sample misclassified by the decision tree. The algorithm does not explicitly minimize the amount of perturbation introduced to craft adversarial samples, but as shown in Section 3.3, we found in practice that the perturbations involve a minuscule proportion of features.

1: Input: decision tree T, sample x, legitimate_class
2: x* ← x
3: legit_leaf ← find leaf in T corresponding to x
4: ancestor ← parent of legit_leaf
5: advers_leaf ← ∅
6: while advers_leaf = ∅ do
7:     if legit_leaf lies under ancestor.left then
8:         advers_leaf ← find leaf with class ≠ legitimate_class under ancestor.right
9:     else
10:         advers_leaf ← find leaf with class ≠ legitimate_class under ancestor.left
11:     end if
12:     ancestor ← parent of ancestor
13: end while
14: path ← nodes from legit_leaf to advers_leaf
15: for node ∈ path do
16:     perturb x* to change node’s condition output
17: end for
18: return x*
Algorithm 2 Crafting Decision Tree Adversarial Samples
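On a fitted sklearn decision tree, the idea of Algorithm 2 can be sketched as follows; this simplified version searches the whole tree for a differently-classed leaf rather than restricting the search to the neighborhood of the original leaf, so, like the paper's algorithm, it does not minimize the perturbation:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def tree_adversarial(clf, x, delta=1e-3):
    """Craft a decision-tree adversarial sample (Algorithm 2 sketch).

    Finds a leaf whose class differs from the sample's prediction, then nudges
    the features named by the threshold conditions on the path to that leaf.
    """
    t = clf.tree_
    x_adv = x.astype(float).copy()
    orig = clf.predict(x.reshape(1, -1))[0]

    def search(node, conditions):
        # depth-first search returning the conditions leading to a differing leaf
        if t.children_left[node] == -1:                    # node is a leaf
            cls = clf.classes_[np.argmax(t.value[node])]
            return conditions if cls != orig else None
        f, thr = t.feature[node], t.threshold[node]
        return (search(t.children_left[node], conditions + [(f, thr, True)])
                or search(t.children_right[node], conditions + [(f, thr, False)]))

    for f, thr, go_left in search(0, []):
        if go_left and x_adv[f] > thr:
            x_adv[f] = thr - delta        # force "x[f] <= thr" to hold
        elif not go_left and x_adv[f] <= thr:
            x_adv[f] = thr + delta        # force "x[f] > thr" to hold
    return x_adv
```

Because only the features tested along the new path are touched, the perturbation typically affects a small fraction of features, matching the observation in Section 3.3.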

7 Discussion and Related Work

Upon completion of their training on collections of known input-label pairs, classifiers make label predictions on unseen inputs [18]. Models extrapolate from knowledge extracted by processing input-label pairs during training to make label predictions. Several factors, including (1) imperfections in the training algorithms, (2) the linearity of many underlying components used to build machine learning models, and (3) the limited number of training points, which are not always representative of the entire plausible input domain, leave numerous machine learning models exposed to adversarial manipulations of their inputs despite excellent performance on legitimate—expected—inputs.

Our work builds on a practical method for attacking black-box deep learning classifiers [20]. Learning substitute models approximating the decision boundaries of targeted classifiers alleviates the need of previous attacks [22, 12, 19] for knowledge of the target architecture and parameters. We generalized this method and showed that it can target any machine learning classifier. We also reduced its computational cost by (1) introducing substitute models trained using logistic regression instead of deep learning and (2) decreasing the number of queries made through reservoir sampling. Learning substitutes is an instance of knowledge transfer, a set of techniques to transfer the generalization knowledge learned by one model into another model [9, 10].

This paper demonstrates that adversaries can reliably target classifiers whose characteristics are unknown and which are deployed remotely, e.g., by Machine Learning as a Service platforms. The existence of such a threat vector calls for the design of defensive mechanisms [17]. Unfortunately, we found that defenses proposed in the literature—such as training with adversarial samples [12]—were ineffective, or we were unable to deploy them because of our lack of access to the targeted machine learning model—for instance, distillation [21]. This failure is most likely due to the shallowness of models like logistic regression, which support the services offered by Amazon and Google, although we are unable to confirm that statement in Google’s case using available documentation.

This work is part of a series of security evaluations of machine learning algorithms [1, 5]. Unlike us, previous work in this field assumed knowledge of the model architecture and parameters [6, 14]. Our threat model considered adversaries interested in misclassification at test time, once the model has been deployed. Other largely unexplored threat models exist. For instance, poisoning the training data used to learn models was only considered in the context of binary SVMs whose training data is known [7] or anomaly detection systems whose underlying model is known [15].

8 Conclusions

Our work first exposed the strong phenomenon of adversarial sample transferability across the machine learning space. Not only do we find that adversarial samples are misclassified across models trained using the same machine learning technique, but also across models trained by different techniques. We then improved the accuracy and reduced the computational complexity of an existing algorithm for learning substitute models of machine learning classifiers. We showed that DNNs and LR could both effectively be used to learn a substitute model for many classifiers trained with a deep neural network, logistic regression, support vector machine, decision tree, or nearest neighbors. In a final experiment, we demonstrated how all of these findings could be used to target online classifiers trained and hosted by Amazon and Google, without any knowledge of the model design or parameters, but instead simply by making label queries for inputs. The attack successfully forces these classifiers to misclassify 96.19% and 88.94% of their inputs, respectively.

These findings call for some validation of inputs used by machine learning algorithms. This remains an open problem. Future work should continue to improve the learning of substitutes to maximize their accuracy and the transferability of adversarial samples crafted to targeted models. Furthermore, poisoning attacks at training time remain largely to be investigated, leaving room for contributions to the field.

References

  • [1] M. Barreno, B. Nelson, R. Sears, A. D. Joseph, and J. D. Tygar. Can machine learning be secure? In Proceedings of the 2006 ACM Symposium on Information, computer and communications security, pages 16–25. ACM, 2006.
  • [2] E. Battenberg, S. Dieleman, et al. Lasagne: Lightweight library to build and train neural networks in Theano, 2015.
  • [3] J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, et al. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), volume 4, page 3. Austin, TX, 2010.
  • [4] B. Biggio, I. Corona, et al. Evasion attacks against machine learning at test time. In Machine Learning and Knowledge Discovery in Databases, pages 387–402. Springer, 2013.
  • [5] B. Biggio, G. Fumera, and F. Roli. Security evaluation of pattern classifiers under attack. Knowledge and Data Engineering, IEEE Transactions on, 26(4):984–996, 2014.
  • [6] B. Biggio, B. Nelson, and P. Laskov. Support vector machines under adversarial label noise. In ACML, pages 97–112, 2011.
  • [7] B. Biggio, B. Nelson, and P. Laskov. Poisoning attacks against support vector machines. In Proceedings of the 29th International Conference on Machine Learning, 2012.
  • [8] C. M. Bishop. Pattern recognition. Machine Learning, 2006.
  • [9] C. Bucila, R. Caruana, and A. Niculescu-Mizil. Model compression. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 535–541. ACM, 2006.
  • [10] T. Chen, I. Goodfellow, and J. Shlens. Net2net: Accelerating learning via knowledge transfer. In Proceedings of the 2016 International Conference on Learning Representations. Computational and Biological Learning Society, 2016.
  • [11] I. Goodfellow, Y. Bengio, and A. Courville. Deep learning. Book in preparation for MIT Press, 2016.
  • [12] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. In Proceedings of the 2015 International Conference on Learning Representations. Computational and Biological Learning Society, 2015.
  • [13] G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. In Deep Learning and Representation Learning Workshop at NIPS 2014. arXiv preprint arXiv:1503.02531, 2014.
  • [14] L. Huang, A. D. Joseph, B. Nelson, B. I. Rubinstein, and J. Tygar. Adversarial machine learning. In Proceedings of the 4th ACM Workshop on Security and Artificial Intelligence, pages 43–58. ACM, 2011.
  • [15] M. Kloft and P. Laskov. Online anomaly detection under adversarial impact. In International Conference on Artificial Intelligence and Statistics, pages 405–412, 2010.
  • [16] Y. LeCun and C. Cortes. The MNIST database of handwritten digits, 1998.
  • [17] P. McDaniel, N. Papernot, and Z. B. Celik. Machine Learning in Adversarial Settings. IEEE Security & Privacy Magazine, 14(3), May/June 2016.
  • [18] K. P. Murphy. Machine learning: a probabilistic perspective. MIT press, 2012.
  • [19] N. Papernot, P. McDaniel, et al. The limitations of deep learning in adversarial settings. In Proceedings of the 1st IEEE European Symposium on Security and Privacy. IEEE, 2016.
  • [20] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, et al. Practical black-box attacks against deep learning systems using adversarial examples. arXiv preprint arXiv:1602.02697, 2016.
  • [21] N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. Swami. Distillation as a defense to adversarial perturbations against deep neural networks. In Proceedings of the 37th IEEE Symposium on Security and Privacy. IEEE, 2016.
  • [22] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, et al. Intriguing properties of neural networks. In Proceedings of the 2014 International Conference on Learning Representations. Computational and Biological Learning Society, 2014.
  • [23] J. S. Vitter. Random sampling with a reservoir. ACM Transactions on Mathematical Software (TOMS), 11(1):37–57, 1985.
  • [24] D. Warde-Farley and I. Goodfellow. Adversarial perturbations of deep neural networks. In T. Hazan, G. Papandreou, and D. Tarlow, editors, Advanced Structured Prediction. 2016.

9 Acknowledgments

Research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-13-2-0045 (ARL Cyber Security CRA). The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.