Deep learning is ubiquitous: deep neural networks achieve exceptional performance on challenging tasks like machine translation [1, 2], diagnosing medical conditions such as diabetic retinopathy [3, 4] or pneumonia, malware detection [6, 7, 8], and classification of images [9, 10]. This success is often attributed in part to developments in hardware (e.g., GPUs and TPUs) and the availability of large datasets (e.g., ImageNet), but more importantly also to the architectural design of neural networks and the remarkable performance of stochastic gradient descent. Indeed, deep neural networks are designed to learn a hierarchical set of representations of the input domain. These representations project the input data in increasingly abstract spaces—or embeddings—eventually sufficiently abstract for the task to be solved (e.g., classification) with a linear decision function.
Despite the breakthroughs they have enabled, the adoption of deep neural networks (DNNs) in security- and safety-critical applications remains limited, in part because they are often considered black-box models whose performance is not entirely understood and is controlled by a large set of parameters—modern DNN architectures are often parameterized with over a million values. This is paradoxical because of the very nature of deep learning: an essential part of the design philosophy of DNNs is to learn a modular model whose components (layers of neurons) are simple in isolation yet powerful and expressive in combination—thanks to their orchestration as a composition of non-linear functions.
In this paper, we harness this intrinsic modularity of deep learning to address three well-identified criticisms directly relevant to its security: the lack of reliable confidence estimates, model interpretability, and robustness. We introduce the Deep k-Nearest Neighbors (DkNN) classification algorithm, which enforces conformity of the predictions made by a DNN on test inputs with respect to the model’s training data. For each layer in the DNN, the DkNN performs a nearest neighbor search to find training points for which the layer’s output is closest to the layer’s output on the test input of interest. We then analyze the labels of these neighboring training points to ensure that the intermediate computations performed by each layer remain conformal with the final model’s prediction.
In adversarial settings, this yields an approach to defense that differs from prior work in that it addresses the underlying cause of poor model performance on malicious inputs rather than attempting to make particular adversarial strategies fail. Rather than nurturing model integrity by attempting to correctly classify all legitimate and malicious inputs, we ensure the integrity of the model by creating a novel characterization of confidence, called credibility, that spans the hierarchy of representations within a DNN: any credible classification must be supported by evidence from the training data. Conversely, a lack of credibility indicates that the input must be ambiguous or adversarial. Indeed, the large error space of ML models exposes a large attack surface, which adversaries exploit through threat vectors like adversarial examples (see below).
Our evaluation shows that the integrity of the DkNN classifier is maintained when its prediction is supported by the underlying training manifold. This support is evaluated as a level of “confidence” in the prediction’s agreement with the nearest neighbors found at each layer of the model, and analyzed with conformal prediction [19, 20]. Returning to the desired properties of the model: (a) confidence can be viewed as estimating the distance between the test input and the model’s training points, (b) interpretability is achieved by finding points on the training manifold supporting the prediction, and (c) robustness is achieved when the prediction’s support is consistent across the layers of the DNN, i.e., when the prediction has high confidence.
Intuition for the Deep k-Nearest Neighbors
The intuition behind the DkNN is presented in Figure 1. As discussed below, this gives rise to explorations of the definition and importance of confidence, interpretability, and robustness, and of their role in machine learning in adversarial settings.
There have been recent calls from the security and ML communities to more precisely calibrate the confidence of predictions made by DNNs. This is critical in tasks like pedestrian detection for self-driving cars or automated diagnosis of medical conditions. Probabilities output by DNNs are commonly used as a proxy for their confidence. Yet, these probabilities are not faithful indicators of model confidence (see Section III-A). A notable counter-example is that of adversarial examples, which are often classified with more “confidence” (per the DNN’s output probabilities) than their legitimate counterparts, despite the model prediction being erroneous on these inputs [17, 23]. Furthermore, when a DNN assigns equal probabilities to two candidate labels (i.e., it has low confidence in either outcome), it may do so for at least two different reasons: (a) the DNN has not analyzed similar inputs during training and is extrapolating, or (b) the input is ambiguous—perhaps as a result of an adversary attempting to subvert the system, or of the sample being collected with a naturally noisy observation process.
In the DkNN, the number of nearest neighboring training points whose label does not match the prediction made on a test input defines an estimate of the input’s nonconformity to the training data. The larger that number is, the weaker the training data supports the prediction. To formalize this, we operate in the framework of conformal prediction and compute both the confidence and credibility of DkNN predictions. The former quantifies the likelihood of the prediction being correct given the training set, while the latter characterizes how relevant the training set is to the prediction. In our experiments (see Sections V and VII), we find that credibility is able to reliably identify the lack of support from the training data when predicting far from the training manifold.
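As a concrete toy illustration of this nonconformity count (not the paper’s implementation; the labels and per-layer neighbor multi-sets below are invented):

```python
def nonconformity(neighbor_labels_per_layer, candidate_label):
    """Count, across all layers, the nearest neighbors whose
    training label disagrees with the candidate label."""
    return sum(
        sum(1 for lbl in layer_labels if lbl != candidate_label)
        for layer_labels in neighbor_labels_per_layer
    )

# Toy example: 2 layers, k=5 neighbors found at each layer.
neighbors = [["cat", "cat", "dog", "cat", "cat"],
             ["cat", "cat", "cat", "bird", "cat"]]
nonconformity(neighbors, "cat")  # 2: weak disagreement, "cat" is well supported
nonconformity(neighbors, "dog")  # 9: strong disagreement, "dog" lacks support
```

A low count means the candidate label agrees with the training points surrounding the input at every layer; a high count signals the prediction is poorly supported.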
This property is the ability to construct an explanation for model predictions that can be easily understood by a human observer, or, put another way, to rationalize DNN predictions based on evidence—and answer the question: “Why did the model decide that?”. DNN decisions are difficult to interpret because neurons are arranged in a complex sequence of computations and the output representation of each layer is high-dimensional. This limited interpretability inhibits applications of deep learning in domains like healthcare, where trust in model predictions is key. In contrast, the DkNN algorithm is more interpretable by design because the nearest neighbors themselves provide explanations, for individual layer and overall DNN predictions, that are easily understood by humans because they lie in the input domain.
Robustness to input perturbation is another important requirement for security [26, 27, 28, 29, 30] and safety in ML systems. While DNNs are robust to random perturbations of their inputs, they are vulnerable to small intentional perturbations of their inputs at test time—known as adversarial examples [17, 23]. This attack vector allows an adversary to fully control a DNN’s predictions, even if it has no access to the model’s training data or internal parameters. Small perturbations introduced by adversarial examples are able to arbitrarily change the DNN’s output because they are gradually magnified by non-linearities successively applied by each layer in the model. Put another way, when the DNN misclassifies an input, there is necessarily one of its layers that transformed the input’s representation, which was by definition in the correct class initially, into a representation closer to inputs from the wrong class. In contrast, the DkNN classifier prevents this by identifying changes in the labels of nearest neighboring training points between lower and higher layers of the DNN as an indicator that the DNN is mispredicting (see Figure 1). In essence, the DkNN removes exploitable degrees of freedom available to adversaries attempting to manipulate the system’s predictions—thus offering a form of robustness to adversarial example attacks (see Section VII). Note that this is not simply an ensemble approach that combines the predictions from multiple models; our DkNN algorithm inspects the intermediate computations of a single DNN to ensure its predictions are conformal with its training data.
To summarize, we make the following contributions:
We introduce the Deep k-Nearest Neighbors (DkNN) algorithm that measures the nonconformity of a prediction on a test input with the training data as an indirect estimate of the credibility of the model predictions (see Section IV).
We empirically validate that predictions made by a DkNN are more reliable estimates of credibility than those of a DNN on naturally occurring out-of-distribution inputs. On inputs geometrically transformed or from classes not included in the training data, the DkNN’s credibility measurement is markedly lower than the confidence output by the corresponding DNN.
We demonstrate DkNN interpretability through a study of racial bias and fairness in a well known DNN (Section VI).
We show that the DkNN is able to identify adversarial examples generated using existing algorithms because of their low credibility (see Section VII). We also show that adaptive attacks against the DkNN often need to perturb input semantics to change the DkNN’s prediction.
We find these results encouraging and note that they highlight the benefit of analyzing confidence, interpretability and robustness as related properties of a DNN. Here, we exploit the DNN’s modularity and verify the conformity of predictions with respect to training data at each layer of abstraction, and therein ensure that the DNN converges toward a rational and interpretable output. Interestingly, and as explored in Section VII, Sabour et al. investigated the vulnerability of internal representations as a vehicle for creating malicious inputs. This suggests that in addition to enforcing these properties at the level of the model as a whole, it is important to defend each abstraction from malicious manipulation. Indeed, the work discussed throughout suggests that this is not only necessary, but also a useful tool in providing a potential defense against existing adversarial algorithms.
II Background on Deep Learning
Machine learning refers to a set of techniques that automate the analysis of large scale data. In this paper, we consider classification tasks where ML models are designed to learn mappings between an input domain and a predefined set of outputs called classes. For instance, the input domain may be PDF files and the classes “benign” or “malicious” when the task of interest is malware detection in PDF documents. Techniques like support vector machines and, more recently, deep learning—revisiting neural network architectures—are common choices to learn supervised models from data.
In this paper we build on deep neural networks (DNNs). DNNs are designed to learn hierarchical—and increasingly abstract—representations of the data. For instance, a neural network trained to recognize objects when presented with samples will typically first learn representations of the images that indicate the presence of various geometric shapes and colors, compose these to identify subsets of objects, before it reaches its final representation, which is the prediction. Specifically, a deep neural network is a composition of parametric functions referred to as layers. Each layer can be seen as a representation of the input domain. A layer is made up of neurons—small units that compute one dimension of the layer’s output. The layer indexed l (with l in 1..n) takes as its input the output of the previous layer f_{l-1} and applies a non-linear transformation to compute its own output f_l. The behavior of these non-linearities is controlled through a set of parameters θ_l, which are specific to each layer. These parameters, also called weights, link the neurons of a given layer to the neurons of the layer that precedes it. They encode knowledge extracted by the model from the training data (see below). Hence, given an input x, a neural network performs the following computation to predict its class:

f(x) = f_n(θ_n, f_{n-1}(θ_{n-1}, ... f_1(θ_1, x)))

When possible, we simplify the notation by omitting the vector parameters θ = (θ_1, ..., θ_n), in which case we write f(x).
During training, the model is presented with a large collection of known input-output pairs (X, Y). To begin, initial values for the weights θ are drawn randomly. We then take a forward pass through the model: given an input x ∈ X and label y ∈ Y, we compute its current belief f(x), which is a vector whose components are the estimated probabilities of x belonging to each of the classes: e.g., the 5-th component is the estimated probability of x belonging to the 5-th class. The model’s prediction error is estimated by computing the value of a cost—or loss—function given the current prediction f(x) and the true label y. In the backward pass, this error is differentiated with respect to all of the parameters in θ, and their values are updated to improve the predictions of the neural network f. By iteratively taking forward and backward passes, values of the model parameters that (approximately) minimize the loss function on the training data are found.
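The training loop just described can be sketched for a minimal softmax-regression “network” (a single layer in plain NumPy; the synthetic data, learning rate, and step count are illustrative choices, not the paper’s setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2D data with two linearly separable classes.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
W = rng.normal(scale=0.1, size=(2, 2))   # random initial weights
b = np.zeros(2)

for _ in range(200):
    logits = X @ W + b                               # forward pass
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)                # softmax probabilities
    onehot = np.eye(2)[y]
    grad = (p - onehot) / len(X)                     # dLoss/dlogits (cross-entropy)
    W -= 0.5 * X.T @ grad                            # backward pass: update weights
    b -= 0.5 * grad.sum(axis=0)

accuracy = (p.argmax(axis=1) == y).mean()            # training accuracy
```

Real DNNs replace the single linear map with the deep composition f(x) above and compute the gradients by backpropagation through every layer, but the forward/backward alternation is the same.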
The model is then deployed to predict on data unseen during training. This is the inference phase: the model takes a forward pass on the new—test—input and outputs a label prediction. One hopes that the model will generalize to this test data to infer the true label from the patterns it has encountered in its training data. This is often, but not always, the case, as evidenced by adversarial examples (see Section III-C below).
III On Confidence, Interpretability & Robustness
We systematize knowledge from previous efforts that tackled the problem of confidence in machine learning. Their strengths and limitations motivate the choices made in Section IV to design our approach for measuring confidence in DNNs. Finally, we position our work among existing literature on interpretability and robustness in (deep) machine learning.
III-A Confidence in Machine Learning
There exist several sources of uncertainty in ML applications. Observations made to collect the dataset introduce aleatoric uncertainty when they do not include all necessary explanatory variables in the data. For instance, a spam dataset that only contains email metadata but not their content would introduce substantial aleatoric uncertainty. In this paper, we focus on epistemic uncertainty—or model uncertainty—introduced by a ML model because it is learned from limited data.
Below, we survey approaches for estimating the confidence of DNNs. The most frequently used, adding a softmax layer to the DNN, is not reliable for inputs that fall off the model’s training manifold. Other approaches, like Bayesian deep learning, remain computationally expensive. In Section IV, we thus introduce an approach that provides more reliable model uncertainty estimates.
The output of DNNs used for classification is a vector f(x) that is typically interpreted as estimates of the model’s confidence for classifying input x in each class of the task considered. This vector is almost always obtained by adding a softmax layer that processes the logits—or class scores—output by the penultimate model layer. Known as Platt scaling, the approach is equivalent to fitting a logistic regression to the classifier’s class scores:

softmax(z)_j = exp(z_j) / Σ_k exp(z_k)

The logits (i.e., class scores) z are floating point values that are unbounded, whereas the output of the softmax is a vector of floating point values that sum up to 1 and are individually bounded between 0 and 1.
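A minimal sketch of this softmax computation (the max-subtraction is a standard numerical-stability trick, not part of the definition above):

```python
import numpy as np

def softmax(logits):
    """Map unbounded class scores to probabilities in [0, 1] summing to 1."""
    z = logits - np.max(logits)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

p = softmax(np.array([3.2, -1.0, 0.5]))  # largest logit gets largest probability
```

The ordering of the logits is preserved, which is why the argmax of the softmax output is used as the predicted class.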
Contrary to popular belief, this approach is not a reliable estimator of confidence. Adversarial examples are good counter-examples that illustrate the weakness of this metric: they are inputs crafted by adding a perturbation that forces ML models to misclassify inputs that were originally correctly classified [23, 17]. Yet, DNNs output a larger confidence for the wrong class when presented with an adversarial example than the confidence they assigned to the correct class when presented with the legitimate input: in other words, the softmax indicates that the DNN is more confident when it is mistaken than when it predicts the correct answer.
Another popular class of techniques estimates the model uncertainty that deep neural network architectures introduce, because they are learned from limited data, by invoking the Bayesian formalism. Bayesian deep learning introduces a distribution over models or their parameters (e.g., the weights that link different layers of a DNN) in order to offer principled uncertainty estimates. Unfortunately, Bayesian inference remains computationally hard for neural networks. Hence, different degrees of approximation are made to reduce the computational overhead [40, 41, 42, 43], including about the prior that is specified for the parameters of the neural network. Despite these efforts, it remains difficult to implement Bayesian neural networks; thus radically different directions, which require fewer modifications to the learning algorithm, have been proposed recently. One such proposal is to use dropout at test time to estimate uncertainty. Dropout was originally introduced as a regularization technique for training deep neural networks: for each pair of forward and backward passes (see above for details), the output of a random subset of neurons is set to 0; i.e., they are dropped. Gal et al. instead proposed to use dropout at test time and cast it as approximate Bayesian inference. Because dropout can be seen as an ensembling method, this approach naturally generalizes to using ensembles of models as a proxy to estimate predictive uncertainty.
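Test-time dropout can be sketched with a one-hidden-layer network in plain NumPy; the architecture, drop rate, and number of Monte Carlo passes below are arbitrary illustrative choices, not those of Gal et al.:

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(x, W1, W2, drop_rate=0.5, keep_dropout=True):
    """One hidden ReLU layer with dropout. Keeping dropout active at
    test time yields stochastic outputs for MC-dropout uncertainty."""
    h = np.maximum(0, x @ W1)                    # hidden layer
    if keep_dropout:
        mask = rng.random(h.shape) > drop_rate   # drop random neurons
        h = h * mask / (1 - drop_rate)           # inverted-dropout scaling
    return h @ W2                                # class scores

W1, W2 = rng.normal(size=(4, 32)), rng.normal(size=(32, 3))
x = rng.normal(size=4)

# Monte Carlo estimate: keep dropout on at test time, average many passes.
samples = np.stack([forward(x, W1, W2) for _ in range(100)])
mean_scores = samples.mean(axis=0)   # predictive mean over stochastic passes
uncertainty = samples.std(axis=0)    # per-class spread ~ model uncertainty
```

The spread across passes is the uncertainty signal: inputs far from the training data tend to produce a larger spread than inputs the model has learned well.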
III-B Interpretability in Machine Learning
It is not only difficult to calibrate DNN predictions to obtain reliable confidence estimates, but also to present a human observer with an explanation for model outputs. Answering the question “Why did the model decide that?” can be a complex endeavor for DNNs when compared to somewhat more interpretable models like decision trees (at least trees that are small enough for a human to understand their decision process). The ability to explain the logic followed by a model is also key to debugging ML-driven autonomy.
Progress in this area of interpretability for ML (sometimes called explainable AI) remains limited because the criteria for success are ill-defined and difficult to quantify. Nevertheless, legislation like the European Union’s General Data Protection Regulation requires that companies deploying ML and other forms of analysis on certain sensitive data provide such interpretable outputs if the prediction made by a ML model is used to make decisions without a human in the loop; as a result, there is a growing body of literature addressing interpretability [48, 49, 50, 51, 52, 53, 54].
A by-product of the approach introduced in Section IV is that it returns exemplar inputs, also called prototypes [55, 56], to interpret predictions through training points that best explain the model’s output because they are processed similarly to the test input considered. This approach to interpretability through an explanation by example was pioneered by Caruana et al., who suggested that a comparison of the representation predicted by a single layer neural network with the representations learned on its training data would help identify points in the training data that best explain the prediction made. Among notable follow-ups [58, 59, 60], this technique was also applied to visualize relationships learned between words by the word2vec language model. As detailed in Section IV, we search for nearest neighboring training points not only at the level of the embedding layer but at the level of each layer within the DNN, and use the labels of the nearest neighboring training points to provide confidence, interpretability and robustness.
Evaluating interpretability is difficult because of the involvement of humans. Doshi-Velez and Kim identify two classes of evaluation for interpretability: (1) the model is useful for a practical (and perhaps simplified) application used as a proxy to test interpretability [50, 62, 49], or (2) the model is learned using a specific hypothesis class already established to be interpretable (e.g., a sparse linear model or a decision tree). Our evaluation falls under the first category: exemplars returned by our model simplify downstream practical applications.
III-C Robustness in Machine Learning
The lack of confidence and interpretability of DNN outputs is also illustrated by adversarial examples: models make mistakes on these malicious inputs, yet their confidence is often higher in the mistake than when predicting on a legitimate input. Despite the inputs being originally correctly classified, the perturbation added to craft an adversarial example changes the output of a model. In computer vision applications, because the perturbation added is so small in the pixel space, humans are typically unaffected: adversarial images are visually indistinguishable from their legitimate counterparts. This not only shows that ML models lack robustness to perturbations of their inputs, but also, again, that their predictions lack human interpretability.
Learning models robust to adversarial examples is a challenging task. Defining robustness is difficult, and the community has resorted to optimizing for robustness in a norm ball around the training data (i.e., making sure that the model’s predictions are constant in a neighborhood of each training point defined using a norm). Progress has been made by discretizing the input space, training on adversarial examples [72, 18], or attempting to remove the adversarial perturbation [73, 74]. Using robust optimization would be ideal but is difficult—often intractable—in practice because rigorous definitions of the input-domain region that adversarial examples make up are often non-convex and thus difficult to optimize over. Recent investigations showed that a convex region that includes this non-convex region defined by adversarial examples can be used to upper bound the potential loss inflicted by adversaries, and thus to perform robust optimization over it. Specifically, robust optimization is performed over a convex polytope that includes the non-convex space of adversarial examples [75, 76].
Following the analysis of Szegedy et al. suggesting that each layer of a neural network has large Lipschitz constants, there have been several attempts at making the representations better behaved, i.e., at proving small Lipschitz constants per layer, which would imply robustness to adversarial perturbations: small changes to the input of a layer would be guaranteed to produce bounded changes to the output of that layer. However, the techniques proposed have either restricted the ability to train a neural network (e.g., RBF units) or demonstrated marginal improvements in robustness (e.g., Parseval networks). The approach introduced in Section IV instead uses a nearest neighbors operation to ensure representations output by layers at test time are consistent with those learned during training.
IV Deep k-Nearest Neighbors Algorithm
The approach we introduce takes a DNN trained using any standard DNN learning algorithm and modifies the procedure followed to have the model predict on test data: patterns identified in the data at test time by internal components (i.e., layers) of the DNN are compared to those found during training to ensure that any prediction made is supported by the training data. Hence, rather than treating the model as a black box and trusting its predictions obliviously, our inference procedure ensures that each intermediate computation performed by the DNN is consistent with its final output—the label prediction.
The pseudo-code for our Deep k-Nearest Neighbors (DkNN) procedure is presented in Algorithm 1. We first motivate why analyzing representations internal to the underlying deep neural network (DNN) allows the DkNN algorithm to strengthen the interpretability and robustness of its predictions. This is the object of Section IV-A. Then, in Section IV-B, we situate our algorithm within the framework of conformal prediction to estimate and calibrate the confidence of DkNN predictions. The confidence, interpretability and robustness of the DkNN are empirically evaluated in Sections V, VI and VII, respectively.
IV-A Predicting with Neighboring Representations
As we described in Section II, DNNs learn a hierarchical set of representations. In other words, they project the input in increasingly abstract spaces, and eventually in a space where the classification decision can be made using a logistic regression—which is the role of the softmax layer typically used as the last layer of neural network classifiers (see Section III-A). In many cases, this hierarchy of representations enables DNNs to generalize well on data presented to the model at test time. However, phenomena like adversarial examples—especially those produced by feature adversaries, which we cover extensively in Section VII-C—or the lack of invariance to translations indicate that representations learned by DNNs are not as robust as the technical community initially expected. Because DNN training algorithms make the implicit assumption that test data is drawn from the same distribution as the training data, this has not been an obstacle to most developments of ML. However, when one wishes to deploy ML in settings where safety or security are critical, it becomes necessary to invent mechanisms suitable for identifying when the model is extrapolating too much from the representations it has built with its training data. Hence, the first component of our approach analyzes these internal representations at test time to detect inconsistencies with patterns analyzed in the training data.
Briefly put, our goal is to ensure that each intermediate computation performed by the deep neural network is conformal with the final prediction it makes. Naturally, we rely on the layered structure afforded by DNNs to define these intermediate computation checks. Indeed, each hidden layer, internal to the DNN, outputs a different representation of the input presented to the model. Each layer builds on the representation output by the layer that precedes it to compute its own representation of the input. When the final layer of a DNN indicates that the input should be classified in a particular class, it outputs, in a way, an abstract representation of the input which is the class itself. This representation is computed based on the representation output by the penultimate layer, which itself must have been characteristic of inputs from this particular class. The same reasoning can be recursively applied to all layers of the DNN until one reaches the input layer, i.e., the input itself.
In the event that a DNN mistakenly predicts the wrong class for an input, there is necessarily one of its layers that transformed the input’s representation, which was by definition in the correct class initially because the input is itself a representation in the input domain, into a representation that is closer to inputs from the wrong class eventually predicted by the DNN. This behavior, depicted in Figure 1, is what our approach sets out to algorithmically characterize in order to make DNN predictions more confident, interpretable and robust.
IV-A2 A nearest neighbors approach
In its simplest form, our approach (see Figure 1) can be understood as creating a nearest neighbors classifier in the space defined by each DNN layer. While prior work has considered fitting linear classifiers or support vector machines, we chose nearest neighbors because it makes explicit the relationship between predictions made by the model and its training data. We later leverage this aspect to characterize the nonconformity of model predictions.
Once the neural network has been trained, we record the output of its layers on each of its training points. For each layer l, we thus have a representation of the training data along with the corresponding labels, which allows us to construct a nearest neighbors classifier. Because the output of layers is often high-dimensional (e.g., the first layers of DNNs considered in our experiments have tens of thousands of neuron activations), we use the algorithm of Andoni et al. to efficiently perform the lookup of nearest neighboring representations in the high-dimensional spaces learned by DNN layers. It implements data-dependent locality-sensitive hashing [81, 82] to find nearest neighbors according to the cosine similarity between vectors. Locality-sensitive hashing (LSH) functions differ from cryptographic hash functions: they are designed to maximize the collision of similar items. This property is beneficial to the nearest neighbors problem in high dimensions. Given a query point, locality-sensitive hashing functions are first used to establish a list of candidate nearest neighbors: these are points that collide (i.e., are similar) with the query point. Then, the nearest neighbors can be found among this set of candidate points. In short, when we are presented with a test input x:
1. We run input x through the DNN to obtain the representations output by its l layers: f_λ(x) for λ ∈ 1..l.
2. For each of these representations f_λ(x), we use a nearest neighbors classifier based on locality-sensitive hashing to find the k training points whose representations at layer λ are closest to the one of the test input.
3. For each layer λ, we collect the multi-set Ω_λ of labels assigned in the training dataset to the k nearest representations found at the previous step.
4. We use all multi-sets Ω_λ to compute the prediction of our DkNN according to the framework of conformal prediction (see Section IV-B).
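These steps can be sketched as follows; for brevity, this toy version uses a brute-force cosine-similarity search in place of the locality-sensitive hashing of Andoni et al., and a small random ReLU network stands in for a trained DNN:

```python
import numpy as np

def layer_outputs(x, weights):
    """Run x through each layer, returning every intermediate representation."""
    reps, h = [], x
    for W in weights:
        h = np.maximum(0, h @ W)   # ReLU layer
        reps.append(h)
    return reps

def knn_labels(rep, train_reps, train_labels, k=3):
    """Labels of the k training points nearest in cosine similarity
    (brute force here; the paper uses LSH for efficiency)."""
    a = rep / (np.linalg.norm(rep) + 1e-12)
    B = train_reps / (np.linalg.norm(train_reps, axis=1, keepdims=True) + 1e-12)
    idx = np.argsort(-(B @ a))[:k]
    return train_labels[idx]

rng = np.random.default_rng(0)
weights = [rng.normal(size=(2, 8)), rng.normal(size=(8, 8))]  # stand-in "DNN"
X = rng.normal(size=(50, 2))
y = (X[:, 0] > 0).astype(int)

# Offline: record each layer's output on every training point.
train_reps = [np.stack(r) for r in zip(*(layer_outputs(x, weights) for x in X))]

# At test time: collect the neighbor-label multi-set at every layer.
test_reps = layer_outputs(np.array([1.0, 0.5]), weights)
multisets = [knn_labels(r, R, y) for r, R in zip(test_reps, train_reps)]
```

The resulting per-layer label multi-sets are exactly what the conformal prediction stage of Section IV-B consumes.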
The comparison of representations predicted by the DNN at test time with neighboring representations learned during training allows us to make progress towards providing certain desirable properties for DNNs, such as interpretability and robustness. Indeed, we demonstrate in Section VI that the nearest neighbors offer a form of natural—and most importantly human interpretable—explanations for the intermediate computations performed by the DNN at each of its layers. Furthermore, in order to manipulate the predictions of our DkNN algorithm with malicious inputs like adversarial examples, adversaries have to force inputs to closely align with representations learned from the training data by all layers of the underlying DNN. Because the first layers learn low-level features, and the adversary is no longer able to exploit non-linearities to gradually change the representation of an input from the correct class to the wrong class, it becomes harder to produce adversarial examples with perturbations that do not change the semantics (and label) of the input. We validate these claims in Section VII, but first present the second component of our approach, from which stem our predictions and their calibrated confidence estimates.
IV-B Conformal Predictions for DkNN Confidence Estimation
Estimating DNN confidence is difficult, and often an obstacle to their deployment (e.g., in medical applications or security-critical settings). Literature surveyed in Section III and our experience conclude that the probabilities output by the softmax layer, commonly used as a proxy for DNN confidence, are not well calibrated. In particular, they often overestimate the model’s confidence when making predictions on inputs that fall outside the training distribution (see Section V-C for an empirical evaluation). Here, we leverage ideas from conformal prediction and the nearest neighboring representations central to the DkNN algorithm to define how it makes predictions accompanied by confidence and credibility estimates. While confidence indicates how likely the prediction is to be correct according to the model’s training set, credibility quantifies how relevant the training set is to making this prediction. Later, we demonstrate experimentally that credibility is well calibrated when the DkNN predicts in both benign (Section V) and adversarial (Section VII) environments.
IV-B1 Inductive Conformal Prediction
Conformal prediction builds on an existing ML classifier to provide a probabilistically valid measure of confidence and credibility for predictions made by the underlying classifier [19, 20, 24]. In its original variant, which we don’t describe here in the interest of space, conformal prediction required that the underlying classifier be trained from scratch for each test input. This cost would be prohibitive in our case, because the underlying ML classifier is a DNN. Thus, we use an inductive variant of conformal prediction [83, 84] that does not require retraining the underlying classifier for each query because it assumes the existence of a calibration set—holdout data that does not overlap with the training or test data.
Essential to all variants of conformal prediction, including the inductive one, is the existence of a nonconformity measure, which indicates how different a labeled input is from previous observations of samples from the data distribution. Nonconformity is typically measured with the underlying machine learning classifier. In our case, we would like to measure how different a test input with a candidate label is from the previously labeled inputs that make up the training data. In essence, this is one of the motivations of the DkNN. For this reason, a natural proxy for the nonconformity of a labeled input is the number of nearest neighboring representations found by the DkNN in its training data whose label differs from the candidate label. When this number is low, there is stronger support in the training data modeled by the DkNN for the candidate label assigned to the test input. Instead, when this number is high, the DkNN was not able to find training points that support the candidate label, which is likely to be wrong. In our case, the nonconformity α(x, j) of an input x with a candidate label j is defined as:

α(x, j) = Σ_{λ ∈ 1..l} |{ i ∈ Ω_λ : i ≠ j }|     (3)

where Ω_λ is the multi-set of labels for the k training points whose representations are closest to the test input's at layer λ.
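As a sketch of this nonconformity measure, assuming the per-layer neighbor labels have already been retrieved (the labels and layer count below are illustrative, not from our experiments):

```python
import numpy as np

def nonconformity(neighbor_labels_per_layer, candidate_label):
    """Equation 3, sketched: count, across all layers, the nearest
    neighboring training points whose label disagrees with the
    candidate label."""
    return sum(int(np.sum(labels != candidate_label))
               for labels in neighbor_labels_per_layer)

# Toy example: a 2-layer model with k=5 neighbors per layer.
omega = [np.array([3, 3, 3, 7, 3]),   # labels of neighbors at layer 1
         np.array([3, 3, 1, 3, 3])]   # labels of neighbors at layer 2
print(nonconformity(omega, 3))  # well-supported label: alpha = 2
print(nonconformity(omega, 7))  # poorly supported label: alpha = 9
```

A low score means most neighbors at every layer already carry the candidate label; a high score means the training data offers little support for it.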
Before inference can begin and predictions can be made on unlabeled test inputs, we compute the nonconformity of the calibration dataset, which is labeled. The calibration data is sampled from the same distribution as the training data but is not used to train the model. The size of this calibration set should strike a balance between reducing the number of points that need to be held out from the training or test datasets and increasing the precision of the empirical p-values computed (see below). Let us denote with A = {α(x, y) : (x, y) in the calibration set} the nonconformity values computed on the calibration data.
Once all of these values are computed, the nonconformity score of a test input is compared with the scores computed on the calibration dataset through a form of hypothesis testing. Specifically, given a test input z, we perform the following for each candidate label j:
We use the neighbors identified by the DkNN and Equation 3 to compute the nonconformity α(z, j), where z is the test input and j the candidate label.
We calculate the fraction of nonconformity measures for the calibration data that are at least as large as the test input's. This is the empirical p-value of candidate label j:

p_j(z) = |{α ∈ A : α ≥ α(z, j)}| / |A|
The predicted label for the test input is the one assigned the largest empirical p-value, i.e., argmax_j p_j(z). Although two classes could be assigned identical empirical p-values, this does not happen often in practice. The prediction's confidence is set to 1 minus the second largest empirical p-value. Indeed, this quantity is the probability that any label other than the prediction is the true label. Finally, the prediction's credibility is the empirical p-value of the prediction: it bounds the nonconformity of any label assigned to the test input with the training data. In our experiments, we are primarily interested in the second metric, credibility.
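A minimal NumPy sketch of this inductive conformal prediction step, assuming the nonconformity scores have already been computed (function and variable names are ours, not from the paper's implementation):

```python
import numpy as np

def dknn_predict(alpha_calibration, alpha_test_per_label):
    """Given the nonconformity scores of the calibration set and of the
    test input under each candidate label, return the DkNN's prediction
    together with its confidence and credibility."""
    A = np.asarray(alpha_calibration, dtype=float)
    # Empirical p-value: fraction of calibration scores at least as
    # nonconforming as the test input under candidate label j.
    p = np.array([np.mean(A >= a_j) for a_j in alpha_test_per_label])
    order = np.argsort(p)
    prediction = int(order[-1])             # label with largest p-value
    credibility = float(p[order[-1]])       # p-value of the prediction
    confidence = 1.0 - float(p[order[-2]])  # 1 - second largest p-value
    return prediction, confidence, credibility

# Toy example: 10 calibration scores, 3 candidate labels.
print(dknn_predict([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [2, 8, 9]))
# → (0, 0.8, 0.8)
```

Label 0 wins because only 2 of the 10 calibration points were as conformal as the test input under it, while the runner-up label's low p-value (0.2) yields a confidence of 0.8.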
Overall, the approach described in this section yields the inference procedure outlined in Algorithm 1.
V Evaluation of the Confidence of DkNNs
The DkNN leverages nearest neighboring representations in the training data to define the nonconformity of individual predictions. Recall that in Section IV, we applied the framework of conformal prediction to define two notions of what commonly falls under the realm of “confidence”: confidence and credibility. In our experiments, we measured high confidence on a large majority of inputs. In other words, the nonconformity of the runner-up candidate label is high, making it unlikely—according to the training data—that this second label is the true answer. However, we observe that credibility varies across both in- and out-of-distribution samples.
Because our primary interest is the support (or lack thereof) that training data gives to DkNN predictions, which is precisely what credibility characterizes, we tailor our evaluation to demonstrate that credibility is well-calibrated, i.e., identifies predictions not supported by the training data. For instance, we validate the low credibility of DkNN predictions on out-of-distribution inputs. Here, all experiments are conducted in benign settings, whereas an evaluation of the DkNN in adversarial settings is found later in Section VII.
V-A Experimental Setup
We experiment with three datasets. First, the handwritten digit recognition task of MNIST is a classic ML benchmark: inputs are grayscale images of zip-code digits written on postal mail, and the task's classes (i.e., the model's expected output) are digits from 0 to 9. Due to artifacts it possesses, e.g., redundant features, we use MNIST as a “unit test” to validate our DkNN implementation. Second, the SVHN dataset is another seminal benchmark collected from Google Street View: inputs are colored images of house numbers and classes are digits from 0 to 9. This task is harder than MNIST because inputs are not as well pre-processed: e.g., images in the same class have different colors and lighting conditions. Third, the GTSRB dataset is a collection of traffic sign images to be classified in 43 classes (each class corresponds to a type of traffic sign). For all datasets, we use a DNN that stacks convolutional layers with one or more fully connected layers. Specific architectures used for each of the three datasets are detailed in the Appendix.
For each model, we implement the DkNN classifier described in Algorithm 1. We use a grid parameter search to set the number of neighbors and the size of the calibration set (750 for MNIST and SVHN, 850 for GTSRB because the task has more classes), which is obtained by holding out a subset of the test data not used for evaluation. Table I compares the classification accuracies of the DNN and the DkNN. When the DkNN does not outright improve accuracy, its impact on performance is minimal.
V-B Credibility on in-distribution samples
Given that there exists no ground truth for the credibility of a prediction, qualitative or quantitative evaluation of credibility estimates is often difficult. Indeed, datasets include the true label along with training and test inputs but do not indicate the expected credibility of a model on these inputs. Furthermore, some inputs may present ambiguities that make high credibility an undesirable outcome.
One way to characterize well-behaved credibility estimates is that they should truthfully convey the likelihood of a prediction being correct: highly confident predictions should almost always be correct and predictions with low confidence almost always wrong. Hence, we plot reliability diagrams to visualize the calibration of our credibility. They are histograms presenting accuracy as a function of credibility estimates. Given a set of test points, we group them into equally sized bins according to credibility: a point is placed in bin B_i if the model's credibility on it is contained in the interval ((i-1)/m, i/m], where m is the number of bins. Each bin is assigned the model's mean accuracy on the points it contains as its value:

acc(B_i) = |{(x, y) ∈ B_i : the model predicts y on x}| / |B_i|
Ideally, the credibility of a well-calibrated model should increase linearly with the accuracy of its predictions, i.e., the reliability diagram should approach the linear relation acc(B_i) ≈ i/m.
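The binning above can be sketched as follows; the equal-width bins and the convention of folding credibility 1.0 into the last bin are our assumptions of the standard reliability-diagram construction:

```python
import numpy as np

def reliability_diagram(credibility, correct, n_bins=10):
    """Bin test points by credibility and report the mean accuracy and
    point count of each bin (sketch).

    credibility: per-input credibility estimates in [0, 1].
    correct: booleans, True where the prediction matched the label."""
    credibility = np.asarray(credibility, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    accuracy, counts = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (credibility >= lo) & (credibility < hi)
        if hi == 1.0:                      # fold credibility == 1.0 in
            in_bin |= credibility == 1.0
        counts.append(int(in_bin.sum()))
        accuracy.append(float(correct[in_bin].mean())
                        if in_bin.any() else float("nan"))
    return edges, np.array(accuracy), np.array(counts)
```

Plotting `accuracy` as bars over `edges` (with `counts` as the overlaid line) reproduces the layout of the diagrams described next.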
Table I: Dataset | DNN Accuracy | DkNN Accuracy
Reliability diagrams are plotted for the MNIST, SVHN and GTSRB datasets in Figure 2. On the left, they visualize the confidence estimates output by the DNN softmax, that is, the probability assigned to the most likely class. On the right, they plot the credibility of DkNN predictions, as defined in Section IV. At first, it may appear that the softmax is better calibrated than its DkNN counterpart: its reliability diagrams are closer to the linear relation between accuracy and DNN confidence. However, if one takes into account the distribution of DkNN credibility values across the test data (i.e., the number of test points found in each credibility bin, reflected by the red line overlaid on the bars), it surfaces that the softmax is almost always very confident on test data. Instead, the DkNN uses the range of possible credibility values for datasets like SVHN, whose test set contains a larger number of inputs that are difficult to classify (reflected by the lower mean accuracy of the underlying DNN). We will see how this behavior is beneficial when processing out-of-distribution samples in Section V-C below and adversarial examples later in Section VII.
In addition, the credibility output by the DkNN provides insights into the test data itself. For instance, we find that credibility is sufficiently reliable to find test inputs whose label in the original dataset is wrong. In both the MNIST and SVHN test sets, we looked for images that were assigned a high credibility estimate by the DkNN for a label that did not match the one found in the dataset: i.e., the DkNN “mispredicted” the input's class according to the label included in the dataset. Figure 3 depicts some of the images returned by this heuristic. It is clear for all of them that the dataset label is either wrong (e.g., the MNIST 4 labeled as a 9 and the SVHN 5 labeled as a 1) or ambiguous (e.g., the penultimate MNIST image is likely to be a 5 corrected into a 0 by the person who wrote it, and two of the SVHN images were assigned the label of the digit that is cropped on the left of the digit that is in the center). We were not able to find mislabeled inputs in the GTSRB dataset.
V-C Credibility on out-of-distribution samples
We now validate the DkNN's prediction credibility on out-of-distribution samples. Inputs considered in this experiment are either from a different classification task (i.e., drawn from another distribution) or generated by applying geometrical transformations to inputs sampled from the distribution. Due to the absence of support for these inputs in the training manifold, we expect the DkNN's credibility to be low on them: the training data used to learn the model is not relevant to the test inputs the model is asked to classify.
For MNIST, the first set of out-of-distribution samples contains images from the NotMNIST dataset, which are images of unicode characters rendered using computer fonts. Images from NotMNIST have an identical format to MNIST but the classes are non-overlapping: none of the classes from MNIST (digits from 0 to 9) are included in the NotMNIST dataset (letters from A to J) and vice-versa. For SVHN, the analog set of out-of-distribution samples contains images from the CIFAR-10 dataset: they have the same format but again there is no overlap between SVHN and the objects and animals from CIFAR-10. For both the MNIST and SVHN datasets, we rotate all test inputs by a fixed angle to generate a second set of out-of-distribution samples. Indeed, despite the presence of convolutional layers that encourage invariance to geometrical transformations, DNNs classify rotated data poorly unless they are explicitly trained on examples of such rotated inputs.
The credibility of the DkNN on these out-of-distribution samples is compared with the probabilities predicted by the underlying DNN softmax on MNIST (left) and SVHN (right) in Figure 4. The DkNN algorithm assigns a substantially lower average credibility to inputs from the NotMNIST and rotated MNIST test sets than the softmax probabilities suggest. Similar observations hold for the SVHN model: the mean credibility the DkNN assigns to CIFAR-10 and rotated SVHN inputs is far lower than the corresponding softmax probabilities.
DkNN credibility is better calibrated on out-of-distribution samples than softmax probabilities: outliers to the training distribution are assigned low credibility, reflecting a lack of support from the training data.
Again, we tested here the DkNN only on “benign” out-of-distribution samples. Later, we make similar observations when evaluating the DkNN on adversarial examples in Section VII.
VI Evaluation of the Interpretability of DkNNs
The internal logic of DNNs is often controlled by a large set of parameters, as is the case in our experiments, and is thus difficult for a human observer to understand. Instead, the nearest neighbors central to the DkNN are an instance of explanations by example. Training points whose representations are near the test input's representation afford evidence relevant for a human observer to rationalize the DNN's prediction. Furthermore, research from the neuroscience community suggests that locality-sensitive hashing, the technique used in Section IV to find nearest neighbors in the DkNN, may be a general principle of computation in the brain [91, 92].
Defining interpretability is difficult, so we follow one of the evaluation methods outlined by Doshi-Velez and Kim. We demonstrate interpretability through a downstream practical application of the DkNN: fairness in ML.
Machine learning models reflect human biases, such as the ones encoded in their training data, which raises concerns about their lack of fairness towards minorities [94, 95, 96, 97, 98, 99, 100]. This is for instance undesirable when the model's predictions are used to make decisions or to influence the humans taking them: e.g., admitting a student to university, predicting whether a defendant awaiting trial can be released safely, or modeling consumer credit risk.
We show that nearest neighbors identified by the DkNN help understand how training data yields model biases. This is a step towards eliminating sources of bias during DNN training.
Here, we consider model bias with respect to a person's skin color. Models for computer vision potentially exhibit bias towards people of dark skin. In a recent study, Stock and Cissé demonstrate how an image of former US president Barack Obama throwing an American football in a stadium (see Figure 5) is classified as a basketball by a popular model for computer vision—a residual neural network (ResNet).
In the following, we reproduce their experiment and apply the DkNN algorithm to this architecture to provide explanations by example of the prediction made on this input. To do so, we downloaded the pre-trained model from the TensorFlow repository, as well as the ImageNet training dataset .
We plot the nearest neighbors from the training data for the class predicted by the model in Figure 5. These neighbors are computed using the representation output by the last hidden layer of the ResNet model. On the left, the test image processed by the DNN is the same as the one used by Stock and Cissé. Its neighbors in the training data are images of 7 black and 3 white basketball players (female and male). Note how the basketball is similar in appearance to the football in the image of Obama: it is of similar color and located in the air (often towards the top of the image). Hence, we conjecture that the ball may play an important role in the prediction.
We repeat the experiment with the same image cropped to remove the football (see Figure 5, right). The prediction changes to racket. Neighbors in this new training class are all white (female and male) tennis players. Here again, images share several characteristics of the test image: most noticeably the background is always green (and a lawn) but also more subtly the player is dressed in white and holding one of her or his hands in the air. While this does not necessarily contradict the bias identified in prior work, it offers alternative—perhaps complementary—explanations for the prediction made by the model. In this particular example, in addition to the skin color of Barack Obama, the position and appearance of the football contributed to the model’s original basketball prediction.
The simplicity of the heuristic suggests that beyond immediate benefits for human trust in deep learning, techniques that enable interpretability—like the DkNN—will make powerful debugging tools for deep learning practitioners to better identify the limitations of their algorithms and models in a semi-automated way. This heuristic also suggests steps towards eliminating bias in DNN training. For instance, one could remove ambiguous training points or add new points to prevent the model from learning an undesired correlation between a feature of the input (e.g., skin color) and one of the class labels (e.g., a sport). In the interest of space, we leave a more detailed exploration of this aspect to future work.
VII Evaluation of the Robustness of DkNNs
The lack of robustness to perturbations of their inputs is a major criticism faced by DNNs. Like other ML techniques, deep learning is for instance vulnerable to adversarial examples. Defending against these malicious inputs is difficult because DNNs extrapolate with too much confidence from their training data (as we reported in Section V-C). To illustrate, consider the example of adversarial training [17, 64]. The resulting models are robust to some classes of attacks because they are trained on inputs generated by these attacks, but they remain vulnerable to adapted strategies. In a direction orthogonal to defenses like adversarial training, which attempt to have the model always output a correct prediction, we show here that the DkNN is a step towards correctly handling malicious inputs like adversarial examples because it:
outputs more reliable confidence estimates on adversarial examples than the softmax (Section VII-A)
provides insights as to why adversarial examples affect undefended DNNs. In the applications considered, they target the layers that automate feature extraction to introduce ambiguity that eventually builds up to significantly change the end prediction of the model, despite the perturbation magnitude being small in the input domain (Section VII-B)
is robust to adaptive attacks we considered, which modify the input to align its internal representations with the ones of training points from a class that differs from the correct class of the input (see Section VII-C)
VII-A Identifying Adversarial Examples with the DkNN Algorithm
In Section V, we found that outliers to the training distribution modeled by a DNN could be identified at test time by ensuring that the model's internal representations are predominantly neighbored by training points whose labels are consistent with the prediction. This is achieved by the conformal prediction stage of the DkNN. Here, we show that this technique is also applicable to detecting malicious inputs, e.g., adversarial examples. In practice, we find that the DkNN algorithm yields well-calibrated responses on these adversarial inputs—meaning that the DkNN assigns low credibility to adversarial examples unless it can recover their correct class.
We craft adversarial examples using three representative algorithms: the Fast Gradient Sign Method (FGSM), the Basic Iterative Method (BIM), and the Carlini-Wagner attack (CW). Parameters specific to each attack are reported in Table II. We also include the accuracy of both the undefended DNN and the DkNN algorithm on these inputs. From this, we conclude that even though attacks successfully evade the undefended DNN, when this DNN is integrated with the DkNN inference algorithm, some accuracy on adversarial examples is recovered because the first layers of the DNN output representations on adversarial examples whose neighbors in the training data are in the original class (the true class of the image from which adversarial examples are crafted).
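For reference, FGSM, the simplest of the three attacks, perturbs each input feature by a budget ε in the direction of the sign of the loss gradient. A sketch on a toy logistic classifier, where the gradient is analytic (the weights and inputs below are made up; real attacks backpropagate through the DNN):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    """FGSM sketch on a toy model p(y=1|x) = sigmoid(w.x): step each
    feature by eps in the sign of the cross-entropy gradient, which is
    (p - y) * w for this classifier."""
    p = sigmoid(w @ x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

w = np.array([1.0, -2.0, 0.5])
x = np.array([0.5, -0.5, 1.0])        # logit w.x = 2.0, classified as y=1
x_adv = fgsm(x, 1.0, w, eps=0.4)
print(w @ x, w @ x_adv)               # the attack lowers the logit
```

BIM applies the same step iteratively with clipping, and CW instead solves an optimization problem over the perturbation; both follow the same gradient-driven principle.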
We will come back to this aspect in Section VII-B and conclude that the ambiguity introduced by adversarial examples is marked by a large multi-set of candidate labels in the first layers compared to non-adversarial inputs. However, the DkNN’s error rate remains high, despite the improved performance with respect to the underlying DNN. We now turn to the credibility of these predictions, which we left out of consideration until now, and show that because the DkNN’s credibility on these inputs is low, they can largely be identified.
In Figure 6, we plot reliability diagrams comparing the DkNN credibility on GTSRB adversarial examples with the softmax probabilities output by the DNN. Similar graphs for the MNIST and SVHN datasets are found in the Appendix. Credibility is low across all attacks for the DkNN, when compared to legitimate test points considered in Section V—unless the DkNN’s predicted label is correct as indicated by the quasi-linear diagrams. Recall that the number of points in each bin is reflected by the red line. Hence, the DkNN outputs a credibility below 0.5 for most inputs because predictions on adversarial examples are not conformal with pairs of inputs and labels found in the training data. This behavior is a sharp departure from softmax probabilities, which classified most adversarial examples in the wrong class with a confidence often above 0.9 for the FGSM and BIM attacks. We also observe that the BIM attack is more successful than the FGSM or the CW attacks at introducing perturbations that mislead the DkNN inference procedure. We hypothesize that it outputs adversarial examples that encode some characteristics of the wrong class, which would also explain its previously observed strong transferability across models [108, 66, 18].
DkNN credibility is better calibrated than softmax probabilities. When the DkNN outputs a prediction with high credibility, this often implies the true label of an adversarial example was recovered.
We conclude that the good performance of credibility estimates on benign out-of-distribution data observed in Section V is also applicable to adversarial test data considered here. The DkNN degrades its confidence smoothly as adversarial examples push its inputs from legitimate points to the underlying DNN’s error region. It is even able to recover some of the true labels of adversarial examples when the number of nearest neighboring representations in that class is sufficiently large.
VII-B Explaining DNN Mispredictions on Adversarial Examples
Nearest neighboring representations offer insights into why DNNs are vulnerable to small perturbations introduced by adversarial examples. We find that adversarial examples gradually exploit poor generalization as they are successively processed by each layer of the DNN: small perturbations are able to have a large impact on the model’s output because of the non-linearities applied by each of its layers. Recall Figure 1, where we illustrated how this behavior is reflected by neighboring representations, which gradually change from being in the correct class—the one of the corresponding unperturbed test input—to the wrong class assigned to the adversarial example.
To illustrate this, we analyze the labels of the nearest neighboring training representations at each layer when predicting on adversarial data. We expect the number of candidate labels (i.e., the size of the multi-set of labels for the training points with the nearest neighboring representations) to be larger for adversarial examples than for their legitimate counterparts, because this ambiguity would ultimately lead to the model making a mistake. This is what we observe in Figure 7. For both clean and adversarial examples, the number of candidate labels decreases as we move up the neural architecture from its input layer towards its output layer: the model projects the input in increasingly abstract spaces that are better suited to classify it. However, adversarial examples have more candidate labels in the lower layers than legitimate inputs: they introduce ambiguity that is later responsible for the model's mistake.
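This per-layer ambiguity measure can be sketched directly from the neighbors' labels (the toy values below are illustrative):

```python
import numpy as np

def candidate_labels_per_layer(neighbor_labels_per_layer):
    """Number of distinct labels among the k nearest neighbors at each
    layer -- a proxy for the ambiguity an input introduces."""
    return [len(np.unique(labels)) for labels in neighbor_labels_per_layer]

# A legitimate input typically converges to a single candidate label,
# while an adversarial example keeps several candidates in early layers.
legit = [np.array([3, 3, 5, 3]), np.array([3, 3, 3, 3])]
adv   = [np.array([3, 5, 8, 1]), np.array([5, 5, 3, 5])]
print(candidate_labels_per_layer(legit))  # [2, 1]
print(candidate_labels_per_layer(adv))    # [4, 2]
```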
In addition, the number of candidate labels (those of the nearest neighboring training representations) that match the final prediction made by the DNN is smaller for some attacks than for others, and smaller for all attacks than for legitimate inputs. This is particularly the case for the CW attack, which is likely the reason why the true label of the adversarial examples it produces is often recovered by the DkNN (see Table II). Again, this lack of conformity between neighboring representations at different layers explicitly characterizes weak support for the model's prediction in its training data.
Nearest neighbors provide a new avenue for measuring the strength of an attack: if an adversarial example is able to force many of the nearest neighboring representations to have labels in the wrong class eventually predicted by the model, it is less likely to be detected (e.g., by a DkNN or other techniques that may appear in the future) and also more likely to transfer across models.
We consider such an adaptive attack, which targets the internal representations of the DNN, in Section VII-C.
VII-C Robustness of the DkNN Algorithm to Adaptive Attacks
Our experimental results suggest that we should not only study the vulnerability of DNNs as a whole, but also at the level of their hidden layers. This is the goal of feature adversaries introduced by Sabour et al. Rather than forcing a model to misclassify, these adversaries produce adversarial examples that force a DNN to output an internal representation that is close to the one it outputs on a guide input. Conceptually, this attack may be deployed for any of the hidden layers that make up modern DNNs. For instance, if the adversary is interested in attacking layer l, it solves the following optimization problem:

x* = argmin_{x'} ‖f_l(x') − f_l(x_guide)‖  subject to  ‖x' − x‖_∞ ≤ ε

where f_l denotes the output of layer l, x the input being attacked, x_guide the guide input, and the norm on representations is typically chosen to be the ℓ2 norm.
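A sketch of this attack on a toy linear “layer” f(x) = Wx, where the gradient of the objective is analytic and projected gradient descent keeps the perturbation within an assumed ℓ∞ budget (real feature adversaries differentiate through the DNN; all values below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))     # hypothetical layer weights
x = rng.uniform(size=16)         # input under attack
guide = rng.uniform(size=16)     # guide input from another class
eps, lr = 0.3, 0.005             # assumed L-inf budget and step size

# Projected gradient descent on ||W x' - W guide||^2 within the eps-ball.
x_adv = x.copy()
for _ in range(500):
    grad = 2.0 * W.T @ (W @ x_adv - W @ guide)   # analytic gradient
    x_adv = np.clip(x_adv - lr * grad, x - eps, x + eps)

before = np.linalg.norm(W @ x - W @ guide)
after = np.linalg.norm(W @ x_adv - W @ guide)
print(after < before)  # the representation moved toward the guide's
```

The clipping step enforces the ℓ∞ constraint, so the attack trades off how closely the adversarial representation can match the guide's against the allowed perturbation budget.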
This strategy is a natural candidate for an adaptive attack against our DkNN classification algorithm. An adversary aware that the defender is using the DkNN algorithm to strengthen the robustness of its predictive model needs to produce adversarial examples that are not only (a) misclassified by the DNN that underlies the DkNN algorithm but also (b) closely aligned with internal representations of the training data for the class that the DNN is mistakenly classifying the adversarial example in.
Hence, we evaluate the DkNN against feature adversaries. We assume a strong, perhaps insider, adversary with knowledge of the training set used by the defender. This ensures that we consider the worst-case setting for deploying our DkNN algorithm and do not rely on security by obscurity.
Specifically, given a test point, we target the first layer analyzed by the DkNN, e.g., the output of the first convolutional layer in our experiments on MNIST and SVHN. In our feature adversaries attack, we let the guide input be the input from a different class whose representation at that layer is closest to the input we are attempting to attack. This heuristic returns a guide input that is already semantically close to the input being attacked, making it easier to find small perturbations in the input domain (here, the pixel domain) that force the representation of the input being attacked to match the representation of the guide input. We then run the attack proposed by Sabour et al. to find an adversarial input according to this guide and test the prediction of our DkNN when it is presented with the adversarial input.
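The guide-selection heuristic can be sketched as a class-restricted nearest-neighbor search over layer representations (array names are ours, and a brute-force search stands in for the paper's nearest-neighbor machinery):

```python
import numpy as np

def pick_guide(rep_test, label_test, reps_train, labels_train):
    """Return the index of the training input from a *different* class
    whose layer-l representation is closest to the attacked input's."""
    other = labels_train != label_test          # exclude the input's class
    dists = np.linalg.norm(reps_train[other] - rep_test, axis=1)
    return int(np.flatnonzero(other)[np.argmin(dists)])

# Toy example: 3 training representations in 2-D, two classes.
reps = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
labels = np.array([0, 1, 1])
print(pick_guide(np.array([0.9, 0.9]), 0, reps, labels))  # → 1
```

Restricting the search to other classes guarantees the guide's representation pulls the attacked input across a decision boundary while staying semantically nearby.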
Figure 8 shows a set of adversarial images selected according to their order of appearance in the SVHN test set (a similar figure for MNIST is found in the Appendix). Images are laid out on the grid such that rows indicate the class they were originally from (that is, the correct class of the input that was attacked) and columns correspond to the model's prediction on the adversarial image. Although the adversarial images depicted evade our DkNN algorithm, in the sense that the images are not classified in the class of the original input they were computed from, the perception of a human is also affected significantly: all images are either ambiguous or modified so much that the wrong (predicted) class is now drawn in the image. In other words, when the attack succeeded on MNIST (19.6% of the inputs) and SVHN (70.0% of the inputs), it altered some semantics in order to have the adversarial input's representation match the representation of the guide input from a different class. This can be explained by the fact that the adversary needs to target the representation output by the first layer in order to ensure the DkNN algorithm will find nearest neighbors in the predicted class when analyzing this layer.
The ambiguity of many adversarial images returned by the feature adversarial attack—together with the small norm of the perturbations these images carry—raises some questions about the methodology commonly followed in the literature to evaluate adversarial example attacks and defenses. Indeed, the robustness of a machine learning model for a computer vision application (e.g., image classification) is often defined as its ability to consistently predict the same class for all inputs found within a norm ball centered at any of the test set's inputs. The existence of inputs in this ball that are ambiguous to the human visual system suggests that we should establish a different definition of the adversarial space, one that characterizes human perception more accurately (perhaps one of the metrics used to evaluate compression algorithms, for instance). Note that this is not the case in other application domains, such as adversarial examples for malware detection [111, 6], where an algorithmic oracle is often available—such as a virtual machine running the executable, as shown in Xu et al. This suggests that future work should propose new definitions that not only characterize robustness with respect to inputs at test time but also in terms of the model's training data.
We introduced the Deep k-Nearest Neighbors (DkNN) algorithm, which inspects the internals of a deep neural network (DNN) at test time to provide confidence, interpretability and robustness properties. The DkNN algorithm compares the representations predicted by each layer with their nearest neighbors among the representations learned on the training data. The resulting credibility measure assesses the conformance of these representation predictions with the training data. When the training data and the prediction are in agreement, the prediction is likely to be accurate. If they are not in agreement, then the prediction lacks the support of the training data needed to be credible. This is the case with inputs that are ambiguous (e.g., some inputs contain multiple classes or are partly occluded due to imperfect preprocessing) or were maliciously perturbed by an adversary to produce an adversarial example. Hence, this characterization of confidence, which spans the hierarchy of representations within a DNN, helps ensure the integrity of the model. The neighbors also enable interpretability of model predictions because they are points in the input domain that serve as support for the prediction and are easily understood and interpreted by human observers.
Our findings highlight the benefits of integrating simple inference procedures as ancillary validation of the predictions of complex learning algorithms. Such validation is a potentially new avenue to provide security in machine learning systems. We anticipate that many open problems at the intersection of machine learning and security will benefit from this perspective, including availability and integrity. We are excited to explore these and other related areas in the near future.
The authors thank Úlfar Erlingsson for essential discussions on internal DNN representations and visualization of the DkNN. We also thank Ilya Mironov for very helpful discussions on the nearest neighbors in high dimensional spaces and detailed comments on a draft. The authors are grateful for comments by Martín Abadi, Ian Goodfellow, Harini Kannan, Alex Kurakin and Florian Tramèr on a draft. We also thank Megan McDaniel for taking good care of our diet in the last stages of writing.
Nicolas Papernot is supported by a Google PhD Fellowship in Security. Some of the GPU equipment used in our experiments was donated by NVIDIA. Research was supported by the Army Research Laboratory, under Cooperative Agreement Number W911NF-13-2-0045 (ARL Cyber Security CRA), and the Army Research Office under grant W911NF-13-1-0421. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for government purposes notwithstanding any copyright notation hereon.
-  I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to sequence learning with neural networks,” in Advances in neural information processing systems, 2014, pp. 3104–3112.
-  D. Bahdanau, K. Cho, and Y. Bengio, “Neural machine translation by jointly learning to align and translate,” arXiv preprint arXiv:1409.0473, 2014.
-  G. Gardner, D. Keating, T. H. Williamson, and A. T. Elliott, “Automatic detection of diabetic retinopathy using an artificial neural network: a screening tool.” British journal of Ophthalmology, vol. 80, no. 11, pp. 940–944, 1996.
-  V. Gulshan, L. Peng, M. Coram, M. C. Stumpe, D. Wu, A. Narayanaswamy, S. Venugopalan, K. Widner, T. Madams, J. Cuadros et al., “Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs,” JAMA, vol. 316, no. 22, pp. 2402–2410, 2016.
-  R. Caruana, Y. Lou, J. Gehrke, P. Koch, M. Sturm, and N. Elhadad, “Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission,” in Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2015, pp. 1721–1730.
-  K. Grosse, N. Papernot, P. Manoharan, M. Backes, and P. McDaniel, “Adversarial perturbations against deep neural networks for malware classification,” in 22nd European Symposium on Research in Computer Security, 2017.
-  J. Saxe and K. Berlin, “Deep neural network based malware detection using two dimensional binary program features,” in Malicious and Unwanted Software (MALWARE), 2015 10th International Conference on. IEEE, 2015, pp. 11–20.
-  Z. Zhu and T. Dumitras, “Featuresmith: Automatically engineering features for malware detection by mining the security literature,” in Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2016, pp. 767–778.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in neural information processing systems, 2012, pp. 1097–1105.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
-  J. D. Owens, M. Houston, D. Luebke, S. Green, J. E. Stone, and J. C. Phillips, “Gpu computing,” Proceedings of the IEEE, vol. 96, no. 5, pp. 879–899, 2008.
-  N. P. Jouppi, C. Young, N. Patil, D. Patterson, G. Agrawal, R. Bajwa, S. Bates, S. Bhatia, N. Boden, A. Borchers et al., “In-datacenter performance analysis of a tensor processing unit,” in Proceedings of the 44th Annual International Symposium on Computer Architecture. ACM, 2017, pp. 1–12.
-  J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. IEEE, 2009, pp. 248–255.
-  G. E. Hinton, “Learning multiple layers of representation,” Trends in cognitive sciences, vol. 11, no. 10, pp. 428–434, 2007.
-  C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger, “On calibration of modern neural networks,” arXiv preprint arXiv:1706.04599, 2017.
-  Z. C. Lipton, “The mythos of model interpretability,” arXiv preprint arXiv:1606.03490, 2016.
-  C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, “Intriguing properties of neural networks,” arXiv preprint arXiv:1312.6199, 2013.
-  F. Tramèr, A. Kurakin, N. Papernot, D. Boneh, and P. McDaniel, “Ensemble adversarial training: Attacks and defenses,” arXiv preprint arXiv:1705.07204, 2017.
-  C. Saunders, A. Gammerman, and V. Vovk, “Transduction with confidence and credibility,” 1999.
-  V. Vovk, A. Gammerman, and C. Saunders, “Machine-learning applications of algorithmic randomness,” 1999.
-  M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang et al., “End to end learning for self-driving cars,” arXiv preprint arXiv:1604.07316, 2016.
-  X. Jiang, M. Osl, J. Kim, and L. Ohno-Machado, “Calibrating predictive model estimates to support personalized medicine,” Journal of the American Medical Informatics Association, vol. 19, no. 2, pp. 263–274, 2011.
-  B. Biggio, I. Corona, D. Maiorca, B. Nelson, N. Šrndić, P. Laskov, G. Giacinto, and F. Roli, “Evasion attacks against machine learning at test time,” in Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, 2013, pp. 387–402.
-  G. Shafer and V. Vovk, “A tutorial on conformal prediction,” Journal of Machine Learning Research, vol. 9, no. Mar, pp. 371–421, 2008.
-  F. Doshi-Velez and B. Kim, “Towards a rigorous science of interpretable machine learning,” 2017.
-  N. Dalvi, P. Domingos, S. Sanghai, D. Verma et al., “Adversarial classification,” in Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2004, pp. 99–108.
-  D. Lowd and C. Meek, “Adversarial learning,” in Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining. ACM, 2005, pp. 641–647.
-  M. Barreno, B. Nelson, R. Sears, A. D. Joseph, and J. D. Tygar, “Can machine learning be secure?” in Proceedings of the 2006 ACM Symposium on Information, computer and communications security. ACM, 2006, pp. 16–25.
-  M. Barreno, B. Nelson, A. D. Joseph, and J. Tygar, “The security of machine learning,” Machine Learning, vol. 81, no. 2, pp. 121–148, 2010.
-  N. Papernot, P. McDaniel, A. Sinha, and M. Wellman, “Towards the science of security and privacy in machine learning,” arXiv preprint arXiv:1611.03814, 2016.
-  D. Amodei, C. Olah, J. Steinhardt, P. Christiano, J. Schulman, and D. Mané, “Concrete problems in ai safety,” arXiv preprint arXiv:1606.06565, 2016.
-  N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, “Practical black-box attacks against machine learning,” in Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. ACM, 2017, pp. 506–519.
-  S. Sabour, Y. Cao, F. Faghri, and D. J. Fleet, “Adversarial manipulation of deep representations,” International Conference on Learning Representations, 2016.
-  C. Cortes and V. Vapnik, “Support-vector networks,” Machine learning, vol. 20, no. 3, pp. 273–297, 1995.
-  I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016, http://www.deeplearningbook.org.
-  J. J. Hopfield, “Neural networks and physical systems with emergent collective computational abilities,” in Spin Glass Theory and Beyond: An Introduction to the Replica Method and Its Applications. World Scientific, 1987, pp. 411–415.
-  A. Kendall and Y. Gal, “What uncertainties do we need in bayesian deep learning for computer vision?” arXiv preprint arXiv:1703.04977, 2017.
-  J. Platt et al., “Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods,” Advances in large margin classifiers, vol. 10, no. 3, pp. 61–74, 1999.
-  Y. Gal, “Uncertainty in deep learning,” Ph.D. dissertation, University of Cambridge, 2016.
-  D. J. MacKay, “Bayesian methods for adaptive models,” Ph.D. dissertation, California Institute of Technology, 1992.
-  A. Graves, “Practical variational inference for neural networks,” in Advances in Neural Information Processing Systems, 2011, pp. 2348–2356.
-  R. M. Neal, Bayesian learning for neural networks. Springer Science & Business Media, 2012, vol. 118.
-  C. Blundell, J. Cornebise, K. Kavukcuoglu, and D. Wierstra, “Weight uncertainty in neural networks,” arXiv preprint arXiv:1505.05424, 2015.
-  Y. Gal and Z. Ghahramani, “Dropout as a bayesian approximation: Representing model uncertainty in deep learning,” in international conference on machine learning, 2016, pp. 1050–1059.
-  N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” The Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.
-  B. Lakshminarayanan, A. Pritzel, and C. Blundell, “Simple and scalable predictive uncertainty estimation using deep ensembles,” in Advances in Neural Information Processing Systems, 2017, pp. 6405–6416.
-  B. Goodman and S. Flaxman, “European union regulations on algorithmic decision-making and a ‘right to explanation’,” arXiv preprint arXiv:1606.08813, 2016.
-  D. Erhan, Y. Bengio, A. Courville, and P. Vincent, “Visualizing higher-layer features of a deep network,” University of Montreal, vol. 1341, no. 3, p. 1, 2009.
-  B. Kim, J. A. Shah, and F. Doshi-Velez, “Mind the gap: A generative approach to interpretable feature selection and extraction,” in Advances in Neural Information Processing Systems, 2015, pp. 2260–2268.
-  M. T. Ribeiro, S. Singh, and C. Guestrin, “Why should i trust you?: Explaining the predictions of any classifier,” in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2016, pp. 1135–1144.
-  G. Alain and Y. Bengio, “Understanding intermediate layers using linear classifier probes,” arXiv preprint arXiv:1610.01644, 2016.
-  B. Kim, J. Gilmer, F. Viegas, U. Erlingsson, and M. Wattenberg, “Tcav: Relative concept importance testing with linear concept activation vectors,” arXiv preprint arXiv:1711.11279, 2017.
-  D. Bau, B. Zhou, A. Khosla, A. Oliva, and A. Torralba, “Network dissection: Quantifying interpretability of deep visual representations,” in Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on. IEEE, 2017, pp. 3319–3327.
-  P. Dabkowski and Y. Gal, “Real time image saliency for black box classifiers,” in Advances in Neural Information Processing Systems, 2017, pp. 6970–6979.
-  J. Bien and R. Tibshirani, “Prototype selection for interpretable classification,” The Annals of Applied Statistics, pp. 2403–2424, 2011.
-  O. Li, H. Liu, C. Chen, and C. Rudin, “Deep learning for case-based reasoning through prototypes: A neural network that explains its predictions,” arXiv preprint arXiv:1710.04806, 2017.
-  R. Caruana, H. Kangarloo, J. Dionisio, U. Sinha, and D. Johnson, “Case-based explanation of non-case-based learning methods.” in Proceedings of the AMIA Symposium. American Medical Informatics Association, 1999, p. 212.
-  B. Kim, C. Rudin, and J. A. Shah, “The bayesian case model: A generative approach for case-based reasoning and prototype classification,” in Advances in Neural Information Processing Systems, 2014, pp. 1952–1960.
-  F. Doshi-Velez, B. C. Wallace, and R. Adams, “Graph-sparse lda: A topic model with structured sparsity,” in AAAI, 2015, pp. 2575–2581.
-  L. A. Hendricks, Z. Akata, M. Rohrbach, J. Donahue, B. Schiele, and T. Darrell, “Generating visual explanations,” in European Conference on Computer Vision. Springer, 2016, pp. 3–19.
-  T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean, “Distributed representations of words and phrases and their compositionality,” in Advances in neural information processing systems, 2013, pp. 3111–3119.
-  T. Lei, R. Barzilay, and T. Jaakkola, “Rationalizing neural predictions,” arXiv preprint arXiv:1606.04155, 2016.
-  Y. Lou, R. Caruana, and J. Gehrke, “Intelligible models for classification and regression,” in Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2012, pp. 150–158.
-  I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” arXiv preprint arXiv:1412.6572, 2014.
-  N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami, “The limitations of deep learning in adversarial settings,” in Security and Privacy (EuroS&P), 2016 IEEE European Symposium on. IEEE, 2016, pp. 372–387.
-  Y. Liu, X. Chen, C. Liu, and D. Song, “Delving into transferable adversarial examples and black-box attacks,” arXiv preprint arXiv:1611.02770, 2016.
-  M. Sharif, S. Bhagavatula, L. Bauer, and M. K. Reiter, “Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition,” in Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2016, pp. 1528–1540.
-  J. Hayes and G. Danezis, “Machine learning as an adversarial service: Learning black-box adversarial examples,” arXiv preprint arXiv:1708.05207, 2017.
-  N. Carlini and D. Wagner, “Towards evaluating the robustness of neural networks,” in Security and Privacy (SP), 2017 IEEE Symposium on. IEEE, 2017, pp. 39–57.
-  C. Sitawarin, A. N. Bhagoji, A. Mosenia, P. Mittal, and M. Chiang, “Rogue signs: Deceiving traffic sign recognition with malicious ads and logos,” arXiv preprint arXiv:1801.02780, 2018.
-  Anonymous, “Thermometer encoding: One hot way to resist adversarial examples,” International Conference on Learning Representations, 2018. [Online]. Available: https://openreview.net/forum?id=S18Su--CW
-  A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, “Towards deep learning models resistant to adversarial attacks,” arXiv preprint arXiv:1706.06083, 2017.
-  W. Xu, D. Evans, and Y. Qi, “Feature squeezing: Detecting adversarial examples in deep neural networks,” arXiv preprint arXiv:1704.01155, 2017.
-  D. Meng and H. Chen, “Magnet: a two-pronged defense against adversarial examples,” in Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2017, pp. 135–147.
-  J. Z. Kolter and E. Wong, “Provable defenses against adversarial examples via the convex outer adversarial polytope,” arXiv preprint arXiv:1711.00851, 2017.
-  A. Raghunathan, J. Steinhardt, and P. Liang, “Certified defenses against adversarial examples,” arXiv preprint arXiv:1801.09344, 2018.
-  M. Cisse, P. Bojanowski, E. Grave, Y. Dauphin, and N. Usunier, “Parseval networks: Improving robustness to adversarial examples,” in International Conference on Machine Learning, 2017, pp. 854–863.
-  L. Engstrom, D. Tsipras, L. Schmidt, and A. Madry, “A rotation and a translation suffice: Fooling cnns with simple transformations,” arXiv preprint arXiv:1712.02779, 2017.
-  C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu, “Deeply-supervised nets,” in Artificial Intelligence and Statistics, 2015, pp. 562–570.
-  A. Andoni, P. Indyk, T. Laarhoven, I. Razenshteyn, and L. Schmidt, “Practical and optimal lsh for angular distance,” in Advances in Neural Information Processing Systems, 2015, pp. 1225–1233.
-  A. Gionis, P. Indyk, R. Motwani et al., “Similarity search in high dimensions via hashing,” in Vldb, vol. 99, no. 6, 1999, pp. 518–529.
-  A. Andoni and P. Indyk, “Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions,” in Foundations of Computer Science, 2006. FOCS’06. 47th Annual IEEE Symposium on. IEEE, 2006, pp. 459–468.
-  H. Papadopoulos, K. Proedrou, V. Vovk, and A. Gammerman, “Inductive confidence machines for regression,” in European Conference on Machine Learning. Springer, 2002, pp. 345–356.
-  H. Papadopoulos, “Inductive conformal prediction: Theory and application to neural networks,” in Tools in artificial intelligence. InTech, 2008.
-  Y. LeCun, C. Cortes, and C. Burges, “Mnist handwritten digit database,” AT&T Labs [Online]. Available: http://yann.lecun.com/exdb/mnist, vol. 2, 2010.
-  Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng, “Reading digits in natural images with unsupervised feature learning,” in NIPS workshop on deep learning and unsupervised feature learning, vol. 2011, no. 2, 2011, p. 5.
-  J. Stallkamp, M. Schlipsing, J. Salmen, and C. Igel, “Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition,” Neural Networks, 2012. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0893608012000457
-  A. Niculescu-Mizil and R. Caruana, “Predicting good probabilities with supervised learning,” in Proceedings of the 22nd international conference on Machine learning. ACM, 2005, pp. 625–632.
-  Y. Bulatov, “Notmnist dataset,” Google (Books/OCR), Tech. Rep. [Online]. Available: http://yaroslavvb.blogspot.it/2011/09/notmnist-dataset.html, 2011.
-  A. Krizhevsky and G. Hinton, “Learning multiple layers of features from tiny images,” 2009.
-  S. Dasgupta, C. F. Stevens, and S. Navlakha, “A neural algorithm for a fundamental computing problem,” Science, vol. 358, no. 6364, pp. 793–796, 2017.
-  L. G. Valiant, “What must a global theory of cortex explain?” Current opinion in neurobiology, vol. 25, pp. 15–19, 2014.
-  M. Kearns, “Fair algorithms for machine learning,” in Proceedings of the 2017 ACM Conference on Economics and Computation. ACM, 2017, pp. 1–1.
-  B. T. Luong, S. Ruggieri, and F. Turini, “k-nn as an implementation of situation testing for discrimination discovery and prevention,” in Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2011, pp. 502–510.
-  M. B. Zafar, I. Valera, M. Gomez Rodriguez, and K. P. Gummadi, “Fairness constraints: Mechanisms for fair classification,” arXiv preprint arXiv:1507.05259, 2017.
-  R. Zemel, Y. Wu, K. Swersky, T. Pitassi, and C. Dwork, “Learning fair representations,” in International Conference on Machine Learning, 2013, pp. 325–333.
-  J. Kleinberg, S. Mullainathan, and M. Raghavan, “Inherent trade-offs in the fair determination of risk scores,” arXiv preprint arXiv:1609.05807, 2016.
-  M. Hardt, E. Price, N. Srebro et al., “Equality of opportunity in supervised learning,” in Advances in neural information processing systems, 2016, pp. 3315–3323.
-  S. Corbett-Davies, E. Pierson, A. Feller, S. Goel, and A. Huq, “Algorithmic decision making and the cost of fairness,” in Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2017, pp. 797–806.
-  S. Barocas and A. D. Selbst, “Big data’s disparate impact,” Cal. L. Rev., vol. 104, p. 671, 2016.
-  A. Waters and R. Miikkulainen, “Grade: Machine learning support for graduate admissions,” AI Magazine, vol. 35, no. 1, p. 64, 2014.
-  J. Kleinberg, H. Lakkaraju, J. Leskovec, J. Ludwig, and S. Mullainathan, “Human decisions and machine predictions,” The Quarterly Journal of Economics, vol. 133, no. 1, pp. 237–293, 2017.
-  A. E. Khandani, A. J. Kim, and A. W. Lo, “Consumer credit-risk models via machine-learning algorithms,” Journal of Banking & Finance, vol. 34, no. 11, pp. 2767–2787, 2010.
-  J. Kasperkevic, “Google says sorry for racist auto-tag in photo app,” The Guardian, 2015.
-  P. Stock and M. Cisse, “Convnets and imagenet beyond accuracy: Explanations, bias detection, adversarial examples and model criticism,” arXiv preprint arXiv:1711.11443, 2017.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Identity mappings in deep residual networks,” in European Conference on Computer Vision. Springer, 2016, pp. 630–645.
-  M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin et al., “Tensorflow: Large-scale machine learning on heterogeneous distributed systems,” arXiv preprint arXiv:1603.04467, 2016.
-  A. Kurakin, I. Goodfellow, and S. Bengio, “Adversarial machine learning at scale,” arXiv preprint arXiv:1611.01236, 2016.
-  A. Kerckhoffs, “La cryptographie militaire,” Journal des sciences militaires, pp. 5–38, 1883.
-  Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE transactions on image processing, vol. 13, no. 4, pp. 600–612, 2004.
-  N. Šrndić and P. Laskov, “Practical evasion of a learning-based classifier: A case study,” in IEEE Symposium on Security and Privacy, 2014, pp. 197–211.
-  W. Xu, Y. Qi, and D. Evans, “Automatically evading classifiers: A case study on PDF malware classifiers,” in Network and Distributed Systems Symposium, 2016.
-  N. Papernot, N. Carlini, I. Goodfellow, R. Feinman, F. Faghri, A. Matyasko, K. Hambardzumyan, Y.-L. Juang, A. Kurakin, R. Sheatsley et al., “cleverhans v2.0.0: an adversarial machine learning library,” arXiv preprint arXiv:1610.00768, 2016.