1 Introduction
“Awareness of ignorance is the beginning of wisdom.”
– Socrates
We wish to give neural networks the ability to know when they do not know. In addition, we want networks to be able to explain to a human “why” they do not know. Our approach to this ambitious task is to view neural networks as data generating systems and detect anomalous patterns in the “activation space” of their hidden layers.
The three goals of applying anomalous pattern detection techniques to data generated by neural networks are to: Quantify the anomalousness of activations within a neural network; Detect when anomalous patterns are present for a given input; and Characterize the anomaly by identifying the nodes participating in the anomalous pattern.
Furthermore, we approach these goals without specialized (re)training techniques or novel network architectures. These methods can be applied to any off-the-shelf, pre-trained model. We also emphasize that the adversarial noise detection task is conducted in an unsupervised manner, without labeled examples of the noised images.
The primary contribution of this work is to demonstrate that nonparametric scan statistics, efficiently optimized over activations in a neural network, are able to quantify the anomalousness of a high-dimensional input as a real-valued “score”. This definition of anomalousness is with respect to a given network model and a set of “background” inputs that are assumed to generate normal or expected patterns in the activation space of the network. Our novel method measures the deviance between the activations of a given input under evaluation and the activations generated by the background inputs. A higher measured deviance results in a higher anomalousness score for the evaluation input.
The challenging aspect of measuring deviances in the activation space of neural networks is dealing with high-dimensional data on the order of the number of nodes in a network. Our baseline example in this work is a convolutional neural network trained on CIFAR-10 images with seven hidden layers containing 96,800 nodes. Therefore, the measure of anomalousness must be effective at capturing (potentially subtle) deviances in a high-dimensional space and be computationally tractable. Subset scanning meets both of these requirements (see Section 2). The reward for addressing this difficult problem is an unsupervised, anomalous-input detector that can be applied to any input and to any type of neural network architecture. This is because neural networks rely on their activation space to encode the features of their inputs, and therefore quantifying deviations from expected behavior in the activation space has universal appeal and potential.
We are not analyzing the inputs directly (i.e., the pixel space) nor performing dimensionality reduction to make the problem more tractable. We are identifying anomalous patterns at the node level of networks by scanning over subsets of activations and quantifying their anomalousness.
The next contributions of this work focus on detection and characterization of adversarial noise added to inputs in order to change the labels [Szegedy et al. 2013, Goodfellow, Shlens, and Szegedy 2014, Papernot and McDaniel 2016].
We do not claim state-of-the-art results and do not compare against the numerous and varied approaches in the expanding literature. Rather, these results demonstrate that the “subset score” of anomalous activations within a neural network is able to detect the presence of subtle patterns in high-dimensional space. Also note that a proper “adversarial noise defense” is outside the scope of this paper.
In addition to quantifying the anomalousness of a given input to a network, subset scanning identifies the subset of nodes that contributed to that score. This data can then be used for characterizing the anomalous pattern. These approaches have broad implications for explainable A.I. by aiding the human interpretability of network models.
For characterizing patterns in this work, we analyze the distribution of nodes (identified as anomalous under the presence of adversarial noise) across the layers of the network. We identify an “interference” pattern in deeper layers of the network that suggests the structure of activations normally present in clean images has been suppressed, presumably by the anomalous activations from shallower layers. These types of insights are only possible through the subset scanning approach to anomalous pattern detection.
The final contribution of this work is laying out a line of research that extends subset scanning further into the deep learning domain. This current paper introduces how to efficiently identify the most anomalous unconstrained subset of nodes in a neural network for a single input. The subset scanning literature has shown that the unconstrained subset has weak detection power compared to constrained searches, where the constraints reflect domain-specific knowledge of the type of anomalous patterns to be detected [Neill 2012, Speakman et al. 2016].

The rest of this paper is organized in the following sections. Section 2 reviews subset scanning and highlights the Linear Time Subset Scanning property originally introduced in [Neill 2012]. The section goes on to introduce our novel method, which combines subset scanning techniques and nonparametric scan statistics in order to detect anomalous patterns in neural network activations. Detection experiments are covered in Section 3, where we provide quantitative detection and characterization results on adversarial noise applied to CIFAR-10 images. Future methodological extensions and new domains of application are covered in Section 4, and finally, Section 5 provides a summary of the contributions and insights of this work.
2 Subset Scanning
Subset scanning treats pattern detection as a search for the “most anomalous” subset of observations in the data, where anomalousness is quantified by a scoring function $F(S)$ (typically a log-likelihood ratio). Therefore, we wish to efficiently identify $S^* = \arg\max_S F(S)$ over all subsets of the data $D$. The particular scoring functions used in this work are covered in the next subsection.
Subset scanning has been shown to succeed where other heuristic approaches may fail
[Neill 2012]. “Top-down” methods look for globally interesting patterns and then identify sub-partitions in order to find smaller anomalous groups of records. These approaches may fail when the true anomaly is not evident from global aggregates. Similarly, “bottom-up” methods look for individually anomalous data points and attempt to aggregate them into clusters. These methods may fail when the pattern is only evident by evaluating a group of data points collectively.
Treating the detection problem as a subset scan has desirable statistical properties for maximizing detection power but the exhaustive search is infeasible for even moderately sized data sets. However, a large class of scoring functions satisfy the “Linear Time Subset Scanning” (LTSS) property which allows for exact, efficient maximization over all subsets of data without requiring an exhaustive search [Neill2012]. The following subsections highlight a class of functions that satisfy LTSS and describe how the efficient maximization process works for scanning over activations from nodes in a neural network.
Nonparametric Scan Statistics
Subset scanning, and scan statistics more broadly, typically consider scoring functions that are members of the exponential family and make explicit parametric assumptions on the data-generating process. To avoid these assumptions, this work uses nonparametric scan statistics (NPSS), which have been used in other pattern detection methods [Neill and Lingwall 2007, McFowland III, Speakman, and Neill 2013, McFowland, Somanchi, and Neill 2018, Chen and Neill 2014].
NPSS require baseline or background data to inform the distribution of the data under the null hypothesis $H_0$ of no anomaly present. Empirical p-values for an evaluation input (distinct from the background inputs) are computed by comparing its activations to this empirical baseline distribution. NPSS then searches for subsets of data in the evaluation input that contain the most evidence for not having been generated under $H_0$. This evidence is quantified by an unexpectedly large number of low empirical p-values generated by the evaluation input.

In our specific context, the baseline data is the node activations of 9000 clean CIFAR-10 images from the validation set, denoted $D_{H_0}$. Each background image $X_z$ generates an activation $a^{bg}_{zj}$ at each network node $O_j$; likewise, an evaluation image $X_i$ (which is potentially contaminated with adversarial noise) produces activations $a_{ij}$.
For a given evaluation image $X_i$, a collection of background images $D_{H_0}$, and a network with $J$ nodes, we can obtain an empirical p-value for each node $O_j$. This is the proportion of activations from the background inputs, $a^{bg}_{zj}$, that are larger than the activation from the evaluation input at node $O_j$, $a_{ij}$. We extend this notion to p-value ranges, which possess improved statistical properties [McFowland III, Speakman, and Neill 2013].
We use the following two terms to form a p-value range at node $O_j$:

$$N_{\ge}(a_{ij}) = \sum_{X_z \in D_{H_0}} I\!\left(a^{bg}_{zj} \ge a_{ij}\right), \qquad N_{>}(a_{ij}) = \sum_{X_z \in D_{H_0}} I\!\left(a^{bg}_{zj} > a_{ij}\right).$$

The range is then defined as

$$p_{ij} = \left[\, p^{min}_{ij},\; p^{max}_{ij} \,\right] = \left[\, \frac{N_{>}(a_{ij})}{|D_{H_0}| + 1},\; \frac{N_{\ge}(a_{ij}) + 1}{|D_{H_0}| + 1} \,\right] \quad (1)$$
The empirical p-value for node $O_j$ may then be viewed as a random variable uniformly distributed between $p^{min}_{ij}$ and $p^{max}_{ij}$ under $H_0$ [McFowland, Somanchi, and Neill 2018]. As an illustrative example, consider a node $O_j$ with activations $\{0.1, 0.2, \ldots, 0.9\}$ for $|D_{H_0}| = 9$ background images. An evaluation image that creates an activation at node $O_j$ of $0.95$ would be given a p-value range of $[0, 0.1]$ for node $O_j$. A different evaluation image that activates node $O_j$ at $0.5$ (tying one of the background activations) would be given a p-value range of $[0.4, 0.6]$. Finally, a third evaluation image producing an activation of $0.55$ at node $O_j$ would be assigned a range of $[0.4, 0.5]$.
Intuitively, if an evaluation image is “normal” (its activations are drawn from the same distribution as the baseline images), then few p-value ranges will be extreme. The key assumption of subset scanning approaches is that, under the alternative hypothesis of an anomaly present in the data, at least some subset of the activations will appear extreme.
The p-value ranges from an evaluation input are processed by a nonparametric scan statistic in order to identify the subset of node activations that maximizes the scoring function $F(S)$, as this is the subset with the most statistical evidence for having been affected by an anomalous pattern.
The general form of the NPSS score function is

$$F(S) = \max_{\alpha} \phi\!\left(\alpha, N_\alpha(S), N(S)\right) \quad (2)$$

where $N(S)$ represents the number of empirical p-value ranges contained in subset $S$ and $N_\alpha(S)$ is the total probability mass less than $\alpha$ (the significance level) in these ranges.

The level $\alpha$ defines a threshold against which p-value ranges can be compared. Specifically, we calculate the portion of each range that falls below the threshold. This may be viewed as the probability that a p-value drawn from that range would be significant at level $\alpha$, and is defined as

$$n_\alpha(p_{ij}) = \frac{\alpha - p^{min}_{ij}}{p^{max}_{ij} - p^{min}_{ij}} \quad (3)$$

bounded between 0 and 1.
These quantities generalize to a subset $S$ of nodes intuitively: $N(S) = |S|$ and $N_\alpha(S) = \sum_{O_j \in S} n_\alpha(p_{ij})$. Moreover, it has been shown that for a subset $S$ consisting of $N(S)$ empirical p-value ranges, $E\left[N_\alpha(S)\right] = \alpha N(S)$ under $H_0$ [McFowland III, Speakman, and Neill 2013]. Therefore, we assume an anomalous process will result in some subset $S$ where the observed significance is higher than expected, $N_\alpha(S) > \alpha N(S)$, for some $\alpha$.
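Equation 3 and its subset aggregates can be computed directly; the sketch below is illustrative (not the authors’ implementation), and the handling of degenerate point ranges is our assumption:

```python
def n_alpha(p_min, p_max, alpha):
    """Equation 3: portion of the p-value range below threshold alpha,
    bounded between 0 and 1."""
    if p_max == p_min:                      # degenerate (point) range
        return 1.0 if p_min < alpha else 0.0
    return min(1.0, max(0.0, (alpha - p_min) / (p_max - p_min)))

def subset_counts(p_ranges, alpha):
    """N(S) and N_alpha(S) for a subset S given as a list of ranges."""
    return len(p_ranges), sum(n_alpha(lo, hi, alpha) for lo, hi in p_ranges)
```

For example, the ranges [0, 0.1], [0.1, 0.6], and [0.4, 0.5] at α = 0.2 give N(S) = 3 and N_α(S) = 1.2, well above the null expectation of α · N(S) = 0.6.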
There are well-known goodness-of-fit statistics that can be utilized in NPSS [McFowland, Somanchi, and Neill 2018], the most popular being the Kolmogorov-Smirnov test [Kolmogorov 1933]. Another option is Higher Criticism [Donoho and Jin 2004].
In this work we use the Berk-Jones test statistic [Berk and Jones 1979]:

$$\phi_{BJ}(\alpha, N_\alpha, N) = N \cdot KL\!\left(\frac{N_\alpha}{N}, \alpha\right),$$

where $KL(x, y) = x \log\frac{x}{y} + (1-x)\log\frac{1-x}{1-y}$ is the Kullback-Leibler divergence between the observed and expected proportions of significant p-values. Berk-Jones can be interpreted as the log-likelihood ratio for testing whether the p-values are uniformly distributed on $[0, 1]$ as compared to following a piecewise constant alternative distribution, and has been shown to fulfill several optimality properties.

Efficient Maximization of NPSS
Although NPSS provides a means to evaluate the anomalousness of a subset of activations for a given input, discovering which of the $2^J$ possible subsets provides the most evidence of an anomalous pattern is computationally infeasible for large $J$. However, NPSS has been shown to satisfy the linear-time subset scanning (LTSS) property [Neill 2012], which allows for efficient and exact maximization of $F(S)$. For a pair of functions $F(S)$ and $G(O_j)$, representing the score of a given subset and the “priority” of data record $O_j$ respectively, we have a guarantee that the subset maximizing the score will be one consisting only of the top-$k$ highest-priority records, for some $k$ between $1$ and $J$.
For NPSS, the priority of a node activation is the proportion of its p-value range that is less than $\alpha$, which was introduced in Equation 3: $G(O_j) = n_\alpha(p_{ij})$.
Figure 1 shows a sample problem of maximizing an NPSS scoring function over four example nodes and two different thresholds. The left-most graphic shows the p-value ranges for the four nodes. Under $\alpha = 0.2$, 40% of the highlighted node’s p-value range falls below the threshold, so its priority is $n_\alpha = 0.4$. A larger proportion of another node’s p-value range falls below 0.2, and that node therefore has a higher priority.

We emphasize that a node’s priority (and therefore, also the priority ordering) is induced by the threshold value. We demonstrate this by considering a second, larger value of $\alpha$ in the example as well. The priority of the highlighted node increases, and furthermore the priority ordering of the nodes changes between the two $\alpha$ values: a node may have a lower priority than another node under one threshold and a higher priority under the other.
The next takeaway from the example in Figure 1 is how the priority ordering over nodes creates at most 4 subsets (linearly many) that must be scored for each threshold in order to identify the highest-scoring subset overall. Recall the general form of NPSS scoring functions, $F(S) = \max_\alpha \phi(\alpha, N_\alpha(S), N(S))$, where $N(S)$ represents the number of empirical p-value ranges contained in subset $S$ and $N_\alpha(S)$ is the total probability mass less than $\alpha$ (the significance level) in these ranges. When scoring the subset of the two highest-priority nodes under $\alpha = 0.2$, we evaluate $\phi(0.2, 1.1, 2)$, where 1.1 is the sum of the two priorities, 0.7 and 0.4, and 2 is the size of the subset. The scoring function is then quantifying how “anomalous” it is to observe 1.1 significant p-values when the expectation is $0.2 \times 2 = 0.4$.
We conclude the toy example by providing intuition behind the efficient maximization of scoring functions that satisfy LTSS. Notice that, under a given threshold, we never score a subset that skips a higher-priority node while including a lower-priority one: for any such subset, we can guarantee a higher score by either including the skipped node or removing the lower-priority one. The priority ordering over the nodes guides this inclusion sequence, which results in only linearly many subsets needing to be scored.
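The complete scan can be sketched compactly (an illustrative implementation under the conventions above, not the authors’ code): for each threshold, sort nodes by priority and score only the prefixes of that ordering.

```python
import math

def bj_score(n_alpha_mass, n, alpha):
    """Berk-Jones: n * KL(observed proportion, alpha), one-sided so that
    only higher-than-expected significance scores above zero."""
    q = n_alpha_mass / n
    if q <= alpha:
        return 0.0
    t1 = q * math.log(q / alpha)
    t2 = 0.0 if q == 1.0 else (1.0 - q) * math.log((1.0 - q) / (1.0 - alpha))
    return n * (t1 + t2)

def priority(p_range, alpha):
    """Equation 3: portion of the p-value range below alpha."""
    lo, hi = p_range
    if hi == lo:
        return 1.0 if lo < alpha else 0.0
    return min(1.0, max(0.0, (alpha - lo) / (hi - lo)))

def ltss_scan(p_ranges, alphas):
    """Highest-scoring subset over all thresholds. For each alpha, LTSS
    guarantees the maximizer is a prefix of the priority ordering, so
    only linearly many subsets are scored."""
    best_score, best_subset = 0.0, []
    for alpha in alphas:
        pri = [priority(r, alpha) for r in p_ranges]
        order = sorted(range(len(pri)), key=lambda j: -pri[j])
        mass = 0.0
        for k, j in enumerate(order, start=1):
            mass += pri[j]
            s = bj_score(mass, k, alpha)
            if s > best_score:
                best_score, best_subset = s, sorted(order[:k])
    return best_score, best_subset
```

Scanning over the full network simply applies this routine to the $J$ p-value ranges of an evaluation image, with a grid of candidate $\alpha$ thresholds.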
Network Details

Table 1: Network architecture. Each row lists a layer, its configuration, and the number of nodes it contributes to the activation space.

Layer   | Details         | Nodes
------- | --------------- | ------
Conv 1  | 32 filters, 3×3 | 32,768
Conv 2  | 32 filters, 3×3 | 28,800
Pool 1  | 2×2 max pool    | 7,200
Dropout |                 |
Conv 3  | 64 filters, 3×3 | 14,400
Conv 4  | 64 filters, 3×3 | 10,816
Pool 2  | 2×2 max pool    | 2,304
Dropout |                 |
Flat    | 512 dense       | 512
Dropout |                 |
We briefly describe the training process and network architecture before discussing adversarial attacks. We trained a standard convolutional neural network on 50,000 CIFAR-10 training images. The architecture consists of seven hidden layers, summarized in Table 1. The first two layers are each composed of 32 3×3 convolution filters. The third layer is a 2×2 max pooling followed by a dropout layer. The next three layers repeat this pattern but with 64 filters in each of the two convolution layers. Finally, there is a flattened layer of 512 nodes with dropout before the output layer. The model was trained using tanh activation functions and reached a top-1 classification accuracy of 74%. The accuracy is within expectation for a simple network. ReLU activation functions did achieve slightly higher accuracy for the same architecture; however, an accurate model is not the focus of this paper, and with ReLU it can be difficult to identify an “extreme” activation, as many of the values are 0 for a given input. This was evident even when the pre-activation value was anomalous relative to the background but still 0 after ReLU. It is possible to perform subset scanning with ReLU functions under additional constraints, for example, only allowing positive activations to be considered as part of the most anomalous subset. These constraints clouded the story, and we proceeded with tanh instead.
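As a sanity check on Table 1, the listed node counts follow from standard convolution arithmetic, assuming padded (“same”) convolutions for Conv 1 and Conv 3 and unpadded ones for Conv 2 and Conv 4 (the padding scheme is our inference; it is the one consistent with the listed sizes):

```python
def n_nodes(h, w, c):
    # One node per spatial position per channel
    return h * w * c

h, w = 32, 32                                  # CIFAR-10 spatial size
counts = {}
counts['Conv 1'] = n_nodes(h, w, 32)           # 3x3, padded: keeps H, W
h, w = h - 2, w - 2                            # 3x3, unpadded: shrinks by 2
counts['Conv 2'] = n_nodes(h, w, 32)
h, w = h // 2, w // 2                          # 2x2 max pool halves H and W
counts['Pool 1'] = n_nodes(h, w, 32)
counts['Conv 3'] = n_nodes(h, w, 64)           # 3x3, padded
h, w = h - 2, w - 2                            # 3x3, unpadded
counts['Conv 4'] = n_nodes(h, w, 64)
h, w = h // 2, w // 2                          # 2x2 max pool
counts['Pool 2'] = n_nodes(h, w, 64)
counts['Flat'] = 512                           # dense layer
total = sum(counts.values())                   # 96,800 nodes scanned
```

The running total reproduces the 96,800 nodes over which the scan is performed.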
3 Detecting Adversarial Noise with Subset Scanning
Machine Learning models are susceptible to adversarial perturbations of their input data that can cause the input to be misclassified [Szegedy et al. 2013, Goodfellow, Shlens, and Szegedy 2014, Kurakin, Goodfellow, and Bengio 2016b, Dalvi et al. 2004]
. There are a variety of methods to make neural networks more robust to adversarial noise. Some require retraining with altered loss functions so that adversarial images must have a higher perturbation in order to be successful
[Papernot et al.2015, Papernot and McDaniel2016]. Other detection methods rely on a supervised approach and treat the problem as classification rather than anomaly detection by training on noised examples
[Grosse et al. 2017, Gong, Wang, and Ku 2017, Huang et al. 2015]. Another supervised approach is to use activations from hidden layers as features for the detector [Metzen et al. 2017]. In contrast, our work treats the problem as anomalous pattern detection and operates in an unsupervised manner without a priori knowledge of the attack or labeled examples. We also do not rely on training-data augmentation or specialized training techniques. These constraints make the problem more difficult, but also more realistic in the adversarial noise domain, as new attacks are constantly being created.
A defense in [Feinman et al. 2017] is more similar to our work. They build a kernel density estimate over background activations from the nodes in only the last hidden layer and report when an image falls in a low-density part of the estimate. This works well on MNIST, but performs poorly on CIFAR-10 [Carlini and Wagner 2017a]. Our novel subset scanning approach looks at anomalousness at the node level and throughout the whole network.



Table 2: Detection power (area under the ROC curve) when scoring the most anomalous subset of activations versus treating all nodes as a single subset. FGSM and BIM rows are ordered by decreasing $\epsilon$ (0.10, 0.05, 0.01); CW rows are ordered by increasing attack confidence.

Attack | Subset Scan | All Nodes
------ | ----------- | ---------
FGSM   | 0.9997      | 0.9990
       | 0.9420      | 0.8246
       | 0.5201      | 0.4980
BIM    | 0.9913      | 0.9682
       | 0.8755      | 0.6961
       | 0.5177      | 0.4969
CW     | 0.5005      | 0.5035
       | 0.5182      | 0.5020
       | 0.5970      | 0.5230
Training and Experiment Setup
For our adversarial experiments, we trained the network described in Section 2 on 50,000 CIFAR-10 images. We then took 9,000 of the 10,000 validation images and used them to generate the background activation distribution ($D_{H_0}$) at each of the 96,800 nodes in the network. The remaining 1,000 images were used to form two groups: “Clean” (C) and “Adversarial” (A), with Adversarial being a noised version of the Clean images for various attack types. Group C did not change across attack types. We then score the 2,000 images contained in A and C. We emphasize that the measure of anomalousness is not between an image in A and its clean counterpart in C, but rather between each image and the background distribution formed by $D_{H_0}$.
We do not calculate a score threshold above which an input is classified as noised. Rather, we report the area under the ROC curve, which measures how well the score separates classes A and C. A value of 1.0 means the score perfectly separates the classes, and a value of 0.5 is equivalent to random guessing.
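The AUC we report is equivalent to the rank-based (Mann-Whitney) statistic, which can be sketched directly (illustrative code, quadratic in the number of images for clarity):

```python
def auc_roc(clean_scores, noised_scores):
    """Rank-based AUC: the probability that a randomly chosen noised
    image receives a higher subset score than a randomly chosen clean
    image, counting ties as one half."""
    wins = 0.0
    for a in noised_scores:
        for c in clean_scores:
            if a > c:
                wins += 1.0
            elif a == c:
                wins += 0.5
    return wins / (len(noised_scores) * len(clean_scores))
```

An AUC of 1.0 means every noised image outscores every clean image; 0.5 means the scores carry no separating information.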
Results and Discussion
Table 2 provides detection power results (as measured by area under the ROC curve) for a variety of attacks and their parameters. The first two attack types are the Fast Gradient Sign Method (FGSM) [Goodfellow, Shlens, and Szegedy 2014] and its iterative extension (BIM) [Kurakin, Goodfellow, and Bengio 2016a]. These two attacks have an $\epsilon$ parameter, which is the maximum amount any pixel in the original image may be changed. Note this value is in the [0, 1]-scaled pixel space rather than [0, 255].
The third attack was proposed by Carlini and Wagner (CW) [Carlini and Wagner 2017b] and has a corresponding confidence parameter that can create “high confidence” attacks. Confidence is measured by the difference between the highest and second-highest logit values in the output layer (pre-softmax). All attacks were generated with the CleverHans package [Papernot et al. 2016, Papernot et al. 2018]. All detection results are for the Berk-Jones NPSS scoring function introduced in Section 2.
The first numeric column shows the detection power when subset scanning is used to identify the most anomalous subset of activations for the input under evaluation. The last column shows the detection power when all nodes are considered together rather than the highest-scoring subset. Detection power is higher when scanning over subsets of nodes, which demonstrates early promise for expanding subset scanning methods in future work.
Although detection power remains low overall, we point out that our method has a higher probability of detecting the “higher confidence” CW attacks than the less confident versions. This is because the higher-confidence attacks require larger deviations in the activation space than their lower-confidence counterparts.
Some attack types did not have a 100% success rate. BIM failed to change the predicted label on 0.6% of images for $\epsilon = 0.01$. FGSM failed to change the predicted label on 6.1%, 9.8%, and 18.8% of images for $\epsilon = 0.10, 0.05, 0.01$, respectively. CW failed to generate “high confidence” attacks for 17.3% of images. In all of these cases the failed attacks were removed before calculating detection power.
In addition to subset scanning over the entire network we also performed separate searches over individual layers of the network. This may be thought of as a rudimentary constraint put on the search process, requiring subsets of nodes to be contained in a single layer. More sophisticated constraints are proposed in detail in Section 4.
Figure 2 shows the detection power of the Berk-Jones scoring function when scanning over individual layers of the network. We make two observations on these results, in increasing order of importance. The first is the increase in detection power when scanning over just the first pooling layer compared to scanning over subsets of nodes in the entire network. Changes to the pixel space are best captured by the pooling layer that condenses the first two convolution layers.
Second, we note the unexpected behavior at layer Conv 4 and, partly, Pool 2. The AUC values less than 0.5 arise because the score of the most anomalous subset of Conv 4 nodes for noised images is lower than the score of the most anomalous subset of Conv 4 nodes for clean images. In other words, normal activity is anomalously absent in those layers for noised images. We hypothesize that adversarial noise may more easily confuse neural networks by deconstructing the signal of the original image rather than overpowering it with a rogue signal. This “interference” approach results in large amounts of uninteresting activations in the presence of noise, compared to the structured activations of clean images. Further work is needed on different network architectures to explore this phenomenon.
We conclude the adversarial noise results by locating where (i.e., in which layer) the most anomalous activations are triggering. For this approach, we return to subset scanning over the entire network and define a representation metric $R(S, l)$ for each identified subset $S$ and layer $l$ of the network. Representation has a value of 1 when the proportion of the subset’s anomalous nodes falling in the layer matches the layer’s relative size within the network. This metric allows measuring the relative “size” of the subset within a single layer despite layers varying in their number of nodes.
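A minimal sketch of this metric (our illustrative reading of the definition, with nodes indexed as integers):

```python
def representation(subset, layer_nodes, total_nodes):
    """Share of the anomalous subset falling in a layer, divided by the
    layer's share of the whole network. A value of 1.0 means the layer
    is represented in proportion to its size; values above (below) 1.0
    mean the layer is over- (under-) represented in the subset."""
    in_layer = len(set(subset) & set(layer_nodes))
    subset_share = in_layer / len(subset)
    layer_share = len(layer_nodes) / total_nodes
    return subset_share / layer_share
```

For instance, if a layer holds 25% of the network’s nodes and 25% of the anomalous subset falls inside it, its representation is exactly 1.0.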
Figure 3 plots the representation for each subset of the 1000 noised (BIM) and 1000 clean images.

We again make two observations, in increasing order of importance. First, we see that anomalous activity (as identified by subset scanning) of clean images is equally represented across all layers, with most subsets having representation centered over 1.0.
Second, and more consequential, adversarial images have anomalous activity overrepresented in Pool 1 and underrepresented in Conv 4 and Pool 2. This characterization of anomalous activity, as identified by our method, also suggests the “interference” theory of adversarial noise: Anomalous activations in the shallower layers of the network suppress the activation structure of the original image in deeper layers.
4 Extensions
In the interest of clarity and applicability of the current work, many extensions of the method have been left for future work. These extensions may improve detection power, characterization, or both.
Simple Extensions
We note that two-tailed testing is a reasonable approach for tanh and sigmoid activation functions; the definition of “extreme” activations carries over intuitively to activations either larger or smaller than expected. It is also possible to calculate density-based p-values, where anomalousness is measured by activations falling in a low-density region of the background activations. This is particularly relevant at deeper nodes, where bimodal distributions are likely. This extension requires learning a univariate kernel density estimate at each node, but this can be done offline on background data only.
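A sketch of such a density-based p-value (illustrative only; the Gaussian kernel and the fixed bandwidth are our assumptions):

```python
import math

def kde_density(x, samples, bw):
    """Univariate Gaussian kernel density estimate evaluated at x."""
    norm = bw * math.sqrt(2.0 * math.pi) * len(samples)
    return sum(math.exp(-0.5 * ((x - s) / bw) ** 2) for s in samples) / norm

def density_p_value(a_eval, background, bw=0.2):
    """Fraction of background activations lying in regions of even lower
    estimated density than the evaluation activation. Low values flag
    activations from low-density regions, including the valley between
    the modes of a bimodal distribution."""
    d_eval = kde_density(a_eval, background, bw)
    lower = sum(1 for b in background
                if kde_density(b, background, bw) <= d_eval)
    return lower / len(background)
```

Unlike a tail-based p-value, an activation landing between the two modes of a bimodal background is correctly flagged as extreme.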
Additionally, we note that it may be worth calculating conditional p-values, where every class label has its own set of background activations. At evaluation time, only the predicted class’s background is used for calculating p-value ranges. This may be particularly powerful in the adversarial noise setting, but it does reduce the size of the background by a factor of the number of classes.
Finally, the NPSS scoring functions have an additional tuning parameter, $\alpha_{max}$, that has been left at 1.0 in this work. This means we were able to identify very large subsets of activations that were all slightly anomalous. Smaller values of $\alpha_{max}$ limit the search space to a smaller number of more anomalous activations within the network, and have been shown to increase detection power when the prior belief is that a small fraction of the data records participate in the pattern [McFowland III, Speakman, and Neill 2013].
Enforcing Hard Constraints
Constraints on the search space are essential parts of subset scanning. Without constraints, it is likely that inputs drawn from the null distribution will look anomalous by chance by “matching to the noise”. This hurts detection power despite there being a clear anomalous pattern in the alternative. In short, scanning over all subsets may be computationally tractable for scoring functions satisfying LTSS, but it is likely too broad of a search space to capture statistical significance.
This work briefly demonstrated one hard constraint on the search space by performing scans on individual layers of the network as shown in Figure 2. This simple extension increased detection power when scanning over the first pooling layer compared to scanning over all subsets of the network. Furthermore, on results not shown in this paper, we are able to increase the detection of CW from 0.5978 to 0.6553 by scanning over the first 3 layers combined.
Hard connectivity constraints on subset scanning have been used to identify an anomalous subset of data records that are connected in an underlying graph structure that is either known [Speakman, McFowland III, and Neill2015, Speakman, Zhang, and Neill2013] or unknown [Somanchi and Neill2017]. Unfortunately, identifying the highestscoring connected subset is exponential in the number of nodes and a heuristic alternative could be used to identify a highscoring connected subset [Chen and Neill2014].
Enforcing Soft Constraints
In addition to changing the search space, we may also alter the definition of anomalousness by making changes to the scoring function itself. For example, we may wish to increase the score of a subset that contains nodes in Pool 1 Layer while decreasing the score of a subset that contains nodes in the Conv 4 Layer. These additional terms can be interpreted as the prior logodds that a given node will be included in the most anomalous subset
[Speakman et al. 2016].

Adversarial Noise and Additional Domains
We emphasize that this work is not proposing a proper defense to adversarial noise. However, detection is a critical component of a strong defense. Additional work is needed to turn detection into robustness by leveraging the most anomalous subset activations.
Continuing in the unsupervised fashion, we could mask activations from nodes in certain layers that were deemed anomalous to prevent them from propagating the anomalous pattern. In a supervised setting, the information contained in the most anomalous subset could be used as features for training a separate classifier. For example, systematically counting the number of nodes in the most anomalous subset in Pool 1 and Conv 4 could be powerful features.
We note potential for using the subset score to formulate an attack, rather than a tool for detection. Incorporating the subset score into the loss function of an iterative attack would minimize both the perturbation to the pixel space as well as deviations in the activation space [Carlini and Wagner2017a].
Continuing in the security space, we can also apply subset scanning to data poisoning [Biggio, Nelson, and Laskov 2012, Biggio et al. 2013]. This current work has considered each image individually, but it is possible to expand the method so that it identifies a group of images that are all anomalous for the same reasons in the activation space. This is the original intention of the Fast Generalized Subset Scan [McFowland III, Speakman, and Neill 2013].
Leaving the security domain, anomalous pattern detection on neural network activations can be expanded to more general settings of detecting out-of-distribution samples. This view has implications for detecting bias in classifiers, detecting distribution shift in temporal data, and identifying when new class labels appear in the lifelong learning domain.
Finally, we acknowledge that subset scanning over activations of neural networks may have uses in capturing patterns in normal, nonanomalous data. Identifying which subset of nodes activate higher than expected in a given network while processing normal inputs has implications for explainable A.I. [Olah et al.2018].
5 Conclusion
This work uses the Adversarial Noise domain as an effective narrative device to demonstrate that anomalous patterns in the activation space of neural networks can be Quantified, Detected, and Characterized.
The primary contribution of this work to the deep learning literature is a novel, unsupervised anomaly detector that can be applied to any pre-trained, off-the-shelf neural network model. The method is based on subset scanning, which treats the detection problem as a search for the highest-scoring (most anomalous) subset of node activations as measured by nonparametric scan statistics. These scoring functions satisfy the Linear Time Subset Scanning property, which allows for exact, efficient maximization over all $2^J$ possible subsets of nodes in a network containing $J$ nodes.
Our method is able to quantify activation data on the order of 100,000 dimensions into a single real-valued anomalousness “score”. We then used this score to detect images that had been perturbed by an adversary in order to change the network’s class label for the input. Finally, we used the identified subset of anomalous nodes in the network to characterize the adversarial noise pattern. This analysis highlighted a possible “interference” mode of adversarial noise that uses anomalous activations in the shallow layers to suppress the true activation pattern of the original image.
We concluded the work by highlighting multiple extensions of subset scanning into the deep learning space. Many of these extensions attempt to overcome the relatively weak detection power of the unconstrained subset scanning introduced in this work, by enforcing constraints on the search space, altering the scoring functions, or both.
Additional domains outside of adversarial noise and security will also benefit from identifying anomalous activity within neural networks. Lifelong learning models need to recognize when a new class of inputs becomes available, and production-level systems must always guard against distribution shift over time.