Detecting anomalous activity refers to determining whether we are observing merely noise (business as usual) or whether there is some signal in the noise (anomalous activity). Classically, anomaly detection focused on identifying rare behaviors and aberrant bursts in activity over a single data source or channel. With the advent of large surveillance projects, social networks, and mobile computing, data sources are often high-dimensional and have a network structure. With this in mind, statistics needs to comprehensively address the detection of anomalous activity in graphs. In this paper, we study the detection of elevated activity in a graph with Gaussian noise.
In reality, very little is known about the detection of activity in graphs, despite a variety of real-world applications such as activity detection in social networks, network surveillance, disease outbreak detection, biomedical imaging, sensor network detection, gene network analysis, environmental monitoring, and malware detection. Sensor networks might be deployed for detecting nuclear substances, water contaminants, or activity in video surveillance. By exploiting the sensor network structure (based on proximity), one can detect activity in networks even when the activity is very faint. Recent theoretical contributions in the statistical literature [arias2011detection, addario2010combinatorial] have detailed the inherent difficulty of such a testing problem, but have positive results only under restrictive conditions on the graph topology. By combining knowledge from high-dimensional statistics, graph theory, and mathematical programming, the characterization of detection algorithms over any graph topology by their statistical properties is possible.
Aside from the statistical challenges, the computational complexity of any proposed algorithm must be addressed. Due to the combinatorial nature of graph-based methods, problems can easily shift from having polynomial-time algorithms to having running times exponential in the size of the graph. The applications of graph-structured inference require that any method be scalable to large graphs. As we will see, the ideal statistical procedure is intractable, suggesting that approximation algorithms and relaxations are necessary.
1.1 Problem Setup
Consider a connected, possibly weighted, directed graph $G$ defined by a set of vertices $V$ ($|V| = p$) and directed edges $E$ ($|E| = m$), which are ordered pairs of vertices. Furthermore, the edges may be assigned weights, $\{W_e\}_{e \in E}$, that determine the relative strength of the interactions of the adjacent vertices. For each vertex, $i \in V$, we assume that there is an observation $y_i$ that has a Normal distribution with mean $x_i$ and variance $1$. This is called the graph-structured normal means problem, and we observe one realization of the random vector
$$y = x + \xi, \qquad (1)$$
where $x \in \mathbb{R}^p$ and $\xi \sim N(0, I_{p \times p})$. The signal $x$ will reflect the assumption that there is an active cluster ($C \subseteq V$) in the graph, by making $x_i = \mu / \sqrt{|C|}$ if $i \in C$ and $x_i = 0$ otherwise. Furthermore, the allowable clusters, $C$, must have a small boundary in the graph. Specifically, we assume that there is a parameter $\rho$ (possibly dependent on $p$) such that the class of graph-structured activation patterns $x$ is given as follows:
$$\mathcal{X} = \left\{ x : x = \frac{\mu}{\sqrt{|C|}} \mathbf{1}_C,\ C \in \mathcal{C} \right\}, \qquad \mathcal{C} = \{ C \subseteq V : \mathrm{out}(C) \le \rho \}.$$
Here $\mathrm{out}(C) = \sum_{(u,v) \in E} W_{u,v} I\{u \in C, v \notin C\}$ is the total weight of edges leaving the cluster $C$. In other words, the set of activated vertices $C$ has a small cut size in the graph $G$. While we assume that the noise variance is $1$ in (1), this is equivalent to the more general model in which $\xi \sim N(0, \sigma^2 I_{p \times p})$ with $\sigma^2$ known. If we wanted to consider known $\sigma^2$ then we would apply all our algorithms to $y/\sigma$ and replace $\mu$ with $\mu/\sigma$ in all of our statements. For this reason, we call $\mu$ the signal-to-noise ratio (SNR), and proceed with $\sigma = 1$.
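As a concrete illustration, the model above can be simulated on a toy graph. The chain graph, the cluster, and the constants below are our own illustrative choices, not the paper's:

```python
import random

# Sketch of the graph-structured normal means model on a 1D chain graph:
# an active cluster C of contiguous vertices and one noisy observation per vertex.
random.seed(0)
p = 100                                       # number of vertices
edges = [(i, i + 1) for i in range(p - 1)]    # unweighted chain graph
C = set(range(40, 60))                        # active cluster: a contiguous block
delta = 2.0                                   # per-vertex signal level (our choice)

x = [delta if v in C else 0.0 for v in range(p)]
y = [x_v + random.gauss(0.0, 1.0) for x_v in x]   # y = x + N(0, I) noise

# Cut size out(C): number of edges leaving the cluster (2 for a contiguous block)
out_C = sum(1 for (u, v) in edges if (u in C) != (v in C))
```

A contiguous block on a chain always has cut size at most 2, which is why such clusters are "easy" shapes in the class of low-cut-size activations.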
In graph-structured activation detection we are concerned with statistically testing the null against the alternative hypothesis,
$$H_0 : y \sim N(0, I_p) \quad \text{vs.} \quad H_1 : y \sim N(x, I_p),\ x \in \mathcal{X}. \qquad (2)$$
$H_0$ represents business as usual (such as sensors returning only noise) while $H_1$ encompasses all of the foreseeable anomalous activity (an elevated group of noisy sensor observations). Let a test be a mapping $T(y) \in \{0, 1\}$, where $T(y) = 1$
indicates that we reject the null. It is imperative that we control both the probability of false alarm and the false acceptance of the null. To this end, we define our measure of risk to be
$$R(T) = \mathbb{E}_0[T] + \sup_{x \in \mathcal{X}} \mathbb{E}_x[1 - T],$$
where $\mathbb{E}_x$ denotes the expectation with respect to $y \sim N(x, I_p)$. These terms are also known as the probabilities of type 1 and type 2 error, respectively. This setting should not be confused with the Bayesian testing setup (e.g. as considered in [addario2010combinatorial, arias2008searching]) where the patterns, $x$, are drawn at random. We will say that $H_0$ and $H_1$ are asymptotically distinguished by a test, $T$, if in the setting of large graphs, $\lim_{p \to \infty} R(T) = 0$. If such a test exists then $H_0$ and $H_1$ are asymptotically distinguishable; otherwise they are asymptotically indistinguishable (which occurs whenever the risk does not tend to $0$). We will be characterizing regimes for $\mu$ in which our test asymptotically distinguishes $H_0$ from $H_1$.
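The risk defined above (type 1 error plus worst-case type 2 error) can be estimated by Monte Carlo for any candidate test. The sketch below does this for a simple max-based test on a toy instance; the test, threshold, and constants are our own illustrative choices:

```python
import random

random.seed(1)
p, delta, trials = 50, 4.0, 400
C = set(range(10, 15))     # one fixed alternative pattern (our choice)
tau = 3.0                  # rejection threshold (our choice)

def max_test(y, tau=tau):
    """Reject H0 (return 1) when the largest observation exceeds tau."""
    return 1 if max(y) > tau else 0

def draw(signal):
    """Draw y under H0 (signal=0) or under the fixed alternative (signal=1)."""
    return [(delta if v in C else 0.0) * signal + random.gauss(0.0, 1.0)
            for v in range(p)]

type1 = sum(max_test(draw(0)) for _ in range(trials)) / trials       # E_0[T]
type2 = sum(1 - max_test(draw(1)) for _ in range(trials)) / trials   # E_x[1 - T]
risk = type1 + type2
```

With a strong per-vertex signal, both error terms are small and the estimated risk is well below 1, i.e. this alternative is easy to distinguish from the null.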
Throughout the study, let $\nabla \in \mathbb{R}^{m \times p}$ be the edge-incidence matrix of $G$, such that for $e = (v, w) \in E$, $\nabla_{e,v} = W_e$, $\nabla_{e,w} = -W_e$, and $\nabla_{e,u} = 0$ for $u \notin \{v, w\}$. For directed graphs, vertex degrees refer to the total weight of edges incident to the vertex. Let $\|\cdot\|$ denote the $\ell_2$ norm, $\|\cdot\|_1$ the $\ell_1$ norm, and $(x)_+$ the positive components of the vector $x$. Let $[p] = \{1, \dots, p\}$, and we will be using asymptotic notation, namely if non-negative sequences satisfy $a_p / b_p \to 0$ then $a_p = o(b_p)$ and $b_p = \omega(a_p)$.
Section 3 highlights what is known about the hypothesis testing problem (2); in particular, we provide a regime for $\mu$ in which $H_0$ and $H_1$ are asymptotically indistinguishable. In section 4.1, we derive the graph scan statistic from the generalized likelihood ratio principle, which we show to be a computationally intractable procedure. In section 4.2, we provide a relaxation of the graph scan statistic (GSS), the Lovász extended scan statistic (LESS), and we show that it can be computed with successive minimum cut programs (a graph cut that separates a source vertex from a sink vertex). In section 5, we give our main result, Theorem 5, which provides a type 1 error control for both test statistics, relating their performance to electrical network theory. In section 6, we show that the GSS and LESS can asymptotically distinguish $H_0$ and $H_1$ at signal-to-noise ratios close to the lowest possible for some important graph models. All proofs are in the Appendix.
2 Related Work
Graph-structured signal processing. There have been several approaches to signal processing over graphs. Markov random fields (MRF) provide a succinct framework in which the underlying signal is modeled as a draw from an Ising or Potts model [cevher2009sparse, ravikumar2006quadratic]. We will return to MRFs in a later section, as they relate to our scan statistic. A similar line of research is the use of kernels over graphs. The study of kernels over graphs began with the development of diffusion kernels [kondor2002diffusion], and was extended through Green's functions on graphs [smola2003kernels]. While these methods are used to estimate binary signals over graphs, little is known about their statistical properties and their use in signal detection. To the best of our knowledge, this paper makes the first connection between anomaly detection and MRFs.
Normal means testing. Normal means testing in high dimensions is a well-established and fundamental problem in statistics. Much is known when $x$ derives from a smooth function space such as Besov spaces or Sobolev spaces [ingster1987minimax, ingster2003nonparametric]. Only recently have combinatorial structures such as graphs been proposed as the underlying structure of $x$. A significant portion of the recent work in this area [arias2005nearoptimal, arias2008searching, arias2011detection, addario2010combinatorial] has focused on incorporating structural assumptions on the signal, as a way to mitigate the effect of high dimensionality and also because many real-life problems can be represented as instances of the normal means problem with graph-structured signals (see, for an example, [jacob2010gains]).
Graph scan statistics. In spatial statistics it is common, when searching for anomalous activity, to scan over regions in the spatial domain, testing for elevated activity [neill2004rapid, agarwal2006spatial]. There have been scan statistics proposed for graphs, most notably the work of [priebe2005scan], in which the authors scan over neighborhoods of the graph defined by the graph distance. Other work has developed the theory and algorithms for scan statistics over specific graph models, but these are not easily generalizable to arbitrary graphs [yi2009unified, arias2011detection]. More recently, it has been found that scanning over all well-connected regions of a graph can be computationally intractable, and so approximations to the intractable likelihood-based procedure have been studied [sharpnack2012changepoint, sharpnack2012detecting]. We follow in this line of work, with a relaxation of the intractable generalized likelihood ratio test.
3 A Lower Bound and Known Results
In this section we highlight the previously known results about the hypothesis testing problem (2). This problem was studied in [sharpnack2012detecting], in which the authors demonstrated the following lower bound, which derives from techniques developed in [arias2008searching]. [sharpnack2012detecting] Hypotheses $H_0$ and $H_1$ defined in Eq. (2) are asymptotically indistinguishable if
where $d_{\max}$ is the maximum degree of the graph $G$. Now that a regime of asymptotic indistinguishability has been established, it is instructive to consider test statistics that do not take the graph into account (viz. statistics that are unaffected by a change in the graph structure). Certainly, if we were in a situation where a naive procedure performs near-optimally, then our study would not be warranted. As it turns out, there is a gap between the performance of the natural unstructured tests and the lower bound in Theorem 3.
(1) The thresholding test statistic, $\max_{v \in V} y_v$, asymptotically distinguishes $H_0$ from $H_1$ if $\mu = \omega(\sqrt{|C| \log p})$.
(2) The sum test statistic, $\sum_{v \in V} y_v / \sqrt{p}$, asymptotically distinguishes $H_0$ from $H_1$ if $\mu = \omega(\sqrt{p / |C|})$.
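Both unstructured statistics are trivial to compute, which is part of their appeal. A quick sketch on our own toy instance (not the paper's experiments); the signal placement and magnitude are our choices:

```python
import math
import random

# The two graph-agnostic detectors: thresholding (max) and aggregate (sum).
random.seed(2)
p = 400
y_null = [random.gauss(0.0, 1.0) for _ in range(p)]                   # H0: pure noise
y_alt = [y + (4.0 if v < 5 else 0.0) for v, y in enumerate(y_null)]   # small strong cluster

t_max = max(y_null)                  # under H0, concentrates near sqrt(2 log p)
t_sum = sum(y_null) / math.sqrt(p)   # under H0, a standard normal
t_max_alt = max(y_alt)               # a small strong cluster inflates the max
```

The max test favors small, strong clusters (a few large observations), while the sum test favors large, weak clusters (many slightly elevated observations); neither uses the graph at all.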
As opposed to these naive tests, one can scan over all clusters in $\mathcal{C}$, performing individual likelihood ratio tests. This is called the scan statistic, and it is known to be a computationally intractable combinatorial optimization. Previously, two alternatives to the scan statistic have been developed: the spectral scan statistic [sharpnack2012changepoint], and one based on the uniform spanning tree wavelet basis [sharpnack2012detecting]. The former is indeed a relaxation of the ideal, computationally intractable, scan statistic, but in many important graph topologies, such as the lattice, it provides sub-optimal statistical performance. The uniform spanning tree wavelets in effect allow one to scan over a subclass of the class $\mathcal{C}$, but tend to provide worse performance (as we will see in section 6) than that presented in this work. The theoretical results in [sharpnack2012detecting] are similar to ours, but suffer from additional log factors.
As we have noted, the fundamental difficulty of the hypothesis testing problem is the composite nature of the alternative hypothesis. Because the alternative is indexed by sets, $C \in \mathcal{C}$, with a low cut size, it is reasonable that the test statistic we derive results from a combinatorial optimization program. In fact, we will show that we can express the generalized likelihood ratio (GLR) statistic in terms of a modular program with submodular constraints. This turns out to be a possibly NP-hard program, as a special case of such programs is the well-known knapsack problem [papadimitriou1998combinatorial]. With this in mind, we provide a convex relaxation, using the Lovász extension, of the ideal GLR statistic. This relaxation conveniently has a dual objective that can be evaluated with a binary Markov random field energy minimization, which is a well-understood program. We reserve the theoretical statistical analysis for the following section.
Submodularity. Before we proceed, we will introduce the reader to submodularity and the Lovász extension. (A very nice introduction to submodularity can be found in [bach2010convex].) For a ground set, which we may as well take to be the vertex set $[p]$, we say that a function $F : \{0,1\}^p \to \mathbb{R}$ is submodular if for any $A, B \subseteq [p]$, $F(A) + F(B) \ge F(A \cup B) + F(A \cap B)$. (We will interchangeably use the bijection between $2^{[p]}$ and $\{0,1\}^p$ defined by $C \mapsto \mathbf{1}_C$.) In this way, a submodular function experiences diminishing returns, as additions to large sets tend to be less dramatic than additions to small sets. But while this diminishing returns phenomenon is akin to concave functions, for optimization purposes submodularity acts like convexity, as it admits efficient minimization procedures. Moreover, for every submodular function there is a Lovász extension $f : [0,1]^p \to \mathbb{R}$ defined in the following way: for $z \in [0,1]^p$, let $z_{(j)}$ denote the $j$-th largest element of $z$; then
$$f(z) = \sum_{j=1}^{p} z_{(j)} \left( F(\{(1), \dots, (j)\}) - F(\{(1), \dots, (j-1)\}) \right).$$
The class of submodular functions is similar to that of convex functions in that it is closed under addition and non-negative scalar multiplication. The following facts about Lovász extensions will be important. [bach2010convex] Let $F$ be submodular and $f$ be its Lovász extension. Then $f$ is convex, $f(x) = F(x)$ if $x \in \{0,1\}^p$, and
$$\min \{ F(x) : x \in \{0,1\}^p \} = \min \{ f(x) : x \in [0,1]^p \}.$$
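The Lovász extension can be computed directly from its definition: sort the coordinates in decreasing order and accumulate the marginal gains of the set function. A sketch, using the (undirected) graph cut function as the submodular example; the toy path graph is our own choice:

```python
def lovasz_extension(F, z):
    """Lovász extension of a set function F (with F(empty set) = 0) at z in R^n.
    Visit coordinates in decreasing order; S collects the top-j indices."""
    order = sorted(range(len(z)), key=lambda i: -z[i])
    value, S = 0.0, set()
    prev = F(frozenset())
    for i in order:
        S.add(i)
        cur = F(frozenset(S))
        value += z[i] * (cur - prev)   # marginal gain of adding index i
        prev = cur
    return value

# Example: cut function of a toy path graph 0-1-2 (submodular).
edges = [(0, 1), (1, 2)]
def cut(S):
    return sum(1 for (u, v) in edges if (u in S) != (v in S))

# The extension agrees with F on indicator vectors.
assert lovasz_extension(cut, [1.0, 0.0, 0.0]) == cut(frozenset({0}))
```

For an undirected cut function this recovers the familiar total-variation form $\sum_{(u,v)} |z_u - z_v|$, which is one way to see why cut constraints relax gracefully.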
We are now sufficiently prepared to develop the test statistics that will be the focus of this paper.
4.1 Graph Scan Statistic
It is instructive, when faced with a class of probability distributions indexed by subsets, to think about what techniques we would use if we knew the correct set $C \in \mathcal{C}$ (which is often called oracle information). In this case one would only be testing the null hypothesis $H_0 : x = 0$ against the simple alternative $H_1 : x = (\mu / \sqrt{|C|}) \mathbf{1}_C$. In this situation, we would employ the likelihood ratio test, because by the Neyman-Pearson lemma it is the uniformly most powerful test. The maximum likelihood estimator for $x$ is the empirical mean over $C$ spread across the cluster (the MLE of $\mu$ is $\mathbf{1}_C^\top y / \sqrt{|C|}$), and the likelihood ratio turns out to be
Hence, the log-likelihood ratio is proportional to $(\mathbf{1}_C^\top y)^2 / |C|$, and thresholding this at the $1-\alpha$ quantile of the $\chi^2_1$ distribution gives us a size-$\alpha$ test.
This reasoning has been subject to the assumption that we had oracle knowledge of $C$. A natural statistic, when $C$ is unknown, is the generalized log-likelihood ratio (GLR) defined by $\max_{C \in \mathcal{C}} (\mathbf{1}_C^\top y)^2 / |C|$. We will work with the graph scan statistic (GSS),
$$\hat{s} = \max \left\{ \frac{\mathbf{1}_C^\top y}{\sqrt{|C|}} : \mathrm{out}(C) \le \rho,\ C \subseteq V \right\}, \qquad (3)$$
which is nearly equivalent to the GLR. (We can in fact evaluate $\hat{s}$ for both $y$ and $-y$, taking the maximum, and obtain the GLR, but statistically this is nearly the same.) Notice that there is no guarantee that the program above is computationally feasible. In fact, it belongs to a class of programs, specifically modular programs with submodular constraints, that is known to contain NP-hard instantiations, such as the ratio cut program and the knapsack program [papadimitriou1998combinatorial]. Hence, we are compelled to form a relaxation of the above program that will, with luck, provide a feasible algorithm.
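On a small graph, the GSS can be evaluated by brute force over all subsets, which makes its combinatorial nature (and the need for a relaxation) concrete. The chain graph, cut budget, and constants below are our own toy choices:

```python
import itertools
import math
import random

random.seed(3)
p = 8
edges = [(i, i + 1) for i in range(p - 1)]     # small chain graph
rho = 2                                        # cut-size budget (our choice)
C_true = {3, 4, 5}                             # planted active cluster
y = [(2.0 if v in C_true else 0.0) + random.gauss(0.0, 1.0) for v in range(p)]

def out(S):
    """Cut size of S: number of edges with exactly one endpoint in S."""
    return sum(1 for (u, v) in edges if (u in S) != (v in S))

# Enumerate every nonempty proper subset with cut size at most rho:
# exponential in p, which is exactly the intractability the text describes.
best_val, best_C = -math.inf, None
for r in range(1, p):
    for C in itertools.combinations(range(p), r):
        if out(set(C)) <= rho:
            val = sum(y[v] for v in C) / math.sqrt(len(C))
            if val > best_val:
                best_val, best_C = val, set(C)
```

Even at $p = 8$ the loop touches every feasible subset; at realistic graph sizes this enumeration is hopeless, motivating the convex relaxation that follows.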
4.2 Lovász Extended Scan Statistic
It is common, when faced with combinatorial optimization programs that are computationally infeasible, to relax the domain from the discrete to a continuous one, such as the hypercube $[0,1]^p$. Generally, the hope is that optimizing the relaxation will approximate the combinatorial program well. First we require that we can relax the constraint $\mathrm{out}(C) \le \rho$ to the hypercube $[0,1]^p$. This will be accomplished by replacing $\mathrm{out}$ with its Lovász extension. We then form the relaxed program, which we will call the Lovász extended scan statistic (LESS),
$$\hat{l} = \max_{t \in [p]} \max_x \left\{ \frac{x^\top y}{\sqrt{t}} : \|(\nabla x)_+\|_1 \le \rho,\ \mathbf{1}^\top x \le t,\ x \in [0,1]^p \right\}. \qquad (4)$$
We will find that not only can this be solved with a convex program, but the dual objective is a minimum binary Markov random field energy program. To this end, we will briefly go over binary Markov random fields, which we will find can be used to solve our relaxation.
Binary Markov Random Fields. Much of the previous work on graph-structured statistical procedures assumes a Markov random field (MRF) model, in which there are discrete labels assigned to each vertex in $V$, and the observed variables are conditionally independent given these labels. Furthermore, the prior distribution on the labels is drawn according to an Ising model (if the labels are binary) or a Potts model otherwise. The task is then to compute a Bayes rule from the posterior of the MRF. The majority of the previous work assumes that we are interested in the maximum a posteriori (MAP) estimator, which is the Bayes rule for the $0/1$-loss. This can generally be written in the form,
where $l_v$ is a data-dependent log-likelihood term. Such programs are called graph-representable in [kolmogorov2004energy], and are known to be solvable in the binary case with $s$-$t$ graph cuts. Thus, by the min-cut max-flow theorem, the value of the MAP objective can be obtained by computing a maximum flow. More recently, a dual-decomposition algorithm has been developed in order to parallelize the computation of the MAP estimator for binary MRFs [strandmark2010parallel, sontag2011introduction].
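To make the MAP objective concrete, the sketch below minimizes a modular (unary) term plus a cut penalty by exhaustive search on a toy chain. Real solvers would use $s$-$t$ graph cuts rather than enumeration; the unary terms and penalty weight here are our own choices:

```python
import itertools

# Toy binary MRF energy: E(x) = sum_v theta_v * x_v + lam * #{edges with x_u != x_v}.
p = 6
edges = [(i, i + 1) for i in range(p - 1)]
theta = [1.0, -2.0, -2.5, 0.5, -1.5, 1.0]   # unary (modular) terms, our choice
lam = 1.0                                   # cut penalty weight, our choice

def energy(x):
    unary = sum(theta[v] * x[v] for v in range(p))
    cut = sum(1 for (u, v) in edges if x[u] != x[v])
    return unary + lam * cut

# MAP labeling by brute force over all 2^p binary labelings.
x_map = min(itertools.product((0, 1), repeat=p), key=energy)
```

The objective is exactly "modular term plus cut term", which is the graph-representable form that $s$-$t$ min cut solves in polynomial time.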
We are now ready to state our result regarding the dual form of the LESS program, (4). Define the dual function of the LESS,
The LESS estimator is equal to the following minimum of convex optimizations
Moreover, the dual function is the objective of an MRF MAP problem, which is poly-time solvable with $s$-$t$ graph cuts.
5 Theoretical Analysis
So far we have developed a lower bound for the hypothesis testing problem, shown that some common detectors do not meet this guarantee, and developed the Lovász extended scan statistic from first principles. We will now provide a thorough statistical analysis of the performance of the LESS. Previously, electrical network theory, specifically the effective resistances of edges in the graph, has been useful in describing the theoretical performance of a detector derived from uniform spanning tree wavelets [sharpnack2012detecting]. As it turns out, the performance of the LESS is also dictated by the effective resistances of edges in the graph.
Effective Resistance. Effective resistances have been extensively studied in electrical network theory [lyons2000probability]. We define the combinatorial Laplacian of $G$ to be $\Delta = D - W$ ($D$ is the diagonal degree matrix). A potential difference is any $z \in \mathbb{R}^m$ that satisfies Kirchhoff's potential law: the total potential difference around any cycle is $0$. Algebraically, this means that there exists an $x \in \mathbb{R}^p$ such that $\nabla x = z$. The Dirichlet principle states that any solution to the following program gives an absolute potential that satisfies Kirchhoff's law:
for source/sinks $S \subset V$ and some voltage constraints $v_S \in \mathbb{R}^{|S|}$. By Lagrangian calculus, the solution to the above program can be written in terms of $\Delta^\dagger$, where $\dagger$ indicates the Moore-Penrose pseudoinverse. The effective resistance between a source and a sink is the potential difference required to create a unit flow between them. Hence, the effective resistance between vertices $v$ and $w$ is $r_{v,w} = (\delta_v - \delta_w)^\top \Delta^\dagger (\delta_v - \delta_w)$, where $\delta_v$ is the Dirac delta function at $v$. There is a close connection between effective resistances and random spanning trees. The uniform spanning tree (UST) is a random spanning tree, chosen uniformly at random from the set of all distinct spanning trees. The foundational Matrix-Tree theorem [kirchhoff1847ueber, lyons2000probability] states that the probability of an edge, $e$, being included in the UST is equal to the edge weight times the effective resistance, $W_e r_e$. The UST is an essential component of the proof of our main theorem, in that it provides a mechanism for unravelling the graph while still preserving its connectivity.
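The pseudoinverse formula for effective resistance translates directly into code. The sketch below computes $r_e$ for every edge of a cycle (our choice of graph) and checks the known identity that, on a connected unweighted graph, the edge resistances sum to the number of vertices minus one:

```python
import numpy as np

def effective_resistances(n, edges):
    """r_e = (delta_u - delta_v)^T L^+ (delta_u - delta_v) for each edge e = (u, v)."""
    L = np.zeros((n, n))
    for u, v in edges:                 # build the combinatorial Laplacian
        L[u, u] += 1.0
        L[v, v] += 1.0
        L[u, v] -= 1.0
        L[v, u] -= 1.0
    Lp = np.linalg.pinv(L)             # Moore-Penrose pseudoinverse
    res = []
    for u, v in edges:
        d = np.zeros(n)
        d[u], d[v] = 1.0, -1.0
        res.append(float(d @ Lp @ d))
    return res

# Cycle on n vertices: every edge has effective resistance (n - 1) / n
# (one unit resistor in parallel with a path of n - 1 resistors),
# and the resistances over all edges sum to n - 1.
n = 10
cycle = [(i, (i + 1) % n) for i in range(n)]
r = effective_resistances(n, cycle)
```

The symmetry of the cycle makes every edge resistance identical, previewing the edge-transitive computation in section 6.1.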
We are now in a position to state the main theorem, which will allow us to control the type 1 error (the probability of false alarm) of both the GSS and its relaxation, the LESS. Let $r_{\mathcal{C}}$ be the maximum effective resistance of the boundary of a cluster $C \in \mathcal{C}$. The following statements hold under the null hypothesis $H_0$:
The graph scan statistic, with probability at least $1 - \alpha$, is smaller than
The Lovász extended scan statistic, with probability at least $1 - \alpha$, is smaller than
The implication of Theorem 5 is that the size of the test may be controlled at level $\alpha$ by selecting thresholds given by (5) and (6) for the GSS and LESS respectively. Notice that the control provided for the LESS is not significantly different from that of the GSS. This is highlighted by the following corollary, which combines Theorem 5 with a type 2 error bound to produce an information-theoretic guarantee for the asymptotic performance of the GSS and LESS.
Both the GSS and the LESS asymptotically distinguish $H_0$ from $H_1$ if
To summarize, we have established that the performance of the GSS and the LESS is dictated by the effective resistances of cuts in the graph. While the condition in Cor. 5 may seem mysterious, the guarantee in fact nearly matches the lower bound for many graph models, as we now show.
6 Specific Graph Models
Theorem 5 shows that the effective resistance of the boundary plays a critical role in characterizing the distinguishability region of both the GSS and the LESS. For specific graph families, we can compute the effective resistances precisely, leading to concrete detection guarantees that, as we will see, nearly match the lower bound in many cases. Throughout this section, we will only be working with undirected, unweighted graphs.
Recall that Corollary 5 gives a sufficient SNR for detection, while Theorem 3 gives a necessary one. Thus, if we can show that the effective resistance of a boundary is much smaller than its cut size, we would establish the near-optimality of both the GSS and LESS. Foster's theorem lends evidence to the fact that the effective resistances should be much smaller than the cut size: (Foster's Theorem [foster1949average, tetali1991random])
$$\sum_{e \in E} r_e = p - 1.$$
Roughly speaking, the effective resistance of an edge selected uniformly at random is $(p-1)/m \approx 2/d_{\mathrm{ave}}$, where $d_{\mathrm{ave}}$ is the average degree, so the effective resistance of a cut of size $\rho$ is on the order of $\rho / d_{\mathrm{ave}}$. This intuition can be formalized for specific models, and this improvement by the average degree brings us much closer to the lower bound.
6.1 Edge Transitive Graphs
An edge-transitive graph, $G$, is one for which there is a graph automorphism mapping $e_0$ to $e_1$ for any pair of edges $e_0, e_1$. Examples include the cycle, the $d$-dimensional torus, and the complete graph $K_p$. The existence of these automorphisms implies that every edge has the same effective resistance, and by Foster's theorem, we know that these resistances are exactly $(p-1)/m$. Moreover, since edge-transitive graphs must be $d$-regular, we know that $m = pd/2$, so that $r_e = 2(p-1)/(pd) \le 2/d$. Thus, as a corollary to Theorem 5, we have that both the GSS and LESS are near-optimal (optimal modulo logarithmic factors) on edge-transitive graphs: Let $G$ be an edge-transitive graph with common degree $d$. Then both the GSS and LESS distinguish $H_0$ from $H_1$ provided that:
6.2 Random Geometric Graphs
Another popular family of graphs are those constructed from a set of points in $\mathbb{R}^d$ drawn according to some density. These graphs have inherent randomness stemming from the sampling of the density, and thus earn the name random geometric graphs. The two most popular such graphs are symmetric $k$-nearest-neighbor graphs and $\epsilon$-graphs. We characterize the distinguishability region for both.
In both cases, a set of points $z_1, \dots, z_p$ are drawn i.i.d. from a density supported over $\mathbb{R}^d$, or a subset of it. Our results require mild regularity conditions on the density which, roughly speaking, require that its support is topologically equivalent to the cube and that the density is bounded away from zero (see [vonluxburg2010] for a precise definition). To form a $k$-nearest-neighbor graph $G$, we associate each vertex $i$ with a point $z_i$, and we connect vertices $i$ and $j$ if $z_i$ is amongst the $k$ nearest neighbors of $z_j$, or vice versa. In the $\epsilon$-graph, we connect vertices $i$ and $j$ if $d(z_i, z_j) \le \epsilon$ for some metric $d$.
The relationship between the effective resistance and the degree, which we used for edge-transitive graphs, was derived in Corollaries 8 and 9 of [vonluxburg2010]. The precise concentration arguments, which have been done before [sharpnack2012detecting], lead to the following corollary regarding the performance of the GSS and LESS on random geometric graphs: Let $G$ be a $k$-NN graph whose underlying density meets the regularity conditions in [vonluxburg2010]. Then both the GSS and LESS distinguish $H_0$ from $H_1$ provided that:
If $G$ is an $\epsilon$-graph, then both distinguish $H_0$ from $H_1$ provided that:
The corollary follows immediately from Corollary 5 and the proofs in [sharpnack2012detecting]. Since, under the regularity conditions, the maximum degree is of order $k$ in $k$-NN graphs and of order $p\epsilon^d$ in $\epsilon$-graphs, the corollary establishes the near-optimality (again modulo logarithmic factors) of both test statistics.
Figure: A comparison of detection procedures: spectral scan statistic (SSS), UST wavelet detector (Wavelet), and LESS. The graphs used are the square 2D torus, a $k$NN graph, and an $\epsilon$-graph.
We performed some experiments using the MRF-based algorithm outlined in Prop. 4.2. Each experiment is made with graphs of a fixed size, and we report the true positive rate versus the false positive rate as the threshold varies (also known as the ROC curve). For each graph model, the LESS provides gains over the spectral scan statistic [sharpnack2012changepoint] and the UST wavelet detector [sharpnack2012detecting]; the gains are significant except for the $\epsilon$-graph, where they are more modest.
To summarize: while Corollary 5 characterizes the performance of the GSS and LESS in terms of effective resistances, in many specific graph models this can be translated into near-optimal detection guarantees for these test statistics. We have demonstrated that the LESS provides guarantees similar to those of the computationally intractable generalized likelihood ratio test (the GSS). Furthermore, the LESS can be solved through successive graph cuts by relating it to MAP estimation in an MRF. Future work includes using these concepts to localize the activation, making the program robust to missing data, and extending the analysis to non-Gaussian errors.
This research is supported in part by AFOSR under grant FA9550-10-1-0382 and NSF under grant IIS-1116458. AK is supported in part by a NSF Graduate Research Fellowship. We would like to thank Sivaraman Balakrishnan for his valuable input in the theoretical development of the paper.
Let us introduce the following notation: $\mathrm{out}(A) = \sum_{(u,v) \in E} W_{u,v} I\{u \in A, v \notin A\}$ is the total weight of edges with a tail in $A$ and a head in $V \setminus A$.
The Lovász extension of $\mathrm{out}$ is $\|(\nabla x)_+\|_1$.
1. Let us partition all of the relevant edges: . Let us then evaluate ,
2. Let be the Lovász extension of . Let , and be such that . Furthermore, let . Then, we see that takes the form,
Let us consider then the components attributable to the edge ; these are because there is no contribution if . This gives us our result. ∎
Proof of Proposition 4.2.
We begin with the LESS form in (4),
Define Lagrangian parameters and the Lagrangian function, and notice that it is convex in the primal variable and concave in the dual variables. Also, the primal domain is bounded, and each domain is non-empty, closed, and convex.
This follows from a saddlepoint result in [rockafellar1997convex] (p.393 Cor. 37.3.2). All that remains is to notice that is the Lovász extension of for . Hence, by Proposition 4, there exists a minimizer that lies within , and so
This follows from the fact that is equal to for . The program takes the form of a modular term and a cut term, which is solvable by graph cuts [cormen2001introduction]. ∎
8.1 Proof of Theorem 5
We will begin by establishing some facts about uniform spanning trees (UST). In a directed graph, a spanning tree is a tree in the graph that contains each vertex such that all the vertices but one (the root) are tails of edges in the tree. If the directed graph is not connected (i.e. there are two vertices such that there is no directed path between them) then we would have to generalize our results to a spanning forest. We will therefore assume this is not the case, for ease of presentation. Notice that in the case that we have a weighted graph, then the UST makes the probability of selecting a tree proportional to the product of the constituent edge weights. [fung2010graph] Let and let be a draw from the UST. If , for any ,
This implies that with probability , [sharpnack2012detecting]. Moreover, the probability that an edge is included in is its effective resistance times the edge weight, [lyons2000probability].
Proof of Theorem 5 (1).
In the following proof, for some class , let (this is known as a Gaussian complexity). Furthermore let be the incidence matrix restricted to the edges in (note that this is an unweighted directed graph). Let and then under the UST for any , . (This follows from Lemma 8.1.)
For any , because is unweighted. By Gaussianity and the fact that ,
Furthermore, where . Setting we have the following bound on the Gaussian complexity,
By Cirelson’s theorem [ledoux2001concentration], with probability at least ,
Proof of Theorem 5 (2).
Let . It remains the case that, by the previous Lemma 8.1, , where .
These follow from Jensen’s inequality and Fubini’s theorem.
We will proceed to prove the above claim. In words it follows from the fact that solutions to the program are integral by the generic chaining.
The second equality holds because the solution to the optimization with fixed is the top coordinates of . The third equality holds because and so is integer. Hence, if is a solution for the objective with fixed and then it holds for the objective with , and the overall objective is increased. Thus at the optimum, .
Denote . For any spanning tree ,
This will follow from weak duality and a clever choice of dual parameters.
The above display follows by selecting and and using Prop. 4.
The above display follows from the fact that for any , . We know that with probability at least for all ,
So we can bound the above,