Outlier Detection from Network Data with Subnetwork Interpretation

09/30/2016 · Xuan-Hong Dang, et al. · Raytheon Company · The Regents of the University of California

Detecting a small number of outliers from a set of data observations is always challenging. The problem is more difficult in the setting of multiple network samples, where computing the anomalous degree of a network sample is generally not sufficient: explaining why the network is exceptional, expressed in the form of subnetworks, is equally important. In this paper, we develop a novel algorithm to address these two key problems. We treat each network sample as a potential outlier and identify subnetworks that best discriminate it from nearby regular samples. The algorithm is developed in the framework of network regression combined with constraints on both the network topology and the L1-norm shrinkage to perform subnetwork discovery. Our method thus goes beyond subspace/subgraph discovery, and we show that it converges to a global optimum. Evaluation on various real-world network datasets demonstrates that our algorithm not only outperforms baselines in both the network and high-dimensional settings, but also discovers highly relevant and interpretable local subnetworks, further enhancing our understanding of anomalous networks.


I Introduction

Detecting and characterizing exceptional patterns is an important task in many domains, ranging from fraud detection and environmental surveillance to various health care applications [37, 4]. This problem is often referred to as outlier or anomaly detection in the literature. In contrast to other popular data mining tasks like clustering, classification or frequent pattern mining, which all discover prevalent patterns, outlier identification aims at uncovering a small set of inconsistent objects (outliers) that deviate significantly from the much larger number of regular objects (inliers) in the data.

Although identifying anomalous objects has been widely studied in high dimensional data [37] and recently extended to the network context [4], the problem remains very challenging. One of the most challenging issues lies in the fact that the number of anomalous objects is considerably smaller than the large population of regular ones, which limits the learning capability of most data mining algorithms. Another challenge comes from the notion of “inconsistency”, which is hard to precisely define, quantify and interpret, especially when entities are connected in a network. In the network setting, most existing works focus on searching for individual nodes [19] or groups of linked nodes [15] whose structures or behaviors are irregular. Though these studies have provided intuitive concepts about outlying patterns defined with respect to network connectivity, most results are limited to the setting of a single static network. Other recent studies have extended the scope of analysis to evolving networks [7, 17], but their focus is on event/change detection, where the temporal dimension is a key factor for defining outliers.

In this paper, we address the problem of identifying anomalous networks from a database of multiple network samples while at the same time investigating why a network is exceptional. An outlier is defined at the global level of an entire network sample, but we use local subnetworks to explain its exceptionality. Although the outlierness of a network sample can be quantified via an outlier degree, such a single measure bears only limited explanatory information [26, 12], since it lacks the capability of showing in which data view, i.e., which local subnetworks, an anomalous network is most exceptional. Moreover, although two networks may have similar outlier degrees, the local subnetworks that make them abnormal might be quite different, since the anomalous networks themselves are usually not homogeneous. For example, exploring a database of gene networks for outliers can lead to the isolation of subjects suffering from cancer. However, the gene pathway (local subnetwork) that causes the disease can vary from subject to subject due to the complexity of the disease [23], or even depend on the stage of the disease. Spotting an unhealthy subject is generally not sufficient; figuring out which abnormal gene subnetwork leads to the disease is usually more important, since it helps in developing effective treatments.

We develop a novel algorithm that exploits network regression models combined with network topology regularization to concurrently address the two important problems mentioned above. Specifically, we treat each network sample as a potential outlier and determine local subnetworks that help discriminate it from nearby regular network samples. Our objective function is formulated under the framework of network regression, where we first upsample the outlier candidate network in order to make the binary regression problem balanced. The objective function is then regularized by the network topology and further penalized by L1-norm shrinkage to perform subnetwork discovery. We show that the combined objective function has a form closely related to the dual SVM [20, 18], which can be further optimized in the primal form using Newton’s method. The objective function is proven to be convex, which is key to guaranteeing the convergence of the algorithm. Our algorithm therefore goes beyond the simple strategy of subspace/subgraph examination by directly learning the most discriminative subnetworks with respect to each network sample. Consequently, the outlier degree can be appropriately computed within the space spanned by these selected subnetworks and, collectively, the scores form an outlier ranking of all network samples.

In summary, we make the following contributions in this work: (i) We address the challenging problem of both identifying and explaining anomalous networks from a database of network samples. The explanations are expressed in the form of local subnetworks, which play a key role in understanding the abnormal properties behind the observed network data; (ii) We formulate the problem under the regression framework with network regularization for subnetwork discovery, and develop a novel algorithm to efficiently mine the most relevant subnetworks that discriminate and explain network outliers from their nearby network inliers; (iii) We demonstrate the effectiveness of our algorithm against typical techniques developed for both dynamic network data and high dimensional data using various real-world datasets. Experimental results show that our algorithm is not only competitive in outlier ranking quality but also outputs highly relevant and interpretable local subnetworks, leading to a better understanding of why the outlier networks are exceptional.

II Problem Setting

Definition 1

A network sample is a triple $G_i = (V_i, E_i, f_i)$, where $V_i$ is a set of nodes, $E_i$ is a set of undirected edges, and $f_i : V_i \to \mathbb{R}$ is a function labeling each node with a real number.

Let $\mathcal{G} = \{G_1, G_2, \ldots, G_N\}$ be a network dataset that consists of $N$ network samples. We focus on a family of networks whose topologies are relatively stable across different network instances. For example, human subjects usually have similar gene networks with the same number of genes; however, the expression level of each individual gene may differ from subject to subject. Likewise, various snapshots captured from a traffic network often have the same network topology, while traffic conditions on each road segment may vary from snapshot to snapshot. In mining outlying networks from a database of network samples $\mathcal{G}$, we aim to compute an anomaly score for each network sample and, at the same time, to uncover subnetworks that show the most exceptional properties of the network under examination. Collectively, an outlier ranking is generated for the entire dataset, and those network samples having the highest anomaly scores are brought up to the user for further investigation.

III Regression on Networks

As mentioned in the previous section, our objective is not only to compute the outlier degree for each network sample but also to discover a small set of subnetworks as explanations for each outlier candidate network. We explore the regression model for our problem since it allows us to formulate outlier detection as a binary prediction. In this section, we formulate the regression problem solely based on the values associated with the network nodes; the network topology will be taken into account in the next section.

We view each network sample as a potential outlier candidate while comparing its properties against its nearby networks (based on some network distance measure, e.g., cosine distance between node values [13]). Therefore, a network sample can be a local outlier rather than a global one [37, 10], as both the network distribution and the outliers themselves can be heterogeneous, and one should not presume any canonical form for the distribution. Let us denote $G_o$ as an outlier network candidate, and $G_i$ as one of its neighboring networks (we use the same index $i$ as in Def. 1 for simplification, but here $i$ only ranges over the $k$ nearest neighbors of $G_o$). We can capture the node values of a network sample $G_i$ by a vector $\mathbf{x}_i$ in a high dimensional space $\mathbb{R}^m$, with $m$ the number of nodes. Under the vector format, we aim to optimize the following regression function for each $G_o$:

$$\min_{\mathbf{w}}\; \sum_{i} \big(y_i - \mathbf{w}^T\mathbf{x}_i\big)^2 \quad \text{s.t.}\quad \|\mathbf{w}\|_1 \le 1 \qquad\qquad (1)$$

where $\mathbf{x}_i$ is the vector of local node values for network $G_i$; $y_i = +1$ if $G_i$ is the outlier candidate $G_o$, while $y_i = -1$ if $G_i$ is among the neighboring networks of $G_o$; and $\|\mathbf{w}\|_1$ is the L1-norm of vector $\mathbf{w}$. The main role of the L1-norm constraint is to set many coefficients in $\mathbf{w}$ to zero if the corresponding nodes are less predictive. It is worth mentioning that, in the conventional case, one can constrain $\|\mathbf{w}\|_1 \le \lambda_1$ [18] for a non-negative constant $\lambda_1$. However, it is easy to see that $\lambda_1$ is only a scalar and can be replaced by 1 by dividing both $\mathbf{w}$ and the predicted labels $y_i$’s by $\lambda_1$. For simplicity, we thus directly use the constraint $\|\mathbf{w}\|_1 \le 1$.

It is possible to see that our Eq.(1) resembles the form of Lasso regression [18]. However, there are two challenging issues in optimizing Eq.(1). First, our regression model is highly imbalanced since we have only a single outlier candidate but a large number of neighboring inliers. In dealing with this issue, we adopt a simple approach of upsampling the outlier candidate in order to ensure that the data become balanced [6]. Essentially, $k-1$ new samples are generated (for the outlier class) following the normal distribution with $\mathbf{x}_o$ as the mean vector, and with the covariance matrix computed from the statistics of the neighboring networks. By doing so, we ensure that variations at each node/dimension of the outlier class are not generated randomly but resemble the ones from the inlier class, and thus minimize the impact on the explanation quality of the outlier.
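To make the balancing step concrete, the following is a minimal NumPy sketch of this upsampling (illustrative code, not the exact implementation; the helper name and the variables x_o and X_nbrs are our own):

```python
import numpy as np

def upsample_outlier(x_o, X_nbrs, rng=None):
    """Balance the two-class regression by sampling k-1 synthetic
    outlier points around the candidate network's node values.

    x_o     : (m,) node-value vector of the outlier candidate
    X_nbrs  : (k, m) node-value vectors of the k neighboring inliers
    returns : (k, m) matrix whose first row is x_o itself
    """
    rng = np.random.default_rng(rng)
    k, m = X_nbrs.shape
    # The covariance of the synthetic outlier class is borrowed from
    # the inlier statistics so per-node variation looks realistic.
    cov = np.cov(X_nbrs, rowvar=False) + 1e-6 * np.eye(m)
    synth = rng.multivariate_normal(mean=x_o, cov=cov, size=k - 1)
    return np.vstack([x_o, synth])
```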

The second, more challenging, issue in optimizing Eq.(1) is that the function is not directly differentiable: it is not smooth due to the L1-norm imposed on $\mathbf{w}$. The solution is at best only suboptimal using methods like sub-gradient descent [30], in which each component of $\mathbf{w}$ is optimized individually and in sequence. Moreover, such a solution is less efficient given the large number of nodes in the networks. We thus handle the L1-norm in a more general setting [30] by representing $\mathbf{w}$ using two non-negative vectors $\mathbf{u}$ and $\mathbf{v}$, whose entries are respectively defined as $u_j = \max(0, w_j)$ and $v_j = \max(0, -w_j)$. Hence, it is easy to see that $\mathbf{w} = \mathbf{u} - \mathbf{v}$ and $\|\mathbf{w}\|_1 = \mathbf{1}^T(\mathbf{u} + \mathbf{v})$. We denote the new variable $\boldsymbol{\beta} = [\mathbf{u}^T, \mathbf{v}^T]^T$. Coefficients in $\boldsymbol{\beta}$ are thus all non-negative. Now, in combination with the upsampling reasoning above, Eq.(1) can be reformulated in matrix form as follows:

$$\min_{\boldsymbol{\beta} \ge 0}\; \|\mathbf{y} - \mathbf{X}\boldsymbol{\beta}\|_2^2 \quad \text{s.t.}\quad \mathbf{1}^T\boldsymbol{\beta} \le 1 \qquad\qquad (2)$$

where $\mathbf{X}$ is the $2k \times 2m$ matrix whose first $k$ rows are the vectors $[\mathbf{x}_i^T, -\mathbf{x}_i^T]$’s, and whose last $k$ rows are $[\mathbf{x}_o^T, -\mathbf{x}_o^T]$ and its sampling vectors. Correspondingly, the first $k$ entries of vector $\mathbf{y}$ are $-1$, predicting the $G_i$’s as inliers, while the last $k$ entries are $+1$, predicting $G_o$ and its upsampled samples as outliers.
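A small sketch of this variable split (again illustrative, assuming NumPy and the upsampling helper above; the label convention is $-1$ for inliers, $+1$ for the outlier class) builds the sign-doubled design matrix so that $\mathbf{X}\boldsymbol{\beta} = \mathbf{X}_{\mathrm{orig}}(\mathbf{u}-\mathbf{v})$:

```python
import numpy as np

def build_balanced_design(X_in, X_out):
    """Stack inlier and (upsampled) outlier rows, then duplicate the
    columns with flipped signs so that X @ beta == X_orig @ (u - v)
    for beta = [u; v] >= 0.

    X_in  : (k, m) inlier node-value vectors
    X_out : (k, m) outlier candidate plus its synthetic samples
    """
    X_orig = np.vstack([X_in, X_out])              # (2k, m)
    X = np.hstack([X_orig, -X_orig])               # (2k, 2m)
    y = np.concatenate([-np.ones(len(X_in)),       # inliers:  -1
                        np.ones(len(X_out))])      # outliers: +1
    return X, y
```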

IV Role of Network Topology

Our formulation in Eq.(2) gives us a regression form to predict $G_o$ as an outlier candidate based on the local state values associated with the network nodes. It, however, has not yet taken the network structure into account, and may lack essential information for learning the most relevant subnetworks that make $G_o$ exceptional. Therefore, we add the network structure information as a constraint in learning $\mathbf{w}$’s coefficients. Intuitively, if two nodes are connected in the network, their behaviors will mutually impact each other and, consequently, their coefficients reflected in $\mathbf{w}$’s entries should be similar. For example, if congestion happens at a road segment (node), it is likely that the nearby road segments will also be impacted, causing low speed over a region of the network. Towards modeling this network influence, we first define a graph that generalizes the network topology of both $G_o$ and its neighboring networks as follows:

Definition 2

Let $\mathcal{N}(G_o)$ be the set of networks that involves the outlier candidate network $G_o$ and its $k$ neighboring networks $G_i$’s. We define $G^{(o)} = (V^{(o)}, E^{(o)})$ as a graph summarizing the network topology of $\mathcal{N}(G_o)$, where $V^{(o)}$ is the union of the $V_i$’s and $E^{(o)}$ is the union of the $E_i$’s. (The superscript $(o)$ is written for $G^{(o)}$ only, but it should be understood that it also applies to $V^{(o)}$ and $E^{(o)}$, since we define $G^{(o)}$ for each outlier candidate network $G_o$.) Each edge $(s,t) \in E^{(o)}$ is associated with a positive weight defined as the popularity of the corresponding edge in either $G_o$ or its neighboring networks $G_i$’s, i.e., $a_{st} = \frac{1}{k+1}\sum_{G_i \in \mathcal{N}(G_o)} e_{st}^{(i)}$, with $e_{st}^{(i)} = 1$ if $(s,t)$ connects $s$ and $t$ in network $G_i$, and $e_{st}^{(i)} = 0$ otherwise.

We will regularize $\mathbf{w}$ using $G^{(o)}$’s topology in order to favor subnetworks that are frequently seen in $G_o$ and/or its neighboring networks, and to disfavor subnetworks that appear only occasionally in the $G_i$’s and are absent in $G_o$ (i.e., $a_{st}$ is small). Values of the entries in the matrix $\mathbf{A} = [a_{st}]$ are thus constrained between 0 and 1. Moreover, since all networks are undirected, $\mathbf{A}$ is a symmetric $m \times m$ matrix, with $m$ the total number of nodes in $V^{(o)}$.
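A short sketch of this edge-popularity weighting (hypothetical helper; edges are given as node-index pairs):

```python
import numpy as np

def summary_adjacency(edge_lists, m):
    """Weighted adjacency of the summary graph G^(o): entry (s, t) is
    the fraction of the k+1 networks (candidate plus neighbors) that
    contain edge (s, t), hence a value in [0, 1].

    edge_lists : list of k+1 edge lists, one per network sample
    m          : total number of nodes in the union graph
    """
    A = np.zeros((m, m))
    for edges in edge_lists:
        for s, t in edges:
            A[s, t] += 1.0
            A[t, s] += 1.0            # undirected networks
    return A / len(edge_lists)
```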

In searching for the subnetworks that explain the abnormal properties of a network sample $G_o$, we impose a smoothness constraint on $\mathbf{w}$’s coefficients with respect to the graph topology captured by $\mathbf{A}$. In combination with the L1-norm imposed on $\mathbf{w}$ (ref. Eq.(1)), they together perform group/subgraph selection that predicts $G_o$ as an outlier network.

Essentially, let us define the degree of a vertex $s$ in the graph $G^{(o)}$ as $d_s = \sum_t a_{st}$, i.e., the sum over all unordered pairs for which $s$ and $t$ are linked in $G^{(o)}$. We assume that $G^{(o)}$ is connected (if not, each of its disconnected components will be considered separately) and thus the degree of every node is non-zero. Accordingly, the matrix $\mathbf{L}$ is defined as follows:

$$\mathbf{L}_{st} = \begin{cases} 1 & \text{if } s = t,\\ -\dfrac{a_{st}}{\sqrt{d_s d_t}} & \text{if } (s,t) \in E^{(o)},\\ 0 & \text{otherwise,} \end{cases} \qquad\qquad (3)$$

i.e., $\mathbf{L} = \mathbf{I} - \mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}$ with $\mathbf{D} = \mathrm{diag}(d_1, \ldots, d_m)$. It is not hard to show that $\mathbf{L}$ is positive semidefinite; it is the normalized Laplacian matrix of $G^{(o)}$. Thus, the network topology can be taken as a regularization constraint imposed on $\mathbf{w}$ via minimizing the following quadratic form:

$$\mathbf{w}^T\mathbf{L}\mathbf{w} = \sum_{(s,t) \in E^{(o)}} a_{st}\left(\frac{w_s}{\sqrt{d_s}} - \frac{w_t}{\sqrt{d_t}}\right)^2 \qquad\qquad (4)$$

It can be seen that if $s$ and $t$ are connected in $G^{(o)}$ with a large value $a_{st}$, the function will incur a large penalty wherever $w_s/\sqrt{d_s}$ and $w_t/\sqrt{d_t}$ differ from each other. Thus, these coefficients should be similar/smooth in order to minimize this penalty. For example, if node $s$ is highly explanatory for the abnormal property of $G_o$, then there is a high possibility that $t$ is also related to the abnormality of $G_o$ if both nodes are strongly connected (i.e., $a_{st}$ is large). Likewise, if $s$ is less explanatory for $G_o$, its non-selection ($w_s = 0$) makes $t$ also less likely to be selected. However, in order to appropriately incorporate this network-constrained penalty into our objective function formulated in Eq.(2), we need the following lemma.
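The following sketch (assuming NumPy and a connected summary graph, so every degree is strictly positive) computes the normalized Laplacian of Eq.(3) and evaluates the smoothness penalty of Eq.(4):

```python
import numpy as np

def normalized_laplacian(A):
    """L = I - D^{-1/2} A D^{-1/2} for a weighted adjacency A with
    strictly positive degrees (the graph is assumed connected)."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt

def smoothness_penalty(w, L):
    """Quadratic form w^T L w of Eq.(4): large when strongly connected
    nodes receive very different (degree-normalized) coefficients."""
    return float(w @ L @ w)
```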

Lemma 1

Given the definition $\boldsymbol{\beta} = [\mathbf{u}^T, \mathbf{v}^T]^T$ with $\mathbf{w} = \mathbf{u} - \mathbf{v}$, the following equation is satisfied:

$$\mathbf{w}^T\mathbf{L}\mathbf{w} \;=\; \boldsymbol{\beta}^T\mathbf{M}\boldsymbol{\beta}, \qquad \mathbf{M} = \begin{bmatrix} \mathbf{L} & -\mathbf{L}\\ -\mathbf{L} & \mathbf{L} \end{bmatrix} \qquad\qquad (5)$$

Proof: The proof of this lemma is straightforward via the expansion of the quadratic forms on both sides of Eq.(5). ∎

Lemma 1 ensures that the network constraint penalty can also be represented using the transformed variable $\boldsymbol{\beta}$. Following this, we recast our objective function in Eq.(2):

$$\min_{\boldsymbol{\beta} \ge 0}\; \|\mathbf{y} - \mathbf{X}\boldsymbol{\beta}\|_2^2 + \lambda\,\boldsymbol{\beta}^T\mathbf{M}\boldsymbol{\beta} \quad \text{s.t.}\quad \mathbf{1}^T\boldsymbol{\beta} \le 1 \qquad\qquad (6)$$

Notice that if the inequality constraint in Eq.(6) is not tight, i.e., $\mathbf{1}^T\boldsymbol{\beta} < 1$, then the upper bound is inactive, and in this case the coefficients in $\boldsymbol{\beta}$ will be widely non-zero. In other words, the majority of nodes in the graph will be selected. This solution is obviously undesirable. Therefore, in order to ensure that only the subnetworks with the most explanatory information are used for $G_o$, this constraint should always be tight [9, 36]. This means that we can safely use the equality constraint $\|\boldsymbol{\beta}\|_1 = 1$, or, with $\mathbf{1}$ as the vector of all 1’s, $\mathbf{1}^T\boldsymbol{\beta} = 1$. Under this setting, the first term in Eq.(6) can be rewritten as:

$$\|\mathbf{y} - \mathbf{X}\boldsymbol{\beta}\|_2^2 = \|(\mathbf{y}\mathbf{1}^T - \mathbf{X})\boldsymbol{\beta}\|_2^2 = \boldsymbol{\beta}^T\hat{\mathbf{X}}^T\hat{\mathbf{X}}\boldsymbol{\beta} \qquad\qquad (7)$$

in which we use $\hat{\mathbf{X}}$ and $\hat{\mathbf{x}}_i$ to respectively denote the matrix $\mathbf{X} - \mathbf{y}\mathbf{1}^T$ and its $i$-th row vector. Consequently, we can combine the two terms in Eq.(6) into a single quadratic form by using the following lemma.

Lemma 2

Let $\mathbf{L}$ be eigen-decomposed into $\mathbf{U}\boldsymbol{\Lambda}\mathbf{U}^T$ and let $\mathbf{Z}$ be the matrix obtained by stacking $\hat{\mathbf{X}}$ on top of $\sqrt{\lambda}\,[\boldsymbol{\Lambda}^{1/2}\mathbf{U}^T, -\boldsymbol{\Lambda}^{1/2}\mathbf{U}^T]$. Then:

$$\boldsymbol{\beta}^T\hat{\mathbf{X}}^T\hat{\mathbf{X}}\boldsymbol{\beta} + \lambda\,\boldsymbol{\beta}^T\mathbf{M}\boldsymbol{\beta} \;=\; \boldsymbol{\beta}^T\mathbf{Z}^T\mathbf{Z}\boldsymbol{\beta} \qquad\qquad (8)$$

Proof: On one hand, the expansion of the first term gives us:

$$\boldsymbol{\beta}^T\hat{\mathbf{X}}^T\hat{\mathbf{X}}\boldsymbol{\beta} = \sum_i \big(\hat{\mathbf{x}}_i^T\boldsymbol{\beta}\big)^2 \qquad\qquad (9)$$

On the other hand, as $\mathbf{L}$ is a normalized Laplacian matrix, it can be eigen-decomposed into $\mathbf{L} = \mathbf{U}\boldsymbol{\Lambda}\mathbf{U}^T$, where $\mathbf{U}$ and $\boldsymbol{\Lambda}$ are respectively the matrices of eigenvectors and non-negative eigenvalues of $\mathbf{L}$. Therefore:

$$\lambda\,\boldsymbol{\beta}^T\mathbf{M}\boldsymbol{\beta} = \lambda\,(\mathbf{u}-\mathbf{v})^T\mathbf{U}\boldsymbol{\Lambda}\mathbf{U}^T(\mathbf{u}-\mathbf{v}) = \big\|\sqrt{\lambda}\,[\boldsymbol{\Lambda}^{1/2}\mathbf{U}^T, -\boldsymbol{\Lambda}^{1/2}\mathbf{U}^T]\,\boldsymbol{\beta}\big\|_2^2 \qquad\qquad (10)$$

From the 1st row to the 2nd row, we have used the fact that both $\mathbf{u}$ and $\mathbf{v}$ have the same size $m \times 1$, while $\boldsymbol{\beta}$ has size $2m \times 1$, so the pairwise concatenation of the two matrices in the 2nd row is clearly matched. Summing Eq.(9) and Eq.(10) yields the quadratic form $\boldsymbol{\beta}^T\mathbf{Z}^T\mathbf{Z}\boldsymbol{\beta}$. ∎

Given Lemma 2 in combination with the previous results, we can rewrite Eq.(6) as follows:

$$\min_{\boldsymbol{\beta} \ge 0}\; \boldsymbol{\beta}^T\big(\mathbf{Z}^T\mathbf{Z} + \epsilon\,\mathbf{I}\big)\boldsymbol{\beta} \quad \text{s.t.}\quad \mathbf{1}^T\boldsymbol{\beta} = 1 \qquad\qquad (11)$$

where, like in classical ridge regression [18], we add a small amount of L2-norm regularization $\epsilon\,\mathbf{I}$ in order to improve the stability of the solutions when the number of samples is smaller than the number of variables.
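To illustrate how the pieces of Eq.(11) fit together, a sketch follows (NumPy; `eps` and `lam` are our illustrative names for the ridge and network-regularization weights) that assembles the single quadratic form from $\hat{\mathbf{X}}$ and the eigendecomposition of $\mathbf{L}$:

```python
import numpy as np

def combined_quadratic(X, y, L, lam, eps):
    """Return Q = Z^T Z + eps*I such that the objective of Eq.(11)
    is beta^T Q beta, with beta >= 0 and 1^T beta = 1.

    X   : (2k, 2m) sign-doubled design matrix
    y   : (2k,) labels (-1 inliers, +1 outlier class)
    L   : (m, m) normalized Laplacian of the summary graph
    """
    X_hat = X - np.outer(y, np.ones(X.shape[1]))   # X - y 1^T
    evals, U = np.linalg.eigh(L)                   # L = U diag(evals) U^T
    evals = np.clip(evals, 0.0, None)              # guard tiny negatives
    S_half = np.diag(np.sqrt(evals)) @ U.T         # Lambda^{1/2} U^T
    S = np.hstack([S_half, -S_half])               # acts on beta = [u; v]
    Z = np.vstack([X_hat, np.sqrt(lam) * S])
    return Z.T @ Z + eps * np.eye(Z.shape[1])
```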

V Optimization

In solving the objective function in Eq.(11), it is possible to note that it is closely related to the dual form of the SVM with the squared loss function [20, 34]:

$$\min_{\boldsymbol{\beta} \ge 0}\; \frac{1}{2}\,\boldsymbol{\beta}^T\Big(\bar{\mathbf{Y}}\mathbf{K}\bar{\mathbf{Y}} + \frac{1}{C}\,\mathbf{I}\Big)\boldsymbol{\beta} - \mathbf{1}^T\boldsymbol{\beta} \qquad\qquad (12)$$

for any general dataset of $n$ samples, where $\mathbf{K}$ is the kernel matrix over the samples, $\bar{\mathbf{Y}}$ is the diagonal matrix whose entries are the class labels (i.e., $\pm 1$) of the corresponding samples, and $C$ is the margin parameter. (Note that we use the same notation $\boldsymbol{\beta}$ in both Eq.(12) and Eq.(11) for ease of explanation. However, $\boldsymbol{\beta}$ in Eq.(12) should be understood as the vector of Lagrange multipliers (often denoted by $\boldsymbol{\alpha}$ in [20, 34]). Likewise, the samples appearing in Eq.(12) and Eq.(11) are not necessarily the same.)

It is easy to see that our objective in Eq.(11) can also be represented in this format. Specifically, $\hat{\mathbf{X}} = \bar{\mathbf{Y}}(\bar{\mathbf{Y}}\mathbf{X} + \mathbf{1}\mathbf{1}^T)$, where $\bar{\mathbf{Y}} = \mathrm{diag}(\tilde{\mathbf{y}})$ and $\tilde{\mathbf{y}} = -\mathbf{y}$ is the vector in which the first $k$ entries are 1’s and the last $k$ entries are $-1$’s. Our $\epsilon$ in Eq.(11) plays a similar role to $1/C$ in Eq.(12). The only difference between the two objective functions is that our optimization (Eq.(11)) further requires the constraint $\mathbf{1}^T\boldsymbol{\beta} = 1$. However, it can also be seen that if such a constraint were applied to Eq.(12), its last term would become a constant. Indeed, this constraint simply rescales our optimal solution for $\boldsymbol{\beta}$ to be of unit L1-length; the sparseness property of $\boldsymbol{\beta}$ is obviously unchanged by such a normalization step. Similar to the dual-form SVM, we could solve Eq.(11) using several available techniques like coordinate descent [20], interior point [31] or active set methods [29]. However, the computation then involves dealing with the inequality constraints directly. Therefore, a more practical approach is to consider such a quadratic programming problem in the primal form of an unconstrained problem [34, 21] as follows:

$$\min_{\boldsymbol{\beta}}\; \frac{\epsilon}{2}\,\|\boldsymbol{\beta}\|_2^2 + \frac{1}{2}\sum_i \max\big(0,\; 1 - \tilde{y}_i\,\mathbf{z}_i^T\boldsymbol{\beta}\big)^2 \qquad\qquad (13)$$

where, with the introduction of the vector $\tilde{\mathbf{y}}$ above, we have redefined $\mathbf{Z}$ with the $\mathbf{z}_i$’s as its column vectors, and $\tilde{y}_i \in \{\pm 1\}$.

In this representation, one can view the first quantity in Eq.(13) as the regularization term and the second one as the loss function. Since there is a flat part in this loss function (i.e., the 2nd term in Eq.(13) is 0 if $\tilde{y}_i\,\mathbf{z}_i^T\boldsymbol{\beta} \ge 1$), $\boldsymbol{\beta}$ is usually sparse. Moreover, the function is continuously differentiable, which is a great advantage. Hence, in optimizing Eq.(13), we resort to Newton’s method. Note that the function is doubly differentiable. In particular, let us denote $sv = \{i : \tilde{y}_i\,\mathbf{z}_i^T\boldsymbol{\beta} < 1\}$ and the vector $\mathbf{z}_i$ as the $i$-th column of matrix $\mathbf{Z}$. The gradient of the objective can be written as follows:

$$\nabla f(\boldsymbol{\beta}) = \epsilon\,\boldsymbol{\beta} - \sum_{i \in sv} \tilde{y}_i\,\mathbf{z}_i\big(1 - \tilde{y}_i\,\mathbf{z}_i^T\boldsymbol{\beta}\big) \qquad\qquad (14)$$

in which the summation in the second term is applied to those $\mathbf{z}_i$’s for which $\tilde{y}_i\,\mathbf{z}_i^T\boldsymbol{\beta} < 1$. The Hessian is therefore:

$$\mathbf{H} = \epsilon\,\mathbf{I} + \sum_{i \in sv} \mathbf{z}_i\,\mathbf{z}_i^T \qquad\qquad (15)$$

At each iteration of Newton’s method, we update $\boldsymbol{\beta}$ to $\boldsymbol{\beta} - \gamma\,\mathbf{H}^{-1}\nabla f(\boldsymbol{\beta})$, where the learning rate $\gamma$ is found through the line search technique [9]. Upon the convergence of $\boldsymbol{\beta}$ (and thus also of $\mathbf{w}$), the final subnetworks that are used as the explanations for the exceptionality of $G_o$ can be identified via the non-zero entries of $\mathbf{w}$. For the outlier score of $G_o$, we follow a similar approach to [10], but compute it only in the subspace spanned by the explanatory subnetworks. The higher the score, the more $G_o$ deviates from its neighboring networks.
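A compact sketch of this Newton iteration for the primal objective of Eq.(13) (illustrative code under our notation: `Z` stores one column $\mathbf{z}_i$ per training point, `y_t` holds the $\pm 1$ labels, and a unit step replaces the line search for brevity):

```python
import numpy as np

def newton_primal(Z, y_t, eps, n_iter=20, tol=1e-8):
    """Minimize (eps/2)||b||^2 + 0.5 * sum_i max(0, 1 - y_i z_i^T b)^2
    with Newton steps, following Eqs.(14)-(15).

    Z   : (d, n) matrix whose i-th COLUMN is z_i
    y_t : (n,) labels in {-1, +1}
    """
    d, n = Z.shape
    beta = np.zeros(d)
    for _ in range(n_iter):
        margins = y_t * (Z.T @ beta)               # y_i z_i^T beta
        sv = margins < 1                           # active set of Eq.(14)
        grad = eps * beta - Z[:, sv] @ (y_t[sv] * (1 - margins[sv]))
        H = eps * np.eye(d) + Z[:, sv] @ Z[:, sv].T    # Eq.(15)
        step = np.linalg.solve(H, grad)
        beta = beta - step          # unit step; a line search refines this
        if np.linalg.norm(step) < tol:
            break
    return beta
```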

VI Analysis and Discussion

Algorithm Complexity: We name our algorithm ODesM, which stands for Outlier Detection with Subgraph Mining. Its complexity is briefly analyzed as follows. Searching for the neighboring networks and upsampling takes $O(Nm)$ per candidate, given $N$ as the number of network samples and $m$ as the number of nodes. The computation and inversion of $\mathbf{H}$ both depend on the number of non-zero entries in $\boldsymbol{\beta}$, which is significantly reduced after each iteration. Let $p$ denote that number; then computing $\mathbf{H}$ takes $O(p^3)$ due to the eigen-decomposition, and the inversion of $\mathbf{H}$ takes similar time. The checking step in Eq.(13)’s 2nd term takes $O(kp)$. Due to its reliance on Newton’s method, ODesM requires only a few iterations to reach its converged solution. As this whole process is applied to each network sample, the overall complexity is therefore $O\big(N(Nm + p^3)\big)$.

Convergence: It is straightforward to show that our Hessian matrix derived in Eq.(15) is positive semi-definite. For any given non-negative vector $\mathbf{q}$, we have $\mathbf{q}^T\mathbf{H}\mathbf{q} \ge 0$. This follows from the fact that $\epsilon\,\mathbf{I}$, the first term in Eq.(15), is a symmetric matrix whose quadratic form is always non-negative. Likewise, for the second term, each of its components has quadratic form $\mathbf{q}^T\mathbf{z}_i\mathbf{z}_i^T\mathbf{q} = (\mathbf{z}_i^T\mathbf{q})^2 \ge 0$, while the factor $\tilde{y}_i^2$ can be omitted as it always equals 1 by definition. These characteristics are of key importance since they collectively ensure the convexity of the objective function in Eq.(13), making our optimization procedure always converge to the global optimal solution. Also, note that our solution lies in the general family of quadratic programming solutions often used in both dual and primal SVMs. However, unlike the SVM, which works in the original data space, our algorithm works in the feature space. Nodes (features) in the final subnetworks can thus be loosely interpreted as the (support) vectors falling inside the discriminative margin. Hence, one can control the subnetwork sizes by adjusting $\epsilon$.

Parameter setting: Other than $\epsilon$, our algorithm requires two parameters to be set: $k$, determining the neighboring networks, and $\lambda$, measuring the impact of the network constraint. Without any prior knowledge regarding the network distribution, it is hard to choose the right values for both parameters, since outlier detection is an unsupervised learning problem. We therefore employ the best-effort approach that follows the strategy developed in [10, 37]. The essential idea is to try a range of parameter values, rather than a single value, and use an object-wise maximum ensemble to combine the final outlier scores. We set the range for $k$ similar to the one chosen in [10], and sweep $\lambda$ over a small grid of values. Regarding $k$, it is also noticed that, among the neighbors of an outlier candidate, there may exist other outliers, with the likelihood that they possess similar anomalous properties. In dealing with this case, one can either exclude the closest neighboring samples (under the assumption that such outliers are the closest neighbors) or increase the $k$ value. We have empirically tested both approaches and the results are quite similar. Indeed, since the number of outliers within a database is usually small, the probability of having one within the $k$ neighbors of a sample is usually low. The quality of the outlier detection and explanation is thus not much compromised and is still determined by the majority of regular neighbors.
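A sketch of the object-wise maximum ensemble (hypothetical wrapper; `odesm_score` is a placeholder for running the full pipeline on one candidate with fixed parameters):

```python
def ensemble_outlier_score(G_o, dataset, k_grid, lam_grid, odesm_score):
    """Best-effort parameter handling: run the detector over a grid of
    (k, lambda) settings and keep the maximum score per object.

    odesm_score : callable (G_o, dataset, k, lam) -> float
    """
    return max(odesm_score(G_o, dataset, k, lam)
               for k in k_grid for lam in lam_grid)
```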

VII Experiments

VII-A Methodology

We compare the performance of our algorithm ODesM against techniques from both the network and the high-dimensional literature. Specifically, it is compared against the following techniques: (1) NetSpot [7] without the temporal constraint, allowing it to uncover anomalous network regions from each individual network; (2) HiCS [22], which seeks outliers through contrast subspaces for high dimensional data; (3) ABOD [25], which discovers outliers via the variance of angles between vector triples; and (4) ODesM$_{w/o}$, a variant of our method that does not exploit network regularization. The parameter setting for ODesM and ODesM$_{w/o}$ follows the discussion in Section VI, while for NetSpot, we set the number of failures as suggested in [7]. For HiCS, we choose all settings as suggested by the authors [22] and adopt LOF as its core algorithm. ABOD is a parameter-free technique, so we use its exact version with a polynomial kernel of degree 2.

In evaluating algorithm performance, we use the well-established Receiver Operating Characteristic (ROC) curve, computed from the outlier ranking returned by an algorithm against the ground-truth labels of normal and outlying networks. A ROC curve visualizes the relationship between the true positive rate ($y$-axis) and the false positive rate ($x$-axis). When desired, this curve can be summarized numerically by a single value known as the area under the curve (AUC).
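Given the outlier scores and ground-truth labels, the ROC curve and AUC can be computed as in this minimal sketch (assuming scikit-learn; `labels` and `scores` are our illustrative names):

```python
from sklearn.metrics import roc_curve, roc_auc_score

def evaluate_ranking(labels, scores):
    """labels: 1 for ground-truth outlier networks, 0 for inliers;
    scores: the outlier degree assigned by an algorithm."""
    fpr, tpr, _ = roc_curve(labels, scores)    # x-axis: FPR, y-axis: TPR
    return fpr, tpr, roc_auc_score(labels, scores)
```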

VII-B CMUFace graph data

Fig. 1: An example of images from a person in the CMUFace graph data, where the first image is labeled as an outlier due to the sunglasses.

Since most network datasets (presented next) lack ground-truth subnetworks, we conduct an experiment on the CMUFace image data (http://archive.ics.uci.edu), since it allows us to evaluate the relevance of the uncovered subnetworks via visualization. Though images do not natively involve explicit network structures, analyzing them as graphs has been extensively studied and deemed advantageous [32]. In particular, it enables the discovery of local image properties, especially in the studies of image denoising and image forensics, where pixels can be missing or purposely tampered with. Following [32], we first down-sample the pixels of each image and construct a common network topology over the remaining pixels. Within each image, a pixel corresponds to a node and has edges connecting it to its 5 nearest pixels. The value associated with a node is the grey level of the corresponding pixel. In order to evaluate whether any method can deal with heterogeneity in the network dataset, we select all networks with open-eye images from each person as inliers, and randomly select one image with sunglasses from any of the 4 poses (straight/up/left/right) as an outlier (images from a random person are depicted in Fig.1). This results in a large set of regular network samples and a small set of anomalous ones, all sharing the same node and edge structure. Subnetworks extracted from the sunglasses areas therefore serve as the ground truths.

Fig. 2: ROC curve performance of all algorithms on identifying outlier networks from the CMUFace graph dataset.

Outlier identification: In Fig.2, we plot the ROC curves of all algorithms. As seen from this figure, both HiCS and ABOD, designed for high dimensional data, perform moderately well on this dataset. ABOD explores the variance over angles between an outlier candidate and every pair of other samples, so it targets global outliers deviating from a single distribution of inliers. For this dataset, however, we have multiple distributions. Such local outliers are harder to detect by relying solely on the variances of high dimensional vectors’ angles, which might explain the low success rate of ABOD. HiCS, on the other hand, while designed to find outliers based on contrast subspaces, also does not perform well on this dataset. HiCS attempts to find the most informative subspaces in a bottom-up fashion, starting with 2-dimensional ones from a pool of candidate subspaces. If such low dimensional subspaces are not well sampled, it becomes much harder to ensure that the highest-contrast subspaces will be found among the higher dimensional ones, because HiCS retains only 100 to 1000 subspaces in order to avoid the exponential complexity. NetSpot performs better than these two techniques by relying on the p-value defined at each node in order to explore significant anomalous regions. However, by converting to a p-value, NetSpot also removes the contrast among node values and thus is less successful in seeking the most promising seed nodes. Over all techniques, ODesM performs best, with its AUC reaching 0.84, compared to 0.78 obtained by the second-best ODesM$_{w/o}$. This large gap in AUC clearly confirms the key role of the network topology exploited by ODesM, which not only helps narrow down the search space of all subgraphs, but also drives convergence to the most explanatory subnetwork structures.

Fig. 3: Subnetworks selected by ODesM ((a)-(d)) and NetSpot ((e)-(f)) in the CMUFace network dataset (detailed explanation is given in the text). In each figure, the network topology is shown in grey and the selected subnetworks are shown in blue, while the corresponding full image is shown in the background with dimmed colors to improve visualization (best viewed in color).

Explanatory subnetworks: We further explore the set of subnetworks discovered by ODesM as the explanation for the top ranked outliers. Out of the top 20 anomalous networks, 8 are true outliers. We plot in Fig.3(a-c) the three top ranked networks that are truly labeled as outliers, together with their corresponding images from different poses. In each picture, the full image is shown in the background (with dimmed color to boost visualization), the entire network topology is plotted in grey, and the discovered explanatory subnetworks are colored in blue. As observed, despite coming from different poses, the outlier networks are still well identified, and the subnetworks located around the sunglasses are appropriately selected by ODesM. These discriminative substructures clearly explain, by visualization, why an anomalous network is exceptional compared to regular ones, even though they can vary across different outlier networks. We plot in Fig.3(d) a network sample that is also ranked high by ODesM but is not a true outlier according to the sunglasses labeling. However, its discovered substructures still reflect an exceptional property of this image: all subnetworks have been selected along the curve of the face. Such substructures are generally quite distinctive for each individual person.

Recall that ABOD, ODesM$_{w/o}$ and HiCS are not network-based techniques. While ABOD identifies outliers based on the variance of vector angles, ODesM$_{w/o}$ selects individual nodes and does not explore subnetworks. HiCS generates multiple subspaces for a single outlier candidate, and there is no obvious way to derive subnetworks from all of them. Hence, we select NetSpot for comparison based on its discovered anomalous subnetwork regions. In Fig.3(e-f), we plot two typical true outliers found among the top 20 networks ranked by NetSpot based on the anomalous scores of the selected subnetwork regions. It can be seen that, unlike with the subnetworks discovered by our method, it is hard to justify why the corresponding images are exceptional, even though the selected regions are strongly connected.

In both figures, substructures spanning the entire face have been selected. This behavior probably comes from the fact that, apart from the p-value, NetSpot also relies on the adjacency of network samples to derive the time interval in which significant anomalous regions can appear. However, once the interval is set to 1 (i.e., each individual network), it has limited information to justify the relevance of a network region, since there is no temporal development among network samples. Thus, the p-value computed at each node likely plays the key role, and as long as its values do not change abruptly, NetSpot tends to select all of the nodes, forming the large subnetwork structures shown in Fig.3(e-f). The patterns discovered by NetSpot and by our ODesM are thus fundamentally different. For this reason, we do not attempt to compare their uncovered subnetworks in the subsequent experiments.

VII-C Biological PPI network

Fig. 4: ROC curve performance of all algorithms on identifying outlier networks from the Liver gene network dataset.

The second dataset we use for evaluation is the human liver metastasis dataset [23], with the gene network derived from protein-protein interactions. The values associated with the nodes are gene expression values. The dataset was collected from 101 healthy subjects, viewed as inlying network samples, and 15 diseased subjects, labeled as outliers.

Outlier identification: We show in Fig.4 the ROC curves of all algorithms on the Liver dataset. The performance of our ODesM method is competitive with that of HiCS, and both are better than the remaining techniques. NetSpot also performs well on this dataset, as indicated by its AUC of 0.76, slightly better than ODesM$_{w/o}$. Recall that each network sample of this dataset also contains a large number of nodes. However, unlike the CMUFace graph data, where we have multiple data distributions (each representing images from an individual person), here we have only a single network distribution of healthy subjects. The outlier prediction rates of all techniques are thus not as diverse as those we have seen on the CMUFace graph dataset. Nonetheless, the results still indicate that our ODesM algorithm yields the highest outlier prediction rate.

Fig. 5: Subnetworks frequently discovered by ODesM in its top 15 network samples with the highest outlier scores. Shaded genes are related to liver metastasis.

Explanatory subnetworks: There are no obvious ground truths for the gene pathways (subnetworks) associated with liver cancer. However, as an attempt to investigate how relevant and explanatory the subnetworks discovered by ODesM are, we compute the most frequent subnetworks found in the top 15 ranked outlying networks. In Fig.5, we plot the 3 subgraphs with the highest frequency. The first subnetwork is found in 6 networks and, of these, 4 are anomalous networks. The second subnetwork is found in 4 networks, with 3 being true outliers. Within these two discovered subnetworks, the genes REG1A, MMP1, MMP2 and TIMP1 (shaded in Fig.5) are particularly interesting, since they agree with the ones found in [23] and have been reported to be involved in liver metastasis. The last subnetwork is found in 5 network samples, among which only one is a true outlier. Though the genes forming the above subnetworks are not all related to liver cancer, and not all diseased subjects are ranked at the top (7 true outliers are found in the top 15), an important observation from these results is that the frequent involvement of diseased genes in the discovered subnetworks can signal the presence of the disease. Moreover, since diseased subjects can suffer from different stages or subtypes of the cancer, the disease-related gene pathways can vary from one subject to another. These uncovered subnetworks thus do carry explanatory information to justify why an unhealthy subject is an outlier.

VII-D Road traffic networks

The last dataset we use for evaluation is LATraffic, the highway traffic network data of Los Angeles, California (http://pems.dot.ca.gov), during April 2011. LATraffic contains multiple network snapshots, each with 100 nodes and 128 edges. Each node in the network corresponds to a road segment, and its associated value is the average vehicle speed at 5-minute resolution. In generating outlier labels for the network samples, we rely on the distribution of the average speed computed for each snapshot. Specifically, 300 snapshots are randomly selected around the mean of this distribution and labeled as regular networks. Another 30 snapshots are randomly selected from the two extreme tails (15 from each) of this distribution and labeled as anomalous networks.

Fig. 6: ROC curve performance of all algorithms on identifying outlier networks from the LATraffic network dataset.

Outlier identification: The ROC curve performance of all algorithms on LATraffic is shown in Fig.6. For this relatively small network, HiCS handles the subspace candidates well, and its Monte-Carlo-sampling-based approach tends to select high contrast subspaces. Regarding the performance of NetSpot, recall that the dataset contains two types of outliers, one with high average speed and the other with low speed. By relying on the notion of network fraction in computing the p-value for each node, NetSpot may not be able to find both types of outliers. Among all examined techniques, ODesM is again the best performer, with its AUC score at 0.9. Deeper investigation of its outlier ranking further shows that ODesM predicts 16 of its top 20 networks as true outliers, and they come from both the low and high average speed tails.

Explanatory subnetworks: We further explore the set of subnetworks discovered by ODesM for its top ranking network snapshots. In Fig.7, we plot the uncovered subnetworks for the top four outlier networks. The networks in (a) and (d) are true outliers with low speed, while the ones in (b) and (c) are true outliers with high speed. The sets of discovered subnetworks in both cases are quite consistent. Taking a closer look at these explanatory substructures reveals an interesting point. We would expect the explanatory subnetworks for the two types of outliers to be different, since one was chosen from the low speed distribution while the other was selected from the high speed distribution. However, it turns out that they share one large subnetwork spanned by the nodes 11, 6, 9, 12 and 25. The common selection of this subnetwork in both kinds of outliers may suggest that this set of adjacent road segments is highly sensitive to traffic congestion. For monitoring purposes, these road segments should be the top candidates to select, since they are likely to reflect the overall condition of the entire traffic network.

Fig. 7: Top 4 outlier networks discovered by ODesM from the LATraffic. Two networks shown in (a) and (d) are from the low speed distribution while the two shown in (b) and (c) are from the high speed distribution. Road segments involved in the explanatory subnetworks are shaded.

VII-E Impact of parameters

ODesM requires three parameters to be set: $k$, determining the number of network neighbors; $\lambda$, deciding the influence of the network topology; and $\epsilon$, controlling the discovered subnetwork size. As discussed in Section VI, we select a range of values for $k$ and $\lambda$ and apply the best-effort approach [10, 37] to compute the outlierness of each network sample. In this experiment, we thus only report the impact of varying $\epsilon$ on the performance of outlier detection. Since our three datasets are vastly different in network size, we use specific $\epsilon$ values to limit the size of the selected subnetworks. In Fig.8, we report the AUC performance when varying the total number of nodes in the discovered subnetworks between 10 and 100. For the LATraffic network data, we do not consider subnetwork sizes larger than 20, since the whole network has only 100 nodes.

A general trend can be observed from Fig.8. As the total number of nodes in the subnetworks becomes larger, the outlier detection rate tends to increase. However, for the Liver and CMUFace datasets, when the discovered subnetworks exceed 70 nodes, the outlier detection rates decrease. This might happen because allowing larger subnetworks can pull in irrelevant substructures, which leads to a higher rate of false positive predictions.

VIII Related Work

Outlier detection from network data can generally be divided into two categories: approaches addressing plain networks [14, 3] and approaches focusing on attributed networks [16, 28]. In the first category, only information about the network topology is available, and most studies adopt structure-based [14, 3] and community-based methods [33] to spot nodes or small groups of nodes that have abnormal connectivity patterns. In the second category, attributes associated with nodes and edges are also available; discovering outlying patterns therefore involves seeking not only abnormal connectivity structure but also coherence of network attributes [16, 28]. Network properties like normality [27], conductance [5] and Oddball features [3] are often employed to quantify the internal consistency and external separability (collectively, the anomalous degree) of a set of nodes (local communities). Most of these studies focus on searching for outlying patterns within a single network, which contrasts with our work addressing the more general setting of multiple networks. Several recent studies [11, 1, 17, 7] developed for dynamic networks are closer to ours. In [11], the authors present 6 types of community-based outliers, including shrink, grow, merge, split, born and vanish. These types of anomalous communities can be identified by tracking the evolution of communities over time. In [1], the temporal distribution of the number of messages exchanged in a social network (like Twitter) is used as a means to detect abnormal events. More specifically, if the fraction of edges added to a community within the current time window is significantly larger than in the previous one, this can signal that a special event has occurred within that community. The authors in [8] introduce the novel problem of mining a heaviest dynamic subgraph (HDS) in a time-evolving network. The problem is shown to be NP-hard, and a heuristic algorithm named MEDEN is developed based on the filter-and-verify framework. This study was recently extended in [7] to the NetSpot technique, which enables the mining of multiple HDSs. NetSpot approximates HDSs via a local search approach and alleviates local optima by exploring a large range of neighborhood searches [7]. Other studies [2] monitor global network parameters/probabilities to detect events/changes, while those developed in [17] attempt to spot anomalous nodes and edges; they are thus less relevant to our work. In contrast, we do not focus on searching for outlying patterns within a single dynamically evolving network but across multiple network samples. Moreover, our focus is on discovering outliers as entire network samples while localizing subnetworks to explain why such network samples are exceptional.

Fig. 8: The AUC performance of ODesM when varying the subnetwork sizes between 10 and 100 nodes. For LATraffic, the subnetwork size is limited to 20 nodes since the entire network is small, with only 100 nodes in total.

Outlier detection in high dimensional spaces [37] can also be conceptually related to our work. Two popular approaches to this problem are subspace sampling [22] and subspace projection [24]. Techniques based on subspace sampling generally assume that outliers only show up in low dimensional subspaces, and that such subspaces can be discovered via sampling combined with relevant statistical tests. In contrast, methods based on space transformation directly search for a single subspace, often a linear combination of all original features, that maintains certain properties of the data, e.g., its variance. Outliers can then be found in this induced low dimensional subspace. Though these techniques are effective in ranking and finding anomalous objects, directly applying them to network data often lacks domain relevance, since the nature of the mutual interaction among network entities is completely ignored. Additionally, while a novel subspace is effective for computing outlier scores, it provides little qualitative explanation for each individual outlier.

IX Conclusions

In this paper, we addressed an important problem of identifying and explaining outlier network samples. A novel algorithm was developed to identify subnetworks that discriminate outlier networks from their neighboring regular network samples. The algorithm was designed in the framework of network regression combined with the constraint on the network topology and the L1-norm shrinkage to perform subnetwork discovery. Our algorithm thus goes beyond both subspace learning and subgraph discovery methods by directly learning the most discriminative subnetworks to justify the exceptional properties of an anomalous network. Evaluation on various real-world network datasets demonstrated that our novel algorithm not only outperformed existing techniques, but also uncovered highly relevant and interpretable local subnetworks.

As future work, we would like to extend our research to handle databases with very large networks. Obviously, directly applying ODesM might not be highly scalable, as analyzed in Section VI. To deal with very large networks, we could apply network compression [35], which allows us to summarize both the network topology and the signals on the nodes. This is equivalent to representing a large network at different scales/resolutions. The open research issues are therefore: (i) How can we trade off the size of the compressed networks, in exchange for scalability, against the quality of outlier detection? (ii) How can we ensure that the most exceptional information (explaining an outlier network) is not compromised by such a compression approach?

References

  • [1] C. C. Aggarwal and K. Subbian. Event detection in social streams. In SDM, 2012.
  • [2] C. C. Aggarwal, Y. Zhao, and P. S. Yu. Outlier detection in graph streams. In ICDE, pages 399–409, 2011.
  • [3] L. Akoglu, M. McGlohon, and C. Faloutsos. oddball: Spotting anomalies in weighted graphs. In PAKDD, 2010.
  • [4] L. Akoglu, H. Tong, and D. Koutra. Graph based anomaly detection and description: a survey. DMKD, 2015.
  • [5] R. Andersen, F. R. K. Chung, and K. J. Lang. Local graph partitioning using pagerank vectors. In FOCS, 2006.
  • [6] G. Batista et al. A study of the behavior of several methods for balancing machine learning training data. ACM SIGKDD Explorations Newsletter, 6(1), 2004.
  • [7] P. Bogdanov et al. Netspot: Spotting significant anomalous regions on dynamic networks. In SDM, 2013.
  • [8] P. Bogdanov, M. Mongiovì, and A. K. Singh. Mining heavy subgraphs in time-evolving networks. In ICDM, 2011.
  • [9] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
  • [10] M. M. Breunig et al. LOF: identifying density-based local outliers. In SIGMOD, 2000.
  • [11] Z. Chen, W. Hendrix, and N. F. Samatova. Community-based anomaly detection in evolutionary networks. JIIS, 2012.
  • [12] X. H. Dang et al. Discriminative features for identifying and interpreting outliers. In ICDE, 2014.
  • [13] X. H. Dang, A. K. Singh, P. Bogdanov, H. You, and B. Hsu. Discriminative subnetworks with regularized spectral learning for global-state network data. In ECML, 2014.
  • [14] Q. Ding et al. Intrusion as (anti)social communication: characterization and detection. In KDD, 2012.
  • [15] W. Eberle and L. B. Holder. Discovering structural anomalies in graph-based data. In ICDM workshop, 2007.
  • [16] J. Gao et al. On community outliers and their efficient detection in information networks. In KDD, 2010.
  • [17] M. Gupta, J. Gao, Y. Sun, and J. Han. Integrating community matching and outlier detection for mining evolutionary community outliers. In KDD, 2012.
  • [18] T. Hastie et al. The Elements of Statistical Learning. Data Mining, Inference, and Prediction. Springer, 2009.
  • [19] K. Henderson et al. It’s who you know: graph mining using recursive structural features. In KDD, 2011.
  • [20] C. Hsieh et al. A dual coordinate descent method for large-scale linear SVM. In ICML, 2008.
  • [21] S. Keerthi, O. Chapelle, and D. DeCoste. Building support vector machines with reduced classifier complexity. JMLR, 2006.
  • [22] F. Keller, E. Müller, and K. Böhm. Hics: High contrast subspaces for density-based outlier ranking. In ICDE, 2012.
  • [23] D. H. Ki et al. Whole genome analysis for liver metastasis gene signatures in colorectal cancer. Int J Cancer, 2007.
  • [24] H. Kriegel, P. Kröger, E. Schubert, and A. Zimek. Outlier detection in arbitrarily oriented subspaces. In ICDM, 2012.
  • [25] H. Kriegel, M. Schubert, and A. Zimek. Angle-based outlier detection in high-dimensional data. In SIGKDD, 2008.
  • [26] B. Micenková, R. T. Ng, X. H. Dang, and I. Assent. Explaining outliers by subspace separability. In ICDM, 2013.
  • [27] B. Perozzi and L. Akoglu. Scalable anomaly ranking of attributed neighborhoods. In SDM, 2016.
  • [28] B. Perozzi et al. Focused clustering and outlier detection in large attributed graphs. In KDD, 2014.
  • [29] K. Scheinberg. An efficient implementation of an active set method for SVMs. JMLR, 2006.
  • [30] M. W. Schmidt, G. Fung, and R. Rosales. Fast optimization methods for L1 regularization: A comparative study and two new approaches. In ECML, 2007.
  • [31] B. Scholkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge, MA, USA, 2001.
  • [32] D. I. Shuman et al. The emerging field of signal processing on graphs. IEEE Signal Process. Mag., 2013.
  • [33] J. Sun et al. Neighborhood formation and anomaly detection in bipartite graphs. In ICDM, 2005.
  • [34] J. Taylor and S. Sun. A review of optimization methodologies in support vector machines. Neurocomput., 2011.
  • [35] Y. Tian et al. Efficient aggregation for graph summarization. In SIGMOD, 2008.
  • [36] Q. Zhou et al. A reduction of the elastic net to SVM with an application to GPU computing. In AAAI, 2015.
  • [37] A. Zimek et al. A survey on unsupervised outlier detection in high-dimensional numerical data. Statistical Analysis and Data Mining, 2012.