I Introduction
Multivariate anomaly detection may be categorized broadly into supervised and unsupervised detection. In supervised anomaly detection, training data are labeled by domain experts as normal or anomalous, and a model is trained to classify future observations. In unsupervised anomaly detection, which is the focus of this article, labels are not known, because labeling is too difficult or costly. The goal is to approximately recover the missing expert judgements using empirical characteristics of the data. The data themselves typically first undergo a feature selection and feature engineering process to devise informative covariates. An unsupervised model can be evaluated by comparing its predictions with actual domain expert labels. Potential applications include intrusion detection, fraud detection and process control.
Frequency is commonly chosen as the target criterion for unsupervised anomaly detection. The population definition of anomalous observations then is $\{x : f(x) \le \alpha\}$, where $f$ is the data-generating density and $\alpha$ is a user-selected threshold. Methods that exactly or approximately fall under this paradigm include density estimators and the closely related nearest neighbor approaches, among many others; for a review of commonly used anomaly detection methods, see [1].
However, frequency may not align well with expert judgements in some applications. For example, scraping (the automated collection of information from websites) may occur frequently, but it nevertheless constitutes anomalous user behavior. The performance of common approaches to unsupervised anomaly detection may suffer in the presence of such frequently occurring anomalies.
We propose a framework, which we call relative anomaly detection, to better handle cases where anomalies occur frequently. We use the term relative to emphasize that in this framework the anomaly of an observation is determined by taking into account not only its own location and that of neighboring observations, but also the location of the most typical system states. The underlying assumption in relative anomaly detection is that large clusters of high-density system states are indeed normal from an expert's perspective, and that observations that are far from these most typical system states are anomalous. Such anomalies may occur frequently.
The rest of this paper is organized as follows. In Section II, we discuss the approach to anomaly detection of [2], which is closely related to the PageRank algorithm [3]. We discuss the similarity graph of the observations in the training data set. We show connections with other approaches to anomaly detection, and discuss their shortcomings in the presence of anomalies that occur frequently. In Section III, we introduce two novel relative anomaly detection approaches. In Section IV, we compare our approaches with that of [2], using data sets of potential scraping attempts and WiFi usage from Google, Inc. We conclude in Section V.
II Many approaches to anomaly detection target a frequency criterion
In this section we show that the anomaly detection approach of [2], which is similar to the PageRank method [3], approximately targets the frequency criterion. We show that it is also closely related to kernel density estimation and the nearest neighbor approach. We begin by introducing the similarity graph, which will also serve as a basis for the relative anomaly detection approaches we develop in Section III.
II-A Similarity graph
The relationship between unlabeled observations in a data set may be described through a weighted similarity graph. Observations form the nodes of the graph, and the weight of an edge expresses the similarity between two observations. Two observations $x_i$ and $x_j$ are typically considered similar when their distance $\delta(x_i, x_j)$ is small. However, non-monotonic transformations can be useful with time series data, to take into account periodic behavior of the underlying system; for a reference on such transformations, see [4, Chapter 4]. A common monotonic transformation from distance to similarity uses the kernel function

(1) $k(x_i, x_j) = \exp\!\left(-\frac{\delta(x_i, x_j)^2}{h}\right),$

which is symmetric in its arguments. The parameter $h > 0$ controls the degree of localization, meaning how far one observation can lie from another observation for the two to still be considered similar. As $h \to \infty$, all observations become equally similar to $x_i$, and as $h \to 0$ only $x_i$ remains similar to itself. More localization is needed when the data come from a complicated distribution. The resulting matrix of similarities, $K$, with $K_{ij} = k(x_i, x_j)$, holds the edge weights in the similarity graph. In methods that apply the "kernel trick," such as the support vector machine and kernel principal components analysis, such a similarity matrix is called the kernel matrix.
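As a concrete illustration, the following minimal sketch constructs such a kernel matrix with NumPy; the function name `similarity_matrix` and the bandwidth argument `h` are our own notation, not from the original text:

```python
import numpy as np

def similarity_matrix(X, h):
    """Gaussian-kernel similarity matrix with entries exp(-||x_i - x_j||^2 / h)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T  # pairwise squared Euclidean distances
    np.maximum(d2, 0, out=d2)                     # guard against tiny negative round-off
    return np.exp(-d2 / h)
```

The resulting matrix is symmetric with unit diagonal, as required of the edge weights described above.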
Common choices for the distance $\delta(x_i, x_j)$ between two real data points are the Euclidean ($\ell_2$) and Manhattan ($\ell_1$) distances. Both of these distance measures assume that each dimension of the data has been appropriately normalized. Euclidean distance has the advantage of being rotation invariant, and the order of the resulting distances typically remains meaningful even in high dimension [5]. Furthermore, data points often approximately lie in a lower-dimensional subspace; then Euclidean distance calculations are effectively carried out in the lower-dimensional subspace. For very high-dimensional problems, Manhattan distance may be preferred over Euclidean distance [6]. However, if the data truly cover the high-dimensional space, that means that the system components are barely correlated, even after feature selection and feature engineering. Then a multivariate anomaly analysis may add only little value compared to running separate univariate analyses. If variables are measured on a nominal or ordinal scale, they may be converted into numerical data using dummy variables, or specialized distance measures for that scale level can be used; for a reference, see [7, Chapter 14].
II-B A random walk approach, its relationships with other methods, and problems in the presence of frequently occurring anomalies
The approach of [2] proposes to take a random walk on the similarity graph, and to label an observation as anomalous when the stationary probability of the random walk at that observation is low. For cases where the similarity matrix is not irreducible and aperiodic, random restarts are introduced into the random walk, as was proposed as part of the PageRank algorithm [3]. In the case that the similarity matrix is irreducible and aperiodic, which we will assume in the following to keep technical discussions at a minimum, the matrix of transition probabilities in the graph is simply the similarity matrix normalized by row,

(2) $P = \operatorname{diag}(K\mathbf{1})^{-1} K,$

where $\mathbf{1}$ is a column vector of ones. The vector of stationary (unnormalized) probabilities, $\pi$, follows from the stationarity condition $\pi^\top P = \pi^\top$ as the dominant left-eigenvector of $P$ by the Perron–Frobenius theorem; see [8] for a reference.
We now show that the approach of [2] is closely related to both a density-based and a distance-based approach to anomaly detection. To see the connection with density-based anomaly detection, consider the case when $K$ is symmetric; then the dominant left-eigenvector of $P$ is, up to scaling, $d = K\mathbf{1}$. This follows from plugging in $d^\top$ for $\pi^\top$ in the stationarity condition, and using that $P = \operatorname{diag}(d)^{-1} K$, with $K = K^\top$, which yields the true statement $\mathbf{1}^\top K \operatorname{diag}(d)^{-1} K = \mathbf{1}^\top K$. We see that the stationary probability at observation $x_i$ is proportional to its (weighted) vertex degree $d_i$ in the similarity graph, where

(3) $d_i = \sum_{j=1}^{n} k(x_i, x_j).$

Expression (3) is proportional to a kernel density estimate with Gaussian kernel, whose kernel covariance matrix is diagonal with all diagonal elements equaling $h/2$. As a density estimate, $d_i$ is typically misspecified, because the kernel matrix is not tuned to fit the particular data-generating process. This may actually be desired in anomaly detection problems, where a low-density observation close to a very typical system state does not make for an interesting anomaly. However, the close connection with kernel density estimation suggests that if anomalous system states occur too frequently, they may not be labeled correctly as anomalies, even if they are far from the most typical system states.
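The proportionality between stationary probabilities and vertex degrees can be checked numerically; the following small self-contained sketch (the sample data and constants are our own construction) verifies it by power iteration:

```python
import numpy as np

# Numerical check: for symmetric K, the stationary distribution of the
# row-normalized random walk P equals the normalized vertex degrees.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
sq = np.sum(X**2, axis=1)
d2 = np.maximum(sq[:, None] + sq[None, :] - 2 * X @ X.T, 0)
K = np.exp(-d2 / 2.0)               # kernel of Eq. (1) with h = 2
d = K.sum(axis=1)                   # vertex degrees, Eq. (3)
P = K / d[:, None]                  # row-normalized transition matrix, Eq. (2)

pi = np.full(50, 1.0 / 50)
for _ in range(2000):               # power iteration on the left eigenproblem
    pi = pi @ P
assert np.allclose(pi, d / d.sum(), atol=1e-8)
```

The final assertion holds because $d^\top P = d^\top$ whenever $K$ is symmetric, as shown above.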
To also see the connection with distance-based anomaly detection, consider a directed $k$-nearest-neighbor graph instead of a fully connected similarity graph. Here the $(i,j)$th element of the similarity matrix takes value $k(x_i, x_j)$ if $x_j$ is in the set $N_k(x_i)$, which contains the $k$ nearest neighbors of $x_i$, and it is zero otherwise. The resulting similarity matrix is a sparse approximation of the full similarity matrix. The additional tuning parameter $k$ controls the degree of localization. Localization via the nearest neighbor graph is also used in spectral clustering, manifold learning, and local multidimensional scaling; for a reference, see [7, Chapter 14]. Consider a linear expansion of the radial kernel function, defined in Equation (1), around some distance level $\delta_0$. Then $d_i$ is approximately an affine decreasing function of the average distance to the $k$ nearest neighbors:

(4) $d_i \approx c_1 - c_2 \, \frac{1}{k} \sum_{x_j \in N_k(x_i)} \delta(x_i, x_j),$

with constants $c_1, c_2 > 0$ that depend on $h$ and $\delta_0$. This effectively eliminates the dependency on the kernel parameter $h$. Using the average distance to the $k$ nearest neighbors as a measure of anomaly was suggested in both [9] and [10]. However, for relative anomalies, the average distance to the $k$ nearest neighbors can be small, and what is an anomalous system state may then not be considered anomalous by the anomaly detection model.
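The distance-based score just described, the average distance to the $k$ nearest neighbors, can be sketched as follows; the function name is our own, and the brute-force pairwise computation is for illustration only:

```python
import numpy as np

def knn_avg_distance(X, k):
    """Average Euclidean distance from each point to its k nearest neighbors."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)                # exclude each point's distance to itself
    nn = np.sort(np.sqrt(d2), axis=1)[:, :k]   # k smallest distances per row
    return nn.mean(axis=1)
```

A point far from all others receives a large score; a point inside a dense cluster receives a small one, including points inside a dense cluster of frequently occurring anomalies, which is exactly the failure mode discussed above.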
III Detecting relative anomalies
Approaches to unsupervised anomaly detection that target the frequency criterion may not perform well in the presence of frequently occurring anomalies, as discussed in the previous sections. We now introduce two anomaly detection models that take into account the location of the most typical observations when determining how anomalous a new observation is. Both of these methods have the advantage that they provide a quantitative ordering of the data points in terms of how anomalous they are. We also investigate relationships and differences with other approaches to anomaly detection, especially the approach of [2], which we discussed in Section II-B.
III-A Popularity approach
We propose to consider a "random walk" between nodes based on the unnormalized similarity matrix $K$, instead of the transition probability matrix $P$ considered in Section II-B. From

(5) $k(x_i, x_j) = d_i \, p_{ij},$

we see that the similarity between two nodes $x_i$ and $x_j$ factors into the transition probability $p_{ij}$ and the vertex degree $d_i$ of $x_i$. This has the effect that the random walk weakens when transitioning through nodes whose vertex degree is medium or small, and that it strengthens when passing through nodes of high vertex degree. We label an observation $x_i$ as anomalous if its relative anomaly,

(6) $r_i = v_i, \quad \text{where } v^\top K = \lambda_1 v^\top,$

is small, where $v$ is the dominant left-eigenvector of $K$. This eigenvector is unique with all elements positive by the Perron–Frobenius theorem.
We can gain further insight into this algorithm via a connection with the network analysis literature. [11] considers a network of persons, where each person rates each other person as popular or not. The goal is to determine an overall popularity score for each person, based on the pairwise ratings. [11] suggests that a measure of overall popularity of person $i$ should depend not only on how many people in the network deem that person to be popular, but also on whether those people are themselves popular. This leads to the eigenproblem $v^\top A = \lambda v^\top$, where $v_i$ is the overall popularity of person $i$, and $A_{ji}$ takes value one if person $j$ considers person $i$ popular. A person is labeled as overall popular when its entry in the dominant left-eigenvector of the adjacency matrix, called the eigenvector centrality, is large. We see that by measuring anomaly using (6) instead of (3), how anomalous an observation is depends not only on how many other observations are close, but also on whether these other observations themselves have close neighbors. As a result, high vertex degree observations that are sufficiently far from many other observations in the similarity graph will be labeled anomalous. Asymptotically, the leading eigenvector of a kernel matrix converges to the leading eigenfunction, $\phi$, in the following eigenproblem [12]:

(7) $\int k(x, y)\, f(y)\, \phi(y)\, \mathrm{d}y = \lambda\, \phi(x).$

Here $f$ is the data density, and $\lambda$ is the eigenvalue that corresponds to $\phi$. We see that, asymptotically, the popularity of an observation $x$ is high if values $y$ that are close to $x$ have high density and are popular themselves. Here the size of the surrounding of $x$ is determined by the choice of $h$.
The power method can be used to find the dominant left-eigenvector of $K$. This iterative method starts from a random initialization, $v^{(0)}$, and then follows the recurrence relation $v^{(t+1)} = K v^{(t)} / \lVert K v^{(t)} \rVert$. The convergence is geometric, with ratio $\lambda_2 / \lambda_1$, where $\lambda_1$ and $\lambda_2$ denote the first and second dominant eigenvalues of $K$, respectively. We find that the error typically becomes small after just a few iterations. This computation is highly parallelizable.
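The power method for the relative anomaly scores of Equation (6) can be sketched as follows; the function name and iteration count are our own choices:

```python
import numpy as np

def relative_anomaly_scores(K, n_iter=100):
    # Dominant eigenvector of the symmetric kernel matrix K by power iteration;
    # small entries of v indicate relative anomalies (Eq. (6)).
    v = np.ones(K.shape[0])
    for _ in range(n_iter):
        v = K @ v                      # K symmetric: left and right eigenvectors coincide
        v /= np.linalg.norm(v)
    return v
```

Because $K$ has strictly positive entries, the iterates stay positive and converge to the Perron eigenvector, so the smallest entries mark the most relatively anomalous observations.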
We find that typically more than half of the smallest elements of the kernel matrix can be set to a small constant (allowing sparse matrix computations and hence a speedup of more than a factor of two) without changing the rank order of the relative anomaly values. Furthermore, for high-dimensional problems, we can obtain good starting values for the power iteration as follows. [13] show that $K$ can be approximated by $ZZ^\top$, where $Z$ is a matrix of random Fourier features calculated from the original data. If we choose only a small number of Fourier features compared to the sample size, then $ZZ^\top$ has low rank, and we can cheaply find an approximation to the leading eigenvector of $K$ as $Zu$. Here $u$ denotes the leading eigenvector of $Z^\top Z$; it can again be found using the power iteration. In our experiments, this approach reduces the run time until the leading eigenvector of $K$ is found by one fourth.
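A sketch of the random Fourier feature construction of [13] for the kernel in Equation (1); the function name and the frequency scaling, which assumes that kernel's parameterization (Gaussian with variance $h/2$), are our own:

```python
import numpy as np

def random_fourier_features(X, D, h, seed=0):
    # Random Fourier features [13] for k(x, y) = exp(-||x - y||^2 / h),
    # a Gaussian kernel with variance h / 2, so frequencies W ~ N(0, (2/h) I).
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(0.0, np.sqrt(2.0 / h), size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)   # K is approximated by Z @ Z.T
```

With $D$ much smaller than the sample size, one can run the power iteration on the $D \times D$ matrix $Z^\top Z$ and map its leading eigenvector $u$ back to $Zu$, as described above.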
It is computationally expensive to retrain the model with every new observation. Furthermore, it may not even be desired to update the model in the presence of every new observation, because that new observation may come from a different, anomalous data-generating process. We propose to instead determine the relative anomaly of a new observation $x^*$ with respect to the observations in the training data set as follows. Recall that the left-eigenproblem of $K$ is $v^\top K = \lambda_1 v^\top$, from which we see that $v_i = \frac{1}{\lambda_1} \sum_{j=1}^{n} k(x_i, x_j)\, v_j$. We can use this relation to predict the relative anomaly of a new observation $x^*$, based solely on training data, as

(8) $r(x^*) = \frac{1}{\lambda_1} \sum_{i=1}^{n} k(x^*, x_i)\, v_i.$

This can be viewed as an application of the Nyström method to approximate the leading eigenvector of the extended kernel matrix; for a reference on the Nyström method, see [14].
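The out-of-sample score of Equation (8) reduces to a single kernel evaluation against the training set; a minimal sketch, with our own function name, assuming the kernel of Equation (1):

```python
import numpy as np

def score_new_observation(x_new, X_train, v, lam, h):
    # Relative anomaly of a new observation via Eq. (8):
    # r(x*) = (1 / lambda_1) * sum_i k(x*, x_i) v_i  (Nystroem extension).
    k_new = np.exp(-np.sum((X_train - x_new) ** 2, axis=1) / h)
    return k_new @ v / lam
```

By construction, scoring a training point reproduces its entry of the eigenvector exactly, since $K v = \lambda_1 v$.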
III-B Shortest path approach
We also propose an approach to relative anomaly detection based on highest-similarity paths. The idea is to first identify those observations that can be considered very typical, and then to label an observation as anomalous if it is difficult to reach it from any of the typical observations. Here we interpret an element $k(x_i, x_j)$ as a "connectivity" value between nodes $x_i$ and $x_j$. We use the following two-step approach:

1. Consider as highly normal those observations whose vertex degree is higher than that of $\beta$ percent of the observations in the training data set. For each observation $x_i$, we can express this as $\hat{F}_d(d_i) > \beta / 100$, using the empirical cumulative distribution function of vertex degrees in the training data set, $\hat{F}_d$. Note that by choosing the kernel bandwidth large enough we can smooth out local peaks in the data density, such that indeed the observations with the highest vertex degrees can be considered normal.

2. Now, for each observation $x_i$ that is not considered highly normal, find the length of the best-connected path from it to any of the observations deemed normal:

(9) $\max_{y_1, \ldots, y_m} \prod_{l=1}^{m-1} k(y_l, y_{l+1}), \quad y_1 = x_i, \; y_m \text{ highly normal}.$

Alternatively, solve the equivalent shortest path problem

(10) $s_i = \min_{y_1, \ldots, y_m} \sum_{l=1}^{m-1} -\log k(y_l, y_{l+1}), \quad y_1 = x_i, \; y_m \text{ highly normal}.$

Then label $x_i$ as anomalous if $s_i$ is large; here $s_i = 0$ if $x_i$ is among the observations considered highly normal, and the solution to the shortest path problem otherwise. This shortest path problem can be solved more efficiently when considering a sparsified version of $K$, for example by applying a directed $k$-nearest-neighbor truncation.
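The two steps above can be sketched as follows, using SciPy's Dijkstra routine for the shortest path computation. This is our own construction: the helper name is ours, `beta` here denotes the fraction of highest-degree points treated as highly normal (a simplification of the percentile parameter in the text), and the edge weights follow Equation (11), $-\log k(x_i, x_j) = \delta(x_i, x_j)^2 / h$:

```python
import numpy as np
from scipy.sparse.csgraph import dijkstra

def shortest_path_anomaly(X, h, beta=0.2):
    """Relative anomaly as shortest -log-similarity path to a highly normal node."""
    sq = np.sum(X**2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2 * X @ X.T, 0)
    deg = np.exp(-d2 / h).sum(axis=1)                        # vertex degrees, Eq. (3)
    normal = np.argsort(deg)[-max(1, int(beta * len(X))):]   # highest-degree nodes
    dist = dijkstra(d2 / h, indices=normal)                  # edge weights per Eq. (11)
    s = dist.min(axis=0)                                     # best path from any normal node
    s[normal] = 0.0                                          # normal nodes get zero anomaly
    return s
```

Because squared distances do not satisfy the triangle inequality, multi-hop paths through dense regions can be shorter than a direct jump, which is precisely what makes the approach favor paths through high-density regions.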
An advantage of this approach is that the tuning parameter $\beta$ allows controlling the number of data points considered typical. Several central regions of the data may emerge for a larger value of $\beta$. A disadvantage is the higher computational complexity of the shortest path problem, which may, however, be reduced through subsampling.
We can gain further insight into this approach when it is used with the kernel function in (1). Then the path length in (10) becomes

(11) $s_i = \min_{y_1, \ldots, y_m} \frac{1}{h} \sum_{l=1}^{m-1} \delta(y_l, y_{l+1})^2.$

We see that the squared distance between two observations discourages large jumps, and thereby paths through high-density regions are encouraged. While the tuning parameter $h$ does not influence the comparison between two path lengths, since it enters only as a multiplicative constant, it does influence the determination of the highly normal observations in (10). A larger value for $h$ means that the bandwidth in the vertex degree estimator is higher, thereby smoothing the density more, which can be used to smear away small clusters of frequently occurring anomalies.
III-C Normalization
A relative anomaly measure $t_i$ can be transformed into a degree of anomaly in $[0, 1]$ for each observation using the empirical distribution function $\hat{F}_t$ of the anomaly measures in the training data:

(12) $a_i = \hat{F}_t(t_i).$
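For scores without ties, the empirical-CDF transform of Equation (12) is simply a rank transform; a minimal sketch, with our own function name:

```python
import numpy as np

def degree_of_anomaly(scores):
    # Empirical-CDF transform of Eq. (12): maps raw anomaly scores to (0, 1],
    # with larger scores mapped to values closer to 1.
    ranks = scores.argsort().argsort()
    return (ranks + 1) / len(scores)
```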
III-D Determining largest univariate deviations
Once an anomalous state $x^*$ is identified, we can determine which univariate features deviate most from what is normal as follows:

1. Find the normal observation in the data set that is closest to the anomalous observation:

(13) $x^\circ = \operatorname*{arg\,min}_{x_j :\, a_j \le \tau} \lVert x^* - x_j \rVert_1.$

The threshold $\tau$ determines how large the anomaly of $x_j$ may be for it to still be considered normal. Here it may be useful to use the $\ell_1$ distance to judge discrepancy, because the suggested change will then be large in a few dimensions, unlike with the $\ell_2$ distance, which will suggest smaller changes in many dimensions.

2. Calculate $x^* - x^\circ$; the largest elements of this vector difference show which univariate components need to be altered for the system to revert to a normal state.
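The two steps above can be sketched as follows; the function name is ours, and the $\ell_1$ distance is used as discussed in step 1:

```python
import numpy as np

def largest_deviations(x_anom, X, a, tau):
    # Step 1: l1-nearest normal observation (Eq. (13)); observations with
    # degree of anomaly a_j <= tau are treated as normal.
    normal = X[a <= tau]
    j = np.argmin(np.abs(normal - x_anom).sum(axis=1))
    # Step 2: feature indices ordered by the size of the required change.
    delta = x_anom - normal[j]
    return normal[j], np.argsort(-np.abs(delta))
```

The returned index order can be read as a ranking of which univariate components to alter first to revert the system to a normal state.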
IV Application
We compare the relative anomaly detection approaches introduced in Section III to the vertex degree anomaly detection approach discussed in Section II-B, using two data sets from Google of 1,000 data points each. We preprocess each covariate $x$ using the Box–Cox transform [15],

(14) $x^{(\lambda)} = \begin{cases} (x^\lambda - 1)/\lambda, & \lambda \neq 0, \\ \log x, & \lambda = 0, \end{cases}$

to reduce skew and normalize kurtosis; special cases of this transform are the logarithmic and square-root transforms. We find the parameters $\lambda$ as those maximizing the normal log-likelihood of the data. We then standardize the data and form a fully connected similarity graph using the radial basis kernel.
IV-A Potential scraping data
The first data set contains information about potential scraping attempts. Scraping is the automated collection of information from websites. The two covariates are experimental features that measure aspects of user behavior for each access log.
In Figure 1 we show the anomaly detection results using the vertex degree approach of Section II-B, which targets the frequency criterion, with a fixed kernel bandwidth $h$. Here and in the following, the lighter the shade of grey, the higher the respective region's detected degree of anomaly. The top twenty percent of detected anomalies are emphasized. However, domain experts have identified that the observations in the diffuse cluster on the right exhibit behavior that is typical of scrapers. As a result, there are false positives surrounding the very high density area of normal users, and observations in the diffuse cluster on the right are false negatives.
The results for the popularity approach to relative anomaly detection, introduced in Section III-A, are shown in Figure 2. We use a smaller bandwidth here, because we find that the relative anomaly approach generally requires less smoothing than the vertex degree approach. The results are not very sensitive to the exact choice of $h$; lowering $h$ in the vertex degree approach, by contrast, would result in a significant increase in the number of false positives and false negatives. There are no false positives or false negatives, as compared with the expert judgement.
It is extremely laborintensive—potentially even impossible—to assess with certainty whether an individual data point is or is not a scraper. Hence it may be desired to only label users as scrapers if we are very certain. The detected level of relative anomaly in Figure 2 tends to increase while moving away from the high density area on the left. Increasing the threshold of relative anomaly above which a user is labeled as a scraper will have the desired result that only observations on the far right—whose behavior is most different from what is typical—are labeled as anomalous. In contrast, the vertex degree approach will continue labeling observations in the low density area close to the cluster of normal users as anomalous.
In Figure 3 we show how the empirical cumulative distribution of relative anomalies may be useful for determining the threshold above which an observation is labeled an anomaly. For a clearer presentation, we apply a monotonic transformation to the relative anomaly values. The top 20 percent of observations have much higher relative anomaly values than the other observations. This approach is particularly useful in higher-dimensional problems, where a visual inspection is difficult.
We also apply the shortest path approach from Section III-B to the scraping data set. In Figure 4 we see that, compared with the popularity approach of Section III-A, the shortest path approach yields sharper bounds around the group of normal observations, which may be desired in some applications; in-sample, the classification outcomes are identical.
IV-B WiFi usage data
Our second data set contains observations on WiFi channel utilization reported for wireless transmissions at different access points within a specific location in a corporate networking environment. The instantaneous channel utilization at each access point is an indication of how busy the transmission channel is, and of whether the access point should transition to a different channel. Detecting channel utilization anomalies is critical for identifying access points with low performance due to consistently high utilization. The data set contains two covariates for each WiFi access point. The first covariate is a measure of overall utilization, and the second covariate measures received (rx) versus transmitted (tx) utilization. 72 percent of the data points cluster at the value corresponding to no utilization. According to domain experts, high utilization states are anomalous.
The vertex degree approach yields the results in Figure 5, with the same kernel bandwidth as before. We see that the two smaller clusters, as well as a few isolated data points, are jointly labeled as the top thirteen percent of anomalies.
In Figure 6 we show the results for the approach from Section III-A, again using the smaller bandwidth. Here the cluster of high usage observations on the far right is correctly labeled as anomalous, because it is far from the many observations on the left of the figure. The results for the shortest path approach from Section III-B are similar.
V Conclusion
Unsupervised approaches to anomaly detection are commonly used because labeling data is too costly or difficult. Many common approaches to unsupervised anomaly detection target a frequency criterion, which means that their performance deteriorates when anomalies occur frequently, as for example in the case of scraping. We proposed a novel concept, relative anomaly detection, that is more robust to such frequently occurring anomalies, because it takes into account the location of an observation relative to the most typical observations. We presented two novel algorithms under this paradigm. We also discussed real-time detection for new observations, and how univariate deviations from normal system behavior can be identified. We illustrated these approaches using data on potential scraping and WiFi usage from Google, Inc.
Acknowledgment
We thank Mitch Trott, Phil Keller, Robbie Haertel and Lauren Hannah for many helpful comments, and Dave Peters as well as Taghrid Samak for granting us access to their data sets.
References
[1] V. Chandola, A. Banerjee, and V. Kumar, "Anomaly detection: A survey," ACM Computing Surveys (CSUR), vol. 41, no. 3, p. 15, 2009.
[2] H. Moonesinghe and P.-N. Tan, "Outlier detection using random walks," in Tools with Artificial Intelligence (ICTAI '06), 18th IEEE International Conference on. IEEE, 2006, pp. 532–539.
[3] L. Page, S. Brin, R. Motwani, and T. Winograd, "The PageRank citation ranking: Bringing order to the web," 1999.
[4] C. E. Rasmussen and C. K. Williams, "Gaussian processes for machine learning," 2006.
[5] A. Zimek, E. Schubert, and H.-P. Kriegel, "A survey on unsupervised outlier detection in high-dimensional numerical data," Statistical Analysis and Data Mining: The ASA Data Science Journal, vol. 5, no. 5, pp. 363–387, 2012.
[6] C. C. Aggarwal, A. Hinneburg, and D. A. Keim, On the Surprising Behavior of Distance Metrics in High Dimensional Space. Springer, 2001.
[7] T. J. Hastie, R. J. Tibshirani, and J. H. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, 2009.
[8] D. L. Isaacson and R. W. Madsen, Markov Chains, Theory and Applications. Wiley, New York, 1976, vol. 4.
[9] E. Eskin, A. Arnold, M. Prerau, L. Portnoy, and S. Stolfo, "A geometric framework for unsupervised anomaly detection," in Applications of Data Mining in Computer Security. Springer, 2002, pp. 77–101.
[10] F. Angiulli, S. Basta, and C. Pizzuti, "Distance-based detection and prediction of outliers," IEEE Transactions on Knowledge and Data Engineering, vol. 18, no. 2, pp. 145–160, 2006.
[11] P. Bonacich, "Factoring and weighting approaches to status scores and clique identification," Journal of Mathematical Sociology, vol. 2, no. 1, pp. 113–120, 1972.
[12] C. Williams and M. Seeger, "The effect of the input density distribution on kernel-based classifiers," in Proceedings of the 17th International Conference on Machine Learning, 2000, pp. 1159–1166.
[13] A. Rahimi and B. Recht, "Random features for large-scale kernel machines," in Advances in Neural Information Processing Systems, 2007, pp. 1177–1184.
[14] C. Williams and M. Seeger, "Using the Nyström method to speed up kernel machines," in Proceedings of the 14th Annual Conference on Neural Information Processing Systems, 2001, pp. 682–688.
[15] G. E. Box and D. R. Cox, "An analysis of transformations," Journal of the Royal Statistical Society, Series B (Methodological), pp. 211–252, 1964.