Setting the threshold for high throughput detectors: A mathematical approach for ensembles of dynamic, heterogeneous, probabilistic anomaly detectors

10/25/2017 ∙ by Robert A. Bridges, et al. ∙ Oak Ridge National Laboratory ∙ University of Nebraska–Lincoln

Anomaly detection (AD) has garnered ample attention in security research, as such algorithms complement existing signature-based methods but promise detection of never-before-seen attacks. Cyber operations manage a high volume of heterogeneous log data; hence, AD in such operations involves multiple (e.g., per IP, per data type) ensembles of detectors modeling heterogeneous characteristics (e.g., rate, size, type), often with adaptive online models producing alerts in near real time. Because of high data volume, setting the threshold for each detector in such a system is an essential yet underdeveloped configuration issue that, if slightly mistuned, can leave the system useless, either producing a myriad of alerts and flooding downstream systems, or giving none. In this work, we build on the foundations of Ferragut et al. to provide a set of rigorous results for understanding the relationship between threshold values and alert quantities, and we propose an algorithm for setting the threshold in practice. Specifically, we give an algorithm for setting the threshold of multiple, heterogeneous, possibly dynamic detectors completely a priori, in principle. Indeed, if the underlying distribution of the incoming data is known (or closely estimated), the algorithm provides provably manageable thresholds. If the distribution is unknown (e.g., has changed over time), our analysis reveals how the model distribution differs from the actual distribution, indicating that a period of model refitting is necessary. We provide empirical experiments showing the efficacy of the capability by regulating the alert rate of a system with ≈2,500 adaptive detectors scoring over 1.5M events in 5 hours. Further, using the real network data and detection framework of Harshaw et al., we demonstrate the alternative case, showing how the inability to regulate alerts indicates that the detection model is a poor fit to the data.


I Introduction

The current state of defense against cyber attacks is a layered defense, primarily of a variety of automated, signature-based detectors and secondarily via manual investigation by security analysts. Typically, large cyber operations (e.g., at government facilities) have widespread collection and query capabilities for an enormous amount of logging and alert data. For example, at the network level, firewalls and intrusion detection/prevention systems (IDS/IPS) such as Snort (https://www.snort.org/) produce logs, warnings, and alerts that are collected, in addition to the collection of network flow logs; sometimes full packet captures are stored and/or analyzed as well.

Flow Record Example
Time 09:58:32.912
Protocol tcp
SrcIP 192.168.1.100
SrcPort 59860
DstIP 172.16.100.10
DstPort 80
SrcBytes 508526
DstBytes 1186562
TotBytes 1695088
Table I: Flows record the metadata of IP-IP communications.

Additionally, situational awareness tools such as Nessus (https://www.tenable.com/products/nessus-vulnerability-scanner) provide lists of software, software versions, and known vulnerabilities for each host. Host-based IDS/IPS such as McAfee (https://www.mcafee.com/us/index.html) anti-virus (AV) software and AMP (http://www.cisco.com/c/en/us/products/security/advanced-malware-protection/index.html) report alerts to cyber security operations, in addition to situational awareness appliances. Hence, security analysts now have access to multiple streams of heterogeneous sources producing data in high volumes. As an example, a large enterprise network operation, with which we collaborate, monitors only a portion of their network flow logs, a volume of 4-7 GB/s, in addition to many other logging and alerting tools employed. Consequently, manual investigation and automated processing of data/incidents must manage the large bandwidth.

While the first line of defense is signature-based methods (e.g., AV, firewall), which operate by matching precise rules that identify known attack patterns, their false negative rate is problematic. In response, a large body of literature proposes anomaly detection (AD) systems for protection [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]. AD provides a complementary monitoring tool that holds the promise of detecting novel attacks by identifying large deviations from normal behavior, and this concept has been proven in many of the previous works. Ideally, accurate detection with a low number of false positives is achieved. At a minimum, an AD IDS should isolate a manageable subset of events that are sufficiently abnormal to warrant next-step operating procedures, such as passing alerts to a downstream system (e.g., an automated signature generator) or to an operator for manual investigation.

While AD has garnered much research attention, such algorithms are met with many challenges when used in practice in the cyber security domain. How to design AD models for accuracy—exploring what statistics, algorithms, and data representations to use so that the detected events correspond with operator-defined positives—is the focus of many previous works [1, 2, 3, 4, 5, 6, 7, 8, 17, 14, 15, 16] and in deployment is likely a network-specific task leveraging both domain expertise (understanding attacks, protocols, etc.) and tacit environmental knowledge (understanding configuration of network appliances and their behaviors). Common trends to increase accuracy involve the use of ensembles of heterogeneous detectors [3, 16, 4, 5, 6, 7, 18, 19, 14] and/or online detection models that adapt in real time and/or upon observations of data [1, 20, 21, 5, 8]. In practice, the need for multiple detectors is enhanced by the diversity in network components (models conditioned on each host, subnet, etc.), data types (models conditioned on flow data, system logs, etc.), and features of interest (rate, distribution of ports used, ratio of data-in to data-out, etc.). For example, the patents of Ferragut et al. [22, 23] detail AD systems using a fleet of dynamic models and producing near real-time alerts on high-volume logging data.

I-A Problem Addressed

In this work we do not present novel methods for accurate detection of intrusions. Rather, we address a difficult but important question for AD in IDS applications, namely: How should the alert threshold be set in the case of a large number of heterogeneous detectors, possibly changing in real time, that are producing alerts on high-volume data? Our organization’s cyber operations’ analysts have arrived at the problem of alert rate regulation from three scenarios that all require real-time prioritization of alerts that can accommodate influxes of data as well as the multitude of evolving models, namely, (1) manual alert investigation requires an online way to triage events; (2) data storage limitations, e.g., storing packet captures (PCAPs) from the most anomalous traffic, requires a real-time algorithm for prioritizing events as “anomalous enough”; (3) online automated alert processing (e.g., automated signature generation of anomalous activity) cannot handle influxes, i.e., downstream systems require alert rate regulation to prevent a denial-of-service.

To illustrate the problem, consider the AD system and Skaion data used in Section III-A. This AD system scores each flow using two evolving probability models per internal IP, totaling about 2,500 dynamic detectors. It is simply not feasible to manually tune the threshold for each model, and even so, since the models are changing in real time, reconfiguration would be periodically necessary. Furthermore, the consequences of misconfigured thresholds are substantial. Altogether, the system produces a collective ≈2M anomaly scores in about five hours; hence, a threshold that is only slightly too high can produce tens of thousands of alerts per hour! Moreover, this dataset is small compared to many networks, and the detection ensemble grows linearly with the number of IPs to model (network size).

The specific problem of how to set the alert threshold for detection systems in these very realistic scenarios is difficult, underdeveloped, and when not properly addressed, leaves anomaly detectors useless, as their goal is to isolate the most abnormal events from the sea of data. Hence, we arrive at our problem of interest. How does an operator set the threshold for an AD system, given that the method must adequately accommodate a multitude of possibly adaptive models operating on possibly variable speed, high volume data? Second, our work contributes to the related problem of detecting drift of adaptive models over time.

I-B Background & New Contributions

This alert-rate regulation problem was first specified by Ferragut et al. [5], who note that a principled notion of quantitative comparability across detectors is necessary to deal with multiple/dynamic models. Their relevant contribution is twofold. (1) By assuming data is sampled from an accessible probability distribution, they formulate a definition equivalent to Defn. II.2. The upshot is that anomalies are events with low p-values (see Defn. II.1), and this technique provides a distribution-independent, comparable anomaly score. (2) They provide a theorem equivalent to Lemma II.3 for alert-rate regulation. No alert rate algorithm nor experimental testing of the theorem's consequences are presented.

Kriegel et al. [18] have addressed the problem of comparing multiple outlier detection methods by manually crafting transformation functions that convert each algorithm's output to a comparable score in the interval [0,1]. This is done for only a handful of outlier detection algorithms, illustrating the obvious drawback of this approach: the necessity to manually investigate each model.

For numerical time-series data, Siffer et al. [24] exploit the extreme value theorem to find distribution-independent bounds on the rate of extreme (large) values.

Our contributions build on the work of Ferragut et al. [5] both in extending the mathematics and in converting these theorems into an operationally viable solution to the problem. New theorems of Section II provide further mathematical advancements pertinent to understanding the relationship between the p-value threshold and the likelihood of an alert. These results inform an operational workflow. Given (1) a detection capability that uses a probability distribution to score low p-value events as anomalies and (2) knowledge of the data's rate, the operator has a principled, distribution-independent method for setting the threshold to regulate the number of alerts produced. Hence, the algorithm can be applied to an ensemble of possibly dynamic, heterogeneous detectors to prevent overproduction of alerts. Notably, the system will not suppress influxes of anomalies, but asymptotically the operator-given bound is respected. Our mathematical results give hypotheses that ensure equality in the theorem; operationally, this is the case when users can specify, rather than just bound, the number of alerts. As the theorems hold independent of the model (distribution) used, operators can set the threshold a priori. In particular, it remains valid in a streaming setting, where the detection model is updated in real time to accommodate new observations. Because the underlying assumption is that future observations are sampled from the model's distribution, the alert rate regulation will fail if the model distribution differs from the actual distribution. Hence, the theorem's contrapositive gives an operational benefit, namely that violations of the operator-given alert rate indicate that the anomaly detection model is not a good fit to the data. In this case a period of relearning the distribution is necessary for the threshold-setting algorithm to remain effective.

We present empirical experiments testing our method for setting the alert rate in two scenarios, both using detectors on network flow data. The first (Section III-A) shows the efficacy of setting the threshold on approximately 2,500 simultaneous, dynamically updated detectors. In this scenario, multiple anomaly scores are computed per flow; hence, the data rate is high and varies according to network traffic. The results show that the mathematics give a method for regulating the alert rate of dynamic detectors that is a priori (in the sense that no knowledge of the specific distribution is necessary for threshold configuration). Our second experiment (Section III-B) uses data from the network AD paper of Harshaw et al. [8], which fits a Gaussian to a vector describing the real network traffic every 30 seconds. Hence, it is a single, fixed-rate detector. The results from this application illustrate the analytic capabilities for gauging model fit that are made possible by the theorems we develop.

Altogether, our work gives new mathematical results regarding the p-value distribution. This informs an algorithm that poses an alternative: operators can accurately set the threshold of detection ensembles to bound the expected number of alerts, or identify a misfit of the detection model.

II Mathematical Results

In this section we present mathematical assumptions and results that are the foundation for bounding the alert rate of an anomaly detector. To proceed, we consider probabilistic anomaly detectors, which score data’s anomalousness according to a probability distribution describing the data. We leverage the probabilistic description to provide a theorem that gives sharp bounds on the alert rate in terms of the threshold, independent of the distribution. This gives an a priori method to manage the expected number of alerts for any distribution. As the mathematics is presented with the necessary but possibly abstruse formality and rigor, we include easy-to-understand examples illustrating the results, their implications, and limitations.

II-A Setting and Notation

Our setting assumes data is sampled independently from a distribution with probability density function (PDF) f. The ambient space X is assumed to be a σ-finite measure space with measure μ, and 𝔐 denotes the set of measurable subsets of X. We define the probability of a measurable set S ∈ 𝔐 to be P(S) = ∫_S f dμ; i.e., P denotes the corresponding probability measure with Radon–Nikodym derivative f. For almost all applications, X is either a subset of R^n with Lebesgue measure, or X is a discrete space with counting measure.

We say an anomaly score A: X → R respects the distribution f if A(x_1) ≥ A(x_2) holds if and only if f(x_1) ≤ f(x_2); intuitively, x_1 is more anomalous than x_2 if and only if x_1 is less likely than x_2. While this can be attained by simply letting A = g ∘ f for a decreasing function g (for example, see Tandon and Chan [15]), the work of Ferragut et al. [5] notes that such an anomaly score inhibits comparability across detectors. That is, because such a definition puts A in one-to-one correspondence with the values of f, anomaly scores can vary wildly for different distributions. The consequence is that setting a threshold is dependent on the distribution, which is problematic especially for two settings that are common. The first is a setting where multiple detectors (e.g., detectors for network traffic velocity, IP-distribution, etc.) are used in tandem, as each requires a different model. For instances in the literature using cooperative detectors see [3, 16, 4, 5, 6, 7, 18, 19, 14]. The second setting is when dynamic models (where f is updated upon new observations) are used, as this requires comparison of anomaly scores over time. For examples of streaming detection scenarios see [1, 20, 21, 5, 8].

To circumvent this problem, we follow Ferragut et al. [5] by assuming observations are sampled independently from a PDF f, and define anomalies as events with low p-value (as do many detection capabilities). For any distribution, the p-value of an event gives its likelihood relative to the distribution. Hence, it always takes values in [0,1], and in the specific case of a univariate Gaussian it is simply the two-sided tail probability corresponding to the z-score.

Definition II.1 (P-Value).

The p-value of x ∈ X with respect to the distribution f is denoted pv_f(x) and is defined as

pv_f(x) = P({y ∈ X : f(y) ≤ f(x)}).

It is clear from the definition that pv_f(x_1) ≤ pv_f(x_2) if and only if x_2 is more likely than x_1, since {y : f(y) ≤ f(x_1)} ⊆ {y : f(y) ≤ f(x_2)} exactly when f(x_1) ≤ f(x_2). Finally, in order to define an anomaly score, simply compose a decreasing function, say g(t) = 1 - t, with the p-value.

Definition II.2 (Anomaly Score).

An anomaly score that respects a distribution f is of the form A = g ∘ pv_f, for g strictly decreasing.

Since g can be any strictly decreasing function, g(t) = 1 - t is a natural choice, simply inverting [0,1] so that low p-values (anomalies) get high scores and conversely. For the theorems that follow, we use both the p-value threshold, denoted by s, and the corresponding anomaly score threshold, which is simply u = g(s).
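To make the definitions concrete, the following sketch (ours, not the authors' code, with an invented four-outcome distribution) computes p-values and the corresponding anomaly scores for a discrete PMF:

```python
# Illustrative sketch of Definitions II.1 and II.2 for a discrete
# distribution: pv_f(x) = P({y : f(y) <= f(x)}), and the anomaly
# score A = g o pv_f with the decreasing bijection g(t) = 1 - t.

def p_value(pmf: dict, x) -> float:
    """P-value of outcome x: total mass of outcomes no more likely than x."""
    fx = pmf[x]
    return sum(p for p in pmf.values() if p <= fx)

def anomaly_score(pmf: dict, x) -> float:
    """Anomaly score respecting pmf, using g(t) = 1 - t."""
    return 1.0 - p_value(pmf, x)

# A made-up four-outcome distribution, purely for illustration.
pmf = {"a": 0.1, "b": 0.2, "c": 0.3, "d": 0.4}
for outcome in pmf:
    print(outcome, p_value(pmf, outcome), anomaly_score(pmf, outcome))
# "a" is the rarest outcome: pv = 0.1, score 0.9; "d" is the mode: pv ~ 1.
```

Note that the scores are comparable across any two such models, since they live on the common p-value scale rather than on the (distribution-specific) scale of f itself.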

Ii-B Theorems

In this section we present the mathematical results that make precise the relationship between an anomaly threshold and the likelihood of an alert. Theorem II.4 and ensuing corollaries give sharp estimates for bounding the alert rate in terms of the threshold.

Lemma II.3.

Let f denote a probability distribution. For all s ∈ [0,1],

P({x : pv_f(x) ≤ s}) ≤ s.

Furthermore, equality holds if and only if s = sup{pv_f(x) : pv_f(x) ≤ s} (with the convention sup ∅ = 0); in particular, equality holds whenever s ∈ pv_f(X).

Proof.

Suppose for the moment there exists x_0 ∈ X such that pv_f(x_0) = s. Then {x : pv_f(x) ≤ s} ⊇ {y : f(y) ≤ f(x_0)}, so that

P({x : pv_f(x) ≤ s}) ≥ P({y : f(y) ≤ f(x_0)}) = pv_f(x_0) = s. (1)

Hence we have equality in this case (once the inequality is proven), which shows the inequality is sharp.

To prove the inequality, let E = {x : pv_f(x) ≤ s}. Since each x ∈ E belongs to {y : f(y) ≤ f(x)}, we have E = ⋃_{x ∈ E} {y : f(y) ≤ f(x)}. Since the sets on the right are a nested, increasing family, we have

P(E) = sup_{x ∈ E} P({y : f(y) ≤ f(x)}) = sup_{x ∈ E} pv_f(x) ≤ s.

This proves the inequality, and establishes equality if and only if sup_{x ∈ E} pv_f(x) = s. ∎

Roughly speaking, the lemma says that if we sample x from distribution f and compute its p-value, pv_f(x), the chance that pv_f(x) is less than a fixed number s ∈ [0,1] is at most s. The next theorem translates this to the AD setting.

Theorem II.4 (Alert Rate Regulation Theorem).

Let g be strictly decreasing, so that A = g ∘ pv_f is an anomaly score that respects the distribution f. Let u denote the alert threshold (so x is called “anomalous” iff A(x) ≥ u), and set s = g^{-1}(u). Let x_1, …, x_n be a set of independent samples from the PDF f. Then the expected number of alerts among x_1, …, x_n is bounded above by sn; i.e.,

E[#{i : A(x_i) ≥ u}] ≤ sn.

Proof.

Since g is strictly decreasing, A(x_i) ≥ u if and only if pv_f(x_i) ≤ s. Hence, by the definitions of A and s, we have

E[#{i : A(x_i) ≥ u}] = Σ_{i=1}^{n} P(A(x_i) ≥ u) = n · P({x : pv_f(x) ≤ s}) ≤ sn,

with the last inequality provided by the Lemma. ∎
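As a sanity check of the theorem (our own sketch, not code from the paper), the bound can be verified by simulation for a standard normal, where pv_f(x) = 2Φ(−|x|) = erfc(|x|/√2) and the bound is in fact an equality for this plateau-free distribution:

```python
# Monte Carlo check of Theorem II.4: for n independent samples and
# p-value threshold s, the expected number of alerts is at most s*n
# (with equality for a plateau-free distribution such as the normal).
import math
import random

def pv_normal(x: float) -> float:
    """Two-sided normal p-value: pv(x) = 2*Phi(-|x|) = erfc(|x|/sqrt(2))."""
    return math.erfc(abs(x) / math.sqrt(2.0))

random.seed(0)
s, n, trials = 0.05, 1000, 200
avg_alerts = sum(
    sum(1 for _ in range(n) if pv_normal(random.gauss(0, 1)) <= s)
    for _ in range(trials)
) / trials
print(avg_alerts, "vs. bound s*n =", s * n)  # empirical mean is close to 50
```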

Corollary II.5.

If pv_f : X → [0,1] is surjective, then equality holds in the preceding theorem and lemma for all s ∈ [0,1].

Corollary II.6.

If X is a connected topological space, f is not the uniform distribution, and pv_f is continuous, then every s ∈ (0,1] satisfies pv_f(x) = s for some x ∈ X; hence, equality holds in the preceding theorem and lemma for all s.

Corollary II.7.

Suppose X is a topological space and, for all t ≥ 0, P({x : f(x) = t}) = 0. Then equality holds in the preceding theorem and lemma.

Proof.

Fix s ∈ [0,1) and let F(t) = P({x : f(x) ≤ t}), so that pv_f(x) = F(f(x)). For any t,

F(t) − lim_{r↑t} F(r) = P({x : f(x) = t}) = 0, (2)

by the hypothesis and continuity of measures; hence F is continuous. Since F is nondecreasing with infimum 0 and supremum 1, setting t_0 = sup{t : F(t) ≤ s} gives F(t_0) = s by the intermediate value theorem. It follows that {x : pv_f(x) ≤ s} = {x : f(x) ≤ t_0}, so that P({x : pv_f(x) ≤ s}) = F(t_0) = s. This establishes the condition of Lemma II.3 for equality; the case s = 1 is trivial. ∎

II-C Examples and Explanations

See Figure 1 depicting a simple trinomial distribution and corresponding p-value distribution. This distribution’s plateau forces a discontinuity in the rate of alerts as a function of the threshold.

Figure 1: Trinomial distribution with corresponding p-value distribution. P-value thresholds 1/6, 1/2, and 1 are the only values for which equality holds in the theorems. For these threshold values the expected percentages of alerts are exactly 1/6, 1/2, 1, and, moreover, these are the only percentages possible; e.g., using any p-value threshold s < 1/6 will yield exactly 0 events, and any s ∈ [1/6, 1/2) yields an expected 1/6 of the events as alerts. This illustrates a fundamental limitation of PDFs with plateaus. Note that this phenomenon can occur with continuous f as well.

In this distribution the operator can yield either exactly none or exactly 1/6th of all events as the threshold changes from below to above s = 1/6. As the extreme case, consider the uniform distribution, in which all events are equally likely/anomalous. With the uniform distribution, the operator can yield exactly none or all events. Note that this limitation is independent of the method for choosing the threshold and poses a general problem for AD. This limitation appears in our experiments with real data.
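The plateau limitation can be sketched numerically; the bin probabilities 1/6, 1/3, 1/2 below are our assumption, chosen to be consistent with the p-value plateaus 1/6, 1/2, 1 of Figure 1:

```python
# Sketch of the plateau limitation: for a trinomial with assumed bin
# probabilities 1/6, 1/3, 1/2, the expected alert fraction
# P(pv <= s) only takes the values 0, 1/6, 1/2, 1, no matter how the
# p-value threshold s is chosen.
probs = [1/6, 1/3, 1/2]
# p-value of each bin: total mass of bins no more likely than it.
pv = {p: sum(q for q in probs if q <= p) for p in probs}

def expected_alert_fraction(s: float) -> float:
    """Expected fraction of events alerted at p-value threshold s."""
    return sum(p for p in probs if pv[p] <= s)

for s in [0.0, 0.1, 0.3, 0.5, 0.9, 1.0]:
    print(f"s = {s:.1f} -> expected alert fraction = {expected_alert_fraction(s):.3f}")
```

The printed fractions jump discontinuously at s = 1/6, 1/2, and 1, exactly the behavior described in the caption of Figure 1.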

Corollaries II.5, II.6, and II.7 are crafted to identify when this limitation is not present. As a simple example, consider the standard normal distribution, Figure 2. From the plot of the corresponding p-value distribution, continuity is easy to see. Since pv_f(x) = 2Φ(−|x|), where Φ is the cumulative distribution function (CDF) of the standard normal, it follows from Corollary II.6 that we have equality in Theorem II.4 for all s. The same result follows from Corollary II.7 and the fact that f has no plateaus. Hence, the equality condition for all threshold values means that one can specify the expected number of alerts; quite explicitly, if one desires exactly the most anomalous 1/1000th of the data, then simply setting the p-value threshold to s = .001 guarantees the result. Using the contrapositive, we see that if the model admits equality for some p-value threshold s, then an average number of alerts above/below s·100% of the data indicates that the tails of f are too small/big, respectively. Without equality one can only detect tails that are too thick.
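Concretely (our own numerical sketch, not from the paper), the p-value threshold s translates to an explicit cutoff on the data itself for the standard normal, since pv(x) = 2Φ(−|x|) is strictly decreasing in |x|; a simple bisection recovers the familiar two-sided cutoff for s = 1/1000:

```python
# For a standard normal detector, invert pv(x) = 2*Phi(-|x|) to find
# the |x| cutoff corresponding to a p-value threshold s, by bisection.
import math

def pv(x: float) -> float:
    """Two-sided normal p-value: 2*Phi(-|x|) = erfc(|x|/sqrt(2))."""
    return math.erfc(abs(x) / math.sqrt(2.0))

def cutoff(s: float) -> float:
    """|x| value where pv equals s; pv is decreasing, so bisect on [0, 20]."""
    lo, hi = 0.0, 20.0
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if pv(mid) > s else (lo, mid)
    return lo

z = cutoff(1e-3)
print(z)      # ~3.29, the familiar two-sided 0.1% cutoff
print(pv(z))  # ~1e-3, by construction
```

Flagging |x| above this cutoff isolates exactly the most anomalous 0.1% of the distribution, matching the discussion above.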

Finally, we note that while these examples are simple distributions chosen for illustrative purposes, the theorems hold under the specified, very general hypotheses. All that is needed is a known measure for which the probability measure is absolutely continuous.

II-D Alert Rate Regulation Algorithm

Under the assumption that data observations are samples from our distribution, we are mathematically equipped to design an algorithm that exploits the relationship between the alert rate and the threshold to prevent an overproduction of alerts. To illustrate this, suppose we receive N data points per time interval (e.g., per minute), but operators only have resources to inspect the α most anomalous in each time interval.

Figure 2: The standard normal distribution PDF is depicted with corresponding p-value distribution. In this case, the p-value is continuous, and the issue faced by the aforementioned trinomial distribution is avoided. Operationally, this means that for any specified fraction s, a threshold can be set to isolate the most anomalous fraction s of the distribution.

Let f_n be a PDF fit to all previous observations x_1, …, x_n. Following the assumption that the next observation, x_{n+1}, will be sampled according to f_n, define the anomaly score A_n = g ∘ pv_{f_n}, where g is a fixed, strictly decreasing bijection of the unit interval. Upon receipt of x_{n+1}, an alert is issued if A_n(x_{n+1}) ≥ g(s); equivalently, if pv_{f_n}(x_{n+1}) ≤ s. Finally, we update f_n to f_{n+1}, now including observation x_{n+1}, and repeat the cycle upon receipt of the next observation. Leveraging the theorem above, the expected number of alerts per interval is at most sN. Hence, choosing s = α/N (equivalently, flagging x_{n+1} if the p-value is below α/N) ensures that the operator's bound on the number of alerts will hold on average.
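A minimal sketch of this cycle, with an assumed toy 10-bin discrete model and an invented alert budget (the class and stream below are illustrative, not the paper's implementation):

```python
# Fixed-rate alert regulation loop of Section II-D: with N scores per
# interval and a budget of alpha alerts, flag pv <= s = alpha / N,
# then refit the model on the new observation.
from collections import Counter

class StreamingMultinomialDetector:
    """Toy discrete model, refit after every observation (add-one counts)."""

    def __init__(self, bins):
        self.counts = Counter({b: 1 for b in bins})  # uniform prior
        self.total = len(self.counts)

    def p_value(self, x) -> float:
        """Mass of bins whose count is <= the observed bin's count."""
        cx = self.counts[x]
        return sum(c for c in self.counts.values() if c <= cx) / self.total

    def update(self, x) -> None:
        """Refit: increment the observed bin and the denominator."""
        self.counts[x] += 1
        self.total += 1

alpha, N = 1, 1000            # invented budget: 1 alert per 1000 scores
s = alpha / N                 # p-value threshold from Theorem II.4
det = StreamingMultinomialDetector(bins=range(10))
for x in [3, 3, 7, 3, 9]:     # toy stream; score first, then update
    if det.p_value(x) <= s:
        print("ALERT:", x)    # nothing here is rare enough to alert
    det.update(x)
```

The score-then-update ordering matters: each observation is judged against the model fit to the data before it, exactly as in the cycle described above.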

The method above is for a fixed time interval, or for constant-rate data. As the speed of the data may vary, we now adapt the above method to dynamically change the alert rate to accommodate variable data speed. Let t_n denote the arrival time of x_n, and let β (alerts per second) be the user-desired upper bound on the alert rate (the analogue of α). Next, for each time interval of length Δ we periodically estimate the rate of data by letting r_k = #{n : t_n ∈ ((k−1)Δ, kΔ]}/Δ, so that r_k·Δ gives the number of observed events over the kth Δ-length interval. Hence r_k is a periodically computed, moving-average data speed. Alternatively, one could compute the rate from each data point's wait time, i.e., r_n = 1/(t_n − t_{n−1}), which could experience much more variance. Finally, the new threshold for the kth interval shall be given by u_k = g(β/r_k); equivalently, the p-value threshold is s_k = β/r_k. This new threshold is used for classifying each x_n as soon as x_n is observed. This algorithm can be called at each iteration of a streaming algorithm to regulate the alert rate.

Note that this choice of threshold depends only on the rate of the data (r_k or N) and the operator's bound on the alert rate (β or α). In particular, it is independent of the distribution, and therefore can be set a priori, regardless of the distribution. This is especially applicable in a streaming setting, where f is constantly changing. Next, this is not a hard bound on the number of alerts per day, but rather bounds the alert rate in expectation. The operational impact is that if there is an influx of anomalous data, all will indeed be flagged, but on average the alert rate will be bounded as desired. Consequently, it is possible to have more than the budgeted number of alerts in some time intervals. Finally, if one has the luxury of performing post-hoc analysis, such an algorithm is not needed; for example, one can prioritize a previous day's alerts by anomaly score, handling as many as possible. Yet, if real-time analysis is necessary, e.g., only a fraction of the alerts can be processed, stored, etc., then such an algorithm allows real-time prioritization of alerts with the bound preset.
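The interval-based rate estimate and threshold update described above can be sketched as follows (our illustration; the clamp of s_k to at most 1 and the example rates are assumptions):

```python
# Adaptive threshold of Section II-D: every delta seconds, estimate
# the data rate r_k and set the p-value threshold s_k = beta / r_k,
# so the expected alert rate stays below beta.
def adaptive_thresholds(arrival_times, beta, delta):
    """Yield (interval k, rate r_k, p-value threshold s_k) per interval."""
    k, count, t_end = 0, 0, delta
    for t in sorted(arrival_times):
        while t > t_end:                 # close out any finished intervals
            r = count / delta            # events per second in interval k
            yield k, r, (min(1.0, beta / r) if r > 0 else 1.0)
            k, count, t_end = k + 1, 0, t_end + delta
        count += 1
    r = count / delta                    # final (partial) interval
    yield k, r, (min(1.0, beta / r) if r > 0 else 1.0)

# Toy run: 60 s intervals, budget beta = 1 alert / 60 s; a burst
# roughly doubles the rate in the second minute, tightening s_k.
times = [i * 0.5 for i in range(120)] + [60 + i * 0.25 for i in range(240)]
for k, r, s in adaptive_thresholds(times, beta=1/60, delta=60.0):
    print(f"interval {k}: rate {r:.2f}/s, threshold {s:.2e}")
```

As the printed run shows, when the data rate rises the threshold falls proportionally, which is precisely the self-tuning behavior (and, as noted next, the vulnerability) discussed in the text.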

The underlying assumption is that data is sampled from the model's probability distribution, so if this assumption fails, for example, if there is a change of state of the system and/or if f poorly describes the observations to come, then these bounds may cease to hold. This failure gives an ancillary benefit of the mathematical machinery above, namely, that monitoring the actual versus expected alert rate gives a quantitative measure of how well the distribution fits the data. Our work to characterize conditions that ensure equality (Corollaries II.5, II.6, and II.7) serves this purpose. Under these conditions, the alert rate bound is an actual equality, and deviations from the expected number of alerts over time (both below and above) indicate a poor model for the data. On the other hand, when the bound is strict (and equality does not hold), only an on-average overproduction of alerts will signal a model that does not fit the data. Our experiments on real data illustrate this phenomenon as well.

Finally, we note that while the adaptive threshold, scaling with the data rate, is conveniently self-tuning for variable-speed data, it induces a vulnerability. Quite simply, an all-knowing adversary with the capability to increase the data rate at the time of attack can force the threshold toward zero, masking otherwise-alerted events. On the other hand, this is easily parried with a simple fixed-rate detector on the data rate, i.e., modeling the statistic r_k (e.g., a denial-of-service flooding detector). Note that as r_k is computed once each interval, the proposed workaround is a fixed-rate detector; hence, the alert rate can be regulated without the induced vulnerability.

II-E Impact on Detection Accuracy

For high-volume situations, the adaptive threshold will reduce the number of alerts during influxes of data. Consequently, the true/false positive rates (defined, respectively, as the percentage of positives/negatives that are alerts, and hereafter TPR, FPR) will drop, as the number of alerts (numerator) will be reduced with fixed denominator. The effect on the positive predictive value (PPV), also known as precision (defined as the percent of alerts that are positives), will depend on the distribution of anomaly scores over the positive events in the data set. In particular, if this distribution is uniform, then precision will be unaffected. In this case we note that our theorems give a sharp bound on (and the ability to regulate) the false detection rate (1 − PPV).

III Empirical Experiments

We present experiments testing the fixed-rate and streaming threshold algorithms on two data sets, Skaion (synthetic) and GraphPrints (real) flow data.

III-A Skaion Data & Detection System

To test the alert rate algorithm we implemented a streaming AD system on the network flow data from the Skaion data set. See the Acknowledgements for details on the Skaion data source. This data was, to quote the dataset documentation, “generated by capturing information from a synthetic environment, where benign user activity and malicious attacks are emulated by computer programs.” There is a single PCAP file for benign background traffic and a PCAP file for each of nine attack scenarios. We utilized ARGUS (http://www.qosient.com/), the Audit Record Generation and Utilization System, an open-source, real-time network flow monitor, for converting the PCAP information into network flow data. As the different PCAP files had mutually disjoint time intervals, we created a single, continuous set of flows by offsetting the timestamps from the 5s20 (Skaion label) attack-scenario flows to correspond with a portion of the “background” (i.e., non-attack, 5b5 Skaion label) flows, and then shuffling the ambient and attack data together so they are sorted by time. (The 5s20 attack scenario, titled “Multiple Stepping Stones,” begins with the attacker scanning internet-facing systems, gaining access to one of them using an OpenSSL exploit, then leveraging this to gain access to several systems behind the firewall. The attacker's initial scan of the internet-facing systems is not subtle and, therefore, produces a large spike in the AD system.)

Altogether, our test data set has 681,220 background traffic flows spanning five hours 37 minutes (337 minutes), with 227,962 flows from the attack PCAP file included from the 227th minute onward. The attack PCAP file includes approximately 20 minutes of data before the attacker initiates the attack. This data set includes 6,905 IPs, of which 1,246 are internal IPs (100.*.*.*). To put this in perspective, Section III-B uses flows from a real network of (only) 50 researchers that produced twice the Skaion flow volume in half the time; the Skaion data is relatively small.

We implement a dynamic fleet of detectors roughly based on the Ferragut et al. [23] patent and currently used in operation. Specifically, for each internal IP we implement two detectors. The first models the previously observed inbound and outbound private ports, numbered 1-2048 (1-1024 for outbound, 1025-2048 for inbound traffic), using a 2048-bin multinomial, and follows the recent publication of Huffer & Reed [25], where it is shown that the role of a host can be characterized by the use of private ports in flow data. The second models the producer-consumer ratio (PCR), which is defined as (source bytes − destination bytes)/(source bytes + destination bytes). Hence, PCR is a metric describing the ratio of data-in to data-out per flow and takes a value in the interval [−1,1]. This is modeled by a 10-bin multinomial. Initially, all bins (in both models) are given a lone count; notationally, c_i = 1 for each bin i (with m = 2048 or m = 10 bins). Upon receipt of an observation falling in bin i, the p-value is computed as pv = Σ{c_j : c_j ≤ c_i}/Σ_j c_j, with the sums over the bins. Finally, the model is updated by simply incrementing the count of the observed bin and the denominator. Mathematically, this is the maximum a posteriori multinomial given the previous observations and a uniform prior.
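For concreteness, a sketch of the PCR feature and its binning (the uniform bin edges over [−1,1] are our assumption; the byte counts come from the flow in Table I):

```python
# PCR = (src - dst) / (src + dst), binned into a 10-bin multinomial.
# Bin edges are assumed uniform over [-1, 1] for illustration.
def pcr(src_bytes: int, dst_bytes: int) -> float:
    """Producer-consumer ratio: +1 pure producer, -1 pure consumer."""
    total = src_bytes + dst_bytes
    return (src_bytes - dst_bytes) / total if total else 0.0

def pcr_bin(value: float, n_bins: int = 10) -> int:
    """Map [-1, 1] onto bins 0..n_bins-1; clamp the right endpoint."""
    return min(int((value + 1.0) / 2.0 * n_bins), n_bins - 1)

# Flow from Table I: SrcBytes 508526, DstBytes 1186562.
v = pcr(508526, 1186562)
print(round(v, 3), pcr_bin(v))  # -0.4 2
```

The example flow is a net consumer (more bytes received than sent), landing it in a bin on the negative side of the PCR range.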

Figure 3: All subplots depict the number of alerts per minute (y-axis) against time (x-axis) for the adaptive threshold (blue curve in all plots) against a fixed p-value threshold; top row: yellow curve is s = .02, middle row: red curve is s = .003, bottom row: green curve is the theorem-derived fixed threshold. Left column shows the whole 337-minute dataset; right column is zoomed in to show better resolution. In total, the system has 1,246 IPs × 2 detectors/IP = 2,492 dynamic detectors and produces 1,565,596 anomaly scores (p-values). The spike in minute 247 is caused by the attacker initiating a series of unsubtle port scans against internet-facing systems, causing an increased quantity of alerts. The fixed thresholds s = .02 and s = .003 are used as baselines for comparison. The theorem-derived fixed threshold is computed using our theorem to satisfy the bound of 1 alert per minute on average. Considering the fixed thresholds only, we conclude that improper choice of the threshold will easily flood the analyst with alerts, but that using a fixed threshold informed by our mathematics will prevent overproduction. The adaptive threshold produces 0.43 alerts per minute on average. Considering the bottom row only, we conclude that the adaptive threshold produces fewer overall alerts than the fixed threshold with the same alert-rate bound, and spreads the alerts more evenly.

Altogether, the system has 1,246 IPs × 2 detectors/IP = 2,492 dynamic detectors and produces 1,565,596 anomaly scores (p-values) in the 337 minutes of data. Our goal in designing these detectors was to create a

Figure 4: Rate of Skaion flow data, n̄(t), in anomaly scores per minute. The adaptive threshold (not depicted) has the inverse relationship τ(t) = A/n̄(t) with the user-given alert rate bound A (A = 1 in our experiments).

realistic detection capability, and models were chosen at the advice of professional cyber security analysts. Although the focus of this paper is not on pioneering accurate AD, we quickly note that the current detectors are able to easily recognize that the initial port scan activity of the attack is anomalous (Fig. 3, min. 247). Deeper investigation of when and how well these models are effective is out of scope for this effort.

III-A1 Skaion Data Results

We test our alert rate threshold analysis against more traditional thresholds. Throughout our discussion, reference Figure 3, giving the number of alerts per minute with various thresholds. Some previous works, both within intrusion detection [26, 27, 28, 29] and in other domains [30], using p-value thresholds follow the "3σ rule-of-thumb" and let 0.3% = .003 serve as the threshold. This follows from the fact that the 3σ-tails of a normal distribution have p-value .0027, or approximately .003. Others simply choose a value near 2%, either for an undisclosed reason or because of a true positive versus false positive analysis in light of labeled events [31, 32, 8, 33, 10, 34]. While such values may be appropriate for hypothesis testing in many situations, for AD on high volume data this unprincipled manner of setting the threshold is inadequate. Testing these approaches, we find that the p-value thresholds .02 and .003 each produce far more than one alert per minute on average (top two rows of Figure 3). The clear conclusion is that the operator will be overwhelmed by alerts with these thresholds.
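The 3σ figure cited above can be checked directly; `math.erfc` gives the two-sided tail mass of a standard normal:

```python
import math

# Two-sided tail mass beyond 3 standard deviations of a standard normal:
# P(|Z| > 3) = erfc(3 / sqrt(2)) ≈ .0027, the "3-sigma" p-value,
# commonly rounded up to the rule-of-thumb threshold .003.
three_sigma_pv = math.erfc(3 / math.sqrt(2))
```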

We contrast this with the a priori analysis furnished by our theorem. Suppose first that our operators can realistically consider 1 alert per minute on average, and that they know m = 1,565,596 scores will be produced over the 337 minutes. With these two figures, we use the theorem to compute the static p-value threshold as follows: τ = (337 min × 1 alert/min) / 1,565,596 scores ≈ 2.152e-4. Hence, we set τ = 2.152e-4. This simple calculation shows that the ad-hoc p-value thresholds of .003 and .02 are one and two orders of magnitude too large, respectively. Moreover, testing the fixed threshold 2.152e-4 yields fewer than 1 alert per minute on average. See the bottom row of Figure 3. This shows the efficacy of the alert rate theorem even for fixed thresholds on variable speed data.
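The calculation amounts to one line; the helper name below is our own:

```python
def static_threshold(alerts_per_min, minutes, total_scores):
    # By the alert-rate theorem, the expected number of alerts is at most
    # threshold * total_scores; solving for the threshold that keeps the
    # total at alerts_per_min * minutes gives:
    return alerts_per_min * minutes / total_scores

tau = static_threshold(1, 337, 1_565_596)  # ≈ 2.152e-4
```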

Next, we remove the assumption that the operator knows the number of events (m), and test the dynamically changing threshold with the data rate recomputed each minute. To do so, we again let the user-defined bound on the average number of alerts per minute be A = 1 (following notation of the last section). Each minute we compute the moving average of the data rate, n̄(t), using the previous minute of data and set τ(t) = A/n̄(t); i.e., n̄ is defined as in Section II-D with a window of 1 minute. See Figure 4 depicting n̄(t). Consulting Figure 3, we see the adaptive threshold yields about 0-4 alerts per minute except for the one large spike induced by the exceptionally anomalous attack activity. On average, the adaptive threshold produces 0.43 alerts per minute, slightly less than the fixed but informed threshold. Finally, consulting the bottom-row plots of Figure 3 shows that the distribution of alerts is more spread out with the adaptive threshold. Overall, the streaming alert rate regulation algorithm is effective in regulating the nearly 2,500 adaptive detectors with no a priori information.
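A minimal sketch of this per-minute recomputation, assuming a simple moving-average window (names are illustrative):

```python
from collections import deque

class AdaptiveThreshold:
    """Maintains tau(t) = A / n_bar(t), where A is the user-given
    alert-rate bound and n_bar(t) is the moving average of scores per
    minute over a fixed window (1 minute in our experiments)."""

    def __init__(self, alerts_per_minute=1.0, window_minutes=1):
        self.bound = alerts_per_minute
        self.window = deque(maxlen=window_minutes)  # per-minute score counts

    def update(self, scores_last_minute):
        self.window.append(scores_last_minute)
        n_bar = sum(self.window) / len(self.window)
        return self.bound / n_bar  # current threshold tau(t)

thr = AdaptiveThreshold(alerts_per_minute=1.0, window_minutes=1)
tau = thr.update(4000)  # ~4,000 scores in the last minute -> tau = 2.5e-4
```

When the data rate doubles, the threshold halves, keeping the expected alert rate fixed at A.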

Thresh. TPR FPR PPV
.003 .0763 .0015 .7262
2.152e-4 .0040 3.48e-5 .8598
Adaptive .0021 5.33e-5 .3194

Lastly, we present the accuracy metrics for each threshold in the wrapped table. As expected, both TPR and FPR drop when using the adaptive threshold as a result of an overall decrease in alerts. Precision (PPV) is significantly lower in the adaptive case. To explain this, consult Figure 4 to see that the data rate nearly doubles on average during the attack times. This is an artifact of the creation of the data set, which combined the non-attack Skaion files with the attack Skaion files. With the adaptive threshold, the number of alerts will roughly halve during the times when positive examples are included; hence, the drop in precision is an artifact of the simulation. In this specific application the adaptive threshold still provides nearly 50 alerts at the onset of the attack. More generally, the effect of our threshold algorithm on precision will depend on the distribution of anomaly scores over the attack events.

III-B GraphPrints Data & Detection System

We now present experiments applying the alert rate developments to the data and network-level anomaly detector (GraphPrints) from the publication of Harshaw et al. [8], which was shared with us by the authors. In that publication, the authors chose the tightest threshold that still detects all known positives in the data. We will show how our analysis can inform the understanding of the AD model.

Their work used 175 minutes of network flow data collected from a small office building at our organization, also using the ARGUS flow sensor. It included flows from approximately 50 researchers using 642 internal IPs plus another 2,500 internal, reserved IPs (10.*.*.* addresses), and comprised 1,725,150 flows. The data contained anomalous BitTorrent traffic and occasional IP scanning traffic.

The AD method proposed by Harshaw et al., called GraphPrints, uses graph analytics and robust statistics to identify anomalies in the local topology of network traffic. The method proceeds in three steps.

Figure 5: GraphPrints data's actual and expected number of alerts for small p-values. Because a Gaussian distribution was used, the corollaries indicate the two curves should be approximately equal. The extreme disparity in actual vs. expected alerts for small p-values indicates that the data is sampled from a distribution with much thicker tails than the Gaussian used for the AD model. This results in an inability to regulate the number of alerts, indicating problems with their model.

First, a graph is created for each 31-second interval of network traffic (with one second overlap) by representing IPs as nodes and adding a directed edge from source to destination IP nodes if at least one such flow occurred. Moreover, edges are given a binary color indicating if at least one private port occurred in the flow. Second, the graph is converted to a graphlet count vector, where each component of the vector is the count of an observed graphlet; graphlets are small, node-induced subgraphs. Conceptually, each graphlet can be thought of as a building-block graph, and the count vector gives the number of each such small graph observed in the network traffic graph. (See Pržulj et al. [35] for pioneering work on graphlets.) Third, AD is performed by fitting a multivariate Gaussian to the history of observed vectors, and then, given a new vector, alerting if its p-value is below a threshold. (We note that Harshaw et al. detected events with sufficiently high Mahalanobis distance. It is easy to show that Mahalanobis distance is an anomaly score that respects the Gaussian distribution in the sense of Defn. II.2 by proving pv(x) = 1 − F(d(x)²), where pv denotes the p-value of the n-variate normal distribution, d the Mahalanobis distance, and F the cumulative distribution function of the χ² random variable with n degrees of freedom.) Harshaw et al. implemented this method as a streaming detection algorithm, initially fitting the Gaussian to the first 150 (of 350) data points, and iteratively scoring, then refitting the Gaussian to each of the subsequent 200 events. In order to prevent unknown attacks or other anomalies in the data from affecting their model, Harshaw et al. used the Minimum Covariance Determinant (MCD) algorithm, which isolates the most anomalous fraction of the data and fits the Gaussian to the remaining points. (See Rousseeuw [36] for algorithmic details.)
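To illustrate the footnoted relation between Mahalanobis distance and Gaussian p-values, consider the bivariate case, where the χ² CDF with 2 degrees of freedom has the closed form 1 − e^(−t/2), so pv(x) = exp(−d(x)²/2). The simulation below (our own sketch, not the authors' code) checks that data drawn from the model alert at rate ≈ τ, as the corollaries predict:

```python
import math
import random

random.seed(7)

def p_value_2d(x, y):
    # Standard bivariate normal: Mahalanobis distance squared is x^2 + y^2,
    # and with 2 degrees of freedom F_chi2(t) = 1 - exp(-t/2), so
    # pv = 1 - F_chi2(d^2) = exp(-d^2 / 2).
    return math.exp(-(x * x + y * y) / 2)

# If events are sampled from the model's distribution, P(pv < tau) ≈ tau,
# i.e., the alert-rate theorem holds with (near) equality.
tau, n = 0.01, 200_000
hits = sum(p_value_2d(random.gauss(0, 1), random.gauss(0, 1)) < tau
           for _ in range(n))
rate = hits / n  # empirically close to tau
```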

III-C GraphPrints Data Results

First note that because the AD is based on low p-values of a Gaussian distribution, which has no plateaus, Corollaries II.5, II.6, and II.7 all independently imply that equality holds in Theorem II.4. Operationally, this means that users can specify the expected number of alerts (not just bound them), provided events are sampled from the model's distribution. Testing this on a simple example shows alert rate regulation fails; for example, the p-value threshold .01 corresponds to an expected two alerts over the 200 scored events, but produces over 60! This indicates that the detection model is a bad fit to the data. Digging deeper, Figure 5 shows that for all small p-values the realized alerts far exceed the expected, violating the theorem. We can conclude that the data's distribution has much thicker tails than the Gaussian used for detection; in short, the detection model is not a good fit to the data. This is perhaps unsurprising recalling the use of MCD fitting, which effectively discarded the most outlying observations before computing the mean and covariance.
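This failure mode is easy to reproduce in a toy simulation (our own, with an arbitrary heavy-tailed mixture standing in for the real data): scoring thick-tailed observations against a Gaussian model yields far more alerts than the theorem's expectation:

```python
import math
import random

random.seed(0)

def gaussian_p_value(x):
    # Two-sided p-value under a standard normal model: P(|Z| >= |x|).
    return math.erfc(abs(x) / math.sqrt(2))

def heavy_tailed_sample():
    # Mostly N(0,1), with occasional wide N(0,5) draws (thicker tails).
    return random.gauss(0, 5) if random.random() < 0.05 else random.gauss(0, 1)

m, tau = 100_000, 0.001
expected = tau * m  # 100 alerts predicted if the model fits the data
actual = sum(gaussian_p_value(heavy_tailed_sample()) < tau for _ in range(m))
# actual far exceeds expected, flagging the Gaussian model as a poor fit
```

Comparing `actual` to `expected` across a range of small τ values is exactly the diagnostic plotted in Figure 5.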

The result above illustrates a tradeoff afforded by the mathematical framework—either accurate regulation of the alert rate is possible, or the bound/equality on the expected number of alerts is not obeyed but information on the fitness (or lack thereof) of the distribution is gained.

IV Conclusion

In this work we consider the problem of setting the threshold of multiple heterogeneous and/or streaming anomaly detectors. By assuming probability models of our data and defining anomalies as low p-value events, we prove theorems bounding the likelihood of an anomaly. Leveraging this mathematical foundation, we give and test algorithms for setting the threshold to limit the number of alerts of such a system. Our algorithmic developments rely on the underlying assumption that observations are sampled from the model distribution. As the theorems hold independently of the distribution, our threshold-setting method persists as models evolve online or for a heterogeneous collection of models (so long as the assumption holds). Using the Skaion synthetic network flow data, we implement an AD system of adaptive detectors that scores over 1.5M events in 5 hours, and show empirically how to set the threshold and regulate the number of alerts. The mathematical contrapositive of our main theorem operationally provides the user with an alternative: either alert rate regulation is possible, or the detector's model is a bad fit for the data. We demonstrate the use of this analytical insight by applying the threshold algorithm to the real network data of Harshaw et al. and showing that their data is sampled from a distribution with much thicker tails than their detection model's. In summary, our work provides a mathematical foundation and empirically verified method for configuring anomaly detector thresholds to accommodate the constraints of modern cyber security operations.

Acknowledgements

We thank J. Laska, V. Protopopescu, M. McClelland, L. Nichols, J. Gerber, and the reviewers whose comments helped polish this document. This material is based on research sponsored by the U.S. Department of Homeland Security (DHS) under Grant Award Number 2009-ST-061-CI0001, DHS VACCINE Center under Award 2009-ST-061-CI0003, and the Laboratory Directed Research and Development Program of Oak Ridge National Laboratory, managed by UT-Battelle, LLC, for the U.S. Department of Energy, contract DE-AC05-00OR22725. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the DHS. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. 25-0517-0143-002. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. The data used in this research and referenced in this paper was created by Skaion Corporation with funding from the Intelligence Advanced Research Projects Activity, via www.impactcybertrust.org.

References

  • [1] T. Ahmed et al., “Multivariate online anomaly detection using kernel recursive least squares,” in 26th INFOCOM.   IEEE, 2007, pp. 625–633.
  • [2] S. Axelsson, “The base-rate fallacy and the difficulty of intrusion detection,” ACM Trans. Inf. Syst. Secur., vol. 3, no. 3, pp. 186–205, 2000.
  • [3] M. Christodorescu and S. Rubin, “Can cooperative intrusion detectors challenge the base-rate fallacy?” in Malware Detection.   Springer, 2007, pp. 193–209.
  • [4] E. Ferragut et al., “Automatic construction of anomaly detectors from graphical models,” in CICS.   IEEE, 2011, pp. 9–16.
  • [5] ——, “A new, principled approach to anomaly detection,” in ICMLA, vol. 2.   IEEE, 2012, pp. 210–215.
  • [6] R. Fontugne et al., “Mawilab: Combining diverse anomaly detectors for automated anomaly labeling and performance benchmarking,” ser. Co-NEXT.   ACM, 2010.
  • [7] S. Garcia et al., “An empirical comparison of botnet detection methods,” Comp. & Sec., vol. 45, 2014.
  • [8] C. Harshaw et al., “Graphprints: Towards a graph analytic method for network anomaly detection,” in 11th CISRC.   ACM, 2016, pp. 15–19.
  • [9] A. Lakhina et al., “Diagnosing network-wide traffic anomalies,” SIGCOMM, vol. 34, no. 4, pp. 219–230, Aug. 2004.
  • [10] M. Moore et al., “Modeling inter-signal arrival times for accurate detection of can bus signal injection attacks,” in 12th CISRC.   ACM, 2017.
  • [11] T. Pevnỳ et al., “Identifying suspicious users in corporate networks,” in Proc. Info. Forens. Sec., 2012.
  • [12] M. Rehak et al., “Adaptive multiagent system for network traffic monitoring,” Intel. Sys., vol. 24, 2009.
  • [13] K. Scarfone and P. Mell, “Guide to intrusion detection and prevention systems,” NIST, vol. 800, 2007.
  • [14] J. Sexton et al., “Attack chain detection,” J. SADM, vol. 8, no. 5-6, pp. 353–363, 2015.
  • [15] G. Tandon and P. Chan, “Tracking user mobility to detect suspicious behavior.” in SDM.   SIAM, 2009, pp. 871–882.
  • [16] A. Thomas, “Rapid: Reputation based approach for improving intrusion detection effectiveness,” in IAS.   IEEE, 2010, pp. 118–124.
  • [17] C. Joslyn et al., “Discrete mathematical approaches to graph-based traffic analysis,” in ECSaR, 2014.
  • [18] H. Kriegel et al., “Interpreting and unifying outlier scores,” in SDM.   SIAM, 2011, pp. 13–24.
  • [19] E. Schubert et al., “On evaluation of outlier rankings and outlier scores,” in SDM, 2012, pp. 1047–1058.
  • [20] R. Bridges et al., “Multi-level anomaly detection on time-varying graph data,” in ASONAM’15.   New York, NY, USA: ACM, 2015, pp. 579–583.
  • [21] ——, “A multi-level anomaly detection algorithm for time-varying graph data with interactive visualization,” Soc. Netw. Anal. & Mining, vol. 6, no. 1, p. 99, 2016.
  • [22] E. Ferragut et al., “Detection of anomalous events,” Jun. 7 2016, US Patent 9,361,463.
  • [23] ——, “Real-time detection and classification of anomalous events in streaming data,” Apr. 19 2016, US Patent 9,319,421.
  • [24] A. Siffer et al., “Anomaly detection in streams with extreme value theory,” in 23rd ACM SIGKDD, 2017.
  • [25] K. Huffer and J. Reed, “Situational awareness of network system roles (SANSR),” in 12th CISRC.   ACM, 2017.
  • [26] R. Bhaumik et al., Securing collaborative filtering against malicious attacks through anomaly detection.   AAAI, 2006, vol. WS-06-10, pp. 50–59.
  • [27] J. Cucurull et al., Anomaly Detection and Mitigation for Disaster Area Networks.   Berlin, Heidelberg: Springer Berlin Heidelberg, 2010, pp. 339–359.
  • [28] N. Ye et al., “Multivariate statistical analysis of audit trails for host-based intrusion detection,” Trans. Comp., vol. 51, no. 7, pp. 810–820, 2002.
  • [29] N. Ye and Q. Chen, “An anomaly detection technique based on a chi-square statistic for detecting intrusions into information systems,” Qual. Reli. Eng. Int., vol. 17, no. 2, pp. 105–112, 2001.
  • [30] R. Patera, “Space event detection method,” J. Space. and Rock., vol. 45, no. 3, pp. 554–559, 2008.
  • [31] J. Buzen and A. Shum, “MASF - multivariate adaptive statistical filtering.” in Int. CMG Conference.   Comp. Meas. Gr., 1995, pp. 1–10.
  • [32] R. Campello et al., “Hierarchical density estimates for data clustering, visualization, and outlier detection,” TKDD, vol. 10, no. 1, p. 5, 2015.
  • [33] A. Lazarevic et al., A Comparative Study of Anomaly Detection Schemes in Network Intrusion Detection.   SIAM, 2003, pp. 25–36.
  • [34] M. Reddy et al., “Probabilistic detection methods for acoustic surveillance using audio histograms,” Circ., Sys., Sig. Proc., vol. 34, no. 6, pp. 1977–1992, 2015.
  • [35] N. Pržulj et al., “Modeling interactome: scale-free or geometric?” Bioinform., vol. 20, no. 18, pp. 3508–3515, 2004.
  • [36] P. Rousseeuw and K. Driessen, “A fast algorithm for the minimum covariance determinant estimator,” Technometrics, vol. 41, no. 3, pp. 212–223, 1999.