Discovering Signals from Web Sources to Predict Cyber Attacks

06/08/2018 · by Palash Goyal, et al., USC Information Sciences Institute

Cyber attacks are growing in frequency and severity. Over the past year alone we have witnessed massive data breaches that stole the personal information of millions of people and wide-scale ransomware attacks that paralyzed critical infrastructure in several countries. Combating the rising cyber threat calls for a multi-pronged strategy, which includes predicting when these attacks will occur. The intuition driving our approach is this: during the planning and preparation stages, hackers leave digital traces of their activities on both the surface web and the dark web, in the form of discussions on platforms like hacker forums, social media, blogs and the like. These data provide predictive signals that allow us to anticipate cyber attacks. In this paper, we describe machine learning techniques based on deep neural networks and autoregressive time series models that leverage external signals from publicly available Web sources to forecast cyber attacks. Evaluating our framework against ground truth data on real-world forecasting tasks shows that the top signals yield a significant lift in F1 score over baseline models. Our results suggest that, when deployed, our system will be able to provide an effective line of defense against various types of targeted cyber attacks.




1 Introduction

In today’s interconnected world, all types of private and proprietary information—from personal health records and communications, to government records, bank information, and intellectual property—are accessible over the Internet. While the benefits of nearly ubiquitous, on-demand access to information are significant, equally significant are the risks that such access poses to individuals and organizations by exposing them to cyber attacks. The risks posed by cyber threats include financial losses and political instability, as demonstrated by high-profile attacks such as the massive Equifax and Yahoo! data breaches and the WannaCry ransomware, which paralyzed critical infrastructure worldwide, including hospitals in the US and UK. For society to continue enjoying the benefits of an open, worldwide Internet, it is critical that we tame the rapidly growing cyber threats posed by a variety of state and non-state actors.

One approach to combating cyber threats is to develop technologies that anticipate them before an actual cyber attack occurs. The intuition behind this forecasting approach is the following. Cyber attacks do not occur in a vacuum. To conduct a cyber attack, hackers first have to choose a target, identify the attack surface (i.e., vulnerabilities in the target’s software and hardware infrastructure), acquire the necessary exploits, malware and expertise to use them, and potentially recruit other participants. Other actors—system administrators, security analysts, and even victims—may discuss vulnerabilities or coordinate a response to exploits. These activities are often conducted online, leaving a variety of digital traces that can be mined to extract signals of pending attacks well before suspicious activity is noted on the target system.

Identifying useful signals of impending cyber attacks poses several research challenges. First, while some of the data relating to activities of cyber actors is openly available, malicious actors often obfuscate their actions using anonymized and encrypted Internet protocols. Second, the behavioral processes generating activities of interest are likely to be weak, sparse, and transient, posing significant challenges to picking them out from among massive quantities of entirely innocuous activity. Finally, translating the signals to generate a warning about a cyber attack presents yet another challenge.

Under the IARPA-funded CAUSE program, USC Information Sciences Institute has developed an end-to-end prototype, called EFFECT, to forecast emerging cyber threats. This paper describes two machine learning methods for time series prediction that are used by EFFECT to forecast cyber attacks. The methods take as input historical data to learn a model of cyber attacks. These models capture patterns present in historical data that help forecast new cyber attacks. We show that we can improve the predictions of these baseline models by leveraging signals from external Web data sources.

To construct external signals, EFFECT harvests data from a variety of sources, including vulnerability databases, malicious email and malware trackers, but also from sources not conventionally used in security applications, such as social media, blogs and darkweb forums. From these data sources we extract a variety of time series, each representing the number of daily occurrences of cyber security-related terms. The time series are used as external signals in the forecasting task.

To train the forecasting models, and to evaluate their predictions, we use ground truth data about cyber attacks provided through the CAUSE program by two companies. The ground truth data comprise attacks intercepted at both organizations, which correspond to three types of events: malware installed on a user’s computer (endpoint-malware), malicious email (malicious-email) and a malicious destination a user navigates to (malicious-destination).

Specifically, our paper makes the following contributions:

  • Describes and evaluates a time series forecasting method based on autoregressive models that leverage external time series in the prediction task

  • Describes and evaluates a time series forecasting method based on a neural network.

  • Identifies signals from online data sources that consistently improve predictions of cyber attacks in the ground truth data.

The rest of the paper is organized as follows. We first describe the Web data sources used by EFFECT. Next, we describe in detail the two machine learning algorithms used in the forecasting task, as well as how they are trained. Finally, we evaluate prediction results and discuss the significance of predictive signals identified by the system.

2 Data

In this section, we briefly describe the Web data sources used by EFFECT, how signals are extracted from these sources, and the ground truth data used to train and validate forecasting models.

2.1 Web Data Sources

2.1.1 Dark/deep web

The deep and dark web (D2Web) comprises non-indexed sites on the open Internet that are accessed using anonymization protocols (most notably Tor). These sites host discussion forums and marketplaces, which are often used for malicious or illicit purposes, for example, to buy and sell drugs, guns, hacked data or exploits. It has been shown that activity on these websites can signal potential cyber attacks [1, 2, 3, 4]. The infrastructure used to collect the data is described in [5, 6]. Close to 300 different D2Web sites were used for this study.

2.1.2 Twitter

It has been shown that discussions on social media can be used as signals for detecting cyber threats [4, 7]. Therefore we collected tweets that were either posted by security experts or contained cyber security-related keywords, using a manually compiled list of almost 250 experts and 1,500 multilingual security-related terms.

2.1.3 Blogs

Security blogs are posts written by expert analysts about current news and events in the cyber security domain. The EFFECT system crawls about 70 different websites to collect blog posts. Using security blogs as a data source for predicting cyber events is relatively new and was originally proposed in [3, 4].

2.1.4 Vulnerability Database

Software vulnerabilities are often exploited by malicious actors in cyber attacks [8, 9]. The National Vulnerability Database (NVD) is the largest publicly available repository of information about reported software vulnerabilities. We collected vulnerabilities for different software products to evaluate their role in predicting cyber attacks.

2.1.5 Honeypots

A honeypot is a security resource deployed with the goal of being probed and attacked. Traffic reaching honeypots may be malicious and can provide a window into hacker activities. We collected data from a network of ten honeypots deployed by the EFFECT team; specifically, the number of queries received daily by each honeypot serves as an external signal in the cyber forecasting task. The honeypots were deployed and data collection started in October 2017.

2.2 External Signals

Data sources that already provide the number of daily events of a specific type were used as is. To use the textual data collected by EFFECT for the prediction task, we compiled a list of 50 important keywords in the cyber security domain, including terms such as 0day, exploits, vpn, and vulnerabilities. The full list of terms can be found in the Supplementary Information (SI). We created external signals from the data sources by extracting the time series of the number of daily occurrences of each cyber term, giving us 50 external signals for each of the D2Web, social media and blogs domains.
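As an illustration, the extraction of one such keyword signal can be sketched in a few lines of pandas; the sample posts and the keyword_signal helper below are hypothetical stand-ins for the harvested data and the EFFECT pipeline:

```python
import pandas as pd

# Hypothetical sample of harvested posts; in practice these come from
# blog, Twitter, and D2Web crawls.
posts = pd.DataFrame({
    "date": pd.to_datetime(["2017-07-01", "2017-07-01", "2017-07-03"]),
    "text": [
        "new 0day exploit for sale",
        "patch your vpn now",
        "another 0day dropped today",
    ],
})

def keyword_signal(posts, keyword, start, end):
    """Daily count of posts mentioning `keyword`, zero-filled over the range."""
    hits = posts[posts["text"].str.contains(keyword, case=False)]
    daily = hits.groupby("date").size()
    idx = pd.date_range(start, end, freq="D")
    return daily.reindex(idx, fill_value=0)

signal = keyword_signal(posts, "0day", "2017-07-01", "2017-07-04")
print(signal.tolist())  # [1, 0, 1, 0]
```

Zero-filling days with no mentions matters: the downstream models expect a regularly sampled daily series, not just the days on which the keyword happened to appear.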

2.3 Ground Truth

We use as ground truth (GT) data about three types of cyber attacks provided by two organizations to the CAUSE program (referred to as OrgA and OrgB). The data contain the occurrence times of three types of cyber attacks:

  • An endpoint-malware attack is recorded when anti-virus software used by the organization finds malware installed on an end-user’s system.

  • A malicious-email attack is the receipt of an email that contains a malicious email attachment and/or a link to a known malicious destination.

  • A malicious-destination attack is recorded when an end-user clicks on a malicious URL.

These data cover a period from July 2017 to January 2018.

3 Methods

In this section, we describe the time series modeling approaches we use for the problem of cyber attack prediction. We give an overview of classical approaches based on autoregressive models and more recent approaches which use neural networks for prediction. Next, we describe the training and predicting methods for the two machine learning approaches used in forecasting cyber attacks.

3.1 Forecasting Task

Fig. 1: Illustration of cyber attack forecasting. We assume that ground truth data representing historical cyber attacks is provided for training prediction models, along with external signals. The predictions are made for events occurring during the future time period.

Figure 1 illustrates the forecasting task. Given a time series describing observed events in the ground truth data, our goal is to use this information, plus information from external signals, to predict new events occurring during some future forecasting time span. The prediction model is trained on the historical GT data and external signals. The illustration highlights the common case where up-to-date historical GT data may not be available; we assume, however, that the most recent external signals are always available.

3.2 Forecasting Models

3.2.1 Autoregressive Models

We apply the popular ARIMA and ARIMAX models to the forecasting task. ARIMA stands for autoregressive integrated moving average. The key idea behind ARIMA is that the current event count depends on past counts and forecast errors. Formally, ARIMA$(p,d,q)$ defines an autoregressive model with $p$ autoregressive lags, $d$ difference operations, and $q$ moving average lags (see [10]). Given the observed series of events $y_1, \ldots, y_t$, ARIMA$(p,d,q)$ applies $d$ difference operations to transform it to a stationary series $y'_1, \ldots, y'_t$. The predicted value at time point $t$ can then be expressed in terms of past observed values and forecasting errors as follows:

$$y'_t = c + \sum_{i=1}^{p} \phi_i\, y'_{t-i} + \sum_{j=1}^{q} \theta_j\, \epsilon_{t-j} + \epsilon_t$$

Here $c$ is a constant, $\phi_i$ is the autoregressive (AR) coefficient at lag $i$, $\theta_j$ is the moving average (MA) coefficient at lag $j$, $\epsilon_{t-j}$ is the forecast error at lag $j$, and $\epsilon_t$ is assumed to be white noise ($\epsilon_t \sim \mathcal{N}(0, \sigma^2)$). The AR model is essentially an ARIMA model without moving average terms.

ARIMAX (Autoregressive Integrated Moving Average with Exogenous variables) is an autoregressive model that leverages (optional) external signals. In this model, the observation at a particular time point depends on immediate past observations, past forecast errors, and external variables. ARIMAX$(p,d,q)$ is defined with the same three autoregressive order terms as ARIMA. Given the observed series of events $y_1, \ldots, y_t$ and optional external features $x_{t,1}, \ldots, x_{t,m}$, the model is defined as follows:

$$y'_t = c + \sum_{i=1}^{p} \phi_i\, y'_{t-i} + \sum_{j=1}^{q} \theta_j\, \epsilon_{t-j} + \sum_{k=1}^{m} \beta_k\, x_{t,k} + \epsilon_t$$

Here $y'_t$ is the stationary series after $d$ difference operations, $c$ is a constant, $\phi_i$ is the autoregressive (AR) coefficient at lag $i$, $\theta_j$ is the moving average (MA) coefficient at lag $j$, $\epsilon_{t-j}$ is the forecast error at lag $j$, $\beta_k$ is the coefficient for feature $x_k$, and $\epsilon_t$ is assumed to be white noise.


We use maximum likelihood estimation for learning the parameters; more specifically, the parameters are optimized with the LBFGS method [11]. These models assume that $(p,d,q)$ are known and that the series is weakly stationary. To select the values of $(p,d,q)$, we employ a grid search over candidate values and select the combination with the minimum AIC score.


For ARIMA, we use the learned parameters to estimate the event counts for the period with missing GT and then use these counts with learned parameters to predict next month’s/week’s cyber attack counts. For ARIMAX, similar to ARIMA, we first estimate the event counts for the period with missing GT using learned parameters, past GT, and external sources. The model then predicts next month’s/week’s cyber attack count with the estimated count for missing GT period and the external sources.

3.2.2 Neural Network Models

Neural network based models have been widely used for time series analysis, dating back at least to [12]. The autoregressive models express the predicted event count as a linear function of external signals and historical counts. Neural network based models, in contrast, can capture non-linearity by using multiple layers of non-linear activation functions. The caveat is that such models typically require a large amount of training data to accurately estimate the model parameters.

Recently, many variants of recurrent neural network units have been proposed, including Long Short Term Memory (LSTM) [13], Phased LSTM [14] and Gated Recurrent Unit (GRU) [15]. These units are composed of various gates, such as input, forget and output gates; the variants differ in the number and type of gates and in the connections between them.

Fig. 2: Long Short-Term Memory architecture.

An LSTM unit is composed of the above gates and a cell state; a unit is displayed in Figure 2. The outputs of these gates are calculated as follows:

$$i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)$$
$$f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)$$
$$o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)$$

where the $W$, $U$ and $b$ are weight matrices and bias vectors, respectively. The cell state is updated using:

$$c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_c x_t + U_c h_{t-1} + b_c)$$
$$h_t = o_t \odot \tanh(c_t)$$

where $c_t$ is the cell state. The parameters are learned using backpropagation and gradient descent.

PhasedLSTM was recently developed to process irregular and sparse time series. The model extends LSTM with a new time gate that controls when the cell state is updated. The gate oscillates between open and closed: in the open state the LSTM cell state is updated, whereas in the closed state the cell state is propagated unchanged from the previous time step. This gate introduces two parameters: $r_{on}$, which controls the ratio of the duration of the open phase to the full period, and $\tau$, the period of oscillation.

A GRU unit uses gates similar to those of an LSTM unit. However, it merges the forget and input gates into a single update gate, which is calculated as follows:

$$z_t = \sigma(W_z x_t + U_z h_{t-1} + b_z)$$

It also merges the hidden state and output, which is computed as follows:

$$h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tanh(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h)$$

where $r_t$ is the reset gate, computed similarly to the update gate as $r_t = \sigma(W_r x_t + U_r h_{t-1} + b_r)$.
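To make the gate equations concrete, here is a minimal NumPy sketch of a single GRU forward step following the update/reset-gate formulas described above; the dimensions and random weights are illustrative only (a real model would learn them by backpropagation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, params):
    """One GRU step: update gate z, reset gate r, candidate state h_tilde."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(Wz @ x + Uz @ h_prev + bz)               # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev + br)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev) + bh)   # candidate state
    return (1 - z) * h_prev + z * h_tilde                # merged hidden state/output

rng = np.random.default_rng(0)
dim_x, dim_h = 3, 4
# Random illustrative weights for the z, r and candidate-state blocks.
params = [rng.standard_normal(s) * 0.1
          for s in [(dim_h, dim_x), (dim_h, dim_h), (dim_h,)] * 3]
h = np.zeros(dim_h)
for x in rng.standard_normal((5, dim_x)):  # run a short input sequence
    h = gru_step(x, h, params)
print(h.shape)
```

Note how the update gate $z_t$ interpolates between keeping the previous state and adopting the candidate state, which is what lets the GRU cope with sparse signals.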


We use Adaptive Moment Estimation (Adam) [16] to learn the neural network parameters. We select the hyperparameters, including the network architecture and learning rate, using cross-validation on a held-out data set. For the monthly and weekly analyses, we use historical data up to the start of the previous month and previous week, respectively.

We use the learned parameters to predict the next month’s/week’s cyber attack counts. All three methods were used; however, we present results for GRUs, as this method is computationally more efficient while offering similar or better performance.

3.2.3 Baseline Models

We compare our proposed methods against a baseline model (referred to as baseline_arima in our figures), which predicts the number of future events by sampling from a Poisson distribution whose rate $\lambda$ is the average number of past events over a time window $w$. Formally,

$$\hat{y}_{t+1} \sim \mathrm{Poisson}(\lambda), \qquad \lambda = \frac{1}{w} \sum_{i=t-w+1}^{t} y_i$$

where $w$ is selected as the number of time units in the training period.
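A minimal sketch of this baseline, assuming `history` is a list of daily attack counts (the function name is ours):

```python
import numpy as np

def poisson_baseline(history, window, horizon, rng=None):
    """Sample future daily counts from Poisson(rate), where rate is the
    mean daily count over the last `window` days of history."""
    rng = rng or np.random.default_rng(0)
    rate = np.mean(history[-window:])
    return rng.poisson(rate, size=horizon)

history = [4, 6, 5, 7, 3, 5, 6]           # hypothetical daily counts
preds = poisson_baseline(history, window=7, horizon=30)
print(preds.shape)
```

Despite its simplicity, this rate-matching baseline is hard to beat when attack counts are roughly stationary, which is why lift over it is a meaningful measure of a signal's value.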

3.3 Evaluation Metrics

We use two different measures for quantitative evaluation of predictions made by the models.

First, we use program-wide metrics to evaluate the accuracy of predictions. The metrics work by matching predictions against the ground truth data. To that end, we convert predicted event counts for each day into warnings of attacks predicted for that day. We then match the warnings against ground truth data using the Hungarian matching algorithm [17]. The algorithm compares predicted warnings $w_i$ to ground truth events $e_j$ and identifies mutually exclusive pairs $(w_i, e_j)$ such that the sum of similarities $s(w_i, e_j)$ is maximized. If $w_i$ occurs within some time window of event $e_j$, then $s(w_i, e_j)$ equals the quality score; otherwise $s(w_i, e_j) = 0$. The window around the actual events varies by event type: 0.875 days for endpoint-malware, 1.625 days for malicious-destination, and 1.375 days for malicious-email. Using the matching algorithm, we can consistently quantify the precision and recall of the predictions, and calculate the resulting F1 score, the harmonic mean of precision and recall.

  • Precision $= \dfrac{TP}{TP + FP}$

  • Recall $= \dfrac{TP}{TP + FN}$

  • F1 $= \dfrac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}$

Here, $TP$, $FP$ and $FN$ denote the true positives, false positives and false negatives, respectively.
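The matching step can be sketched with SciPy's linear_sum_assignment, which solves the Hungarian assignment problem; this simplified version uses a 0/1 similarity (a constant quality score) and hypothetical warning and event times:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_warnings(warnings, events, window):
    """Match predicted warning times to ground-truth event times, maximising
    total similarity; pairs farther apart than `window` days score zero."""
    w = np.asarray(warnings, float)
    e = np.asarray(events, float)
    sim = (np.abs(w[:, None] - e[None, :]) <= window).astype(float)
    rows, cols = linear_sum_assignment(-sim)  # negate to maximise similarity
    matched = [(i, j) for i, j in zip(rows, cols) if sim[i, j] > 0]
    tp = len(matched)
    fp = len(w) - tp          # unmatched warnings
    fn = len(e) - tp          # unmatched events
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Warnings on days 1, 2, 10 vs events on days 1.5, 9.8 with a 0.875-day window
p, r, f1 = match_warnings([1.0, 2.0, 10.0], [1.5, 9.8], window=0.875)
print(round(p, 3), round(r, 3), round(f1, 3))
```

In this example two of the three warnings fall within the window of an event, giving precision 2/3, recall 1.0 and F1 0.8; the real program metrics weight matched pairs by a graded quality score rather than 0/1.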

To measure the forecasting error of the model, we use three measures: (a) mean absolute error (MAE), (b) root mean squared error (RMSE), and (c) mean absolute scaled error (MASE) [18]. These measures are defined in terms of the forecasting error $e_t = y_t - \hat{y}_t$ at daily time steps $t = 1, \ldots, n$, where $y_t$ and $\hat{y}_t$ are the true and predicted values, respectively.

  • Mean Absolute Error: $\mathrm{MAE} = \frac{1}{n} \sum_{t=1}^{n} |e_t|$

  • Root Mean Squared Error: $\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{t=1}^{n} e_t^2}$

  • Mean Absolute Scaled Error: $\mathrm{MASE} = \dfrac{\frac{1}{n} \sum_{t=1}^{n} |e_t|}{\frac{1}{n-1} \sum_{t=2}^{n} |y_t - y_{t-1}|}$
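These three measures can be computed directly from their definitions; the small example below uses made-up values (MASE scales the MAE by the in-sample MAE of a naive one-step forecast on the training series):

```python
import numpy as np

def forecast_errors(y_true, y_pred, y_train):
    """Return (MAE, RMSE, MASE) for a forecast against true values."""
    e = np.asarray(y_true, float) - np.asarray(y_pred, float)
    mae = np.mean(np.abs(e))
    rmse = np.sqrt(np.mean(e ** 2))
    # Naive one-step forecast on the training series predicts y_{t-1} for y_t.
    naive_mae = np.mean(np.abs(np.diff(np.asarray(y_train, float))))
    mase = mae / naive_mae
    return mae, rmse, mase

y_train = [3, 5, 4, 6, 5]                       # hypothetical training counts
mae, rmse, mase = forecast_errors([4, 6, 5], [5, 5, 5], y_train)
print(round(mae, 3), round(rmse, 3), round(mase, 3))
```

A MASE below 1 means the forecast beats the naive persistence forecast on average, which makes it comparable across series with different scales.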

4 Results

4.1 Baseline Monthly vs Weekly Analysis

Fig. 3: Baseline Monthly vs Weekly Predictions

In order to understand how the granularity of the prediction window affects performance, we test two prediction windows: (i) month, and (ii) week. For monthly prediction, all historical GT data and external signals through the previous month are used to make predictions for the next month. For weekly prediction, GT data up to the start of the previous month and external signals through the previous week are used to make the next week’s predictions. We use this framework because GT data is released on a monthly basis within the CAUSE program, while external signals are available continually.

Figure 3 shows the prediction performance of the baseline models, which use historical GT data only through the end of the previous month, as a function of time for the two target organizations and three event types. Information contained in the historical data allows the models to achieve decent prediction performance, except for the malicious-destination event type, which contains too few events for ARIMAX to learn from. In contrast, the GRU model is able to learn even from sparse data. With respect to the granularity of prediction, making predictions for a week should yield higher performance than monthly predictions. However, we observe either comparable performance, with no statistical difference between monthly and weekly, or decreased performance from monthly to weekly. This is in part due to the sparsity of the data: removing any weeks with zero events (and thus zero F1) raises the weekly score to be comparable to the monthly score in most cases. It is also in part due to the fact that when we train on weeks, we may experience something similar to Fig. 5, where strong performance in one or two weeks of a month gets washed out by weak performance during the rest of the month. Considering these aspects, monthly performance is used as the evaluation time frame in our analyses. We only consider monthly predictions for the rest of the paper; results for weekly performance can be found at the end of the document.

4.2 Finding Correlated Signals

Correlation analysis is done as a pre-processing step to pick out signals that may have predictive value. For each target time series, we compute the lagged cross-correlation with all other signals; the lagged signal is created by shifting the time series by a certain number of days, and we choose the lag with the highest correlation. Figure 4 shows the correlation of three data sources with ground truth data. For OrgA, blogs overall have the highest correlations, followed by D2Web signals and then Twitter. For OrgB, D2Web has the highest correlations, with blogs following and then Twitter. In both cases, Twitter is the lowest. OrgB endpoint-malware has the highest correlation with any of the external signals, and oracle, accounts, blackmail and malwares are the keywords with the highest average correlations. The same analysis was done for the other two data sources, vulnerabilities and honeypots; refer to the Appendix for their corresponding plots. Of these, vulnerabilities related to F5 BIG-IP and Oracle have high correlations with malicious email, especially for OrgB. Oracle vulnerabilities also have high correlations with endpoint malware, as do Mozilla and Novell.
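The lag selection described above can be sketched as follows; the synthetic signal is constructed to lead the target by exactly 3 days, so the search should recover that lag:

```python
import numpy as np

def best_lag_correlation(signal, target, max_lag=30):
    """Shift `signal` back by 0..max_lag days and keep the lag with the
    highest Pearson correlation against `target`."""
    best_lag, best_corr = 0, -np.inf
    for lag in range(max_lag + 1):
        if lag == 0:
            s, t = signal, target
        else:
            s, t = signal[:-lag], target[lag:]  # signal at day d vs target at day d+lag
        if len(s) < 2:
            break
        corr = np.corrcoef(s, t)[0, 1]
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag, best_corr

rng = np.random.default_rng(1)
target = rng.poisson(5, 100).astype(float)
# Synthetic external signal that leads the target by 3 days.
signal = np.concatenate([target[3:], rng.poisson(5, 3).astype(float)])
lag, corr = best_lag_correlation(signal, target)
print(lag, round(corr, 2))
```

The recovered lag is what is later used to align each external signal with the GT series before fitting the forecasting models.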

Fig. 4: Correlation of word count signals with ground truth

4.3 Identifying Predictive Signals

From the correlation analysis we have 285 signals we can use in our models. As these 285 signals span several data sources and are individually evaluated on several different prediction tasks, it is difficult to reason about any performance measure without summarizing it visually. We analyze the performance of each signal within a data source and then provide comparison across various data sources.

Before using external signals in prediction, we first align them using the correlation analysis with GT: we determine the lag at which maximum correlation occurs between a temporal feature and GT, and use that lag for alignment. For ARIMAX we set a maximum autoregressive lag of 7 days. In each section, we report the five signals with the highest average lift (defined as the ratio of the F1 score of the model trained with signal X to that of the baseline model) for each event-type-target combination. Summary results using forecasting error measures, specifically RMSE, are given in tables in the supplemental material.

Fig. 5: Difference in Weekly GRU F1 Performance between signal and baseline - OrgA Endpoint Malware

4.3.1 Blog Signals

Fig. 6: Monthly GRU F1 performance of blog signals.

Figure 6 shows improvements in prediction performance due to signals from blogs, evaluated using GRUs. The results using ARIMAX can be found in the supplemental material. For OrgA the top signals work best for endpoint-malware, and for OrgB the top signals work best for malicious-destination events. Additionally, OrgB’s malicious-destination events can be predicted better than any other event from either organization, with usb, blackmail and zeroday being the best keywords. For OrgA, endpoint-malware was the best-predicted event type, with phishing the top keyword. Malicious-email is very difficult to predict well from blogs, with neither organization achieving a lift above 2; however, vulnerability and ransomware were both in the top 3 signals for both organizations. Interestingly, the majority of the keywords that were best for OrgA were not best for OrgB; the only terms that carried over were ransomware, vulnerability and zeroday. The ground truth events and associated best signals thus differ for the two companies analyzed, as indicated by the correlation analysis above and Figure 6.

4.3.2 D2Web Signals

Fig. 7: Monthly GRU F1 performance of D2web signals.

Figure 7 highlights the results for key terms in D2Web posts using GRU; this performance is better than the ARIMAX predictions. As with blog signals, the predictive signals work best for malicious-destination, especially for OrgB. This may indicate that blogs and D2Web are very similar in their underlying signals. Another similarity to blogs is that key-term signals outperform the baseline for endpoint-malware more for OrgA than for OrgB. The keywords overlap more than for blogs, though still not by a significant amount; however, the top two keywords for malicious-destination for both organizations were account and hack.

4.3.3 Twitter Signals

Fig. 8: Monthly GRU F1 performance of Twitter signals.

Figure 8 illustrates the performance of Twitter keywords as external signals for cyber attack prediction using GRU. As with blogs and D2Web, malicious-destination events for OrgB achieve higher lift from the external signals than other event types, with endpoint-malware for OrgA a close second. The extremely low improvement over baseline for malicious-email suggests that keyword counts on Twitter do not provide information predictive of malicious email counts, matching what we observe for blogs and D2Web. For malicious-destination, the terms trojan, trojans, account and accounts are in the top 5 predictive signals for OrgB. A unique aspect of the Twitter signals is that performance for OrgA endpoint-malware and OrgB malicious-destination using GRUs shows more of a gradient than for blogs or D2Web: 0day has a significantly larger advantage over the second-best keyword, breach, for OrgA endpoint-malware.

4.3.4 Vulnerability Signals

Fig. 9: Monthly GRU F1 performance of vulnerability signals.

The predictive capacity of published software vulnerabilities is illustrated in Figure 9. The overall performance gain is similar to other sources for the GRU models. Here we observe that Red Hat vulnerabilities improve predictions of malicious-destination attacks for OrgB, which is counter to what we would expect, since both companies use Microsoft products. In general, tracking vulnerabilities can help identify the susceptibility of the organizations to attacks, but only for one event type per organization. Malicious emails continue to prove challenging to predict using such signals; in this case that is logical, as malicious emails do not require exploiting software vulnerabilities. As with other signals, ARIMAX does not leverage external signals well (see the corresponding ARIMAX figure in the Appendix), with a maximum improvement of about 200% over baseline. This is because the GRU handles sparsity in a signal far better than ARIMAX, and vulnerabilities are the sparsest source.

4.3.5 Honeypot Signals

Fig. 10: Monthly GRU F1 performance of honeypots signals.

Fig. 10 continues the common theme that malicious-destination events, particularly for OrgB, are best predicted using GRUs. This may be due to poor baseline performance on malicious-destination: any performance gain over the baseline does not have to be substantial to achieve a substantial lift. Additionally, the GRU model works better than ARIMAX, again due to the sparsity of the data. The honeypot signal has the least predictive power of all the sources considered: the best honeypot signal had a lift of around 11, whereas all other sources had best signals well above a lift of 15. This can be explained, however, by the fact that we did not start collecting honeypot data until the end of October 2017, missing key activity between July and October.

4.4 Selecting the Best Signals Overall

Fig. 11: Comparing monthly performance models trained on the best signal for each configuration against baseline.

In order to assess how well relative lift identifies predictive signals, we take the signal with the highest relative lift for each event-type-target configuration and compare its absolute F1 performance against the baseline. Fig. 11 illustrates that when we choose the best signal in this way, the resulting model on average outperforms the baseline in every configuration. In Table I we see that the signals with the highest absolute F1 scores come predominantly from vulnerabilities, with blogs and D2Web following in contributions. Additional plots of the best-performing monthly and weekly signals identified by both models can be found in the Supplementary Information (SI) file.

In Fig. 12 we observe substantial variance in the density of predictions made across the evaluation period, from GRU trained on d2web_zeroday making predictions each week, to ARIMAX trained on d2web_zeroday not having substantial information to make predictions past the first week. Interestingly, for ARIMAX malicious-email trained on OrgB, the baseline_arima model seems to perform best.

The consistency among the best-performing signals in monthly forecasts suggests the methods work well to identify useful signals (see SI for details and a comparison to weekly forecasts). For malicious-destination events, twitter_trojan, d2web_hack and d2web_account work well across both targets with both models. To forecast malware-type events, the best signals are twitter_0day, twitter_breach and twitter_vulnerability for OrgA, and twitter_windows7, twitter_cpe and twitter_ransomware for OrgB. These choices make sense, as CPE (Common Platform Enumeration) numbers identify vulnerable software. Interestingly, for malicious-email events, the vulnerabilities data source provides the best signals for OrgA, while for OrgB the best-performing signals are twitter_phishing, twitter_malware, and blogs_ransomware. Ransomware is a type of malware that is usually spread through email.

Fig. 12: Temporal performance of the best signals.
Model Event_Type Org Signal F1
GRU mal-dest. A twitter_trojan 48.00
GRU mal-dest. B d2web_account 50.50
GRU ep-malware A twitter_0day 49.66
GRU ep-malware B twitter_windows7 61.91
GRU mal-email A vuln._fedoraproj_fedora 65.30
GRU mal-email B twitter_phishing 43.12
ARIMAX mal-dest. A d2web_oracle 19.05
ARIMAX mal-dest. B blogs_oracle 34.69
ARIMAX ep-malware A vuln._graphicsmagick 38.46
ARIMAX ep-malware B baseline 62.22
ARIMAX mal-email A vuln._apple_os_x 51.34
ARIMAX mal-email B baseline 32.40
TABLE I: Best performing signals for each configuration.

5 Related Work

Due to their disruptive nature, predicting cyber attacks is an important research effort. Most research focuses on using network traffic for forecasting, as in [19, 20]; these methods feed network traffic or sensor data at different layers into forecasting models. We specifically avoid network data and base our predictions on open-source information. Other efforts include [21], which used only the National Vulnerability Database, with moderate success, and highlighted the difficulty of using public sources to build effective models. The main difference is that our work is based on actual cyber event ground truth as reported by the two target organizations. Closest to our research is Gandotra et al. [22], who outlined a number of cyber prediction efforts using statistical and algorithmic modeling. They highlight several significant challenges that we tried to address. The first is that open-source ground truth is often incomplete, must be compiled from multiple sources, and does not scale to real-world scenarios; we were able to obtain ground truth data from two companies, spanning three different attack vectors over a two-year time period. The additional challenges in [22] concern the volume, speed and heterogeneity of network data, which we avoid since we attempt to predict cyber events specifically with non-network data. They also present two modeling approaches, statistical and algorithmic; we use statistical models similar to the classical time series models they present, namely autoregressive integrated moving average models with historical data and external signals.

Developing a precise model of the dynamic behavior of a time series is a challenging problem and an essential one for the success of forecasting methods. Time-series analysis has been studied and applied extensively in many domains, such as finance [23], epidemiology [24, 25], geophysics [10], and sociology [26]. A popular strategy for analyzing time-series data is to use classical autoregressive models such as AR, ARMA, ARIMA, and ARIMAX [27, 10, 28], which are widely used in intrusion detection, DoS-attack detection, and network monitoring [29]. These models assume that the underlying data-generating process is linear, i.e., that the value at each time point is a linear combination of past values. Real-world time series, however, exhibit volatility and nonlinearity; one way to deal with volatility is to employ ARCH and GARCH, extensions of the classical autoregressive models [30].
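The linearity assumption behind these models can be made concrete: an ARX model (autoregressive with an exogenous input, the core of ARIMAX once differencing is applied) expresses the current value as a linear combination of past values plus an external signal, and its coefficients can be estimated by ordinary least squares. A minimal numpy sketch, with illustrative function and variable names not taken from the paper's implementation:

```python
import numpy as np

def fit_arx(y, x, p=1):
    """Fit y[t] = c + sum_i a_i * y[t-i] + b * x[t] by least squares.

    y : target time series (e.g., daily attack counts)
    x : exogenous signal aligned with y (e.g., forum mention counts)
    p : autoregressive order
    Returns [intercept, a_1, ..., a_p, b].
    """
    rows = []
    for t in range(p, len(y)):
        rows.append([1.0] + [y[t - i] for i in range(1, p + 1)] + [x[t]])
    A = np.array(rows)
    target = np.array(y[p:])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return coef

# Synthetic check: data generated from y[t] = 0.5*y[t-1] + 2*x[t] + noise
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.5 * y[t - 1] + 2.0 * x[t] + 0.1 * rng.normal()
coef = fit_arx(y, x, p=1)
print(coef)  # intercept near 0, AR coefficient near 0.5, exogenous weight near 2.0
```

Libraries such as statsmodels provide full ARIMA/ARIMAX estimators; the point of the sketch is only that the model is linear in its inputs, which is exactly the assumption that ARCH/GARCH and neural models relax.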

Neural network based models have been widely used for time-series prediction tasks such as weather forecasting [31], sentence completion [32], and oilfield production prediction [33]. The recent success of recurrent neural networks [34] on such tasks has produced several variants, including Long Short-Term Memory [13], Peephole-LSTM [35], Depth-Gated RNN [36], Clockwork RNN [37], the Gated Recurrent Unit (GRU) [15], and Phased LSTM [14]. The cyber security community has recently adopted neural network models: DarkEmbed [3] uses neural embeddings to predict the exploit likelihood of a vulnerability, and Filonov et al. [38] developed an RNN-based model for early detection of cyber attacks. Our model, in contrast, applies recurrent neural network models to external signals to identify predictive signals; we report results for GRUs only.
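For reference, a single GRU step [15] can be written down directly: an update gate z and a reset gate r control how much of the previous hidden state is kept versus replaced by a candidate state. A self-contained numpy sketch (biases omitted for brevity; weight names and dimensions are illustrative, and frameworks provide optimized versions of this cell):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h, params):
    """One GRU step: returns the new hidden state.

    x : input vector at the current time step
    h : previous hidden state
    params : dict of input weights W_* and recurrent weights U_*
    """
    z = sigmoid(params["W_z"] @ x + params["U_z"] @ h)               # update gate
    r = sigmoid(params["W_r"] @ x + params["U_r"] @ h)               # reset gate
    h_tilde = np.tanh(params["W_h"] @ x + params["U_h"] @ (r * h))   # candidate state
    return (1.0 - z) * h + z * h_tilde                               # gated interpolation

rng = np.random.default_rng(1)
dim_x, dim_h = 4, 8
params = {k: 0.1 * rng.normal(size=(dim_h, dim_x if k.startswith("W") else dim_h))
          for k in ["W_z", "U_z", "W_r", "U_r", "W_h", "U_h"]}

# Run a short input sequence through the cell
h = np.zeros(dim_h)
for _ in range(10):
    h = gru_step(rng.normal(size=dim_x), h, params)
print(h.shape)  # hidden state stays (8,) at every step
```

Because the new state is a convex combination of the old state and a tanh-bounded candidate, the hidden activations remain in (-1, 1), which helps stability over long sequences.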

6 Conclusions

In this paper, we tackled the challenging problem of predicting targeted cyber attacks. Cyber attacks do not emerge randomly; they are driven by many hidden factors, including motives such as financial gain, espionage, fun, or a grudge; the exploitation of new or known software and hardware vulnerabilities; and the appearance of new vulnerabilities and malware, to list a few. Our approach systematically, and in a fully automated manner, harnesses these hidden factors to identify predictive signals in public data sources such as social media, Internet-based sensors, the dark Web, blogs, and more.

We used state-of-the-art machine learning methods for time-series prediction and showed that historical ground truth data alone carries enough information for these methods to learn patterns that predict future events. We then showed how incorporating information from external signals improves these forecasts, and we quantified the improvement. In this manner, we are able to identify the best signals for predicting cyber threats against each target organization.

Our framework provides a systematic way to improve decision making in cyber security policy. Our results show that, depending on the specific target and type of attack, different data sources should be monitored for early prediction and mitigation of cyber threats. Indeed, while some external signals are good predictors for both organizations, the best-performing signals are unique to each target, suggesting that our method recognizes the idiosyncrasies and specific vulnerabilities of each organization.

Future work will be devoted to enhancing the predictive power of external signals. One direction is to infer latent factors common to all signals that predict better than any single signal alone. A second is to evaluate linear combinations of predictive signals and to exploit the semantics of the identified signals. Signal fusion may be especially useful for weekly-level predictions, as different signals may complement each other and allow for more robust performance over time.

Supplemental material can be found here:


This work was supported by the Office of the Director of National Intelligence (ODNI) and the Intelligence Advanced Research Projects Activity (IARPA) via the Air Force Research Laboratory (AFRL) contract number FA8750-16-C-0112. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of ODNI, IARPA, AFRL, or the U.S. Government.


  • [1] E. Marin, A. Diab, and P. Shakarian, “Product offerings in malicious hacker markets,” in ISI, Sept 2016, pp. 187–189.
  • [2] S. Samtani, K. Chinn, C. Larson, and H. Chen, “Azsecure hacker assets portal: Cyber threat intelligence and malware analysis,” in ISI, Sept 2016, pp. 19–24.
  • [3] N. Tavabi, P. Goyal, M. Almukaynizi, P. Shakarian, and K. Lerman, “Darkembed: Exploit prediction with neural language models,” in IAAI 2018: Thirtieth Annual Conference on Innovative Applications of Artificial Intelligence, 2018.
  • [4] A. Sapienza, A. Bessi, S. Damodaran, P. Shakarian, K. Lerman, and E. Ferrara, “Early warnings of cyber threats in online discussions,” in 2017 IEEE International Conference on Data Mining Workshops (ICDMW).   IEEE, 2017, pp. 667–674.
  • [5] J. Robertson, A. Diab, E. Marin, E. Nunes, V. Paliath, J. Shakarian, and P. Shakarian, Darkweb Cyber Threat Intelligence Mining.   Cambridge University Press, 2017.
  • [6] E. Nunes, A. Diab, A. Gunn, E. Marin, V. Mishra, V. Paliath, J. Robertson, J. Shakarian, A. Thart, and P. Shakarian, “Darknet and deepnet mining for proactive cybersecurity threat intelligence,” in ISI.   IEEE, 2016, pp. 7–12.
  • [7] C. Sabottke, O. Suciu, and T. Dumitras, “Vulnerability disclosure in the age of social media: Exploiting twitter for predicting real-world exploits,” in USENIX, 2015, pp. 1041–1056.
  • [8] G. Martin, J. Kinross, and C. Hankin, “Effective cybersecurity is fundamental to patient safety,” 2017.
  • [9] N. Tavabi, P. Goyal, M. Almukaynizi, P. Shakarian, and K. Lerman, “Darkembed: Exploit prediction with neural language models,” in Proceedings of AAAI Conference on Innovative Applications of AI (IAAI2018), 2018.
  • [10] R. H. Shumway and D. S. Stoffer, Time series analysis and its applications: with R examples.   Springer Science & Business Media, 2010.
  • [11] S. Seabold and J. Perktold, “Statsmodels: Econometric and statistical modeling with python,” in Proceedings of the 9th Python in Science Conference, vol. 57, 2010, p. 61.
  • [12] G. P. Zhang, “Time series forecasting using a hybrid arima and neural network model,” Neurocomputing, vol. 50, pp. 159–175, 2003.
  • [13] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural computation, vol. 9, no. 8, pp. 1735–1780, 1997.
  • [14] D. Neil, M. Pfeiffer, and S.-C. Liu, “Phased lstm: Accelerating recurrent network training for long or event-based sequences,” in Advances in Neural Information Processing Systems, 2016, pp. 3882–3890.
  • [15] K. Cho, B. Van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio, “Learning phrase representations using rnn encoder-decoder for statistical machine translation,” arXiv preprint arXiv:1406.1078, 2014.
  • [16] D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
  • [17] H. W. Kuhn, “The hungarian method for the assignment problem,” Naval Research Logistics (NRL), vol. 2, no. 1-2, pp. 83–97, 1955.
  • [18] R. Hyndman and A. Koehler, “Another look at measures of forecast accuracy,” International journal of forecasting, vol. 22, no. 4, pp. 679–688, 2006.
  • [19] H. Park, S.-O. D. Jung, H. Lee, and H. P. In, “Cyber weather forecasting: Forecasting unknown internet worms using randomness analysis,” in IFIP International Information Security Conference.   Springer, 2012, pp. 376–387.
  • [20] E. Pontes, A. E. Guelfi, S. T. Kofuji, and A. A. Silva, “Applying multi-correlation for improving forecasting in cyber security,” in Digital Information Management (ICDIM), 2011 Sixth International Conference on.   IEEE, 2011, pp. 179–186.
  • [21] S. Zhang, X. Ou, and D. Caragea, “Predicting cyber risks through national vulnerability database,” Information Security Journal: A Global Perspective, vol. 24, no. 4-6, pp. 194–206, 2015.
  • [22] E. Gandotra, D. Bansal, and S. Sofat, “Computational techniques for predicting cyber threats,” in Intelligent Computing, Communication and Devices, Advance in Intelligent Systems and Computing, 2015, pp. 247–253.
  • [23] A. Lendasse, E. De Bodt, V. Wertz, and M. Verleysen, “Non-linear financial time series forecasting-application to the bel 20 stock market index,” European Journal of Economic and Social Systems, vol. 14, no. 1, pp. 81–91, 2000.
  • [24] P. Chakraborty, P. Khadivi, B. Lewis, A. Mahendiran, J. Chen, P. Butler, E. O. Nsoesie, S. R. Mekaru, J. S. Brownstein, M. V. Marathe et al., “Forecasting a moving target: Ensemble models for ili case count predictions,” in Proceedings of the 2014 SIAM international conference on data mining.   SIAM, 2014, pp. 262–270.
  • [25] Z. Wang, P. Chakraborty, S. R. Mekaru, J. S. Brownstein, J. Ye, and N. Ramakrishnan, “Dynamic poisson autoregression for influenza-like-illness case count prediction,” in Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ser. KDD ’15.   New York, NY, USA: ACM, 2015, pp. 1285–1294.
  • [26] J. M. Box-Steffensmeier, J. R. Freeman, M. P. Hitt, and J. C. Pevehouse, Time series analysis for the social sciences.   Cambridge University Press, 2014.
  • [27] G. E. Box, G. M. Jenkins, G. C. Reinsel, and G. M. Ljung, Time series analysis: forecasting and control.   John Wiley & Sons, 2015.
  • [28] R. Prado and M. West, Time series: modeling, computation, and inference.   CRC Press, 2010.
  • [29] J. Viinikka, H. Debar, L. Mé, A. Lehikoinen, and M. Tarvainen, “Processing intrusion detection alert aggregates with time series modeling,” Information Fusion, vol. 10, no. 4, pp. 312–324, 2009.
  • [30] R. Douc, E. Moulines, and D. Stoffer, Nonlinear time series: theory, methods and applications with R examples.   CRC Press, 2014.
  • [31] S. Xingjian, Z. Chen, H. Wang, D.-Y. Yeung, W.-K. Wong, and W.-c. Woo, “Convolutional lstm network: A machine learning approach for precipitation nowcasting,” in Advances in neural information processing systems, 2015, pp. 802–810.
  • [32] P. Mirowski and A. Vlachos, “Dependency recurrent neural language models for sentence completion,” arXiv preprint arXiv:1507.01193, 2015.
  • [33] C. M. Cheung, P. Goyal, V. K. Prasanna, and A. S. Tehrani, “Oreonet: Deep convolutional network for oil reservoir optimization,” in Big Data (Big Data), 2017 IEEE International Conference on.   IEEE, 2017, pp. 1277–1282.
  • [34] L. Medsker and L. Jain, “Recurrent neural networks,” Design and Applications, vol. 5, 2001.
  • [35] F. A. Gers, N. N. Schraudolph, and J. Schmidhuber, “Learning precise timing with lstm recurrent networks,” Journal of machine learning research, vol. 3, no. Aug, pp. 115–143, 2002.
  • [36] K. Yao, T. Cohn, K. Vylomova, K. Duh, and C. Dyer, “Depth-gated recurrent neural networks,” arXiv preprint, 2015.
  • [37] J. Koutnik, K. Greff, F. Gomez, and J. Schmidhuber, “A clockwork rnn,” arXiv preprint arXiv:1402.3511, 2014.
  • [38] P. Filonov, F. Kitashov, and A. Lavrentyev, “Rnn-based early cyber-attack detection for the tennessee eastman process,” arXiv preprint arXiv:1709.02232, 2017.