An insider threat is a malicious threat posed by people within an organization. It may involve intentional fraud, theft of confidential or commercially valuable information, or sabotage of computer systems. The subtle and dynamic nature of insider threats makes detection extremely difficult. The 2018 U.S. State of Cybercrime Survey indicates that 25% of cyberattacks are committed by insiders, and 30% of respondents report that incidents caused by insider attacks are more costly or damaging than outsider attacks.
Various insider threat detection approaches have been proposed [2, 3, 4, 5, 6, 7, 8]. However, most existing approaches focus only on operation type information (web visits, sent emails, etc.) and do not consider the crucial activity time information. In this paper, we study how to develop a detection model that captures both activity time and type information. In the literature, the marked temporal point process (MTPP) is a general mathematical framework for modeling the event times and types of a sequence. It has been widely used, for example, to predict earthquakes and aftershocks. Traditional MTPP models make assumptions about how events occur, which may be violated in reality. Recently, researchers [10, 11] proposed to combine the temporal point process with recurrent neural networks (RNNs). Since neural network models do not need to make assumptions about the data, RNN-based MTPP models usually achieve better performance than traditional MTPP models.
However, one challenge of applying RNN-based temporal point processes to insider threat detection is that they cannot model time information at multiple time scales. For example, user activities are often grouped into sessions that are delimited by operations such as “LogOn” and “LogOff”. The dynamics of activities within sessions differ from the dynamics of the sessions themselves. To this end, we propose a hierarchical RNN-based temporal point process model that captures both intra-session and inter-session time information. Our model contains two layers of long short-term memory (LSTM) networks, a variant of the traditional RNN. The lower-level LSTM captures activity times and types at the intra-session level, while the upper-level LSTM captures time-length information at the inter-session level. In particular, we adopt a sequence-to-sequence model in the lower-level LSTM, which is trained to predict the next session given the previous session. The upper-level LSTM takes the first and last hidden states from the encoder of the lower-level LSTM as inputs to predict the interval between two sessions and the duration of the next session. By training the proposed hierarchical model on activity sequences generated by normal users, the model can predict the activity times and types in the next session by leveraging the lower-level sequence-to-sequence model, and the time interval between consecutive sessions and the session duration from the upper-level LSTM. In general, we expect our model, trained on normal users, to predict normal sessions with high accuracy. If there is a significant difference between the predicted session and the observed session, the observed session may contain malicious activities from insiders.
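To make the two-level data flow concrete, the following is a minimal sketch of the hierarchical wiring in pure Python. It is a toy, not the paper's implementation: a simplified elementwise recurrent update stands in for the LSTM cells, and all weights, dimensions, and the scalar "embedding" are hypothetical constants.

```python
import math

H = 4  # toy hidden size

def step(h, x):
    """One simplified recurrent update (a stand-in for an LSTM cell)."""
    return [math.tanh(0.5 * hi + 0.3 * x + 0.1) for hi in h]

def encode_session(session):
    """Lower level: consume (type_id, inter_activity_dt) pairs and return
    the first and last hidden states, which feed the upper level."""
    h = [0.0] * H
    states = []
    for type_id, dt in session:
        h = step(h, type_id + 0.01 * dt)  # toy scalar "embedding"
        states.append(h)
    return states[0], states[-1]

def upper_predict(h_first, h_last):
    """Upper level: predict the inter-session interval and the next session's
    duration; exp(...) keeps both predictions positive."""
    s = sum(h_first) + sum(h_last)
    return math.exp(0.2 * s), math.exp(0.1 * s)

session = [(0, 0.0), (5, 120.0), (9, 30.0), (3, 600.0)]  # LogOn ... LogOff
h_first, h_last = encode_session(session)
interval, duration = upper_predict(h_first, h_last)
```

The key design point mirrored here is that the upper level sees only the first and last encoder states of each session, which is how session-boundary timing is connected to the activity content of the session.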
Our work makes the following contributions: (1) we develop an insider threat detection model that uses both activity type and time information; (2) we propose a hierarchical neural temporal point process model that effectively captures time information at two scales; (3) experiments on two datasets demonstrate that combining the activity type and multi-scale time information achieves the best performance for insider threat detection.
II Related Work
II-A Insider Threat Detection
Much of the research focuses on the characterization of insiders. Based on the intention of the attack, there are three types of insiders: traitors, who misuse their privileges to commit malicious activities; masqueraders, who conduct illegal actions on behalf of legitimate employees of an institute; and unintentional perpetrators, who make mistakes without malicious intent. Based on the malicious activities conducted, insider threats can also be categorized into three types: IT sabotage, which directly uses IT to harm an institute; theft of intellectual property, which steals information from the institute; and fraud, which involves unauthorized modification, addition, or deletion of data.
It is well accepted that the insiders’ behaviors are different from the behaviors of legitimate employees. Hence, analyzing the employees’ behaviors via the audit data plays an important role in detecting insiders. In general, there are three types of data sources, host-based, network-based, and context data. The host-based data record activities of employees on their own computers, such as command lines, mouse operations and etc. The network-based data indicate the logs recorded by network equipment such as routers, switches, firewalls and etc. The context data indicate the data from an employee directory or psychological data.
Given different types of data sources, various insider threat detection algorithms have been proposed. For example, some researchers propose to adopt decoy documents or honeypots to lure and identify insiders. Meanwhile, one common approach is to treat insider threat detection as an anomaly detection task and adopt widely-used anomaly detection approaches, e.g., one-class SVM, to detect insider threats. Moreover, some approaches treat an employee’s actions on a computer over a period of time as a sequence: sequences that are frequently observed reflect normal behavior, while sequences that are seldom observed reflect abnormal behavior that could come from insiders. One line of research adopts Hidden Markov Models (HMMs) to learn the behaviors of normal employees and then predict the probability of a given sequence; an employee activity sequence assigned a low probability by the HMM could indicate an abnormal sequence. Another line of research evaluates an insider threat detection workflow using supervised and unsupervised learning algorithms, including Self-Organizing Maps (SOM), Hidden Markov Models (HMM), and Decision Trees (DT). However, the existing approaches do not model activity time information. In this work, we aim to capture both activity time and type information for insider threat detection.
II-B Temporal Point Process
A temporal point process (TPP) is a stochastic process composed of a time series of events that occur in continuous time. The temporal point process is widely used for modeling sequence data with time information, such as health-care analysis, earthquake and aftershock modeling, and social network analysis [17, 9, 18]. Traditional temporal point process methods usually make parametric assumptions about how the observed events are generated, e.g., by Poisson processes or self-exciting point processes. If the data do not follow the prior assumptions, the parametric point processes may have poor performance.
To address this problem, researchers have proposed learning a general representation of the dynamic data based on neural networks, without assuming parametric forms [10, 11]. Those models are trained by maximizing the log-likelihood. Recently, there are also emerging works incorporating the objective function of generative adversarial networks [19, 20, 21] to further improve model performance. However, current TPP models only focus on one granularity of time. In our scenario, we propose a hierarchical RNN framework to model multi-scale time information.
III-A Marked Temporal Point Process
A marked temporal point process models observed random event patterns along time. A typical realization is an event sequence $\mathcal{S} = \{(t_1, e_1), \dots, (t_n, e_n)\}$, where each event is associated with an activity type $e_i$ and an occurrence time $t_i$. Let $f(t, e \mid \mathcal{H}_t)$ be the conditional density function of an event of type $e$ happening at time $t$ given the history of events up to time $t$, where $\mathcal{H}_t$ denotes the collected historical events before time $t$. Throughout this paper, we use the $*$ notation, e.g., $f^*(t, e) = f(t, e \mid \mathcal{H}_t)$, to denote that a function depends on the history. The joint likelihood of the observed sequence is:

$$f(\mathcal{S}) = \prod_{i=1}^{n} f(t_i, e_i \mid \mathcal{H}_{t_i}).$$
There are different forms of $f^*(t, e)$. For mathematical simplicity, however, it is usually assumed that the time and the mark are conditionally independent given the history $\mathcal{H}_t$, i.e., $f^*(t, e) = f^*(e)\, f^*(t)$, where $f^*(e)$ models the distribution of event types and $f^*(t)$ is the conditional density of an event occurring at time $t$ given the timing sequence of past events.
A temporal point process can be characterized by the conditional intensity function, which indicates the expected instantaneous rate of future events at time $t$:

$$\lambda^*(t) = \lim_{\Delta t \to 0} \frac{\mathbb{E}\big[N(t + \Delta t) - N(t) \mid \mathcal{H}_t\big]}{\Delta t},$$

where $N(t)$ indicates the number of events that have occurred in the time interval $(0, t]$. Given the conditional density function $f^*(t)$ and the corresponding cumulative distribution function $F^*(t)$ at time $t$, the intensity function can also be defined as:

$$\lambda^*(t) = \frac{f^*(t)}{1 - F^*(t)} = \frac{f^*(t)}{S^*(t)},$$

where $S^*(t) = \exp\!\big(-\int_{t_n}^{t} \lambda^*(s)\, ds\big)$ is the survival function, i.e., the probability that no new event has happened up to time $t$ since the last event at $t_n$. Then, the conditional density function can be written as:

$$f^*(t) = \lambda^*(t)\, S^*(t) = \lambda^*(t) \exp\!\left(-\int_{t_n}^{t} \lambda^*(s)\, ds\right).$$
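As a quick numerical sanity check on the relation $f^*(t) = \lambda^*(t)\, S^*(t)$, the constant-intensity (homogeneous Poisson) case can be verified directly; the rate below is an arbitrary toy value, not from the paper.

```python
import math

lam = 0.8  # constant conditional intensity (homogeneous Poisson case, toy rate)

def survival(t):
    """S*(t) = exp(-integral of lambda from 0 to t) = exp(-lam * t) here."""
    return math.exp(-lam * t)

def density(t):
    """f*(t) = lambda*(t) * S*(t)."""
    return lam * survival(t)

# left Riemann sum: the waiting-time density should integrate to ~1
dt = 1e-3
total = sum(density(i * dt) * dt for i in range(int(40 / dt)))
```

Here the density recovered from the intensity is the familiar exponential waiting-time distribution, and the numeric integral confirms it is properly normalized.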
Particular functional forms of the conditional intensity function for the Poisson process, Hawkes process, self-correcting process, and autoregressive conditional duration process have been widely studied. For example, a Hawkes process captures the self-excitation phenomenon among events. The conditional intensity function of a Hawkes process is defined as:

$$\lambda^*(t) = \mu + \alpha \sum_{t_i < t} \gamma(t, t_i),$$

where $\mu$ is the base intensity, indicating the intensity of events triggered by external signals instead of previous events, and $\gamma(t, t_i)$ is the triggering kernel, usually predefined as $\gamma(t, t_i) = \exp(-\beta (t - t_i))$. The Hawkes process models the self-excitation phenomenon in which the arrival of an event increases the conditional intensity of observing events in the near future. Recently, the Hawkes process has been widely used to model information diffusion on online social networks.
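The self-excitation effect is easy to see numerically: with an exponential kernel, the intensity spikes right after a burst of events and decays back to the base rate. The parameter values below are arbitrary toy choices.

```python
import math

mu, alpha, beta = 0.2, 0.8, 1.0  # base rate, excitation weight, decay (toy values)

def hawkes_intensity(t, history):
    """lambda*(t) = mu + alpha * sum over past events of exp(-beta * (t - t_i))."""
    return mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in history if ti < t)

events = [1.0, 1.5, 1.7]
just_after = hawkes_intensity(1.71, events)   # right after a burst of events
much_later = hawkes_intensity(10.0, events)   # excitation has decayed
```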
However, these parameterizations make different assumptions about the latent dynamics, which are never known in practice. For example, the self-excitation assumption of the Hawkes process may not hold in many scenarios. Model misspecification can seriously degrade predictive performance.
III-B Sequence-to-Sequence Model
In general, a sequence-to-sequence (seq2seq) model is used to convert sequences from one domain into sequences in another domain. The seq2seq model consists of two components, an encoder and a decoder. Both are long short-term memory (LSTM) models and can capture the long-term dependencies of sequences. The seq2seq model encodes a variable-length input to a fixed-length vector and further decodes the vector back to a variable-length output; the length of the output sequence can differ from that of the input sequence. The goal of the seq2seq model is to estimate the conditional probability $p(y_1, \dots, y_{T'} \mid x_1, \dots, x_T)$, where $(x_1, \dots, x_T)$ is an input sequence and $(y_1, \dots, y_{T'})$ is the corresponding output sequence. The encoder maps the input sequence to a sequence of hidden representations with an LSTM model:

$$\mathbf{h}_t = \mathrm{LSTM}(x_t, \mathbf{h}_{t-1}),$$

where $x_t$ is the current input, $\mathbf{h}_{t-1}$ is the previous hidden state, and $\mathbf{h}_t$ is the updated hidden state. The last hidden state $\mathbf{h}_T$ captures the information of the whole input sequence. The decoder computes the conditional probability by another LSTM model whose initial hidden state is set to $\mathbf{h}_T$:

$$p(y_1, \dots, y_{T'} \mid x_1, \dots, x_T) = \prod_{t=1}^{T'} p(y_t \mid \mathbf{h}_T, y_1, \dots, y_{t-1}),$$

where $p(y_t \mid \mathbf{h}_T, y_1, \dots, y_{t-1})$ is computed from the $t$-th hidden vector of the decoder, usually via a softmax function.
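The decoder's per-step output can be sketched as follows: a softmax turns per-step scores into a distribution over output symbols, and the sequence probability is the product of the per-step probabilities of the chosen symbols. The logits below are toy values, not from a trained model.

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    m = max(z)
    e = [math.exp(zi - m) for zi in z]
    s = sum(e)
    return [ei / s for ei in e]

# toy per-step logits for a 3-symbol output vocabulary
step_logits = [[2.0, 0.1, -1.0], [0.3, 1.5, 0.0]]
chosen = [0, 1]  # output symbols emitted at steps 1 and 2

# p(y_1, y_2 | x) factorizes as a product of per-step softmax probabilities
seq_prob = 1.0
for logits, y in zip(step_logits, chosen):
    seq_prob *= softmax(logits)[y]
```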
IV Insider Threat Detection
We model a user’s behavior as a sequence of activities that can be extracted from various types of raw data, such as user logins, emails, Web browsing, and FTP. Formally, we model the up-to-date activities of a user as a sequence of sessions $\mathcal{S} = (S_1, S_2, \dots)$, where $S_i$ indicates the user’s $i$-th activity session. For example, each session in our scenario is a sequence of activities starting with “LogOn” and ending with “LogOff”. We write $a^i_j = (e^i_j, t^i_j)$ for the $j$-th activity in the user’s $i$-th session, which contains the activity type $e^i_j$ and occurrence time $t^i_j$. We define $d^i_j = t^i_j - t^i_{j-1}$ as the inter-activity duration between activities $a^i_{j-1}$ and $a^i_j$, $T_i$ as the time length of the $i$-th session, and $\Delta_i = t^{i+1}_1 - t^i_{n_i}$ as the time interval between the $i$-th and $(i{+}1)$-th sessions, where $t^i_{n_i}$ is the occurrence time of the last activity in the $i$-th session.
The goal of learning in our threat detection is to predict whether a new session is normal or fraudulent. To address the challenge that there are often no or very few records of known insider attacks for training, we propose a generative model that learns normal user behaviors from a training dataset consisting only of sequences of normal users. The learned model is then used to calculate the fraudulent score of a new session. We quantify the fraudulence of a session from two perspectives: activity information (both type and time) within sessions, and session time information (i.e., when a session starts and ends). For example, a user who foresees a potential layoff may upload documents to Dropbox and visit job-searching websites, even if he tries to hide these abnormal activities across multiple sessions; his “LogOn” and “LogOff” times may differ from his normal sessions as he becomes less punctual or works more during weekends or nights, resulting in different session durations and intervals between sessions. Moreover, when a user’s account is compromised, the attacker’s activity and session information will also differ even if the attacker tries to mimic the normal user’s behaviors.
We develop a unified hierarchical model capable of capturing a general nonlinear dependency over the history of all activities. Our detection model does not rely on any predefined signatures; instead it uses deep learning models to capture user behaviors reflected in the raw data. Specifically, our hierarchical model learns user behaviors at two time scales, the intra-session level and the inter-session level. At the intra-session level, we adopt the seq2seq model to predict the next session based on the previous session $S_i$, and use the marked temporal point process model to capture the dynamics of activities. Note that the number of activities in the predicted session $\hat{S}_{i+1}$ can differ from that of the previous session $S_i$ as well as that of the true $S_{i+1}$. At the inter-session level, we aim to model the session interval $\Delta_i$ and the session duration $T_{i+1}$ of the $(i{+}1)$-th session.
The whole framework of predicting future events at two time scales is shown in Figure 1. We do not assume any specific parametric form of the conditional intensity function; instead, following the RNN-based temporal point process approach, we seek to learn a general representation that approximates the unknown dependency structure over the history. We also emphasize that the neural temporal point processes of the two levels are connected in our framework: the upper-level LSTM takes the first and last hidden states from the encoder of the lower-level LSTM as inputs to predict the interval between two sessions and the session duration. This connection guarantees that the upper-level LSTM incorporates activity type information in its modeling. For insider threat detection, since our model is trained on benign sessions, the predicted session $\hat{S}_i$ would be close to the observed $S_i$ when $S_i$ is normal, and different from $S_i$ when $S_i$ is abnormal. In Section IV-D, we present details on how to derive the fraudulent score by comparing $\hat{S}_i$ with $S_i$, where $\hat{\cdot}$ indicates a predicted value.
IV-B Intra-Session Insider Threat Detection
In this work, we propose to use the seq2seq model to estimate the joint likelihood of the $(i{+}1)$-th session given the $i$-th session. In particular, the encoder of the seq2seq model encodes the activity time and type information of the $i$-th session into a hidden representation, and the decoder models the activity time intervals and type information of the $(i{+}1)$-th session given that history.
Encoder: To map the $i$-th session to a hidden representation, the encoder first maps each activity $a^i_j$, occurring at time $t^i_j$ with type $e^i_j$, to an embedding vector:

$$\mathbf{x}^i_j = \big[\, w_t\, d^i_j \,;\; \mathbf{W}_e\, \mathbf{e}^i_j \,\big],$$

where $d^i_j$ is the inter-activity duration between $a^i_{j-1}$ and $a^i_j$; $w_t$ is a time-mapping parameter; $\mathbf{W}_e$ is an activity embedding matrix; and $\mathbf{e}^i_j$ is a one-hot vector of the activity type $e^i_j$. Then, by taking the entire activity sequence of the $i$-th session as input, the encoder LSTM projects the $i$-th session to a hidden representation.
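The embedding step can be sketched as follows. The dimensions, initialization, and time-scaling constant are toy assumptions, not the paper's values; multiplying an embedding matrix by a one-hot vector is just row selection.

```python
import random
random.seed(1)

NUM_TYPES, EMB_DIM = 19, 5   # 19 activity types (Table II); toy embedding size
W_e = [[random.uniform(-0.1, 0.1) for _ in range(EMB_DIM)] for _ in range(NUM_TYPES)]
w_t = 0.01                   # time-mapping parameter (toy value)

def embed(activity_type, inter_activity_dt):
    """x = [w_t * dt ; W_e . onehot(type)] -- the one-hot vector simply
    selects a row of the embedding matrix."""
    return [w_t * inter_activity_dt] + W_e[activity_type]

x = embed(3, 42.0)  # activity type 3, 42 seconds after the previous activity
```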
Decoder: The decoder is trained to predict the pairs of activity type and time in the $(i{+}1)$-th session given the information of the $i$-th session. To predict the activity type, given the hidden state $\mathbf{h}_j$ of the decoder, the probability of the next activity having type $k$ is given by a softmax function:

$$P(e_{j+1} = k \mid \mathbf{h}_j) = \frac{\exp(\mathbf{V}_k \mathbf{h}_j)}{\sum_{k'} \exp(\mathbf{V}_{k'} \mathbf{h}_j)},$$

where $\mathbf{V}_k$ is the $k$-th row of the weight matrix $\mathbf{V}$ in the softmax function.
To predict the activity time, we model the conditional intensity function based on the decoder hidden state:

$$\lambda^*(t) = \exp\big(\mathbf{v}^\top \mathbf{h}_j + w\,(t - t_j) + b\big),$$

where the exponential function ensures that the intensity is always positive; $\mathbf{v}$ is a weight vector; and $w$ and $b$ are scalars. Then, we can derive the conditional density function given the history until time $t$:

$$f^*(t) = \lambda^*(t) \exp\!\left(-\int_{t_j}^{t} \lambda^*(s)\, ds\right).$$

Hence, given the observed activity time information, we can calculate the conditional density of the time interval between two consecutive activities in the $(i{+}1)$-th session.
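With this exponential form, the inner integral has a closed form, which makes it easy to check numerically that the resulting waiting-time density is properly normalized. The constants below are arbitrary stand-ins for $\mathbf{v}^\top \mathbf{h}_j + b$ and $w$, not learned values.

```python
import math

a, w = 0.3, 0.5  # a stands in for v.h_j + b; w is the scalar time weight (toy)

def intensity(tau):
    """lambda*(t_j + tau) = exp(a + w * tau); the exponential keeps it positive."""
    return math.exp(a + w * tau)

def density(tau):
    """f*(tau) = lambda*(tau) * exp(-cum), with the integral in closed form:
    cum = (exp(a + w*tau) - exp(a)) / w."""
    cum = (math.exp(a + w * tau) - math.exp(a)) / w
    return intensity(tau) * math.exp(-cum)

# left Riemann sum: the waiting-time density should integrate to ~1
dt = 1e-4
total = sum(density(i * dt) * dt for i in range(int(8 / dt)))
```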
Since the lower-level LSTM models the time-interval and type information, given a collection of activity sessions from benign employees, we combine the likelihood functions of the event types (Equation 8) and times (Equation 11) to obtain the negative joint log-likelihood of the observed sessions:

$$\mathcal{L} = -\sum_{i=1}^{N} \sum_{j=1}^{n_i} \Big( \log P(e^i_j \mid \mathbf{h}_{j-1}) + \log f^*(t^i_j) \Big),$$

where $N$ is the total number of sessions in the training dataset and $n_i$ is the number of activities in a session. The lower-level encoder and decoder LSTMs are trained by minimizing the negative log-likelihood shown in Equation 12.
When the model is deployed for detection, to obtain the predicted activity type $\hat{e}_{j+1}$, we simply choose the type with the largest probability (calculated by Equation 8):

$$\hat{e}_{j+1} = \arg\max_{k}\; P(e_{j+1} = k \mid \mathbf{h}_j).$$

We further calculate the expected inter-activity duration $\hat{d}_{j+1}$ between the $j$-th and $(j{+}1)$-th activities:

$$\hat{d}_{j+1} = \int_{t_j}^{\infty} (t - t_j)\, f^*(t)\, dt.$$
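The expectation can be evaluated numerically under the same toy intensity used above (the constants remain arbitrary stand-ins, and the integral is truncated where the density has vanished).

```python
import math

a, w = 0.3, 0.5  # same arbitrary toy constants as before

def density(tau):
    """f*(tau) for the exponential-form intensity, integral in closed form."""
    cum = (math.exp(a + w * tau) - math.exp(a)) / w
    return math.exp(a + w * tau - cum)

# E[tau] = integral of tau * f*(tau) d tau, evaluated by a left Riemann sum
dt = 1e-4
expected = sum((i * dt) * density(i * dt) * dt for i in range(int(8 / dt)))
```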
The difference between $\hat{d}_{j+1}$ and the observed $d_{j+1}$ will be used to calculate the fraudulent score in terms of the timing information of intra-session activities.
IV-C Inter-Session Insider Threat Detection
The inter-session duration is crucial for insider threat detection. To capture such information, we further incorporate an upper-level LSTM into the framework, which focuses on modeling the inter-session behaviors of employees. Specifically, the upper-level LSTM is trained to predict the inter-session duration $\Delta_i$ between the $i$-th and $(i{+}1)$-th sessions and the $(i{+}1)$-th session duration $T_{i+1}$.
To predict the inter-session duration $\Delta_i$, the input of the upper-level LSTM is the last hidden state of the $i$-th session from the lower-level LSTM, as shown in Equation 15, while to predict the $(i{+}1)$-th session duration $T_{i+1}$, the input of the upper-level LSTM is the first hidden state of the $(i{+}1)$-th session, as shown in Equation 16, where the inputs are projected by an input weight matrix for the upper-level LSTM.
Then, we obtain the hidden states of the upper-level sequence based on an LSTM model. Finally, the conditional density functions of the inter-session duration $\Delta_i$ and the session duration $T_{i+1}$ can be calculated based on Equation 10, using intensity functions derived from the corresponding upper-level hidden states.
To train the upper-level LSTM, the negative log-likelihood of the inter-session sequences is defined as:

$$\mathcal{L}_{inter} = -\sum_{m=1}^{M} \sum_{i=1}^{n_m} \Big( \log f^*(\Delta_i) + \log f^*(T_{i+1}) \Big),$$

where $M$ is the total number of inter-session-level sequences in the training dataset and $n_m$ indicates the number of sessions in an inter-session-level sequence. In our experiments, we use the upper-level LSTM to model an employee’s sessions within a week; then $M$ is the total number of weeks in the training dataset, and $n_m$ is the number of sessions in a week. The upper-level LSTM is trained by minimizing the negative log-likelihood shown in Equation 19. After training, the upper-level LSTM captures the patterns of the inter-session durations and session durations.
When the model is deployed for detection, we calculate the predicted inter-session duration $\hat{\Delta}_i$ between the $i$-th and $(i{+}1)$-th sessions and the predicted $(i{+}1)$-th session duration $\hat{T}_{i+1}$, as shown in Equation 20. The difference between the predicted values ($\hat{\Delta}_i$, $\hat{T}_{i+1}$) and the observed values ($\Delta_i$, $T_{i+1}$) will be used to calculate the fraudulent score in terms of session timing information.
IV-D Fraudulent Score
After obtaining the predicted session, we compare the generated activity types and times with the observed session. For the activity types, we adopt the Bilingual Evaluation Understudy (BLEU) score to evaluate the difference between the observed session and the generated session. The BLEU metric was originally used for evaluating the similarity between a generated text and a reference text, with values closer to 1 representing more similar texts. BLEU is derived by counting n-grams in the generated text that match n-grams in the reference text, and it is insensitive to word order. Hence, BLEU is suitable for comparing the generated sequences with the observed sequences. We define the fraudulent score in terms of intra-session activity types as:

$$s^i_{type} = 1 - \mathrm{BLEU}(E^i, \hat{E}^i),$$

where $E^i$ indicates the observed activity types in the $i$-th session and $\hat{E}^i$ indicates the types of the predicted session. If $s^i_{type}$ is high, the observed session is potentially malicious in terms of session activity types.
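A minimal BLEU-style score over activity-type sequences can be sketched as follows. This is a simplification of the full metric (clipped 1- and 2-gram precisions with the standard brevity penalty), and the sessions shown are made-up examples.

```python
import math
from collections import Counter

def ngrams(seq, n):
    return Counter(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))

def bleu(reference, candidate, max_n=2):
    """Tiny BLEU: geometric mean of clipped 1- and 2-gram precisions with the
    standard brevity penalty (a simplification of the full metric)."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        precisions.append(max(overlap, 1e-9) / max(sum(cand.values()), 1))
    bp = 1.0 if len(candidate) > len(reference) else \
        math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

observed  = ["LogOn", "WWW Visit", "Send Internal Email", "Logoff"]
predicted = ["LogOn", "WWW Visit", "Send Internal Email", "Logoff"]
s_type = 1.0 - bleu(observed, predicted)  # 0 for a perfect match
```

A session whose types diverge from the prediction (e.g., unexpected uploads) receives a score near 1.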
For the activity times, as shown in Equation 22, we define the fraudulent score in terms of intra-session activity time, $s^i_{time}$, as the mean absolute error (MAE) between the predicted time of each activity and the observed occurrence time.
Since the upper-level LSTM takes each session’s first and last hidden states as inputs to predict the session and inter-session time lengths, we can further derive time scores by comparing the predicted time lengths with the observed ones. We define the fraudulent score in terms of inter-session duration, $s^i_{inter}$, as the absolute error between the predicted and observed inter-session durations.
Similarly, we define the fraudulent score in terms of session duration, $s^i_{dur}$, as the absolute error between the predicted and observed session durations.
Note that although the session time length could be derived from all the predicted activity times of the lower-level LSTM, that error is usually high because it accumulates over the whole sequence; hence we use the upper-level LSTM to obtain the session time length. Finally, combining Equations 21, 22, 23, and 24, we define the total fraudulent score of a session as:

$$s^i = \mu_1\, s^i_{type} + \mu_2\, s^i_{time} + \mu_3\, s^i_{inter} + \mu_4\, s^i_{dur},$$

where $\mu_1, \dots, \mu_4$ are hyper-parameters, which can be set based on the insider threat detection performance of each score used alone.
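The combination step is a plain weighted sum; a sketch (the default weights are arbitrary, not the paper's tuned values):

```python
def total_score(s_type, s_time, s_inter, s_dur, w=(0.25, 0.25, 0.25, 0.25)):
    """Weighted combination of the four fraudulent scores; the weights are
    hyper-parameters (e.g., the intra-session time weight can be set to 0
    when that score is uninformative, as on the CERT data)."""
    return w[0] * s_type + w[1] * s_time + w[2] * s_inter + w[3] * s_dur
```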
V-A Experiment Setup
Dataset. We adopt the CERT Insider Threat Dataset, which is the only comprehensive dataset publicly available for evaluating insider threat detection. This dataset consists of five log files that record the computer-based activities of all employees: logon.csv records the logon and logoff operations of all employees; email.csv records all email operations (send or receive); http.csv records all web browsing operations (visit, download, or upload); file.csv records activities (open, write, copy, or delete) involving a removable media device; and device.csv records the usage of a thumb drive (connect or disconnect). Table II shows the major activity types recorded in each file. The CERT dataset also provides ground truth indicating the malicious activities committed by insiders. We use the latest version (r6.2) of the CERT dataset, which contains 3995 benign employees and 5 insiders.
We join all the log files, separate them by employee, and sort each employee’s activities by the recorded timestamps. We randomly select 2000 benign employees as the training dataset and another 500 employees as the testing dataset. The testing dataset includes all sessions from the five insiders. The statistics of the training and testing datasets are shown in Table I. Based on the activities recorded in the log files, we extract 19 activity types, shown in Table II. The activity types are designed to surface potentially malicious activities.
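The sessionization step described above can be sketched in a few lines: after sorting an employee's activities by timestamp, sessions are cut at the LogOn/Logoff delimiters. This is an illustrative sketch, not the paper's preprocessing code, and the tiny log below is made up.

```python
def sessionize(activities):
    """Split a time-sorted (timestamp, activity_type) list into sessions
    delimited by ...Logon / Logoff; stray activities outside a session
    are dropped in this simplified version."""
    sessions, current = [], None
    for ts, act in activities:
        if act.endswith("Logon"):          # Weekday/Afterhour/Weekend Logon
            current = [(ts, act)]
        elif act == "Logoff":
            if current is not None:
                current.append((ts, act))
                sessions.append(current)
                current = None
        elif current is not None:
            current.append((ts, act))
    return sessions

log = [(1, "Weekday Logon"), (2, "WWW Visit"), (5, "Logoff"),
       (9, "Weekend Logon"), (10, "Send Internal Email"), (12, "Logoff")]
sessions = sessionize(log)
```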
Table I: Statistics of the training and testing datasets.

                            Training Dataset    Testing Dataset
  # of Employees            2000                500
  # of Sessions             1,039,805           142,600
  # of Insiders             0                   5
  # of Malicious Sessions   0                   68
Table II: Activity types extracted from each log file.

logon.csv:
  Weekday Logon (employee logs on a computer on a weekday during work hours)
  Afterhour Weekday Logon (employee logs on a computer on a weekday after work hours)
  Weekend Logon (employee logs on at weekends)
  Logoff (employee logs off a computer)
email.csv:
  Send Internal Email (employee sends an internal email)
  Send External Email (employee sends an external email)
  View Internal Email (employee views an internal email)
  View External Email (employee views an external email)
http.csv:
  WWW Visit (employee visits a website)
  WWW Download (employee downloads files from a website)
  WWW Upload (employee uploads files to a website)
device.csv:
  Weekday Device Connect (employee connects a device on a weekday during work hours)
  Afterhour Weekday Device Connect (employee connects a device on a weekday after hours)
  Weekend Device Connect (employee connects a device at weekends)
  Disconnect Device (employee disconnects a device)
file.csv:
  Open doc/jpg/txt/zip File (employee opens a doc/jpg/txt/zip file)
  Copy doc/jpg/txt/zip File (employee copies a doc/jpg/txt/zip file)
  Write doc/jpg/txt/zip File (employee writes a doc/jpg/txt/zip file)
  Delete doc/jpg/txt/zip File (employee deletes a doc/jpg/txt/zip file)
We compare our model with two one-class classifiers: 1) One-Class SVM (OCSVM), which adopts a support vector machine to learn a decision hypersphere around the positive data and considers samples located outside this hypersphere as anomalies; 2) Isolation Forest (iForest), which detects anomalies as points with short average path lengths on a set of trees. For both baselines, we consider each activity type as an input feature, whose value is the number of activities of the corresponding type in a session. In this paper, we do not compare with other RNN-based insider threat detection methods, as those methods were designed to detect the insiders themselves or to predict the days that contain insider threat activities.
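The count-of-type feature construction for the baselines can be sketched as follows (the type list is abridged here for illustration; the actual feature space uses the 19 types of Table II).

```python
from collections import Counter

# abridged type list; the full baselines use the 19 types of Table II
ACTIVITY_TYPES = ["Weekday Logon", "WWW Visit", "Send Internal Email", "Logoff"]

def session_features(session_types):
    """One feature per activity type; the value is the count of that type in
    the session, as fed to the OCSVM / iForest baselines."""
    counts = Counter(session_types)
    return [counts[t] for t in ACTIVITY_TYPES]

v = session_features(["Weekday Logon", "WWW Visit", "WWW Visit", "Logoff"])
```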
Hyperparameters. We map the extracted activity types to type embeddings. The dimension of the type embeddings is 50, and the dimension of the LSTM models is 100. We adopt Adam as the stochastic optimization method to update the parameters of the framework. When training the upper-level LSTM by Equation 19, we fix the parameters of the lower-level LSTM and only update the parameters of the upper-level LSTM.
V-B Experiment Results
We aim to detect all 68 malicious sessions among the total of 142,600 sessions in the testing set. Figure 2 shows the receiver operating characteristic (ROC) curves of our model for insider threat detection when leveraging the various fraudulent scores. Using each fraudulent score separately, we notice that the score derived from intra-session activity types achieves the highest area under the curve (AUC), which indicates that the activity types of malicious sessions differ from those of normal sessions. Meanwhile, the session duration and inter-session duration also make positive contributions to malicious session detection: the scores derived from the session duration and the inter-session duration achieve good performance with AUC values of 0.6851 and 0.7073, respectively, indicating that the durations of malicious sessions and inter-sessions usually differ from those of normal sessions. We also notice that the score based on inter-activity time information does not help much for insider threat detection; its AUC is 0.3021. After examining the data, we find that there is not much difference in inter-activity time between malicious and normal sessions. Since the intra-session time score does not achieve reasonable performance on the CERT Insider Threat dataset, we set its weight to zero when deriving the total fraudulent score. As a result, our detection model using the total fraudulent score, which combines all the intra- and inter-session information, achieves the best performance with AUC=0.9033 under suitable settings of the hyper-parameters in Equation 25.
Figure 3 further shows the ROC curves of our model and the two baselines. Our model achieves better performance than the baselines in terms of AUC. In particular, when our model only adopts the activity type information for malicious session detection, it is slightly better than the baselines in terms of AUC. By further combining the activity time and type information, our model significantly outperforms the baselines with AUC=0.9033.
V-C Vandal Detection
Dataset. Due to a limitation of the CERT dataset, where the inter-activity duration times are randomly generated, the inter-activity time at the intra-session level does not contribute to insider threat detection. To further show the advantage of incorporating activity time information, we apply our model to detecting vandals on Wikipedia. Vandals can be considered insiders within the community of Wikipedia contributors. Prior study has shown that the behaviors of vandals and benign users differ in terms of edit time, e.g., vandals make faster edits than benign users. Hence, we expect that using the inter-activity time information can boost the performance of vandal detection.
We conduct our evaluation on the UMDWikipedia dataset. This dataset contains around 770K edits from January 2013 to July 2014 (19 months) with 17105 vandals and 17105 benign users, where each user edits a sequence of Wikipedia pages. We adopt half of the benign users for training, and the other half of the benign users plus all the vandals for testing. Since user activities on Wikipedia have no explicit indicators, such as LogOn or LogOff, to split an activity sequence into sessions, we treat a user’s activities within one day as a session. As a result, the session duration is always 24 hours and the inter-session duration is 0. Therefore, in this experiment, we focus on vandalism session detection using only information from the intra-session level, adopting the lower-level LSTM shown in Figure 1 accordingly. Note that we filter out all sessions with fewer than 5 activities. The seq2seq model takes a feature vector as input and predicts the next edit time and type. In this experiment, we consider the activity type to be whether the current edit will be reverted or not. The feature vector of the user’s t-th edit is composed of: (1) whether or not the user edited a meta-page; (2) whether or not the user consecutively edited pages within less than 1 minute, 3 minutes, or 5 minutes; (3) whether or not the user’s currently edited page had been edited before.
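The per-edit feature vector described above can be sketched directly; the function name and argument layout are illustrative, not from the original implementation.

```python
def edit_features(on_meta_page, seconds_since_prev, page_edited_before):
    """Feature vector for a user's t-th edit, following the description above:
    meta-page flag, fast-edit flags (< 1, 3, 5 minutes), and re-edit flag."""
    return [
        int(on_meta_page),
        int(seconds_since_prev < 60),    # consecutive edit within 1 minute
        int(seconds_since_prev < 180),   # within 3 minutes
        int(seconds_since_prev < 300),   # within 5 minutes
        int(page_edited_before),
    ]

v = edit_features(False, 45, True)  # a fast re-edit of a non-meta page
```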
Experiment Results. From Figure 4, we observe that using only the inter-activity time information achieves surprisingly good performance on vandalism session detection, with AUC=0.9121, which indicates that the inter-activity time information is crucial for this task. Meanwhile, using only the activity type information also detects vandalism sessions reasonably well, with AUC=0.7399. Hence, the inter-activity time information outperforms the activity type information in terms of AUC, which also means that vandals exhibit significantly different activity-time patterns from benign users. Finally, by combining the activity type and time information, our model achieves even better performance, with AUC=0.9496.
We further compare our model with two baselines, One-class SVM and Isolation Forest. For the baselines, we use the same features as the seq2seq model and further include the activity types. The value of each feature is the mean value of the corresponding feature over a day. Figure 5 shows that our model significantly outperforms the baselines in terms of AUC on the vandalism session detection task. Similar to the results on the CERT dataset, when our model uses only the activity type information (shown in Figure 4), it performs comparably to the baselines. After incorporating the activity time information, its performance improves by a large margin.
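The baseline setup, training a one-class detector on benign users only and scoring test sessions by anomaly, can be sketched with scikit-learn. The feature values below are synthetic stand-ins for the daily-mean features, not the paper's data.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.metrics import roc_auc_score
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Synthetic stand-in for the daily-mean feature vectors: here benign
# users are drawn around 1.0 and vandals around 0.2 (made-up values
# mimicking vandals' faster edits, purely for illustration).
benign_train = rng.normal(loc=1.0, scale=0.3, size=(200, 5))
benign_test = rng.normal(loc=1.0, scale=0.3, size=(100, 5))
vandal_test = rng.normal(loc=0.2, scale=0.3, size=(100, 5))
X_test = np.vstack([benign_test, vandal_test])
y_test = np.array([0] * 100 + [1] * 100)  # 1 = vandal session

aucs = {}
for name, model in [('OC-SVM', OneClassSVM(nu=0.1)),
                    ('iForest', IsolationForest(random_state=0))]:
    model.fit(benign_train)                    # train on benign users only
    scores = -model.decision_function(X_test)  # negate: higher = more anomalous
    aucs[name] = roc_auc_score(y_test, scores)
```

Both detectors expose `decision_function`, where larger values mean "more normal"; negating it yields an anomaly score suitable for ranking sessions and computing AUC, mirroring how the baselines are evaluated here.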
Conclusion. In this paper, we have proposed a two-level neural temporal point process model for insider threat detection. At the lower level, we combined a seq2seq model with marked temporal point processes to dynamically capture intra-session information in terms of activity times and types. The upper-level LSTM takes the first and last hidden states from the encoder of the lower-level LSTM as inputs to predict the interval between two sessions and the session duration based on the activity history. Experimental results on an insider threat detection dataset and a Wikipedia vandal detection dataset demonstrated the effectiveness of our model.
Acknowledgments. This work was supported in part by NSF 1564250 and the Department of Energy under Award Number DE-OE0000779.
-  CSO, U.S. Secret Service, CERT Division of SEI-CMU, and ForcePoint, “2018 U.S. State of Cybercrime,” Tech. Rep., 2018.
-  H. Eldardiry, E. Bart, J. Liu, J. Hanley, B. Price, and O. Brdiczka, “Multi-domain information fusion for insider threat detection,” in 2013 IEEE Security and Privacy Workshops. IEEE, 2013, pp. 45–51.
-  T. Rashid, I. Agrafiotis, and J. R. Nurse, “A new take on detecting insider threats: Exploring the use of hidden Markov models,” in Proceedings of the 8th ACM CCS International Workshop on Managing Insider Security Threats, 2016.
-  D. C. Le and A. N. Zincir-Heywood, “Evaluating insider threat detection workflow using supervised and unsupervised learning,” in 2018 IEEE Security and Privacy Workshops, 2018.
-  M. B. Salem, S. Hershkop, and S. J. Stolfo, “A survey of insider attack detection research,” in Insider Attack and Cyber Security: Beyond the Hacker, 2008.
-  A. Sanzgiri and D. Dasgupta, “Classification of insider threat detection techniques,” in Proceedings of the 11th Annual Cyber and Information Security Research Conference, 2016.
-  A. Tuor, S. Kaplan, B. Hutchinson, N. Nichols, and S. Robinson, “Deep learning for unsupervised insider threat detection in structured cybersecurity data streams,” in AI for Cyber Security Workshop, 2017.
-  T. E. Senator, H. G. Goldberg, A. Memory, and et al., “Detecting insider threats in a real corporate database of computer usage activity,” in Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2013.
-  A. Reinhart, “A review of self-exciting spatio-temporal point processes and their applications,” arXiv:1708.02647 [stat], 2017.
-  N. Du, H. Dai, R. Trivedi, U. Upadhyay, M. Gomez-Rodriguez, and L. Song, “Recurrent marked temporal point processes: Embedding event history to vector,” in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016.
-  H. Mei and J. Eisner, “The neural Hawkes process: A neurally self-modulating multivariate point process,” in Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017.
-  S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural computation, vol. 9, no. 8, pp. 1735–1780, 1997.
-  L. Liu, O. D. Vel, Q. Han, J. Zhang, and Y. Xiang, “Detecting and preventing cyber insider threats: A survey,” IEEE Communications Surveys & Tutorials, vol. 20, no. 2, pp. 1397–1417, 2018.
-  I. Homoliak, F. Toffalini, J. Guarnizo, Y. Elovici, and M. Ochoa, “Insight into insiders and it: A survey of insider threat taxonomies, analysis, modeling, and countermeasures,” ACM Comput. Surv., vol. 52, no. 2, pp. 30:1–30:40, Apr. 2019.
-  L. Spitzner, “Honeypots: catching the insider threat,” in Proceedings of the 19th Annual Computer Security Applications Conference, 2003.
-  J. G. Rasmussen, “Lecture notes: Temporal point processes and the conditional intensity function,” arXiv:1806.00221 [stat], 2018.
-  Z. Zhou, D. S. Matteson, D. B. Woodard, S. G. Henderson, and A. C. Micheas, “A spatio-temporal point process model for ambulance demand,” Journal of the American Statistical Association, vol. 110, no. 509, pp. 6–15, 2015.
-  M. Farajtabar, “Point process modeling and optimization of social networks,” Ph.D. dissertation, 2018.
-  S. Xiao, M. Farajtabar, X. Ye, J. Yan, L. Song, and H. Zha, “Wasserstein learning of deep generative point process models,” in Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017.
-  H. Zha, J. Yan, X. Liu, L. Shi, and C. Li, “Improving maximum likelihood estimation of temporal point process via discriminative and adversarial learning,” in Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, 2018.
-  S. Li, S. Xiao, S. Zhu, N. Du, Y. Xie, and L. Song, “Learning temporal point processes via reinforcement learning,” in Proceedings of the 32nd International Conference on Neural Information Processing Systems, 2018.
-  A. G. Hawkes, “Spectra of some self-exciting and mutually exciting point processes,” Biometrika, vol. 58, no. 1, pp. 83–90, 1971.
-  K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, “Bleu: a method for automatic evaluation of machine translation,” in Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, 2002.
-  J. Glasser and B. Lindauer, “Bridging the gap: A pragmatic approach to generating insider threat data,” in IEEE Security and Privacy Workshops, 2013.
-  D. M. J. Tax and R. P. W. Duin, “Support vector data description,” Machine Learning, vol. 54, no. 1, pp. 45–66, 2004.
-  F. T. Liu, K. M. Ting, and Z. Zhou, “Isolation forest,” in 2008 Eighth IEEE International Conference on Data Mining, 2008.
-  D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in Proceedings of the 3rd International Conference on Learning Representations, 2015.
-  S. Kumar, F. Spezzano, and V. Subrahmanian, “VEWS: A Wikipedia vandal early warning system,” in Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2015.